/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the checks preventing
       writes to statics that are always on.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
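
For example (a sketch; whether a plain literal overflows depends on
the widths of int and size_t on your platform), with 32-bit int and
64-bit size_t:

  #define BAD_THRESHOLD  (1 << 40)           /* overflows 32-bit int */
  #define OK_THRESHOLD   ((size_t)1 << 40)   /* cast first, then shift */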

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
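
  For example, when compiled with -DUSE_DL_PREFIX, both allocators
  can coexist in one program (a sketch):

    void* p = dlmalloc(100);   /* from this allocator */
    void* q = malloc(100);     /* from the system allocator */
    dlfree(p);                 /* free with the matching allocator */
    free(q);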

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc is unable to
  return memory because none is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
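
  As an illustration only (a sketch, not a drop-in implementation;
  see the full guidelines near the end of this file), a MORECORE
  emulation over a fixed static arena might look like:

    static char arena[4 * 1024 * 1024];  /* hypothetical backing store */
    static size_t arena_used = 0;
    void* my_morecore(ptrdiff_t nb) {
      if (nb >= 0 && (size_t)nb <= sizeof(arena) - arena_used) {
        char* p = arena + arena_used;    /* current "break" */
        arena_used += (size_t)nb;
        return p;                        /* contiguous increasing addresses */
      }
      return (void*)MAX_SIZE_T;          /* i.e. MFAIL, the failure value */
    }

  Such a version cannot release memory, so it would be compiled with,
  e.g., -DMORECORE=my_morecore -DMORECORE_CANNOT_TRIM -DHAVE_MMAP=0.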

MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous saves
  the time and possibly wasted space it would otherwise take to
  discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

*/

#if defined __linux__ && !defined _GNU_SOURCE
/* mremap() on Linux requires this via sys/mman.h */
#define _GNU_SOURCE 1
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
#endif  /* WIN32 */

#ifdef __OS2__
#define INCL_DOS
#include <os2.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_SYS_MMAN_H
#endif  /* __OS2__ */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif  /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif  /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */

/* HP-UX's stdlib.h redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO

struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
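
  For example, to direct all requests of 1MB or more to mmap (a
  sketch; the call returns 1 on success, 0 otherwise):

    if (mallopt(M_MMAP_THRESHOLD, 1024*1024) == 0)
      handle_mallopt_failure();  /* hypothetical error handler */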
*/
int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
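
  For example (a sketch; values are cast for printing because
  MALLINFO_FIELD_TYPE is configurable):

    struct mallinfo mi = mallinfo();
    printf("allocated: %lu  free: %lu  arena: %lu\n",
           (unsigned long)mi.uordblks,
           (unsigned long)mi.fordblks,
           (unsigned long)mi.arena);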
*/
struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int i;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.    It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... };
  struct Foot { ... };

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
 */
void*  dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
int  dlmalloc_trim(size_t);

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
void  dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

#if MSPACES

/*
  mspace is an opaque type representing an independent
  region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size.  It
  returns null if there is no system memory available to create the
  space.  If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests.  You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
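
  For example (a sketch; error handling abbreviated):

    mspace ms = create_mspace(0, 0);  // default capacity, no locking
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      mspace_free(ms, p);
      destroy_mspace(ms);             // returns all space to the system
    }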
*/
mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
size_t destroy_mspace(mspace msp);

/*
  create_mspace_with_base uses the memory supplied as the initial base
  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
  space is used for bookkeeping, so the capacity must be at least this
  large. (Otherwise 0 is returned.) When this initial space is
  exhausted, additional memory will be obtained from the system.
  Destroying this space will deallocate all additionally allocated
  space (if possible) but not the initial base.
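
  For example (a sketch; the buffer size is arbitrary, but must exceed
  the bookkeeping overhead noted above):

    static char buf[8 * 1024 * 1024];
    mspace ms = create_mspace_with_base(buf, sizeof(buf), 0);
    // allocations are served from buf until it is exhausted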
*/
mspace create_mspace_with_base(void* base, size_t capacity, int locked);

/*
  mspace_malloc behaves as malloc, but operates within
  the given space.
*/
void* mspace_malloc(mspace msp, size_t bytes);

/*
  mspace_free behaves as free, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_free is not actually needed.
  free may be called instead of mspace_free because freed chunks from
  any space are handled by their originating spaces.
*/
void mspace_free(mspace msp, void* mem);

/*
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed.  realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.
*/
void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
  mspace_calloc behaves as calloc, but operates within
  the given space.
*/
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
  mspace_memalign behaves as memalign, but operates within
  the given space.
*/
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);

/*
  mspace_independent_calloc behaves as independent_calloc, but
  operates within the given space.
*/
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]);

/*
  mspace_independent_comalloc behaves as independent_comalloc, but
  operates within the given space.
*/
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]);

/*
  mspace_footprint() returns the number of bytes obtained from the
  system for this space.
*/
size_t mspace_footprint(mspace msp);

/*
  mspace_max_footprint() returns the peak number of bytes obtained from the
  system for this space.
*/
size_t mspace_max_footprint(mspace msp);


#if !NO_MALLINFO
/*
  mspace_mallinfo behaves as mallinfo, but reports properties of
  the given space.
*/
struct mallinfo mspace_mallinfo(mspace msp);
#endif /* NO_MALLINFO */

/*
  mspace_malloc_stats behaves as malloc_stats, but reports
  properties of the given space.
*/
void mspace_malloc_stats(mspace msp);

/*
  mspace_trim behaves as malloc_trim, but
  operates within the given space.
*/
int mspace_trim(mspace msp, size_t pad);

/*
  An alias for mallopt.
*/
int mspace_mallopt(int, int);

#endif /* MSPACES */

#ifdef __cplusplus
};  /* end of extern "C" */
#endif /* __cplusplus */

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef _MSC_VER
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* _MSC_VER */

#include <stdio.h>       /* for printing in malloc_stats */

#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include <time.h>        /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#define assert(x)
#endif /* DEBUG */
#ifndef LACKS_STRING_H
#include <string.h>      /* for memset etc */
#endif  /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>    /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#if HAVE_MORECORE
#ifndef LACKS_UNISTD_H
#include <unistd.h>     /* for sbrk */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void*     sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */
#endif /* HAVE_MORECORE */
1208 
1209 #ifndef WIN32
1210 #ifndef malloc_getpagesize
1211 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1212 #    ifndef _SC_PAGE_SIZE
1213 #      define _SC_PAGE_SIZE _SC_PAGESIZE
1214 #    endif
1215 #  endif
1216 #  ifdef _SC_PAGE_SIZE
1217 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1218 #  else
1219 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1220        extern size_t getpagesize();
1221 #      define malloc_getpagesize getpagesize()
1222 #    else
1223 #      ifdef WIN32 /* use supplied emulation of getpagesize */
1224 #        define malloc_getpagesize getpagesize()
1225 #      else
1226 #        ifndef LACKS_SYS_PARAM_H
1227 #          include <sys/param.h>
1228 #        endif
1229 #        ifdef EXEC_PAGESIZE
1230 #          define malloc_getpagesize EXEC_PAGESIZE
1231 #        else
1232 #          ifdef NBPG
1233 #            ifndef CLSIZE
1234 #              define malloc_getpagesize NBPG
1235 #            else
1236 #              define malloc_getpagesize (NBPG * CLSIZE)
1237 #            endif
1238 #          else
1239 #            ifdef NBPC
1240 #              define malloc_getpagesize NBPC
1241 #            else
1242 #              ifdef PAGESIZE
1243 #                define malloc_getpagesize PAGESIZE
1244 #              else /* just guess */
1245 #                define malloc_getpagesize ((size_t)4096U)
1246 #              endif
1247 #            endif
1248 #          endif
1249 #        endif
1250 #      endif
1251 #    endif
1252 #  endif
1253 #endif
1254 #endif
1255 
1256 /* ------------------- size_t and alignment properties -------------------- */
1257 
1258 /* The byte and bit size of a size_t */
1259 #define SIZE_T_SIZE         (sizeof(size_t))
1260 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1261 
1262 /* Some constants coerced to size_t */
1263 /* Annoying but necessary to avoid errors on some platforms */
1264 #define SIZE_T_ZERO         ((size_t)0)
1265 #define SIZE_T_ONE          ((size_t)1)
1266 #define SIZE_T_TWO          ((size_t)2)
1267 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1268 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1269 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1270 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1271 
1272 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1273 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1274 
1275 /* True if address a has acceptable alignment */
1276 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1277 
1278 /* the number of bytes to offset an address to align it */
1279 #define align_offset(A)\
1280  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1281   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
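
/*
  Illustrative sketch (not part of the allocator): with the default
  8-byte MALLOC_ALIGNMENT, CHUNK_ALIGN_MASK is 7, so an address such
  as 0x1004 needs (8 - (0x1004 & 7)) & 7 == 4 offset bytes, while an
  already-aligned address needs 0.  The hypothetical check below just
  exercises the two macros above.
*/
#if 0
static void demo_align_offset(void) {
  char buf[2 * MALLOC_ALIGNMENT];
  char* a = buf + 1;                 /* very likely misaligned */
  size_t off = align_offset(a);      /* bytes needed to align a */
  assert(is_aligned(a + off));       /* a + off satisfies is_aligned */
  assert(off < MALLOC_ALIGNMENT);    /* offset is always < alignment */
}
#endif /* 0 */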
1282 
1283 /* -------------------------- MMAP preliminaries ------------------------- */
1284 
1285 /*
1286    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1287    checks to fail so compiler optimizer can delete code rather than
1288    using so many "#if"s.
1289 */
1290 
1291 
1292 /* MORECORE and MMAP must return MFAIL on failure */
1293 #define MFAIL                ((void*)(MAX_SIZE_T))
1294 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1295 
1296 #if !HAVE_MMAP
1297 #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
1298 #define USE_MMAP_BIT         (SIZE_T_ZERO)
1299 #define CALL_MMAP(s)         MFAIL
1300 #define CALL_MUNMAP(a, s)    (-1)
1301 #define DIRECT_MMAP(s)       MFAIL
1302 
1303 #else /* HAVE_MMAP */
1304 #define IS_MMAPPED_BIT       (SIZE_T_ONE)
1305 #define USE_MMAP_BIT         (SIZE_T_ONE)
1306 
1307 #if !defined(WIN32) && !defined (__OS2__)
1308 #define CALL_MUNMAP(a, s)    munmap((a), (s))
1309 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1310 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1311 #define MAP_ANONYMOUS        MAP_ANON
1312 #endif /* MAP_ANON */
1313 #ifdef MAP_ANONYMOUS
1314 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1315 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1316 #else /* MAP_ANONYMOUS */
1317 /*
1318    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1319    is unlikely to be needed, but is supplied just in case.
1320 */
1321 #define MMAP_FLAGS           (MAP_PRIVATE)
1322 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1323 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1324            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1325             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1326             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1327 #endif /* MAP_ANONYMOUS */
1328 
1329 #define DIRECT_MMAP(s)       CALL_MMAP(s)
1330 
1331 #elif defined(__OS2__)
1332 
1333 /* OS/2 MMAP via DosAllocMem */
1334 static void* os2mmap(size_t size) {
1335   void* ptr;
1336   if (DosAllocMem(&ptr, size, OBJ_ANY|PAG_COMMIT|PAG_READ|PAG_WRITE) &&
1337       DosAllocMem(&ptr, size, PAG_COMMIT|PAG_READ|PAG_WRITE))
1338     return MFAIL;
1339   return ptr;
1340 }
1341 
1342 #define os2direct_mmap(n)     os2mmap(n)
1343 
/* This function supports releasing coalesced segments */
1345 static int os2munmap(void* ptr, size_t size) {
1346   while (size) {
1347     ULONG ulSize = size;
1348     ULONG ulFlags = 0;
1349     if (DosQueryMem(ptr, &ulSize, &ulFlags) != 0)
1350       return -1;
1351     if ((ulFlags & PAG_BASE) == 0 ||(ulFlags & PAG_COMMIT) == 0 ||
1352         ulSize > size)
1353       return -1;
1354     if (DosFreeMem(ptr) != 0)
1355       return -1;
1356     ptr = ( void * ) ( ( char * ) ptr + ulSize );
1357     size -= ulSize;
1358   }
1359   return 0;
1360 }
1361 
1362 #define CALL_MMAP(s)         os2mmap(s)
1363 #define CALL_MUNMAP(a, s)    os2munmap((a), (s))
1364 #define DIRECT_MMAP(s)       os2direct_mmap(s)
1365 
1366 #else /* WIN32 */
1367 
1368 /* Win32 MMAP via VirtualAlloc */
1369 static void* win32mmap(size_t size) {
1370   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
1371   return (ptr != 0)? ptr: MFAIL;
1372 }
1373 
1374 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1375 static void* win32direct_mmap(size_t size) {
1376   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1377                            PAGE_EXECUTE_READWRITE);
1378   return (ptr != 0)? ptr: MFAIL;
1379 }
1380 
/* This function supports releasing coalesced segments */
1382 static int win32munmap(void* ptr, size_t size) {
1383   MEMORY_BASIC_INFORMATION minfo;
1384   char* cptr = ptr;
1385   while (size) {
1386     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1387       return -1;
1388     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1389         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1390       return -1;
1391     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1392       return -1;
1393     cptr += minfo.RegionSize;
1394     size -= minfo.RegionSize;
1395   }
1396   return 0;
1397 }
1398 
1399 #define CALL_MMAP(s)         win32mmap(s)
1400 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1401 #define DIRECT_MMAP(s)       win32direct_mmap(s)
1402 #endif /* WIN32 */
1403 #endif /* HAVE_MMAP */
1404 
1405 #if HAVE_MMAP && HAVE_MREMAP
1406 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1407 #else  /* HAVE_MMAP && HAVE_MREMAP */
1408 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1409 #endif /* HAVE_MMAP && HAVE_MREMAP */
1410 
1411 #if HAVE_MORECORE
1412 #define CALL_MORECORE(S)     MORECORE(S)
1413 #else  /* HAVE_MORECORE */
1414 #define CALL_MORECORE(S)     MFAIL
1415 #endif /* HAVE_MORECORE */
1416 
1417 /* mstate bit set if contiguous morecore disabled or failed */
1418 #define USE_NONCONTIGUOUS_BIT (4U)
1419 
1420 /* segment bit set in create_mspace_with_base */
1421 #define EXTERN_BIT            (8U)
1422 
1423 
1424 /* --------------------------- Lock preliminaries ------------------------ */
1425 
1426 #if USE_LOCKS
1427 
1428 /*
1429   When locks are defined, there are up to two global locks:
1430 
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
    MORECORE.  In many cases sys_alloc requires two calls that should
    not be interleaved with calls by other threads.  This does not
    protect against direct calls to MORECORE by other threads that do
    not use this lock, so there is still code to cope as best we can
    with interference.
1437 
1438   * magic_init_mutex ensures that mparams.magic and other
1439     unique mparams values are initialized only once.
1440 */
1441 
1442 #if !defined(WIN32) && !defined(__OS2__)
1443 /* By default use posix locks */
1444 #include <pthread.h>
1445 #define MLOCK_T pthread_mutex_t
1446 #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
1447 #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
1448 #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
1449 
1450 #if HAVE_MORECORE
1451 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1452 #endif /* HAVE_MORECORE */
1453 
1454 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1455 
1456 #elif defined(__OS2__)
1457 #define MLOCK_T HMTX
1458 #define INITIAL_LOCK(l)      DosCreateMutexSem(0, l, 0, FALSE)
1459 #define ACQUIRE_LOCK(l)      DosRequestMutexSem(*l, SEM_INDEFINITE_WAIT)
1460 #define RELEASE_LOCK(l)      DosReleaseMutexSem(*l)
1461 #if HAVE_MORECORE
1462 static MLOCK_T morecore_mutex;
1463 #endif /* HAVE_MORECORE */
1464 static MLOCK_T magic_init_mutex;
1465 
1466 #else /* WIN32 */
1467 /*
1468    Because lock-protected regions have bounded times, and there
1469    are no recursive lock calls, we can use simple spinlocks.
1470 */
1471 
1472 #define MLOCK_T long
1473 static int win32_acquire_lock (MLOCK_T *sl) {
1474   for (;;) {
1475 #ifdef InterlockedCompareExchangePointer
1476     if (!InterlockedCompareExchange(sl, 1, 0))
1477       return 0;
1478 #else  /* Use older void* version */
1479     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1480       return 0;
1481 #endif /* InterlockedCompareExchangePointer */
1482     Sleep (0);
1483   }
1484 }
1485 
1486 static void win32_release_lock (MLOCK_T *sl) {
1487   InterlockedExchange (sl, 0);
1488 }
1489 
1490 #define INITIAL_LOCK(l)      *(l)=0
1491 #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
1492 #define RELEASE_LOCK(l)      win32_release_lock(l)
1493 #if HAVE_MORECORE
1494 static MLOCK_T morecore_mutex;
1495 #endif /* HAVE_MORECORE */
1496 static MLOCK_T magic_init_mutex;
1497 #endif /* WIN32 */
1498 
1499 #define USE_LOCK_BIT               (2U)
1500 #else  /* USE_LOCKS */
1501 #define USE_LOCK_BIT               (0U)
1502 #define INITIAL_LOCK(l)
1503 #endif /* USE_LOCKS */
1504 
1505 #if USE_LOCKS && HAVE_MORECORE
1506 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1507 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1508 #else /* USE_LOCKS && HAVE_MORECORE */
1509 #define ACQUIRE_MORECORE_LOCK()
1510 #define RELEASE_MORECORE_LOCK()
1511 #endif /* USE_LOCKS && HAVE_MORECORE */
1512 
1513 #if USE_LOCKS
1514 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1515 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1516 #else  /* USE_LOCKS */
1517 #define ACQUIRE_MAGIC_INIT_LOCK()
1518 #define RELEASE_MAGIC_INIT_LOCK()
1519 #endif /* USE_LOCKS */
1520 
1521 
1522 /* -----------------------  Chunk representations ------------------------ */
1523 
1524 /*
1525   (The following includes lightly edited explanations by Colin Plumb.)
1526 
1527   The malloc_chunk declaration below is misleading (but accurate and
1528   necessary).  It declares a "view" into memory allowing access to
1529   necessary fields at known offsets from a given base.
1530 
1531   Chunks of memory are maintained using a `boundary tag' method as
1532   originally described by Knuth.  (See the paper by Paul Wilson
1533   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1534   techniques.)  Sizes of free chunks are stored both in the front of
1535   each chunk and at the end.  This makes consolidating fragmented
1536   chunks into bigger chunks fast.  The head fields also hold bits
1537   representing whether chunks are free or in use.
1538 
1539   Here are some pictures to make it clearer.  They are "exploded" to
1540   show that the state of a chunk can be thought of as extending from
1541   the high 31 bits of the head field of its header through the
1542   prev_foot and PINUSE_BIT bit of the following chunk header.
1543 
1544   A chunk that's in use looks like:
1545 
1546    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1547            | Size of previous chunk (if P = 1)                             |
1548            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1549          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1550          | Size of this chunk                                         1| +-+
1551    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1552          |                                                               |
1553          +-                                                             -+
1554          |                                                               |
1555          +-                                                             -+
1556          |                                                               :
1557          +-      size - sizeof(size_t) available payload bytes          -+
1558          :                                                               |
1559  chunk-> +-                                                             -+
1560          |                                                               |
1561          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1562        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1563        | Size of next chunk (may or may not be in use)               | +-+
1564  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1565 
1566     And if it's free, it looks like this:
1567 
1568    chunk-> +-                                                             -+
1569            | User payload (must be in use, or we would have merged!)       |
1570            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1571          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1572          | Size of this chunk                                         0| +-+
1573    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1574          | Next pointer                                                  |
1575          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1576          | Prev pointer                                                  |
1577          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1578          |                                                               :
1579          +-      size - sizeof(struct chunk) unused bytes               -+
1580          :                                                               |
1581  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1582          | Size of this chunk                                            |
1583          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1584        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1585        | Size of next chunk (must be in use, or we would have merged)| +-+
1586  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1587        |                                                               :
1588        +- User payload                                                -+
1589        :                                                               |
1590        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1591                                                                      |0|
1592                                                                      +-+
1593   Note that since we always merge adjacent free chunks, the chunks
1594   adjacent to a free chunk must be in use.
1595 
1596   Given a pointer to a chunk (which can be derived trivially from the
1597   payload pointer) we can, in O(1) time, find out whether the adjacent
1598   chunks are free, and if so, unlink them from the lists that they
1599   are on and merge them with the current chunk.
1600 
1601   Chunks always begin on even word boundaries, so the mem portion
1602   (which is returned to the user) is also on an even word boundary, and
1603   thus at least double-word aligned.
1604 
1605   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1606   chunk size (which is always a multiple of two words), is an in-use
1607   bit for the *previous* chunk.  If that bit is *clear*, then the
1608   word before the current chunk size contains the previous chunk
1609   size, and can be used to find the front of the previous chunk.
1610   The very first chunk allocated always has this bit set, preventing
1611   access to non-existent (or non-owned) memory. If pinuse is set for
1612   any given chunk, then you CANNOT determine the size of the
1613   previous chunk, and might even get a memory addressing fault when
1614   trying to do so.
1615 
1616   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1617   the chunk size redundantly records whether the current chunk is
1618   inuse. This redundancy enables usage checks within free and realloc,
1619   and reduces indirection when freeing and consolidating chunks.
1620 
1621   Each freshly allocated chunk must have both cinuse and pinuse set.
1622   That is, each allocated chunk borders either a previously allocated
1623   and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
1625   found chunk.  Further, no free chunk physically borders another one,
1626   so each free chunk is known to be preceded and followed by either
1627   inuse chunks or the ends of memory.
1628 
1629   Note that the `foot' of the current chunk is actually represented
1630   as the prev_foot of the NEXT chunk. This makes it easier to
1631   deal with alignments etc but can be very confusing when trying
1632   to extend or adapt this code.
1633 
1634   The exceptions to all this are
1635 
1636      1. The special chunk `top' is the top-most available chunk (i.e.,
1637         the one bordering the end of available memory). It is treated
1638         specially.  Top is never included in any bin, is used only if
1639         no other chunk is available, and is released back to the
1640         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1641         the top chunk is treated as larger (and thus less well
1642         fitting) than any other available chunk.  The top chunk
1643         doesn't update its trailing size field since there is no next
1644         contiguous chunk that would have to index off it. However,
1645         space is still allocated for it (TOP_FOOT_SIZE) to enable
1646         separation or merging when space is extended.
1647 
     2. Chunks allocated via mmap, which have the lowest-order bit
1649         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1650         PINUSE_BIT in their head fields.  Because they are allocated
1651         one-by-one, each must carry its own prev_foot field, which is
1652         also used to hold the offset this chunk has within its mmapped
1653         region, which is needed to preserve alignment. Each mmapped
1654         chunk is trailed by the first two fields of a fake next-chunk
1655         for sake of usage checks.
1656 
1657 */
1658 
1659 struct malloc_chunk {
1660   size_t               prev_foot;  /* Size of previous chunk (if free).  */
1661   size_t               head;       /* Size and inuse bits. */
1662   struct malloc_chunk* fd;         /* double links -- used only if free. */
1663   struct malloc_chunk* bk;
1664 };
1665 
1666 typedef struct malloc_chunk  mchunk;
1667 typedef struct malloc_chunk* mchunkptr;
1668 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1669 typedef size_t bindex_t;               /* Described below */
1670 typedef unsigned int binmap_t;         /* Described below */
1671 typedef unsigned int flag_t;           /* The type of various bit flag sets */
1672 
1673 /* ------------------- Chunks sizes and alignments ----------------------- */
1674 
1675 #define MCHUNK_SIZE         (sizeof(mchunk))
1676 
1677 #if FOOTERS
1678 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1679 #else /* FOOTERS */
1680 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1681 #endif /* FOOTERS */
1682 
1683 /* MMapped chunks need a second word of overhead ... */
1684 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1685 /* ... and additional padding for fake next-chunk at foot */
1686 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1687 
1688 /* The smallest size we can malloc is an aligned minimal chunk */
1689 #define MIN_CHUNK_SIZE\
1690   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1691 
1692 /* conversion from malloc headers to user pointers, and back */
1693 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1694 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1695 /* chunk associated with aligned address A */
1696 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1697 
1698 /* Bounds on request (not chunk) sizes. */
1699 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1700 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1701 
1702 /* pad request bytes into a usable size */
1703 #define pad_request(req) \
1704    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1705 
1706 /* pad request, checking for minimum (but not maximum) */
1707 #define request2size(req) \
1708   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
1709 
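/*
  Illustrative sketch (not part of the allocator), assuming 4-byte
  size_t, 8-byte alignment, and no FOOTERS: MIN_REQUEST is 11, so
  request2size(1) yields MIN_CHUNK_SIZE (16), while request2size(100)
  is just pad_request(100) == 104 (request + 4 bytes of overhead,
  rounded up to an 8-byte multiple).  The hypothetical checks below
  also confirm that the header/payload conversions invert each other.
*/
#if 0
static void demo_request2size(void) {
  char fake[MIN_CHUNK_SIZE];                     /* stand-in chunk */
  assert(request2size(1) == MIN_CHUNK_SIZE);     /* tiny requests padded */
  assert(request2size(100) == pad_request(100)); /* big enough already */
  assert(mem2chunk(chunk2mem(fake)) == (mchunkptr)fake);
}
#endif /* 0 */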
1710 
1711 /* ------------------ Operations on head and foot fields ----------------- */
1712 
1713 /*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set and also holds the offset of the base of the
  mmapped region from the base of the chunk.
1719 */
1720 
1721 #define PINUSE_BIT          (SIZE_T_ONE)
1722 #define CINUSE_BIT          (SIZE_T_TWO)
1723 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1724 
1725 /* Head value for fenceposts */
1726 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1727 
1728 /* extraction of fields from head words */
1729 #define cinuse(p)           ((p)->head & CINUSE_BIT)
1730 #define pinuse(p)           ((p)->head & PINUSE_BIT)
1731 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1732 
1733 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1734 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1735 
1736 /* Treat space at ptr +/- offset as a chunk */
1737 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1738 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1739 
1740 /* Ptr to next or previous physical malloc_chunk. */
1741 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1742 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1743 
1744 /* extract next chunk's pinuse bit */
1745 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1746 
1747 /* Get/set size at footer */
1748 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1749 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1750 
1751 /* Set size, pinuse bit, and foot */
1752 #define set_size_and_pinuse_of_free_chunk(p, s)\
1753   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1754 
1755 /* Set size, pinuse bit, foot, and clear next pinuse */
1756 #define set_free_with_pinuse(p, s, n)\
1757   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1758 
1759 #define is_mmapped(p)\
1760   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1761 
1762 /* Get the internal overhead associated with chunk p */
1763 #define overhead_for(p)\
1764  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1765 
1766 /* Return true if malloced space is not necessarily cleared */
1767 #if MMAP_CLEARS
1768 #define calloc_must_clear(p) (!is_mmapped(p))
1769 #else /* MMAP_CLEARS */
1770 #define calloc_must_clear(p) (1)
1771 #endif /* MMAP_CLEARS */
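
/*
  Illustrative sketch (not part of the allocator): the head word
  packs the chunk size with the two low-order status bits, so the
  macros above compose with plain pointer arithmetic.  The
  hypothetical fragment below marks chunk p (of size s) free, records
  s in the next chunk's prev_foot, and clears that chunk's PINUSE_BIT.
*/
#if 0
static void demo_head_foot(mchunkptr p, size_t s) {
  mchunkptr n = chunk_plus_offset(p, s);   /* physically next chunk */
  set_free_with_pinuse(p, s, n);           /* p free; n's pinuse off */
  assert(!next_pinuse(p));                 /* n now sees p as free */
  assert(prev_chunk(n) == p);              /* n's foot points back at p */
  assert(chunksize(p) == s);               /* status bits masked out */
}
#endif /* 0 */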
1772 
1773 /* ---------------------- Overlaid data structures ----------------------- */
1774 
1775 /*
1776   When chunks are not in use, they are treated as nodes of either
1777   lists or trees.
1778 
1779   "Small"  chunks are stored in circular doubly-linked lists, and look
1780   like this:
1781 
1782     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1783             |             Size of previous chunk                            |
1784             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1785     `head:' |             Size of chunk, in bytes                         |P|
1786       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1787             |             Forward pointer to next chunk in list             |
1788             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1789             |             Back pointer to previous chunk in list            |
1790             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1791             |             Unused space (may be 0 bytes long)                .
1792             .                                                               .
1793             .                                                               |
1794 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1795     `foot:' |             Size of chunk, in bytes                           |
1796             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1797 
1798   Larger chunks are kept in a form of bitwise digital trees (aka
1799   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1800   free chunks greater than 256 bytes, their size doesn't impose any
1801   constraints on user chunk sizes.  Each node looks like:
1802 
1803     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1804             |             Size of previous chunk                            |
1805             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1806     `head:' |             Size of chunk, in bytes                         |P|
1807       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1808             |             Forward pointer to next chunk of same size        |
1809             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1810             |             Back pointer to previous chunk of same size       |
1811             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1812             |             Pointer to left child (child[0])                  |
1813             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1814             |             Pointer to right child (child[1])                 |
1815             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1816             |             Pointer to parent                                 |
1817             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1818             |             bin index of this chunk                           |
1819             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1820             |             Unused space                                      .
1821             .                                                               |
1822 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1823     `foot:' |             Size of chunk, in bytes                           |
1824             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1825 
1826   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1827   of the same size are arranged in a circularly-linked list, with only
1828   the oldest chunk (the next to be used, in our FIFO ordering)
1829   actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
1831   is inserted, it is linked off the existing node using pointers that
1832   work in the same way as fd/bk pointers of small chunks.
1833 
1834   Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
1838   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1839   done by inspecting individual bits.
1840 
1841   Using these rules, each node's left subtree contains all smaller
1842   sizes than its right subtree.  However, the node at the root of each
1843   subtree has no particular ordering relationship to either.  (The
1844   dividing line between the subtree sizes is based on trie relation.)
1845   If we remove the last chunk of a given size from the interior of the
1846   tree, we need to replace it with a leaf node.  The tree ordering
1847   rules permit a node to be replaced by any leaf below it.
1848 
1849   The smallest chunk in a tree (a common operation in a best-fit
1850   allocator) can be found by walking a path to the leftmost leaf in
1851   the tree.  Unlike a usual binary tree, where we follow left child
1852   pointers until we reach a null, here we follow the right child
1853   pointer any time the left one is null, until we reach a leaf with
1854   both child pointers null. The smallest chunk in the tree will be
1855   somewhere along that path.
1856 
1857   The worst case number of steps to add, find, or remove a node is
1858   bounded by the number of bits differentiating chunks within
1859   bins. Under current bin calculations, this ranges from 6 up to 21
1860   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1861   is of course much better.
1862 */
1863 
1864 struct malloc_tree_chunk {
1865   /* The first four fields must be compatible with malloc_chunk */
1866   size_t                    prev_foot;
1867   size_t                    head;
1868   struct malloc_tree_chunk* fd;
1869   struct malloc_tree_chunk* bk;
1870 
1871   struct malloc_tree_chunk* child[2];
1872   struct malloc_tree_chunk* parent;
1873   bindex_t                  index;
1874 };
1875 
1876 typedef struct malloc_tree_chunk  tchunk;
1877 typedef struct malloc_tree_chunk* tchunkptr;
1878 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1879 
1880 /* A little helper macro for trees */
1881 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
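
/*
  Illustrative sketch (not part of the allocator): per the comment
  above, the smallest chunk lies along the leftmost_child path, since
  we follow the right child only when the left is null.  A
  hypothetical minimal walk (t must be non-null):
*/
#if 0
static tchunkptr demo_smallest_in_tree(tchunkptr t) {
  tchunkptr least = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(least))
      least = t;              /* track the best size seen on the path */
  }
  return least;               /* smallest chunk in the whole tree */
}
#endif /* 0 */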
1882 
1883 /* ----------------------------- Segments -------------------------------- */
1884 
1885 /*
1886   Each malloc space may include non-contiguous segments, held in a
1887   list headed by an embedded malloc_segment record representing the
1888   top-most space. Segments also include flags holding properties of
1889   the space. Large chunks that are directly allocated by mmap are not
1890   included in this list. They are instead independently created and
1891   destroyed without otherwise keeping track of them.
1892 
1893   Segment management mainly comes into play for spaces allocated by
1894   MMAP.  Any call to MMAP might or might not return memory that is
1895   adjacent to an existing segment.  MORECORE normally contiguously
1896   extends the current space, so this space is almost always adjacent,
1897   which is simpler and faster to deal with. (This is why MORECORE is
1898   used preferentially to MMAP when both are available -- see
1899   sys_alloc.)  When allocating using MMAP, we don't use any of the
1900   hinting mechanisms (inconsistently) supported in various
1901   implementations of unix mmap, or distinguish reserving from
1902   committing memory. Instead, we just ask for space, and exploit
1903   contiguity when we get it.  It is probably possible to do
1904   better than this on some systems, but no general scheme seems
1905   to be significantly better.
1906 
1907   Management entails a simpler variant of the consolidation scheme
1908   used for chunks to reduce fragmentation -- new adjacent memory is
1909   normally prepended or appended to an existing segment. However,
1910   there are limitations compared to chunk consolidation that mostly
1911   reflect the fact that segment processing is relatively infrequent
1912   (occurring only when getting memory from system) and that we
1913   don't expect to have huge numbers of segments:
1914 
1915   * Segments are not indexed, so traversal requires linear scans.  (It
1916     would be possible to index these, but is not worth the extra
1917     overhead and complexity for most programs on most platforms.)
1918   * New segments are only appended to old ones when holding top-most
1919     memory; if they cannot be prepended to others, they are held in
1920     different segments.
1921 
1922   Except for the top-most segment of an mstate, each segment record
1923   is kept at the tail of its segment. Segments are added by pushing
1924   segment records onto the list headed by &mstate.seg for the
1925   containing mstate.
1926 
1927   Segment flags control allocation/merge/deallocation policies:
1928   * If EXTERN_BIT set, then we did not allocate this segment,
1929     and so should not try to deallocate or merge with others.
1930     (This currently holds only for the initial segment passed
1931     into create_mspace_with_base.)
1932   * If IS_MMAPPED_BIT set, the segment may be merged with
1933     other surrounding mmapped segments and trimmed/de-allocated
1934     using munmap.
1935   * If neither bit is set, then the segment was obtained using
1936     MORECORE so can be merged with surrounding MORECORE'd segments
1937     and deallocated/trimmed using MORECORE with negative arguments.
1938 */
1939 
1940 struct malloc_segment {
1941   char*        base;             /* base address */
1942   size_t       size;             /* allocated size */
1943   struct malloc_segment* next;   /* ptr to next segment */
1944 #if FFI_MMAP_EXEC_WRIT
1945   /* The mmap magic is supposed to store the address of the executable
1946      segment at the very end of the requested block.  */
1947 
1948 # define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
1949 
1950   /* We can only merge segments if their corresponding executable
1951      segments are at identical offsets.  */
1952 # define check_segment_merge(S,b,s) \
1953   (mmap_exec_offset((b),(s)) == (S)->exec_offset)
1954 
1955 # define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
1956 # define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
1957 
1958   /* The removal of sflags only works with HAVE_MORECORE == 0.  */
1959 
1960 # define get_segment_flags(S)   (IS_MMAPPED_BIT)
1961 # define set_segment_flags(S,v) \
1962   (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :             \
1963    (((S)->exec_offset =                         \
1964      mmap_exec_offset((S)->base, (S)->size)),               \
1965     (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=   \
1966      (S)->exec_offset) ? (ABORT, (v)) :                 \
1967    (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
1968 
1969   /* We use an offset here, instead of a pointer, because then, when
1970      base changes, we don't have to modify this.  On architectures
1971      with segmented addresses, this might not work.  */
1972   ptrdiff_t    exec_offset;
1973 #else
1974 
1975 # define get_segment_flags(S)   ((S)->sflags)
1976 # define set_segment_flags(S,v) ((S)->sflags = (v))
1977 # define check_segment_merge(S,b,s) (1)
1978 
1979   flag_t       sflags;           /* mmap and extern flag */
1980 #endif
1981 };
1982 
1983 #define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
1984 #define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)
1985 
1986 typedef struct malloc_segment  msegment;
1987 typedef struct malloc_segment* msegmentptr;
1988 
1989 /* ---------------------------- malloc_state ----------------------------- */
1990 
1991 /*
1992    A malloc_state holds all of the bookkeeping for a space.
1993    The main fields are:
1994 
1995   Top
1996     The topmost chunk of the currently active segment. Its size is
1997     cached in topsize.  The actual size of topmost space is
1998     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1999     fenceposts and segment records if necessary when getting more
2000     space from the system.  The size at which to autotrim top is
2001     cached from mparams in trim_check, except that it is disabled if
2002     an autotrim fails.
2003 
2004   Designated victim (dv)
2005     This is the preferred chunk for servicing small requests that
2006     don't have exact fits.  It is normally the chunk split off most
2007     recently to service another small request.  Its size is cached in
2008     dvsize. The link fields of this chunk are not maintained since it
2009     is not kept in a bin.
2010 
2011   SmallBins
2012     An array of bin headers for free chunks.  These bins hold chunks
2013     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2014     chunks of all the same size, spaced 8 bytes apart.  To simplify
2015     use in double-linked lists, each bin header acts as a malloc_chunk
2016     pointing to the real first node, if it exists (else pointing to
2017     itself).  This avoids special-casing for headers.  But to avoid
2018     waste, we allocate only the fd/bk pointers of bins, and then use
2019     repositioning tricks to treat these as the fields of a chunk.
2020 
2021   TreeBins
2022     Treebins are pointers to the roots of trees holding a range of
2023     sizes. There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2025     larger.
2026 
2027   Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on 64-bit systems.
2034     For a good description of some of the bit-based techniques used
2035     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2036     supplement at http://hackersdelight.org/). Many of these are
2037     intended to reduce the branchiness of paths through malloc etc, as
2038     well as to reduce the number of memory locations read or written.
2039 
2040   Segments
2041     A list of segments headed by an embedded malloc_segment record
2042     representing the initial space.
2043 
2044   Address check support
2045     The least_addr field is the least address ever obtained from
2046     MORECORE or MMAP. Attempted frees and reallocs of any address less
2047     than this are trapped (unless INSECURE is defined).
2048 
2049   Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.
2051 
2052   Flags
2053     Bits recording whether to use MMAP, locks, or contiguous MORECORE
2054 
2055   Statistics
2056     Each space keeps track of current and maximum system memory
2057     obtained via MORECORE or MMAP.
2058 
2059   Locking
2060     If USE_LOCKS is defined, the "mutex" lock is acquired and released
2061     around every public call using this mspace.
2062 */
2063 
2064 /* Bin types, widths and sizes */
2065 #define NSMALLBINS        (32U)
2066 #define NTREEBINS         (32U)
2067 #define SMALLBIN_SHIFT    (3U)
2068 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2069 #define TREEBIN_SHIFT     (8U)
2070 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2071 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2072 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2073 
2074 struct malloc_state {
2075   binmap_t   smallmap;
2076   binmap_t   treemap;
2077   size_t     dvsize;
2078   size_t     topsize;
2079   char*      least_addr;
2080   mchunkptr  dv;
2081   mchunkptr  top;
2082   size_t     trim_check;
2083   size_t     magic;
2084   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2085   tbinptr    treebins[NTREEBINS];
2086   size_t     footprint;
2087   size_t     max_footprint;
2088   flag_t     mflags;
2089 #if USE_LOCKS
2090   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2091 #endif /* USE_LOCKS */
2092   msegment   seg;
2093 };
2094 
2095 typedef struct malloc_state*    mstate;
2096 
2097 /* ------------- Global malloc_state and malloc_params ------------------- */
2098 
2099 /*
2100   malloc_params holds global properties, including those that can be
2101   dynamically set using mallopt. There is a single instance, mparams,
2102   initialized in init_mparams.
2103 */
2104 
2105 struct malloc_params {
2106   size_t magic;
2107   size_t page_size;
2108   size_t granularity;
2109   size_t mmap_threshold;
2110   size_t trim_threshold;
2111   flag_t default_mflags;
2112 };
2113 
2114 static struct malloc_params mparams;
2115 
2116 /* The global malloc_state used for all non-"mspace" calls */
2117 static struct malloc_state _gm_;
2118 #define gm                 (&_gm_)
2119 #define is_global(M)       ((M) == &_gm_)
2120 #define is_initialized(M)  ((M)->top != 0)
2121 
2122 /* -------------------------- system alloc setup ------------------------- */
2123 
2124 /* Operations on mflags */
2125 
2126 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2127 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2128 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2129 
2130 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2131 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2132 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2133 
2134 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2135 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2136 
2137 #define set_lock(M,L)\
2138  ((M)->mflags = (L)?\
2139   ((M)->mflags | USE_LOCK_BIT) :\
2140   ((M)->mflags & ~USE_LOCK_BIT))
2141 
2142 /* page-align a size */
2143 #define page_align(S)\
2144  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2145 
2146 /* granularity-align a size */
2147 #define granularity_align(S)\
2148   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2149 
2150 #define is_page_aligned(S)\
2151    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2152 #define is_granularity_aligned(S)\
2153    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
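
/*
  Illustrative sketch (not part of the allocator): page_align rounds
  a size up to a page boundary; note that, as written, an input that
  is already page-aligned is bumped a full page (e.g. 4096 -> 8192
  with 4096-byte pages).  A hypothetical check, assuming mparams has
  been initialized:
*/
#if 0
static void demo_page_align(void) {
  size_t ps = mparams.page_size;           /* e.g. 4096 */
  assert(page_align(1) == ps);             /* rounds up to one page */
  assert(is_page_aligned(page_align(1)));
  assert(page_align(ps) == ps + ps);       /* aligned input moves up */
}
#endif /* 0 */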
2154 
2155 /*  True if segment S holds address A */
2156 #define segment_holds(S, A)\
2157   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2158 
2159 /* Return segment holding given address */
2160 static msegmentptr segment_holding(mstate m, char* addr) {
2161   msegmentptr sp = &m->seg;
2162   for (;;) {
2163     if (addr >= sp->base && addr < sp->base + sp->size)
2164       return sp;
2165     if ((sp = sp->next) == 0)
2166       return 0;
2167   }
2168 }
2169 
2170 /* Return true if segment contains a segment link */
2171 static int has_segment_link(mstate m, msegmentptr ss) {
2172   msegmentptr sp = &m->seg;
2173   for (;;) {
2174     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2175       return 1;
2176     if ((sp = sp->next) == 0)
2177       return 0;
2178   }
2179 }
2180 
2181 #ifndef MORECORE_CANNOT_TRIM
2182 #define should_trim(M,s)  ((s) > (M)->trim_check)
2183 #else  /* MORECORE_CANNOT_TRIM */
2184 #define should_trim(M,s)  (0)
2185 #endif /* MORECORE_CANNOT_TRIM */
2186 
2187 /*
2188   TOP_FOOT_SIZE is padding at the end of a segment, including space
2189   that may be needed to place segment records and fenceposts when new
2190   noncontiguous segments are added.
2191 */
2192 #define TOP_FOOT_SIZE\
2193   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2194 
2195 
2196 /* -------------------------------  Hooks -------------------------------- */
2197 
2198 /*
2199   PREACTION should be defined to return 0 on success, and nonzero on
2200   failure. If you are not using locking, you can redefine these to do
2201   anything you like.
2202 */
2203 
2204 #if USE_LOCKS
2205 
2206 /* Ensure locks are initialized */
2207 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2208 
2209 #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2210 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2211 #else /* USE_LOCKS */
2212 
2213 #ifndef PREACTION
2214 #define PREACTION(M) (0)
2215 #endif  /* PREACTION */
2216 
2217 #ifndef POSTACTION
2218 #define POSTACTION(M)
2219 #endif  /* POSTACTION */
2220 
2221 #endif /* USE_LOCKS */
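
/*
  Illustrative sketch (not part of the allocator): each public entry
  point brackets its work with these hooks in the pattern below
  (hypothetical; the real bodies are of course much larger).  With
  USE_LOCKS, PREACTION also triggers one-time parameter
  initialization via GLOBALLY_INITIALIZE.
*/
#if 0
static void* demo_locked_op(mstate m) {
  void* mem = 0;
  if (!PREACTION(m)) {        /* returns 0 once the lock is held */
    /* ... operate on m ... */
    POSTACTION(m);            /* release the lock, if any */
  }
  return mem;
}
#endif /* 0 */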
2222 
2223 /*
2224   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2225   USAGE_ERROR_ACTION is triggered on detected bad frees and
2226   reallocs. The argument p is an address that might have triggered the
2227   fault. It is ignored by the two predefined actions, but might be
2228   useful in custom actions that try to help diagnose errors.
2229 */
2230 
2231 #if PROCEED_ON_ERROR
2232 
2233 /* A count of the number of corruption errors causing resets */
2234 int malloc_corruption_error_count;
2235 
2236 /* default corruption action */
2237 static void reset_on_error(mstate m);
2238 
2239 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2240 #define USAGE_ERROR_ACTION(m, p)
2241 
2242 #else /* PROCEED_ON_ERROR */
2243 
2244 #ifndef CORRUPTION_ERROR_ACTION
2245 #define CORRUPTION_ERROR_ACTION(m) ABORT
2246 #endif /* CORRUPTION_ERROR_ACTION */
2247 
2248 #ifndef USAGE_ERROR_ACTION
2249 #define USAGE_ERROR_ACTION(m,p) ABORT
2250 #endif /* USAGE_ERROR_ACTION */
2251 
2252 #endif /* PROCEED_ON_ERROR */
2253 
2254 /* -------------------------- Debugging setup ---------------------------- */
2255 
2256 #if ! DEBUG
2257 
2258 #define check_free_chunk(M,P)
2259 #define check_inuse_chunk(M,P)
2260 #define check_malloced_chunk(M,P,N)
2261 #define check_mmapped_chunk(M,P)
2262 #define check_malloc_state(M)
2263 #define check_top_chunk(M,P)
2264 
2265 #else /* DEBUG */
2266 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2267 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2268 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2269 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2270 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2271 #define check_malloc_state(M)       do_check_malloc_state(M)
2272 
2273 static void   do_check_any_chunk(mstate m, mchunkptr p);
2274 static void   do_check_top_chunk(mstate m, mchunkptr p);
2275 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2276 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2277 static void   do_check_free_chunk(mstate m, mchunkptr p);
2278 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2279 static void   do_check_tree(mstate m, tchunkptr t);
2280 static void   do_check_treebin(mstate m, bindex_t i);
2281 static void   do_check_smallbin(mstate m, bindex_t i);
2282 static void   do_check_malloc_state(mstate m);
2283 static int    bin_find(mstate m, mchunkptr x);
2284 static size_t traverse_and_check(mstate m);
2285 #endif /* DEBUG */
2286 
2287 /* ---------------------------- Indexing Bins ---------------------------- */
2288 
2289 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2290 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2291 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2292 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2293 
2294 /* addressing by index. See above about smallbin repositioning */
2295 #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2296 #define treebin_at(M,i)     (&((M)->treebins[i]))
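
/*
  Illustrative sketch (not part of the allocator): small bins are
  spaced SMALLBIN_WIDTH (8) bytes apart, so for example a 40-byte
  chunk maps to index 5, and small_index2size inverts the mapping
  exactly.  A hypothetical check:
*/
#if 0
static void demo_small_index(void) {
  size_t s = 40;                       /* a small chunk size */
  bindex_t i = small_index(s);         /* 40 >> 3 == 5 */
  assert(is_small(s));                 /* below MIN_LARGE_SIZE */
  assert(small_index2size(i) == s);    /* exact round trip */
}
#endif /* 0 */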
2297 
2298 /* assign tree index for size S to variable I */
2299 #if defined(__GNUC__) && defined(__i386__)
2300 #define compute_tree_index(S, I)\
2301 {\
2302   size_t X = S >> TREEBIN_SHIFT;\
2303   if (X == 0)\
2304     I = 0;\
2305   else if (X > 0xFFFF)\
2306     I = NTREEBINS-1;\
2307   else {\
2308     unsigned int K;\
2309     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
2310     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2311   }\
2312 }
2313 #else /* GNUC */
2314 #define compute_tree_index(S, I)\
2315 {\
2316   size_t X = S >> TREEBIN_SHIFT;\
2317   if (X == 0)\
2318     I = 0;\
2319   else if (X > 0xFFFF)\
2320     I = NTREEBINS-1;\
2321   else {\
2322     unsigned int Y = (unsigned int)X;\
2323     unsigned int N = ((Y - 0x100) >> 16) & 8;\
2324     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2325     N += K;\
2326     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2327     K = 14 - N + ((Y <<= K) >> 15);\
2328     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2329   }\
2330 }
2331 #endif /* GNUC */
2332 
2333 /* Bit representing maximum resolved size in a treebin at i */
2334 #define bit_for_tree_index(i) \
2335    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2336 
2337 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2338 #define leftshift_for_tree_index(i) \
2339    ((i == NTREEBINS-1)? 0 : \
2340     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2341 
2342 /* The size of the smallest chunk held in bin with index i */
2343 #define minsize_for_tree_index(i) \
2344    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2345    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
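
/*
  Illustrative sketch (not part of the allocator): for S == 768,
  X == 768 >> 8 == 3, whose highest set bit is bit 1 (K == 1), so
  I == (1 << 1) + ((768 >> (1 + 7)) & 1) == 3; consistently,
  minsize_for_tree_index(3) == (1 << 9) | (1 << 8) == 768.  A
  hypothetical check:
*/
#if 0
static void demo_tree_index(void) {
  bindex_t i;
  compute_tree_index(768, i);                /* worked example above */
  assert(i == 3);
  assert(minsize_for_tree_index(i) == 768);  /* bin minimum matches */
}
#endif /* 0 */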
2346 
2347 
2348 /* ------------------------ Operations on bin maps ----------------------- */
2349 
2350 /* bit corresponding to given index */
2351 #define idx2bit(i)              ((binmap_t)(1) << (i))
2352 
2353 /* Mark/Clear bits with given index */
2354 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2355 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2356 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2357 
2358 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2359 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2360 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2361 
2362 /* index corresponding to given bit */
2363 
2364 #if defined(__GNUC__) && defined(__i386__)
2365 #define compute_bit2idx(X, I)\
2366 {\
2367   unsigned int J;\
2368   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2369   I = (bindex_t)J;\
2370 }
2371 
2372 #else /* GNUC */
2373 #if  USE_BUILTIN_FFS
2374 #define compute_bit2idx(X, I) I = ffs(X)-1
2375 
2376 #else /* USE_BUILTIN_FFS */
2377 #define compute_bit2idx(X, I)\
2378 {\
2379   unsigned int Y = X - 1;\
2380   unsigned int K = Y >> (16-4) & 16;\
2381   unsigned int N = K;        Y >>= K;\
2382   N += K = Y >> (8-3) &  8;  Y >>= K;\
2383   N += K = Y >> (4-2) &  4;  Y >>= K;\
2384   N += K = Y >> (2-1) &  2;  Y >>= K;\
2385   N += K = Y >> (1-0) &  1;  Y >>= K;\
2386   I = (bindex_t)(N + Y);\
2387 }
2388 #endif /* USE_BUILTIN_FFS */
2389 #endif /* GNUC */
2390 
2391 /* isolate the least set bit of a bitmap */
2392 #define least_bit(x)         ((x) & -(x))
2393 
2394 /* mask with all bits to left of least bit of x on */
2395 #define left_bits(x)         ((x<<1) | -(x<<1))
2396 
2397 /* mask with all bits to left of or equal to least bit of x on */
2398 #define same_or_left_bits(x) ((x) | -(x))
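
/*
  Illustrative sketch (not part of the allocator): these masks drive
  the "search without scanning" described for the bin maps.  Given a
  non-empty candidate set, the lowest usable bin at or above a start
  index falls out of two bit operations plus compute_bit2idx, as in
  this hypothetical helper:
*/
#if 0
static bindex_t demo_first_nonempty(binmap_t map, bindex_t start) {
  binmap_t candidates = map & same_or_left_bits(idx2bit(start));
  bindex_t i;
  /* caller must ensure candidates != 0 */
  compute_bit2idx(least_bit(candidates), i);
  return i;                        /* index of lowest viable bin */
}
#endif /* 0 */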
2399 
2400 
2401 /* ----------------------- Runtime Check Support ------------------------- */
2402 
2403 /*
2404   For security, the main invariant is that malloc/free/etc never
2405   writes to a static address other than malloc_state, unless static
2406   malloc_state itself has been corrupted, which cannot occur via
2407   malloc (because of these checks). In essence this means that we
2408   believe all pointers, sizes, maps etc held in malloc_state, but
2409   check all of those linked or offsetted from other embedded data
2410   structures.  These checks are interspersed with main code in a way
2411   that tends to minimize their run-time cost.
2412 
2413   When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
2415   that the mstate controlling malloc/free is intact.  This is a
2416   streamlined version of the approach described by William Robertson
2417   et al in "Run-time Detection of Heap-based Overflows" LISA'03
2418   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2419   of an inuse chunk holds the xor of its mstate and a random seed,
2420   that is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
2422   computed by any code successfully malloc'ing any chunk, so does not
2423   itself provide protection against code that has already broken
2424   security through some other means.  Unlike Robertson et al, we
2425   always dynamically check addresses of all offset chunks (previous,
2426   next, etc). This turns out to be cheaper than relying on hashes.
2427 */
2428 
2429 #if !INSECURE
2430 /* Check if address a is at least as high as any from MORECORE or MMAP */
2431 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2432 /* Check if address of next chunk n is higher than base chunk p */
2433 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
2434 /* Check if p has its cinuse bit on */
2435 #define ok_cinuse(p)     cinuse(p)
2436 /* Check if p has its pinuse bit on */
2437 #define ok_pinuse(p)     pinuse(p)
2438 
2439 #else /* !INSECURE */
2440 #define ok_address(M, a) (1)
2441 #define ok_next(b, n)    (1)
2442 #define ok_cinuse(p)     (1)
2443 #define ok_pinuse(p)     (1)
2444 #endif /* !INSECURE */
2445 
2446 #if (FOOTERS && !INSECURE)
2447 /* Check if (alleged) mstate m has expected magic field */
2448 #define ok_magic(M)      ((M)->magic == mparams.magic)
2449 #else  /* (FOOTERS && !INSECURE) */
2450 #define ok_magic(M)      (1)
2451 #endif /* (FOOTERS && !INSECURE) */
2452 
2453 
2454 /* In gcc, use __builtin_expect to minimize impact of checks */
2455 #if !INSECURE
2456 #if defined(__GNUC__) && __GNUC__ >= 3
2457 #define RTCHECK(e)  __builtin_expect(e, 1)
2458 #else /* GNUC */
2459 #define RTCHECK(e)  (e)
2460 #endif /* GNUC */
2461 #else /* !INSECURE */
2462 #define RTCHECK(e)  (1)
2463 #endif /* !INSECURE */
2464 
2465 /* macros to set up inuse chunks with or without footers */
2466 
2467 #if !FOOTERS
2468 
2469 #define mark_inuse_foot(M,p,s)
2470 
2471 /* Set cinuse bit and pinuse bit of next chunk */
2472 #define set_inuse(M,p,s)\
2473   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2474   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2475 
2476 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2477 #define set_inuse_and_pinuse(M,p,s)\
2478   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2479   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2480 
2481 /* Set size, cinuse and pinuse bit of this chunk */
2482 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2483   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2484 
2485 #else /* FOOTERS */
2486 
2487 /* Set foot of inuse chunk to be xor of mstate and seed */
2488 #define mark_inuse_foot(M,p,s)\
2489   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2490 
2491 #define get_mstate_for(p)\
2492   ((mstate)(((mchunkptr)((char*)(p) +\
2493     (chunksize(p))))->prev_foot ^ mparams.magic))
2494 
2495 #define set_inuse(M,p,s)\
2496   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2497   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2498   mark_inuse_foot(M,p,s))
2499 
2500 #define set_inuse_and_pinuse(M,p,s)\
2501   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2502   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2503  mark_inuse_foot(M,p,s))
2504 
2505 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2506   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2507   mark_inuse_foot(M, p, s))
2508 
2509 #endif /* !FOOTERS */
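
/*
  Illustrative sketch (not part of the allocator): with FOOTERS, the
  word just past each inuse chunk is its owning mstate xor'ed with
  mparams.magic, so the owner can be recovered and cross-checked, in
  essence what free() and realloc() do.  A hypothetical fragment:
*/
#if 0
static mstate demo_owner_check(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);   /* footer ^ magic == owning mstate */
  if (!ok_magic(fm))               /* forged or corrupted footer */
    USAGE_ERROR_ACTION(fm, p);
  return fm;
}
#endif /* 0 */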
2510 
2511 /* ---------------------------- setting mparams -------------------------- */
2512 
2513 /* Initialize mparams */
2514 static int init_mparams(void) {
2515   if (mparams.page_size == 0) {
2516     size_t s;
2517 
2518     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2519     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2520 #if MORECORE_CONTIGUOUS
2521     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2522 #else  /* MORECORE_CONTIGUOUS */
2523     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2524 #endif /* MORECORE_CONTIGUOUS */
2525 
2526 #if (FOOTERS && !INSECURE)
2527     {
2528 #if USE_DEV_RANDOM
2529       int fd;
2530       unsigned char buf[sizeof(size_t)];
2531       /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0) {
        if (read(fd, buf, sizeof(buf)) == sizeof(buf))
          s = *((size_t *) buf);
        else  /* short read; fall back on using time */
          s = (size_t)(time(0) ^ (size_t)0x55555555U);
        close(fd);  /* close on both paths to avoid leaking the fd */
      }
      else
2538 #endif /* USE_DEV_RANDOM */
2539         s = (size_t)(time(0) ^ (size_t)0x55555555U);
2540 
2541       s |= (size_t)8U;    /* ensure nonzero */
2542       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2543 
2544     }
2545 #else /* (FOOTERS && !INSECURE) */
2546     s = (size_t)0x58585858U;
2547 #endif /* (FOOTERS && !INSECURE) */
2548     ACQUIRE_MAGIC_INIT_LOCK();
2549     if (mparams.magic == 0) {
2550       mparams.magic = s;
2551       /* Set up lock for main malloc area */
2552       INITIAL_LOCK(&gm->mutex);
2553       gm->mflags = mparams.default_mflags;
2554     }
2555     RELEASE_MAGIC_INIT_LOCK();
2556 
2557 #if !defined(WIN32) && !defined(__OS2__)
2558     mparams.page_size = malloc_getpagesize;
2559     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2560                            DEFAULT_GRANULARITY : mparams.page_size);
2561 #elif defined (__OS2__)
    /* If low memory is used, os2munmap() would break
       if the granularity were anything other than 64k */
2564     mparams.page_size = 4096u;
2565     mparams.granularity = 65536u;
2566 #else /* WIN32 */
2567     {
2568       SYSTEM_INFO system_info;
2569       GetSystemInfo(&system_info);
2570       mparams.page_size = system_info.dwPageSize;
2571       mparams.granularity = system_info.dwAllocationGranularity;
2572     }
2573 #endif /* WIN32 */
2574 
2575     /* Sanity-check configuration:
2576        size_t must be unsigned and as wide as pointer type.
2577        ints must be at least 4 bytes.
2578        alignment must be at least 8.
2579        Alignment, min chunk size, and page size must all be powers of 2.
2580     */
2581     if ((sizeof(size_t) != sizeof(char*)) ||
2582         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2583         (sizeof(int) < 4)  ||
2584         (MALLOC_ALIGNMENT < (size_t)8U) ||
2585         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2586         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2587         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2588         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2589       ABORT;
2590   }
2591   return 0;
2592 }
2593 
2594 /* support for mallopt */
2595 static int change_mparam(int param_number, int value) {
2596   size_t val = (size_t)value;
2597   init_mparams();
2598   switch(param_number) {
2599   case M_TRIM_THRESHOLD:
2600     mparams.trim_threshold = val;
2601     return 1;
2602   case M_GRANULARITY:
2603     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2604       mparams.granularity = val;
2605       return 1;
2606     }
2607     else
2608       return 0;
2609   case M_MMAP_THRESHOLD:
2610     mparams.mmap_threshold = val;
2611     return 1;
2612   default:
2613     return 0;
2614   }
2615 }
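
/*
  Usage sketch, via the public mallopt (a.k.a. dlmallopt) wrapper that
  relays here; values shown assume a 4K page size:

    mallopt(M_GRANULARITY, 64 * 1024);   accepted: power of 2 >= page size
    mallopt(M_GRANULARITY, 3000);        rejected: returns 0
    mallopt(M_TRIM_THRESHOLD, -1);       wraps to MAX_SIZE_T, which
                                         effectively disables trimming

  Only granularity is validated; the two thresholds accept any value.
*/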
2616 
2617 #if DEBUG
2618 /* ------------------------- Debugging Support --------------------------- */
2619 
2620 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
2621 static void do_check_any_chunk(mstate m, mchunkptr p) {
2622   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2623   assert(ok_address(m, p));
2624 }
2625 
2626 /* Check properties of top chunk */
2627 static void do_check_top_chunk(mstate m, mchunkptr p) {
2628   msegmentptr sp = segment_holding(m, (char*)p);
2629   size_t  sz = chunksize(p);
2630   assert(sp != 0);
2631   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2632   assert(ok_address(m, p));
2633   assert(sz == m->topsize);
2634   assert(sz > 0);
2635   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2636   assert(pinuse(p));
2637   assert(!next_pinuse(p));
2638 }
2639 
2640 /* Check properties of (inuse) mmapped chunks */
2641 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2642   size_t  sz = chunksize(p);
2643   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2644   assert(is_mmapped(p));
2645   assert(use_mmap(m));
2646   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2647   assert(ok_address(m, p));
2648   assert(!is_small(sz));
2649   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2650   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2651   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2652 }
2653 
2654 /* Check properties of inuse chunks */
2655 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2656   do_check_any_chunk(m, p);
2657   assert(cinuse(p));
2658   assert(next_pinuse(p));
2659   /* If not pinuse and not mmapped, previous chunk has OK offset */
2660   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2661   if (is_mmapped(p))
2662     do_check_mmapped_chunk(m, p);
2663 }
2664 
2665 /* Check properties of free chunks */
2666 static void do_check_free_chunk(mstate m, mchunkptr p) {
2667   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2668   mchunkptr next = chunk_plus_offset(p, sz);
2669   do_check_any_chunk(m, p);
2670   assert(!cinuse(p));
2671   assert(!next_pinuse(p));
2672   assert (!is_mmapped(p));
2673   if (p != m->dv && p != m->top) {
2674     if (sz >= MIN_CHUNK_SIZE) {
2675       assert((sz & CHUNK_ALIGN_MASK) == 0);
2676       assert(is_aligned(chunk2mem(p)));
2677       assert(next->prev_foot == sz);
2678       assert(pinuse(p));
2679       assert (next == m->top || cinuse(next));
2680       assert(p->fd->bk == p);
2681       assert(p->bk->fd == p);
2682     }
2683     else  /* markers are always of size SIZE_T_SIZE */
2684       assert(sz == SIZE_T_SIZE);
2685   }
2686 }
2687 
2688 /* Check properties of malloced chunks at the point they are malloced */
2689 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2690   if (mem != 0) {
2691     mchunkptr p = mem2chunk(mem);
2692     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2693     do_check_inuse_chunk(m, p);
2694     assert((sz & CHUNK_ALIGN_MASK) == 0);
2695     assert(sz >= MIN_CHUNK_SIZE);
2696     assert(sz >= s);
2697     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2698     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2699   }
2700 }
2701 
2702 /* Check a tree and its subtrees.  */
2703 static void do_check_tree(mstate m, tchunkptr t) {
2704   tchunkptr head = 0;
2705   tchunkptr u = t;
2706   bindex_t tindex = t->index;
2707   size_t tsize = chunksize(t);
2708   bindex_t idx;
2709   compute_tree_index(tsize, idx);
2710   assert(tindex == idx);
2711   assert(tsize >= MIN_LARGE_SIZE);
2712   assert(tsize >= minsize_for_tree_index(idx));
2713   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2714 
2715   do { /* traverse through chain of same-sized nodes */
2716     do_check_any_chunk(m, ((mchunkptr)u));
2717     assert(u->index == tindex);
2718     assert(chunksize(u) == tsize);
2719     assert(!cinuse(u));
2720     assert(!next_pinuse(u));
2721     assert(u->fd->bk == u);
2722     assert(u->bk->fd == u);
2723     if (u->parent == 0) {
2724       assert(u->child[0] == 0);
2725       assert(u->child[1] == 0);
2726     }
2727     else {
2728       assert(head == 0); /* only one node on chain has parent */
2729       head = u;
2730       assert(u->parent != u);
2731       assert (u->parent->child[0] == u ||
2732               u->parent->child[1] == u ||
2733               *((tbinptr*)(u->parent)) == u);
2734       if (u->child[0] != 0) {
2735         assert(u->child[0]->parent == u);
2736         assert(u->child[0] != u);
2737         do_check_tree(m, u->child[0]);
2738       }
2739       if (u->child[1] != 0) {
2740         assert(u->child[1]->parent == u);
2741         assert(u->child[1] != u);
2742         do_check_tree(m, u->child[1]);
2743       }
2744       if (u->child[0] != 0 && u->child[1] != 0) {
2745         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2746       }
2747     }
2748     u = u->fd;
2749   } while (u != t);
2750   assert(head != 0);
2751 }
2752 
2753 /*  Check all the chunks in a treebin.  */
2754 static void do_check_treebin(mstate m, bindex_t i) {
2755   tbinptr* tb = treebin_at(m, i);
2756   tchunkptr t = *tb;
2757   int empty = (m->treemap & (1U << i)) == 0;
2758   if (t == 0)
2759     assert(empty);
2760   if (!empty)
2761     do_check_tree(m, t);
2762 }
2763 
2764 /*  Check all the chunks in a smallbin.  */
2765 static void do_check_smallbin(mstate m, bindex_t i) {
2766   sbinptr b = smallbin_at(m, i);
2767   mchunkptr p = b->bk;
2768   unsigned int empty = (m->smallmap & (1U << i)) == 0;
2769   if (p == b)
2770     assert(empty);
2771   if (!empty) {
2772     for (; p != b; p = p->bk) {
2773       size_t size = chunksize(p);
2774       mchunkptr q;
2775       /* each chunk claims to be free */
2776       do_check_free_chunk(m, p);
2777       /* chunk belongs in bin */
2778       assert(small_index(size) == i);
2779       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2780       /* chunk is followed by an inuse chunk */
2781       q = next_chunk(p);
2782       if (q->head != FENCEPOST_HEAD)
2783         do_check_inuse_chunk(m, q);
2784     }
2785   }
2786 }
2787 
2788 /* Find x in a bin. Used in other check functions. */
2789 static int bin_find(mstate m, mchunkptr x) {
2790   size_t size = chunksize(x);
2791   if (is_small(size)) {
2792     bindex_t sidx = small_index(size);
2793     sbinptr b = smallbin_at(m, sidx);
2794     if (smallmap_is_marked(m, sidx)) {
2795       mchunkptr p = b;
2796       do {
2797         if (p == x)
2798           return 1;
2799       } while ((p = p->fd) != b);
2800     }
2801   }
2802   else {
2803     bindex_t tidx;
2804     compute_tree_index(size, tidx);
2805     if (treemap_is_marked(m, tidx)) {
2806       tchunkptr t = *treebin_at(m, tidx);
2807       size_t sizebits = size << leftshift_for_tree_index(tidx);
2808       while (t != 0 && chunksize(t) != size) {
2809         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2810         sizebits <<= 1;
2811       }
2812       if (t != 0) {
2813         tchunkptr u = t;
2814         do {
2815           if (u == (tchunkptr)x)
2816             return 1;
2817         } while ((u = u->fd) != t);
2818       }
2819     }
2820   }
2821   return 0;
2822 }
2823 
2824 /* Traverse each chunk and check it; return total */
2825 static size_t traverse_and_check(mstate m) {
2826   size_t sum = 0;
2827   if (is_initialized(m)) {
2828     msegmentptr s = &m->seg;
2829     sum += m->topsize + TOP_FOOT_SIZE;
2830     while (s != 0) {
2831       mchunkptr q = align_as_chunk(s->base);
2832       mchunkptr lastq = 0;
2833       assert(pinuse(q));
2834       while (segment_holds(s, q) &&
2835              q != m->top && q->head != FENCEPOST_HEAD) {
2836         sum += chunksize(q);
2837         if (cinuse(q)) {
2838           assert(!bin_find(m, q));
2839           do_check_inuse_chunk(m, q);
2840         }
2841         else {
2842           assert(q == m->dv || bin_find(m, q));
2843           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2844           do_check_free_chunk(m, q);
2845         }
2846         lastq = q;
2847         q = next_chunk(q);
2848       }
2849       s = s->next;
2850     }
2851   }
2852   return sum;
2853 }
2854 
2855 /* Check all properties of malloc_state. */
2856 static void do_check_malloc_state(mstate m) {
2857   bindex_t i;
2858   size_t total;
2859   /* check bins */
2860   for (i = 0; i < NSMALLBINS; ++i)
2861     do_check_smallbin(m, i);
2862   for (i = 0; i < NTREEBINS; ++i)
2863     do_check_treebin(m, i);
2864 
2865   if (m->dvsize != 0) { /* check dv chunk */
2866     do_check_any_chunk(m, m->dv);
2867     assert(m->dvsize == chunksize(m->dv));
2868     assert(m->dvsize >= MIN_CHUNK_SIZE);
2869     assert(bin_find(m, m->dv) == 0);
2870   }
2871 
2872   if (m->top != 0) {   /* check top chunk */
2873     do_check_top_chunk(m, m->top);
2874     assert(m->topsize == chunksize(m->top));
2875     assert(m->topsize > 0);
2876     assert(bin_find(m, m->top) == 0);
2877   }
2878 
2879   total = traverse_and_check(m);
2880   assert(total <= m->footprint);
2881   assert(m->footprint <= m->max_footprint);
2882 }
2883 #endif /* DEBUG */
2884 
2885 /* ----------------------------- statistics ------------------------------ */
2886 
2887 #if !NO_MALLINFO
2888 static struct mallinfo internal_mallinfo(mstate m) {
2889   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2890   if (!PREACTION(m)) {
2891     check_malloc_state(m);
2892     if (is_initialized(m)) {
2893       size_t nfree = SIZE_T_ONE; /* top always free */
2894       size_t mfree = m->topsize + TOP_FOOT_SIZE;
2895       size_t sum = mfree;
2896       msegmentptr s = &m->seg;
2897       while (s != 0) {
2898         mchunkptr q = align_as_chunk(s->base);
2899         while (segment_holds(s, q) &&
2900                q != m->top && q->head != FENCEPOST_HEAD) {
2901           size_t sz = chunksize(q);
2902           sum += sz;
2903           if (!cinuse(q)) {
2904             mfree += sz;
2905             ++nfree;
2906           }
2907           q = next_chunk(q);
2908         }
2909         s = s->next;
2910       }
2911 
2912       nm.arena    = sum;
2913       nm.ordblks  = nfree;
2914       nm.hblkhd   = m->footprint - sum;
2915       nm.usmblks  = m->max_footprint;
2916       nm.uordblks = m->footprint - mfree;
2917       nm.fordblks = mfree;
2918       nm.keepcost = m->topsize;
2919     }
2920 
2921     POSTACTION(m);
2922   }
2923   return nm;
2924 }
2925 #endif /* !NO_MALLINFO */
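
/*
  Field mapping used above, in brief: arena is the chunk space
  traversed in this mspace, ordblks the number of free chunks, hblkhd
  the bytes held in directly mmapped regions (footprint minus arena),
  usmblks the peak footprint, uordblks the space currently in use,
  fordblks the total free space, and keepcost the top size (roughly
  the amount malloc_trim could release).
*/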
2926 
2927 static void internal_malloc_stats(mstate m) {
2928   if (!PREACTION(m)) {
2929     size_t maxfp = 0;
2930     size_t fp = 0;
2931     size_t used = 0;
2932     check_malloc_state(m);
2933     if (is_initialized(m)) {
2934       msegmentptr s = &m->seg;
2935       maxfp = m->max_footprint;
2936       fp = m->footprint;
2937       used = fp - (m->topsize + TOP_FOOT_SIZE);
2938 
2939       while (s != 0) {
2940         mchunkptr q = align_as_chunk(s->base);
2941         while (segment_holds(s, q) &&
2942                q != m->top && q->head != FENCEPOST_HEAD) {
2943           if (!cinuse(q))
2944             used -= chunksize(q);
2945           q = next_chunk(q);
2946         }
2947         s = s->next;
2948       }
2949     }
2950 
2951     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2952     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2953     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2954 
2955     POSTACTION(m);
2956   }
2957 }
2958 
2959 /* ----------------------- Operations on smallbins ----------------------- */
2960 
2961 /*
2962   Various forms of linking and unlinking are defined as macros.  Even
2963   the ones for trees, which are very long but have very short typical
2964   paths.  This is ugly but reduces reliance on inlining support of
2965   compilers.
2966 */
2967 
2968 /* Link a free chunk into a smallbin  */
2969 #define insert_small_chunk(M, P, S) {\
2970   bindex_t I  = small_index(S);\
2971   mchunkptr B = smallbin_at(M, I);\
2972   mchunkptr F = B;\
2973   assert(S >= MIN_CHUNK_SIZE);\
2974   if (!smallmap_is_marked(M, I))\
2975     mark_smallmap(M, I);\
2976   else if (RTCHECK(ok_address(M, B->fd)))\
2977     F = B->fd;\
2978   else {\
2979     CORRUPTION_ERROR_ACTION(M);\
2980   }\
2981   B->fd = P;\
2982   F->bk = P;\
2983   P->fd = F;\
2984   P->bk = B;\
2985 }
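
/*
  After the four pointer assignments above, the bin is circular and
  doubly linked, with the bin header B acting as a sentinel:

      B.fd == P,  P.bk == B,  P.fd == F,  F.bk == P

  An empty bin is thus one whose header points to itself, as
  established by init_bins below.
*/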
2986 
2987 /* Unlink a chunk from a smallbin  */
2988 #define unlink_small_chunk(M, P, S) {\
2989   mchunkptr F = P->fd;\
2990   mchunkptr B = P->bk;\
2991   bindex_t I = small_index(S);\
2992   assert(P != B);\
2993   assert(P != F);\
2994   assert(chunksize(P) == small_index2size(I));\
2995   if (F == B)\
2996     clear_smallmap(M, I);\
2997   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2998                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2999     F->bk = B;\
3000     B->fd = F;\
3001   }\
3002   else {\
3003     CORRUPTION_ERROR_ACTION(M);\
3004   }\
3005 }
3006 
3007 /* Unlink the first chunk from a smallbin */
3008 #define unlink_first_small_chunk(M, B, P, I) {\
3009   mchunkptr F = P->fd;\
3010   assert(P != B);\
3011   assert(P != F);\
3012   assert(chunksize(P) == small_index2size(I));\
3013   if (B == F)\
3014     clear_smallmap(M, I);\
3015   else if (RTCHECK(ok_address(M, F))) {\
3016     B->fd = F;\
3017     F->bk = B;\
3018   }\
3019   else {\
3020     CORRUPTION_ERROR_ACTION(M);\
3021   }\
3022 }
3023 
3024 /* Replace dv node, binning the old one */
3025 /* Used only when dvsize known to be small */
3026 #define replace_dv(M, P, S) {\
3027   size_t DVS = M->dvsize;\
3028   if (DVS != 0) {\
3029     mchunkptr DV = M->dv;\
3030     assert(is_small(DVS));\
3031     insert_small_chunk(M, DV, DVS);\
3032   }\
3033   M->dvsize = S;\
3034   M->dv = P;\
3035 }
3036 
3037 /* ------------------------- Operations on trees ------------------------- */
3038 
3039 /* Insert chunk into tree */
3040 #define insert_large_chunk(M, X, S) {\
3041   tbinptr* H;\
3042   bindex_t I;\
3043   compute_tree_index(S, I);\
3044   H = treebin_at(M, I);\
3045   X->index = I;\
3046   X->child[0] = X->child[1] = 0;\
3047   if (!treemap_is_marked(M, I)) {\
3048     mark_treemap(M, I);\
3049     *H = X;\
3050     X->parent = (tchunkptr)H;\
3051     X->fd = X->bk = X;\
3052   }\
3053   else {\
3054     tchunkptr T = *H;\
3055     size_t K = S << leftshift_for_tree_index(I);\
3056     for (;;) {\
3057       if (chunksize(T) != S) {\
3058         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3059         K <<= 1;\
3060         if (*C != 0)\
3061           T = *C;\
3062         else if (RTCHECK(ok_address(M, C))) {\
3063           *C = X;\
3064           X->parent = T;\
3065           X->fd = X->bk = X;\
3066           break;\
3067         }\
3068         else {\
3069           CORRUPTION_ERROR_ACTION(M);\
3070           break;\
3071         }\
3072       }\
3073       else {\
3074         tchunkptr F = T->fd;\
3075         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3076           T->fd = F->bk = X;\
3077           X->fd = F;\
3078           X->bk = T;\
3079           X->parent = 0;\
3080           break;\
3081         }\
3082         else {\
3083           CORRUPTION_ERROR_ACTION(M);\
3084           break;\
3085         }\
3086       }\
3087     }\
3088   }\
3089 }
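
/*
  A note on the descent above: leftshift_for_tree_index(I) positions
  the first size bit not already implied by bin I at the top of K, so
  with a 4-byte size_t, (K >> 31) & 1 reads that bit and K <<= 1
  advances to the next; 0 selects the left child, 1 the right.
  Equal-sized chunks never deepen the tree: they are chained through
  fd/bk at the node where the walk finds a size match.
*/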
3090 
3091 /*
3092   Unlink steps:
3093 
3094   1. If x is a chained node, unlink it from its same-sized fd/bk links
3095      and choose its bk node as its replacement.
3096   2. If x was the last node of its size, but not a leaf node, it must
3097      be replaced with a leaf node (not merely one with an open left or
3098      right), to make sure that lefts and rights of descendants
3099      correspond properly to bit masks.  We use the rightmost descendant
3100      of x.  We could use any other leaf, but this is easy to locate and
3101      tends to counteract removal of leftmosts elsewhere, and so keeps
3102      paths shorter than minimally guaranteed.  This doesn't loop much
3103      because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links), relink
     x's parent and children to x's replacement (or null if none).
3106 */
3107 
3108 #define unlink_large_chunk(M, X) {\
3109   tchunkptr XP = X->parent;\
3110   tchunkptr R;\
3111   if (X->bk != X) {\
3112     tchunkptr F = X->fd;\
3113     R = X->bk;\
3114     if (RTCHECK(ok_address(M, F))) {\
3115       F->bk = R;\
3116       R->fd = F;\
3117     }\
3118     else {\
3119       CORRUPTION_ERROR_ACTION(M);\
3120     }\
3121   }\
3122   else {\
3123     tchunkptr* RP;\
3124     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3125         ((R = *(RP = &(X->child[0]))) != 0)) {\
3126       tchunkptr* CP;\
3127       while ((*(CP = &(R->child[1])) != 0) ||\
3128              (*(CP = &(R->child[0])) != 0)) {\
3129         R = *(RP = CP);\
3130       }\
3131       if (RTCHECK(ok_address(M, RP)))\
3132         *RP = 0;\
3133       else {\
3134         CORRUPTION_ERROR_ACTION(M);\
3135       }\
3136     }\
3137   }\
3138   if (XP != 0) {\
3139     tbinptr* H = treebin_at(M, X->index);\
3140     if (X == *H) {\
3141       if ((*H = R) == 0) \
3142         clear_treemap(M, X->index);\
3143     }\
3144     else if (RTCHECK(ok_address(M, XP))) {\
3145       if (XP->child[0] == X) \
3146         XP->child[0] = R;\
3147       else \
3148         XP->child[1] = R;\
3149     }\
3150     else\
3151       CORRUPTION_ERROR_ACTION(M);\
3152     if (R != 0) {\
3153       if (RTCHECK(ok_address(M, R))) {\
3154         tchunkptr C0, C1;\
3155         R->parent = XP;\
3156         if ((C0 = X->child[0]) != 0) {\
3157           if (RTCHECK(ok_address(M, C0))) {\
3158             R->child[0] = C0;\
3159             C0->parent = R;\
3160           }\
3161           else\
3162             CORRUPTION_ERROR_ACTION(M);\
3163         }\
3164         if ((C1 = X->child[1]) != 0) {\
3165           if (RTCHECK(ok_address(M, C1))) {\
3166             R->child[1] = C1;\
3167             C1->parent = R;\
3168           }\
3169           else\
3170             CORRUPTION_ERROR_ACTION(M);\
3171         }\
3172       }\
3173       else\
3174         CORRUPTION_ERROR_ACTION(M);\
3175     }\
3176   }\
3177 }
3178 
3179 /* Relays to large vs small bin operations */
3180 
3181 #define insert_chunk(M, P, S)\
3182   if (is_small(S)) insert_small_chunk(M, P, S)\
3183   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3184 
3185 #define unlink_chunk(M, P, S)\
3186   if (is_small(S)) unlink_small_chunk(M, P, S)\
3187   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3188 
3189 
3190 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3191 
3192 #if ONLY_MSPACES
3193 #define internal_malloc(m, b) mspace_malloc(m, b)
3194 #define internal_free(m, mem) mspace_free(m,mem);
3195 #else /* ONLY_MSPACES */
3196 #if MSPACES
3197 #define internal_malloc(m, b)\
3198    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3199 #define internal_free(m, mem)\
3200    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3201 #else /* MSPACES */
3202 #define internal_malloc(m, b) dlmalloc(b)
3203 #define internal_free(m, mem) dlfree(mem)
3204 #endif /* MSPACES */
3205 #endif /* ONLY_MSPACES */
3206 
3207 /* -----------------------  Direct-mmapping chunks ----------------------- */
3208 
3209 /*
3210   Directly mmapped chunks are set up with an offset to the start of
3211   the mmapped region stored in the prev_foot field of the chunk. This
3212   allows reconstruction of the required argument to MUNMAP when freed,
3213   and also allows adjustment of the returned chunk to meet alignment
3214   requirements (especially in memalign).  There is also enough space
3215   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3216   the PINUSE bit so frees can be checked.
3217 */
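
/*
  Sketch of the reconstruction this enables when such a chunk is
  freed (given a directly mmapped chunk p):

    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
    char*  base   = (char*)p - offset;
    size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;
    CALL_MUNMAP(base, length);

  mmap_resize below uses the same computation to derive the
  arguments for CALL_MREMAP.
*/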
3218 
3219 /* Malloc using mmap */
3220 static void* mmap_alloc(mstate m, size_t nb) {
3221   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3222   if (mmsize > nb) {     /* Check for wrap around 0 */
3223     char* mm = (char*)(DIRECT_MMAP(mmsize));
3224     if (mm != CMFAIL) {
3225       size_t offset = align_offset(chunk2mem(mm));
3226       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3227       mchunkptr p = (mchunkptr)(mm + offset);
3228       p->prev_foot = offset | IS_MMAPPED_BIT;
3229       (p)->head = (psize|CINUSE_BIT);
3230       mark_inuse_foot(m, p, psize);
3231       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3232       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3233 
3234       if (mm < m->least_addr)
3235         m->least_addr = mm;
3236       if ((m->footprint += mmsize) > m->max_footprint)
3237         m->max_footprint = m->footprint;
3238       assert(is_aligned(chunk2mem(p)));
3239       check_mmapped_chunk(m, p);
3240       return chunk2mem(p);
3241     }
3242   }
3243   return 0;
3244 }
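
/*
  Worked example (assuming 8-byte size_t, 8-byte alignment, and the
  default 64K granularity): for a padded request nb of 100000 bytes,
  mmsize = granularity_align(100000 + 48 + 7) = 131072, leaving
  roughly 30K of slop in the mapping -- one reason only requests at
  or above mmap_threshold are routed here.
*/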
3245 
3246 /* Realloc using mmap */
3247 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3248   size_t oldsize = chunksize(oldp);
3249   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3250     return 0;
3251   /* Keep old chunk if big enough but not too big */
3252   if (oldsize >= nb + SIZE_T_SIZE &&
3253       (oldsize - nb) <= (mparams.granularity << 1))
3254     return oldp;
3255   else {
3256     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3257     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3258     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3259                                          CHUNK_ALIGN_MASK);
3260     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3261                                   oldmmsize, newmmsize, 1);
3262     if (cp != CMFAIL) {
3263       mchunkptr newp = (mchunkptr)(cp + offset);
3264       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3265       newp->head = (psize|CINUSE_BIT);
3266       mark_inuse_foot(m, newp, psize);
3267       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3268       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3269 
3270       if (cp < m->least_addr)
3271         m->least_addr = cp;
3272       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3273         m->max_footprint = m->footprint;
3274       check_mmapped_chunk(m, newp);
3275       return newp;
3276     }
3277   }
3278   return 0;
3279 }
3280 
3281 /* -------------------------- mspace management -------------------------- */
3282 
3283 /* Initialize top chunk and its size */
3284 static void init_top(mstate m, mchunkptr p, size_t psize) {
3285   /* Ensure alignment */
3286   size_t offset = align_offset(chunk2mem(p));
3287   p = (mchunkptr)((char*)p + offset);
3288   psize -= offset;
3289 
3290   m->top = p;
3291   m->topsize = psize;
3292   p->head = psize | PINUSE_BIT;
3293   /* set size of fake trailing chunk holding overhead space only once */
3294   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3295   m->trim_check = mparams.trim_threshold; /* reset on each update */
3296 }
3297 
3298 /* Initialize bins for a new mstate that is otherwise zeroed out */
3299 static void init_bins(mstate m) {
3300   /* Establish circular links for smallbins */
3301   bindex_t i;
3302   for (i = 0; i < NSMALLBINS; ++i) {
3303     sbinptr bin = smallbin_at(m,i);
3304     bin->fd = bin->bk = bin;
3305   }
3306 }
3307 
3308 #if PROCEED_ON_ERROR
3309 
3310 /* default corruption action */
3311 static void reset_on_error(mstate m) {
3312   int i;
3313   ++malloc_corruption_error_count;
3314   /* Reinitialize fields to forget about all memory */
3315   m->smallbins = m->treebins = 0;
3316   m->dvsize = m->topsize = 0;
3317   m->seg.base = 0;
3318   m->seg.size = 0;
3319   m->seg.next = 0;
3320   m->top = m->dv = 0;
3321   for (i = 0; i < NTREEBINS; ++i)
3322     *treebin_at(m, i) = 0;
3323   init_bins(m);
3324 }
3325 #endif /* PROCEED_ON_ERROR */
3326 
/* Allocate a chunk from newbase, consolidating the remainder with the
   first chunk of the successor base (oldbase). */
3328 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3329                            size_t nb) {
3330   mchunkptr p = align_as_chunk(newbase);
3331   mchunkptr oldfirst = align_as_chunk(oldbase);
3332   size_t psize = (char*)oldfirst - (char*)p;
3333   mchunkptr q = chunk_plus_offset(p, nb);
3334   size_t qsize = psize - nb;
3335   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3336 
3337   assert((char*)oldfirst > (char*)q);
3338   assert(pinuse(oldfirst));
3339   assert(qsize >= MIN_CHUNK_SIZE);
3340 
3341   /* consolidate remainder with first chunk of old base */
3342   if (oldfirst == m->top) {
3343     size_t tsize = m->topsize += qsize;
3344     m->top = q;
3345     q->head = tsize | PINUSE_BIT;
3346     check_top_chunk(m, q);
3347   }
3348   else if (oldfirst == m->dv) {
3349     size_t dsize = m->dvsize += qsize;
3350     m->dv = q;
3351     set_size_and_pinuse_of_free_chunk(q, dsize);
3352   }
3353   else {
3354     if (!cinuse(oldfirst)) {
3355       size_t nsize = chunksize(oldfirst);
3356       unlink_chunk(m, oldfirst, nsize);
3357       oldfirst = chunk_plus_offset(oldfirst, nsize);
3358       qsize += nsize;
3359     }
3360     set_free_with_pinuse(q, qsize, oldfirst);
3361     insert_chunk(m, q, qsize);
3362     check_free_chunk(m, q);
3363   }
3364 
3365   check_malloced_chunk(m, chunk2mem(p), nb);
3366   return chunk2mem(p);
3367 }
3368 
3369 
3370 /* Add a segment to hold a new noncontiguous region */
3371 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3372   /* Determine locations and sizes of segment, fenceposts, old top */
3373   char* old_top = (char*)m->top;
3374   msegmentptr oldsp = segment_holding(m, old_top);
3375   char* old_end = oldsp->base + oldsp->size;
3376   size_t ssize = pad_request(sizeof(struct malloc_segment));
3377   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3378   size_t offset = align_offset(chunk2mem(rawsp));
3379   char* asp = rawsp + offset;
3380   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3381   mchunkptr sp = (mchunkptr)csp;
3382   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3383   mchunkptr tnext = chunk_plus_offset(sp, ssize);
3384   mchunkptr p = tnext;
3385   int nfences = 0;
3386 
3387   /* reset top to new space */
3388   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3389 
3390   /* Set up segment record */
3391   assert(is_aligned(ss));
3392   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3393   *ss = m->seg; /* Push current record */
3394   m->seg.base = tbase;
3395   m->seg.size = tsize;
3396   (void)set_segment_flags(&m->seg, mmapped);
3397   m->seg.next = ss;
3398 
3399   /* Insert trailing fenceposts */
3400   for (;;) {
3401     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3402     p->head = FENCEPOST_HEAD;
3403     ++nfences;
3404     if ((char*)(&(nextp->head)) < old_end)
3405       p = nextp;
3406     else
3407       break;
3408   }
3409   assert(nfences >= 2);
3410 
3411   /* Insert the rest of old top into a bin as an ordinary free chunk */
3412   if (csp != old_top) {
3413     mchunkptr q = (mchunkptr)old_top;
3414     size_t psize = csp - old_top;
3415     mchunkptr tn = chunk_plus_offset(q, psize);
3416     set_free_with_pinuse(q, psize, tn);
3417     insert_chunk(m, q, psize);
3418   }
3419 
3420   check_top_chunk(m, m->top);
3421 }
3422 
3423 /* -------------------------- System allocation -------------------------- */
3424 
3425 /* Get memory from system using MORECORE or MMAP */
3426 static void* sys_alloc(mstate m, size_t nb) {
3427   char* tbase = CMFAIL;
3428   size_t tsize = 0;
3429   flag_t mmap_flag = 0;
3430 
3431   init_mparams();
3432 
3433   /* Directly map large chunks */
3434   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3435     void* mem = mmap_alloc(m, nb);
3436     if (mem != 0)
3437       return mem;
3438   }
3439 
3440   /*
3441     Try getting memory in any of three ways (in most-preferred to
3442     least-preferred order):
3443     1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
       or main space is mmapped or a previous contiguous call failed)
3446     2. A call to MMAP new space (disabled if not HAVE_MMAP).
3447        Note that under the default settings, if MORECORE is unable to
3448        fulfill a request, and HAVE_MMAP is true, then mmap is
3449        used as a noncontiguous system allocator. This is a useful backup
3450        strategy for systems with holes in address spaces -- in this case
3451        sbrk cannot contiguously expand the heap, but mmap may be able to
3452        find space.
3453     3. A call to MORECORE that cannot usually contiguously extend memory.
3454        (disabled if not HAVE_MORECORE)
3455   */
3456 
3457   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3458     char* br = CMFAIL;
3459     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3460     size_t asize = 0;
3461     ACQUIRE_MORECORE_LOCK();
3462 
3463     if (ss == 0) {  /* First time through or recovery */
3464       char* base = (char*)CALL_MORECORE(0);
3465       if (base != CMFAIL) {
3466         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3467         /* Adjust to end on a page boundary */
3468         if (!is_page_aligned(base))
3469           asize += (page_align((size_t)base) - (size_t)base);
3470         /* Can't call MORECORE if size is negative when treated as signed */
3471         if (asize < HALF_MAX_SIZE_T &&
3472             (br = (char*)(CALL_MORECORE(asize))) == base) {
3473           tbase = base;
3474           tsize = asize;
3475         }
3476       }
3477     }
3478     else {
3479       /* Subtract out existing available top space from MORECORE request. */
3480       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
      /* Use mem here only if it did contiguously extend old space */
3482       if (asize < HALF_MAX_SIZE_T &&
3483           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3484         tbase = br;
3485         tsize = asize;
3486       }
3487     }
3488 
3489     if (tbase == CMFAIL) {    /* Cope with partial failure */
3490       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
3491         if (asize < HALF_MAX_SIZE_T &&
3492             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3493           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3494           if (esize < HALF_MAX_SIZE_T) {
3495             char* end = (char*)CALL_MORECORE(esize);
3496             if (end != CMFAIL)
3497               asize += esize;
3498             else {            /* Can't use; try to release */
3499               (void)CALL_MORECORE(-asize);
3500               br = CMFAIL;
3501             }
3502           }
3503         }
3504       }
3505       if (br != CMFAIL) {    /* Use the space we did get */
3506         tbase = br;
3507         tsize = asize;
3508       }
3509       else
3510         disable_contiguous(m); /* Don't try contiguous path in the future */
3511     }
3512 
3513     RELEASE_MORECORE_LOCK();
3514   }
3515 
3516   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
3517     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3518     size_t rsize = granularity_align(req);
3519     if (rsize > nb) { /* Fail if wraps around zero */
3520       char* mp = (char*)(CALL_MMAP(rsize));
3521       if (mp != CMFAIL) {
3522         tbase = mp;
3523         tsize = rsize;
3524         mmap_flag = IS_MMAPPED_BIT;
3525       }
3526     }
3527   }
3528 
3529   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3530     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3531     if (asize < HALF_MAX_SIZE_T) {
3532       char* br = CMFAIL;
3533       char* end = CMFAIL;
3534       ACQUIRE_MORECORE_LOCK();
3535       br = (char*)(CALL_MORECORE(asize));
3536       end = (char*)(CALL_MORECORE(0));
3537       RELEASE_MORECORE_LOCK();
3538       if (br != CMFAIL && end != CMFAIL && br < end) {
3539         size_t ssize = end - br;
3540         if (ssize > nb + TOP_FOOT_SIZE) {
3541           tbase = br;
3542           tsize = ssize;
3543         }
3544       }
3545     }
3546   }
3547 
3548   if (tbase != CMFAIL) {
3549 
3550     if ((m->footprint += tsize) > m->max_footprint)
3551       m->max_footprint = m->footprint;
3552 
3553     if (!is_initialized(m)) { /* first-time initialization */
3554       m->seg.base = m->least_addr = tbase;
3555       m->seg.size = tsize;
3556       (void)set_segment_flags(&m->seg, mmap_flag);
3557       m->magic = mparams.magic;
3558       init_bins(m);
3559       if (is_global(m))
3560         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3561       else {
3562         /* Offset top by embedded malloc_state */
3563         mchunkptr mn = next_chunk(mem2chunk(m));
3564         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3565       }
3566     }
3567 
3568     else {
3569       /* Try to merge with an existing segment */
3570       msegmentptr sp = &m->seg;
3571       while (sp != 0 && tbase != sp->base + sp->size)
3572         sp = sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          check_segment_merge(sp, tbase, tsize) &&
          (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
3578         sp->size += tsize;
3579         init_top(m, m->top, m->topsize + tsize);
3580       }
3581       else {
3582         if (tbase < m->least_addr)
3583           m->least_addr = tbase;
3584         sp = &m->seg;
3585         while (sp != 0 && sp->base != tbase + tsize)
3586           sp = sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            check_segment_merge(sp, tbase, tsize) &&
            (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
3591           char* oldbase = sp->base;
3592           sp->base = tbase;
3593           sp->size += tsize;
3594           return prepend_alloc(m, tbase, oldbase, nb);
3595         }
3596         else
3597           add_segment(m, tbase, tsize, mmap_flag);
3598       }
3599     }
3600 
3601     if (nb < m->topsize) { /* Allocate from new or extended top space */
3602       size_t rsize = m->topsize -= nb;
3603       mchunkptr p = m->top;
3604       mchunkptr r = m->top = chunk_plus_offset(p, nb);
3605       r->head = rsize | PINUSE_BIT;
3606       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3607       check_top_chunk(m, m->top);
3608       check_malloced_chunk(m, chunk2mem(p), nb);
3609       return chunk2mem(p);
3610     }
3611   }
3612 
3613   MALLOC_FAILURE_ACTION;
3614   return 0;
3615 }
3616 
3617 /* -----------------------  system deallocation -------------------------- */
3618 
3619 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3620 static size_t release_unused_segments(mstate m) {
3621   size_t released = 0;
3622   msegmentptr pred = &m->seg;
3623   msegmentptr sp = pred->next;
3624   while (sp != 0) {
3625     char* base = sp->base;
3626     size_t size = sp->size;
3627     msegmentptr next = sp->next;
3628     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3629       mchunkptr p = align_as_chunk(base);
3630       size_t psize = chunksize(p);
3631       /* Can unmap if first chunk holds entire segment and not pinned */
3632       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3633         tchunkptr tp = (tchunkptr)p;
3634         assert(segment_holds(sp, (char*)sp));
3635         if (p == m->dv) {
3636           m->dv = 0;
3637           m->dvsize = 0;
3638         }
3639         else {
3640           unlink_large_chunk(m, tp);
3641         }
3642         if (CALL_MUNMAP(base, size) == 0) {
3643           released += size;
3644           m->footprint -= size;
3645           /* unlink obsoleted record */
3646           sp = pred;
3647           sp->next = next;
3648         }
3649         else { /* back out if cannot unmap */
3650           insert_large_chunk(m, tp, psize);
3651         }
3652       }
3653     }
3654     pred = sp;
3655     sp = next;
3656   }
3657   return released;
3658 }
3659 
3660 static int sys_trim(mstate m, size_t pad) {
3661   size_t released = 0;
3662   if (pad < MAX_REQUEST && is_initialized(m)) {
3663     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3664 
3665     if (m->topsize > pad) {
3666       /* Shrink top space in granularity-size units, keeping at least one */
3667       size_t unit = mparams.granularity;
3668       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3669                       SIZE_T_ONE) * unit;
3670       msegmentptr sp = segment_holding(m, (char*)m->top);
3671 
3672       if (!is_extern_segment(sp)) {
3673         if (is_mmapped_segment(sp)) {
3674           if (HAVE_MMAP &&
3675               sp->size >= extra &&
3676               !has_segment_link(m, sp)) { /* can't shrink if pinned */
3677             size_t newsize = sp->size - extra;
3678             /* Prefer mremap, fall back to munmap */
3679             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3680                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3681               released = extra;
3682             }
3683           }
3684         }
3685         else if (HAVE_MORECORE) {
3686           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3687             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3688           ACQUIRE_MORECORE_LOCK();
3689           {
3690             /* Make sure end of memory is where we last set it. */
3691             char* old_br = (char*)(CALL_MORECORE(0));
3692             if (old_br == sp->base + sp->size) {
3693               char* rel_br = (char*)(CALL_MORECORE(-extra));
3694               char* new_br = (char*)(CALL_MORECORE(0));
3695               if (rel_br != CMFAIL && new_br < old_br)
3696                 released = old_br - new_br;
3697             }
3698           }
3699           RELEASE_MORECORE_LOCK();
3700         }
3701       }
3702 
3703       if (released != 0) {
3704         sp->size -= released;
3705         m->footprint -= released;
3706         init_top(m, m->top, m->topsize - released);
3707         check_top_chunk(m, m->top);
3708       }
3709     }
3710 
3711     /* Unmap any unused mmapped segments */
3712     if (HAVE_MMAP)
3713       released += release_unused_segments(m);
3714 
3715     /* On failure, disable autotrim to avoid repeated failed future calls */
3716     if (released == 0)
3717       m->trim_check = MAX_SIZE_T;
3718   }
3719 
3720   return (released != 0)? 1 : 0;
3721 }
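
/*
  Usage sketch: the public malloc_trim (a.k.a. dlmalloc_trim) entry
  point relays here, e.g.

    malloc_trim(0);            release as much top space as possible
    malloc_trim(64 * 1024);    but keep at least 64K readily available

  The result is 1 only if some memory was actually released.
*/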
3722 
3723 /* ---------------------------- malloc support --------------------------- */
3724 
3725 /* allocate a large request from the best fitting chunk in a treebin */
3726 static void* tmalloc_large(mstate m, size_t nb) {
3727   tchunkptr v = 0;
3728   size_t rsize = -nb; /* Unsigned negation */
3729   tchunkptr t;
3730   bindex_t idx;
3731   compute_tree_index(nb, idx);
3732 
3733   if ((t = *treebin_at(m, idx)) != 0) {
3734     /* Traverse tree for this bin looking for node with size == nb */
3735     size_t sizebits = nb << leftshift_for_tree_index(idx);
3736     tchunkptr rst = 0;  /* The deepest untaken right subtree */
3737     for (;;) {
3738       tchunkptr rt;
3739       size_t trem = chunksize(t) - nb;
3740       if (trem < rsize) {
3741         v = t;
3742         if ((rsize = trem) == 0)
3743           break;
3744       }
3745       rt = t->child[1];
3746       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3747       if (rt != 0 && rt != t)
3748         rst = rt;
3749       if (t == 0) {
3750         t = rst; /* set t to least subtree holding sizes > nb */
3751         break;
3752       }
3753       sizebits <<= 1;
3754     }
3755   }
3756 
3757   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3758     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3759     if (leftbits != 0) {
3760       bindex_t i;
3761       binmap_t leastbit = least_bit(leftbits);
3762       compute_bit2idx(leastbit, i);
3763       t = *treebin_at(m, i);
3764     }
3765   }
3766 
3767   while (t != 0) { /* find smallest of tree or subtree */
3768     size_t trem = chunksize(t) - nb;
3769     if (trem < rsize) {
3770       rsize = trem;
3771       v = t;
3772     }
3773     t = leftmost_child(t);
3774   }
3775 
  /* If dv is a better fit, return 0 so malloc will use it instead.
     (When dvsize < nb, the unsigned subtraction wraps to a huge value,
     so any tree candidate found above compares as better and wins.) */
3777   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3778     if (RTCHECK(ok_address(m, v))) { /* split */
3779       mchunkptr r = chunk_plus_offset(v, nb);
3780       assert(chunksize(v) == rsize + nb);
3781       if (RTCHECK(ok_next(v, r))) {
3782         unlink_large_chunk(m, v);
3783         if (rsize < MIN_CHUNK_SIZE)
3784           set_inuse_and_pinuse(m, v, (rsize + nb));
3785         else {
3786           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3787           set_size_and_pinuse_of_free_chunk(r, rsize);
3788           insert_chunk(m, r, rsize);
3789         }
3790         return chunk2mem(v);
3791       }
3792     }
3793     CORRUPTION_ERROR_ACTION(m);
3794   }
3795   return 0;
3796 }
3797 
3798 /* allocate a small request from the best fitting chunk in a treebin */
3799 static void* tmalloc_small(mstate m, size_t nb) {
3800   tchunkptr t, v;
3801   size_t rsize;
3802   bindex_t i;
3803   binmap_t leastbit = least_bit(m->treemap);
3804   compute_bit2idx(leastbit, i);
3805 
3806   v = t = *treebin_at(m, i);
3807   rsize = chunksize(t) - nb;
3808 
3809   while ((t = leftmost_child(t)) != 0) {
3810     size_t trem = chunksize(t) - nb;
3811     if (trem < rsize) {
3812       rsize = trem;
3813       v = t;
3814     }
3815   }
3816 
3817   if (RTCHECK(ok_address(m, v))) {
3818     mchunkptr r = chunk_plus_offset(v, nb);
3819     assert(chunksize(v) == rsize + nb);
3820     if (RTCHECK(ok_next(v, r))) {
3821       unlink_large_chunk(m, v);
3822       if (rsize < MIN_CHUNK_SIZE)
3823         set_inuse_and_pinuse(m, v, (rsize + nb));
3824       else {
3825         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3826         set_size_and_pinuse_of_free_chunk(r, rsize);
3827         replace_dv(m, r, rsize);
3828       }
3829       return chunk2mem(v);
3830     }
3831   }
3832 
3833   CORRUPTION_ERROR_ACTION(m);
3834   return 0;
3835 }
3836 
3837 /* --------------------------- realloc support --------------------------- */
3838 
3839 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3840   if (bytes >= MAX_REQUEST) {
3841     MALLOC_FAILURE_ACTION;
3842     return 0;
3843   }
3844   if (!PREACTION(m)) {
3845     mchunkptr oldp = mem2chunk(oldmem);
3846     size_t oldsize = chunksize(oldp);
3847     mchunkptr next = chunk_plus_offset(oldp, oldsize);
3848     mchunkptr newp = 0;
3849     void* extra = 0;
3850 
3851     /* Try to either shrink or extend into top. Else malloc-copy-free */
3852 
3853     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3854                 ok_next(oldp, next) && ok_pinuse(next))) {
3855       size_t nb = request2size(bytes);
3856       if (is_mmapped(oldp))
3857         newp = mmap_resize(m, oldp, nb);
3858       else if (oldsize >= nb) { /* already big enough */
3859         size_t rsize = oldsize - nb;
3860         newp = oldp;
3861         if (rsize >= MIN_CHUNK_SIZE) {
3862           mchunkptr remainder = chunk_plus_offset(newp, nb);
3863           set_inuse(m, newp, nb);
3864           set_inuse(m, remainder, rsize);
3865           extra = chunk2mem(remainder);
3866         }
3867       }
3868       else if (next == m->top && oldsize + m->topsize > nb) {
3869         /* Expand into top */
3870         size_t newsize = oldsize + m->topsize;
3871         size_t newtopsize = newsize - nb;
3872         mchunkptr newtop = chunk_plus_offset(oldp, nb);
3873         set_inuse(m, oldp, nb);
        newtop->head = newtopsize | PINUSE_BIT;
3875         m->top = newtop;
3876         m->topsize = newtopsize;
3877         newp = oldp;
3878       }
3879     }
3880     else {
3881       USAGE_ERROR_ACTION(m, oldmem);
3882       POSTACTION(m);
3883       return 0;
3884     }
3885 
3886     POSTACTION(m);
3887 
3888     if (newp != 0) {
3889       if (extra != 0) {
3890         internal_free(m, extra);
3891       }
3892       check_inuse_chunk(m, newp);
3893       return chunk2mem(newp);
3894     }
3895     else {
3896       void* newmem = internal_malloc(m, bytes);
3897       if (newmem != 0) {
3898         size_t oc = oldsize - overhead_for(oldp);
3899         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3900         internal_free(m, oldmem);
3901       }
3902       return newmem;
3903     }
3904   }
3905   return 0;
3906 }
3907 
3908 /* --------------------------- memalign support -------------------------- */
3909 
3910 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3911   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3912     return internal_malloc(m, bytes);
3913   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3914     alignment = MIN_CHUNK_SIZE;
3915   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3916     size_t a = MALLOC_ALIGNMENT << 1;
3917     while (a < alignment) a <<= 1;
3918     alignment = a;
3919   }
3920 
3921   if (bytes >= MAX_REQUEST - alignment) {
3922     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3923       MALLOC_FAILURE_ACTION;
3924     }
3925   }
3926   else {
3927     size_t nb = request2size(bytes);
3928     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3929     char* mem = (char*)internal_malloc(m, req);
3930     if (mem != 0) {
3931       void* leader = 0;
3932       void* trailer = 0;
3933       mchunkptr p = mem2chunk(mem);
3934 
3935       if (PREACTION(m)) return 0;
3936       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3937         /*
3938           Find an aligned spot inside chunk.  Since we need to give
3939           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3940           the first calculation places us at a spot with less than
3941           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3942           We've allocated enough total room so that this is always
3943           possible.
3944         */
3945         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3946                                                        alignment -
3947                                                        SIZE_T_ONE)) &
3948                                              -alignment));
3949         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3950           br : br+alignment;
3951         mchunkptr newp = (mchunkptr)pos;
3952         size_t leadsize = pos - (char*)(p);
3953         size_t newsize = chunksize(p) - leadsize;
3954 
3955         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3956           newp->prev_foot = p->prev_foot + leadsize;
3957           newp->head = (newsize|CINUSE_BIT);
3958         }
3959         else { /* Otherwise, give back leader, use the rest */
3960           set_inuse(m, newp, newsize);
3961           set_inuse(m, p, leadsize);
3962           leader = chunk2mem(p);
3963         }
3964         p = newp;
3965       }
3966 
3967       /* Give back spare room at the end */
3968       if (!is_mmapped(p)) {
3969         size_t size = chunksize(p);
3970         if (size > nb + MIN_CHUNK_SIZE) {
3971           size_t remainder_size = size - nb;
3972           mchunkptr remainder = chunk_plus_offset(p, nb);
3973           set_inuse(m, p, nb);
3974           set_inuse(m, remainder, remainder_size);
3975           trailer = chunk2mem(remainder);
3976         }
3977       }
3978 
3979       assert (chunksize(p) >= nb);
3980       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3981       check_inuse_chunk(m, p);
3982       POSTACTION(m);
3983       if (leader != 0) {
3984         internal_free(m, leader);
3985       }
3986       if (trailer != 0) {
3987         internal_free(m, trailer);
3988       }
3989       return chunk2mem(p);
3990     }
3991   }
3992   return 0;
3993 }
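
/*
  Usage sketch: the public memalign (a.k.a. dlmemalign) entry point
  relays here, e.g.

    void* p = memalign(4096, 1000);    page-aligned 1000-byte block

  Alignments <= MALLOC_ALIGNMENT relay straight to malloc, and
  non-power-of-two alignments are rounded up to the next power of
  two, as handled at the top of this function.
*/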
3994 
3995 /* ------------------------ comalloc/coalloc support --------------------- */
3996 
3997 static void** ialloc(mstate m,
3998                      size_t n_elements,
3999                      size_t* sizes,
4000                      int opts,
4001                      void* chunks[]) {
4002   /*
4003     This provides common support for independent_X routines, handling
4004     all of the combinations that can result.
4005 
4006     The opts arg has:
4007     bit 0 set if all elements are same size (using sizes[0])
4008     bit 1 set if elements should be zeroed
4009   */
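
  /*
    For example, the public wrappers later in this file invoke this
    as: independent_calloc passes opts == 3 (same size, zeroed) with
    a single element size, while independent_comalloc passes opts == 0
    with an explicit sizes array.
  */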
4010 
4011   size_t    element_size;   /* chunksize of each element, if all same */
4012   size_t    contents_size;  /* total size of elements */
4013   size_t    array_size;     /* request size of pointer array */
4014   void*     mem;            /* malloced aggregate space */
4015   mchunkptr p;              /* corresponding chunk */
4016   size_t    remainder_size; /* remaining bytes while splitting */
4017   void**    marray;         /* either "chunks" or malloced ptr array */
4018   mchunkptr array_chunk;    /* chunk for malloced ptr array */
4019   flag_t    was_enabled;    /* to disable mmap */
4020   size_t    size;
4021   size_t    i;
4022 
4023   /* compute array length, if needed */
4024   if (chunks != 0) {
4025     if (n_elements == 0)
4026       return chunks; /* nothing to do */
4027     marray = chunks;
4028     array_size = 0;
4029   }
4030   else {
4031     /* if empty req, must still return chunk representing empty array */
4032     if (n_elements == 0)
4033       return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}


/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
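
     As a rough worked example (an illustration only, assuming 4-byte
     size_t, no FOOTERS, and the default 8-byte alignment, so that
     MIN_CHUNK_SIZE is 16): a request for 20 bytes is padded to
     nb = (20 + 4 + 7) & ~7 == 24, giving smallbin index 24>>3 == 3.
     Step 1 checks bins 3 and 4; taking a 32-byte chunk from bin 4
     would leave only 8 spare bytes, too few to form a chunk, so the
     fit counts as remainderless.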
  */

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
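    /* Overflow check: if either operand has bits above the low 16,
       the product might not fit in size_t, so verify via division.
       (Two factors each below 2^16 cannot overflow a 32-bit or wider
       size_t, so the common small case skips the division.) */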
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
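  /* Note on the conditional structure: when REALLOC_ZERO_BYTES_FREES is
     undefined, the else below binds to the oldmem == 0 test above; when
     it is defined, it binds to the bytes == 0 test. Either way each
     configuration compiles to the intended behavior. */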
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}
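
/*
  A minimal usage sketch (illustrative only; error handling trimmed):
  dlindependent_calloc hands back n cleared same-sized elements, and
  dlindependent_comalloc hands back n elements of the given sizes, all
  carved from one aggregate chunk. Each element, and the returned
  pointer array itself when chunks is null, is individually freeable.

      size_t sizes[3] = { 24, 40, 8 };
      void** elems = dlindependent_comalloc(3, sizes, 0);
      if (elems != 0) {
        // ... use elems[0], elems[1], elems[2] ...
        dlfree(elems[0]); dlfree(elems[1]); dlfree(elems[2]);
        dlfree(elems);   // the pointer array was itself malloced
      }
*/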

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
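  /* Round the request up to a whole number of pages; e.g. with a
     4096-byte page, a 5000-byte request is padded to 8192. */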
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats(void) {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

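  /* Guard against size_t wraparound: the capacity plus the mstate,
     top-foot, and page-size overhead must not overflow, so reject
     capacities at or above the unsigned complement of that overhead. */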
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      set_segment_flags(&m->seg, IS_MMAPPED_BIT);
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    set_segment_flags(&m->seg, EXTERN_BIT);
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = get_segment_flags(sp);
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/
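
/*
  A brief usage sketch (illustrative only): an mspace is an opaque
  handle, and all allocation and deallocation against it must go
  through the mspace_* entry points for that handle.

      mspace msp = create_mspace(0, 0);  // default capacity, no locking
      if (msp != 0) {
        void* q = mspace_malloc(msp, 128);
        if (q != 0)
          mspace_free(msp, q);
        destroy_mspace(msp);             // releases everything at once
      }
*/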


void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.
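
  A minimal sketch of a conforming MORECORE over a fixed static arena
  (illustrative only; the arena size and names here are made up, and a
  real version would need locking if USE_LOCKS is set):

      #define ARENA_SIZE (1024 * 1024)
      static char arena[ARENA_SIZE];
      static size_t arena_used = 0;

      void* static_morecore(int size) {
        char* brk = arena + arena_used;
        if (size == 0)
          return brk;                  // current break: one past last byte
        if (size < 0 || (size_t)size > ARENA_SIZE - arena_used)
          return (void *) MFAIL;       // cannot shrink or grow further
        arena_used += (size_t)size;
        return brk;                    // old break; extension is contiguous
      }

      #define MORECORE static_morecore
      #define MORECORE_CANNOT_TRIM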

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and mmap thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/