Changeset 75f460 in git for omalloc


Timestamp:
Dec 16, 2014, 3:43:21 PM
Author:
Hans Schoenemann <hannes@…>
Branches:
spielwiese (fe61d9c35bf7c61f2b6cbf1b56e25e2f08d536cc)
Children:
fce947c9e6c3e8c6d5a622c7f6b0d724580993cc
Parents:
a2e4470c6e9a666de8ab7b706370c15e13092f76
Message:
format
Location:
omalloc
Files:
10 edited

Legend:

  @@ line N @@   start of a diff hunk; line numbers are identical in both
                 revisions, since this changeset only strips trailing whitespace
  -              line as it appeared in ra2e447 (removed)
  +              line as it appears in r75f460 (added)
  (no marker)    unchanged context line
  • omalloc/Makefile.am

    ra2e447 r75f460  
    @@ line 27 @@
     EXTRA_DIST = omalloc.c omtTestAlloc.c omtTest.h omMmap.c

    -AM_CPPFLAGS =-I${top_srcdir}/.. -I${top_builddir}/.. 
    +AM_CPPFLAGS =-I${top_srcdir}/.. -I${top_builddir}/..

     libomalloc_la_SOURCES=$(SOURCES) $(noinst_HEADERS)
  • omalloc/Misc/dlmalloc/Makefile

    ra2e447 r75f460  
    @@ line 4 @@
     RM = rm -f
     DISTFILES = COPYRIGHT Makefile malloc-trace.c malloc-trace.h \
    -            print-trace.c trace-test.c 
    +            print-trace.c trace-test.c

     .c.o:
  • omalloc/Misc/dlmalloc/malloc.c

    ra2e447 r75f460  
    @@ line 1 @@
     /* ---------- To make a malloc.h, start cutting here ------------ */

    -/* 
    -  A version of malloc/free/realloc written by Doug Lea and released to the 
    +/*
    +  A version of malloc/free/realloc written by Doug Lea and released to the
       public domain.  Send questions/comments/complaints/performance data
       to dl@cs.oswego.edu

     * VERSION 2.6.5  Wed Jun 17 15:55:16 1998  Doug Lea  (dl at gee)
    -  
    +
        Note: There may be an updated version of this malloc obtainable at
                ftp://g.oswego.edu/pub/misc/malloc.c

    @@ line 22 @@
       most tunable malloc ever written. However it is among the fastest
       while also being among the most space-conserving, portable and tunable.
    -  Consistent balance across these factors results in a good general-purpose 
    -  allocator. For a high-level description, see 
    +  Consistent balance across these factors results in a good general-purpose
    +  allocator. For a high-level description, see
         http://g.oswego.edu/dl/html/malloc.html

    @@ line 59 @@
         Equivalent to free(p).
      malloc_trim(size_t pad);
    -     Release all but pad bytes of freed top-most memory back 
    +     Release all but pad bytes of freed top-most memory back
         to the system. Return 1 if successful, else 0.
      malloc_usable_size(Void_t* p);

    @@ line 85 @@

      Assumed size_t  representation:       4 or 8 bytes
    -       Note that size_t is allowed to be 4 bytes even if pointers are 8.       
    +       Note that size_t is allowed to be 4 bytes even if pointers are 8.

      Minimum overhead per allocated chunk: 4 or 8 bytes
           Each malloced chunk has a hidden overhead of 4 bytes holding size
    -       and status information. 
    +       and status information.

      Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                              8-byte ptrs:  24/32 bytes (including, 4/8 overhead)
    -                                     
    +
           When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
    -       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are 
    +       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
           needed; 4 (8) for a trailing size field
           and 8 (16) bytes for free list pointers. Thus, the minimum

    @@ line 110 @@
           that `size_t' may be defined on a system as either a signed or
           an unsigned type. To be conservative, values that would appear
    -       as negative numbers are avoided. 
    +       as negative numbers are avoided.
           Requests for sizes with a negative sign bit will return a
           minimum-sized chunk.

    @@ line 118 @@
           Alignnment demands, plus the minimum allocatable size restriction
           make the normal worst-case wastage 15 bytes (i.e., up to 15
    -       more bytes will be allocated than were requested in malloc), with 
    +       more bytes will be allocated than were requested in malloc), with
           two exceptions:
             1. Because requests for zero bytes allocate non-zero space,

    @@ line 154 @@
         a C compiler sufficiently close to ANSI to get away with it.
      DEBUG                    (default: NOT defined)
    -     Define to enable debugging. Adds fairly extensive assertion-based 
    +     Define to enable debugging. Adds fairly extensive assertion-based
         checking to help track down memory errors, but noticeably slows down
         execution.
    -  REALLOC_ZERO_BYTES_FREES (default: NOT defined) 
    +  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
         Define this if you think that realloc(p, 0) should be equivalent
         to free(p). Otherwise, since malloc returns a unique pointer for
         malloc(0), so does realloc(p, 0).
      HAVE_MEMCPY               (default: defined)
    -     Define if you are not otherwise using ANSI STD C, but still 
    +     Define if you are not otherwise using ANSI STD C, but still
         have memcpy and memset in your C library and want to use them.
         Otherwise, simple internal versions are supplied.
      USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
         Define as 1 if you want the C library versions of memset and
    -     memcpy called in realloc and calloc (otherwise macro versions are used). 
    +     memcpy called in realloc and calloc (otherwise macro versions are used).
         At least on some platforms, the simple macro versions usually
         outperform libc versions.
      HAVE_MMAP                 (default: defined as 1)
         Define to non-zero to optionally make malloc() use mmap() to
    -     allocate very large blocks. 
    +     allocate very large blocks.
      HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
         Define to non-zero to optionally make realloc() use mremap() to
    -     reallocate very large blocks. 
    +     reallocate very large blocks.
      malloc_getpagesize        (default: derived from system #includes)
         Either a constant or routine call returning the system page size.
    -  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined) 
    +  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
         Optionally define if you are on a system with a /usr/include/malloc.h
         that declares struct mallinfo. It is not at all necessary to
         define this even if you do, but will ensure consistency.
      INTERNAL_SIZE_T           (default: size_t)
    -     Define to a 32-bit type (probably `unsigned int') if you are on a 
    -     64-bit machine, yet do not want or need to allow malloc requests of 
    +     Define to a 32-bit type (probably `unsigned int') if you are on a
    +     64-bit machine, yet do not want or need to allow malloc requests of
         greater than 2^31 to be handled. This saves space, especially for
         very small chunks.

    @@ line 205 @@
         holds for sbrk).
      DEFAULT_TRIM_THRESHOLD
    -  DEFAULT_TOP_PAD       
    +  DEFAULT_TOP_PAD
      DEFAULT_MMAP_THRESHOLD
    -  DEFAULT_MMAP_MAX     
    +  DEFAULT_MMAP_MAX
         Default values of tunable parameters (described in detail below)
         controlling interaction with host system routines (sbrk, mmap, etc).

    @@ line 279 @@
        cannot be checked very much automatically.)

    -    Setting DEBUG may also be helpful if you are trying to modify 
    -    this code. The assertions in the check routines spell out in more 
    +    Setting DEBUG may also be helpful if you are trying to modify
    +    this code. The assertions in the check routines spell out in more
        detail the assumptions and invariants underlying the algorithms.

     */

    -#if DEBUG 
    +#if DEBUG
     #include <assert.h>
     #else

    @@ line 309 @@
      realloc with zero bytes should be the same as a call to free.
      Some people think it should. Otherwise, since this malloc
    -  returns a unique pointer for malloc(0), so does realloc(p, 0). 
    +  returns a unique pointer for malloc(0), so does realloc(p, 0).
     */

    @@ line 316 @@


    -/* 
    +/*
      WIN32 causes an emulation of sbrk to be compiled in
      mmap-based options are not currently supported in WIN32.

    @@ line 337 @@
      have memset and memcpy called. People report that the macro
      versions are often enough faster than libc versions on many
    -  systems that it is better to use them. 
    -
    -*/
    -
    -#define HAVE_MEMCPY 
    +  systems that it is better to use them.
    +
    +*/
    +
    +#define HAVE_MEMCPY

     #ifndef USE_MEMCPY

    @@ line 351 @@
     #endif

    -#if (__STD_C || defined(HAVE_MEMCPY)) 
    +#if (__STD_C || defined(HAVE_MEMCPY))

     #if __STD_C

    @@ line 484 @@
      Access to system page size. To the extent possible, this malloc
      manages memory from the system in page-size units.
    -  
    -  The following mechanics for getpagesize were adapted from 
    -  bsd/gnu getpagesize.h 
    +
    +  The following mechanics for getpagesize were adapted from
    +  bsd/gnu getpagesize.h
     */

    @@ line 516 @@
     #            define malloc_getpagesize (NBPG * CLSIZE)
     #          endif
    -#        else 
    +#        else
     #          ifdef NBPC
     #            define malloc_getpagesize NBPC

    @@ line 526 @@
     #            endif
     #          endif
    -#        endif 
    +#        endif
     #      endif
    -#    endif 
    +#    endif
     #  endif
     #endif

    @@ line 578 @@
      int fordblks; /* total non-inuse space */
      int keepcost; /* top-most, releasable (via malloc_trim) space */
    -};     
    +};

     /* SVID2/XPG mallopt options */

    @@ line 603 @@

     /*
    -    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory 
    +    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
          to keep before releasing via malloc_trim in free().

    @@ line 611 @@
          afterward allocate more large chunks) the value should be high
          enough so that your overall system performance would improve by
    -      releasing. 
    +      releasing.

          The trim threshold and the mmap control parameters (see below)

    @@ line 621 @@
          the XF86 X server on Linux, using a trim threshold of 128K and a
          mmap threshold of 192K led to near-minimal long term resource
    -      consumption. 
    +      consumption.

          If you are using this malloc in a long-lived program, it should

    @@ line 657 @@

     /*
    -    M_TOP_PAD is the amount of extra `padding' space to allocate or 
    +    M_TOP_PAD is the amount of extra `padding' space to allocate or
          retain whenever sbrk is called. It is used in two ways internally:

    @@ line 667 @@
            it is used as the `pad' argument.

    -      In both cases, the actual amount of padding is rounded 
    +      In both cases, the actual amount of padding is rounded
          so that the end of the arena is always a system page boundary.

    @@ line 674 @@
          that nearly every malloc request during program start-up (or
          after trimming) will invoke sbrk, which needlessly wastes
    -      time. 
    +      time.

          Automatic rounding-up to page-size units is normally sufficient
          to avoid measurable overhead, so the default is 0.  However, in
          systems where sbrk is relatively slow, it can pay to increase
    -      this value, at the expense of carrying around more memory than 
    +      this value, at the expense of carrying around more memory than
          the program needs.

    @@ line 691 @@
     /*

    -    M_MMAP_THRESHOLD is the request size threshold for using mmap() 
    -      to service a request. Requests of at least this size that cannot 
    -      be allocated using already-existing space will be serviced via mmap. 
    +    M_MMAP_THRESHOLD is the request size threshold for using mmap()
    +      to service a request. Requests of at least this size that cannot
    +      be allocated using already-existing space will be serviced via mmap.
          (If enough normal freed space already exists it is used instead.)

    @@ line 712 @@

             1. The space cannot be reclaimed, consolidated, and then
    -            used to service later requests, as happens with normal chunks. 
    +            used to service later requests, as happens with normal chunks.
             2. It can lead to more wastage because of mmap page alignment
                requirements

    @@ line 722 @@

          All together, these considerations should lead you to use mmap
    -      only for relatively large requests. 
    +      only for relatively large requests.


    @@ line 738 @@

     /*
    -    M_MMAP_MAX is the maximum number of requests to simultaneously 
    +    M_MMAP_MAX is the maximum number of requests to simultaneously
          service using mmap. This parameter exists because:

    @@ line 759 @@


    -/* 
    +/*

      Special defines for linux libc

    @@ line 791 @@
     #define MORECORE (*__morecore)
     #define MORECORE_FAILURE 0
    -#define MORECORE_CLEARS 1 
    +#define MORECORE_CLEARS 1

     #else /* INTERNAL_LINUX_C_LIB */

    @@ line 894 @@


    -/* 
    +/*
      Emulation of sbrk for WIN32
      All code within the ifdef WIN32 is untested by me.

    @@ line 905 @@
     ~(malloc_getpagesize-1))

    -/* resrve 64MB to insure large contiguous space */ 
    +/* resrve 64MB to insure large contiguous space */
     #define RESERVED_SIZE (1024*1024*64)
     #define NEXT_SIZE (2048*1024)

    @@ line 913 @@
     typedef struct GmListElement GmListElement;

    -struct GmListElement 
    +struct GmListElement
     {
             GmListElement* next;

    @@ line 945 @@
             if (gAddressBase && (gNextAddress - gAddressBase))
             {
    -                rval = VirtualFree ((void*)gAddressBase, 
    -                                                        gNextAddress - gAddressBase, 
    +                rval = VirtualFree ((void*)gAddressBase,
    +                                                        gNextAddress - gAddressBase,
                                                             MEM_DECOMMIT);
             ASSERT (rval);

    @@ line 959 @@
             }
     }
    -                
    +
     static
     void* findRegion (void* start_address, unsigned long size)

    @@ line 972 @@
                             return start_address;
                     else
    -                        start_address = (char*)info.BaseAddress + info.RegionSize; 
    +                        start_address = (char*)info.BaseAddress + info.RegionSize;
             }
             return NULL;
    -        
    +
     }

    @@ line 987 @@
                     {
                             gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
    -                        gNextAddress = gAddressBase = 
    -                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize, 
    +                        gNextAddress = gAddressBase =
    +                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
                                                                                             MEM_RESERVE, PAGE_NOACCESS);
                     } else if (AlignPage (gNextAddress + size) > (gAddressBase +

    @@ line 995 @@
                             long new_size = max (NEXT_SIZE, AlignPage (size));
                             void* new_address = (void*)(gAddressBase+gAllocatedSize);
    -                        do 
    +                        do
                             {
                                     new_address = findRegion (new_address, new_size);
    -                                
    +
                                     if (new_address == 0)
                                             return (void*)-1;

    @@ line 1006 @@
                                                                                                     MEM_RESERVE, PAGE_NOACCESS);
                                     // repeat in case of race condition
    -                                // The region that we found has been snagged 
    +                                // The region that we found has been snagged
                                     // by another thread

    @@ line 1022 @@
                             void* res;
                             res = VirtualAlloc ((void*)AlignPage (gNextAddress),
    -                                                                (size + gNextAddress - 
    -                                                                 AlignPage (gNextAddress)), 
    +                                                                (size + gNextAddress -
    +                                                                 AlignPage (gNextAddress)),
                                                                     MEM_COMMIT, PAGE_READWRITE);
                             if (res == 0)

    @@ line 1038 @@
                     if (alignedGoal >= gAddressBase)
                     {
    -                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal, 
    +                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
                                                      MEM_DECOMMIT);
                             gNextAddress = gNextAddress + size;
                             return (void*)gNextAddress;
                     }
    -                else 
    +                else
                     {
                             VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,

    @@ line 1092 @@
        in use.

    -    An allocated chunk looks like this: 
    +    An allocated chunk looks like this:


    @@ line 1147 @@
        deal with alignments etc).

    -    The two exceptions to all this are 
    -
    -     1. The special chunk `top', which doesn't bother using the 
    +    The two exceptions to all this are
    +
    +     1. The special chunk `top', which doesn't bother using the
            trailing size field since there is no
            next contiguous chunk that would have to index off it. (After

    @@ line 1182 @@
           order, which tends to give each chunk an equal opportunity to be
           consolidated with adjacent freed chunks, resulting in larger free
    -       chunks and less fragmentation. 
    +       chunks and less fragmentation.

        * `top': The top-most available chunk (i.e., the one bordering the

    @@ line 1193 @@
           most recently split (non-top) chunk. This bin is checked
           before other non-fitting chunks, so as to provide better
    -       locality for runs of sequentially allocated chunks. 
    +       locality for runs of sequentially allocated chunks.

        *  Implicitly, through the host system's memory mapping tables.
    -       If supported, requests greater than a threshold are usually 
    +       If supported, requests greater than a threshold are usually
           serviced via calls to mmap, and then later released via munmap.

    @@ line 1234 @@


    -/* 
    -  Physical chunk operations 
    +/*
    +  Physical chunk operations
     */

    @@ line 1241 @@
     /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

    -#define PREV_INUSE 0x1 
    +#define PREV_INUSE 0x1

     /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

    @@ line 1270 @@


    -/* 
    -  Dealing with use bits 
    +/*
    +  Dealing with use bits
     */

    @@ line 1310 @@


    -/* 
    -  Dealing with size fields 
    +/*
    +  Dealing with size fields
     */

    @@ line 1391 @@
     /*
        Because top initially points to its own bin with initial
    -   zero size, thus forcing extension on the first malloc request, 
    -   we avoid having any special code in malloc to check whether 
    +   zero size, thus forcing extension on the first malloc request,
    +   we avoid having any special code in malloc to check whether
        it even exists yet. But we still need to in malloc_extend_top.
     */

    @@ line 1430 @@
     #define last(b)  ((b)->bk)

    -/* 
    +/*
      Indexing into bins
     */

    @@ line 1441 @@
      ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
      ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
    -                                          126)                     
    -/* 
    +                                          126)
    +/*
      bins for chunks < 512 are all spaced 8 bytes apart, and hold
      identically sized chunks. This is exploited in malloc.

    @@ line 1453 @@
     #define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)

    -/* 
    +/*
        Requests are `small' if both the corresponding and the next bin are small
     */

    @@ line 1500 @@

     /* The maximum memory obtained from system via sbrk */
    -static unsigned long max_sbrked_mem = 0; 
    +static unsigned long max_sbrked_mem = 0;

     /* The maximum via either sbrk or mmap */
    -static unsigned long max_total_mem = 0; 
    +static unsigned long max_total_mem = 0;

     /* internal working copy of mallinfo */

    @@ line 1521 @@


    -/* 
    -  Debugging support 
    +/*
    +  Debugging support
     */

    @@ line 1537 @@

     #if __STD_C
    -static void do_check_chunk(mchunkptr p) 
    +static void do_check_chunk(mchunkptr p)
     #else
     static void do_check_chunk(p) mchunkptr p;
     #endif
    -{ 
    +{
      INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

    @@ line 1549 @@
      /* Check for legal address ... */
      assert((char*)p >= sbrk_base);
    -  if (p != top) 
    +  if (p != top)
        assert((char*)p + sz <= (char*)top);
      else

    @@ line 1558 @@

     #if __STD_C
    -static void do_check_free_chunk(mchunkptr p) 
    +static void do_check_free_chunk(mchunkptr p)
     #else
     static void do_check_free_chunk(p) mchunkptr p;
     #endif
    -{ 
    +{
      INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
      mchunkptr next = chunk_at_offset(p, sz);

    @@ line 1581 @@
        assert(prev_inuse(p));
        assert (next == top || inuse(next));
    -   
    +
        /* ... and has minimally sane links */
        assert(p->fd->bk == p);

    @@ line 1587 @@
      }
      else /* markers are always of size SIZE_SZ */
    -    assert(sz == SIZE_SZ); 
    +    assert(sz == SIZE_SZ);
     }

     #if __STD_C
    -static void do_check_inuse_chunk(mchunkptr p) 
    +static void do_check_inuse_chunk(mchunkptr p)
     #else
     static void do_check_inuse_chunk(p) mchunkptr p;
     #endif
    -{ 
    +{
      mchunkptr next = next_chunk(p);
      do_check_chunk(p);

    @@ line 1606 @@
        if an inuse chunk borders them and debug is on, it's worth doing them.
      */
    -  if (!prev_inuse(p)) 
    +  if (!prev_inuse(p))
      {
        mchunkptr prv = prev_chunk(p);

    @@ line 1623 @@

     #if __STD_C
    -static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) 
    +static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
     #else
     static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;

    @@ line 1654 @@
     #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
     #else
    -#define check_free_chunk(P) 
    +#define check_free_chunk(P)
     #define check_inuse_chunk(P)
     #define check_chunk(P)

    @@ line 1663 @@


    -/* 
    +/*
      Macro-based internal utilities
     */


    -/* 
    +/*
      Linking chunks in bin lists.
      Call these only with variables, not arbitrary expressions, as arguments.
     */

    -/* 
    +/*
      Place chunk p of size s in its bin, in size order,
      putting it ahead of others of same size.

    @@ line 1766 @@
                          MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
     #else /* !MAP_ANONYMOUS */
    -  if (fd < 0) 
    +  if (fd < 0)
      {
        fd = open("/dev/zero", O_RDWR);

    @@ line 1778 @@
      n_mmaps++;
      if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
    - 
    +
      /* We demand that eight bytes into a page must be 8-byte aligned. */
      assert(aligned_OK(chunk2mem(p)));

    @@ line 1788 @@
      p->prev_size = 0;
      set_head(p, size|IS_MMAPPED);
    - 
    +
      mmapped_mem += size;
    -  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
    +  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
        max_mmapped_mem = mmapped_mem;
    -  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
    +  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
        max_total_mem = mmapped_mem + sbrked_mem;
      return p;

    @@ line 1854 @@
      mmapped_mem -= size + offset;
      mmapped_mem += new_size;
    -  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
    +  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
        max_mmapped_mem = mmapped_mem;
      if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)

    @@ line 1869 @@


    -/* 
    +/*
      Extend the top-most chunk by obtaining memory from system.
      Main interface to sbrk (but see also malloc_trim).

    @@ line 1891 @@

      /* Pad request with top_pad plus minimal overhead */
    - 
    +
      INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
      unsigned long pagesz    = malloc_getpagesize;

    @@ line 1905 @@

      /* Fail if sbrk failed or if a foreign sbrk call killed our space */
    -  if (brk == (char*)(MORECORE_FAILURE) || 
    +  if (brk == (char*)(MORECORE_FAILURE) ||
          (brk < old_end && old_top != initial_top))
    -    return;     
    +    return;

      sbrked_mem += sbrk_size;

    @@ line 1925 @@
        /* Guarantee alignment of first new chunk made from this space */
        front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
    -    if (front_misalign > 0) 
    +    if (front_misalign > 0)
        {
          correction = (MALLOC_ALIGNMENT) - front_misalign;

    @@ line 1938 @@
        /* Allocate correction */
        new_brk = (char*)(MORECORE (correction));
    -    if (new_brk == (char*)(MORECORE_FAILURE)) return; 
    +    if (new_brk == (char*)(MORECORE_FAILURE)) return;

        sbrked_mem += correction;

    @@ line 1953 @@

          /* If not enough space to do this, then user did something very wrong */
    -      if (old_top_size < MINSIZE) 
    +      if (old_top_size < MINSIZE)
          {
            set_head(top, PREV_INUSE); /* will force null return from malloc */

    @@ line 1967 @@
            SIZE_SZ|PREV_INUSE;
          /* If possible, release the rest. */
    -      if (old_top_size >= MINSIZE) 
    +      if (old_top_size >= MINSIZE)
            fREe(chunk2mem(old_top));
        }
      }

    -  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) 
    +  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
        max_sbrked_mem = sbrked_mem;
    -  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
    +  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
        max_total_mem = mmapped_mem + sbrked_mem;

    @@ line 2072 @@
      if (is_small_request(nb))  /* Faster version for small requests */
      {
    -    idx = smallbin_index(nb); 
    +    idx = smallbin_index(nb);

        /* No traversal or size check necessary for small bins.  */

    @@ line 2106 @@
          victim_size = chunksize(victim);
          remainder_size = victim_size - nb;
    -     
    +
          if (remainder_size >= (long)MINSIZE) /* too big */
          {
            --idx; /* adjust to rescan below after checking last remainder */
    -        break;   
    +        break;
          }

    @@ line 2122 @@
        }

    -    ++idx; 
    +    ++idx;

      }

    @@ line 2158 @@
      }

    -  /* 
    -     If there are any possibly nonempty big-enough blocks, 
    +  /*
    +     If there are any possibly nonempty big-enough blocks,
         search for best fitting chunk by scanning bins in blockwidth units.
      */

    -  if ( (block = idx2binblock(idx)) <= binblocks) 
    +  if ( (block = idx2binblock(idx)) <= binblocks)
      {

        /* Get to the first marked block */

    -    if ( (block & binblocks) == 0) 
    +    if ( (block & binblocks) == 0)
        {
          /* force to an even block boundary */

    @@ line 2179 @@
          }
        }
    -     
    +
        /* For each possibly nonempty block ... */
    -    for (;;) 
    +    for (;;)
        {
          startidx = idx;          /* (track incomplete blocks) */

    @@ line 2237 @@
          /* Get to the next possibly nonempty block */

    -      if ( (block <<= 1) <= binblocks && (block != 0) ) 
    +      if ( (block <<= 1) <= binblocks && (block != 0) )
          {
            while ((block & binblocks) == 0)

    @@ line 2289 @@
        cases:

    -       1. free(0) has no effect. 
    +       1. free(0) has no effect.

           2. If the chunk was allocated via mmap, it is release via munmap().

    @@ line 2335 @@
      }
     #endif
    - 
    +
      check_inuse_chunk(p);
    - 
    +
      sz = hd & ~PREV_INUSE;
      next = chunk_at_offset(p, sz);
      nextsz = chunksize(next);
    - 
    +
      if (next == top)                            /* merge with top */
      {

    @@ line 2356 @@
        set_head(p, sz | PREV_INUSE);
        top = p;
    -    if ((unsigned long)(sz) >= (unsigned long)trim_threshold) 
    -      malloc_trim(top_pad); 
    +    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
    +      malloc_trim(top_pad);
        return;
      }

    @@ line 2370 @@
        p = chunk_at_offset(p, -prevsz);
        sz += prevsz;
    -   
    +
        if (p->fd == last_remainder)             /* keep as last_remainder */
          islr = 1;

    @@ line 2376 @@
          unlink(p, bck, fwd);
      }
    - 
    +
      if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
      {
        sz += nextsz;
    -   
    +
        if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
        {
          islr = 1;
    -      link_last_remainder(p);   
    +      link_last_remainder(p);
        }
        else

    @@ line 2394 @@
      set_foot(p, sz);
      if (!islr)
    -    frontlink(p, sz, idx, bck, fwd); 
    +    frontlink(p, sz, idx, bck, fwd);
     }

    @@ line 2431 @@
        to be used as an argument to realloc is no longer supported.
        I don't know of any programs still relying on this feature,
    -    and allowing it would also allow too many other incorrect 
    +    and allowing it would also allow too many other incorrect
        usages of realloc to be sensible.

    @@ line 2480 @@

     #if HAVE_MMAP
    -  if (chunk_is_mmapped(oldp)) 
    +  if (chunk_is_mmapped(oldp))
      {
     #if HAVE_MREMAP

    @@ line 2499 @@
      check_inuse_chunk(oldp);

    -  if ((long)(oldsize) < (long)(nb)) 
    +  if ((long)(oldsize) < (long)(nb))
      {

    @@ line 2505 @@

        next = chunk_at_offset(oldp, oldsize);
    -    if (next == top || !inuse(next)) 
    +    if (next == top || !inuse(next))
        {
          nextsize = chunksize(next);

    @@ line 2524 @@
          /* Forward into next chunk */
          else if (((long)(nextsize + newsize) >= (long)(nb)))
    -      { 
    +      {
            unlink(next, bck, fwd);
            newsize  += nextsize;

    @@ line 2576 @@
            }
          }
    -      
    +
          /* backward only */
    -      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb) 
    +      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
          {
            unlink(prev, bck, fwd);

    @@ line 2594 @@

        if (newmem == 0)  /* propagate failure */
    -      return 0; 
    +      return 0;

        /* Avoid copy if newp is next chunk after oldp. */
        /* (This can only happen when new chunk is sbrk'ed.) */

    -    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp)) 
    +    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
        {
          newsize += chunksize(newp);

    @@ line 2644 @@
        memalign requests more than enough space from malloc, finds a spot
        within that chunk that meets the alignment request, and then
    -    possibly frees the leading and trailing space. 
    +    possibly frees the leading and trailing space.

        The alignment argument must be a power of two. This property is not

    @@ line 2678 @@

      /* Otherwise, ensure that it is at least a minimum chunk size */
    - 
    +
      if (alignment <  MINSIZE) alignment = MINSIZE;

    @@ line 2699 @@
      else /* misaligned */
      {
    -    /* 
    +    /*
          Find an aligned spot inside chunk.
    -      Since we need to give back leading space in a chunk of at 
    +      Since we need to give back leading space in a chunk of at
          least MINSIZE, if the first calculation places us at
          a spot with less than MINSIZE leader, we can move to the

    @@ line 2716 @@

     #if HAVE_MMAP
    -    if(chunk_is_mmapped(p)) 
    +    if(chunk_is_mmapped(p))
        {
          newp->prev_size = p->prev_size + leadsize;

    @@ line 2771 @@
     }

    -/* 
    +/*
      pvalloc just invokes valloc for the nearest pagesize
      that will accommodate request

    @@ line 2811 @@
      Void_t* mem = mALLOc (sz);

    -  if (mem == 0) 
    +  if (mem == 0)
        return 0;
      else

    @@ line 2827 @@

     #if MORECORE_CLEARS
    -    if (p == oldtop && csz > oldtopsize) 
    +    if (p == oldtop && csz > oldtopsize)
        {
          /* clear only the bytes from non-freshly-sbrked memory */

    @@ line 2840 @@

     /*
    -  
    +
      cfree just calls free. It is needed/defined on some systems
      that pair it with calloc, presumably for odd historical reasons.

    @@ line 2912 @@
        {
          new_brk = (char*)(MORECORE (-extra));
    -     
    +
          if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
          {

    @@ line 2924 @@
            }
            check_chunk(top);
    -        return 0; 
    +        return 0;
          }

    @@ line 2981 @@
     /* Utility to update current_mallinfo for malloc_stats and mallinfo() */

    -static void malloc_update_mallinfo() 
    +static void malloc_update_mallinfo()
     {
      int i;

    @@ line 2996 @@
      {
        b = bin_at(i);
    -    for (p = last(b); p != b; p = p->bk) 
    +    for (p = last(b); p != b; p = p->bk)
        {
     #if DEBUG
          check_free_chunk(p);
    -      for (q = next_chunk(p); 
    -           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE; 
    +      for (q = next_chunk(p);
    +           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
               q = next_chunk(q))
            check_inuse_chunk(q);

    @@ line 3040 @@
     {
      malloc_update_mallinfo();
    -  fprintf(stderr, "max system bytes = %10u\n", 
    +  fprintf(stderr, "max system bytes = %10u\n",
              (unsigned int)(max_total_mem));
    -  fprintf(stderr, "system bytes     = %10u\n", 
    +  fprintf(stderr, "system bytes     = %10u\n",
              (unsigned int)(sbrked_mem + mmapped_mem));
    -  fprintf(stderr, "in use bytes     = %10u\n", 
    +  fprintf(stderr, "in use bytes     = %10u\n",
              (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
     #if HAVE_MMAP
    -  fprintf(stderr, "max mmap regions = %10u\n", 
    +  fprintf(stderr, "max mmap regions = %10u\n",
              (unsigned int)max_n_mmaps);
     #endif

    @@ line 3085 @@
     #endif
     {
    -  switch(param_number) 
    +  switch(param_number)
      {
        case M_TRIM_THRESHOLD:
    -      trim_threshold = value; return 1; 
    +      trim_threshold = value; return 1;
        case M_TOP_PAD:
    -      top_pad = value; return 1; 
    +      top_pad = value; return 1;
        case M_MMAP_THRESHOLD:
          mmap_threshold = value; return 1;

    @@ line 3120 @@
            foreign sbrks
          * Add linux mremap support code from HJ Liu
    -   
    +
        V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
          * Integrated most documentation with the code.
    -      * Add support for mmap, with help from 
    +      * Add support for mmap, with help from
            Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
          * Use last_remainder in more cases.

    @@ line 3130 @@
          * Eliminate block-local decls to simplify tracing and debugging.
          * Support another case of realloc via move into top
    -      * Fix error occuring when initial sbrk_base not word-aligned. 
    +      * Fix error occuring when initial sbrk_base not word-aligned.
          * Rely on page size for units instead of SBRK_UNIT to
            avoid surprises about sbrk alignment conventions.
          * Add mallinfo, mallopt. Thanks to Raymond Nijssen
    -        (raymond@es.ele.tue.nl) for the suggestion. 
    +        (raymond@es.ele.tue.nl) for the suggestion.
          * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
          * More precautions for cases where other routines call sbrk,

    @@ line 3158 @@

        V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
    -      * Added malloc_trim, with help from Wolfram Gloger 
    +      * Added malloc_trim, with help from Wolfram Gloger
            (wmglo@Dent.MED.Uni-Muenchen.DE).

    @@ line 3178 @@
          * Scan 2 returns chunks (not just 1)
          * Propagate failure in realloc if malloc returns 0
    -      * Add stuff to allow compilation on non-ANSI compilers 
    +      * Add stuff to allow compilation on non-ANSI compilers
              from kpv@research.att.com
    -     
    +
        V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
          * removed potential for odd address access in prev_chunk

    @@ line 3186 @@
          * misc cosmetics and a bit more internal documentation
          * anticosmetics: mangled names in macros to evade debugger strangeness
    -      * tested on sparc, hp-700, dec-mips, rs6000 
    +      * tested on sparc, hp-700, dec-mips, rs6000
              with gcc & native cc (hp, dec only) allowing
              Detlefs & Zorn comparison study (in SIGPLAN Notices.)

        Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
    -      * Based loosely on libg++-1.2X malloc. (It retains some of the overall 
    +      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
             structure of old version,  but most details differ.)
    31953195
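    The malloc.c hunks above repeatedly touch the comment blocks for the
    mallopt tunables (M_TRIM_THRESHOLD, M_TOP_PAD, M_MMAP_THRESHOLD,
    M_MMAP_MAX) and for malloc_trim/malloc_stats. As a reminder of what that
    documented interface looks like in use, here is a minimal sketch; it
    assumes a malloc that exports these symbols via <malloc.h> (true for this
    dlmalloc copy and for glibc), and the threshold values are illustrative
    only, not recommendations from this changeset:

    #include <malloc.h>   /* mallopt, malloc_trim, malloc_stats */
    #include <stdlib.h>

    int main(void)
    {
      /* Keep up to 256K of unused top-most memory before free() trims it,
         and pad each sbrk request by 1M (see the M_TRIM_THRESHOLD and
         M_TOP_PAD comments in the hunks above). */
      mallopt(M_TRIM_THRESHOLD, 256 * 1024);
      mallopt(M_TOP_PAD, 1024 * 1024);
      /* Service requests of at least 192K via mmap, the value the comment
         above reports as near-optimal for the XF86 X server on Linux. */
      mallopt(M_MMAP_THRESHOLD, 192 * 1024);

      void* p = malloc(512 * 1024);   /* large request, eligible for mmap */
      free(p);

      malloc_trim(0);    /* release all freed top-most memory to the system */
      malloc_stats();    /* print the counters kept via malloc_update_mallinfo */
      return 0;
    }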
  • omalloc/Misc/dlmalloc/trace-test.c

    ra2e447 r75f460  
    @@ line 75 @@

     unsigned long reqs[32] = {
    -  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
    -  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 
    + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
     };

    @@ line 95 @@
      printf("#       bin        N\n");
      for (i = 0; i < 32; ++i) {
    -    if (reqs[i] != 0) 
    +    if (reqs[i] != 0)
          printf(" %10ld%10ld\n", b, reqs[i]);
        b <<= 1;
  • omalloc/configure.ac

    ra2e447 r75f460  
    @@ line 307 @@
     {
      void* addr = OM_MALLOC_MALLOC(512);
    -#ifdef OM_MALLOC_SIZEOF_ADDR 
    +#ifdef OM_MALLOC_SIZEOF_ADDR
      if (OM_MALLOC_SIZEOF_ADDR(addr) < 512)
        exit(1);
  • omalloc/omDebugTrack.c

    ra2e447 r75f460  
    @@ line 676 @@
      omTrackAddr d_addr = (omTrackAddr) addr;
      if (!omCheckPtr(addr, omError_MaxError, OM_FLR))
    -  { 
    +  {
        omAssume(omIsTrackAddr(addr) && omOutAddr_2_TrackAddr(addr) == d_addr);
        d_addr->flags |= OM_FSTATIC;
    -  } 
    +  }
     }
  • omalloc/omGetBackTrace.c

    ra2e447 r75f460  
    @@ line 50 @@
        OM_GET_BACK_TRACE(2)
     /* the following fails on Mac OsX, but the debugging
    - * support it provides is too useful to disable it 
    + * support it provides is too useful to disable it
     */
     #ifdef __linux

    @@ line 68 @@
        OM_GET_BACK_TRACE(15)
        OM_GET_BACK_TRACE(16)
    -    OM_GET_BACK_TRACE(17) 
    +    OM_GET_BACK_TRACE(17)
     #endif
     #endif
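    The omGetBackTrace.c hunk shows the unrolled OM_GET_BACK_TRACE(2) through
    OM_GET_BACK_TRACE(17) calls guarded by #ifdef __linux. A plausible reading
    (the macro body itself is not part of this diff, so treat it as an
    assumption) is that each invocation records one stack frame with GCC's
    __builtin_return_address, which only accepts compile-time constant levels
    and is only safe to walk where the frame chain is intact, which would
    explain both the hand-unrolled calls and the Linux-only deep levels:

    #include <stdio.h>

    /* Record the return address at a fixed call-chain level, checking
       __builtin_frame_address first so a broken frame chain stops the walk.
       The level must be a literal constant, hence one macro use per level. */
    #define RECORD_LEVEL(buf, n)                       \
      do {                                             \
        if (__builtin_frame_address(n) == 0) return;   \
        (buf)[(n) - 1] = __builtin_return_address(n);  \
      } while (0)

    static void get_backtrace(void* buf[2])
    {
      RECORD_LEVEL(buf, 1);   /* caller of get_backtrace */
      RECORD_LEVEL(buf, 2);   /* caller's caller */
    }

    int main(void)
    {
      void* bt[2] = { 0, 0 };
      get_backtrace(bt);
      printf("caller: %p, caller's caller: %p\n", bt[0], bt[1]);
      return 0;
    }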
  • omalloc/omRet2Info.c

    ra2e447 r75f460  
    @@ line 39 @@
      {
        strncpy(om_this_prog, argv0, MAXPATHLEN); // // buf);
    -    om_this_prog[MAXPATHLEN - 1]= '\0';   
    +    om_this_prog[MAXPATHLEN - 1]= '\0';
      }
     }
  • omalloc/omalloc.dox

    ra2e447 r75f460  
    @@ line 6 @@
      handle the allocation and de-allocation of memory blocks of small
      size as efficient as it is possible (a few machine instructions in most cases).
    - 
    - Short introduction to omalloc 
    +
    + Short introduction to omalloc
      <A HREF="http://www.mathematik.uni-kl.de/~motsak/talks/SICSA2011_Motsak_omalloc.pdf">short talk</A>.
      For more details see
      <A HREF="ftp://www.mathematik.uni-kl.de/pub/Math/Singular/doc/OMALLOC.ps.gz">Detailed manual (OMALLOC.ps.gz)</A>.

    - Note that this package has no further dependencies,  while it is used by the rest of Singular packages 
    - - \ref factory_page 
    + Note that this package has no further dependencies,  while it is used by the rest of Singular packages
    + - \ref factory_page
      - \ref libpolys_page
      - \ref kernel_page
      - \ref singular_page
    - 
    +
     */
  • omalloc/omalloc.pc.in

    ra2e447 r75f460  
    @@ line 10 @@

     # Requires:
    -# Conflicts: 
    +# Conflicts:

     Cflags: -I${includedir} -DOM_NDEBUG
     Libs: -L${libdir} -l@PACKAGE@
    -# Libs.private: 
    +# Libs.private:
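    The omalloc.pc.in template above is what pkg-config consumers see: Cflags
    adds the include path plus -DOM_NDEBUG, and Libs links the generated
    lib@PACKAGE@. A minimal consumer might look like the sketch below; the
    header path and the omAlloc/omFreeSize entry points are assumptions drawn
    from omalloc's public API, not part of this changeset:

    /* demo.c: build with  cc demo.c `pkg-config --cflags --libs omalloc` */
    #include <omalloc/omalloc.h>   /* assumed install location of the header */

    int main(void)
    {
      /* Small blocks come from omalloc's size bins in a few instructions. */
      int* block = (int*) omAlloc(64 * sizeof(int));
      block[0] = 42;
      /* The size-aware free is the fast path, so callers that know the
         allocation size pass it back instead of calling plain omFree. */
      omFreeSize(block, 64 * sizeof(int));
      return 0;
    }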