/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain.  Use, modify, and
  redistribute this code without permission or acknowledgement in any
  way you wish.  Send questions, comments, complaints, performance
  data, etc to dl@cs.oswego.edu

* VERSION 2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O), and link it into another program.  All
  of the compile-time options default to reasonable values for use on
  most unix platforms.  Compile -DWIN32 for reasonable defaults on
  Windows.  You might later want to step through various compile-time
  and dynamic tuning options.
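
  For example, a typical out-of-the-box build and link might look
  like this (an illustrative sketch; file and program names are
  placeholders):

      gcc -O2 -c malloc.c
      gcc -O2 myprog.c malloc.o -o myprog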

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.0.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below.

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written.  However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  The main properties of the algorithms are:
  * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
    with ties normally decided via FIFO (i.e., least recently used).
  * For small (<= 64 bytes by default) requests, it is a caching
    allocator, that maintains pools of quickly recycled chunks.
  * In between, and for combinations of large and small requests, it does
    the best it can trying to meet both goals at once.
  * For very large requests (>= 128KB by default), it relies on system
    memory mapping facilities, if supported.

  For a longer but slightly out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

  You may already, by default, be using a C library containing a malloc
  that is based on some version of this malloc (for example in
  linux).  You might still want to use the one in this file in order to
  customize settings or to avoid overheads associated with library
  versions.

* Contents, described in more detail in "description of public routines" below.

  Standard (ANSI/SVID/...) functions:
    malloc(size_t n);
    calloc(size_t n_elements, size_t element_size);
    free(Void_t* p);
    realloc(Void_t* p, size_t n);
    memalign(size_t alignment, size_t n);
    valloc(size_t n);
    mallinfo()
    mallopt(int parameter_number, int parameter_value)

  Additional functions:
    independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
    independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
    pvalloc(size_t n);
    cfree(Void_t* p);
    malloc_trim(size_t pad);
    malloc_usable_size(Void_t* p);
    malloc_stats();

* Vital statistics:

  Supported pointer representation:       4 or 8 bytes
  Supported size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.
       You can adjust this by defining INTERNAL_SIZE_T

  Alignment:                              2 * sizeof(size_t) (default)
       (i.e., 8-byte alignment with a 4-byte size_t).  This suffices
       for nearly all current machines and C compilers.  However, you
       can define MALLOC_ALIGNMENT to be wider than this if necessary.

  Minimum overhead per allocated chunk:   4 or 8 bytes
       Each malloced chunk has a hidden word of overhead holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
       ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field and 8 (16) bytes for
       free list pointers.  Thus, the minimum allocatable size is
       16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

       The maximum overhead wastage (i.e., the number of extra bytes
       allocated beyond those requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       2 * sizeof(size_t) bytes plus the remainder from a system page
       (the minimal mmap unit); typically 4096 or 8192 bytes.
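
       For example, with 4-byte size_t, a request for 25 bytes needs
       25 + 4 bytes of header, rounded up to the 8-byte alignment
       unit: a 32-byte chunk, wasting 7 bytes, within the 16-byte
       minimum-size bound above.  (A worked example, derived from the
       sizes stated above.)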

  Maximum allocated size:  4-byte size_t: 2^32 minus about two pages
                           8-byte size_t: 2^64 minus about two pages

       It is assumed that (possibly signed) size_t values suffice to
       represent chunk sizes.  `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type.  The ISO C standard says that it must be
       unsigned, but a few systems are known not to adhere to this.
       Additionally, even when size_t is unsigned, sbrk (which is by
       default used to obtain memory from system) accepts signed
       arguments, and may not be able to handle size_t-wide arguments
       with negative sign bit.  Generally, values that would
       appear as negative after accounting for overhead and alignment
       are supported only via mmap(), which does not have this
       limitation.

       Requests for sizes outside the allowed range will perform an
       optional failure action and then return null.  (Requests may
       also fail because a system is out of memory.)

  Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined

       When USE_MALLOC_LOCK is defined, wrappers are created to
       surround every public call with either a pthread mutex or
       a win32 spinlock (depending on WIN32).  This is not
       especially fast, and can be a major bottleneck.
       It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions.  If you are using malloc in a concurrent program,
       you would be far better off obtaining ptmalloc, which is
       derived from a version of this malloc, and is well-tuned for
       concurrent programs.  (See http://www.malloc.de)

  Compliance: I believe it is compliant with the 1997 Single Unix
       Specification (See http://www.opennc.org).  Also SVID/XPG,
       ANSI C, and probably others as well.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below.  It has been tested most extensively on Solaris and
    Linux.  It is also reported to work on WIN32 platforms.
    People also report using it in stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  It is not
    at all modular.  (Sorry!)  It uses a lot of macros.  To be at all
    usable, this code should be compiled using an optimizing compiler
    (for example gcc -O3) that can simplify expressions and control
    paths.  (FAQ: some macros import variables as arguments rather than
    declare locals because people reported that some debuggers
    otherwise get confused.)

    OPTION                     DEFAULT VALUE

    Compilation Environment options:

    __STD_C                    derived from C compiler defines
    WIN32                      NOT defined
    HAVE_MEMCPY                defined
    USE_MEMCPY                 1 if HAVE_MEMCPY is defined
    HAVE_MMAP                  defined as 1
    MMAP_CLEARS                1
    HAVE_MREMAP                0 unless linux defined
    malloc_getpagesize         derived from system #includes, or 4096 if not
    HAVE_USR_INCLUDE_MALLOC_H  NOT defined
    LACKS_UNISTD_H             NOT defined unless WIN32
    LACKS_SYS_PARAM_H          NOT defined unless WIN32
    LACKS_SYS_MMAN_H           NOT defined unless WIN32

    Changing default word sizes:

    INTERNAL_SIZE_T            size_t
    MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)

    Configuration and functionality options:

    USE_DL_PREFIX              NOT defined
    USE_PUBLIC_MALLOC_WRAPPERS NOT defined
    USE_MALLOC_LOCK            NOT defined
    DEBUG                      NOT defined
    REALLOC_ZERO_BYTES_FREES   NOT defined
    MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
    TRIM_FASTBINS              0

    Options for customizing MORECORE:

    MORECORE                   sbrk
    MORECORE_CONTIGUOUS        1
    MORECORE_CANNOT_TRIM       NOT defined
    MMAP_AS_MORECORE_SIZE      (1024 * 1024)

    Tuning options that are also dynamically changeable via mallopt:

    DEFAULT_MXFAST             64
    DEFAULT_TRIM_THRESHOLD     128 * 1024
    DEFAULT_TOP_PAD            0
    DEFAULT_MMAP_THRESHOLD     128 * 1024
    DEFAULT_MMAP_MAX           65536

    There are several other #defined constants and macros that you
    probably don't want to touch unless you are extending or adapting malloc.
*/

/*
  WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.
*/

/* #define WIN32 */

#ifdef WIN32

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

/* Win32 doesn't supply or need the following headers */
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H

/* Use the supplied emulation of sbrk */
#define MORECORE sbrk
#define MORECORE_CONTIGUOUS 1
#define MORECORE_FAILURE    ((void*)(-1))

/* Use the supplied emulation of mmap and munmap */
#define HAVE_MMAP 1
#define MUNMAP_FAILURE  (-1)
#define MMAP_CLEARS 1

/* These values don't really matter in windows mmap emulation */
#define MAP_PRIVATE 1
#define MAP_ANONYMOUS 2
#define PROT_READ 1
#define PROT_WRITE 2

/* Emulation functions defined at the end of this file */

/* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
#ifdef USE_MALLOC_LOCK
static int slwait(int *sl);
static int slrelease(int *sl);
#endif

static long getpagesize(void);
static long getregionsize(void);
static void *sbrk(long size);
static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
static long munmap(void *ptr, long size);

static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed);
static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user);

#endif

/*
  __STD_C should be nonzero if using ANSI-standard C compiler, a C++
  compiler, or a C compiler sufficiently close to ANSI to get away
  with it.
*/

#ifndef __STD_C
#if defined(__STDC__) || defined(__cplusplus)
#define __STD_C     1
#else
#define __STD_C     0
#endif
#endif /*__STD_C*/


/*
  Void_t* is the pointer type that malloc should say it returns
*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

/* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */

/* #define  LACKS_UNISTD_H */

#ifndef LACKS_UNISTD_H
#include <unistd.h>
#endif

/* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */

/* #define  LACKS_SYS_PARAM_H */


#include <stdio.h>    /* needed for malloc_stats */
#include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */


/*
  Debugging:

  Because freed chunks may be overwritten with bookkeeping fields, this
  malloc will often die when freed memory is overwritten by user
  programs.  This can be very effective (albeit in an annoying way)
  in helping track down dangling pointers.

  If you compile with -DDEBUG, a number of assertion checks are
  enabled that will catch more memory errors.  You probably won't be
  able to make much sense of the actual assertion errors, but they
  should help you locate incorrectly overwritten memory.  The
  checking is fairly extensive, and will slow down execution
  noticeably.  Calling malloc_stats or mallinfo with DEBUG set will
  attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries.  (By nature, mmapped regions
  cannot be checked very much automatically.)

  Setting DEBUG may also be helpful if you are trying to modify
  this code.  The assertions in the check routines spell out in more
  detail the assumptions and invariants underlying the algorithms.

  Setting DEBUG does NOT provide an automated mechanism for checking
  that all accesses to malloced memory stay within their
  bounds.  However, there are several add-ons and adaptations of this
  or other mallocs available that do this.
*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes.

  The default version is the same as size_t.

  While not strictly necessary, it is best to define this as an
  unsigned type, even if size_t is a signed type.  This may avoid some
  artificial size limitations on some systems.

  On a 64-bit machine, you may be able to reduce malloc overhead by
  defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
  expense of not being able to handle more than 2^32 of malloced
  space.  If this limitation is acceptable, you are encouraged to set
  this unless you are on a platform requiring 16-byte alignments.  In
  this case the alignment requirements turn out to negate any
  potential advantages of decreasing size_t word size.

  Implementors: Beware of the possible combinations of:
     - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
       and might be the same width as int or as long
     - size_t might have different width and signedness than INTERNAL_SIZE_T
     - int and long might be 32 or 64 bits, and might be the same width
  To deal with this, most comparisons and difference computations
  among INTERNAL_SIZE_Ts should cast them to unsigned long, being
  aware of the fact that casting an unsigned int to a wider long does
  not sign-extend.  (This also makes checking for negative numbers
  awkward.)  Some of these casts result in harmless compiler warnings
  on some systems.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif
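
/*
  For example (an illustrative sketch only), a 64-bit platform willing
  to cap total malloced space at 2^32 bytes might compile with:

      cc -O2 -DINTERNAL_SIZE_T="unsigned int" -c malloc.c
*/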

/* The corresponding word size */
#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))


/*
  MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
  It must be a power of two at least 2 * SIZE_SZ, even on machines
  for which smaller alignments would suffice.  It may be defined as
  larger than this though.  Note however that code and data structures
  are optimized for the case of 8-byte alignment.
*/


#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
#endif

/* The corresponding bit mask value */
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)



/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should.  Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/

/* #define REALLOC_ZERO_BYTES_FREES */
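
/*
  For example (illustrative): with the default (not defined),
  realloc(p, 0) returns a pointer to a minimum-sized chunk that must
  itself eventually be freed; with REALLOC_ZERO_BYTES_FREES defined,
  realloc(p, 0) instead frees p and returns 0.
*/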

/*
  TRIM_FASTBINS controls whether free() of a very small chunk can
  immediately lead to trimming.  Setting to true (1) can reduce memory
  footprint, but will almost always slow down programs that use a lot
  of small chunks.

  Define this only if you are willing to give up some speed to more
  aggressively reduce system-level memory footprint when releasing
  memory in programs that use many small chunks.  You can get
  essentially the same effect by setting MXFAST to 0, but this can
  lead to even greater slowdowns in programs using many small chunks.
  TRIM_FASTBINS is an in-between compile-time option, that disables
  only those chunks bordering topmost memory from being placed in
  fastbins.
*/

#ifndef TRIM_FASTBINS
#define TRIM_FASTBINS  0
#endif


/*
  USE_DL_PREFIX will prefix all public routines with the string 'dl'.
  This is necessary when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
*/

/* #define USE_DL_PREFIX */
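
/*
  For example (an illustrative sketch), when this file is compiled
  with -DUSE_DL_PREFIX, a program can use both allocators side by
  side:

    void* p = dlmalloc(100);   // served by the malloc in this file
    void* q = malloc(100);     // served by the regular system malloc
    dlfree(p);
    free(q);
*/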


/*
  USE_MALLOC_LOCK causes wrapper functions to surround each
  callable routine with pthread mutex lock/unlock.

  USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
*/


/* #define USE_MALLOC_LOCK */


/*
  If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
  actually a wrapper function that first calls MALLOC_PREACTION, then
  calls the internal routine, and follows it with
  MALLOC_POSTACTION.  This is needed for locking, but you can also use
  this, without USE_MALLOC_LOCK, for purposes of interception,
  instrumentation, etc.  It is a sad fact that using wrappers often
  noticeably degrades performance of malloc-intensive programs.
*/

#ifdef USE_MALLOC_LOCK
#define USE_PUBLIC_MALLOC_WRAPPERS
#else
/* #define USE_PUBLIC_MALLOC_WRAPPERS */
#endif


/*
  Two-phase name translation.
  All of the actual routines are given mangled names.
  When wrappers are used, they become the public callable versions.
  When DL_PREFIX is used, the callable names are prefixed.
*/

#ifndef USE_PUBLIC_MALLOC_WRAPPERS
#define cALLOc      public_cALLOc
#define fREe        public_fREe
#define cFREe       public_cFREe
#define mALLOc      public_mALLOc
#define mEMALIGn    public_mEMALIGn
#define rEALLOc     public_rEALLOc
#define vALLOc      public_vALLOc
#define pVALLOc     public_pVALLOc
#define mALLINFo    public_mALLINFo
#define mALLOPt     public_mALLOPt
#define mTRIm       public_mTRIm
#define mSTATs      public_mSTATs
#define mUSABLe     public_mUSABLe
#define iCALLOc     public_iCALLOc
#define iCOMALLOc   public_iCOMALLOc
#endif

#ifdef USE_DL_PREFIX
#define public_cALLOc    dlcalloc
#define public_fREe      dlfree
#define public_cFREe     dlcfree
#define public_mALLOc    dlmalloc
#define public_mEMALIGn  dlmemalign
#define public_rEALLOc   dlrealloc
#define public_vALLOc    dlvalloc
#define public_pVALLOc   dlpvalloc
#define public_mALLINFo  dlmallinfo
#define public_mALLOPt   dlmallopt
#define public_mTRIm     dlmalloc_trim
#define public_mSTATs    dlmalloc_stats
#define public_mUSABLe   dlmalloc_usable_size
#define public_iCALLOc   dlindependent_calloc
#define public_iCOMALLOc dlindependent_comalloc
#else /* USE_DL_PREFIX */
#define public_cALLOc    calloc
#define public_fREe      free
#define public_cFREe     cfree
#define public_mALLOc    malloc
#define public_mEMALIGn  memalign
#define public_rEALLOc   realloc
#define public_vALLOc    valloc
#define public_pVALLOc   pvalloc
#define public_mALLINFo  mallinfo
#define public_mALLOPt   mallopt
#define public_mTRIm     malloc_trim
#define public_mSTATs    malloc_stats
#define public_mUSABLe   malloc_usable_size
#define public_iCALLOc   independent_calloc
#define public_iCOMALLOc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc.  Otherwise simple
  macro versions are defined below.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called.  People report that the macro
  versions are faster than libc versions on some systems.

  Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
  (of <= 36 bytes) are manually unrolled in realloc and calloc.
*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif


#if (__STD_C || defined(HAVE_MEMCPY))

#ifdef WIN32
/* On Win32 memset and memcpy are already declared in windows.h */
#else
#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

/*
  MALLOC_FAILURE_ACTION is the action to take before "return 0" when
  malloc fails to be able to return memory, either because memory is
  exhausted or because of illegal arguments.

  By default, sets errno if running on STD_C platform, else does nothing.
*/

#ifndef MALLOC_FAILURE_ACTION
#if __STD_C
#define MALLOC_FAILURE_ACTION \
   errno = ENOMEM;

#else
#define MALLOC_FAILURE_ACTION
#endif
#endif
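
/*
  For example (illustrative), since any statement is permitted here, a
  program that would rather die loudly than ever observe a null return
  could compile with:

      -DMALLOC_FAILURE_ACTION='abort();'
*/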

/*
  MORECORE-related declarations.  By default, rely on sbrk
*/


#ifdef LACKS_UNISTD_H
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif
#endif

/*
  MORECORE is the name of the routine to call to obtain more memory
  from the system.  See below for general guidance on writing
  alternative MORECORE functions, as well as a version for WIN32 and a
  sample version for pre-OSX macos.
*/

#ifndef MORECORE
#define MORECORE sbrk
#endif
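
/*
  As a minimal sketch (not part of this file; all names below are
  hypothetical), a MORECORE replacement serving every request from one
  fixed static arena might look like:

    #define ARENA_SIZE (4 * 1024 * 1024)
    static char   static_arena[ARENA_SIZE];
    static size_t arena_top = 0;

    void* static_morecore(ptrdiff_t increment) {
      if (increment > 0) {
        if ((size_t)increment > ARENA_SIZE - arena_top)
          return (void*)MORECORE_FAILURE;   // out of arena space
        arena_top += (size_t)increment;
        return static_arena + (arena_top - (size_t)increment);
      }
      // Shrink requests are ignored; pair with MORECORE_CANNOT_TRIM.
      return static_arena + arena_top;      // MORECORE(0): current top
    }

  compiled with -DMORECORE=static_morecore -DMORECORE_CANNOT_TRIM
  -DHAVE_MMAP=0.  Consecutive positive calls return contiguous
  increasing addresses, so MORECORE_CONTIGUOUS can remain 1.
*/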

/*
  MORECORE_FAILURE is the value returned upon failure of MORECORE
  as well as mmap.  Since it cannot be an otherwise valid memory address,
  and must reflect values of standard sys calls, you probably ought not
  try to redefine it.
*/

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE (-1)
#endif

/*
  If MORECORE_CONTIGUOUS is true, take advantage of fact that
  consecutive calls to MORECORE with positive arguments always return
  contiguous increasing addresses.  This is true of unix sbrk.  Even
  if not defined, when regions happen to be contiguous, malloc will
  permit allocations spanning regions obtained from different
  calls.  But defining this when applicable enables some stronger
  consistency checks and space efficiencies.
*/

#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif

/*
  Define MORECORE_CANNOT_TRIM if your version of MORECORE
  cannot release space back to the system when given negative
  arguments.  This is generally necessary only if you are using
  a hand-crafted MORECORE function that cannot handle negative arguments.
*/

/* #define MORECORE_CANNOT_TRIM */


/*
  Define HAVE_MMAP as true to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().  Also, if mmap
  is available, it is used as a backup strategy in cases where
  MORECORE fails to provide space from system.

  This malloc is best tuned to work with mmap for large requests.
  If you do not have mmap, operations involving very large chunks (1MB
  or so) may be slower than you'd like.
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1

/*
  Standard unix mmap using /dev/zero clears memory so calloc doesn't
  need to.
*/

#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif

#else /* no mmap */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 0
#endif
#endif


/*
  MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
  sbrk fails, and mmap is used as a backup (which is done only if
  HAVE_MMAP).  The value must be a multiple of page size.  This
  backup strategy generally applies only when systems have "holes" in
  address space, so sbrk cannot perform contiguous expansion, but
  there is still space available on system.  On systems for which
  this is known to be useful (i.e. most linux kernels), this occurs
  only when programs allocate huge amounts of memory.  Between this,
  and the fact that mmap regions tend to be limited, the size should
  be large, to avoid too many mmap calls and thus avoid running out
  of kernel resources.
*/

#ifndef MMAP_AS_MORECORE_SIZE
#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif

#endif /* HAVE_MREMAP */


/*
  The system page size.  To the extent possible, this malloc manages
  memory from the system in page-size units.  Note that this value is
  cached during initialization into a field of malloc_state.  So even
  if malloc_getpagesize is a function, it is only called once.

  The following mechanics for getpagesize were adapted from bsd/gnu
  getpagesize.h.  If none of the system-probes here apply, a value of
  4096 is used, which should be OK: If they don't apply, then using
  the actual value probably doesn't impact performance.
*/


#ifndef malloc_getpagesize

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif

#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize (4096)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif

/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics.  It should work on any SVID/XPG compliant system that has
  a /usr/include/malloc.h defining struct mallinfo.  (If you'd like to
  install such a thing yourself, cut out the preliminary declarations
  as described above and below and save them in a malloc.h file.  But
  there's no compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields that are not even meaningful in this version of
  malloc.  These fields are instead filled by mallinfo() with
  other numbers that might be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.  The original SVID version of this struct,
  defined on most systems with mallinfo, declares all fields as
  ints.  But some others define them as unsigned long.  If your system
  defines the fields using a type of different width than listed here,
  you must #include your system version and #define
  HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* non-mmapped space allocated from system */
  int ordblks;  /* number of free chunks */
  int smblks;   /* number of fastbin blocks */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* space in mmapped regions */
  int usmblks;  /* maximum total allocated space */
  int fsmblks;  /* space available in freed fastbin blocks */
  int uordblks; /* total allocated space */
  int fordblks; /* total free space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/*
  SVID/XPG defines four standard parameter numbers for mallopt,
  normally defined in malloc.h.  Only one of these (M_MXFAST) is used
  in this malloc.  The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
  so setting them has no effect.  But this malloc also supports other
  options in mallopt described below.
*/
#endif


/* ---------- description of public routines ------------ */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or null
  if no space is available.  Additionally, on failure, errno is
  set to ENOMEM on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk.  (The minimum
  size is 16 bytes on most 32-bit systems, and 24 or 32 bytes on 64-bit
  systems.)  On most systems, size_t is an unsigned type, so calls
  with negative arguments are interpreted as requests for huge amounts
  of space, which will often fail.  The maximum supported value of n
  differs across systems, but is in all cases less than the maximum
  representable value of a size_t.
*/
#if __STD_C
Void_t*  public_mALLOc(size_t);
#else
Void_t*  public_mALLOc();
#endif

/*
  free(Void_t* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null.  It can have arbitrary (i.e., bad!)
  effects if p has already been freed.

  Unless disabled (using mallopt), freeing very large spaces will,
  when possible, automatically trigger operations that give
  back unused memory to the system, thus reducing program footprint.
*/
#if __STD_C
void     public_fREe(Void_t*);
#else
void     public_fREe();
#endif

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
#if __STD_C
Void_t*  public_cALLOc(size_t, size_t);
#else
Void_t*  public_cALLOc();
#endif

/*
  realloc(Void_t* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p.  The algorithm
  prefers extending p when possible, otherwise it employs the
  equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  Unless the #define
  REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
  zero (re)allocates a minimum-sized chunk.

  Large chunks that were internally obtained via mmap will always
  be reallocated using malloc-copy-free sequences unless
  the system supports MREMAP (currently only linux).

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
#if __STD_C
Void_t*  public_rEALLOc(Void_t*, size_t);
#else
Void_t*  public_rEALLOc();
#endif

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two.  If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
#if __STD_C
Void_t*  public_mEMALIGn(size_t, size_t);
#else
Void_t*  public_mEMALIGn();
#endif
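
/*
  For example (illustrative), to obtain a buffer aligned on a 64-byte
  boundary, say for cache-line alignment:

    void* p = memalign(64, 1000);
    // on success, ((size_t)p % 64) == 0
*/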

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system.  If the pagesize is unknown, 4096 is used.
*/
#if __STD_C
Void_t*  public_vALLOc(size_t);
#else
Void_t*  public_vALLOc();
#endif



/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters.  The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  Only one of these (M_MXFAST) is used
  in this malloc.  The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
  so setting them has no effect.  But this malloc also supports four
  other options in mallopt.  See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #   default    allowed param values
  M_MXFAST          1         64         0-80  (0 disables fastbins)
  M_TRIM_THRESHOLD  -1        128*1024   any   (-1U disables trimming)
  M_TOP_PAD         -2        0          any
  M_MMAP_THRESHOLD  -3        128*1024   any   (or 0 if no MMAP support)
  M_MMAP_MAX        -4        65536      any   (0 disables use of mmap)
*/
#if __STD_C
int      public_mALLOPt(int, int);
#else
int      public_mALLOPt();
#endif
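
/*
  For example (illustrative; the M_* symbols are defined later in this
  file), a long-lived server might tune:

    mallopt(M_TRIM_THRESHOLD, 256*1024);  // trim less eagerly
    mallopt(M_MMAP_THRESHOLD, 256*1024);  // mmap only larger requests
    mallopt(M_MXFAST, 0);                 // disable fastbins entirely
*/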


/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    the number of fastbin blocks (i.e., small chunks that
               have been freed but not yet reused or consolidated)
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space.  This will be greater
               than current total if trimming has occurred.
  fsmblks:   total bytes held in fastbin blocks
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim.  ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
#if __STD_C
struct mallinfo public_mALLINFo(void);
#else
struct mallinfo public_mALLINFo();
#endif
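
/*
  For example (illustrative), a rough footprint report using the
  fields described above:

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "system bytes = %d, in use = %d, free = %d\n",
            mi.arena + mi.hblkhd, mi.uordblks, mi.fordblks);
*/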

/*
  independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc.  The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage).  If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed.  Otherwise, the chunks array must be of at least
  n_elements in length.  It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed.  If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed.  For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int i;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
#if __STD_C
Void_t** public_iCALLOc(size_t, size_t, Void_t**);
#else
Void_t** public_iCALLOc();
#endif

/*
  independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc.  The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null).  If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed.  Otherwise, the chunks array
  must be of at least n_elements in length.  It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed.  If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space.  (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char* body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements.  For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
#if __STD_C
Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
#else
Void_t** public_iCOMALLOc();
#endif


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
#if __STD_C
Void_t*  public_pVALLOc(size_t);
#else
Void_t*  public_pVALLOc();
#endif

/*
  cfree(Void_t* p);
  Equivalent to free(p).

  cfree is needed/defined on some systems that pair it with calloc,
  for odd historical reasons (such as: cfree is used in example
  code in the first edition of K&R).
*/
#if __STD_C
void     public_cFREe(Void_t*);
#else
void     public_cFREe();
#endif

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool.  You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program.  However, it cannot guarantee to reduce memory.  Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed.  If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less).  Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  malloc_trim returns 1 if it actually released any memory, else 0.
  On systems that do not support "negative sbrks", it will always
  return 0.
*/
#if __STD_C
int      public_mTRIm(size_t);
#else
int      public_mTRIm();
#endif
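
/*
  For example (illustrative; big_buffer is a placeholder), after
  releasing a large working set:

    free(big_buffer);
    if (malloc_trim(64 * 1024))   // keep 64K of slack for future use
      fprintf(stderr, "returned memory to the system\n");
*/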
---|
1200 | |
---|
1201 | /* |
---|
1202 | malloc_usable_size(Void_t* p); |
---|
1203 | |
---|
1204 | Returns the number of bytes you can actually use in |
---|
1205 | an allocated chunk, which may be more than you requested (although |
---|
1206 | often not) due to alignment and minimum size constraints. |
---|
1207 | You can use this many bytes without worrying about |
---|
1208 | overwriting other allocated objects. This is not a particularly great |
---|
1209 | programming practice. malloc_usable_size can be more useful in |
---|
1210 | debugging and assertions, for example: |
---|
1211 | |
---|
1212 | p = malloc(n); |
---|
1213 | assert(malloc_usable_size(p) >= 256); |
---|
1214 | |
---|
1215 | */ |
---|
1216 | #if __STD_C |
---|
1217 | size_t public_mUSABLe(Void_t*); |
---|
1218 | #else |
---|
1219 | size_t public_mUSABLe(); |
---|
1220 | #endif |
---|
1221 | |
---|
1222 | /* |
---|
1223 | malloc_stats(); |
---|
1224 | Prints on stderr the amount of space obtained from the system (both |
---|
1225 | via sbrk and mmap), the maximum amount (which may be more than |
---|
1226 | current if malloc_trim and/or munmap got called), and the current |
---|
1227 | number of bytes allocated via malloc (or realloc, etc) but not yet |
---|
1228 | freed. Note that this is the number of bytes allocated, not the |
---|
1229 | number requested. It will be larger than the number requested |
---|
1230 | because of alignment and bookkeeping overhead. Because it includes |
---|
1231 | alignment wastage as being in use, this figure may be greater than |
---|
1232 | zero even when no user-level chunks are allocated. |
---|
1233 | |
---|
1234 | The reported current and maximum system memory can be inaccurate if |
---|
1235 | a program makes other calls to system memory allocation functions |
---|
1236 | (normally sbrk) outside of malloc. |
---|
1237 | |
---|
1238 | malloc_stats prints only the most commonly interesting statistics. |
---|
1239 | More information can be obtained by calling mallinfo. |
---|
1240 | |
---|
1241 | */ |
---|
1242 | #if __STD_C |
---|
1243 | void public_mSTATs(); |
---|
1244 | #else |
---|
1245 | void public_mSTATs(); |
---|
1246 | #endif |
---|

/* mallopt tuning options */

/*
  M_MXFAST is the maximum request size used for "fastbins", special bins
  that hold returned chunks without consolidating their spaces. This
  enables future requests for chunks of the same size to be handled
  very quickly, but can increase fragmentation, and thus increase the
  overall memory footprint of a program.

  This malloc manages fastbins very conservatively yet still
  efficiently, so fragmentation is rarely a problem for values less
  than or equal to the default. The maximum supported value of MXFAST
  is 80. You wouldn't want it any higher than this anyway. Fastbins
  are designed especially for use with many small structs, objects or
  strings -- the default handles structs/objects/arrays with sizes up
  to eight 4-byte fields, or small strings representing words, tokens,
  etc. Using fastbins for larger objects normally worsens
  fragmentation without improving speed.

  M_MXFAST is set in REQUEST size units. It is internally used in
  chunksize units, which adds padding and alignment. You can reduce
  M_MXFAST to 0 to disable all use of fastbins. This causes the malloc
  algorithm to be a closer approximation of fifo-best-fit in all cases,
  not just for larger requests, but will generally cause it to be
  slower.
*/


/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
#ifndef M_MXFAST
#define M_MXFAST            1
#endif

#ifndef DEFAULT_MXFAST
#define DEFAULT_MXFAST     64
#endif
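/*
  M_MXFAST and the parameters below are adjusted at run time via
  mallopt. A hedged sketch (disabled; the value 32 is an arbitrary
  example, not a recommendation):
*/
#if 0
static void demo_tune_fastbins()
{
  /* Cache requests only up to 32 bytes in fastbins (default 64).
     mallopt returns 0 on failure, e.g. a value above the supported
     maximum of 80. */
  if (mallopt(M_MXFAST, 32) == 0) { /* handle failure */ }

  /* Setting 0 disables fastbins entirely, approximating pure
     fifo-best-fit at some cost in speed. */
  mallopt(M_MXFAST, 0);
}
#endif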


/*
  M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
  to keep before releasing via malloc_trim in free().

  Automatic trimming is mainly useful in long-lived programs.
  Because trimming via sbrk can be slow on some systems, and can
  sometimes be wasteful (in cases where programs immediately
  afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing this much memory.

  The trim threshold and the mmap control parameters (see below)
  can be traded off with one another. Trimming and mmapping are
  two different ways of releasing unused memory back to the
  system. Between these two, it is often possible to keep
  system-level demands of a long-lived program down to a bare
  minimum. For example, in one test suite of sessions measuring
  the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.

  If you are using this malloc in a long-lived program, it should
  pay to experiment with these values. As a rough guide, you
  might set to a value close to the average size of a process
  (program) running on your system. Releasing this much memory
  would allow such a process to run in memory. Generally, it's
  worth it to tune for trimming rather than memory mapping when a
  program undergoes phases where several large chunks are
  allocated and released in ways that can reuse each other's
  storage, perhaps mixed with phases where there are no such
  chunks at all. And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.

  However, in most programs, these parameters serve mainly as
  protection against the system-level effects of carrying around
  massive amounts of unneeded memory. Since frequent calls to
  sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.

  The trim value must be greater than page size to have any useful
  effect. To disable trimming completely, you can set to
  (unsigned long)(-1)

  Trim settings interact with fastbin (MXFAST) settings: Unless
  TRIM_FASTBINS is defined, automatic trimming never takes place upon
  freeing a chunk with size less than or equal to MXFAST. Trimming is
  instead delayed until subsequent freeing of larger chunks. However,
  you can still force an attempted trim by calling malloc_trim.

  Also, trimming is not generally possible in cases where
  the main arena is obtained via mmap.

  Note that the trick some people use of mallocing a huge space and
  then freeing it at program startup, in an attempt to reserve system
  memory, doesn't have the intended effect under automatic trimming,
  since that memory will immediately be returned to the system.
*/

#define M_TRIM_THRESHOLD       -1

#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif
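/*
  A sketch of adjusting trimming behavior (disabled; the figures are
  arbitrary examples, not recommendations):
*/
#if 0
static void demo_tune_trimming()
{
  /* Trim whenever more than 64K is left unused at the top of the heap. */
  mallopt(M_TRIM_THRESHOLD, 64 * 1024);

  /* Or pass -1, i.e. (unsigned long)(-1), to disable automatic trimming. */
  mallopt(M_TRIM_THRESHOLD, -1);

  /* An explicit trim can still be forced at a quiet moment, here
     keeping 32K of slack to service future allocations. */
  malloc_trim(32 * 1024);
}
#endif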

/*
  M_TOP_PAD is the amount of extra `padding' space to allocate or
  retain whenever sbrk is called. It is used in two ways internally:

  * When sbrk is called to extend the top of the arena to satisfy
    a new malloc request, this much padding is added to the sbrk
    request.

  * When malloc_trim is called automatically from free(),
    it is used as the `pad' argument.

  In both cases, the actual amount of padding is rounded
  so that the end of the arena is always a system page boundary.

  The main reason for using padding is to avoid calling sbrk so
  often. Having even a small pad greatly reduces the likelihood
  that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes
  time.

  Automatic rounding-up to page-size units is normally sufficient
  to avoid measurable overhead, so the default is 0. However, in
  systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than
  the program needs.
*/

#define M_TOP_PAD              -2

#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
  M_MMAP_THRESHOLD is the request size threshold for using mmap()
  to service a request. Requests of at least this size that cannot
  be allocated using already-existing space will be serviced via mmap.
  (If enough normal freed space already exists it is used instead.)

  Using mmap segregates relatively large chunks of memory so that
  they can be individually obtained and released from the host
  system. A request serviced through mmap is never reused by any
  other request (at least not directly; the system may just so
  happen to remap successive requests to the same locations).

  Segregating space in this way has the benefits that:

   1. Mmapped space can ALWAYS be individually released back
      to the system, which helps keep the system level memory
      demands of a long-lived program low.
   2. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.
   3. On some systems with "holes" in address spaces, mmap can obtain
      memory that sbrk cannot.

  However, it has the disadvantages that:

   1. The space cannot be reclaimed, consolidated, and then
      used to service later requests, as happens with normal chunks.
   2. It can lead to more wastage because of mmap page alignment
      requirements.
   3. It causes malloc performance to be more dependent on host
      system memory management support routines which may vary in
      implementation quality and may impose arbitrary
      limitations. Generally, servicing a request via normal
      malloc steps is faster than going through a system's mmap.

  The advantages of mmap nearly always outweigh disadvantages for
  "large" chunks, but the value of "large" varies across systems. The
  default is an empirically derived value that works well in most
  systems.
*/

#define M_MMAP_THRESHOLD      -3

#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*
  M_MMAP_MAX is the maximum number of requests to simultaneously
  service using mmap. This parameter exists because some systems
  have a limited number of internal tables for use by mmap, and
  using more than a few of them may degrade performance.

  The default is set to a value that serves only as a safeguard.
  Setting to 0 disables use of mmap for servicing large requests. If
  HAVE_MMAP is not set, the default value is 0, and attempts to set it
  to non-zero values in mallopt will fail.
*/

#define M_MMAP_MAX             -4

#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (65536)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif
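/*
  A sketch of steering large requests toward or away from mmap
  (disabled; the 192K threshold echoes the XF86 experiment described
  under M_TRIM_THRESHOLD above, and the cap of 1024 is an arbitrary
  example):
*/
#if 0
static void demo_tune_mmap()
{
  /* Service requests of 192K and up via mmap, so each can be
     returned to the system individually upon free(). */
  mallopt(M_MMAP_THRESHOLD, 192 * 1024);

  /* Cap the number of simultaneously mmapped regions; 0 would route
     all requests through the normal sbrk-based arena instead. */
  mallopt(M_MMAP_MAX, 1024);
}
#endif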

#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/* --------------------- public wrappers ---------------------- */

#ifdef USE_PUBLIC_MALLOC_WRAPPERS

/* Declare all routines as internal */
#if __STD_C
static Void_t*  mALLOc(size_t);
static void     fREe(Void_t*);
static Void_t*  rEALLOc(Void_t*, size_t);
static Void_t*  mEMALIGn(size_t, size_t);
static Void_t*  vALLOc(size_t);
static Void_t*  pVALLOc(size_t);
static Void_t*  cALLOc(size_t, size_t);
static Void_t** iCALLOc(size_t, size_t, Void_t**);
static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
static void     cFREe(Void_t*);
static int      mTRIm(size_t);
static size_t   mUSABLe(Void_t*);
static void     mSTATs();
static int      mALLOPt(int, int);
static struct mallinfo mALLINFo(void);
#else
static Void_t*  mALLOc();
static void     fREe();
static Void_t*  rEALLOc();
static Void_t*  mEMALIGn();
static Void_t*  vALLOc();
static Void_t*  pVALLOc();
static Void_t*  cALLOc();
static Void_t** iCALLOc();
static Void_t** iCOMALLOc();
static void     cFREe();
static int      mTRIm();
static size_t   mUSABLe();
static void     mSTATs();
static int      mALLOPt();
static struct mallinfo mALLINFo();
#endif

/*
  MALLOC_PREACTION and MALLOC_POSTACTION should be
  defined to return 0 on success, and nonzero on failure.
  The return value of MALLOC_POSTACTION is currently ignored
  in wrapper functions since there is no reasonable default
  action to take on failure.
*/


#ifdef USE_MALLOC_LOCK

#ifdef WIN32

static int mALLOC_MUTEx;
#define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
#define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)

#else

#include <pthread.h>

static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;

#define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
#define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)

#endif /* WIN32 */

#else

/* Substitute anything you like for these */

#define MALLOC_PREACTION   (0)
#define MALLOC_POSTACTION  (0)

#endif
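/*
  As the comment above invites, anything can be substituted for the
  pre/post actions. A hedged sketch (disabled) that wraps a pthread
  mutex and adds a hypothetical call counter in place of the defaults
  above; the my_* names are invented for illustration:
*/
#if 0
#include <pthread.h>

static pthread_mutex_t my_malloc_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long   my_malloc_calls;      /* invented instrumentation */

static int my_preaction()
{
  if (pthread_mutex_lock(&my_malloc_lock) != 0)
    return 1;                  /* nonzero makes the wrapper bail out */
  ++my_malloc_calls;
  return 0;
}

static int my_postaction()
{
  return pthread_mutex_unlock(&my_malloc_lock);
}

#define MALLOC_PREACTION   my_preaction()
#define MALLOC_POSTACTION  my_postaction()
#endif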

Void_t* public_mALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_fREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  fREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = rEALLOc(m, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mEMALIGn(alignment, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_vALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = vALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_pVALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = pVALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_cALLOc(size_t n, size_t elem_size) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = cALLOc(n, elem_size);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}


Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
  Void_t** m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = iCALLOc(n, elem_size, chunks);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
  Void_t** m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = iCOMALLOc(n, sizes, chunks);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_cFREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  cFREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

int public_mTRIm(size_t s) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mTRIm(s);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

size_t public_mUSABLe(Void_t* m) {
  size_t result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mUSABLe(m);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

void public_mSTATs() {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  mSTATs();
  if (MALLOC_POSTACTION != 0) {
  }
}

struct mallinfo public_mALLINFo() {
  struct mallinfo m;
  if (MALLOC_PREACTION != 0) {
    struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    return nm;
  }
  m = mALLINFo();
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

int public_mALLOPt(int p, int v) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mALLOPt(p, v);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

#endif



/* ------------- Optional versions of memcopy ---------------- */


#if USE_MEMCPY

/*
  Note: memcpy is ONLY invoked with non-overlapping regions,
  so the (usually slower) memmove is not needed.
*/

#define MALLOC_COPY(dest, src, nbytes)  memcpy(dest, src, nbytes)
#define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  unsigned long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
  long mcn;                                                                   \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  unsigned long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
  long mcn;                                                                   \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif
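/*
  The interleaved for/switch above is Duff's device: the switch jumps
  into the middle of an unrolled loop to dispose of the remainder
  words, after which each pass of the loop handles eight words. The
  same control flow written out plainly, as an equivalent (disabled)
  sketch with illustrative names:
*/
#if 0
static void duff_zero_equivalent(INTERNAL_SIZE_T* p, unsigned long nwords)
{
  unsigned long full = nwords / 8;   /* complete 8-word passes */
  unsigned long rem  = nwords % 8;   /* remainder, done by fall-through */
  while (rem--)  *p++ = 0;
  while (full--) {
    p[0] = p[1] = p[2] = p[3] = 0;
    p[4] = p[5] = p[6] = p[7] = 0;
    p += 8;
  }
}
#endif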

/* ------------------ MMAP support ------------------ */


#if HAVE_MMAP

#include <fcntl.h>
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>
#endif

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

/*
  Nearly all versions of mmap support MAP_ANONYMOUS,
  so the following is unlikely to be needed, but is
  supplied just in case.
*/

#ifndef MAP_ANONYMOUS

static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */

#define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
 (dev_zero_fd = open("/dev/zero", O_RDWR), \
  mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))

#else

#define MMAP(addr, size, prot, flags) \
 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))

#endif


#endif /* HAVE_MMAP */
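/*
  The allocation code later in this file invokes the MMAP macro
  roughly as in the following (disabled) sketch, using the usual POSIX
  protection and flag values; the function name is invented for
  illustration:
*/
#if 0
static char* demo_grab_region(size_t nb)
{
  char* mm = (char*)(MMAP(0, nb, PROT_READ|PROT_WRITE, MAP_PRIVATE));
  if (mm == (char*)MAP_FAILED)    /* mmap signals failure as MAP_FAILED */
    return 0;
  /* ... use the region; munmap(mm, nb) later releases it individually */
  return mm;
}
#endif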


/*
  -----------------------  Chunk representations -----------------------
*/


/*
  This struct declaration is misleading (but accurate and necessary).
  It declares a "view" into memory allowing access to necessary
  fields at known offsets from a given base. See explanation below.
*/

struct malloc_chunk {

  INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
  INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};


typedef struct malloc_chunk* mchunkptr;

/*
   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish. (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.) Sizes of free chunks are stored both
    in the front of each chunk and at the end. This makes
    consolidating fragmented chunks into bigger chunks very fast. The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus at least double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory. If
    prev_inuse is set for any given chunk, then you CANNOT determine
    the size of the previous chunk, and might even get a memory
    addressing fault when trying to do so.

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. This makes it easier to
    deal with alignments etc but can be very confusing when trying
    to extend or adapt this code.

    The two exceptions to all this are

     1. The special chunk `top' doesn't bother using the
        trailing size field since there is no next contiguous chunk
        that would have to index off it. After initialization, `top'
        is forced to always exist.  If it would become less than
        MINSIZE bytes long, it is replenished.

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        allocated one-by-one, each must contain its own trailing size field.

*/

/*
  ---------- Size and alignment checks and conversions ----------
*/

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* The smallest possible chunk */
#define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))

/* The smallest size we can malloc is an aligned minimal chunk */

#define MINSIZE  \
  (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))

/* Check if m has acceptable alignment */

#define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)


/*
   Check if a request is so large that it would wrap around zero when
   padded and aligned. To simplify some other code, the bound is made
   low enough so that adding MINSIZE will also not wrap around zero.
*/

#define REQUEST_OUT_OF_RANGE(req)                                 \
  ((unsigned long)(req) >=                                        \
   (unsigned long)(INTERNAL_SIZE_T)(-2 * MINSIZE))

/* pad request bytes into a usable size -- internal version */

#define request2size(req)                                         \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
   MINSIZE :                                                      \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

/*  Same, except also perform argument check */

#define checked_request2size(req, sz)                             \
  if (REQUEST_OUT_OF_RANGE(req)) {                                \
    MALLOC_FAILURE_ACTION;                                        \
    return 0;                                                     \
  }                                                               \
  (sz) = request2size(req);
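/*
  To make the padding concrete, here is request2size worked through
  for one common configuration, assuming SIZE_SZ == 4 and
  MALLOC_ALIGN_MASK == 7 (both depend on compile-time settings):

  request2size(0)  -> 0 + 4 + 7 = 11, below MINSIZE(16)  -> 16
  request2size(13) -> (13 + 4 + 7) & ~7 = 24 & ~7        -> 24
  request2size(28) -> (28 + 4 + 7) & ~7 = 39 & ~7        -> 32

  So a 28-byte request occupies a 32-byte chunk: 4 bytes of size
  field overhead plus rounding up to the 8-byte alignment unit.
*/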

/*
  --------------- Physical chunk operations ---------------
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
#define PREV_INUSE 0x1

/* extract inuse bit of previous chunk */
#define prev_inuse(p)       ((p)->size & PREV_INUSE)


/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
#define IS_MMAPPED 0x2

/* check for mmap()'ed chunk */
#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/*
  Bits to mask off when extracting size

  Note: IS_MMAPPED is intentionally not masked off from size field in
  macros for which mmapped chunks should never be seen. This should
  cause helpful core dumps to occur if it is tried by accident by
  people extending or adapting this malloc.
*/
#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Get size, ignoring use bits */
#define chunksize(p)         ((p)->size & ~(SIZE_BITS))


/* Ptr to next physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */
#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))

/* extract p's inuse bit */
#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* set/clear chunk as being inuse without otherwise disturbing */
#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)


/* check/set/clear inuse bits in known places */
#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))


/* Set size at head, without disturbing its use bit */
#define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use field */
#define set_head(p, s)       ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */
#define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))


/*
  -------------------- Internal data structures --------------------

   All internal state is held in an instance of malloc_state defined
   below. There are no other static variables, except in two optional
   cases:
   * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
   * If HAVE_MMAP is true, but mmap doesn't support
     MAP_ANONYMOUS, a dummy file descriptor for mmap.

   Beware of lots of tricks that minimize the total bookkeeping space
   requirements. The result is a little over 1K bytes (for 4byte
   pointers and size_t.)
*/

/*
  Bins

    An array of bin headers for free chunks. Each bin is doubly
    linked.  The bins are approximately proportionally (log) spaced.
    There are a lot of these bins (128). This may look excessive, but
    works very well in practice.  Most bins hold sizes that are
    unusual as malloc request sizes, but are more usual for fragments
    and consolidated sets of chunks, which is what these bins hold, so
    they can be found quickly.  All procedures maintain the invariant
    that no consolidated chunk physically borders another one, so each
    chunk in a list is known to be preceded and followed by either
    inuse chunks or the ends of memory.

    Chunks in bins are kept in size order, with ties going to the
    approximately least recently used chunk. Ordering isn't needed
    for the small bins, which all contain the same-sized chunks, but
    facilitates best-fit allocation for larger chunks. These lists
    are just sequential. Keeping them in order almost never requires
    enough traversal to warrant using fancier ordered data
    structures.

    Chunks of the same size are linked with the most
    recently freed at the front, and allocations are taken from the
    back.  This results in LRU (FIFO) allocation order, which tends
    to give each chunk an equal opportunity to be consolidated with
    adjacent freed chunks, resulting in larger free chunks and less
    fragmentation.

    To simplify use in double-linked lists, each bin header acts
    as a malloc_chunk. This avoids special-casing for headers.
    But to conserve space and improve locality, we allocate
    only the fd/bk pointers of bins, and then use repositioning tricks
    to treat these as the fields of a malloc_chunk*.
*/

typedef struct malloc_chunk* mbinptr;

/* addressing -- note that bin_at(0) does not exist */
#define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))

/* analog of ++bin */
#define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))

/* Reminders about list directionality within bins */
#define first(b)     ((b)->fd)
#define last(b)      ((b)->bk)

/* Take a chunk off a bin list */
#define unlink(P, BK, FD) {                                            \
  FD = P->fd;                                                          \
  BK = P->bk;                                                          \
  FD->bk = BK;                                                         \
  BK->fd = FD;                                                         \
}
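/*
  unlink splices P out of its circular list, handing back its
  neighbors through the BK/FD out-parameters. The call pattern used
  later in this malloc looks like this (disabled) sketch:
*/
#if 0
{
  mchunkptr victim = last(bin);   /* take from the back: FIFO order */
  mchunkptr bck;
  mchunkptr fwd;
  unlink(victim, bck, fwd);       /* victim's fd/bk are now stale;
                                     bck and fwd point to each other */
}
#endif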

/*
  Indexing

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically spaced:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The bins top out around 1MB because we expect to service large
    requests via mmap.
*/

#define NBINS              128
#define NSMALLBINS          64
#define SMALLBIN_WIDTH       8
#define MIN_LARGE_SIZE     512

#define in_smallbin_range(sz)  \
  ((unsigned long)(sz) < (unsigned long)MIN_LARGE_SIZE)

#define smallbin_index(sz)     (((unsigned)(sz)) >> 3)

#define largebin_index(sz)                                                   \
(((((unsigned long)(sz)) >>  6) <= 32)?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >>  9) <= 20)?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 12) <= 10)? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 15) <=  4)? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 18) <=  2)? 124 + (((unsigned long)(sz)) >> 18): \
                                        126)

#define bin_index(sz) \
 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
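/*
  A few worked values make the spacing concrete (arguments are chunk
  sizes, not request sizes):

  bin_index(24)      ->   3  (small bin: 24 >> 3)
  bin_index(600)     ->  65  (600  >> 6 = 9 <= 32, so 56 + 9)
  bin_index(5000)    -> 100  (5000 >> 9 = 9 <= 20, so 91 + 9)
  bin_index(2097152) -> 126  (too big for every rung: the last bin)
*/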


/*
  Unsorted chunks

    All remainders from chunk splits, as well as all returned chunks,
    are first placed in the "unsorted" bin. They are then placed
    in regular bins after malloc gives them ONE chance to be used before
    binning. So, basically, the unsorted_chunks list acts as a queue,
    with chunks being placed on it in free (and malloc_consolidate),
    and taken off (to be either used or placed in bins) in malloc.
*/

/* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
#define unsorted_chunks(M)          (bin_at(M, 1))

/*
  Top

    The top-most available chunk (i.e., the one bordering the end of
    available memory) is treated specially. It is never included in
    any bin, is used only if no other chunk is available, and is
    released back to the system if it is very large (see
    M_TRIM_THRESHOLD).  Because top initially
    points to its own bin with initial zero size, thus forcing
    extension on the first malloc request, we avoid having any special
    code in malloc to check whether it even exists yet. But we still
    need to do so when getting memory from system, so we make
    initial_top treat the bin as a legal but unusable chunk during the
    interval between initialization and the first call to
    sYSMALLOc. (This is somewhat delicate, since it relies on
    the 2 preceding words to be zero during this interval as well.)
*/

/* Conveniently, the unsorted bin can be used as dummy top on first call */
#define initial_top(M)              (unsorted_chunks(M))

/*
  Binmap

    To help compensate for the large number of bins, a one-level index
    structure is used for bin-by-bin searching.  `binmap' is a
    bitvector recording whether bins are definitely empty so they can
    be skipped over during traversals.  The bits are NOT always
    cleared as soon as bins are empty, but instead only
    when they are noticed to be empty during traversal in malloc.
*/

/* Conservatively use 32 bits per map word, even if on 64bit system */
#define BINMAPSHIFT      5
#define BITSPERMAP       (1U << BINMAPSHIFT)
#define BINMAPSIZE       (NBINS / BITSPERMAP)

#define idx2block(i)     ((i) >> BINMAPSHIFT)
#define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))

#define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
#define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
#define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
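/*
  For instance, here is where bin 70's emptiness bit lives:

  idx2block(70) = 70 >> 5        = 2       (third 32-bit map word)
  idx2bit(70)   = 1 << (70 & 31) = 1 << 6  = 0x40

  So mark_bin(m, 70) sets bit 6 of m->binmap[2], and a traversal that
  finds get_binmap(m, 70) clear can skip bin 70 entirely.
*/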

/*
  Fastbins

    An array of lists holding recently freed small chunks.  Fastbins
    are not doubly linked.  It is faster to single-link them, and
    since chunks are never removed from the middles of these lists,
    double linking is not necessary. Also, unlike regular bins, they
    are not even processed in FIFO order (they use faster LIFO) since
    ordering doesn't much matter in the transient contexts in which
    fastbins are normally used.

    Chunks in fastbins keep their inuse bit set, so they cannot
    be consolidated with other free chunks. malloc_consolidate
    releases all chunks in fastbins and consolidates them with
    other free chunks.
*/

typedef struct malloc_chunk* mfastbinptr;

/* offset 2 to use otherwise unindexable first 2 bins */
#define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)

/* The maximum fastbin request size we support */
#define MAX_FAST_SIZE     80

#define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
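/*
  The fastbin arithmetic, worked out under the same illustrative
  assumptions as before (SIZE_SZ == 4, MALLOC_ALIGN_MASK == 7):

  The smallest chunk, 16 bytes:  fastbin_index(16) = (16 >> 3) - 2 = 0
  request2size(80)  = (80 + 4 + 7) & ~7 = 88
  fastbin_index(88) = (88 >> 3) - 2     = 9,  so NFASTBINS = 10
*/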

/*
  FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
  that triggers automatic consolidation of possibly-surrounding
  fastbin chunks. This is a heuristic, so the exact value should not
  matter too much. It is defined at half the default trim threshold as a
  compromise heuristic to only attempt consolidation if it is likely
  to lead to trimming. However, it is not dynamically tunable, since
  consolidation reduces fragmentation surrounding large chunks even
  if trimming is not used.
*/

#define FASTBIN_CONSOLIDATION_THRESHOLD  (65536UL)

/*
  Since the lowest 2 bits in max_fast don't matter in size comparisons,
  they are used as flags.
*/

/*
  FASTCHUNKS_BIT held in max_fast indicates that there are probably
  some fastbin chunks. It is set true on entering a chunk into any
  fastbin, and cleared only in malloc_consolidate.

  The truth value is inverted so that have_fastchunks will be true
  upon startup (since statics are zero-filled), simplifying
  initialization checks.
*/

#define FASTCHUNKS_BIT        (1U)

#define have_fastchunks(M)   (((M)->max_fast &  FASTCHUNKS_BIT) == 0)
#define clear_fastchunks(M)  ((M)->max_fast |=  FASTCHUNKS_BIT)
#define set_fastchunks(M)    ((M)->max_fast &= ~FASTCHUNKS_BIT)

/*
  NONCONTIGUOUS_BIT indicates that MORECORE does not return contiguous
  regions.  Otherwise, contiguity is exploited in merging together,
  when possible, results from consecutive MORECORE calls.

  The initial value comes from MORECORE_CONTIGUOUS, but is
  changed dynamically if mmap is ever used as an sbrk substitute.
*/

#define NONCONTIGUOUS_BIT     (2U)

#define contiguous(M)        (((M)->max_fast &  NONCONTIGUOUS_BIT) == 0)
#define noncontiguous(M)     (((M)->max_fast &  NONCONTIGUOUS_BIT) != 0)
#define set_noncontiguous(M) ((M)->max_fast |=  NONCONTIGUOUS_BIT)
#define set_contiguous(M)    ((M)->max_fast &= ~NONCONTIGUOUS_BIT)

/*
   Set value of max_fast.
   Use impossibly small value if 0.
   Precondition: there are no existing fastbin chunks.
   Setting the value clears fastchunk bit but preserves noncontiguous bit.
*/

#define set_max_fast(M, s) \
  (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
  FASTCHUNKS_BIT | \
  ((M)->max_fast &  NONCONTIGUOUS_BIT)


/*
   ----------- Internal state representation and initialization -----------
*/

struct malloc_state {

  /* The maximum chunk size to be eligible for fastbin */
  INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */

  /* Fastbins */
  mfastbinptr      fastbins[NFASTBINS];

  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr        top;

  /* The remainder from the most recent split of a small request */
  mchunkptr        last_remainder;

  /* Normal bins packed as described above */
  mchunkptr        bins[NBINS * 2];

  /* Bitmap of bins */
  unsigned int     binmap[BINMAPSIZE];

  /* Tunable parameters */
  unsigned long    trim_threshold;
  INTERNAL_SIZE_T  top_pad;
  INTERNAL_SIZE_T  mmap_threshold;

  /* Memory map support */
  int              n_mmaps;
  int              n_mmaps_max;
  int              max_n_mmaps;

  /* Cache malloc_getpagesize */
  unsigned int     pagesize;

  /* Statistics */
  INTERNAL_SIZE_T  mmapped_mem;
  INTERNAL_SIZE_T  sbrked_mem;
  INTERNAL_SIZE_T  max_sbrked_mem;
  INTERNAL_SIZE_T  max_mmapped_mem;
  INTERNAL_SIZE_T  max_total_mem;
};

typedef struct malloc_state *mstate;

/*
   There is exactly one instance of this struct in this malloc.
   If you are adapting this malloc in a way that does NOT use a static
   malloc_state, you MUST explicitly zero-fill it before using. This
   malloc relies on the property that malloc_state is initialized to
   all zeroes (as is true of C statics).
*/

static struct malloc_state av_;  /* never directly referenced */

/*
   All uses of av_ are via get_malloc_state().
   At most one "call" to get_malloc_state is made per invocation of
   the public versions of malloc and free, but other routines
   that in turn invoke malloc and/or free may call it more than once.
   Also, it is called in check* routines if DEBUG is set.
*/

#define get_malloc_state() (&(av_))

/*
  Initialize a malloc_state struct.

  This is called only from within malloc_consolidate, which needs
  to be called in the same contexts anyway.  It is never called directly
  outside of malloc_consolidate because some optimizing compilers try
  to inline it at all call points, which turns out not to be an
  optimization at all. (Inlining it in malloc_consolidate is fine though.)
*/

#if __STD_C
static void malloc_init_state(mstate av)
#else
static void malloc_init_state(av) mstate av;
#endif
{
  int     i;
  mbinptr bin;

  /* Establish circular links for normal bins */
  for (i = 1; i < NBINS; ++i) {
    bin = bin_at(av,i);
    bin->fd = bin->bk = bin;
  }

  av->top_pad        = DEFAULT_TOP_PAD;
  av->n_mmaps_max    = DEFAULT_MMAP_MAX;
  av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
  av->trim_threshold = DEFAULT_TRIM_THRESHOLD;

#if !MORECORE_CONTIGUOUS
  set_noncontiguous(av);
#endif

  set_max_fast(av, DEFAULT_MXFAST);

  av->top      = initial_top(av);
  av->pagesize = malloc_getpagesize;
}

/*
   Other internal utilities operating on mstates
*/

#if __STD_C
static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
static int      sYSTRIm(size_t, mstate);
static void     malloc_consolidate(mstate);
static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
#else
static Void_t*  sYSMALLOc();
static int      sYSTRIm();
static void     malloc_consolidate();
static Void_t** iALLOc();
#endif

/*
  Debugging support

  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if ! DEBUG

#define check_chunk(P)
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_remalloced_chunk(P,N)
#define check_malloced_chunk(P,N)
#define check_malloc_state()

#else
#define check_chunk(P)              do_check_chunk(P)
#define check_free_chunk(P)         do_check_free_chunk(P)
#define check_inuse_chunk(P)        do_check_inuse_chunk(P)
#define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
#define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
#define check_malloc_state()        do_check_malloc_state()

/*
  Properties of all chunks
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();
  unsigned long sz = chunksize(p);
  /* min and max possible addresses assuming contiguous allocation */
  char* max_address = (char*)(av->top) + chunksize(av->top);
  char* min_address = max_address - av->sbrked_mem;

  if (!chunk_is_mmapped(p)) {

    /* Has legal address ... */
    if (p != av->top) {
      if (contiguous(av)) {
        assert(((char*)p) >= min_address);
        assert(((char*)p + sz) <= ((char*)(av->top)));
      }
    }
    else {
      /* top size is always at least MINSIZE */
      assert((unsigned long)(sz) >= MINSIZE);
      /* top predecessor always marked inuse */
      assert(prev_inuse(p));
    }

  }
  else {
#if HAVE_MMAP
    /* address is outside main heap */
    if (contiguous(av) && av->top != initial_top(av)) {
      assert(((char*)p) < min_address || ((char*)p) > max_address);
    }
    /* chunk is page-aligned */
    assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
    /* mem is aligned */
    assert(aligned_OK(chunk2mem(p)));
#else
    /* force an appropriate assert violation if debug set */
    assert(!chunk_is_mmapped(p));
#endif
  }
}

/*
  Properties of free chunks
*/

#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(p);

  /* Chunk must claim to be free ... */
  assert(!inuse(p));
  assert (!chunk_is_mmapped(p));

  /* Unless a special marker, must have OK fields */
  if ((unsigned long)(sz) >= MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert (next == av->top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}

/*
  Properties of inuse chunks
*/

#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();
  mchunkptr next;
  do_check_chunk(p);

  if (chunk_is_mmapped(p))
    return; /* mmapped chunks have no next/prev */

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  next = next_chunk(p);

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))  {
    /* Note that we cannot even look at prev unless it is not inuse */
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }

  if (next == av->top) {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);
}

/*
  Properties of chunks recycled from fastbins
*/

#if __STD_C
static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert((unsigned long)(sz) >= MINSIZE);
  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));
  /* chunk is less than MINSIZE more than request */
  assert((long)(sz) - (long)(s) >= 0);
  assert((long)(sz) - (long)(s + MINSIZE) < 0);
}

/*
  Properties of nonrecycled chunks at the point they are malloced
*/

#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  /* same as recycled case ... */
  do_check_remalloced_chunk(p, s);

  /*
    ... plus, must obey implementation invariant that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use
    chunk, or the base of its memory arena. This is ensured
    by making all allocations from the `lowest' part of any found
    chunk.  This does not necessarily hold however for chunks
    recycled via fastbins.
  */

  assert(prev_inuse(p));
}
2629 | |
---|
2630 | |
---|
2631 | /* |
---|
2632 | Properties of malloc_state. |
---|
2633 | |
---|
2634 | This may be useful for debugging malloc, as well as detecting user |
---|
2635 | programmer errors that somehow write into malloc_state. |
---|
2636 | |
---|
2637 | If you are extending or experimenting with this malloc, you can |
---|
2638 | probably figure out how to hack this routine to print out or |
---|
2639 | display chunk addresses, sizes, bins, and other instrumentation. |
---|
2640 | */ |
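
/*
  As an illustrative sketch only (dump_bins is a hypothetical helper,
  not part of this malloc, and assumes <stdio.h>), such a hack might
  walk the regular bins and print every free chunk:

    static void dump_bins(mstate av)
    {
      int i;
      mbinptr b;
      mchunkptr p;
      for (i = 1; i < NBINS; ++i) {
        b = bin_at(av, i);
        for (p = last(b); p != b; p = p->bk)
          fprintf(stderr, "bin %d: chunk %p size %lu\n",
                  i, (Void_t*)p, (unsigned long)chunksize(p));
      }
    }
*/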

static void do_check_malloc_state()
{
  mstate av = get_malloc_state();
  int i;
  mchunkptr p;
  mchunkptr q;
  mbinptr b;
  unsigned int binbit;
  int empty;
  unsigned int idx;
  INTERNAL_SIZE_T size;
  unsigned long total = 0;
  int max_fast_bin;

  /* internal size_t must be no wider than pointer type */
  assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));

  /* alignment is a power of 2 */
  assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);

  /* cannot run remaining checks until fully initialized */
  if (av->top == 0 || av->top == initial_top(av))
    return;

  /* pagesize is a power of 2 */
  assert((av->pagesize & (av->pagesize-1)) == 0);

  /* properties of fastbins */

  /* max_fast is in allowed range */
  assert((av->max_fast & ~1) <= request2size(MAX_FAST_SIZE));

  max_fast_bin = fastbin_index(av->max_fast);

  for (i = 0; i < NFASTBINS; ++i) {
    p = av->fastbins[i];

    /* all bins past max_fast are empty */
    if (i > max_fast_bin)
      assert(p == 0);

    while (p != 0) {
      /* each chunk claims to be inuse */
      do_check_inuse_chunk(p);
      total += chunksize(p);
      /* chunk belongs in this bin */
      assert(fastbin_index(chunksize(p)) == i);
      p = p->fd;
    }
  }

  if (total != 0)
    assert(have_fastchunks(av));
  else if (!have_fastchunks(av))
    assert(total == 0);

  /* check normal bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av,i);

    /* binmap is accurate (except for bin 1 == unsorted_chunks) */
    if (i >= 2) {
      binbit = get_binmap(av,i);
      empty = last(b) == b;
      if (!binbit)
        assert(empty);
      else if (!empty)
        assert(binbit);
    }

    for (p = last(b); p != b; p = p->bk) {
      /* each chunk claims to be free */
      do_check_free_chunk(p);
      size = chunksize(p);
      total += size;
      if (i >= 2) {
        /* chunk belongs in bin */
        idx = bin_index(size);
        assert(idx == i);
        /* lists are sorted */
        assert(p->bk == b ||
               (unsigned long)chunksize(p->bk) >= (unsigned long)chunksize(p));
      }
      /* chunk is followed by a legal chain of inuse chunks */
      for (q = next_chunk(p);
           (q != av->top && inuse(q) &&
            (unsigned long)(chunksize(q)) >= MINSIZE);
           q = next_chunk(q))
        do_check_inuse_chunk(q);
    }
  }

  /* top chunk is OK */
  check_chunk(av->top);

  /* sanity checks for statistics */

  assert(total <= (unsigned long)(av->max_total_mem));
  assert(av->n_mmaps >= 0);
  assert(av->n_mmaps <= av->n_mmaps_max);
  assert(av->n_mmaps <= av->max_n_mmaps);

  assert((unsigned long)(av->sbrked_mem) <=
         (unsigned long)(av->max_sbrked_mem));

  assert((unsigned long)(av->mmapped_mem) <=
         (unsigned long)(av->max_mmapped_mem));

  assert((unsigned long)(av->max_total_mem) >=
         (unsigned long)(av->mmapped_mem) + (unsigned long)(av->sbrked_mem));
}
#endif


/* ----------- Routines dealing with system allocation -------------- */

/*
  sysmalloc handles malloc cases requiring more memory from the system.
  On entry, it is assumed that av->top does not have enough
  space to service a request for nb bytes, thus requiring that av->top
  be extended or replaced.
*/

#if __STD_C
static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
#else
static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
#endif
{
  mchunkptr       old_top;        /* incoming value of av->top */
  INTERNAL_SIZE_T old_size;       /* its size */
  char*           old_end;        /* its end address */

  long            size;           /* arg to first MORECORE or mmap call */
  char*           brk;            /* return value from MORECORE */

  long            correction;     /* arg to 2nd MORECORE call */
  char*           snd_brk;        /* 2nd return val */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
  INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
  char*           aligned_brk;    /* aligned offset into brk */

  mchunkptr       p;              /* the allocated/returned chunk */
  mchunkptr       remainder;      /* remainder from allocation */
  unsigned long   remainder_size; /* its size */

  unsigned long   sum;            /* for updating stats */

  size_t          pagemask = av->pagesize - 1;


#if HAVE_MMAP

  /*
    If have mmap, and the request size meets the mmap threshold, and
    the system supports mmap, and there are few enough currently
    allocated mmapped regions, try to directly map this request
    rather than expanding top.
  */

  if ((unsigned long)(nb) >= (unsigned long)(av->mmap_threshold) &&
      (av->n_mmaps < av->n_mmaps_max)) {

    char* mm;             /* return value from mmap call*/

    /*
      Round up size to nearest page. For mmapped chunks, the overhead
      is one SIZE_SZ unit larger than for normal chunks, because there
      is no following chunk whose prev_size field could be used.
    */
    size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
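
    /*
      Worked example (illustrative figures only, assuming 4-byte
      SIZE_SZ, MALLOC_ALIGN_MASK of 7, and 4096-byte pages): for
      nb = 200000, 200000 + 4 + 7 + 4095 = 204106, which ~pagemask
      truncates to 49 whole pages = 200704 bytes mapped.
    */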

    /* Don't try if size wraps around 0 */
    if ((unsigned long)(size) > (unsigned long)(nb)) {

      mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

      if (mm != (char*)(MORECORE_FAILURE)) {

        /*
          The offset to the start of the mmapped region is stored
          in the prev_size field of the chunk. This allows us to adjust
          the returned start address to meet alignment requirements here
          and in memalign(), and still be able to compute the proper
          address argument for later munmap in free() and realloc().
        */
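
        /*
          Illustrative sketch (not code executed here) of how free()
          later undoes this adjustment by reading the offset back out
          of prev_size:

            INTERNAL_SIZE_T offset = p->prev_size;
            munmap((char*)p - offset, chunksize(p) + offset);
        */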

        front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {
          correction = MALLOC_ALIGNMENT - front_misalign;
          p = (mchunkptr)(mm + correction);
          p->prev_size = correction;
          set_head(p, (size - correction) | IS_MMAPPED);
        }
        else {
          p = (mchunkptr)mm;
          set_head(p, size | IS_MMAPPED);
        }

        /* update statistics */

        if (++av->n_mmaps > av->max_n_mmaps)
          av->max_n_mmaps = av->n_mmaps;

        sum = av->mmapped_mem += size;
        if (sum > (unsigned long)(av->max_mmapped_mem))
          av->max_mmapped_mem = sum;
        sum += av->sbrked_mem;
        if (sum > (unsigned long)(av->max_total_mem))
          av->max_total_mem = sum;

        check_chunk(p);

        return chunk2mem(p);
      }
    }
  }
#endif

  /* Record incoming configuration of top */

  old_top  = av->top;
  old_size = chunksize(old_top);
  old_end  = (char*)(chunk_at_offset(old_top, old_size));

  brk = snd_brk = (char*)(MORECORE_FAILURE);

  /*
    If not the first time through, we require old_size to be
    at least MINSIZE and to have prev_inuse set.
  */

  assert((old_top == initial_top(av) && old_size == 0) ||
         ((unsigned long) (old_size) >= MINSIZE &&
          prev_inuse(old_top)));

  /* Precondition: not enough current space to satisfy nb request */
  assert((unsigned long)(old_size) < (unsigned long)(nb + MINSIZE));

  /* Precondition: all fastbins are consolidated */
  assert(!have_fastchunks(av));


  /* Request enough space for nb + pad + overhead */

  size = nb + av->top_pad + MINSIZE;

  /*
    If contiguous, we can subtract out existing space that we hope to
    combine with new space. We add it back later only if
    we don't actually get contiguous space.
  */

  if (contiguous(av))
    size -= old_size;

  /*
    Round to a multiple of page size.
    If MORECORE is not contiguous, this ensures that we only call it
    with whole-page arguments. And if MORECORE is contiguous and
    this is not first time through, this preserves page-alignment of
    previous calls. Otherwise, we correct to page-align below.
  */

  size = (size + pagemask) & ~pagemask;

  /*
    Don't try to call MORECORE if argument is so big as to appear
    negative. Note that since mmap takes size_t arg, it may succeed
    below even if we cannot call MORECORE.
  */

  if (size > 0)
    brk = (char*)(MORECORE(size));

  /*
    If have mmap, try using it as a backup when MORECORE fails or
    cannot be used. This is worth doing on systems that have "holes" in
    address space, so sbrk cannot extend to give contiguous space, but
    space is available elsewhere. Note that we ignore mmap max count
    and threshold limits, since the space will not be used as a
    segregated mmap region.
  */

#if HAVE_MMAP
  if (brk == (char*)(MORECORE_FAILURE)) {

    /* Cannot merge with old top, so add its size back in */
    if (contiguous(av))
      size = (size + old_size + pagemask) & ~pagemask;

    /* If we are relying on mmap as backup, then use larger units */
    if ((unsigned long)(size) < (unsigned long)(MMAP_AS_MORECORE_SIZE))
      size = MMAP_AS_MORECORE_SIZE;

    /* Don't try if size wraps around 0 */
    if ((unsigned long)(size) > (unsigned long)(nb)) {

      brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

      if (brk != (char*)(MORECORE_FAILURE)) {

        /* We do not need, and cannot use, another sbrk call to find end */
        snd_brk = brk + size;

        /*
          Record that we no longer have a contiguous sbrk region.
          After the first time mmap is used as backup, we do not
          ever rely on contiguous space since this could incorrectly
          bridge regions.
        */
        set_noncontiguous(av);
      }
    }
  }
#endif

  if (brk != (char*)(MORECORE_FAILURE)) {
    av->sbrked_mem += size;

    /*
      If MORECORE extends previous space, we can likewise extend top size.
    */

    if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
      set_head(old_top, (size + old_size) | PREV_INUSE);
    }

    /*
      Otherwise, make adjustments:

      * If the first time through or noncontiguous, we need to call sbrk
        just to find out where the end of memory lies.

      * We need to ensure that all returned chunks from malloc will meet
        MALLOC_ALIGNMENT

      * If there was an intervening foreign sbrk, we need to adjust the
        sbrk request size to account for the fact that we will not be
        able to combine new space with existing space in old_top.

      * Almost all systems internally allocate whole pages at a time, in
        which case we might as well use the whole last page of request.
        So we allocate enough more memory to hit a page boundary now,
        which in turn causes future contiguous calls to page-align.
    */

    else {
      front_misalign = 0;
      end_misalign = 0;
      correction = 0;
      aligned_brk = brk;

      /* handle contiguous cases */
      if (contiguous(av)) {

        /* Guarantee alignment of first new chunk made from this space */

        front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {

          /*
            Skip over some bytes to arrive at an aligned position.
            We don't need to specially mark these wasted front bytes.
            They will never be accessed anyway because
            prev_inuse of av->top (and any chunk created from its start)
            is always true after initialization.
          */

          correction = MALLOC_ALIGNMENT - front_misalign;
          aligned_brk += correction;
        }

        /*
          If this isn't adjacent to existing space, then we will not
          be able to merge with old_top space, so must add to 2nd request.
        */

        correction += old_size;

        /* Extend the end address to hit a page boundary */
        end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
        correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;

        assert(correction >= 0);
        snd_brk = (char*)(MORECORE(correction));

        /*
          If can't allocate correction, try to at least find out current
          brk. It might be enough to proceed without failing.

          Note that if second sbrk did NOT fail, we assume that space
          is contiguous with first sbrk. This is a safe assumption unless
          program is multithreaded but doesn't use locks and a foreign sbrk
          occurred between our first and second calls.
        */

        if (snd_brk == (char*)(MORECORE_FAILURE)) {
          correction = 0;
          snd_brk = (char*)(MORECORE(0));
        }
      }

      /* handle non-contiguous cases */
      else {
        /* MORECORE/mmap must correctly align */
        assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0);

        /* Find out current end of memory */
        if (snd_brk == (char*)(MORECORE_FAILURE)) {
          snd_brk = (char*)(MORECORE(0));
        }
      }

      /* Adjust top based on results of second sbrk */
      if (snd_brk != (char*)(MORECORE_FAILURE)) {
        av->top = (mchunkptr)aligned_brk;
        set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
        av->sbrked_mem += correction;

        /*
          If not the first time through, we either have a
          gap due to foreign sbrk or a non-contiguous region. Insert a
          double fencepost at old_top to prevent consolidation with space
          we don't own. These fenceposts are artificial chunks that are
          marked as inuse and are in any case too small to use. We need
          two to make sizes and alignments work out.
        */
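
        /*
          Schematic picture of the result (addresses increase to the
          right; widths not to scale):

            | shrunken old_top ... | SIZE_SZ fencepost | SIZE_SZ fencepost |

          Both fenceposts carry PREV_INUSE and a size of SIZE_SZ, so
          no chunk ever coalesces past the end of space we own.
        */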

        if (old_size != 0) {
          /*
            Shrink old_top to insert fenceposts, keeping size a
            multiple of MALLOC_ALIGNMENT. We know there is at least
            enough space in old_top to do this.
          */
          old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
          set_head(old_top, old_size | PREV_INUSE);

          /*
            Note that the following assignments completely overwrite
            old_top when old_size was previously MINSIZE. This is
            intentional. We need the fencepost, even if old_top otherwise gets
            lost.
          */
          chunk_at_offset(old_top, old_size)->size =
            SIZE_SZ|PREV_INUSE;

          chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
            SIZE_SZ|PREV_INUSE;

          /* If possible, release the rest. */
          if (old_size >= MINSIZE) {
            fREe(chunk2mem(old_top));
          }

        }
      }
    }

    /* Update statistics */
    sum = av->sbrked_mem;
    if (sum > (unsigned long)(av->max_sbrked_mem))
      av->max_sbrked_mem = sum;

    sum += av->mmapped_mem;
    if (sum > (unsigned long)(av->max_total_mem))
      av->max_total_mem = sum;

    check_malloc_state();

    /* finally, do the allocation */
    p = av->top;
    size = chunksize(p);

    /* check that one of the above allocation paths succeeded */
    if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
      remainder_size = size - nb;
      remainder = chunk_at_offset(p, nb);
      av->top = remainder;
      set_head(p, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);
      check_malloced_chunk(p, nb);
      return chunk2mem(p);
    }
  }

  /* catch all failure paths */
  MALLOC_FAILURE_ACTION;
  return 0;
}


/*
  sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back
  to the system (via negative arguments to sbrk) if there is unused
  memory at the `high' end of the malloc pool. It is called
  automatically by free() when top space exceeds the trim
  threshold. It is also called by the public malloc_trim routine. It
  returns 1 if it actually released any memory, else 0.
*/

#if __STD_C
static int sYSTRIm(size_t pad, mstate av)
#else
static int sYSTRIm(pad, av) size_t pad; mstate av;
#endif
{
  long  top_size;        /* Amount of top-most memory */
  long  extra;           /* Amount to release */
  long  released;        /* Amount actually released */
  char* current_brk;     /* address returned by pre-check sbrk call */
  char* new_brk;         /* address returned by post-check sbrk call */
  size_t pagesz;

  pagesz = av->pagesize;
  top_size = chunksize(av->top);

  /* Release in pagesize units, keeping at least one page */
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
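
  /*
    Worked example (illustrative, with pagesz = 4096, pad = 0, and
    MINSIZE = 16): for top_size = 300000, (299984 + 4095) / 4096 - 1
    = 73 pages, so extra = 299008 bytes are offered back while top
    retains the remainder, always at least pad + MINSIZE bytes.
  */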

  if (extra > 0) {

    /*
      Only proceed if end of memory is where we last set it.
      This avoids problems if there were foreign sbrk calls.
    */
    current_brk = (char*)(MORECORE(0));
    if (current_brk == (char*)(av->top) + top_size) {

      /*
        Attempt to release memory. We ignore MORECORE return value,
        and instead call again to find out where new end of memory is.
        This avoids problems if first call releases less than we asked,
        or if failure somehow altered brk value. (We could still
        encounter problems if it altered brk in some very bad way,
        but the only thing we can do is adjust anyway, which will cause
        some downstream failure.)
      */

      MORECORE(-extra);
      new_brk = (char*)(MORECORE(0));

      if (new_brk != (char*)MORECORE_FAILURE) {
        released = (long)(current_brk - new_brk);

        if (released != 0) {
          /* Success. Adjust top. */
          av->sbrked_mem -= released;
          set_head(av->top, (top_size - released) | PREV_INUSE);
          check_malloc_state();
          return 1;
        }
      }
    }
  }
  return 0;
}

/*
  ------------------------------ malloc ------------------------------
*/

#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int    idx;              /* associated bin index */
  mbinptr         bin;              /* associated bin */
  mfastbinptr*    fb;               /* associated fastbin */

  mchunkptr       victim;           /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int             victim_index;     /* its bin index */

  mchunkptr       remainder;        /* remainder from a split */
  unsigned long   remainder_size;   /* its size */

  unsigned int    block;            /* bit map traverser */
  unsigned int    bit;              /* bit map traverser */
  unsigned int    map;              /* current word of binmap */

  mchunkptr       fwd;              /* misc temp for linking */
  mchunkptr       bck;              /* misc temp for linking */

  /*
    Convert request size to internal form by adding SIZE_SZ bytes
    overhead plus possibly more to obtain necessary alignment and/or
    to obtain a size of at least MINSIZE, the smallest allocatable
    size. Also, checked_request2size traps (returning 0) request sizes
    that are so large that they wrap around zero when padded and
    aligned.
  */
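
  /*
    For instance (assuming 4-byte SIZE_SZ and MALLOC_ALIGN_MASK of 7):
    a request of 20 bytes becomes (20 + 4 + 7) & ~7 = 24, i.e. one
    word of overhead, rounded up to the alignment unit.
  */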

  checked_request2size(bytes, nb);

  /*
    If the size qualifies as a fastbin, first check corresponding bin.
    This code is safe to execute even if av is not yet initialized, so we
    can try it without checking, which saves some time on this fast path.
  */

  if ((unsigned long)(nb) <= (unsigned long)(av->max_fast)) {
    fb = &(av->fastbins[(fastbin_index(nb))]);
    if ( (victim = *fb) != 0) {
      *fb = victim->fd;
      check_remalloced_chunk(victim, nb);
      return chunk2mem(victim);
    }
  }

  /*
    If a small request, check regular bin. Since these "smallbins"
    hold one size each, no searching within bins is necessary.
    (For a large request, we need to wait until unsorted chunks are
    processed to find best fit. But for small ones, fits are exact
    anyway, so we can check now, which is faster.)
  */

  if (in_smallbin_range(nb)) {
    idx = smallbin_index(nb);
    bin = bin_at(av,idx);

    if ( (victim = last(bin)) != bin) {
      if (victim == 0) /* initialization check */
        malloc_consolidate(av);
      else {
        bck = victim->bk;
        set_inuse_bit_at_offset(victim, nb);
        bin->bk = bck;
        bck->fd = bin;

        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }
    }
  }

  /*
    If this is a large request, consolidate fastbins before continuing.
    While it might look excessive to kill all fastbins before
    even seeing if there is space available, this avoids
    fragmentation problems normally associated with fastbins.
    Also, in practice, programs tend to have runs of either small or
    large requests, but less often mixtures, so consolidation is not
    invoked all that often in most programs. And the programs that do
    invoke it frequently tend otherwise to fragment.
  */

  else {
    idx = largebin_index(nb);
    if (have_fastchunks(av))
      malloc_consolidate(av);
  }

  /*
    Process recently freed or remaindered chunks, taking one only if
    it is exact fit, or, if this is a small request, the chunk is
    remainder from the most recent non-exact fit. Place other traversed
    chunks in bins. Note that this step is the only place in any routine
    where chunks are placed in bins.

    The outer loop here is needed because we might not realize until
    near the end of malloc that we should have consolidated, so must
    do so and retry. This happens at most once, and only when we would
    otherwise need to expand memory to service a "small" request.
  */

  for(;;) {

    while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
      bck = victim->bk;
      size = chunksize(victim);

      /*
        If a small request, try to use last remainder if it is the
        only chunk in unsorted bin. This helps promote locality for
        runs of consecutive small requests. This is the only
        exception to best-fit, and applies only when there is
        no exact fit for a small chunk.
      */

      if (in_smallbin_range(nb) &&
          bck == unsorted_chunks(av) &&
          victim == av->last_remainder &&
          (unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {

        /* split and reattach remainder */
        remainder_size = size - nb;
        remainder = chunk_at_offset(victim, nb);
        unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
        av->last_remainder = remainder;
        remainder->bk = remainder->fd = unsorted_chunks(av);

        set_head(victim, nb | PREV_INUSE);
        set_head(remainder, remainder_size | PREV_INUSE);
        set_foot(remainder, remainder_size);

        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }

      /* remove from unsorted list */
      unsorted_chunks(av)->bk = bck;
      bck->fd = unsorted_chunks(av);

      /* Take now instead of binning if exact fit */

      if (size == nb) {
        set_inuse_bit_at_offset(victim, size);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }

      /* place chunk in bin */

      if (in_smallbin_range(size)) {
        victim_index = smallbin_index(size);
        bck = bin_at(av, victim_index);
        fwd = bck->fd;
      }
      else {
        victim_index = largebin_index(size);
        bck = bin_at(av, victim_index);
        fwd = bck->fd;

        /* maintain large bins in sorted order */
        if (fwd != bck) {
          size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
          /* if smaller than smallest, bypass loop below */
          if ((unsigned long)(size) <= (unsigned long)(bck->bk->size)) {
            fwd = bck;
            bck = bck->bk;
          }
          else {
            while ((unsigned long)(size) < (unsigned long)(fwd->size))
              fwd = fwd->fd;
            bck = fwd->bk;
          }
        }
      }

      mark_bin(av, victim_index);
      victim->bk = bck;
      victim->fd = fwd;
      fwd->bk = victim;
      bck->fd = victim;
    }

    /*
      If a large request, scan through the chunks of current bin in
      sorted order to find smallest that fits. This is the only step
      where an unbounded number of chunks might be scanned without doing
      anything useful with them. However the lists tend to be short.
    */

    if (!in_smallbin_range(nb)) {
      bin = bin_at(av, idx);

      /* skip scan if empty or largest chunk is too small */
      if ((victim = last(bin)) != bin &&
          (unsigned long)(first(bin)->size) >= (unsigned long)(nb)) {

        while (((unsigned long)(size = chunksize(victim)) <
                (unsigned long)(nb)))
          victim = victim->bk;

        remainder_size = size - nb;
        unlink(victim, bck, fwd);

        /* Exhaust */
        if (remainder_size < MINSIZE) {
          set_inuse_bit_at_offset(victim, size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }
        /* Split */
        else {
          remainder = chunk_at_offset(victim, nb);
          unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
          remainder->bk = remainder->fd = unsorted_chunks(av);
          set_head(victim, nb | PREV_INUSE);
          set_head(remainder, remainder_size | PREV_INUSE);
          set_foot(remainder, remainder_size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }
      }
    }

    /*
      Search for a chunk by scanning bins, starting with next largest
      bin. This search is strictly by best-fit; i.e., the smallest
      (with ties going to approximately the least recently used) chunk
      that fits is selected.

      The bitmap avoids needing to inspect most bins just to find that
      they are empty. The particular case of skipping all bins during
      warm-up phases when no chunks have been returned yet is faster
      than it might look.
    */
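
    /*
      For example (using the BINMAPSHIFT of 5 and 32-bit map words
      defined earlier): bin index 70 lives in binmap word 70 >> 5 = 2
      at bit 1 << (70 & 31) = 1 << 6, so one word test can rule out
      32 bins at a time.
    */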

    ++idx;
    bin = bin_at(av,idx);
    block = idx2block(idx);
    map = av->binmap[block];
    bit = idx2bit(idx);

    for (;;) {

      /* Skip rest of block if there are no more set bits in this block. */
      if (bit > map || bit == 0) {
        do {
          if (++block >= BINMAPSIZE)  /* out of bins */
            goto use_top;
        } while ( (map = av->binmap[block]) == 0);

        bin = bin_at(av, (block << BINMAPSHIFT));
        bit = 1;
      }

      /* Advance to bin with set bit. There must be one. */
      while ((bit & map) == 0) {
        bin = next_bin(bin);
        bit <<= 1;
        assert(bit != 0);
      }

      /* Inspect the bin. It is likely to be non-empty */
      victim = last(bin);

      /* If a false alarm (empty bin), clear the bit. */
      if (victim == bin) {
        av->binmap[block] = map &= ~bit; /* Write through */
        bin = next_bin(bin);
        bit <<= 1;
      }

      else {
        size = chunksize(victim);

        /* We know the first chunk in this bin is big enough to use. */
        assert((unsigned long)(size) >= (unsigned long)(nb));

        remainder_size = size - nb;

        /* unlink */
        bck = victim->bk;
        bin->bk = bck;
        bck->fd = bin;

        /* Exhaust */
        if (remainder_size < MINSIZE) {
          set_inuse_bit_at_offset(victim, size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }

        /* Split */
        else {
          remainder = chunk_at_offset(victim, nb);

          unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
          remainder->bk = remainder->fd = unsorted_chunks(av);
          /* advertise as last remainder */
          if (in_smallbin_range(nb))
            av->last_remainder = remainder;

          set_head(victim, nb | PREV_INUSE);
          set_head(remainder, remainder_size | PREV_INUSE);
          set_foot(remainder, remainder_size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }
      }
    }

  use_top:
    /*
      If large enough, split off the chunk bordering the end of memory
      (held in av->top). Note that this is in accord with the best-fit
      search rule. In effect, av->top is treated as larger (and thus
      less well fitting) than any other available chunk since it can
      be extended to be as large as necessary (up to system
      limitations).

      We require that av->top always exists (i.e., has size >=
      MINSIZE) after initialization, so if it would otherwise be
      exhausted by current request, it is replenished. (The main
      reason for ensuring it exists is that we may need MINSIZE space
      to put in fenceposts in sysmalloc.)
    */

    victim = av->top;
    size = chunksize(victim);

    if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
      remainder_size = size - nb;
      remainder = chunk_at_offset(victim, nb);
      av->top = remainder;
      set_head(victim, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);

      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /*
      If there is space available in fastbins, consolidate and retry,
      to possibly avoid expanding memory. This can occur only if nb is
      in smallbin range so we didn't consolidate upon entry.
    */

    else if (have_fastchunks(av)) {
      assert(in_smallbin_range(nb));
      malloc_consolidate(av);
      idx = smallbin_index(nb); /* restore original bin index */
    }

    /*
      Otherwise, relay to handle system-dependent cases
    */
    else
      return sYSMALLOc(nb, av);
  }
}

/*
  ------------------------------ free ------------------------------
*/

#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mstate av = get_malloc_state();

  mchunkptr       p;           /* chunk corresponding to mem */
  INTERNAL_SIZE_T size;        /* its size */
  mfastbinptr*    fb;          /* associated fastbin */
  mchunkptr       nextchunk;   /* next contiguous chunk */
  INTERNAL_SIZE_T nextsize;    /* its size */
  int             nextinuse;   /* true if nextchunk is used */
  INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
  mchunkptr       bck;         /* misc temp for linking */
  mchunkptr       fwd;         /* misc temp for linking */


  /* free(0) has no effect */
  if (mem != 0) {
    p = mem2chunk(mem);
    size = chunksize(p);

    check_inuse_chunk(p);

    /*
      If eligible, place chunk on a fastbin so it can be found
      and used quickly in malloc.
    */

    if ((unsigned long)(size) <= (unsigned long)(av->max_fast)

#if TRIM_FASTBINS
        /*
          If TRIM_FASTBINS set, don't place chunks
          bordering top into fastbins
        */
        && (chunk_at_offset(p, size) != av->top)
#endif
        ) {

      set_fastchunks(av);
      fb = &(av->fastbins[fastbin_index(size)]);
      p->fd = *fb;
      *fb = p;
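
      /*
        This push is LIFO: a subsequent malloc of the same size pops
        this exact chunk first, while its memory is likely still warm
        in cache.
      */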
    }

    /*
      Consolidate other non-mmapped chunks as they arrive.
    */

    else if (!chunk_is_mmapped(p)) {
      nextchunk = chunk_at_offset(p, size);
      nextsize = chunksize(nextchunk);

      /* consolidate backward */
      if (!prev_inuse(p)) {
        prevsize = p->prev_size;
        size += prevsize;
        p = chunk_at_offset(p, -((long) prevsize));
        unlink(p, bck, fwd);
      }

      if (nextchunk != av->top) {
        /* get and clear inuse bit */
        nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
        set_head(nextchunk, nextsize);

        /* consolidate forward */
        if (!nextinuse) {
          unlink(nextchunk, bck, fwd);
          size += nextsize;
        }

        /*
          Place the chunk in unsorted chunk list. Chunks are
          not placed into regular bins until after they have
          been given one chance to be used in malloc.
        */

        bck = unsorted_chunks(av);
        fwd = bck->fd;
        p->bk = bck;
        p->fd = fwd;
        bck->fd = p;
        fwd->bk = p;

        set_head(p, size | PREV_INUSE);
        set_foot(p, size);

        check_free_chunk(p);
      }

      /*
        If the chunk borders the current high end of memory,
        consolidate into top
      */

      else {
        size += nextsize;
        set_head(p, size | PREV_INUSE);
        av->top = p;
        check_chunk(p);
      }

      /*
        If freeing a large space, consolidate possibly-surrounding
        chunks. Then, if the total unused topmost memory exceeds trim
        threshold, ask malloc_trim to reduce top.

        Unless max_fast is 0, we don't know if there are fastbins
        bordering top, so we cannot tell for sure whether threshold
        has been reached unless fastbins are consolidated. But we
        don't want to consolidate on each free. As a compromise,
        consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
        is reached.
      */

      if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
        if (have_fastchunks(av))
          malloc_consolidate(av);

#ifndef MORECORE_CANNOT_TRIM
        if ((unsigned long)(chunksize(av->top)) >=
            (unsigned long)(av->trim_threshold))
          sYSTRIm(av->top_pad, av);
#endif
      }

    }
    /*
      If the chunk was allocated via mmap, release via munmap().
      Note that if HAVE_MMAP is false but chunk_is_mmapped is
      true, then user must have overwritten memory. There's nothing
      we can do to catch this error unless DEBUG is set, in which case
      check_inuse_chunk (above) will have triggered an error.
    */

    else {
#if HAVE_MMAP
      int ret;
      INTERNAL_SIZE_T offset = p->prev_size;
      av->n_mmaps--;
      av->mmapped_mem -= (size + offset);
      ret = munmap((char*)p - offset, size + offset);
      /* munmap returns non-zero on failure */
      assert(ret == 0);
#endif
    }
  }
}

/*
  ------------------------- malloc_consolidate -------------------------

  malloc_consolidate is a specialized version of free() that tears
  down chunks held in fastbins. Free itself cannot be used for this
  purpose since, among other things, it might place chunks back onto
  fastbins. So, instead, we need to use a minor variant of the same
  code.

  Also, because this routine needs to be called the first time through
  malloc anyway, it turns out to be the perfect place to trigger
  initialization code.
*/

#if __STD_C
static void malloc_consolidate(mstate av)
#else
static void malloc_consolidate(av) mstate av;
#endif
{
  mfastbinptr*    fb;                 /* current fastbin being consolidated */
  mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
  mchunkptr       p;                  /* current chunk being consolidated */
  mchunkptr       nextp;              /* next chunk to consolidate */
  mchunkptr       unsorted_bin;       /* bin header */
  mchunkptr       first_unsorted;     /* chunk to link to */

  /* These have same use as in free() */
  mchunkptr       nextchunk;
  INTERNAL_SIZE_T size;
  INTERNAL_SIZE_T nextsize;
  INTERNAL_SIZE_T prevsize;
  int             nextinuse;
  mchunkptr       bck;
  mchunkptr       fwd;

  /*
    If max_fast is 0, we know that av hasn't
    yet been initialized, in which case do so below
  */

  if (av->max_fast != 0) {
    clear_fastchunks(av);

    unsorted_bin = unsorted_chunks(av);

    /*
      Remove each chunk from fast bin and consolidate it, placing it
      then in unsorted bin. Among other reasons for doing this,
      placing in unsorted bin avoids needing to calculate actual bins
      until malloc is sure that chunks aren't immediately going to be
      reused anyway.
    */

    maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
    fb = &(av->fastbins[0]);
    do {
      if ( (p = *fb) != 0) {
        *fb = 0;

        do {
          check_inuse_chunk(p);
          nextp = p->fd;

          /* Slightly streamlined version of consolidation code in free() */
          size = p->size & ~PREV_INUSE;
          nextchunk = chunk_at_offset(p, size);
          nextsize = chunksize(nextchunk);

          if (!prev_inuse(p)) {
            prevsize = p->prev_size;
            size += prevsize;
            p = chunk_at_offset(p, -((long) prevsize));
            unlink(p, bck, fwd);
          }

          if (nextchunk != av->top) {
            nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
            set_head(nextchunk, nextsize);

            if (!nextinuse) {
              size += nextsize;
              unlink(nextchunk, bck, fwd);
            }

            first_unsorted = unsorted_bin->fd;
            unsorted_bin->fd = p;
            first_unsorted->bk = p;

            set_head(p, size | PREV_INUSE);
            p->bk = unsorted_bin;
            p->fd = first_unsorted;
            set_foot(p, size);
          }

          else {
            size += nextsize;
            set_head(p, size | PREV_INUSE);
            av->top = p;
          }

        } while ( (p = nextp) != 0);

      }
    } while (fb++ != maxfb);
  }
  else {
    malloc_init_state(av);
    check_malloc_state();
  }
}

/*
  ------------------------------ realloc ------------------------------
*/


#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T  nb;              /* padded request size */

  mchunkptr        oldp;            /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T  oldsize;         /* its size */

  mchunkptr        newp;            /* chunk to return */
  INTERNAL_SIZE_T  newsize;         /* its size */
  Void_t*          newmem;          /* corresponding user mem */

  mchunkptr        next;            /* next contiguous chunk after oldp */

  mchunkptr        remainder;       /* extra space at end of newp */
  unsigned long    remainder_size;  /* its size */

  mchunkptr        bck;             /* misc temp for linking */
  mchunkptr        fwd;             /* misc temp for linking */

  unsigned long    copysize;        /* bytes to copy */
  unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
  INTERNAL_SIZE_T* s;               /* copy source */
  INTERNAL_SIZE_T* d;               /* copy destination */


#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    fREe(oldmem);
    return 0;
  }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  checked_request2size(bytes, nb);

  oldp    = mem2chunk(oldmem);
  oldsize = chunksize(oldp);

  check_inuse_chunk(oldp);

  if (!chunk_is_mmapped(oldp)) {

    if ((unsigned long)(oldsize) >= (unsigned long)(nb)) {
      /* already big enough; split below */
      newp = oldp;
      newsize = oldsize;
    }

    else {
      next = chunk_at_offset(oldp, oldsize);

      /* Try to expand forward into top */
      if (next == av->top &&
          (unsigned long)(newsize = oldsize + chunksize(next)) >=
          (unsigned long)(nb + MINSIZE)) {
        set_head_size(oldp, nb);
        av->top = chunk_at_offset(oldp, nb);
        set_head(av->top, (newsize - nb) | PREV_INUSE);
        return chunk2mem(oldp);
      }

      /* Try to expand forward into next chunk; split off remainder below */
      else if (next != av->top &&
               !inuse(next) &&
               (unsigned long)(newsize = oldsize + chunksize(next)) >=
               (unsigned long)(nb)) {
        newp = oldp;
        unlink(next, bck, fwd);
      }

      /* allocate, copy, free */
      else {
        newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
        if (newmem == 0)
          return 0; /* propagate failure */

        newp = mem2chunk(newmem);
        newsize = chunksize(newp);

        /*
          Avoid copy if newp is next chunk after oldp.
        */
        if (newp == next) {
          newsize += oldsize;
          newp = oldp;
        }
        else {
          /*
            Unroll copy of <= 36 bytes (72 if 8byte sizes)
            We know that contents have an odd number of
            INTERNAL_SIZE_T-sized words; minimally 3.
          */
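
          /*
            Why odd and at least 3: chunk sizes are multiples of
            MALLOC_ALIGNMENT (2*SIZE_SZ), so copysize = oldsize -
            SIZE_SZ is an odd multiple of SIZE_SZ, and the smallest
            chunk (MINSIZE, 16 bytes when SIZE_SZ is 4) yields 3 words.
          */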
---|
3947 | |
---|
3948 | copysize = oldsize - SIZE_SZ; |
---|
3949 | s = (INTERNAL_SIZE_T*)(oldmem); |
---|
3950 | d = (INTERNAL_SIZE_T*)(newmem); |
---|
        ncopies = copysize / sizeof(INTERNAL_SIZE_T);
        assert(ncopies >= 3);

        if (ncopies > 9)
          MALLOC_COPY(d, s, copysize);

        else {
          *(d+0) = *(s+0);
          *(d+1) = *(s+1);
          *(d+2) = *(s+2);
          if (ncopies > 4) {
            *(d+3) = *(s+3);
            *(d+4) = *(s+4);
            if (ncopies > 6) {
              *(d+5) = *(s+5);
              *(d+6) = *(s+6);
              if (ncopies > 8) {
                *(d+7) = *(s+7);
                *(d+8) = *(s+8);
              }
            }
          }
        }

        fREe(oldmem);
        check_inuse_chunk(newp);
        return chunk2mem(newp);
      }
    }

    /* If possible, free extra space in old or extended chunk */

    assert((unsigned long)(newsize) >= (unsigned long)(nb));

    remainder_size = newsize - nb;

    if (remainder_size < MINSIZE) { /* not enough extra to split off */
      set_head_size(newp, newsize);
      set_inuse_bit_at_offset(newp, newsize);
    }
    else { /* split remainder */
      remainder = chunk_at_offset(newp, nb);
      set_head_size(newp, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      /* Mark remainder as inuse so free() won't complain */
      set_inuse_bit_at_offset(remainder, remainder_size);
      fREe(chunk2mem(remainder));
    }

    check_inuse_chunk(newp);
    return chunk2mem(newp);
  }

  /*
    Handle mmap cases
  */

  else {
#if HAVE_MMAP

#if HAVE_MREMAP
    INTERNAL_SIZE_T offset = oldp->prev_size;
    size_t pagemask = av->pagesize - 1;
    char *cp;
    unsigned long sum;

    /* Note the extra SIZE_SZ overhead */
    newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;

    /* don't need to remap if still within same page */
    if (oldsize == newsize - offset)
      return oldmem;

    cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);

    if (cp != (char*)MORECORE_FAILURE) {

      newp = (mchunkptr)(cp + offset);
      set_head(newp, (newsize - offset)|IS_MMAPPED);

      assert(aligned_OK(chunk2mem(newp)));
      assert(newp->prev_size == offset);

      /* update statistics */
      sum = av->mmapped_mem += newsize - oldsize;
      if (sum > (unsigned long)(av->max_mmapped_mem))
        av->max_mmapped_mem = sum;
      sum += av->sbrked_mem;
      if (sum > (unsigned long)(av->max_total_mem))
        av->max_total_mem = sum;

      return chunk2mem(newp);
    }
#endif

    /* Note the extra SIZE_SZ overhead. */
    if ((unsigned long)(oldsize) >= (unsigned long)(nb + SIZE_SZ))
      newmem = oldmem; /* do nothing */
    else {
      /* Must alloc, copy, free. */
      newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
      if (newmem != 0) {
        MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
        fREe(oldmem);
      }
    }
    return newmem;

#else
    /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
    check_malloc_state();
    MALLOC_FAILURE_ACTION;
    return 0;
#endif
  }
}

/*
  ------------------------------ memalign ------------------------------
*/

#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;             /* padded request size */
  char*           m;              /* memory returned by malloc call */
  mchunkptr       p;              /* corresponding chunk */
  char*           brk;            /* alignment point within p */
  mchunkptr       newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;        /* its size */
  INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
  mchunkptr       remainder;      /* spare room at end to split off */
  unsigned long   remainder_size; /* its size */
  INTERNAL_SIZE_T size;

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Make sure alignment is power of 2 (in case MINSIZE is not). */
  if ((alignment & (alignment - 1)) != 0) {
    size_t a = MALLOC_ALIGNMENT * 2;
    while ((unsigned long)a < (unsigned long)alignment) a <<= 1;
    alignment = a;
  }

  checked_request2size(bytes, nb);

  /*
    Strategy: find a spot within that chunk that meets the alignment
    request, and then possibly free the leading and trailing space.
  */


  /* Call malloc with worst case padding to hit alignment. */

  m = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) != 0) { /* misaligned */

    /*
      Find an aligned spot inside chunk.  Since we need to give back
      leading space in a chunk of at least MINSIZE, if the first
      calculation places us at a spot with less than MINSIZE leader,
      we can move to the next aligned spot -- we've allocated enough
      total room so that this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
                           -((signed long) alignment));
    if ((unsigned long)(brk - (char*)(p)) < MINSIZE)
      brk += alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

    /* For mmapped chunks, just adjust offset */
    if (chunk_is_mmapped(p)) {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }

    /* Otherwise, give back leader, use the rest */
    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb &&
            (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */
  if (!chunk_is_mmapped(p)) {
    size = chunksize(p);
    if ((unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {
      remainder_size = size - nb;
      remainder = chunk_at_offset(p, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_head_size(p, nb);
      fREe(chunk2mem(remainder));
    }
  }

  check_inuse_chunk(p);
  return chunk2mem(p);
}
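
/*
  Illustrative use (not part of the original code): requesting
  cache-line-aligned storage.  The returned pointer is a multiple of
  the (power-of-two-rounded) alignment and is freed normally:

    void* p = memalign(64, 200);
    assert(((unsigned long)p % 64) == 0);
    free(p);
*/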

/*
  ------------------------------ calloc ------------------------------
*/

#if __STD_C
Void_t* cALLOc(size_t n_elements, size_t elem_size)
#else
Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
#endif
{
  mchunkptr p;
  unsigned long clearsize;
  unsigned long nclears;
  INTERNAL_SIZE_T* d;
  Void_t* mem;

  /* Guard against size_t overflow in the product; an overflowed
     n_elements * elem_size would silently allocate too few bytes. */
  if (n_elements != 0 && (n_elements * elem_size) / n_elements != elem_size) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }

  mem = mALLOc(n_elements * elem_size);

  if (mem != 0) {
    p = mem2chunk(mem);

#if MMAP_CLEARS
    if (!chunk_is_mmapped(p))  /* don't need to clear mmapped space */
#endif
    {
      /*
        Unroll clear of <= 36 bytes (72 if 8-byte sizes)
        We know that contents have an odd number of
        INTERNAL_SIZE_T-sized words; minimally 3.
      */

      d = (INTERNAL_SIZE_T*)mem;
      clearsize = chunksize(p) - SIZE_SZ;
      nclears = clearsize / sizeof(INTERNAL_SIZE_T);
      assert(nclears >= 3);

      if (nclears > 9)
        MALLOC_ZERO(d, clearsize);

      else {
        *(d+0) = 0;
        *(d+1) = 0;
        *(d+2) = 0;
        if (nclears > 4) {
          *(d+3) = 0;
          *(d+4) = 0;
          if (nclears > 6) {
            *(d+5) = 0;
            *(d+6) = 0;
            if (nclears > 8) {
              *(d+7) = 0;
              *(d+8) = 0;
            }
          }
        }
      }
    }
  }
  return mem;
}

/*
  ------------------------------ cfree ------------------------------
*/

#if __STD_C
void cFREe(Void_t *mem)
#else
void cFREe(mem) Void_t *mem;
#endif
{
  fREe(mem);
}

/*
  ------------------------- independent_calloc -------------------------
*/

#if __STD_C
Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
#else
Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
#endif
{
  size_t sz = elem_size; /* serves as 1-element array */
  /* opts arg of 3 means all elements are same size, and should be cleared */
  return iALLOc(n_elements, &sz, 3, chunks);
}

/*
  ------------------------- independent_comalloc -------------------------
*/

#if __STD_C
Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
#else
Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
#endif
{
  return iALLOc(n_elements, sizes, 0, chunks);
}


/*
  ------------------------------ ialloc ------------------------------
  ialloc provides common support for independent_X routines, handling
  all of the combinations that can result.

  The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
*/
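
/*
  Illustrative use (not part of the original code), with a hypothetical
  element type.  The opts value 3 passed by iCALLOc sets bit 0 (all
  elements same size) and bit 1 (zero the elements):

    struct node { struct node* next; int v; };

    struct node** pool =
      (struct node**) independent_calloc(3, sizeof(struct node), 0);
    if (pool != 0) {
      /* pool[0..2] are zeroed, independently freeable chunks */
      free(pool[0]); free(pool[1]); free(pool[2]);
      free(pool);    /* the pointer array itself was malloced too */
    }
*/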


#if __STD_C
static Void_t** iALLOc(size_t n_elements,
                       size_t* sizes,
                       int opts,
                       Void_t* chunks[])
#else
static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
#endif
{
  mstate av = get_malloc_state();
  INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
  INTERNAL_SIZE_T contents_size;  /* total size of elements */
  INTERNAL_SIZE_T array_size;     /* request size of pointer array */
  Void_t*         mem;            /* malloced aggregate space */
  mchunkptr       p;              /* corresponding chunk */
  INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
  Void_t**        marray;         /* either "chunks" or malloced ptr array */
  mchunkptr       array_chunk;    /* chunk for malloced ptr array */
  int             mmx;            /* to disable mmap */
  INTERNAL_SIZE_T size;
  size_t          i;

  /* Ensure initialization/consolidation */
  if (have_fastchunks(av)) malloc_consolidate(av);

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (Void_t**) mALLOc(0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(Void_t*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  /* subtract out alignment bytes from total to minimize overallocation */
  size = contents_size + array_size - MALLOC_ALIGN_MASK;

  /*
    Allocate the aggregate chunk.
    But first disable mmap so malloc won't use it, since
    we would not be able to later free/realloc space internal
    to a segregated mmap region.
  */
  mmx = av->n_mmaps_max; /* disable mmap */
  av->n_mmaps_max = 0;
  mem = mALLOc(size);
  av->n_mmaps_max = mmx; /* reset mmap */
  if (mem == 0)
    return 0;

  p = mem2chunk(mem);
  assert(!chunk_is_mmapped(p));
  remainder_size = chunksize(p);

  if (opts & 0x2) { /* optionally clear the elements */
    MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    array_chunk = chunk_at_offset(p, contents_size);
    marray = (Void_t**) (chunk2mem(array_chunk));
    set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_head(p, size | PREV_INUSE);
      p = chunk_at_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_head(p, remainder_size | PREV_INUSE);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0)
      assert(remainder_size == element_size);
    else
      assert(remainder_size == request2size(sizes[i]));
    check_inuse_chunk(mem2chunk(marray));
  }

  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(mem2chunk(marray[i]));
#endif

  return marray;
}


/*
  ------------------------------ valloc ------------------------------
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  /* Ensure initialization/consolidation */
  mstate av = get_malloc_state();
  if (have_fastchunks(av)) malloc_consolidate(av);
  return mEMALIGn(av->pagesize, bytes);
}

/*
  ------------------------------ pvalloc ------------------------------
*/


#if __STD_C
Void_t* pVALLOc(size_t bytes)
#else
Void_t* pVALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();
  size_t pagesz;

  /* Ensure initialization/consolidation */
  if (have_fastchunks(av)) malloc_consolidate(av);
  pagesz = av->pagesize;
  return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
}
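
/*
  For example (illustrative, not part of the original code): with a
  4096-byte page, pvalloc(1) requests memalign(4096, 4096) and
  pvalloc(4097) requests memalign(4096, 8192); adding pagesz-1 and
  masking with ~(pagesz-1) rounds the byte count up to a page multiple.
*/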


/*
  ------------------------------ malloc_trim ------------------------------
*/

#if __STD_C
int mTRIm(size_t pad)
#else
int mTRIm(pad) size_t pad;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

#ifndef MORECORE_CANNOT_TRIM
  return sYSTRIm(pad, av);
#else
  return 0;
#endif
}


/*
  ------------------------- malloc_usable_size -------------------------
*/

#if __STD_C
size_t mUSABLe(Void_t* mem)
#else
size_t mUSABLe(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem != 0) {
    p = mem2chunk(mem);
    if (chunk_is_mmapped(p))
      return chunksize(p) - 2*SIZE_SZ;
    else if (inuse(p))
      return chunksize(p) - SIZE_SZ;
  }
  return 0;
}
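
/*
  Illustrative use (not part of the original code).  The reported value
  may exceed the requested size because of chunk padding, and the full
  reported extent may be used by the caller:

    void* p = malloc(25);
    size_t n = malloc_usable_size(p);  /* n >= 25 */
    memset(p, 0, n);                   /* the whole extent is usable */
    free(p);
*/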

/*
  ------------------------------ mallinfo ------------------------------
*/

struct mallinfo mALLINFo()
{
  mstate av = get_malloc_state();
  struct mallinfo mi;
  int i;
  mbinptr b;
  mchunkptr p;
  INTERNAL_SIZE_T avail;
  INTERNAL_SIZE_T fastavail;
  int nblocks;
  int nfastblocks;

  /* Ensure initialization */
  if (av->top == 0) malloc_consolidate(av);

  check_malloc_state();

  /* Account for top */
  avail = chunksize(av->top);
  nblocks = 1; /* top always exists */

  /* traverse fastbins */
  nfastblocks = 0;
  fastavail = 0;

  for (i = 0; i < NFASTBINS; ++i) {
    for (p = av->fastbins[i]; p != 0; p = p->fd) {
      ++nfastblocks;
      fastavail += chunksize(p);
    }
  }

  avail += fastavail;

  /* traverse regular bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av, i);
    for (p = last(b); p != b; p = p->bk) {
      ++nblocks;
      avail += chunksize(p);
    }
  }

  mi.smblks = nfastblocks;
  mi.ordblks = nblocks;
  mi.fordblks = avail;
  mi.uordblks = av->sbrked_mem - avail;
  mi.arena = av->sbrked_mem;
  mi.hblks = av->n_mmaps;
  mi.hblkhd = av->mmapped_mem;
  mi.fsmblks = fastavail;
  mi.keepcost = chunksize(av->top);
  mi.usmblks = av->max_total_mem;
  return mi;
}

/*
  ------------------------------ malloc_stats ------------------------------
*/

void mSTATs()
{
  struct mallinfo mi = mALLINFo();

#ifdef WIN32
  {
    unsigned long free, reserved, committed;
    vminfo (&free, &reserved, &committed);
    fprintf(stderr, "free bytes       = %10lu\n",
            free);
    fprintf(stderr, "reserved bytes   = %10lu\n",
            reserved);
    fprintf(stderr, "committed bytes  = %10lu\n",
            committed);
  }
#endif


  fprintf(stderr, "max system bytes = %10lu\n",
          (unsigned long)(mi.usmblks));
  fprintf(stderr, "system bytes     = %10lu\n",
          (unsigned long)(mi.arena + mi.hblkhd));
  fprintf(stderr, "in use bytes     = %10lu\n",
          (unsigned long)(mi.uordblks + mi.hblkhd));


#ifdef WIN32
  {
    unsigned long kernel, user;
    if (cpuinfo (TRUE, &kernel, &user)) {
      fprintf(stderr, "kernel ms        = %10lu\n",
              kernel);
      fprintf(stderr, "user ms          = %10lu\n",
              user);
    }
  }
#endif
}


/*
  ------------------------------ mallopt ------------------------------
*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

  switch(param_number) {
  case M_MXFAST:
    if (value >= 0 && value <= MAX_FAST_SIZE) {
      set_max_fast(av, value);
      return 1;
    }
    else
      return 0;

  case M_TRIM_THRESHOLD:
    av->trim_threshold = value;
    return 1;

  case M_TOP_PAD:
    av->top_pad = value;
    return 1;

  case M_MMAP_THRESHOLD:
    av->mmap_threshold = value;
    return 1;

  case M_MMAP_MAX:
#if !HAVE_MMAP
    if (value != 0)
      return 0;
#endif
    av->n_mmaps_max = value;
    return 1;

  default:
    return 0;
  }
}
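
/*
  Illustrative tuning calls (not part of the original code; the values
  are arbitrary examples):

    mallopt(M_MXFAST, 0);                 /* disable fastbin caching */
    mallopt(M_TRIM_THRESHOLD, 256*1024);  /* trim sooner than default */
    mallopt(M_MMAP_MAX, 0);               /* never mmap large requests */

  Each call returns 1 if the parameter was set, 0 otherwise.
*/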


/*
  -------------------- Alternative MORECORE functions --------------------
*/


/*
  General Requirements for MORECORE.

  The MORECORE function must have the following properties:

  If MORECORE_CONTIGUOUS is false:

    * MORECORE must allocate in multiples of pagesize. It will
      only be called with arguments that are multiples of pagesize.

    * MORECORE(0) must return an address that is at least
      MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)

  else (i.e. If MORECORE_CONTIGUOUS is true):

    * Consecutive calls to MORECORE with positive arguments
      return increasing addresses, indicating that space has been
      contiguously extended.

    * MORECORE need not allocate in multiples of pagesize.
      Calls to MORECORE need not have args of multiples of pagesize.

    * MORECORE need not page-align.

  In either case:

    * MORECORE may allocate more memory than requested. (Or even less,
      but this will generally result in a malloc failure.)

    * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call. This malloc does NOT call MORECORE(0)
      until at least one call with positive arguments is made, so
      the initial value returned is not important.

    * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.

    * MORECORE need not handle negative arguments -- it may instead
      just return MORECORE_FAILURE when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM.

  There is some variation across systems about the type of the
  argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
  actually be size_t, because sbrk supports negative args, so it is
  normally the signed type of the same width as size_t (sometimes
  declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
  matter though. Internally, we use "long" as arguments, which should
  work across all reasonable possibilities.

  Additionally, if MORECORE ever returns failure for a positive
  request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
  system allocator. This is a useful backup strategy for systems with
  holes in address spaces -- in this case sbrk cannot contiguously
  expand the heap, but mmap may be able to map noncontiguous space.

  If you'd like mmap to ALWAYS be used, you can define MORECORE to be
  a function that always returns MORECORE_FAILURE.

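  For instance (an illustrative sketch, not in the original text), a
  MORECORE that always fails, so that every request falls through to
  mmap, could look like:

    static void* always_fail_morecore (long increment)
    {
      return (void*) MORECORE_FAILURE;  /* never supplies sbrk space */
    }
    #define MORECORE always_fail_morecore
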
  If you are using this malloc with something other than sbrk (or its
  emulation) to supply memory regions, you probably want to set
  MORECORE_CONTIGUOUS as false. As an example, here is a custom
  allocator kindly contributed for pre-OSX macOS. It uses virtually
  but not necessarily physically contiguous non-paged memory (locked
  in, present and won't get swapped out). You can use it by
  uncommenting this section, adding some #includes, and setting up the
  appropriate defines above:

      #define MORECORE osMoreCore
      #define MORECORE_CONTIGUOUS 0

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MORECORE_FAILURE;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MORECORE_FAILURE;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/*
  --------------------------------------------------------------

  Emulation of sbrk for win32.
  Donated by J. Walter <Walter@GeNeSys-e.de>.
  For additional information about this code, and malloc on Win32, see
     http://www.genesys-e.de/jwalter/
*/


#ifdef WIN32

#ifdef _DEBUG
/* #define TRACE */
#endif

/* Support for USE_MALLOC_LOCK */
#ifdef USE_MALLOC_LOCK

/* Wait for spin lock */
static int slwait (int *sl) {
    while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
        Sleep (0);
    return 0;
}

/* Release spin lock */
static int slrelease (int *sl) {
    InterlockedExchange (sl, 0);
    return 0;
}

#ifdef NEEDED
/* Spin lock for emulation code */
static int g_sl;
#endif

#endif /* USE_MALLOC_LOCK */

/* getpagesize for windows */
static long getpagesize (void) {
    static long g_pagesize = 0;
    if (! g_pagesize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_pagesize = system_info.dwPageSize;
    }
    return g_pagesize;
}
static long getregionsize (void) {
    static long g_regionsize = 0;
    if (! g_regionsize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_regionsize = system_info.dwAllocationGranularity;
    }
    return g_regionsize;
}

/* A region list entry */
typedef struct _region_list_entry {
    void *top_allocated;
    void *top_committed;
    void *top_reserved;
    long reserve_size;
    struct _region_list_entry *previous;
} region_list_entry;

/* Allocate and link a region entry in the region list */
static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
    region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
    if (! next)
        return FALSE;
    next->top_allocated = (char *) base_reserved;
    next->top_committed = (char *) base_reserved;
    next->top_reserved = (char *) base_reserved + reserve_size;
    next->reserve_size = reserve_size;
    next->previous = *last;
    *last = next;
    return TRUE;
}
/* Free and unlink the last region entry from the region list */
static int region_list_remove (region_list_entry **last) {
    region_list_entry *previous = (*last)->previous;
    /* HeapFree's second argument is a flags word, not a size; pass 0 */
    if (! HeapFree (GetProcessHeap (), 0, *last))
        return FALSE;
    *last = previous;
    return TRUE;
}

#define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
#define FLOOR(size,to)  ((size)&~((to)-1))
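
/*
  Worked example (illustrative, not part of the original): with
  to == 4096, CEIL(5000, 4096) == 8192 and FLOOR(5000, 4096) == 4096.
  Both macros assume "to" is a power of two, so that ~((to)-1) masks
  off the low-order bits.
*/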

#define SBRK_SCALE  0
/* #define SBRK_SCALE  1 */
/* #define SBRK_SCALE  2 */
/* #define SBRK_SCALE  4 */

/* sbrk for windows */
static void *sbrk (long size) {
    static long g_pagesize, g_my_pagesize;
    static long g_regionsize, g_my_regionsize;
    static region_list_entry *g_last;
    void *result = (void *) MORECORE_FAILURE;
#ifdef TRACE
    printf ("sbrk %d\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize) {
        g_pagesize = getpagesize ();
        g_my_pagesize = g_pagesize << SBRK_SCALE;
    }
    if (! g_regionsize) {
        g_regionsize = getregionsize ();
        g_my_regionsize = g_regionsize << SBRK_SCALE;
    }
    if (! g_last) {
        if (! region_list_append (&g_last, 0, 0))
            goto sbrk_exit;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
    /* Allocation requested? */
    if (size >= 0) {
        /* Allocation size is the requested size */
        long allocate_size = size;
        /* Compute the size to commit */
        long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
        /* Do we reach the commit limit? */
        if (to_commit > 0) {
            /* Round size to commit */
            long commit_size = CEIL (to_commit, g_my_pagesize);
            /* Compute the size to reserve */
            long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
            /* Do we reach the reserve limit? */
            if (to_reserve > 0) {
                /* Compute the remaining size to commit in the current region */
                long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
                if (remaining_commit_size > 0) {
                    /* Assert preconditions */
                    assert ((unsigned) g_last->top_committed % g_pagesize == 0);
                    assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
                        /* Commit this */
                        void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
                                                             MEM_COMMIT, PAGE_READWRITE);
                        /* Check returned pointer for consistency */
                        if (base_committed != g_last->top_committed)
                            goto sbrk_exit;
                        /* Assert postconditions */
                        assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                        printf ("Commit %p %d\n", base_committed, remaining_commit_size);
#endif
                        /* Adjust the region's commit top */
                        g_last->top_committed = (char *) base_committed + remaining_commit_size;
                    }
                } {
                    /* Now we are going to search and reserve. */
                    int contiguous = -1;
                    int found = FALSE;
                    MEMORY_BASIC_INFORMATION memory_info;
                    void *base_reserved;
                    long reserve_size;
                    do {
                        /* Assume contiguous memory */
                        contiguous = TRUE;
                        /* Round size to reserve */
                        reserve_size = CEIL (to_reserve, g_my_regionsize);
                        /* Start with the current region's top */
                        memory_info.BaseAddress = g_last->top_reserved;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
                            /* Assert postconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
#ifdef TRACE
                            printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize,
                                    memory_info.State == MEM_FREE ? "FREE":
                                    (memory_info.State == MEM_RESERVE ? "RESERVED":
                                     (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
#endif
                            /* Region is free, well aligned and big enough: we are done */
                            if (memory_info.State == MEM_FREE &&
                                (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
                                memory_info.RegionSize >= (unsigned) reserve_size) {
                                found = TRUE;
                                break;
                            }
                            /* From now on we can't get contiguous memory! */
                            contiguous = FALSE;
                            /* Recompute size to reserve */
                            reserve_size = CEIL (allocate_size, g_my_regionsize);
                            memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
                            /* Assert preconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                            assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        }
                        /* Search failed? */
                        if (! found)
                            goto sbrk_exit;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        /* Try to reserve this */
                        base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
                                                      MEM_RESERVE, PAGE_NOACCESS);
                        if (! base_reserved) {
                            int rc = GetLastError ();
                            if (rc != ERROR_INVALID_ADDRESS)
                                goto sbrk_exit;
                        }
                        /* A null pointer signals (hopefully) a race condition with another thread. */
                        /* In this case, we try again. */
                    } while (! base_reserved);
                    /* Check returned pointer for consistency */
                    if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
                        goto sbrk_exit;
                    /* Assert postconditions */
                    assert ((unsigned) base_reserved % g_regionsize == 0);
#ifdef TRACE
                    printf ("Reserve %p %d\n", base_reserved, reserve_size);
#endif
                    /* Did we get contiguous memory? */
                    if (contiguous) {
                        long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
                        /* Adjust allocation size */
                        allocate_size -= start_size;
                        /* Adjust the region's allocation top */
                        g_last->top_allocated = g_last->top_committed;
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                    /* Append the new region to the list */
                    if (! region_list_append (&g_last, base_reserved, reserve_size))
                        goto sbrk_exit;
                    /* Didn't we get contiguous memory? */
                    if (! contiguous) {
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                }
            }
            /* Assert preconditions */
            assert ((unsigned) g_last->top_committed % g_pagesize == 0);
            assert (0 < commit_size && commit_size % g_pagesize == 0); {
                /* Commit this */
                void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
                                                     MEM_COMMIT, PAGE_READWRITE);
                /* Check returned pointer for consistency */
                if (base_committed != g_last->top_committed)
                    goto sbrk_exit;
                /* Assert postconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                printf ("Commit %p %d\n", base_committed, commit_size);
#endif
                /* Adjust the region's commit top */
                g_last->top_committed = (char *) base_committed + commit_size;
            }
        }
        /* Adjust the region's allocation top */
        g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
        result = (char *) g_last->top_allocated - size;
    /* Deallocation requested? */
    } else if (size < 0) {
        long deallocate_size = - size;
        /* As long as we have a region to release */
        while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
            /* Get the size to release */
            long release_size = g_last->reserve_size;
            /* Get the base address */
            void *base_reserved = (char *) g_last->top_reserved - release_size;
            /* Assert preconditions */
            assert ((unsigned) base_reserved % g_regionsize == 0);
            assert (0 < release_size && release_size % g_regionsize == 0); {
                /* Release this */
                int rc = VirtualFree (base_reserved, 0,
                                      MEM_RELEASE);
                /* Check returned code for consistency */
                if (! rc)
                    goto sbrk_exit;
#ifdef TRACE
                printf ("Release %p %d\n", base_reserved, release_size);
#endif
            }
            /* Adjust deallocation size */
            deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
            /* Remove the old region from the list */
            if (! region_list_remove (&g_last))
                goto sbrk_exit;
        } {
            /* Compute the size to decommit */
            long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
            if (to_decommit >= g_my_pagesize) {
                /* Compute the size to decommit */
                long decommit_size = FLOOR (to_decommit, g_my_pagesize);
                /* Compute the base address */
                void *base_committed = (char *) g_last->top_committed - decommit_size;
                /* Assert preconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
                assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
                    /* Decommit this */
                    int rc = VirtualFree ((char *) base_committed, decommit_size,
                                          MEM_DECOMMIT);
                    /* Check returned code for consistency */
                    if (! rc)
                        goto sbrk_exit;
#ifdef TRACE
                    printf ("Decommit %p %d\n", base_committed, decommit_size);
#endif
                }
                /* Adjust deallocation size and the region's commit and allocate tops */
                deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
                g_last->top_committed = base_committed;
                g_last->top_allocated = base_committed;
            }
        }
        /* Adjust the region's allocate top */
        g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
        /* Check for underflow */
        if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
            g_last->top_allocated > g_last->top_committed) {
            /* Adjust the region's allocate top */
            g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
            goto sbrk_exit;
        }
        result = g_last->top_allocated;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);

sbrk_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return result;
}

/* mmap for windows */
static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
    static long g_pagesize;
    static long g_regionsize;
#ifdef TRACE
    printf ("mmap %d\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Allocate this */
    ptr = VirtualAlloc (ptr, size,
                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
    if (! ptr) {
        ptr = (void *) MORECORE_FAILURE;
        goto mmap_exit;
    }
    /* Assert postconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
#ifdef TRACE
    printf ("Commit %p %d\n", ptr, size);
#endif
mmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return ptr;
}

/* munmap for windows */
static long munmap (void *ptr, long size) {
    static long g_pagesize;
    static long g_regionsize;
    int rc = MUNMAP_FAILURE;
#ifdef TRACE
    printf ("munmap %p %d\n", ptr, size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Free this */
    if (! VirtualFree (ptr, 0,
                       MEM_RELEASE))
        goto munmap_exit;
    rc = 0;
#ifdef TRACE
    printf ("Release %p %d\n", ptr, size);
#endif
munmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return rc;
}

static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed) {
    MEMORY_BASIC_INFORMATION memory_info;
    memory_info.BaseAddress = 0;
    *free = *reserved = *committed = 0;
    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
        switch (memory_info.State) {
        case MEM_FREE:
            *free += memory_info.RegionSize;
            break;
        case MEM_RESERVE:
            *reserved += memory_info.RegionSize;
            break;
        case MEM_COMMIT:
            *committed += memory_info.RegionSize;
            break;
        }
        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
    }
}

static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user) {
    if (whole) {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetProcessTimes (GetCurrentProcess (),
                                  (FILETIME *) &creation64,
                                  (FILETIME *) &exit64,
                                  (FILETIME *) &kernel64,
                                  (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        /* FILETIME counts 100ns units; divide by 10000 for milliseconds */
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    } else {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetThreadTimes (GetCurrentThread (),
                                 (FILETIME *) &creation64,
                                 (FILETIME *) &exit64,
                                 (FILETIME *) &kernel64,
                                 (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    }
}

#endif /* WIN32 */

/* ------------------------------------------------------------
History:

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sYSMALLOc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/