Searched refs:allocation (Results 1 – 12 of 12) sorted by relevance
/bionic/libc/stdio/ |
D | vfscanf.cpp |
    101  void* allocation = nullptr;  // Allocated but unassigned result for %mc/%ms/%m[.  in __svfscanf() local
    337  allocation = wcp = reinterpret_cast<wchar_t*>(malloc(width * sizeof(wchar_t)));  in __svfscanf()
    338  if (allocation == nullptr) goto allocation_failure;  in __svfscanf()
    374  if (allocation != nullptr) {  in __svfscanf()
    375  *va_arg(ap, wchar_t**) = reinterpret_cast<wchar_t*>(allocation);  in __svfscanf()
    376  allocation = nullptr;  in __svfscanf()
    400  allocation = p = reinterpret_cast<char*>(malloc(width));  in __svfscanf()
    401  if (allocation == nullptr) goto allocation_failure;  in __svfscanf()
    407  if (allocation != nullptr) {  in __svfscanf()
    408  *va_arg(ap, char**) = reinterpret_cast<char*>(allocation);  in __svfscanf()
    [all …]
|
D | fmemopen.cpp |
    41   char* allocation;  member
    108  free(ck->allocation);  in fmemopen_close()
    126  if (ck->buf == nullptr) ck->buf = ck->allocation = static_cast<char*>(calloc(capacity, 1));  in fmemopen()
|
/bionic/libc/malloc_debug/ |
D | README_marshmallow_and_earlier.md |
    20  the normal allocation calls. The replaced calls are:
    46  that contains information about the allocation.
    49  Enable capturing the backtrace of each allocation site. Only the
    66  Whenever an allocation is created, initialize the data with a known
    68  Whenever an allocation is freed, write a known pattern over the data (0xef).
    73  that contains information about the allocation.
    78  A 32 byte buffer is placed before the returned allocation (known as
    80  a 32 byte buffer is placed after the data for the returned allocation (known
    83  When the allocation is freed, both of these guards are verified to contain
    89  entire allocation is filled with the value 0xef, and the backtrace at
    [all …]
|
D | README_api.md |
    13  the allocation information.
    16  value is zero, then there are no allocations being tracked.
    17  *total\_memory* is set to the sum of all allocation sizes that are live at
    21  that are present for each allocation.
    42  backtrace and size as this allocation. On Android Nougat, this value was
    56  Note, the size value in each allocation data structure will have bit 31 set
    57  if this allocation was created in a process forked from the Zygote process.
|
D | README.md |
    14  the normal allocation calls. The replaced calls are:
    34  backtrace related to the allocation. Starting in P, every single realloc
    49  to find memory corruption occurring to a region before the original allocation.
    50  On first allocation, this front guard is written with a specific pattern (0xaa).
    51  When the allocation is freed, the guard is checked to verify it has not been
    56  the backtrace of the allocation site.
    61  on 64 bit systems to make sure that the allocation returned is aligned
    65  and information about the original allocation.
    70  04-10 12:00:45.622  7412  7412 E malloc_debug: allocation[-32] = 0x00 (expected 0xaa)
    71  04-10 12:00:45.622  7412  7412 E malloc_debug: allocation[-15] = 0x02 (expected 0xaa)
    [all …]
|
/bionic/docs/ |
D | native_allocator.md |
    6    [SQL Allocation Trace Benchmark](#sql-allocation-trace-benchmark),
    34   call to an allocation function (malloc/free/etc). When a call
    40   a call to an allocation function (malloc/free/etc) when `malloc_disable`
    67   allocation operation occurs. For server processes, this can mean that
    69   and no other allocation calls are made. The `M_PURGE` option is used to
    92   of allocation routines such as what happens when a non-power of two alignment
    100  The allocation tests are not meant to be complete, so it is expected
    130  allocator on Android. One is allocation speed in various different scenarios,
    174  These are the benchmarks to verify the allocation speed of a loop doing a
    175  single allocation, touching every page in the allocation to make it resident
    [all …]
|
D | elf-tls.md |
    114  If an allocation fails, `__tls_get_addr` calls `abort` (like emutls).
    495  * Static TLS Block allocation for static and dynamic executables
    562  On the other hand, maybe lazy allocation is a feature, because not all threads will use a dlopen'ed
    815  example][go-tlsg-zero]). With this hack, it's never zero, but with its current allocation strategy,
|
D | fdsan.md | 10 …e* and *double-close*. These errors are direct analogues of the memory allocation *use-after-free*…
|
/bionic/libc/bionic/ |
D | fdsan.cpp |
    86  void* allocation =  in at() local
    88  if (allocation == MAP_FAILED) {  in at()
    92  FdTableOverflow* new_overflow = reinterpret_cast<FdTableOverflow*>(allocation);  in at()
    99  munmap(allocation, aligned_size);  in at()
|
D | pthread_create.cpp |
    74  void* allocation = mmap(nullptr, allocation_size,  in __allocate_temp_bionic_tls() local
    78  if (allocation == MAP_FAILED) {  in __allocate_temp_bionic_tls()
    82  return static_cast<bionic_tls*>(allocation);  in __allocate_temp_bionic_tls()
|
/bionic/libc/malloc_hooks/ |
D | README.md |
    4   Malloc hooks allow a program to intercept all allocation/free calls that
    12  the normal allocation calls. The replaced calls are:
    68  the current default allocation functions. It is expected that if an
    69  app does intercept the allocation/free calls, it will eventually call
|
/bionic/ |
D | README.md | 123 # native allocation problems.
|