Everything you need to know about Delphi FastMM

Note: This is an excerpt from my "Delphi in all its glory [Part 2]" book. 
Find the full chapter there.

 


A memory manager is a very special piece of code that controls the allocation and deallocation of memory throughout the entire application.

 

The memory management in Delphi is unique because ALL memory is allocated through an internal memory manager called FastMM.

You don’t have to do anything to activate it. FastMM is compiled into each of our applications.

 

Advantages of using FastMM:

  • Dramatic speed increase for memory-bound operations.
  • Prevents memory fragmentation.
  • Better memory sharing.
  • Memory leak detection.
  • Better debugging.

Supported platforms

FastMM works only under Windows. On other platforms, Delphi falls back to the native memory manager of that OS, but there is a multi-platform optimized version of FastMM here: GitHub.com/gabr42/FastMM4-MP

Why a separate memory manager?

Windows Memory Manager is slow

The Windows OS memory manager controls all system memory and keeps track of allocated and free memory blocks in an internal structure called the Virtual Address Descriptor (VAD) table. Each time an application requests memory, Windows must find a large-enough free block in this table, mark it as allocated, and return a pointer to the application.

When memory is released, Windows updates the table to mark the block as “free” again.

 

The overhead of this complicated process increases with the number of memory requests, leading to significant performance degradation.

FastMM is better at this

FastMM optimizes this process by acting as a layer between your code and the Windows memory manager. Instead of requesting memory from Windows for every small allocation, FastMM requests large chunks of memory from Windows and manages those chunks internally, dividing them into smaller pieces as the application needs them and avoiding frequent communication with the Windows memory manager. This greatly reduces the overhead of constantly updating the Windows memory tables, resulting in faster memory operations.

By grouping multiple small allocations into larger blocks, FastMM reduces the likelihood of cache misses.

How FastMM works

The trinity

In a normal program, 99% of the memory allocations are small. Therefore, for efficiency, it makes sense to treat small memory allocations differently from big ones. And this is what FastMM does. It operates as three distinct memory managers, each handling a specific block size: small, medium, and large.

What is a block

A block is a contiguous piece of memory allocated for use. It can represent any amount of memory, ranging from a small byte-sized allocation to a larger segment of memory. In Delphi, when you request memory (for example when you create a new array) the system allocates a block of memory to you, where the size of the block depends on your request.
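
To make this concrete, here is a minimal illustration (the variable names are arbitrary; the exact block sizes FastMM picks are an internal detail):

var
  Numbers: array of Integer;
  Obj: TObject;
begin
  SetLength(Numbers, 250);   // requests a block of roughly 250 * SizeOf(Integer) bytes
  Obj := TObject.Create;     // requests a small block for the object's instance data

  Obj.Free;                  // both blocks are returned to the memory manager
  Numbers := nil;
end;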

What is a bucket

A bucket is a category within FastMM that holds several blocks of a specific size range. So, each bucket holds only blocks of a specific size (e.g., 16 bytes, 32 bytes, etc.).

When an allocation request is made in Delphi, FastMM finds the appropriate bucket that can provide a block of the requested size.

 

Note: Medium and small blocks are allocated from the bottom of the available memory space – keeping them separate improves fragmentation behavior.

Large blocks

Requests for large blocks (>1MB) are passed through to the operating system (VirtualAlloc) to be allocated from the top of the address space. This is fine because large blocks are rarely requested.

Medium blocks

The medium block (260 bytes to 1MB) manager obtains memory from the OS in 1.25MB chunks. Unused medium blocks are kept in double-linked lists. There are 1024 such lists, and since the medium block granularity is 256 bytes that means there is a bin for every possible medium block size. FastMM maintains a two-level bitmap of these lists, so there is never any need to step through them to find a suitable unused block – a few bitwise operations on the bitmaps is all that is required. Whenever a medium block is freed, FastMM checks the neighboring blocks to determine whether they are unused and can thus be combined with the block that is being freed. There may never be two neighboring medium blocks that are both unused.

FastMM has no background “clean-up” thread, so everything must be done as part of the freemem/getmem/reallocmem call.

Small blocks

In an object-oriented programming language like Delphi, most memory allocations and frees are usually for small objects. In practical tests with various Delphi applications, it was found that, on average, over 99% of all memory operations involve blocks under 1 KB. It thus makes sense to optimize specifically for these small blocks (<260 bytes). Small blocks are allocated from already existent medium blocks that are subdivided into equal sized small blocks. Since a particular small block pool contains only equal sized blocks, and adjacent free small blocks are never combined, it allows the small block allocator to be greatly simplified and thus much faster. FastMM maintains a double-linked list of pools with available blocks for every small block size, so finding an available block for the requested size when servicing a getmem request is very speedy.

Reallocation

Moving data around in memory is typically a very expensive operation. Consequently, FastMM uses an intelligent reallocation algorithm to avoid moving memory as much as possible. When a block is upsized, FastMM adjusts the block size in anticipation of future upsizes, thus improving the odds that the next reallocation can be done in place. When an allocation (string, array, etc.) is resized to a smaller size, FastMM requires the new size to be significantly smaller than the old size before it moves the data.
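
A quick way to observe this behavior is to compare the pointer before and after a small upsize. This is only a sketch for a console application; the actual outcome depends on the internal state of the memory manager:

procedure ShowReallocBehavior;
var
  P, POld: Pointer;
begin
  GetMem(P, 100);        // initial allocation
  POld := P;
  ReallocMem(P, 120);    // small upsize: often done in place thanks to the extra padding
  if P = POld then
    Writeln('Block was resized in place')
  else
    Writeln('Block was moved to a new address');
  FreeMem(P);
end;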

 

Speed is further improved by a fine-grained locking mechanism: every block size is locked individually. If, while servicing a getmem request, the optimal block type is locked by another thread, FastMM will try up to three larger block sizes. This design drastically reduces the number of thread contentions and improves performance for multi-threaded applications.

 

Note: FastMM’s optimization strategy may be tuned via SetOptimizationStrategy.  It can be set to favor performance, low memory usage, or a blend of both.  The default strategy is to blend the performance and low memory usage goals.

Deallocation

FastMM maintains a free list for small and medium-sized blocks. When memory is deallocated, it isn’t immediately returned to the OS; instead, it goes back into this list for reuse. This reduces the time and cost of future allocations by avoiding repeated system calls.

Granularity

Memory granularity is directly related to memory allocation. Therefore, it is important for optimizing memory management, particularly when aligning data structures for performance.

Memory granularity refers to the smallest unit of memory that a processor or operating system handles during certain operations, such as allocation, caching, or page mapping.

 

Memory granularity primarily concerns three aspects in modern systems:

  1. Page Size

This is the smallest block of memory managed by the OS for virtual memory purposes.

  2. Cache Line Size

The unit of memory transferred between the CPU and main memory.

  3. Alignment

Ensures that data structures (variables) are aligned to specific memory boundaries for efficient access.

Granularity in Modern Windows Systems (Win32/Win64)

 

Page Granularity

Win32 and Win64 use a default page size of 4KB. This means that when memory is allocated at the OS level, the smallest unit that can be allocated or manipulated is 4KB.

Larger memory allocations can be handled using large pages, typically 2MB or 1GB, which are useful for high-performance applications like databases.

 

Cache Line Granularity

On modern CPUs (Intel, AMD, ARM, etc.), the cache line size is 64 bytes. This is the smallest unit of memory transferred between the CPU cache and RAM.

Proper alignment of data to cache line boundaries is critical to avoid cache contention (e.g., false sharing), where multiple threads modify different variables that happen to be on the same cache line.
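
In Delphi code, one common countermeasure is to pad per-thread data so that two hot counters never share a cache line. This is only a sketch, using the 64-byte cache line size mentioned above:

type
  // Each counter occupies a full 64-byte cache line, so two threads
  // incrementing different counters never write to the same cache line.
  TPaddedCounter = record
    Value: Int64;                    // 8 bytes of payload
    Padding: array[0..55] of Byte;   // 56 bytes of padding -> 64 bytes total
  end;

var
  Counters: array[0..3] of TPaddedCounter;  // e.g. one slot per worker thread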

Connection to memory allocation

FastMM optimizes small memory allocations by grouping them into larger blocks, aligning them to cache line and page boundaries to reduce fragmentation and maximize cache efficiency. In contrast, if memory is directly allocated via the OS, the system allocates memory in page-sized units (4KB), without the internal optimizations that FastMM applies.

Alignment in C

In C, you can determine alignment requirements using the `_Alignof` operator. For example:

size_t alignment = _Alignof(double); // Get alignment requirement for a double

 

This ensures that memory is aligned properly for the data type, avoiding performance penalties due to misaligned accesses.

 

In high-performance, multi-core systems, aligning data structures to both page boundaries (4KB) and cache line boundaries (64 bytes) can improve memory access speed and reduce latency, especially in scenarios involving parallel processing.

Comparative timing

When allocating memory for a string (for example) via FastMM, the operation remains within the application’s address space, making it extremely fast (nanoseconds) since no system calls or VAD lookups are needed. On the other side, requesting memory from the OS takes significantly longer (microseconds or milliseconds), as it involves a system call, VAD lookups, page table management, and other overheads.
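
You can get a feel for the difference with a small experiment. This is only a rough sketch for a console application (absolute numbers will vary by machine and OS version):

uses
  Winapi.Windows, System.Diagnostics;

procedure CompareAllocationSpeed;
const
  N = 100000;
var
  i: Integer;
  P: Pointer;
  SW: TStopwatch;
begin
  // N small allocations serviced by FastMM: no system call per request
  SW := TStopwatch.StartNew;
  for i := 1 to N do
  begin
    GetMem(P, 64);
    FreeMem(P);
  end;
  Writeln('FastMM:       ', SW.ElapsedMilliseconds, ' ms');

  // The same number of requests going straight to the OS
  SW := TStopwatch.StartNew;
  for i := 1 to N do
  begin
    P := VirtualAlloc(nil, 64, MEM_COMMIT, PAGE_READWRITE);
    VirtualFree(P, 0, MEM_RELEASE);
  end;
  Writeln('VirtualAlloc: ', SW.ElapsedMilliseconds, ' ms');
end;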

Which allocations go through FastMM?

Dynamic variables

Dynamic variables, such as objects or dynamically sized arrays, are allocated from the heap. When a dynamic variable is allocated, FastMM finds the appropriate bucket and provides a block from it. If a block is deallocated, it is returned to the free list within the bucket, ready for future allocations.

Simple-Type Variables

Simple-type variables, like integers or characters, are handled differently:

  • Local variables – For local variables, such as integers declared within a procedure, memory is allocated on the stack. This allocation is managed by the system stack and is not directly influenced by FastMM. The stack memory is managed by the operating system and compiler, providing quick allocation and deallocation as functions are called and return.
  • Global variables – Global and static variables are allocated in the data segment of the process’s memory. This segment is also managed by the operating system and compiler and is not subject to FastMM’s management. The allocation of these variables is fixed at compile time and persists throughout the application’s lifetime.
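
To see both cases side by side, here is a minimal sketch (TStringList requires System.Classes in the uses clause):

procedure StackVsHeap;
var
  Counter: Integer;    // fixed size: lives on the stack
  Name: string;        // dynamic: the characters live on the heap, managed by FastMM
  List: TStringList;   // object: the instance lives on the heap, managed by FastMM
begin
  Counter := 42;
  Name := 'FastMM';
  List := TStringList.Create;
  try
    List.Add(Name);
  finally
    List.Free;         // heap blocks must be released explicitly...
  end;
end;                   // ...while Counter simply disappears with the stack frame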

Advantages of FastMM

Memory sharing

If we want to share data between our applications and DLLs, FastMM provides a dedicated sharing mechanism. We can easily share basic data types (strings for example) between DLL and EXE files by using ShareMM. Complex data structures (objects, forms, etc) can also be shared by using Delphi BPL packages.

Memory fragmentation

Memory fragmentation appears when applications repeatedly ask for small amounts of memory. Most of the memory may still be free, since the allocated blocks are small. However, if we ask for a large contiguous block of memory of, let’s say, 200MB (a very small value today), the OS might not be able to find such a block even though 2GB out of a total of 4GB are free. Why? Because the 2GB of free memory is now fragmented by tiny blocks of occupied memory, scattered all over the address space.

 

So, Delphi applications are not only faster at allocating memory, but they also play nice with other applications running in the system.

Safer code

It is easy to write buggy code in a low-level programming language like Delphi or C++ because the programmer is responsible for creating and destroying all objects. Moreover, once an object is destroyed, the programmer must remember never to access it again (unless it is re-created).

 

So, I kept talking in this book about how Delphi is much safer than C++. Below are yet two more unique features that make Delphi safer. The first feature comes from the memory manager. The second comes from the compiler.

 

Always use these features in your programs and never go to sleep without thanking the programming gods for them, because they really help us achieve something that many programmers think is unachievable: bug-free programs! BAM! I knocked you down with that phrase, right?

 

FastMM can also wipe the memory of freed objects or the memory of the whole application on shutdown.

No more memory leaks

FastMM provides memory related self-reporting functions to help applications monitor their own memory usage and report potential memory leaks. Because of this, memory leaks are close to impossible in Delphi applications.

 

But… what the hell is a memory leak?

 

A memory leak appears when we request memory from the memory manager, and we forget to give it back when we don’t need it.

It is like borrowing a book from Alice and forgetting to give it back.

If we do that too many times, our application will eat up too much memory. Other applications running on that computer will suffer because of us.

In the worst-case scenario, the system will not have any more memory to give us. At that point, our application will crash. The users will not be happy. At all.

Once our application shuts down, all the leaked memory blocks will be returned to the system. (All borrowed books are returned to Alice automatically after we die).

The report contains a stack trace to the line of code that allocated the memory and “forgot” to release it. The object that leaked the memory is also listed in the report. All we must do is open the report (a txt file generated in the application’s folder) and use it to locate the faulty line in our code.

No more stack corruption

We have seen above that a call stack is a piece of memory that is used to store information about the routine currently being executed and the current execution path of the program. The stack frame is basically used by the program to know how to “go back”.

 

There are situations where the call stack can get corrupted. Stack corruption is, without doubt, the most fucked up way in which a program can crash. It is the ultimate nightmare of a programmer. Once the stack is corrupted, we can no longer trace the execution path to the line of code that caused the corruption because, well… the stack is corrupted. The source of the error is simply untraceable! Ghosts and mummies.

 

And we should consider ourselves lucky even to find out that the stack was corrupted, because in some cases the stack is corrupted in such a way that it gives the impression that it was not corrupted, sending the programmer on a wild goose chase.

 

Unfortunately, the stack can be easily corrupted. Programming languages like C++ are prone to stack corruption because they use the stack a lot. C++ prefers to allocate things on the stack because it is slow at allocating heap memory (we have just seen above why).

For example, in C++ all it takes is to allocate a structure of 100 bytes on the stack and then write (by accident) 100,000 bytes into it instead of 100.

 

Delphi uses the stack to store only some things. All local variables of fixed size (mark this down, it is important), for example integers, booleans, and floats, are stored on the stack. These operations are safe because the variable has a FIXED size, so there can be no accidental stack corruption.

All possibly dangerous types (which means dynamic structures such as objects, strings, dynamic arrays) are on the heap. Delphi affords to do this because of FastMM.

 

The runtime error checking (discussed in “Compiler, please save my ass”) also protects us from stack corruption.

TL;DR

FastMM detects buffer overruns and other memory-related issues through a combination of techniques, including:

Memory Protection: When you allocate a block of memory, FastMM may add a guard page or a few extra bytes before and after your allocated memory block. These extra bytes are marked as inaccessible, so if your code attempts to read or write outside the boundaries of the allocated block, it triggers an access violation or segmentation fault. This provides a form of runtime bounds checking.

Memory Pool Management: FastMM maintains memory pools of various sizes to optimize memory allocation. When you request memory, it tries to find a suitable block in its pool. If a buffer overrun occurs within a block, FastMM can more easily detect it because it knows the size of each block and can check if you’re accessing beyond the allocated size.

Memory Tagging: Some versions of FastMM can use memory tagging techniques. It assigns unique tags to allocated memory blocks and stores them in a separate data structure. When you free a memory block, it checks if the tag matches what was expected. If not, it suggests a memory corruption issue.

No more access violations

FastMM offers two additional unique debugging features. We will see later how to switch FastMM to Full Debug mode in order to get all these extra features.

Preventing access to freed objects

When this feature is enabled, FastMM keeps track of all freed objects, so it knows when we are trying to access an object that was already freed. In this case, the program gets suspended and FastMM immediately reports the freed object that the program tried to access, who tried to access it (which line of code), and the stack trace that led to this situation.
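
For example, with Full Debug mode active, a use-after-free like the one below is reported with the class name and the relevant stack traces instead of silently corrupting memory (a sketch; TStringList is just an arbitrary example class):

var
  SL: TStringList;
begin
  SL := TStringList.Create;
  SL.Free;
  SL.Add('boom');   // access to a freed object: FastMM Full Debug mode flags this call
end;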

Preventing access through dangling pointers

When this feature is enabled, FastMM can write special patterns in memory, so it knows when we are trying to access a dangling pointer. This operation makes the program terribly slow, but who cares. Better to wait 2 minutes for the program to start up than to waste days or weeks doing painful code inspection/debugging.

FastMM in multithreaded applications

FastMM is optimized for multi-threaded applications. It uses thread-local memory pools to minimize contention between threads. Each thread initially works with its own pool, so memory allocations from different threads do not interfere with each other. If one thread requires more memory, or if the thread’s pool is depleted, FastMM allocates from a shared global pool. Another trick it uses in multithreaded applications is to postpone returning some freed memory blocks to their pools IF the pool is locked by another thread, and to release them later in a batch.

This design reduces lock contention and improves performance in multi-threaded environments.

 

However, there might be better alternatives.

In this case study, a Delphi application exhibited thread contention in FastMM, which limited its ability to handle high data rates despite low CPU usage. Replacing FastMM with SapMM temporarily resolved the issue by allowing the application to manage higher loads.

To identify the root cause, FastMM4 was modified to log instances where memory management structures were locked in a retry loop due to contention. Excessive memory allocation and deallocation were pinpointed as the bottleneck. The data was collected using a static collector (TStaticCollector), and the logs were sorted by frequency of occurrences.

Further analysis revealed that contention primarily occurred in the small block memory lists during FreeMem operations. An experimental improvement involved adding a lock-free stack to handle blocked memory releases, but initial tests showed no performance gains.

The conclusion: Don’t be afraid to test experimental FastMM4 versions, available in a GitHub branch, for multi-threaded applications. Also take a look at ScaleMM2 and AVRMM.

TheDelphiGeek.com/2016/02/finding-memory-allocation-bottlenecks.html

 

Hint:

If you write multithreaded applications, you might want to set NeverSleepOnMMThreadContention to True when the ratio of running threads to CPU cores is greater than 2:1.
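
A minimal sketch, assuming NeverSleepOnMMThreadContention is the Boolean variable exposed by the RTL for its built-in FastMM (set it at program startup, before any worker threads run):

program ThreadedDemo;
begin
  // Spin instead of sleeping when the memory manager is contended by another thread.
  NeverSleepOnMMThreadContention := True;
  // ... start the worker threads here ...
end.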

FastMM5

FastMM5 is an evolution of FastMM4, which was under development for a while. FastMM5 aims to provide even better performance and memory usage in modern applications, especially with multithreading in mind.

Key improvements include:

  • Reduced contention in multi-threaded environments by fine-tuning locking mechanisms.
  • Better scalability and optimization for modern CPUs with larger caches.
  • Improvements in memory leak detection and debugging features.

 

FastMM5 was intended to replace FastMM4 in most cases.

Unlike FastMM4, FastMM5 is not free for commercial (closed-source) use, but its price is affordable.

Warnings

  1. Do not include FastMM in any of your packages. We will see in the next book why that is.
  2. When in full debug mode, FastMM never releases the used memory back to the OS. The memory is released back into the unallocated memory pool and re-used (internally, by your program) when needed, but never back to Windows.

Installing FastMM

FastMM is included in Delphi. You don’t have to do anything to use it as a memory manager. However, if you want to use it as a debugging tool, you need to download the full version from GitHub, give it access to the FastMM_FullDebugMode DLL, and activate Full Debug mode.

1. Get FastMM

Download the FastMM source code from GitHub.com/pleriche/FastMM4 and unzip it to a convenient folder, like C:\FastMM4. A precompiled FastMM_FullDebugMode DLL (32/64 bit) is found in the Precompiled folder.

2. Prepare FastMM for deeper debugging

FastMM’s settings are controlled via compiler switches defined in the FastMM4Options.inc file located in its folder. After you change any settings in this file, you need to recompile your program.

3. Prepare the IDE

The compiler needs to find the FastMM source code. Therefore, add the FastMM path (C:\FastMM4) to the IDE’s Library Path or to your current project’s Search Path.

Also, we need to turn on Debug information and Symbol reference info under Compiling (and optionally Use debug DCUs):

 

 

And the MAP file under the Linking page:

4. Let FastMM find its DLL

After we compile the program, FastMM will be included in the exe file. Now it needs to find the DLL; therefore, we need to copy FastMM_FullDebugMode.dll from the “Precompiled” folder (found in the GitHub ZIP file) to C:\Delphi12\bin. For 64-bit programs, we also need to copy FastMM_FullDebugMode64.dll to C:\Delphi12\bin64.

Alternatively, you can copy the DLL next to your program’s exe file. This is not recommended because you would need to do it for each project/exe file.

If the DLL is not found, FastMM will not switch to full debug mode.

Quick check list / Validation

  • Add FastMM4 to Uses in the DPR
  • Change settings in the INC file
  • Set proper compiler options for Debug mode: Enable Stack traces, Debug info, MAP file. Disable Optimizations.
  • Let the compiler find the FastMM code
  • Let the FastMM (in your exe) file find its DLL

 

To test if it works, start a new VCL app, create a TStringList object, and never release it. On app shutdown, you should see a message reporting the leak. In the “How to read a FastMM leak report” section we will see a report for exactly that kind of leak.
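
The test can be as small as this (a sketch; the form and handler names are placeholders):

procedure TMainForm.btnLeakClick(Sender: TObject);
var
  SL: TStringList;
begin
  SL := TStringList.Create;   // deliberately never freed
  SL.Add('Some text');        // this string will be visible inside the leaked block
end;                          // on shutdown, FastMM reports the leaked TStringList and its string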

Configuring FastMM to catch memory leaks

To make FastMM report memory leaks on application shutdown we need to make a few changes.

 

First, we need to add the FastMM4 unit to our DPR file(s). This unit must always be the first one in the uses clause. No exceptions!

program Demo;

uses
  {$IFDEF DEBUG}
  FastMM4,
  {$ENDIF}
  LightCore.AppData;

begin
  ReportMemoryLeaksOnShutdown := TRUE;  // Controls FastMM

  AppData := TAppData.Create('Light Saber Demo');
  AppData.CreateMainForm(TMainForm, MainForm, True, True, asFull);
  AppData.Run;
end.

 

Then we need to flip a few switches in the INC file. In principle we need to set at least:

  • ReportMemoryLeaksOnShutdown
  • LogMemoryLeakDetailToFile

Instead of going into the gory details of that, just overwrite the original INC file that comes in the zip file with the one provided with this book (see my GitHub).

 

Hint: We can set FastMM to report leaks even if it does not run under the Delphi debugger.

Selectively turning on/off leak reporting

ReportMemoryLeaksOnShutdown is a variable defined in the FastMM unit. It allows you to quickly turn FastMM memory leak reporting on and off. But in order for this to work, we need to turn on the $ManualLeakReportingControl switch in the FastMM4Options.inc file.

I don’t use that option. Instead, I have FastMM always active when I compile in Debug mode and exclude it in Release mode. For this, I put an IFDEF switch around the FastMM4 unit, as seen above.

 

Another way to control FastMM is to flip the $InstallOnlyIfRunningInIDE switch.

Delphi 12 RTL

If you use Delphi 12 or higher, disable the $UseCustomFixedSizeMoveRoutines switch, as the Delphi 12 RTL was improved and its Move routines are much faster than the ones in FastMM. This note does not apply to the UseCustomVariableSizeMoveRoutines switch.

Other interesting settings

Other switches that you might want to turn on (see the sketch after this list):

  • EnableMMX / ForceMMX – Today all CPUs support MMX.
  • Align16Bytes – This will make the program faster, but it will waste some extra memory.
  • NoDebugInfo – Definitely enable this one; otherwise, the debugger might step into FastMM code during debugging.
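
In FastMM4Options.inc these are plain conditional defines. Enabling them looks roughly like this (a sketch; check the comments in your copy of the file for the exact names and defaults):

{$define ForceMMX}       // assume MMX is always present and skip the runtime CPU check
{$define Align16Bytes}   // align all blocks to 16 bytes: faster, slightly more memory
{$define NoDebugInfo}    // no debug info for FastMM4.pas, so the debugger will not step into it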

Configuring FastMM to catch bugs

FastMM can be used to catch all sorts of bugs such as memory overwrites and “double freed” objects. The INC file provided with this book will switch on these features too.

In general, we are interested in switches like:

CheckHeapForCorruption

(Disabled by default)

FastMM always catches attempts to free the same memory block twice. Example:

var StringList: TStringList;

StringList := TStringList.Create;
StringList.Free;
StringList.Free;  // Oops, freed twice

 

Without FastMM the second free will give you AT BEST a simple “Invalid Pointer Operation” error message (no additional details). At worst, it will execute silently.

 

However, FastMM can also check for corruption of the memory heap (typically due to the user program overwriting the bounds of allocated memory). These checks are expensive, and this option should thus only be used for debugging purposes. If this option is set, then the ASMVersion option is automatically disabled.

DetectMMOperationsAfterUninstall

Checks if code still tries to do memory allocations after FastMM was uninstalled (on program shutdown).

 

I don’t have words to explain how useful full debug mode is. Never turn it off while in Debug mode!

How FastMM detects bugs (TLDR)

When the “FullDebugMode” define is set, FastMM places a header and footer around every memory block in order to catch memory overwrite bugs. It also stores a stack trace whenever a block is allocated or freed, and these stack traces are displayed if FastMM detects an error involving the block. When blocks are freed, they are filled with a special byte pattern that allows FastMM to detect blocks that were modified after being freed (blocks are checked before being reused, and also on shutdown), and also to detect when a virtual method of a freed object is called. FastMM can also be set to detect the use of an interface of a freed object, but this facility is mutually exclusive to the detection of invalid virtual method calls.

GUI for FastMM

There are many cool debugging features that we can customize in FastMM. The customization is done via the INC file. But editing the INC file by hand is boring, so better download the FastMM Options Interface tool.

 

Enabling FastMM additional features for advanced memory leak reporting

 

You can find more details about how to customize your FastMM here: DelphiProgrammingDiary.blogspot.com/2018/09/fastmm-and-how-to-use-in-delphi-project.html

 

When FastMM is running in debug mode, we need to have the FastMM_FullDebugMode.dll present in the application’s folder, or somewhere in the search path (for example in c:\Windows). When we ship our application, we must remember to switch FastMM back to non-debug mode OR to ship the mentioned DLL with our application.

How to read a FastMM leak report?

Whenever our application leaks memory, FastMM (if properly configured via the .inc file) will show a warning:

 

 

The full description of the leak will be saved to disk in the PROGRAM’S FOLDER, under the name “YourAppName_MemoryManager_EventLog.txt”.

 

If we open the file, we will see something like the text below. Looks scary but it is actually quite simple to interpret:

 

————————-2023/1/13 21:04:45————————-

A memory block has been leaked. The size is: 36

 

This block was allocated by thread 0x31D4, and the stack trace (return addresses) at the time was:

476363 [System.Classes][System][System.Classes.TStrings.SetTextStr]

7BA886 [TesterForm.pas][TesterForm.TfrmTester.btnStartClick][79]

569759 [Vcl.Controls][Vcl][Vcl.Controls.TControl.Click]

58C043 [Vcl.StdCtrls][Vcl][Vcl.StdCtrls.TCustomButton.Click]

58D1CD [Vcl.StdCtrls][Vcl][Vcl.StdCtrls.TCustomButton.CNCommand]

5691FD [Vcl.Controls][Vcl][Vcl.Controls.TControl.WndProc]

 

The block is currently used for an object of class: UnicodeString

Current memory dump of 256 bytes starting at pointer address 7F8ACE90:

B0 04 02 00 01 00 00 00 09 00 00 00 53 00 6F 00 6D 00 65 00 20 00 74

88 26 6A 0E 80 80 80 80 00 00 00 00 61 75 8A 7F 00 00 00 00 00 00 00

C3 09 00 00 39 71 40 00 77 D0 40 00 77 C6 DA 75 DE D1 40 00 CB 28 47

77 1E 65 00 11 5F DA 75 5A 9D D9 75 4C 1F 65 00 D4 31 00 00 D4 31 00

°  S  .  o  .  m  .  e  .     .  t  .  e  .  x  .  t

 

————————–2023/1/13 21:04:45————————-

A memory block has been leaked. The size is: 84

This block was allocated by thread 0x31D4, and the stack trace (return addresses) at the time was:

477824 [System.Classes][System][System.Classes.TStringList.Create]

7BA849 [TesterForm.pas][TesterForm][TesterForm.TfrmTester.Test][70]

7BA868 [TesterForm.pas][TesterForm.TfrmTester.btnStartClick][77]

569759 [Vcl.Controls][Vcl][Vcl.Controls.TControl.Click]

 

The block is currently used for an object of class: System.Classes.TStringList

 

————————2023/1/13 21:04:45————————–

 

This application has leaked memory.

The small block leaks are (excluding expected leaks registered by pointer):

13 – 20 bytes: UnicodeString x 1

21 – 36 bytes: UnicodeString x 1

69 – 84 bytes: System.Classes.TStringList x 1

———————————————————————

The “Summary” section

The last section of the log shows the summary – the total number of leaked blocks and the amount of leaked memory for each block.

We start by looking at this section, to see what kind of memory we are leaking.

In the above listing we see a TStringList object, and two strings leaked.

The “Leaked blocks“ section

The rest of the log lists all leaked memory and the stack trace leading to each leak. Each line starts with a number, which is the memory address of the function. Then we can see the unit’s name and the function’s name. The last number in brackets is the source code line number.

The most relevant section is the one that states, “the stack trace (return addresses) at the time was”. This is the call stack. (Reminder: A stack trace shows the execution path of your program: which function called which function.)

 

We start reading the call stack trace from the bottom up:

 

 

First, we see the TControl.Click method. This means that a control was clicked. One line up, we see that the clicked button was btnStart, which called the Test function, which on line 77 created a TStringList. Now, let’s go into the source code and see if this is true:

 

 

Whad’Ya Know? The Test() function does indeed create and return a TStringList object. And that happens exactly on line 77 (I marked the spot with number 1).

We also see that Test() is called in btnStartClick, exactly as FastMM predicted.

 

If we look at the second marker, we can see that indeed we forgot to free the TSL variable.

Case solved. Bruce Willis (this time with help from FastMM) saves everyone!

 

Bonus: If we look carefully at one of the leaked blocks, we can see the leaked data (the “Some text” string) in it, first as hexadecimal, then as characters.

 

 

Cool, right? Looking at the leaked blocks can sometimes help us find the source of the leak – if we are lucky enough to recognize the content of our strings/objects.

Let’s see another example

Let’s say that we leak a TStringList object again, but this time FastMM says that we leak that object 10 times:

 

 

Ignore the first line (UnicodeString x 10) – these 10 strings are just part of the TStringList objects we leak. If we fix the TStringList, the other leak will disappear automatically.

 

Let’s look into the code to see where we create 10 TStringList objects and never free them:

 

 

Super easy. Right? Babies and candies all over…

Information overload

We can use GetMemoryMap to get details about how our application uses memory:

procedure GetMemoryMap(var AMemoryMap: TMemoryMap);

 

The result is returned in the parameter, and each entry has one of the following meanings:

  • csUnallocated – Free.
  • csAllocated – In use by the process.
  • csReserved – Reserved for future use by the process.
  • csSysAllocated – In use by the operating system.
  • csSysReserved – Reserved for future use by the operating system.
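
Here is a minimal sketch of how the map could be summarized, assuming FastMM4’s declarations (TMemoryMap is an array of TChunkStatus values, one entry per 64KB chunk of the address space):

uses
  FastMM4;

procedure PrintMemoryMapSummary;
var
  Map: TMemoryMap;
  Chunk: TChunkStatus;
  Counts: array[TChunkStatus] of Cardinal;
begin
  GetMemoryMap(Map);
  FillChar(Counts, SizeOf(Counts), 0);
  for Chunk in Map do
    Inc(Counts[Chunk]);
  // Each entry of the map describes one 64KB chunk of address space.
  Writeln('Allocated by the process: ', Counts[csAllocated]    * 64, ' KB');
  Writeln('Reserved by the process:  ', Counts[csReserved]     * 64, ' KB');
  Writeln('Used by the OS:           ', Counts[csSysAllocated] * 64, ' KB');
  Writeln('Free address space:       ', Counts[csUnallocated]  * 64, ' KB');
end;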

Also, the FastMM folder features a demo that shows FastMM in action:

 

The demo gives a lot of low-level details about the blocks, the efficiency of the allocation (that section is scary), the VM layout, etc.

Another scary section is the VM info, which shows the effects of memory fragmentation on a PC with 16GB of RAM: the largest free block is under 2GB!

Tech resources & documentation
