Unthinkable only a few years ago, tumbling memory prices and increasing hard drive capacities have given PC users the ability to acquire gigabytes of memory at relatively low cost. In this hardware environment, users attempt to access large areas of contiguous memory in IDL, but fail for various reasons.

This document outlines some of the limitations affecting memory allocation and how they relate to IDL. While it focuses on the Windows operating system, most of the issues described here apply to any modern operating system. This Tech Tip is particularly useful for users wondering why, on 32-bit Windows, they get "Unable to allocate memory" errors from calls that initialize single IDL arrays requiring 1 GB to 2 GB of storage space.

Limiting Factors

When a process requests memory from the operating system, various factors determine if the request will be successful or not. The major factors that can cause a request to fail include:

1. Available Memory
The amount of physical storage space on the system allocated for memory.

2. OS Limits
The operating system has limits to the amount of memory that it can support.

3. Memory Fragmentation
Affects the size of contiguous memory blocks available to the process.

Available Memory

The Problem

This is a fairly straightforward issue: if the amount of memory requested exceeds the amount available, the request will fail. To determine the amount of free memory, the operating system takes into account the following factors:

Physical Memory
The amount of physical memory (RAM) available to the system.

Virtual Memory
The amount of virtual memory available on the system. Modern operating systems can use areas on hard drives to present a set of memory to the system that exceeds the amount of physical memory.

Memory systems on modern operating systems, called Virtual Memory Managers (VMMs), segregate memory into blocks, or pages, of a specific size. Pages that aren't being accessed by the current process are copied to the hard drive's page file (sometimes referred to as a "swap file") and retrieved into physical memory when requested. This allows the memory pool used by a process to exceed actual physical RAM.

Free Memory
While the system might have a large amount of memory available, if that memory is in use, the memory request will fail. The operating system, other processes, and other memory requested by the application affect the pool of available memory. If a memory request fails, closing other running applications could release enough memory to allow it to succeed.

The Solution

  • Exit any executing applications.
  • Purchase more memory for the system.
  • Increase the virtual memory manager’s page file size.
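The failure described above is visible directly at the point of allocation. A minimal C sketch (illustrative only, not IDL source) of checking whether a single large request succeeded:

```c
#include <stdio.h>
#include <stdlib.h>

/* Attempt one large contiguous allocation and report the outcome.
   Returns 0 on success, -1 on failure. */
static int try_large_alloc(size_t request)
{
    void *block = malloc(request);
    if (block == NULL) {
        /* The OS could not satisfy the request: not enough free
           (physical + virtual) memory, or no contiguous region of
           that size in the process's address space. */
        fprintf(stderr, "Unable to allocate %zu bytes\n", request);
        return -1;
    }
    printf("Allocated %zu bytes\n", request);
    free(block);
    return 0;
}
```

The request size at which this starts returning -1 is exactly the threshold at which IDL array creation reports an allocation error.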

Operating System Limits

The Problem

Another major factor that can affect the amount of free memory is operating system limits: in particular, the size of the address space available to a process.

To present a uniform memory model to all executing processes, modern operating systems abstract the memory addresses used by the system's virtual memory manager. This abstraction presents each process with the same memory address space, while allowing each process to access different elements of memory in the VMM.

This memory abstraction creates a range of available memory addresses that is referred to as an address space. The address space has a specific range of values and it is the limits of this range that restricts the amount of memory available to the executing process.

The range of an address space is defined by the native word size of the operating system. For Windows NT-based systems, this value is 32 bits, which corresponds to an address space of 2^32 bytes, or approximately 4 gigabytes of memory. Thus, all processes on Windows NT platforms are limited to accessing at most 4 gigabytes of memory. (This limit expands to a range of 2^64 with the introduction of Win64 platforms.)
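The arithmetic behind that limit follows directly from the pointer width, as this small C sketch illustrates:

```c
#include <stdint.h>

/* A 32-bit pointer can name 2^32 distinct byte addresses:
   2^32 = 4,294,967,296 bytes, i.e. 4 gigabytes.
   A 64-bit pointer can name 2^64 addresses. */
static unsigned address_bits(void)
{
    return (unsigned)(sizeof(void *) * 8);
}

static uint64_t addressable_bytes_32bit(void)
{
    return (uint64_t)1 << 32;   /* 4,294,967,296 */
}
```

A process built for 32 bits reports 32 from address_bits() even on 64-bit hardware, which is why moving to a Win64 platform requires a 64-bit build, not just a bigger machine.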

This is not the only limitation placed on processes on Windows platforms, though. System addresses are mapped into this address space, so the available address space is further reduced. The amount of address space utilized by the system depends on the version of Windows NT being used.

For Windows NT and 2000 Workstation and some versions of Windows NT Server, the upper 2 gigabytes of the address space is reserved for the system. This leaves only 2 gigabytes available to the process for use. For certain Windows Server-class systems (including versions of NT/2000/2003 Server and also Windows XP Professional), the system can be configured such that only the upper 1 gigabyte of the address space is reserved for the system, leaving 3 gigabytes of address space available for use by the process.

These limitations are summarized in the following table. (Please consult your system administrator, system documentation or Microsoft for information about configuring your system to use more than 2 GB of memory for process use.)

Operating System: Available Address Space
Windows NT Workstation and Windows Server-class systems: 2 Gigabytes
Certain Windows Server-class systems (including NT/2000/2003 Server and XP Professional): 3 Gigabytes

To the best of our knowledge, Windows XP has these same limits. As you will read below, IDL, as a window-based application, does not have access to this full 2 to 3 GB in one contiguous block.

The Solution

The only solution to this type of limitation is a change of operating system. On Windows, a user could move to Windows NT Server, Enterprise Edition (or one of the other editions mentioned above), or to a 64-bit version of Windows. A user could also try another platform, such as Linux (32-bit) or Solaris (64-bit).

Memory Fragmentation

The Problem

Even when a sufficient amount of free memory is available and the memory request is below the limits set by the operating system, the request can still fail. Often this results from memory fragmentation.

When memory is requested from the system, a contiguous free space of the requested size must exist in the process’s address space. If such a free space does not exist, the request will fail, even if the total free memory is greater than the request.

The larger the memory allocation request, the greater the chance that this failure will occur. The scenario is exacerbated by memory fragmentation. Memory fragmentation takes place when an allocated block of memory divides a region of the process's address space, reducing the maximum size of any free block. The following diagram demonstrates this issue.

An indication that memory fragmentation is taking place is when a request for a large block of memory fails, but several smaller requests totaling the same amount succeed.

Memory fragmentation is often the result of software coding scenarios like the following:

  • A large block of memory is allocated.
  • A smaller block of memory is allocated. This allocation places that memory above the large block of memory in the address space.
  • The large block of memory is released to the system.

This can occur often when working with large dynamic memory blocks, so developers must be aware of the issue when designing and implementing software.
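One way to observe the effect described above is to probe for the largest single allocation that currently succeeds. A C sketch (illustrative, not an IDL or Windows utility) that binary-searches for that size:

```c
#include <stdlib.h>

/* Binary-search the largest single malloc() that currently succeeds
   within [lo, hi]. If the result is much smaller than the total free
   memory reported by the system, the address space is fragmented. */
static size_t largest_block(size_t lo, size_t hi)
{
    while (lo < hi) {
        size_t mid = lo + (hi - lo + 1) / 2;
        void *p = malloc(mid);
        if (p != NULL) {
            free(p);
            lo = mid;       /* mid bytes are available in one block */
        } else {
            hi = mid - 1;   /* request too large; shrink the range */
        }
    }
    return lo;
}
```

Note that each probe is freed immediately, so the search itself does not change the layout much. On systems that overcommit memory (such as Linux by default), malloc() can succeed without committing pages, so this probe measures contiguous address space rather than usable RAM.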

The Solution

The key to resolving this type of issue is to be aware of it during application development and to allocate memory in the correct order: persistent allocations first, then transient allocations (particularly the large ones), which should be freed at the earliest possible moment.
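The ordering rule can be sketched in C as follows (an illustrative pattern; the names and sizes are invented for the example):

```c
#include <stdlib.h>
#include <string.h>

/* Persistent state: lives for the whole run, so allocate it first,
   before any large transient buffers can fragment the address space. */
typedef struct {
    double *results;
    size_t  n;
} session_t;

static int process(session_t *s, size_t n)
{
    s->results = malloc(n * sizeof *s->results);   /* persistent first */
    if (s->results == NULL) return -1;
    s->n = n;

    double *scratch = malloc(n * sizeof *scratch); /* transient, large */
    if (scratch == NULL) { free(s->results); return -1; }

    for (size_t i = 0; i < n; i++)
        scratch[i] = (double)i;                    /* stand-in for real work */
    memcpy(s->results, scratch, n * sizeof *scratch);

    free(scratch);  /* release the transient block immediately */
    return 0;
}
```

Because the transient block is freed before anything else is allocated above it, it does not leave a hole that splits the remaining free space.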

Forced Memory Fragmentation in Window-Based Windows Applications


It is important to note that the Windows operating system forces major memory fragmentation on window-based applications the moment they are loaded. In the case of IDL, because it is an application built with the MFC (Microsoft Foundation Classes) libraries, this fragmentation automatically reduces the maximum malloc() allowed in an IDL process to approximately 1.2 GB. That is to say, the IDLDE, right after it loads, is unable to build an array that occupies more than 1.2 GB, because each array, and the malloc() call behind it, requires a single contiguous block of memory.

NOTE: The numbers in the examples below refer to the behavior of standard Windows 2000/NT/XP, which most of our Windows users run. Users of the "Enterprise" editions probably have access to an additional 1 GB of malloc() capability.

The explanation for this lies in the way system DLLs (Dynamic Link Libraries) are loaded in Windows. Applications coded against the Win32 API or with the Microsoft Foundation Classes (the chief libraries supporting Microsoft Visual C++ development) need the OS to load the DLLs for that Windows code the moment they initialize.
These DLLs are loaded from the top of virtual storage (higher addresses), reducing the amount of space left for the heap. Furthermore, on most systems a number of other DLLs load automatically with each Windows process at locations well above the bottom of virtual storage. These might be DLLs supporting display graphics, for example, and they tend to request a specific address, most commonly 0x10000000 (256 MB), chopping off a few hundred megabytes of contiguous memory at the bottom of virtual memory.

The space left between these DLLs and the start of the virtual memory addresses for the MFC DLLs is little more than 1.2 GB at IDL start. (A simple DOS console application, having no need for space-consuming DLLs, can make malloc() calls that acquire up to 1.9 GB of contiguous memory, but its graphics functionality would be limited to console text output and a command-prompt user interface.)


The Microsoft developer tool Visual C++ includes a utility called editbin.exe, which can potentially win an IDL process a couple of hundred megabytes of additional contiguous virtual memory at process start. This utility, run at the DOS command line, has a "/rebase" option that lets end users force any application DLL to move from its default address to an address of the user's choice.

With this utility, one can "pack" DLLs that load at too high a starting address down into the lowest virtual memory addresses the system will allow.

This is not likely to be a good solution for an IDL application that must routinely run with very large array allocations. Rather, it is a utility you might use when you have a one-time IDL process you want to test, you believe it is failing by just 100-200 MB of malloc() capability, and you have exhausted the other options suggested above.

Memory Allocation in IDL

IDL's memory allocation system leaves main memory accounting and management to the operating system (via malloc() and free()). For efficiency in some areas, however, IDL does utilize several special subsystems and methodologies to manage internal memory. Of particular interest are the following:

Temporary Memory

This system is core to the interpreter and is used to allocate memory in a system routine called from the interpreter. When memory is allocated using this system, the interpreter ensures that the memory is returned when the called routine returns. This prevents memory leaks from occurring in system routines.

Allocating in Blocks

In subsystems that use a common record size and that are heavily exercised, IDL uses a memory pool. This pool allocates multiple records at a time, reducing the overhead of repeated allocation calls. The system also maintains a list of free records, so returned records are recycled rather than released. This methodology is typically used for small records, such as those in IDL's widget system and pointer/object heap system.
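A minimal C sketch of the general pool-with-free-list technique described above (illustrative only; this is not IDL's internal implementation, and the record size and block size are invented):

```c
#include <stdlib.h>

#define POOL_BLOCK 64   /* records obtained per malloc() call */

typedef union record {
    union record *next;  /* valid while the record is on the free list */
    char payload[32];    /* the record's actual contents */
} record_t;

typedef struct {
    record_t *free_list;
} pool_t;

static record_t *pool_get(pool_t *p)
{
    if (p->free_list == NULL) {
        /* One malloc() provides POOL_BLOCK records at once,
           reducing the number of allocation events. */
        record_t *blk = malloc(POOL_BLOCK * sizeof *blk);
        if (blk == NULL) return NULL;
        for (int i = 0; i < POOL_BLOCK; i++) {
            blk[i].next = p->free_list;
            p->free_list = &blk[i];
        }
    }
    record_t *r = p->free_list;
    p->free_list = r->next;
    return r;
}

static void pool_put(pool_t *p, record_t *r)
{
    r->next = p->free_list;  /* recycle rather than free() */
    p->free_list = r;
}
```

Because released records go back on the free list instead of to the OS, a subsequent pool_get() is a pointer update rather than a malloc() call.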

Central Allocation

All memory in IDL is allocated and released through a central set of routines. These routines fulfill memory requests using the system memory allocation functions (malloc() and free()) and also keep a count of the memory allocated by the system. This count provides the information displayed by IDL's "HELP, /MEMORY" command.
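The bookkeeping idea can be sketched in C as a counting wrapper around malloc()/free(). (This is a sketch of the general technique only; the size-header scheme is an assumption for the example, not IDL's actual code.)

```c
#include <stdlib.h>

static size_t bytes_in_use = 0;   /* what a HELP,/MEMORY-style report reads */

static void *counted_alloc(size_t n)
{
    /* Store the request size in front of the block so counted_free()
       can subtract it later. */
    size_t *p = malloc(sizeof(size_t) + n);
    if (p == NULL) return NULL;
    *p = n;
    bytes_in_use += n;
    return p + 1;   /* hand the caller the region after the header */
}

static void counted_free(void *ptr)
{
    if (ptr == NULL) return;
    size_t *p = (size_t *)ptr - 1;  /* step back to the size header */
    bytes_in_use -= *p;
    free(p);
}
```

Because every request and release funnels through one pair of routines, the running total stays accurate without each subsystem doing its own accounting.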

Correct Development

The IDL engineering team makes every effort to ensure that proper memory allocation methodology and procedures are followed when implementing IDL functionality. Efficiency with respect to speed and memory usage is key to IDL's success, and it is rigorously checked with every product release.

These systems improve the efficiency of memory usage in IDL, but do nothing for overall memory allocation management. That task is left to the operating system, which is more efficient at it. As such, all of the aforementioned memory allocation issues can and will affect the memory available to the IDL user, who should be aware of them and take appropriate action as warranted.
