Introduction

64-bit systems sometimes lack equivalents to software that is written for 32-bit architectures. The most severe problem is incompatible device drivers. Although most software can run in a 32-bit compatibility mode (also known as an emulation mode, e.g. Microsoft's WoW64 technology), it is usually impossible to run a driver (or similar software) in that mode, since such a program usually runs between the OS and the hardware, where direct emulation cannot be employed. Many open source software packages can simply be compiled from source to work in a 64-bit environment on operating systems such as Linux; all that is needed in this case is a compiler (usually gcc) for the 64-bit machine. Currently, however, 64-bit versions of many existing device drivers are not available, so using a 64-bit operating system can become frustrating as a result.

Because device drivers in operating systems with monolithic kernels, and in many operating systems with hybrid kernels, execute within the operating system kernel, it is possible to run the kernel as a 32-bit process while still supporting 64-bit user processes. This provides the memory and performance benefits of 64-bit for users without breaking binary compatibility with existing 32-bit device drivers, at the cost of some additional overhead within the kernel. This is the mechanism by which Mac OS X enables 64-bit processes while still supporting 32-bit device drivers.

Converting application software written in a high-level language from a 32-bit architecture to a 64-bit architecture varies in difficulty. One common recurring problem is that some programmers assume that pointers have the same length as some other data type, and that quantities can therefore be transferred between those data types without losing information. These assumptions happen to be true on some 32-bit machines (and even some 16-bit machines), but they are no longer true on 64-bit machines. The C programming language and its descendant C++ make it particularly easy to make this sort of mistake [1], and differences between the C89 and C99 language standards exacerbate the problem [2].

To avoid this mistake in C and C++, the sizeof operator can be used to determine the size of these primitive types if decisions based on their size need to be made, both at compile time and at run time. Also, the <limits.h> header in the C99 standard, and the numeric_limits class in the <limits> header in the C++ standard, give more helpful information; sizeof only returns the size in chars. That used to be misleading, because the standards leave the definition of the CHAR_BIT macro, and therefore the number of bits in a char, to the implementation; however, except for compilers targeting DSPs, "64 bits == 8 chars of 8 bits each" has become the norm.

Care is also needed to use the ptrdiff_t type (in the standard header <stddef.h>) for the result of subtracting two pointers; too much code incorrectly uses "int" or "long" instead. To represent a pointer (rather than a pointer difference) as an integer, use uintptr_t where available (it is only defined in C99, but some compilers otherwise conforming to an earlier version of the standard offer it as an extension). Neither C nor C++ define the length of a pointer, int, or long to be a specific number of bits; C99, however, defines several dedicated integer types with an exact number of bits, such as int32_t and int64_t in the <stdint.h> header.
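As a minimal sketch of the pitfall and the portable alternatives just described (the variable names and the buffer are invented purely for illustration), the following C program uses uintptr_t for a pointer stored as an integer, ptrdiff_t for a pointer difference, the C99 fixed-width types, and sizeof together with CHAR_BIT to inspect the sizes at run time:

    #include <stdio.h>
    #include <stddef.h>   /* ptrdiff_t */
    #include <stdint.h>   /* uintptr_t, uintmax_t, int32_t, int64_t (C99) */
    #include <limits.h>   /* CHAR_BIT */

    int main(void)
    {
        int buffer[16];
        int *first = &buffer[0];
        int *last  = &buffer[15];

        /* Wrong on 64-bit systems: a pointer does not necessarily fit in an int.
           int bad = (int)first;  -- would truncate the address under LP64/LLP64 */

        /* Portable: uintptr_t holds a pointer represented as an integer... */
        uintptr_t addr = (uintptr_t)first;

        /* ...and ptrdiff_t holds the result of subtracting two pointers. */
        ptrdiff_t distance = last - first;

        /* C99 fixed-width types have an exact number of bits on every platform. */
        int32_t exactly32 = 0;
        int64_t exactly64 = 0;

        /* sizeof reports sizes in chars; CHAR_BIT gives the bits per char. */
        printf("char: %d bits\n", CHAR_BIT);
        printf("int: %zu, long: %zu, pointer: %zu chars\n",
               sizeof(int), sizeof(long), sizeof(void *));
        printf("addr: %ju, distance: %td\n", (uintmax_t)addr, distance);

        (void)exactly32;
        (void)exactly64;
        return 0;
    }

On a typical LP64 system this reports an 8-bit char and sizes of 4, 8, and 8 chars for int, long, and pointers respectively, whereas the commented-out cast to int would silently discard the upper half of the address.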
In most programming environments on 32-bit machines, pointers, "int" types, and "long" types are all 32 bits wide. In many programming environments on 64-bit machines, however, "int" variables are still 32 bits wide, while "long"s and pointers are 64 bits wide. These environments are described as having an LP64 data model. Another alternative is the ILP64 data model, in which all three data types are 64 bits wide, and even SILP64, where "short" variables are also 64 bits wide[citation needed]. In most cases, however, the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment without changes. Yet another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32 bits wide; "LL" refers to the "long long" type, which is at least 64 bits on all platforms, including 32-bit environments.

Many 64-bit compilers today use the LP64 model (including the native compilers for Solaris, AIX, HP-UX, Linux, Mac OS X, and IBM z/OS). Microsoft's VC++ compiler uses the LLP64 model. The disadvantage of the LP64 model is that storing a long into an int may overflow; on the other hand, casting a pointer to a long will work. In the LLP64 model, the reverse is true. These are not problems that affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of integer types. Note that a programming model is a choice made on a per-compiler basis, and several can coexist on the same OS; typically, however, the programming model chosen as the primary model of the OS API dominates.

Another consideration is the data model used for device drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for DMA. As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gigabyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an IOMMU.
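A rough way to see which data model a compiler uses, and the overflow hazard mentioned above, is the following sketch; the sizes noted in the comments assume typical implementations of each model, and the literal 5000000000 is simply an arbitrary value that needs more than 32 bits:

    #include <stdio.h>

    int main(void)
    {
        /* The combination of these sizes identifies the data model:
             ILP32: int=4, long=4, pointer=4  (typical 32-bit environments)
             LP64:  int=4, long=8, pointer=8  (e.g. Linux, Solaris, Mac OS X)
             LLP64: int=4, long=4, pointer=8  (e.g. 64-bit Windows)          */
        printf("int=%zu long=%zu long long=%zu pointer=%zu\n",
               sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));

        /* LP64 hazard: storing a long into an int may overflow. */
        long big = 5000000000L;    /* needs more than 32 bits; fits a 64-bit long */
        int narrowed = (int)big;   /* implementation-defined result under LP64 */
        printf("long %ld stored into an int becomes %d\n", big, narrowed);

        /* Casting a pointer to a long works under LP64 (both are 64 bits wide),
           but is the lossy operation under LLP64, where long is only 32 bits. */
        long as_long = (long)&big;
        printf("pointer stored in a long: %ld\n", as_long);
        return 0;
    }

Under LLP64 (for example with Microsoft's VC++ compiler), long stays 32 bits wide, so the long-into-int assignment above is harmless while the pointer-to-long cast becomes the lossy operation, mirroring the "reverse is true" remark above.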