The very first all-electronic memory was the Williams-Kilburn tube, developed in 1947 at Manchester University. It used a cathode ray tube to store bits as dots on the screen's surface. The evolution of computer memory since that time has included numerous magnetic memory systems, such as magnetic drum memory, magnetic core memory, magnetic tape drives, and magnetic bubble memory. Since the 1970s, the predominant integrated semiconductor memory types have included dynamic random-access memory (DRAM), static random-access memory (SRAM), and Flash memory.
When I think of computer memory, I think primarily of DRAM and SRAM. DRAM is the denser of the two memory types, while SRAM is the faster and serves as on-chip cache memory. These two types of semiconductor memory have been around for decades. DRAM development has been driven by density and cost, and DRAM requires refresh cycles to maintain stored information. SRAM development, on the other hand, has been driven by cell area and speed, and SRAM doesn't require refresh cycles to maintain its stored "1's" and "0's".
DRAM technology evolved from earlier random-access memory, or RAM. Prior to the introduction of DRAM, RAM was a well-known memory concept. RAM holds memory states temporarily during read/write operations, losing its contents every time the computer is turned off. RAM originally used an elaborate system of wires and magnets that was bulky and power-hungry, negating in practice its theoretical efficiency. IBM's legendary contribution, thanks to Robert Dennard, was to reduce the RAM cell to just a single transistor and a storage capacitor. The ultimate effect of Dennard's invention was that a single chip could hold a billion or more RAM cells in modern computers.
The complexity of today's DRAM technology is driven by many of the same development challenges that impact CPUs, including multi-patterning and proximity effects, as well as storage node leakage issues. DRAM development requires accurate modeling to predict and optimize such effects and to avoid yield problems. For example, challenges with bit-line (BL) mandrel spacer and mask shift can be critical in determining the BL-to-active area (AA) contact area and can result in poor yield if left unaddressed.
Identifying and correlating the specific process parameters that drive wafer-level failures is extremely difficult using wafer experimentation alone. Manufacturing test wafers during process variation studies, and measuring the resulting contact areas on wafer, is extremely time-consuming and costly. This time and expense can be avoided using advanced process modeling techniques. By modeling BL spacer thickness variation and BL mask shift simultaneously in a DoE (Design of Experiments) statistical variation study, areas of minimum contact can be identified. This process variation capability, coupled with a built-in Structure Search/DRC capability, can pinpoint the minimum contact area locations on chip. SEMulator3D® is a process modeling platform that can perform these types of studies. Using SEMulator3D, we can execute a process variation study to look at potential issues with BL mandrel spacer thickness and mask shift. Figure 1(a) shows an example of using SEMulator3D to examine the impact of BL spacer thickness and mask shift on BL/AA contact area. Figure 1(b) identifies the on-chip location of the minimum contact area.
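The shape of such a DoE study can be sketched in a few lines of Python. This is a minimal illustration only: the contact-area function below is a hypothetical geometric stand-in (the assumed active-area width and nominal spacer values are invented for the example), whereas a real study would evaluate each DoE corner in the SEMulator3D process model.

```python
# Hedged sketch of a full-factorial DoE over two process parameters:
# BL mandrel spacer thickness and BL mask shift. The contact-area model
# here is a toy geometric approximation, not a real process model.
import itertools

AA_WIDTH_NM = 20.0  # assumed active-area width (hypothetical value)

def contact_area(spacer_nm, shift_nm):
    """Toy model: contact width shrinks as spacer grows and as |shift| grows."""
    width = max(0.0, AA_WIDTH_NM - spacer_nm - abs(shift_nm))
    return width * AA_WIDTH_NM  # nm^2, assuming a rectangular overlap

# 3-level full-factorial DoE around an assumed nominal (spacer 8 nm, shift 0 nm)
spacers = [7.0, 8.0, 9.0]   # nm
shifts = [-2.0, 0.0, 2.0]   # nm
runs = [(s, d, contact_area(s, d))
        for s, d in itertools.product(spacers, shifts)]

# The worst-case corner is the one with the smallest contact area
worst = min(runs, key=lambda r: r[2])
print(f"worst case: spacer={worst[0]} nm, shift={worst[1]} nm, "
      f"area={worst[2]:.1f} nm^2")
```

The point of the sketch is the workflow, not the numbers: enumerate the parameter corners, evaluate the figure of merit at each, and flag the worst-case combination before committing to test wafers.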
Another process concern in DRAM process development is storage node contact proximity to neighboring active areas, since excessive proximity can lead to device short circuits. Tracking down the root cause of these potential shorts is difficult, yet they can cause catastrophic reliability and yield issues late in the development cycle. Accurately modeling and identifying the minimum gap between capacitor contact and AA at different z-locations, prior to tape-out, can help alleviate these future reliability and yield issues. Figure 2 illustrates BL to AA contact areas discovered during process modeling and highlights minimum gap locations that need to be addressed through process or design changes. These two examples illustrate the complicated interaction between process steps and the resulting impact on DRAM reliability and yield, along with the importance of being able to accurately model these interactions.
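The minimum-gap check described above can likewise be sketched as a scan over z-slices. The edge profiles below are hypothetical toy functions; in an actual flow, the gap at each z-location would be extracted from the 3D model (for example, via SEMulator3D's structure search) rather than computed analytically.

```python
# Hedged sketch: find the z-location of the minimum gap between a
# storage node contact and a neighboring active area. Both edge
# profiles are invented for illustration.

def contact_edge(z_nm):
    """Toy profile: contact edge tapers outward toward the top."""
    return 10.0 + 0.02 * z_nm  # x-position of contact edge, nm

def aa_edge(z_nm):
    """Toy profile: neighboring AA edge, assumed vertical."""
    return 14.0

z_slices = range(0, 101, 10)  # sample every 10 nm over a 100 nm height
gaps = {z: aa_edge(z) - contact_edge(z) for z in z_slices}

z_min = min(gaps, key=gaps.get)
print(f"minimum gap {gaps[z_min]:.2f} nm at z = {z_min} nm")
```

Because the contact tapers with height in this toy geometry, the tightest gap appears at the top of the structure; the same slice-by-slice search applied to a real model would reveal where a short is most likely to form.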
Flash memory was invented in 1984 and is capable of being erased and re-programmed multiple times. It is used for storage and data transfer in consumer devices, enterprise systems and industrial applications. Flash memory retains data for an extended period of time, regardless of whether a flash-equipped device is powered on or off. Flash memory has now been transformed from a 2D technology to a 3D technology (3D NAND), providing an increase in memory density.
A single-tier 3D NAND structure is complex to etch, since a very high aspect ratio hole must be etched in an alternating set of materials. In addition, bowing and tilting of the hole must be avoided during the etch process. There is an additional requirement to create a “slit” etch to separate neighboring memory cells. 3D NAND structures have the added complexity of a “staircase” etch that is required to form the word-line (WL) contacts. A completed 3D NAND array, modeled in SEMulator3D, is shown in Figure 3. It illustrates the structural complexity of a state-of-the-art 3D NAND memory design – and this is a simple single tier structure.
Process complexity increased dramatically during the transition from a 2D to a 3D Flash memory structure, since the 3D structure requires a multi-tier pillar-etch operation. Most 3D NAND memory stacks are now two tiers high, which adds an additional concern of top tier to bottom tier misalignment. The issues and concerns of a multi-tier 3D NAND pillar etch are shown in Figure 4.
This figure displays an example of tier misalignment and the resulting pillar etch offset. This type of misalignment can be caused by process variability and must be accounted for in any 3D NAND process development project. From this example, it can be seen that tier-to-tier alignment plays a critical role in creating a robust multi-tier 3D NAND memory cell. As in our DRAM example, DoE statistical variation studies that model 3D NAND multi-tier alignment errors can be run in SEMulator3D, enabling corrective action to be taken without the time and expense of wafer-based testing.
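The effect of tier-to-tier misalignment on the pillar can be approximated with simple geometry: if the top- and bottom-tier holes are treated as circles, the remaining open channel shrinks as the centers drift apart. The radius and offset values below are hypothetical, and a real variation study would model the full etch in SEMulator3D rather than rely on this idealized circle-overlap calculation.

```python
# Hedged sketch: fraction of nominal pillar open area remaining under
# tier-to-tier misalignment, treating each tier's hole as a circle.
import math

def circle_overlap(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2, centers d apart."""
    if d >= r1 + r2:
        return 0.0  # no overlap: tiers completely miss each other
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2  # one circle inside the other
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

R = 50.0  # assumed pillar radius, nm (hypothetical)
nominal = math.pi * R**2
for offset in [0.0, 10.0, 25.0]:  # assumed misalignment values, nm
    frac = circle_overlap(R, R, offset) / nominal
    print(f"offset {offset:5.1f} nm -> {frac:.1%} of nominal open area")
```

Sweeping the offset this way mirrors the DoE approach above: even modest misalignment noticeably constricts the channel, which is why tier-to-tier alignment budgets must be established early in multi-tier process development.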