QDR-IV has a single address bus running at double data rate and at a high frequency.
Data bus inversion performs a similar function on the data lines. However, the inversion bits are generated by the memory controller during a memory write operation, while the inversion logic within the QDR-IV memory generates them during a memory read operation.
The DINVA and DINVB pins indicate whether the corresponding DQA and DQB pins are inverted. DINVA and DINVB are active-HIGH signals. When DINV = 1, the data bus is inverted; when DINV = 0, the data bus is not inverted.
DINVA and DINVB are independent and control their respective DQ groups. DINVA[0] covers DQA[17:0] in the x36 configuration and DQA[8:0] in the x18 configuration; DINVA[1] covers DQA[35:18] in the x36 configuration and DQA[17:9] in the x18 configuration. Similarly, DINVB[0] covers DQB[17:0] in the x36 configuration and DQB[8:0] in the x18 configuration; DINVB[1] covers DQB[35:18] in the x36 configuration and DQB[17:9] in the x18 configuration.
Table 1 lists the DINV-bit descriptions and DINVA setup conditions for the x18 and x36 QDR-IV options.
Table 1: Data bus inversion conditions
Note: Similar inversion logic applies to DINVA[1], DINVB[0], and DINVB[1] for their respective DQ groups.
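Table 1 itself gives the exact setup conditions, so the sketch below is an assumption: it models the decision as the majority-zeros rule, which is consistent with the x18 worked example later in this article (9'h007, with six 0s, is inverted; 9'h1F3 is not). The function names and the group-mapping helper are illustrative, not part of the device specification.

```python
# Hedged sketch of the data-bus-inversion decision for one DINV group.
# Assumption: invert when more than half of the bits in the group are 0
# (matches the x18 example: 9'h007 -> inverted, 9'h1F3 -> unchanged).

def dinv_groups(width):
    """DQA bit ranges covered by DINVA[0] and DINVA[1] for a given bus width."""
    if width == 18:
        return {"DINVA[0]": range(0, 9), "DINVA[1]": range(9, 18)}
    if width == 36:
        return {"DINVA[0]": range(0, 18), "DINVA[1]": range(18, 36)}
    raise ValueError("QDR-IV data bus is x18 or x36")

def encode(word, nbits):
    """Return (transmitted word, DINV bit) for one DINV group."""
    mask = (1 << nbits) - 1
    zeros = nbits - bin(word & mask).count("1")
    if zeros > nbits // 2:          # majority of bits are 0 -> invert
        return (~word) & mask, 1
    return word & mask, 0
```

For example, `encode(0x007, 9)` yields `(0x1F8, 1)`, while `encode(0x1F3, 9)` leaves the word unchanged with DINV = 0.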
Example of x18 device
Without data bus inversion

Assume that you want to transmit 9'h007 followed by 9'h1F3 on DQA[8:0]. As a result, six data pins must switch logic levels between the first and second DQA[8:0] transfers, as shown in Table 2 (red cells). This switching increases the switching noise, I/O current, and crosstalk on the data pins.
Table 2: Data bus order
With data bus inversion

According to Table 1, the first DQA[8:0] word satisfies the inversion condition. Therefore, before the memory controller transmits the first DQA[8:0] word, it inverts the data (9'h007 --> 9'h1F8) and sets the DINVA pin to 1. Because the second DQA[8:0] word does not need to be inverted, the memory controller transmits it unchanged and sets DINVA to 0.
Table 3 shows the result with data bus inversion. In this case, only three data pins must switch logic levels (see the red cells). Hence, the total number of switching bits is reduced to three, which reduces SSO noise, I/O current, and crosstalk.
Table 3: Data bus order with bus inversion
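The switching-bit counts in Tables 2 and 3 can be reproduced with a short calculation; the helper below simply counts the bit positions that toggle between two consecutive 9-bit transfers.

```python
# Worked x18 example from Tables 2 and 3: count how many DQA[8:0] pins
# toggle between two consecutive transfers, with and without inversion.

def toggles(a, b):
    """Number of bit positions that switch between two 9-bit words."""
    return bin((a ^ b) & 0x1FF).count("1")

raw = toggles(0x007, 0x1F3)   # without inversion: 6 pins switch
dbi = toggles(0x1F8, 0x1F3)   # 9'h007 inverted to 9'h1F8 (DINVA = 1): 3 pins
print(raw, dbi)               # prints "6 3"
```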
QDR-IV has a single address bus running at double data rate and at a high frequency. Therefore, the address parity input (AP) and address parity error flag output (PE#) pins provide an address parity feature in the chip to ensure the integrity of the address bus. The AP function is optional; you can enable or disable it using the configuration registers.
The AP pin provides even parity across the address pins (An to A0): set the AP value so that the total number of 1s across AP and An to A0 is even. For a x18 data-bus-width device, set AP so that the number of 1s in A[21:0] and AP is even. For a x36 data-bus-width device, set AP so that the number of 1s in A[20:0] and AP is even.
Example of x36 device
Let us take two addresses, 21’h1E0000 and 21’h1F0000, for a x36 data bus-width device. Table 4 shows how to set an AP value for each address.
Table 4: Address parity
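The AP values in Table 4 follow directly from the even-parity rule; a minimal sketch for the x36 case (21-bit address bus) is:

```python
# Even-parity AP computation for the two x36 addresses in Table 4.
# If the address already has an even number of 1s, AP = 0; otherwise AP = 1.

def address_parity(addr, nbits=21):
    """AP bit that makes the 1s count across AP and A[nbits-1:0] even."""
    return bin(addr & ((1 << nbits) - 1)).count("1") % 2

print(address_parity(0x1E0000))   # 21'h1E0000 has four 1s  -> AP = 0
print(address_parity(0x1F0000))   # 21'h1F0000 has five 1s  -> AP = 1
```

For a x18 device the same function applies with `nbits=22`, matching the A[21:0] rule above.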
When a parity error occurs, the complete address of the first error is recorded in configuration registers 4, 5, 6, and 7 (refer to the relevant datasheet for more information on the configuration registers), along with the Port A/B error bit and the address invert bit. The Port A/B error bit indicates which port the address parity error occurred on: 0 for Port A and 1 for Port B. This information remains latched until it is cleared by writing a 1 to the address parity error clear bit in configuration register 3.
Two counters indicate whether multiple address parity errors have occurred. The Port A error count is a running count of the number of parity errors on Port A addresses; similarly, the Port B error count is a running count of the number of parity errors on Port B addresses. Each counter independently counts up to a maximum value of 3 and then stops counting. Both counters are reset by writing a 1 to the address parity error clear bit in configuration register 3.
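The latch-and-saturate behavior described above can be modeled in a few lines. This is a behavioral sketch only; the class and field names are illustrative and do not reflect the actual configuration-register layout.

```python
# Behavioral sketch of the parity-error bookkeeping: the first failing
# address is latched, and the per-port error counters saturate at 3.

class ParityErrorLog:
    def __init__(self):
        self.first_error = None            # (port, address) of first error
        self.count = {"A": 0, "B": 0}      # per-port saturating counters

    def record(self, port, address):
        """Latch the first error and bump the port's counter (max 3)."""
        if self.first_error is None:
            self.first_error = (port, address)
        self.count[port] = min(self.count[port] + 1, 3)

    def clear(self):
        """Model writing 1 to the address parity error clear bit."""
        self.first_error = None
        self.count = {"A": 0, "B": 0}
```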
As soon as an address parity error is detected, the write operation is ignored to prevent memory corruption. A read operation, however, continues with the incoming incorrect address, and invalid data is returned from the memory.
PE# is an active-LOW signal that indicates an address parity error. PE# is asserted (driven LOW) within eight cycles (in QDR-IV XP SRAMs) or five cycles (in QDR-IV HP SRAMs) after the address parity error is detected. It remains asserted until the error is cleared through the configuration registers. The address parity check is performed after address inversion is processed.
As soon as PE# goes LOW, the controller must stop memory operations and reset PE# to HIGH using the configuration registers. In addition, the controller must rewrite the data to memory, because the device blocked the earlier write operation when it detected the AP error.
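The recovery sequence above can be sketched as follows. The `QdrIvController` class and its methods are illustrative placeholders for the controller's register accesses, not a real controller API; the latched error values are stand-ins for whatever the device actually recorded.

```python
# Hedged sketch of the controller's response once PE# goes LOW:
# read the latched error, clear it via the configuration registers,
# then replay the write(s) the device ignored.

class QdrIvController:
    """Placeholder for the memory controller's register interface."""
    def __init__(self):
        self.replayed = []

    def read_error_registers(self):
        # Stand-in for reading configuration registers 4-7.
        return ("A", 0x1E0000)             # (port, failing address)

    def clear_parity_error(self):
        # Stand-in for writing 1 to the clear bit in configuration
        # register 3, which deasserts PE#.
        pass

    def write(self, address, data):
        self.replayed.append((address, data))

def recover(ctrl, blocked_writes):
    """Stop traffic was already done; clear the error and redo writes."""
    port, addr = ctrl.read_error_registers()
    ctrl.clear_parity_error()
    for address, data in blocked_writes:
        ctrl.write(address, data)
    return port, addr
```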
Pritesh Mandaliya is a staff applications engineer in the Memory Product Division of Cypress Semiconductor. He holds a master’s degree in electrical engineering from San Jose State University.
Anuj Chakrapani is a senior applications engineer at the Memory and Imaging Division of Cypress Semiconductor. His responsibilities include creating behavioral simulation models of SRAMs, board-level failure-analysis debug, system-level testing, and applications support for customers. Anuj holds a master's degree in electrical engineering from Arizona State University (Tempe).
First published by EDN.