Self-destructing displays lead to a lesson learned: keep an eye on the supply, not just the signal.
We were trying to add a remote data entry and display unit to our radio communications monitoring system, itself recently revised to take advantage of new 8-bit microprocessor technology (yes, this was a long time ago). We found what looked like the perfect solution: a newly released four-character, 16-segment alphanumeric display device with an on-board ASCII decoder and character-memory ASIC. Devices could be easily cascaded to achieve the desired character length. With this versatile and easy-to-code-for-in-assembly (like I said, a long time ago) display device in hand, all the remote unit needed for its interface was power and 8-bit parallel data with read/write control.
The Litronix DL1416 four-digit alphanumeric display was introduced in 1977. Source: The Vintage Technology Association
The unit achieved all its operational objectives admirably, with one glaring exception. The display devices would sometimes abruptly burn out, blistering the epoxy blob that covered the on-board ASIC and releasing "the magic smoke." There was no rhyme or reason to the timing of the failures; they did not correlate with anything our system was doing at the time. So, we consulted with the manufacturer to see if there was a reliability problem in their first-generation production.
It turned out that the problem was primarily ours in that we had inadvertently triggered a vulnerability in the vendor’s ASIC design. A cautionary note in the product data sheet mentioned that “it is highly recommended that the display and the components that interface with the display be powered by the same supply to avoid logic inputs higher than VCC.” The manufacturer admitted to us that the reason for the note was an implementation weakness in the CMOS ASIC’s input protection diodes. If the input signal exceeded the supply voltage by more than half a volt, it could trigger an SCR latch-up condition in the protection diodes, shorting VCC to ground in the ASIC.
The host system driving our remote unit had only one supply powering both the remote unit and the interface chips in the host unit. On the surface, this seemed to meet the guideline: a common supply for all the devices involved. What we failed to realize was that our cabling did not preserve this relationship at the remote unit.
The root cause was not noise on the signal lines, though. Noise glitches on those lines did not in themselves have enough energy to trigger the latch-up. The trouble was in the power integrity at the remote unit.
The remote unit’s power demands were causing a measurable IR drop in the supply voltage coming through the cabling, although not enough to inconvenience the unit’s CMOS circuits. The signal lines, on the other hand, saw almost no IR drop. Thus, the signals were consistently at a slightly higher voltage than the supply as seen at the display. Occasionally, a surge in the remote unit’s current demand (and corresponding increase in the cable’s IR drop) coincided with a noise spike on the signal lines, just enough to trigger the latch-up.
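The failure arithmetic can be sketched with a few lines of Python. All the numbers here are illustrative assumptions, not figures from the original system: a host supply of 5 V, a cable whose power conductors have some round-trip resistance, load currents at idle and during a surge, and a noise-spike amplitude on a signal line. The point the sketch makes is the one in the paragraph above: neither the IR drop nor the noise spike alone crosses the half-volt latch-up margin, but the two together do.

```python
# Illustrative sketch of the remote-unit failure mode.
# All component values below are assumed for the example.

V_HOST = 5.0        # V, supply at the host end of the cable
R_CABLE = 0.5       # ohm, round-trip resistance of the power conductors (assumed)
I_IDLE = 0.20       # A, remote unit's typical current draw (assumed)
I_SURGE = 0.45      # A, current during a demand surge (assumed)
V_NOISE = 0.35      # V, amplitude of a noise spike on a signal line (assumed)
LATCH_MARGIN = 0.5  # V, input may exceed local VCC by roughly this much before latch-up

def local_vcc(i_load):
    """Supply voltage seen at the remote unit after the cable's IR drop."""
    return V_HOST - i_load * R_CABLE

def input_excess(i_load, noise=0.0):
    """How far a logic-high input (near V_HOST, plus any noise spike)
    rises above the supply voltage the display actually sees."""
    return (V_HOST + noise) - local_vcc(i_load)

# Normal operation: the IR drop alone leaves only a small excess.
print(input_excess(I_IDLE))            # ~0.10 V above local VCC -- safe
# Noise spike alone, or current surge alone: still under the margin.
print(input_excess(I_IDLE, V_NOISE))   # ~0.45 V -- safe
print(input_excess(I_SURGE))           # ~0.23 V -- safe
# Surge AND noise spike coinciding: margin exceeded, latch-up risk.
print(input_excess(I_SURGE, V_NOISE))  # ~0.58 V -- latch-up
```

With these assumed values, only the coincidence of a current surge and a signal-line noise spike pushes an input more than half a volt above the display's local supply, which matches the intermittent, hard-to-correlate character of the failures.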
By redesigning the remote unit to buffer the input signal lines right at the displays, we were able to ensure that the signal and supply voltages the displays saw stayed in step. Problem solved and lesson learned: keep an eye on the supply, not just the signal.
One final wrinkle. A few years later, I changed jobs and the challenge presented to me on the first day at the new company was a lingering problem with their new design’s display panel devices spontaneously burning out. Yep, same device, same kind of remote powering and signaling.
Not only did my instant solution to their problem secure my reputation there, it allowed my manager to say to his budget hawks, “This is why we hire senior engineers.”
Rich Quinnell is a retired engineer and writer, and former Editor in Chief at EDN.