Kandrtech has only scratched the surface of silicon device reliability patterns.
In the 70s, when we introduced a new product design for sale, we would do a lengthy burn-in to weed out the "infant mortality" components of the design. If any board or component of the system failed during the burn-in tests, the machine's clock would reset to the beginning, so the unit would demonstrate sustained reliability before the customer received it. This was part of production QA (quality assurance).
Notes were made as to what failed and how often. Gathering that data allowed problem areas to be scrutinized and either design alterations made or part vendors disqualified from the approved buyer list. We primarily concerned ourselves with the leading edge of the "bathtub curve": in the fast-changing Silicon Valley environment of the 70s, product lines were superseded long before individual parts reached their end-of-life (wear-out) phase.
Still, as the design and components improved, the front wall of the bathtub got shorter and its edge steeper, allowing the system burn-in to be reduced from two days to one day, and so on, until eventually a one-hour burn-in was sufficient to weed out infant mortality of the electronic components. (Some critical components were pre-burned-in at a separate in-house test station, and there was even a setup to run circuit boards or sub-assemblies in a "Hot-Box" at elevated temperatures.)
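For anyone who likes to see the bathtub-curve idea in numbers, here is a rough sketch (my own illustration, not anything out of our old QA paperwork) using a Weibull failure model with a shape factor below 1 to stand in for the decreasing infant-mortality failure rate. Both parameter values are made up purely for the example:

import math

# Illustrative only: Weibull model of infant mortality. A shape below 1 gives a
# failure rate that falls with time -- the front edge of the bathtub curve.
# Both parameter values below are invented for this sketch.
shape = 0.5          # beta < 1 -> decreasing hazard (infant mortality)
scale_hours = 500.0  # eta, characteristic life in hours (assumed)

def hazard_per_hour(t):
    """Instantaneous failure rate at time t (fraction of survivors failing per hour)."""
    return (shape / scale_hours) * (t / scale_hours) ** (shape - 1.0)

def fraction_failed(t):
    """Cumulative fraction of the weak population expected to fail by time t."""
    return 1.0 - math.exp(-((t / scale_hours) ** shape))

for hours in (1, 24, 48):
    print(f"t = {hours:>2} h: failure rate {hazard_per_hour(hours):.4f}/h, "
          f"cumulative failures {fraction_failed(hours):.1%}")

With those made-up numbers, the failure rate in the first hour is about seven times the rate two days in. As components improved, that front wall got shorter and steeper still, which is what let the burn-in keep shrinking.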
The whole QA process was constantly under scrutiny. Even vendors of components got involved, offering to pre-screen parts to specific parameters or QA test them to our specs at their facility before we even received them for insertion into our sub-assemblies.
As a buyer of electronic parts, that was about the limit of what we could do, beyond ensuring that our design never operated a component beyond, or even near, its rated maximum. Operating well below maximum was the clear design target whenever possible: the less stress put on any component, the better its service life.
However, makers of silicon devices had their own issues and problems with which to contend.
Silicon chips are grown rather than assembled, and that in itself is a process that requires clean-room conditions. Once crystal ingots are grown with sufficient purity, they are sliced into wafer discs. The wafers are then masked, doped, etched, more crystal layers grown, then masked and doped again, and so on, until a wafer carries all the materials needed to support the intended function. The wafer is then cleaved into tiny squares, each glued into a plastic lead carrier where tiny wires are bonded between the chip surface and the lead-carrier connections. Then the "integrated circuit" is sealed up and sent to test. This was a 70s process.
The "process" was described as a 10 mirco meter process, which refers to the line width technology of the era and refers to how small an area can be used on a chip to route signals or power.
SEE:
http://en.wikipedia.org/wiki/File:Comparison_semiconductor_process_nodes.svg
Notice how much smaller the chip-making technology has gotten over the years? By 2010, line widths in chips were down to 22 NANOmeters.
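To put the two numbers side by side, here is some back-of-the-envelope arithmetic (my own, not from the linked chart, and it ignores everything about a process besides the line width):

# Rough comparison of a 70s 10-micrometer process with a 22-nanometer process.
# Plain unit arithmetic; real density gains depend on far more than line width.
old_feature_m = 10e-6   # 10 micrometers (70s-era line width)
new_feature_m = 22e-9   # 22 nanometers (the ~2010 figure mentioned above)

linear_ratio = old_feature_m / new_feature_m
area_ratio = linear_ratio ** 2

print(f"Lines are about {linear_ratio:.0f} times narrower")
print(f"Roughly {area_ratio:,.0f} times as many features fit in the same chip area")

That is the scale of the gap between a 70s part and anything coming off a modern production line.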
I believe I can safely say that there are NO manufacturers today still using the 10-micrometer process that was so prevalent in the 70s and 80s.
The reason for the part-making process to change is not just part density, but also part speed. You can't make a 70s-era transistor using a new process without changing its operating parameters. A smaller part has different heat-transfer and conduction characteristics, as well as speed, during operation. While you may be able to put a modern version of an old chip inside an old-style package so that it looks on the outside like the old component, at the very least SOME of the operating parameters of the new chip will differ from the old one, because the process by which it was made is different from the one used for the original part.
The point is, if for whatever reason I wanted to make a brand-new 70s-era electronic device, the internal parts would also have to come from the 70s (old stock), or I would have to do a redesign using parts available today and account for the parameters of those current parts.
Back in the 70s, if we changed a part vendor/source for a component inside the system, it had to go through another cycle of determining its infant-mortality rate before we could resume the shortened production burn-in in QA. Such tasks are expensive, and production time costs money, so there is always high resistance to increasing test times or even performing a test at all, with management finding it highly desirable to build and ship immediately due to operating-cost considerations.
Dyna has been around since the 70s, and certainly used parts from that era. I'm reasonably certain some of those parts are NOT available today in exactly the same form as they existed in the 70s.
Do they have a large pool of old stock?
Did they redesign the unit?
Did they re-qualify new replacement parts to take place of the old?
Did they do QA testing of components or units after making part substitutions?
Do they still deserve the brand loyalty that was earned in the 70s and 80s?
You decide.
FWIW