I presume the reason a poll should be "scientific" is so that you can draw valid conclusions from it. If it isn't "scientific", what good is it?
I have to agree with ofreen. There hasn't been a single scientific poll on this forum. They are all laced with bias and unsupported conclusions, are anecdotal, and are not repeatable in any basic functional sense. Further, volunteer polls heavily weight the extreme contributors rather than a balanced cross section of product users.
One of the first things we learn in engineering for production is that a test case of one, while it may be encouraging (if successful), is in no way a proof. While it may indeed demonstrate a successful concept, it doesn't mean all the variables are properly confined. The same is true for a single failure. As engineers (good ones, anyway), we aggressively pursue a failure to determine root cause, for fear there may be a design flaw which may affect the total population of sold products. In Mark's Dyna-S case the root cause was never determined, though it is clear by their own admission that their production quality tests are inadequate to prove the viability of all production samples. They simply don't test their product for the environment it is intended to work within. That test is left to the buyer of the product, and rather than determine the root cause of the customer-perceived failure and satisfy the buyer, they would prefer not to sell the unit at all.
What Mark's experience has affirmed for me is that Dyna is just milking a product line in which they plan no further investment of time or money. They would rather see the product die through lack of sales than invest anything in product assurance or improvement. If it results in one less sale, then so be it. It is NOT their cash cow anymore. Other product lines are more lucrative. SOHC4s are a dwindling population.
From what I can tell, the Dyna-S product has remained unchanged for 20 years. It wasn't an ideal design when new, and it still isn't today (dwell is way too long). Also, I expect the internal parts have changed at the supplier end. They may well be having a hard time finding internal parts that meet the same spec as they did 20 years ago. This usually results in a relaxation of spec boundaries for the parts that are still offered. While this should be an engineer's decision, if an engineer isn't available, the decision is made by the parts buyer, usually on advice from the supplying vendor's salesman. Then it is up to the robustness of the production test process to find the parts that are outside of proper working spec. Dyna admitted to no capability or desire to test their product at the temperatures found at the installation site.
Here's a sequence of events that may explain what has happened. It's fiction, but I have experienced similar event sequences.
Engineer gets a product spec, which states it must work in a 200F environment.
Engineer selects parts or design features that constrain the design to work reliably within those requirements.
To save a step, or the time spent heat soaking the components at final test, the engineer selects parts with tolerances far beyond the working environment.
Throughout production, it is found that heat soaking the components never finds any failures. So the production test is streamlined by eliminating that step from final-test Quality Assurance (QA). The product remains viable and reliable.
The parts vendor finds out that their component die yield is much greater if they can narrow the component temperature specs. They can either sell the parts at a cheaper rate, or sell more of them at the same rate for more profit. This doesn't mean all the parts made can't meet the old, higher spec; many of them still do. But some small population of the parts can only meet the lower spec, not the old, wider spec range of operation. The buyer accepts the cheaper parts, as they still have a max temp spec at or slightly above the 200F environment (without realizing that the internal temperatures are far higher). Production test no longer screens for elevated-temperature operation, so the part passes final test and is put on the sales/distribution shelf. Most of the products still work in customer applications. Less than 10% of the products fail at user temps, yet they still pass their final product test. Phone guy/test guy can't see a problem with the unit, declares the customer can't be satisfied, and backs out of the sale. Company loss: one unit and the time spent dealing with a customer.
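To make that last step concrete, here's a minimal sketch of the idea in Python. All the numbers are made up for illustration (the 200F figure comes from the hypothetical spec above; the internal temperature rise, part ratings, and spreads are pure assumptions, not anything from Dyna), but it shows how a narrowed part temperature rating plus a final test run only at bench temperature can let a small fraction of marginal units reach customers:

```
# Hypothetical sketch: narrowed part temp ratings + room-temperature-only
# final test => a small fraction of shipped units fail at user temperatures.
import random

random.seed(1)

AMBIENT_SPEC_F = 200.0   # environment the product spec calls for (assumed)
INTERNAL_RISE_F = 60.0   # assumed self-heating above ambient inside the unit
BENCH_TEMP_F = 75.0      # temperature at which final test is actually run

def build_lot(n, rating_mean_f, rating_spread_f):
    # Each "part" is just the max temperature it can tolerate before failing.
    return [random.gauss(rating_mean_f, rating_spread_f) for _ in range(n)]

def passes_final_test(part_rating_f):
    # Final test only exercises the unit at bench temperature.
    return part_rating_f > BENCH_TEMP_F + INTERNAL_RISE_F

def survives_in_field(part_rating_f):
    # In the field the part must tolerate the ambient spec plus internal rise.
    return part_rating_f > AMBIENT_SPEC_F + INTERNAL_RISE_F

for label, mean_rating in [("old wide spec", 300.0), ("narrowed spec", 280.0)]:
    lot = build_lot(10_000, mean_rating, 15.0)
    shipped = [p for p in lot if passes_final_test(p)]
    field_failures = [p for p in shipped if not survives_in_field(p)]
    print(f"{label}: {len(field_failures) / len(shipped):.1%} "
          f"of shipped units fail at user temps")
```

With these assumed numbers, essentially every unit passes the bench test in both cases, the old wide-spec lot fails well under 1% of the time in the field, and the narrowed-spec lot fails just under ten percent of the time, which is the "passes final test but dies on the bike" pattern described above.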
There is another event chain where a critical part, once made by multiple vendors to a wide temp. spec., becomes sole sourced. The buyer is at the mercy of that single vendor for part specs. Since they are the only game in town, they change the part specs to increase their profits. Buyer has no choice but to accept the changed parts, discontinue their product line, or activate an engineering project to redesign the product with alternative parts. Guess which one is the cheaper or more lucrative alternative?
On the Dyna side of the argument:
Most customer service people realize that some buyers take advantage of companies when they can. People buy stuff, use it, change their mind, and want to return it, "rent free", as it were. They have to "write off" these events as inevitable. It is only a trend of multiple failures (where they can find the cause) that makes them respond with real action on the product itself. I am NOT saying that Mark did this, only that it may have been perceived that way by Dyna phone personnel. If indeed Dyna has a 10% product problem, they may not realize it for a while (depending on the number of units sold, and whether they are paying attention to those statistics anymore). If/when they do realize it, it will be a fix-it-or-end-the-product-line situation.
And there you have it. Another opinion to weigh against the others.
Cheers,