Question

Can a scalable theory, and corresponding architectures, be developed that allow adaptation to varying upset rates and system reliability targets?

Summary

As the error rate changes, so does the amount of protection required to achieve a given reliability. For an optimal design precisely tuned to the error rate, how does the total investment in resources (area, delay, energy) scale as a function of error rate?
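As one concrete point of reference, here is a minimal sketch of that scaling for a single classical scheme: N-modular redundancy with a perfect majority voter and independent per-replica upset probability p. The function names and parameter values are illustrative assumptions, not a description of any specific architecture.

{{{
from math import comb

def nmr_failure_prob(n, p):
    """Probability that a majority of n replicas are upset
    (perfect voter, independent upsets of probability p each)."""
    m = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(m, n + 1))

def min_replicas(p, target):
    """Smallest odd replication factor whose NMR failure
    probability meets the reliability target."""
    n = 1
    while nmr_failure_prob(n, p) > target:
        n += 2
    return n

for p in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"p={p:g}: need {min_replicas(p, 1e-12)}x replication")
}}}

Under these assumptions the required replication grows roughly like log(P_target)/log(p), so the optimal resource investment shrinks quickly as the upset rate falls; whether a real architecture can track that curve is exactly the question posed above.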

Any concrete architecture will be less efficient than this theoretical minimum. How close does a given architecture come to the bound?
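One way to make "how efficient" quantitative, assuming the resource investment can be collapsed into a single scalar cost R (the symbols below are illustrative, not from any established formulation):

{{{
\eta(p) \;=\; \frac{R_{\min}(p, P_{\mathrm{target}})}{R_{\mathrm{arch}}(p, P_{\mathrm{target}})} \;\le\; 1
}}}

where R_min is the cost of the optimal design at upset rate p and R_arch is the cost the architecture actually pays to meet the same reliability target P_target.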

An architecture designed for a fixed upset rate will only operate correctly up to that design target. Furthermore, at upset rates below the target it becomes increasingly inefficient, since it continues to pay for protection it no longer needs.
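Continuing the hypothetical NMR sketch above, the over-provisioning cost of a fixed design can be made concrete:

{{{
# Uses min_replicas() from the sketch under Summary above.
target = 1e-12
n_fixed = min_replicas(1e-2, target)   # sized for a pessimistic rate
for p in (1e-3, 1e-4, 1e-5):
    n_opt = min_replicas(p, target)    # optimum at the actual rate
    print(f"p={p:g}: fixed design pays {n_fixed}x, "
          f"optimum needs {n_opt}x ({n_fixed / n_opt:.1f}x overhead)")
}}}

At p = 1e-5 the fixed design in this toy model carries 15x replication where 5x would suffice, a 3x area overhead.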

An adaptive architecture could reallocate resources to match the observed upset rate (a toy adaptation loop is sketched below). Over what range of upset rates can such an architecture scale? How efficient is it at each upset rate?
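A toy version of that adaptation loop, assuming the fabric can change its replication factor at run time and can estimate the current upset rate from error counters; every name here is a hypothetical placeholder, not a proposed design:

{{{
# Uses min_replicas() from the sketch under Summary above.
class AdaptiveRedundancy:
    """Resizes replication to track an estimated upset rate."""

    def __init__(self, target, n_max=31):
        self.target = target   # system reliability target
        self.n_max = n_max     # most replication the fabric can offer
        self.n = n_max         # start pessimistic until a rate is known

    def observe(self, upsets, exposures):
        """Re-estimate the upset rate from counters and resize."""
        p = max(upsets / exposures, 1e-15)   # floor avoids a zero estimate
        self.n = min(min_replicas(p, self.target), self.n_max)
        return self.n

ctrl = AdaptiveRedundancy(target=1e-12)
for upsets, exposures in [(120, 1e4), (35, 1e5), (4, 1e6)]:
    print(f"estimated p={upsets / exposures:g}: "
          f"use {ctrl.observe(upsets, exposures)}x replication")
}}}

The question below then asks what supporting the whole range of replication factors costs, compared to hard-wiring a single point.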

What tradeoffs, if any, will arise between flexibility and efficiency? Must the flexible architecture be less efficient than one built for a single error-rate target, and if so, by how much?

Subquestions

Relevant Scenarios

Workshop Materials

Existing Work

Comments
