Talk Abstract
Underlying Future Technology talk on Computing:
Scaling Up, Scaling Down, and Scaling Back
L. Durbeck
Talk given at the dinner banquet of the American Nuclear Society's International Meeting on Mathematical Methods for Nuclear Applications,
Salt Lake City, Utah, U.S.A., September 9-13, 2001. The conference was sponsored by the Idaho chapter of
the American Nuclear Society (IANS).
Cell Matrix Corporation's core technology is, fundamentally, a convenient and elegant way to organize matter and energy to do computing. A Cell Matrix is an n-dimensional structure formed from m-sided cells, in which a small set of properties governs cell structure, cell function, and intercellular communication. This set of properties provides ways to program the cells to carry out useful computation and data processing. Cell Matrix Corporation's work to date has made it possible to translate algorithms, dataflow diagrams, and circuits onto this Cell Matrix structure through straightforward application of standard electrical engineering practices. As the technology matures, it will also become possible to program Cell Matrices using high-level software languages.
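To make the organization described above a little more concrete, the short Python sketch below models a toy two-dimensional matrix of 4-sided cells, each driven by a small truth table over its neighbors' outputs and communicating only with adjacent cells. It is a minimal illustration of the general idea, not Cell Matrix Corporation's actual cell definition; the names ToyCell and ToyMatrix, the truth-table encoding, and the synchronous update rule are assumptions introduced here for clarity.

from dataclasses import dataclass
from typing import Dict, List, Tuple

SIDES = ("N", "E", "S", "W")  # a 4-sided cell in a 2-D matrix

@dataclass
class ToyCell:
    # One output bit per side; the table maps the four incoming bits
    # (in N, E, S, W order) to the four outgoing bits.
    table: Dict[Tuple[int, int, int, int], Tuple[int, int, int, int]]
    outputs: Tuple[int, int, int, int] = (0, 0, 0, 0)

    def step(self, inputs: Tuple[int, int, int, int]) -> Tuple[int, int, int, int]:
        return self.table.get(inputs, (0, 0, 0, 0))

class ToyMatrix:
    """A synchronous grid of identical, locally connected toy cells."""

    def __init__(self, cells: List[List[ToyCell]]):
        self.cells = cells
        self.rows = len(cells)
        self.cols = len(cells[0])

    def neighbour_inputs(self, r: int, c: int) -> Tuple[int, int, int, int]:
        # A cell's N input is the S output of the cell above it, and so on;
        # sides on the edge of the matrix read a constant 0.
        def facing_output(rr: int, cc: int, side: str) -> int:
            if 0 <= rr < self.rows and 0 <= cc < self.cols:
                return self.cells[rr][cc].outputs[SIDES.index(side)]
            return 0
        return (facing_output(r - 1, c, "S"),
                facing_output(r, c + 1, "W"),
                facing_output(r + 1, c, "N"),
                facing_output(r, c - 1, "E"))

    def step(self) -> None:
        # Compute every cell's next outputs from the current outputs, then update all at once.
        nxt = [[cell.step(self.neighbour_inputs(r, c))
                for c, cell in enumerate(row)]
               for r, row in enumerate(self.cells)]
        for r, row in enumerate(self.cells):
            for c, cell in enumerate(row):
                cell.outputs = nxt[r][c]

# Example: a cell "programmed" as a wire that copies its west input to its east output.
wire = ToyCell(table={(n, e, s, w): (0, w, 0, 0)
                      for n in (0, 1) for e in (0, 1)
                      for s in (0, 1) for w in (0, 1)})

In this toy model, programming a cell simply means loading its truth table; the self-configuration behavior of the real architecture, in which circuits can analyze and modify other circuitry, is deliberately left out.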
The Cell Matrix computing architecture is fault tolerant, scalable, distributed, and massively parallel. The cells are programmable, gate-level processors. Circuits implemented on a Cell Matrix can form self-organizing, organic systems that analyze and modify their own circuitry and that of other circuits. Problems that require months to solve on today's computers can be implemented directly in hardware and distributed not over time but over space and materials, drastically reducing their execution times. The architecture supports a "one problem, one machine" model of problem-solving, in which the hardware is tailored exactly to the specifications of the problem being solved and is adapted or modified when the problem specifications or operating conditions change. This architecture provides a more natural starting point for many large parallel problems that are difficult, slow, or expensive to solve today. It is a particularly convenient platform for large problems that require distributed, massively parallel processing and that can be formulated so that computations, operations, and data sharing are local rather than global.
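As a generic example of the kind of locally formulated problem mentioned above, the sketch below performs Jacobi relaxation sweeps in which every interior grid point is updated from its four immediate neighbors only. This is ordinary Python, not Cell Matrix software; the function name jacobi_sweep and the 5x5 test grid are illustrative assumptions. The point is simply that a problem phrased with purely local data sharing maps naturally onto a spatial array of processing cells, one small region of the problem per region of hardware.

def jacobi_sweep(u):
    """Return one relaxation sweep over grid u (a list of lists of floats),
    where each interior point is replaced by the average of its four
    immediate neighbors; boundary values are left unchanged."""
    rows, cols = len(u), len(u[0])
    new = [row[:] for row in u]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Purely local update: only the N, S, E, W neighbors are read.
            new[r][c] = 0.25 * (u[r - 1][c] + u[r + 1][c] +
                                u[r][c - 1] + u[r][c + 1])
    return new

# Example: a 5x5 grid with fixed boundary values of 1.0 and an interior of 0.0.
grid = [[1.0] * 5] + [[1.0, 0.0, 0.0, 0.0, 1.0] for _ in range(3)] + [[1.0] * 5]
for _ in range(10):
    grid = jacobi_sweep(grid)

On a conventional processor the sweeps above run one grid point at a time; on a spatial architecture the same local update rule could, in principle, be carried out by every cell at once.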
In this talk I will present a view of how the full potential of this computing architecture may unfold, what important events and milestones will occur along the way, and what the intermediate and eventual outcomes could be.