And now for something completely different, brought to you by David Ackley and Daniel Cannon in their playfully thought-provoking paper: Pursue robust indefinite scalability, wherein they take a fresh look at computer architecture, starting from scratch.
What is this strange thing called indefinite scalability? Those sound like words that don't really go together:
Indefinite scalability is the property that the design can support open-ended computational growth without substantial re-engineering, in as strict a sense as can be managed. By comparison, many computer, algorithm, and network designs — even those that address scalability — are only finitely scalable because their scalability occurs within some finite space. For example, an in-core sorting algorithm for a 32 bit machine can only scale to billions of numbers before address space is exhausted and then that algorithm must be re-engineered.
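The authors' 32-bit example is easy to make concrete with a little arithmetic: a 32-bit pointer can name at most 2^32 bytes, so an in-core sort hits a hard ceiling regardless of how clever the algorithm is. A quick back-of-the-envelope check (the 4-byte integer size is an assumption for illustration):

```python
# How many 32-bit integers fit in a 32-bit address space?
BYTES_ADDRESSABLE = 2 ** 32   # 4 GiB: every byte a 32-bit pointer can name
INT_SIZE = 4                  # bytes per 32-bit integer (illustrative choice)

max_ints = BYTES_ADDRESSABLE // INT_SIZE
print(max_ints)               # about a billion — then the algorithm must be re-engineered
```

Roughly a billion numbers, and the design is done growing: that is finite scalability in the paper's sense.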
Our idea is to expose indefinitely scalable computational power to programmers using reinvented and restricted—but still recognizable—concepts of sequential code, integer arithmetic, pointers, and user-deﬁned classes and objects. Within the space of indefinitely scalable designs, consequently, we prioritize programmability and software engineering concerns well ahead of either theoretical parsimony or maximally efficient or ﬂexible tile hardware design.
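One way to get a feel for the programming model the quote hints at is a toy sketch. This is my own illustration, not the authors' actual API: each "event" may only read and write a small bounded window of the world, so the grid can grow without the update rule ever being re-engineered — which is the essence of the indefinite-scalability claim.

```python
import random

random.seed(0)
GRID = 16  # demo world; in principle the grid keeps growing without code changes

def window(grid, x, y):
    """The bounded 'event window': the only state an event may touch."""
    return [grid[(x + dx) % GRID][(y + dy) % GRID]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

def event(grid, x, y):
    """A toy element behavior: become live if any neighbor is live (diffusion)."""
    if any(window(grid, x, y)):
        grid[x][y] = 1

grid = [[0] * GRID for _ in range(GRID)]
grid[8][8] = 1                              # seed one live cell
for _ in range(500):                        # asynchronous, randomly ordered local events
    event(grid, random.randrange(GRID), random.randrange(GRID))
```

The point of the sketch is the restriction: no event sees a global address space, so nothing in the rule depends on the world's total size.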
It’s easy to read indefinite as “infinite” here. So what would such a machine look like? Well, they’ve built the Movable Feast Machine as an implementation of their ideas: