The research focus at Waran Research FoundaTion (WARFT) is the discovery and testing of brain-related drugs. This effort demands inter- and multi-disciplinary research and enormous resources. Thus, WARFT conducts its research through seven research groups - Charaka, Vishwakarma, Marconi, Ramanujan, Hardy, Naren and Bhaskara. Each group pursues a distinct line of research while interacting actively with the others.
Though clinical experimentation is more realistic and of paramount importance in designing drugs, neuronal-network-based simulation is equally important for speeding up the clinical process of pinning down drug composition. Drug design and testing, including the study of side effects, require energetics-based neuron models and knowledge of large-scale, biologically realistic neuronal interconnectivity specific to a brain region. Further, biological modeling of brain diseases essentially needs the interconnectivity pattern across the neuronal structure. Clinical research will be more efficient if predictive methods are evolved that use experimental data to arrive at the possible faults characterizing a disease. Such interconnectivity prediction is bound to help clinical researchers as they bring in their data and expertise.
Multi-Million Neuron interconnectivity - Dendrite, Axon, Soma and Synapse (MMINi-DASS) - the flagship project of WARFT, belonging to the CHARAKA group, endeavors to facilitate the discovery of brain drugs through systematic, detailed models of single neurons and through interconnectivity prediction for specific regions of the brain. Upon completion, the MMINi-DASS project will assist clinical research and thereby expedite drug discovery.
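At its core, interconnectivity prediction reasons over very large directed graphs of neurons. As a minimal sketch (the function and data names here are illustrative, not part of MMINi-DASS), a predicted interconnectivity pattern can be held as an adjacency list and queried with a breadth-first traversal of the kind the graph-theoretic units described below accelerate in hardware:

```python
from collections import deque

def neurons_reachable(connectivity, source):
    """Breadth-first search over a neuronal interconnectivity graph.

    `connectivity` maps a neuron id to the list of neurons its axon
    synapses onto; returns the set of neurons reachable from `source`.
    """
    seen = {source}
    frontier = deque([source])
    while frontier:
        neuron = frontier.popleft()
        for target in connectivity.get(neuron, []):
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen

# A toy five-neuron network; the real project targets multi-million
# neuron graphs, which is what drives the computational demand.
toy_net = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(sorted(neurons_reachable(toy_net, 0)))  # -> [0, 1, 2, 3, 4]
```

In software this traversal touches memory unpredictably at every step, which is precisely the access pattern that motivates integrating memory with logic.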
The immense computational demand imposed by the MMINi-DASS project has given rise to a novel supercomputer design paradigm known as the MIP SCOC. The MIP approach attempts a very fine-grain physical and logical integration of memory and logic by incorporating the memory within the logic. In the MIP SCOC architecture, memory is physically and logically integrated with the functional units of the processor. This bit-level integration of processing logic and memory has led to the creation of basic MIP cells, from which varied classes of functional units are developed. The MIP SCOC architecture includes powerful functional units (ALFUs - Algorithm Level Functional Units) such as chain matrix adders, multipliers, sorters and multiple-operand adders, as well as graph-theoretic units such as Depth-First Search and Breadth-First Search. This introduces a higher level of abstraction through algorithm-level instructions (ALISA): a single ALISA instruction is equivalent to multiple parallel VLIW instructions. The MIP SCOC node is organized into heterogeneous multi-cores. For high-bandwidth communication across these cores, an On-Node NETwork (ONNET) architecture has been designed. This architecture uses a MIN (Multistage Interconnection Network) structure for switching instead of a crossbar, thereby increasing scalability, and its hierarchical organization of routers reduces control complexity.
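The ALISA abstraction can be made concrete with a small Python sketch. The names below are hypothetical, not WARFT's actual instruction set; the point is only that one algorithm-level instruction, such as a multiple-operand add executed by a single ALFU, subsumes the many scalar operations a conventional instruction stream would issue:

```python
def alisa_multi_operand_add(operands):
    """One algorithm-level instruction: reduce N operands in one issue.

    Inside an ALFU built from MIP cells this would be a single hardware
    reduction; here it is modeled as a plain software loop.
    """
    total = 0
    for value in operands:
        total += value
    return total

def scalar_instruction_count(operands):
    """Scalar adds a conventional core would issue for the same result."""
    return max(len(operands) - 1, 0)

values = [3, 1, 4, 1, 5, 9]
result = alisa_multi_operand_add(values)
print(result, scalar_instruction_count(values))  # 23 in 1 ALISA vs. 5 scalar adds
```

The same collapsing of many fine-grain operations into one instruction applies to the sorter, chain matrix adder and graph-theoretic units.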
The MIP SCOC architecture includes an on-chip compiler (Compiler-On-Silicon, COS) to generate the ALISA and scalar instructions that feed the ALFUs of the MIP SCOC node. The Primary COS (PCOS) partitions the incoming application libraries, and each Secondary COS (SCOS) generates and schedules the ALISA binary instructions to the respective functional units of the heterogeneous cores. A distributed control design, specific to each ALFU population type (forming the different heterogeneous cores), enables the parallel operation of a very large number of ALFUs. The complete design and fabrication of the MIP SCOC node is made feasible by a new design and implementation paradigm called CUBEMACH (Custom Built Heterogeneous Multi-Core Architectures), developed at WARFT [refer Center Page Diagram]. Cost-effectiveness is achieved because the CUBEMACH design paradigm helps create an architecture for a single user executing multiple independent applications without space-time sharing.
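The two-level PCOS/SCOS flow can be caricatured in a few lines of Python. The round-robin partitioning and type-matching policies below are placeholders, not the actual COS algorithms, and all identifiers are hypothetical:

```python
def pcos_partition(tasks, n_scos):
    """Primary COS: split the incoming work across the SCOS units.

    Round-robin is used here only as a stand-in partitioning policy.
    """
    partitions = [[] for _ in range(n_scos)]
    for i, task in enumerate(tasks):
        partitions[i % n_scos].append(task)
    return partitions

def scos_schedule(partition, units):
    """Secondary COS: bind each task to a functional unit whose type
    matches the ALFU class the task requires."""
    schedule = []
    for task in partition:
        unit = next(u for u in units if u["type"] == task["alfu"])
        schedule.append((task["name"], unit["id"]))
    return schedule

tasks = [{"name": "sum_currents", "alfu": "adder"},
         {"name": "rank_spikes", "alfu": "sorter"}]
units = [{"id": "A0", "type": "adder"}, {"id": "S0", "type": "sorter"}]
parts = pcos_partition(tasks, 2)
print([scos_schedule(p, units) for p in parts])
```

Because each SCOS schedules only its own partition, the per-core control stays simple even as the ALFU count grows, which is the intent behind the distributed control design.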
The MIP SCOC paradigm enables Simultaneous Multiple Application (SMAPP) execution in a cluster, where traces of independent applications run without space-time sharing within every node, unlike conventional approaches. Besides being flexible enough for cost sharing across multiple users, the SMAPP concept provides the requisite performance for individual applications by efficiently utilizing the heterogeneous cores across the traces of the independent applications.
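The "no space-time sharing" property of SMAPP can be illustrated with a toy allocator: each application receives its own disjoint slice of a node's heterogeneous cores for the whole run, rather than time-slicing all cores among applications. Equal slicing is used here purely as a placeholder policy, and the names are hypothetical:

```python
def smapp_allocate(applications, cores):
    """Give each application a disjoint slice of the node's cores.

    All applications then execute simultaneously, each on its own
    cores, with no space-time sharing between them.
    """
    per_app = len(cores) // len(applications)
    allocation = {}
    for i, app in enumerate(applications):
        allocation[app] = cores[i * per_app:(i + 1) * per_app]
    return allocation

apps = ["neuron_sim", "connectivity_predict"]
cores = ["core0", "core1", "core2", "core3"]
print(smapp_allocate(apps, cores))
# each application holds its cores for its entire execution
```

Under this scheme an application never waits for a context switch, while the node as a whole stays busy across all its heterogeneous cores, which is how SMAPP reconciles cost sharing with per-application performance.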