• Java virtual machines and Just-In-Time (JIT) compilers
  • Wireless downloads and runtime environments for mobile Java
  • Scheduling compilers for instruction level parallelism
  • Optimizing compilers for embedded processors

  • Java virtual machines and Just-In-Time (JIT) compilers

    Java, with its advantage of being "write once, run anywhere", is used in a wide variety of areas, from enterprise servers to embedded systems. However, its performance has always been an issue.

    Here we do research on improving Java performance: JIT compiler algorithms, which dynamically translate bytecode to native machine code at run time, and algorithms for the various components of the Java virtual machine, such as threads, garbage collection, and exception handling.
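
    The sketch below, in plain Java, illustrates one common JIT execution model: a method runs in the interpreter until it becomes hot, is then translated to native code once, and every later call dispatches to the generated code. The class, the method names, and the threshold are made up for illustration and do not describe LaTTe's actual compilation policy.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch of the invoke path in a JIT-based JVM: interpret cold
    // methods, compile hot ones once, then dispatch to the generated code.
    // All names and the threshold are hypothetical, not LaTTe's actual design.
    public class JitSketch {
        static final int COMPILE_THRESHOLD = 1000;    // assumed hotness threshold

        static class MethodInfo {
            final String name;
            int invocationCount = 0;
            boolean compiled = false;                  // native code generated yet?
            MethodInfo(String name) { this.name = name; }
        }

        private final Map<String, MethodInfo> methods = new HashMap<>();

        void invoke(String name) {
            MethodInfo m = methods.computeIfAbsent(name, MethodInfo::new);
            m.invocationCount++;
            if (!m.compiled && m.invocationCount >= COMPILE_THRESHOLD) {
                compileToNative(m);                    // one-time translation cost
            }
            if (m.compiled) {
                runNativeCode(m);                      // fast path after compilation
            } else {
                interpretBytecode(m);                  // slow path for cold methods
            }
        }

        private void compileToNative(MethodInfo m)   { m.compiled = true; }
        private void runNativeCode(MethodInfo m)     { /* execute generated code */ }
        private void interpretBytecode(MethodInfo m) { /* step through bytecode */ }

        public static void main(String[] args) {
            JitSketch vm = new JitSketch();
            for (int i = 0; i < 2000; i++) vm.invoke("Example.hotLoop");
        }
    }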

    Based on the results of this research, we have developed the LaTTe Java virtual machine and released its source code on the Internet. Its performance is about 30% better than Sun's JDK 1.3 (HotSpot) on benchmarks such as SPECjvm98 and the Java Grande benchmarks.

    We are also doing research on Java virtual machines for embedded systems and have developed JIT compilers and faster interpreters for Sun's PersonalJava and KVM. We are in the process of applying these technologies to real products.

    Related projects:

     
    - Open source LaTTe Java virtual machine (sponsored by IBM)
    - JIT compiler for the IA-64 Itanium processor (sponsored by Intel)
    - Java acceleration software for embedded systems (sponsored by Veloxsoft)

     

  • Wireless downloads and runtime environments for mobile Java

    We are developing a runtime environment for mobile systems where all applications are written in Java and can be downloaded wirelessly.
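
    As a rough illustration in standard Java, the sketch below fetches an application archive from a server and loads its entry class at run time. The URL and class name are placeholders, and a real phone runtime such as JINOS would use its own download, verification, and launch path rather than URLClassLoader, which is not available on small Java profiles.

    import java.net.URL;
    import java.net.URLClassLoader;

    // Desktop-Java sketch of downloading an application and running it at run
    // time. The server URL and entry class name below are hypothetical.
    public class WirelessLaunch {
        public static void main(String[] args) throws Exception {
            URL appJar = new URL("http://example.com/apps/game.jar");        // placeholder server
            try (URLClassLoader loader = new URLClassLoader(new URL[]{ appJar })) {
                Class<?> app = loader.loadClass("com.example.GameMain");     // placeholder entry class
                Runnable instance = (Runnable) app.getDeclaredConstructor().newInstance();
                instance.run();    // hand control to the downloaded application
            }
        }
    }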

    Related projects:

    - JINOS: An efficient Java runtime environment for mobile phones (sponsored by Veloxsoft)

     

  • Scheduling compilers for instruction level parallelism

    Recent microprocessors use instruction-level parallelism (ILP) to execute more than one instruction in a single cycle. This requires instruction scheduling, which finds instructions in sequential code without data dependencies so that they can be executed in parallel. There are basically two kinds of architectures that support ILP: superscalar architectures based on hardware scheduling (e.g. UltraSPARC, PowerPC, MIPS, Alpha, PA-RISC) and VLIW/EPIC architectures based on compiler scheduling (e.g. IA-64, TI Velocity). For both kinds, compiler scheduling plays an important role in improving performance. Here we do research on developing and evaluating new compiler scheduling algorithms and optimization techniques.
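
    The toy sketch below, in plain Java, shows the core dependence test behind any such scheduler: instructions that neither read nor write each other's results may issue in the same cycle. It greedily groups a straight-line sequence into cycles and is only meant to illustrate the problem; it is not EPS or SS, and real schedulers also model memory dependences, latencies, and machine resources.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    // Toy dependence test and greedy grouping of straight-line code into issue
    // cycles. Instructions are modeled only by the register they write and the
    // registers they read; all instruction text here is made up.
    public class SchedulingSketch {
        record Insn(String text, String def, Set<String> uses) {}

        // Two instructions conflict if one reads or writes what the other writes.
        static boolean dependent(Insn earlier, Insn later) {
            return later.uses().contains(earlier.def())   // flow dependence
                || earlier.uses().contains(later.def())   // anti dependence
                || earlier.def().equals(later.def());     // output dependence
        }

        public static void main(String[] args) {
            List<Insn> code = List.of(
                new Insn("r1 = load [r9]",  "r1", Set.of("r9")),
                new Insn("r2 = load [r10]", "r2", Set.of("r10")),
                new Insn("r3 = r1 + r2",    "r3", Set.of("r1", "r2")),
                new Insn("r4 = r3 * 2",     "r4", Set.of("r3")));

            // Place each instruction in the earliest cycle after everything it
            // depends on (latencies and resource limits ignored for clarity).
            List<List<Insn>> cycles = new ArrayList<>();
            for (Insn insn : code) {
                int earliest = 0;
                for (int c = 0; c < cycles.size(); c++) {
                    for (Insn placed : cycles.get(c)) {
                        if (dependent(placed, insn)) earliest = c + 1;
                    }
                }
                while (cycles.size() <= earliest) cycles.add(new ArrayList<>());
                cycles.get(earliest).add(insn);
            }
            for (int c = 0; c < cycles.size(); c++) {
                System.out.println("cycle " + c + ": "
                    + cycles.get(c).stream().map(Insn::text).toList());
            }
        }
    }

    On this example the two independent loads issue together in cycle 0, while the add and multiply are forced into later cycles by their flow dependences.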

    In our lab we base our scheduling techniques on a software pipelining algorithm called Enhanced Pipeline Scheduling (EPS) and a global code motion algorithm called Selective Scheduling (SS). We have built a SPARC-based VLIW testbed and have implemented and evaluated EPS and SS on it. Based on the results of this research, we are implementing EPS and SS in Sun's UltraSPARC compiler with funding from Sun Microsystems.

    Related projects:

    - Instruction scheduling for in-order superscalar processors (sponsored by Sun Microsystems)

     

  • Optimizing compilers for embedded processors

    For embedded microprocessors, the memory needed to store code usually determines the price, so minimizing generated code size is often more important than improving execution speed. Here we do research on adapting "traditional" compiler optimizations, which focus on improving performance, into optimizations that reduce code size and generate code that consumes less power. We are also developing a GCC-based optimizing compiler for Samsung Electronics' CalmRISC microprocessor family, which has already been put to commercial use.
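
    The plain-Java example below illustrates this speed-versus-size tension. A speed-oriented compiler might unroll the loop in the first method into something like the second (fewer branches and induction-variable updates, but several times the loop-body code); a size-oriented compiler for an embedded target would keep the rolled form. Both methods are illustrative and compute the same result.

    // Toy illustration of the speed-vs-size trade-off; method names are made up.
    public class SizeVsSpeed {
        // Rolled form: small code footprint, one branch per element.
        static int sumRolled(int[] a) {
            int s = 0;
            for (int i = 0; i < a.length; i++) s += a[i];
            return s;
        }

        // Unrolled by four: fewer branches per element, but roughly four times
        // the loop-body code plus a cleanup loop for the leftover elements.
        static int sumUnrolled(int[] a) {
            int s = 0, i = 0;
            int limit = a.length - (a.length % 4);
            for (; i < limit; i += 4) {
                s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
            }
            for (; i < a.length; i++) s += a[i];   // leftover elements
            return s;
        }

        public static void main(String[] args) {
            int[] data = {1, 2, 3, 4, 5, 6, 7};
            System.out.println(sumRolled(data) + " == " + sumUnrolled(data));
        }
    }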

    Related projects:

    - GCC-based optimizing compiler for CalmRISC 8/16/32-bit microprocessors (sponsored by Samsung Electronics)

     



    Copyright (C) 2006 by MASS Lab. All rights reserved.