Parallel Java Programming System Launched by University

A team of researchers at the University of Illinois' Universal Parallel Computing Research Center (UPCRC) has developed a parallel implementation of Java.

With an emerging need for programs that take advantage of today's multicore and parallel systems, the University of Illinois has launched an effort to deliver a parallel version of the Java language.

The project, based at the University of Illinois at Urbana-Champaign and dubbed Deterministic Parallel Java (DPJ), is funded by the National Science Foundation (NSF), Intel and Microsoft.

In a press release on the new technology, UPCRC spokeswoman Cheri Helregel said the new parallel language is the first to guarantee deterministic semantics without run-time checks for general-purpose, object-oriented programs. It is also the first language to use compile-time type checking for parallel operations on arrays of references ("pointers") to objects, and the first to use regions and effects for flexible, nested data structures.

University of Illinois Computer Science professor Vikram Adve and Ph.D. student Robert Bocchino launched the effort to help "make parallelism easy," and formed a team to focus on developers and their needs. They wanted a language that supports the programming styles developers find most familiar and productive, namely those of mainstream object-oriented programming languages.

According to the UPCRC's Deterministic Parallel Java Website:

""The broad goal of our project is to provide deterministic-by-default semantics for an object-oriented, imperative parallel language, using primarily compile-time checking. 'Deterministic' means that the program produces the same visible output for a given input, in all executions. 'By default' means that deterministic behavior is guaranteed unless the programmer explicitly requests non-determinism. This is in contrast to today's shared-memory programming models (e.g., threads and locks), which are inherently nondeterministic and can even have undetected data races.""

The resulting DPJ implementation is a safe and modular parallel language that helps developers port parts of sequential Java applications to run on multicore systems. It also helps developers rewrite parts of parallel Java applications to simplify debugging, testing and long-term maintenance. DPJ-ported parallel code can coexist with ordinary Java code within the same application, so programs can be ported to DPJ incrementally, the UPCRC said.
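That incremental style of porting can be pictured in plain Java: a sequential method and its parallelized counterpart living side by side in one class. Here parallel streams stand in for DPJ's constructs, and the method names are illustrative:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class PortingDemo {
    // Original sequential code, left untouched.
    static long sumOfSquares(int[] data) {
        long total = 0;
        for (int x : data) total += (long) x * x;
        return total;
    }

    // Ported hot spot: the same computation, parallelized.
    static long sumOfSquaresParallel(int[] data) {
        return Arrays.stream(data).parallel()
                     .mapToLong(x -> (long) x * x)
                     .sum();
    }

    public static void main(String[] args) {
        int[] data = IntStream.rangeClosed(1, 10_000).toArray();
        // Both versions coexist and agree, so the port can proceed piecemeal.
        System.out.println(sumOfSquares(data) == sumOfSquaresParallel(data)); // true
    }
}
```

Because the two versions compute identical results, each hot spot can be migrated and validated one at a time, which is the workflow the UPCRC describes.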

Moreover, DPJ simplifies debugging and testing of parallel software because all potential data races are caught at compile time, the UPCRC press release said. Because DPJ programs have obvious sequential semantics, all debugging and testing of DPJ code can happen essentially as it does for sequential programs. Maintenance becomes easier because DPJ encodes the programmer's knowledge of parallel data-sharing patterns in DPJ annotations, simplifying the tasks of understanding, modifying and extending parallel DPJ software.

Because of these annotations, each function or class can be understood and parallelized in a modular fashion, without knowledge of the internal parallelism or synchronization details of other functions or classes. The University of Illinois researchers said this is especially important because modularity is crucial for creating large-scale software applications. Yet, they say, modularity is severely compromised under any of today's mainstream shared-memory programming models.
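Based on examples in the project's published materials, DPJ-style region and effect annotations look roughly like the following sketch. The class, region and method names here are hypothetical, and this code requires the DPJ compiler rather than a standard Java compiler:

```
class Pair {
    region Fst, Snd;            // named heap regions
    int first in Fst;           // each field is placed in a region
    int second in Snd;

    void setFirst(int f) writes Fst { first = f; }    // effect summary
    void setSecond(int s) writes Snd { second = s; }

    // The compiler can see from the effect summaries alone that the
    // two calls touch disjoint regions, so running them in parallel
    // is race-free -- no need to inspect the method bodies.
    void setBoth(int f, int s) writes Fst, Snd {
        cobegin {
            setFirst(f);
            setSecond(s);
        }
    }
}
```

The point of the sketch is the modularity claim: the `writes` clauses summarize each method's effects at its boundary, so callers can be checked for races without looking inside.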

Adve and his group are also working with Intel to define a similar set of extensions to C++ (DPC++), which can be used to check similar properties for existing programming models such as Cilk, OpenMP and Threading Building Blocks (TBB).

For its part, the UPCRC makes a distinction between concurrent programming and parallel programming. A programming page on the UPCRC site explains:

""We distinguish between concurrent programming that focuses on problems where concurrency is part of the specification (reactive code such as an operating system, user interfaces, or on-line transaction processing, etc.), and parallel programming that focuses on problems where concurrent execution is used only for improving the performance of a transformational code. The prevalence of multicore platforms does not increase the need for concurrent programming and does not make it harder; it increases the need for parallel programming. It is our contention that parallel programming is much easier than concurrent programming; in particular, it is seldom necessary to use nondeterministic code.""