
An Introduction to Parallel Object-Oriented Programming With Mentat

Grimshaw, Andrew
Mentat is an object-oriented parallel computation system designed to provide large amounts of easy-to-use parallelism for distributed systems. Mentat alleviates most of the burden of explicit parallelization that message-passing systems typically place on the programmer. Further, Mentat programs block only when specific data dependencies require blocking, greatly increasing the degree of parallelism attainable over that of RPC systems. The Mentat Programming Language, an extension of C++, simplifies writing parallel programs by extending the encapsulation provided by objects to the encapsulation of parallelism. Users of Mentat objects are unaware of whether member functions are carried out sequentially or in parallel. In addition, member function invocation is asynchronous (non-blocking): the caller does not wait for the result. It is the responsibility of the compiler, in conjunction with the run-time system, to manage all aspects of communication and synchronization. The underlying assumption is that the programmer can make better granularity and partitioning decisions, while the compiler and run-time system can correctly manage communication and synchronization. By splitting the responsibility between the compiler and the programmer, we exploit the strengths of each and avoid their weaknesses. Mentat has been implemented on three architectures that span the MIMD spectrum: a network of Sun workstations (loosely coupled), the Intel iPSC/2 (tightly coupled), and the BBN Butterfly (shared memory). Mentat programs are source-compatible across the supported architectures. Even on an eight-processor network of Sun workstations, speed-ups in excess of four over optimized sequential C code are consistently observed for Gaussian elimination with partial pivoting. This paper describes the Mentat approach to parallelism, the Mentat Programming Language, an overview of the run-time system, and performance data on the above architectures. Examples that illustrate the major language features, and how those features support the encapsulation of parallelism, are provided.
University of Virginia, Department of Computer Science, 1991
Libra Open Repository
In Copyright

