=============================
VOLUME 19 NUMBER 2 MAY 1995
=============================

1. RIMM: A Reactive Integration Multidatabase Model
Niki Pissinou, Vijay Raghavan and Kanonkluk Vanapipat, The Center for Advanced Computer Studies, University of Southwestern Louisiana, e-mail: pissinou@cacs.usl.edu
pp. 177-194
Keywords: databases, E-C-A rules, temporal objects, mediators
Abstract: Traditional solutions to multidatabase systems mainly focus on resolving the structural and semantic incompatibility among the local databases in order to provide one or more frozen shared schemas to the users. However, the limitations of such approaches are evident in environments where changes may occur frequently. Specifically, it is impractical to assume that, once set up, a global interface will remain valid and frozen in time. Over time the properties, behaviors, roles, and perhaps the identities of the objects in the local systems may change to reflect the evolution of the modeled universe. Additionally, the information requirements of the global users may change to reflect their needs. Such dynamism can render the functionality of a multidatabase system obsolete. In this paper, we provide a paradigm for the dynamic interaction between local and global systems to resolve foreseeable conflicts that may occur over time, while supporting object consistency across databases and object relativism between local and global systems. Based on this paradigm, we develop a formal Reactive Integration Multidatabase Model (RIMM) that has the expressiveness to represent temporal changes, temporal conditions, and events in the real world. The model can be used as a framework to realize a multidatabase architecture that incorporates event-driven production rules to react automatically to anticipated changes in local databases and global user requirements and to reconfigure the global interfaces dynamically, so as to support interoperability over time. Our targeted application domains include geographic information systems, scientific database systems, and digital libraries.
----------------------
2. Statistical Usage Testing for Software Reliability Control
Per Runeson, Q-Labs AB, IDEON Research Park, S-223 70 Lund, Sweden AND Claes Wohlin, Dept. of Communication Systems, Lund Institute of Technology, Lund University, Box 118, S-221 00 Lund, Sweden
Keywords: statistical usage testing, software reliability, usage profile, operational profile, statistical quality control
pp. 195-208
Abstract: Software reliability is a frequently used term, but reliability is very seldom under control during a software development project. This paper presents a method, Statistical Usage Testing (SUT), which makes it possible to estimate, predict, and hence control software reliability. SUT is the reliability certification method described as a part of Cleanroom software engineering. The main objective of SUT is to certify the software reliability and to find the faults that have a high influence on reliability. SUT provides statistically based stopping rules during test as well as effective use of test resources, as shown by practical applications of this and similar methods. This paper presents the basic ideas behind SUT and briefly discusses the theoretical basis as well as the application of the method.
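As a minimal illustration of the usage-profile idea behind SUT in entry 2 (a sketch, not the authors' method): model the usage profile as a Markov chain over user states, draw test cases as random walks through the chain, and estimate reliability from the failure rate observed on those cases. The state names, transition probabilities, and the run_case hook below are assumptions made for this sketch, not details taken from the paper.

import random

# Usage profile: state -> list of (next state, probability) pairs.
USAGE_PROFILE = {
    "start":  [("dial", 1.0)],
    "dial":   [("talk", 0.7), ("hangup", 0.3)],
    "talk":   [("talk", 0.6), ("hangup", 0.4)],
    "hangup": [],  # terminal state
}

def sample_test_case(profile, start="start"):
    """Draw one test case (a sequence of states) according to the profile."""
    state, case = start, [start]
    while profile[state]:
        next_states, probs = zip(*profile[state])
        state = random.choices(next_states, weights=probs)[0]
        case.append(state)
    return case

def estimate_reliability(run_case, n_cases=1000):
    """Run randomly sampled test cases; reliability = 1 - observed failure rate."""
    failures = sum(not run_case(sample_test_case(USAGE_PROFILE))
                   for _ in range(n_cases))
    return 1.0 - failures / n_cases

if __name__ == "__main__":
    # Stand-in system under test that never fails, so the estimate is 1.0.
    print(estimate_reliability(lambda case: True))

Because the test cases follow the expected usage, the estimated figure reflects the reliability a user would actually experience, which is what makes statistically based stopping rules possible.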
----------------------
3. Large Performance Analysis of Disk Mirroring Techniques
Cyril U. Orji, Taysir Abdalla, School of Computer Science, Florida International University, Miami, Florida 33199, e-mail: orji,abdalla@geneva.fiu.edu, phone: (305)348-2440; fax: (305)348-3549 AND Jon A. Solworth, Department of EECS (M/C 154), University of Illinois at Chicago, Chicago, Illinois 60607-7053, e-mail: solworth@parsys.eecs.uic.edu, phone: (312)996-0955; fax: (312)413-0024
Keywords: disk mirroring, chained & interleaved declustering, RAID
pp. 209-222
Abstract: Traditional mirroring maintains a logical disk image on two physical disks, thereby improving reliability and read performance during normal mode operation. However, failure mode operation may be inefficient, since the load on a single surviving disk could potentially double. Moreover, write performance during normal mode operation is inefficient, since a single data item must be written to both disks in a mirror set. Interleaved and chained declustering improve load balancing during failure mode operation, while distorted mirroring improves write performance. This paper presents a comparative study of the performance of three mirroring schemes (traditional mirroring, distorted mirroring, and interleaved declustering) under various operating conditions: normal, degraded, and rebuild modes. The tradeoffs between response time and rebuild time are studied. Our findings show that using a disk track as the rebuild unit provides the best balance between response time and rebuild time. In addition, the performance of traditional and distorted mirrors during the rebuild process is adversely affected if incoming application requests are dominated by read requests, while interleaved declustering is adversely impacted by write requests.
----------------------
4. Data Consistency in Hard Real-Time Systems
Neil C. Audsley, Alan Burns, Mike F. Richardson and Andy J. Wellings, Real-Time Systems Group, Department of Computer Science, University of York, UK, Phone: +44 1904 432779, Fax: +44 1904 432767, e-mail: burns@minster.york.ac.uk
Keywords: databases, real-time
pp. 223-234
Abstract: The incorporation of database technology into hard real-time systems presents an important but difficult challenge to any system designer. Notions of consistency and failure atomicity are attractive for architects of dependable hard real-time applications. Unfortunately, the hard real-time requirements of bounded access times are not easily imposed upon the concurrency control methods usually found in current database models. The model of data consistency presented here uses temporal consistency constraints in order to ensure that data values are sufficiently recent to be usable. Blocking is avoided by imposing restrictions on how shared data is accessed. Data itself is considered to be either perishable or non-perishable.
----------------------
5. Parallel Gaussian Elimination Algorithms on a Cray Y-MP
Marcin Paprzycki, Department of Mathematics and Computer Science, University of Texas of the Permian Basin, Odessa, TX 79762-0001, USA, Phone: +915 552 2258, Fax: +915 552 2374, e-mail: paprzycki_m@gusher.pb.utexas.edu
Keywords: performance, parallel Gaussian elimination, Strassen's algorithm
pp. 235-240
Abstract: Various implementations of Gaussian elimination of dense matrices on an 8-processor Cray Y-MP are discussed. It is shown that when the manufacturer-provided BLAS kernels are used, the difference in performance between the best blocked implementations and Cray's routine SGETRF is almost negligible. It is also shown that for large matrices Strassen's matrix multiplication algorithm can lead to substantial performance gains.
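The observation about Strassen's algorithm in entry 5 can be illustrated with a short sketch: blocked Gaussian elimination spends most of its time in matrix-matrix update products, and Strassen's recursion performs such a product with seven half-size multiplications instead of eight, which pays off once matrices are large enough. The cutoff value and the use of NumPy below are assumptions for this sketch; the paper's Cray Y-MP implementation naturally differs.

import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose order is a power of two."""
    n = A.shape[0]
    if n <= cutoff:                      # fall back to the ordinary product
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

if __name__ == "__main__":
    A, B = np.random.rand(256, 256), np.random.rand(256, 256)
    print(np.allclose(strassen(A, B), A @ B))   # True: same result, fewer multiplications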
----------------------
6. Multi-Grain Rendezvous
Stanislaw Chrobot, Kuwait University, Department of Mathematics, P. O. Box 5969 Safat, 13060 Kuwait, e-mail: chrobot@mcc.sci.kuniv.edu.kw
Keywords: distributed programming, message passing, rendezvous, interrupt
pp. 241-256
Abstract: In many distributed languages, the select statement introduces a dynamic selection mode for a sequential process: a mode in which the successor of an operation is selected at run-time from a set of operations defined by the process program. It is an alternative to the static selection mode, in which the successor is fixed by the program text. The dynamic selection mode is used to accept remotely invoked operations, called transactions in Concurrent C. The transactions themselves, however, execute in the static selection mode. This fact is a source of poor expressiveness and poor efficiency of the transactions. To improve expressiveness and efficiency, we suggest that transactions be subdivided into grains and be not only accepted in the dynamic selection mode but executed in this mode as well. We also suggest that the implementation of the dynamic selection mode be based on the interrupt feature.
----------------------
7. Comparing Inference Control Mechanisms for Statistical Databases with Emphasis on Randomizing
Ernst L. Leiss, University of Houston, Department of Computer Science, Houston, Texas, 77004, U.S.A., e-mail: coscel@cs.uh.edu AND Jurij Jaklic, University of Ljubljana, Faculty of Economics, Kardeljeva pl. 17, 61000 Ljubljana, Slovenia, e-mail: jurij.jaklic@uni-lj.si
Keywords: statistical databases, security, comparison, randomizing
pp. 257-264
Abstract: Statistical databases are collected primarily for statistical analysis purposes. They usually contain information about persons and organizations which is considered confidential, and only aggregate statistics on the confidential attributes are permitted. However, deduction of confidential data (inference) is frequently possible. In order to prevent this, several security-control mechanisms have been developed, among them the randomizing method. We compare randomizing with other methods using several evaluation criteria. The evaluation shows that randomizing has several advantages in comparison with other methods, such as a high level of security, robustness, and low cost. On the other hand, the problem of bias for small query sets can be considerable for some applications.
(A minimal sketch of a randomized query mechanism appears after the last entry below.)
----------------------
8. Deep Knowledge and Domain Models
Jarmo J. Ahonen, Lappeenranta University of Technology, P.O.Box 20, 53851 Lappeenranta, Finland, e-mail: jarmo.ahonen@lut.fi
Keywords: modeling, simulation, deep knowledge, domain models
pp. 265-280
Abstract: An approach to the concept of deep knowledge is outlined. The approach is based on the assumption that the depth of knowledge results from its explanatory power. After considering some examples of deep and shallow knowledge and defining deep knowledge and robustness, an approach to the development of domain models based on deep knowledge is proposed. The proposed approach is based on the Salmonian concept of causal processes, and it provides a uniform point of view on knowledge of physical domains and on domain modeling. The approach is developed in order to incorporate structural and causal knowledge directly into numeric models, because qualitative approaches seem to have philosophical problems.
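Returning to the randomizing method of entry 7: the sketch below shows one possible randomized query mechanism, in which each record matching a statistical query is included in the aggregate only with probability p and the partial result is rescaled, so that repeated queries cannot be combined to isolate an individual value. The sampling scheme, the probability, and the record fields are assumptions made for this sketch, not the specific mechanism evaluated in the paper; it does, however, make visible why small query sets are problematic for randomizing, the issue noted at the end of that abstract.

import random

RECORDS = [
    {"dept": "sales",   "salary": 41000},
    {"dept": "sales",   "salary": 47000},
    {"dept": "support", "salary": 39000},
    {"dept": "support", "salary": 44000},
]

def randomized_sum(records, predicate, field, p=0.8, rng=random):
    """Answer SUM(field) over the query set with per-record random sampling."""
    query_set = [r for r in records if predicate(r)]
    sampled = [r[field] for r in query_set if rng.random() < p]
    return sum(sampled) / p   # correct on average, but noisy for any single query

if __name__ == "__main__":
    # Repeated identical queries return different answers, which is what
    # prevents an attacker from solving for an individual's salary; for very
    # small query sets the distortion of any single answer can be large.
    print(randomized_sum(RECORDS, lambda r: r["dept"] == "sales", "salary"))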