Fifth Generation Computer Systems 1992
Edited by
Institute for New Generation
Computer Technology (ICOT)
Volume 1

Ohmsha, Ltd.

IOS Press



© 1992 by Institute for New Generation Computer Technology

All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system or transmitted, in any form or by any means, electronic, mechanical, recording or
otherwise, without the prior permission of the copyright owner.
ISBN 4-274-07724-1 (Ohmsha)
ISBN 90-5199-099-5 (IOS Press)
Library of Congress Catalog Card Number: 92-073166

Published and distributed in Japan by
Ohmsha, Ltd.
3-1 Kanda Nishiki-cho, Chiyoda-ku, Tokyo 101, Japan
Distributed in North America by
IOS Press, Inc.
Postal Drawer 10558, Burke, VA 22009-0558, U. S. A.

Distributed in the United Kingdom by
IOS Press
73 Lime Walk, Headington, Oxford OX3 7AD, England
Distributed in Europe and the rest of the world by
IOS Press
Van Diemenstraat 94, 1013 CN Amsterdam, Netherlands
Distributed in the Far East jointly by
Ohmsha, Ltd. and IOS Press
Printed in Japan


On behalf of the Organizing Committee, it is my great pleasure
to welcome you to the International Conference on Fifth Generation
Computer Systems 1992.
The Fifth Generation Computer Systems (FGCS) project was
started in 1982 on the initiative of the late Professor Tohru Moto-Oka, with the purpose of creating a revolutionary new type of computer oriented to knowledge processing in the 1990s. After completing the initial and intermediate stages of research and development,
we are now at the final point of our ten-year project and are rapidly
approaching the completion of prototype Fifth Generation Computer Systems.
The research goals of the FGCS project were challenging, but
we expect to meet most of them. We have developed a new paradigm
of knowledge processing including the parallel logic language, KL1,
and the parallel inference machine, PIM.
When we look back upon these ten years, we can find many
research areas in knowledge processing related to this project, such
as logic programming, parallel processing, natural language processing, and machine learning. Furthermore, there emerged many new
applications of knowledge processing, such as legal reasoning and
genetic information processing.
I believe that this new world of information processing will
grow more and more in the future. When very large knowledge bases
including common sense knowledge come out in full scale and are
widely used, the knowledge processing paradigm will show its real
power and will give us great rewards. From now on, we can enjoy
fifth generation computer technology in many fields.
Following the same objective of creating such a new paradigm,
there has been intense international collaboration, such as joint
workshops with France, Italy, Sweden, the U.K., and the U.S.A., and
joint research with U.S. and Swedish institutes on parallel processing applications.
Against this background, ICOT hosts the International Conference on Fifth Generation Computer Systems 1992 (FGCS'92). This
is the last in a series of FGCS conferences; previous conferences were
held in 1981, 1984 and 1988. The purpose of the conference is to
present the final results of the FGCS project, as well as to promote
the exchange of new ideas in the fields of knowledge processing,
logic programming, and parallel processing.
FGCS'92 will take place over five days. The first two days will
be devoted to the presentation of the latest results of the FGCS
project, and will include invited lectures by leading researchers. The


remaining three days will be devoted to technical sessions for invited
and submitted papers, the presentation of the results of detailed
research done at ICOT, and panel discussions.
Professor D. Bjørner from the United Nations University,
Professor J. A. Robinson from Syracuse University, and Professor
C. A. R. Hoare from Oxford University kindly accepted our offer to
give invited lectures.
Professor R. Kowalski from Imperial College is the chairperson of
the plenary panel session on "A springboard for information processing in the 21st century." Professor Hajime Karatsu from Tokai
University accepted our invitation to give a banquet speech.
During the conference, there will be demonstrations of the
research results from the ten-year FGCS project. The Parallel Inference Machines and many kinds of parallel application programs will
be highlighted to show the feasibility of the machines.
I hope that this conference will be a fitting place to present all of
the research results in this field up to this time, confirm the milestones, and propose a future direction for the research, development
and applications of fifth generation computers through vigorous
discussions among attendees from all over the world. I hope all of
the attendees will return to their own countries with great expectations in mind and feel that a new era of computer science has
opened in terms of fifth generation computer systems.
Moreover, I wish that the friendship and frank cooperation
among researchers from around the world, fostered in the process of
fifth generation computer systems research, will grow and widen so
that this small but strong relationship can help promote international collaboration for the brilliant future of mankind.
Hidehiko Tanaka
Conference Chairperson


Esteemed guests, let me begin by welcoming you to the International Conference on
Fifth Generation Computer Systems, 1992. I am Hideaki Kumano. I am the Director
General of the Machinery and Information Industries Bureau of MITI.
We have been promoting the Fifth Generation Computer Systems Project, with the
mission of international contributions to technological development by promoting the
research and development of information technology in the basic research phase and
distributing the achievements of that research worldwide. This international conference
is thus of great importance in making our achievements available to all. It is, therefore,
a great honor for me to be given the opportunity to make the keynote speech today.


Achievements of the Project

Since I took up my current post, I have had several opportunities to visit the project site.
This made a great impression on me since it proved to me that Japanese technology can
produce spectacular results in an area of highly advanced technology, covering the fields
of parallel inference machine hardware and its basic software such as operating systems
and programming languages; fields in which no one had any previous experience.
Furthermore, I caught a glimpse of the future use of fifth generation computer technology when I saw the results of its application to genetics and law. I was especially
interested in the demonstration of the parallel legal inference system, since I have been
engaged in the enactment and operation of laws at MITI. I now believe that the machines
using the concepts of fifth generation computers will find practical applications in the
enactment and operation of laws in the near future.
The research and development phase of our project will be completed by the end
of this fiscal year. We will evaluate all the results. The committee for development of
basic computer technology, comprised of distinguished members selected from a broad
spectrum of fields, will make a formal evaluation of the project. This evaluation will take
into account the opinions of those attending the conference, as well as the results of a
questionnaire completed by overseas experts in each field. Even before this evaluation,
however, I am convinced that the project has produced results that will have a great
impact on future computer technology.


Features of the Fifth Generation Computer Systems Project

I will explain how we set our goals and developed a scheme that would achieve these
high-level technological advances.
The commencement of the project coincided with the time when Japan was coming
to be recognized as a major economic and technological power in the world community.
Given these circumstances, the objectives of the project included not only the development of original and creative technology, but also the making of valuable international contributions. In this regard, we selected a theme of "knowledge information processing", which would have a major impact on a wide area from technology through to the
economy. The project took as its research goal the development of a parallel inference
system, representing the paradigm of computer technology as applied to the theme.
The goal was particularly challenging at that time. I recalled the words of a participant at the first conference held in 1981. He commented that it was doubtful whether
Japanese researchers could succeed in such a project since we, at that time, had very
little experience in these fields.
However, despite the difficulties of the task ahead of us, we promoted the project
from the viewpoint of contributing to the international community through research. In
this regard, our endeavors in this area were targeted as pre-competitive technologies,
namely basic research. This meant that we would have to start from scratch, assembling
and training a group of researchers.
To achieve our goal of creating a paradigm of new computer technology, taking an
integrated approach starting from basic research, we settled on a research scheme after
exhaustive preliminary deliberations.
As part of its efforts to promote the dissemination of basic research results as international public assets, the government of Japan, reflecting its firm commitment to this
area, decided to finance all research costs.
The Institute for New Generation Computer Technology (ICOT), the sponsor of this
conference, was established to act as a central research laboratory where brainpower
could be concentrated. Such an organization was considered essential to the development
of an integrated technology that could be applied to both hardware and software. The
Institute's research laboratory, which actually conducted the project's research and development, was founded precisely ten years ago today, on June 1, 1982. A number of
highly qualified personnel, all of whom were excited by the ideal that the project pursued,
were recruited from the government and industry. Furthermore, various ad hoc groups
were formed to promote discussions among researchers in various fields, making ICOT
the key center for research communication in this field.
The duration of the project was divided into three phases. Reviews were conducted
at the end of each phase, from the viewpoint of human resources and technological advances, which made it possible to entrust various areas of the research. I believe that
this approach increased efficiency, and also allowed flexibility by eliminating redundant
areas of research.
We have also been heavily involved in international exchanges, with the aim of promoting international contributions. Currently, we are involved in five different international research collaboration projects. These include work in the theorem proving field
with the Australian National University (ANU), and research into constraint logic programming with the Swedish Institute of Computer Science (SICS). The results of these
two collaborations, on display in the demonstration hall, are excellent examples of what
research collaboration can achieve. We have also promoted international exchange by
holding international conferences and by hosting researchers from abroad at ICOT. And,
we have gone to great lengths to make public our project's achievements, including intermediate results.


Succession of the Project's Ideal

This project is regarded as being the prototype for all subsequent projects to be sponsored
by MITI.
It is largely the herculean efforts of the researchers, under the leadership of Dr.
Fuchi and other excellent research leaders, that have led to the revolutionary advances
being demonstrated at this conference.
In the light of these achievements, and with an eye to the future, I can now state
that there is no question of the need to make international contributions the basis of the
policies governing future technological development at MITI. This ideal will be passed
on to all subsequent research and development projects.
A case in point is the Real World Computing (RWC) project scheduled to start this
year. This project rests on a foundation of international cooperation. Indeed, the basic
plan, approved by a committee a few days ago, specifically reflects the international
exchange of opinions. The RWC project is a particularly challenging project that aims
to investigate the fundamental principles of human-like flexible information processing
and to implement it as a new information processing technology, taking full advantage
of advancing hardware technologies. We will spare no effort to achieve the
project's objectives, for use as common assets for all mankind.


International Response

As I mentioned earlier, I believe that the Fifth Generation Computer Systems Project
has made valuable international contributions from its earliest stages. The project has
stimulated international interest and responses from its outset. The great number of
foreign participants present today illustrates this point.
Around the world, a number of projects received their initial impetus from our
project: these include the Strategic Computing Initiative in the U.S.A., the EC's Esprit project, and the Alvey Project in the United Kingdom.
These projects were initially launched to compete with the Fifth Generation Computer Systems Project. Now, however, I strongly believe that since our ideal of international contributions has come to be understood around the globe, together with the
realization that technology cannot and should not be divided by borders, each project
is providing the stimulus for the others, and all are making major contributions to the
advancement of information processing technologies.


Free Access to the Project's Software

One of the great virtues of science, given an open environment, is the collaboration
between researchers using a common base of technology.
Considering this, it would be impractical for one person or even one nation to attempt
to cover the whole range of technological research and development. Therefore, the
necessity of international cooperation is self-evident from the standpoint of advancing
the human race as a whole.
In this vein, MITI has decided to promote technology globalism in the fields of science
and technology, based on a concept of "international cooperative effort for creative activity and international exchange to maximize the total benefit of science and technology
to mankind." We call this concept "techno-globalism".
It is also important to establish an environment based on "techno-globalism", that
supports international collaboration in basic and original research as a resource to solve
problems common to all mankind as well as the dissemination of the resulting achievements. This could be done through international cooperation.
To achieve this "techno-globalism" all countries should, as far as possible, allow free
and easy access to their domestic technologies. This kind of openness requires the voluntary establishment of environments where anyone can access technological achievements
freely, rather than merely asking other countries for information. It is this kind of international cooperation, with the efforts of both sides complementing each other, that can
best accelerate the advancement of technology.
We at MITI have examined our policies from the viewpoint of promoting international
technological advancement by using the technologies developed as part of this project,
the excellence of which has encouraged us to set a new policy.
Our project's resources focused mainly on a variety of software, including parallel
operating systems and parallel logic programming languages. To date, the results of such
a national project, sponsored by the government, were available only for a fee and could
be used only under various conditions once they became the property of the government.
Therefore, generally speaking, although the results have been available to the public, in
principle, they have not been available to be used freely and widely.
As I mentioned earlier, in the push toward reaching the goal of promoting international cooperation for technological advancement, Japan should take the initiative in
creating an environment where all technologies developed in this project can be accessed
easily. Now, I can formally announce that, concerning the software copyrights from the research
and development phase which are not the property of the government, the Institute for
New Generation Computer Technology (ICOT), the owner of these software copyrights,
is now preparing to enable their free and open use without charge.
The adoption of this policy not only allows anyone free access to the software technologies developed as part of the project, but also makes it possible for interested parties
to inherit the results of our research, to further advance the technology. I sincerely hope
that our adopting this policy will maximize the utilization of researchers' abilities, and
promote the advancement of the technologies of knowledge information processing and
parallel processing, toward which all efforts have been concentrated during the project.
This means that our adopting this policy will not merely result in a one-way flow
of technologies from Japan, but will enhance the benefit to all mankind of the technological
advancements brought on by a two-way flow of technology and the mutual benefits thus obtained.


I should say that, from the outset of the Fifth Generation Computer Systems Project,
we decided to make international contributions an important objective of the project. We
fashioned the project as the model for managing the MITI-sponsored research and development projects that were to follow. Now, as we near the completion of the project, we
have decided to adopt a policy of free access to the software to inspire further international
contributions to technological development.
I ask all of you to understand the message in this decision. I very much hope that the
world's researchers will make effective use of the technologies resulting from the project
and will devote themselves to further developing the technologies.
Finally, I'd like to close by expressing my heartfelt desire for this international conference to succeed in providing a productive forum for information exchange between
participants and to act as a springboard for further advancements.
Thank you very much for bearing with me.
Hideaki Kumano
Director General
Machinery and Information Industries Bureau
Ministry of International Trade and Industry (MITI)


Ten years have passed since the FGCS project was launched
with the support of the Japanese government. As soon as the FGCS
project was announced it had a profound effect not only on computer scientists but also on the computer industry. Many countries
recognized the importance of the FGCS project and some of them
began their own similar national projects.
The FGCS project was initially planned as a ten-year project,
and this final, fourth FGCS conference therefore has historical
significance. For this reason the conference includes an ICOT session.
The first volume contains a plenary session and the ICOT session.
The plenary session is composed of many reports on the FGCS
project with three invited lectures and a panel discussion.
In the ICOT session, the logic-based approach and parallel
processing will be emphasized through concrete discussions. In
addition, many demonstration programs have been prepared
by ICOT at the conference site; participants are invited to visit
and discuss these exhibitions. Through the ICOT session and the
exhibitions, the participants will understand clearly the aim and
results of the FGCS project and receive a solid image of FGCS.
The second volume is devoted to the technical session which
consists of three invited papers and technical papers submitted to this
conference. Due to the time and space limitations of the conference,
only 82 papers out of 256 submissions were selected by the program
committee after careful and long discussion of the many high-quality
papers submitted.
It is our hope that the conference program will prove to be both
worthwhile and enjoyable. As a program chairperson, it is my great
pleasure to acknowledge the support of a number of people. First of
all, I would like to give my sincere thanks to the program committee
members who put a lot of effort into making the program attractive.
I owe much to the three program vice-chairpersons, Professor
Makoto Amamiya, Dr. Shigeki Goto and Professor Fumio Mizoguchi. Many ICOT members, including Dr. Kazunori Ueda, Ken
Satoh, Keiji Hirata, and Hideki Yasukawa have worked as key
persons to organize the program. Dr. Koichi Furukawa, in particular, has played an indispensable role in overcoming many problems.
I would also like to thank the many referees from many countries
who replied quickly to the referee sheets.
Finally, I would like to thank the secretariat at ICOT, who
made fantastic efforts to carry out the administrative tasks efficiently.
Hozumi Tanaka
Program Chairperson


Steering Committee

Kazuhiro Fuchi
Hideo Aiso
Setsuo Arikawa
Ken Hirose
Takayasu Ito
Hiroshi Kashiwagi
Hajime Karatsu
Makoto Nagao
Hiroki Nobukuni
Iwao Toda
Eiiti Wada

Keio Univ.
Kyushu Univ.
Waseda Univ.
Tohoku Univ.
Tokai Univ.
Kyoto Univ.
NTT Data
Univ. of Tokyo

Conference Committee
Hidehiko Tanaka
Vice-Chairperson: Koichi Furukawa
Makoto Amamiya
Yuichiro Anzai
Shigeki Goto
Mitsuru Ishizuka
Kiyonori Konishi
Takashi Kurozumi
Fumio Mizoguchi
Kunio Murakami
Sukeyoshi Sakai
Masakazu Soga
Hozumi Tanaka
Shunichi Uchida
Kinko Yamamoto
Toshio Yokoi
Akinori Yonezawa
Toshitsugu Yuba

Univ. of Tokyo
Kyushu Univ.
Keio Univ.
Univ. of Tokyo
NTT Data
Science Univ. of Tokyo
Kanagawa Univ.
ICOT (Chairperson, Management Committee)
ICOT (Chairperson, Technology Committee)
Tokyo Institute of Technology
Univ. of Tokyo

Program Committee
Hozumi Tanaka
Vice-Chairpersons: Makoto Amamiya
Shigeki Goto
Fumio Mizoguchi
Koichi Furukawa
Kazunori Ueda
Ken Satoh
Keiji Hirata
Hideki Yasukawa
Hitoshi Aida
Yuichiro Anzai
Ronald J. Brachman
John Conery
Doug DeGroot
Koichi Fukunaga
Jean-Luc Gaudiot
Atsuhiro Goto
Satoshi Goto
Seif Haridi
Ken'ichi Hagihara

Tokyo Institute of Technology
Kyushu Univ.
Science Univ. of Tokyo
Univ. of Tokyo
Keio Univ.
Univ. of Oregon
Texas Instruments
IBM Japan, Ltd.
Univ. of Southern California
NEC Corp.
Osaka Univ.


Makoto Haraguchi
Ryuzo Hasegawa
Hiromu Hayashi
Nobuyuki Ichiyoshi
Mitsuru Ishizuka
Tadashi Kanamori
Yukio Kaneda
Hirofumi Katsuno
Masaru Kitsuregawa
Shigenobu Kobayashi
Philip D. Laird
Catherine Lassez
Giorgio Levi
John W. Lloyd
Yuji Matsumoto
Dale Miller
Kuniaki Mukai
Hiroshi Motoda
Katsuto Nakajima
Ryohei Nakano
Kenji Nishida
Shojiro Nishio
Stanley Peters
António Porto
Teodor C. Przymusinski
Vijay Saraswat
Taisuke Sato
Masahiko Sato
Heinz Schweppe
Ehud Shapiro
Etsuya Shibayama
Kiyoshi Shibayama
Yoav Shoham
Leon Sterling
Mark E. Stickel
Akikazu Takeuchi
Kazuo Taki
Jiro Tanaka
Yuzuru Tanaka
Philip Treleaven
Sxun Tutiya
Shalom Tsur
D.H.D. Warren
Takahira Yamaguchi
Kazumasa Yokota
Minoru Yokota

Tokyo Institute of Technology
Fujitsu Laboratories
Univ. of Tokyo
Mitsubishi Electric Corp.
Kobe Univ.
Univ. of Tokyo
Tokyo Institute of Technology
IBM T.J. Watson
Univ. di Pisa
Univ. of Bristol
Kyoto Univ.
Univ. of Pennsylvania
Keio Univ.
Hitachi Ltd.
Mitsubishi Electric Corp.
Osaka Univ.
CSLI, Stanford Univ.
Univ. Nova de Lisboa
Univ. of California at Riverside
Xerox PARC
Tohoku Univ.
Institut für Informatik
The Weizmann Institute of Science
Ryukoku Univ.
Kyoto Univ.
Stanford Univ.
Case Western Reserve Univ.
SRI International
Hitachi Ltd.
Sony CSL
Fujitsu Laboratories
Hokkaido Univ.
University College, London
Chiba Univ.
Univ. of Bristol
Shizuoka Univ.
NEC Corp.

Publicity Committee
Kinko Yamamoto
Vice-Chairperson: Kunio Murakami
Akira Aiba
Yuichi Tanaka

Kanagawa Univ.

Demonstration Committee
Takashi Kurozumi
Vice-Chairperson: Shunichi Uchida



Abadi, Martin
Abramson, Harvey
Agha, Gul A.
Aiba, Akira
Aida, Hitoshi
Akama, Kiyoshi
Ali, Khayri A. M.
Alkalaj, Leon
Amamiya, Makoto
Amano, Hideharu
Amano, Shinya
America, Pierre
Anzai, Yuichiro
Aoyagi, Tatsuya
Apt, Krzysztof R.
Arikawa, Masatoshi
Arikawa, Setsuo
Arima, Jun
Baba, Takanobu
Babaguchi, Noboru
Babb, Robert G., II
Bancilhon, François
Bansal, Arvind K.
Barklund, Jonas
Beaumont, Tony
Beeri, Catriel
Beldiceanu, Nicolas
Benhamou, Frederic R.
Bibel, Wolfgang
Bic, Lubomir
Biswas, Prasenjit
Blair, Howard A.
Boku, Taisuke
Bonnier, Staffan
Boose, John
Borning, Alan H.
Boutilier, Craig E.
Bowen, David
Brachman, Ronald J.
Bradfield, J. C.
Bratko, Ivan
Brazdil, Pavel
Briot, Jean-Pierre
Brogi, Antonio
Bruynooghe, Maurice

Bry, François
Bubst, S. A.
Buntine, Wray L.
Carlsson, Mats
Chikayama, Takashi
Chong, Chin Nyak
Chu, Lon-Chan
Ciepielewski, Andrzej
Clancey, William J.
Clark, Keith L.
Codish, Michael
Codognet, Christian
Conery, John
Consens, Mariano P.
Crawford, James M., Jr.
Culler, David E.
Dahl, Veronica
Davison, Andrew
de Bakker, Jaco W.
de Maindreville, Christophe
Debray, Saumya K.
Deen, S. M.
DeGroot, Doug
del Cerro, Luis Farinas
Demolombe, Robert
Denecker, Marc
Deransart, Pierre
Dincbas, Mehmet
Drabent, Wlodzimierz
Duncan, Timothy Jon
Dutra, Ines
Fahlman, Scott E.
Falaschi, Moreno
Faudemay, Pascal
Feigenbaum, Edward
Fitting, Melvin C.
Forbus, Kenneth D.
Fribourg, Laurent
Fujisaki, Tetsu
Fujita, Hiroshi
Fujita, Masayuki
Fukunaga, Koichi
Furukawa, Koichi
Gabbrielli, Maurizio
Gaines, Brian R.
Gärdenfors, Peter

Gaudiot, Jean-Luc
Gazdar, Gerald
Gelfond, Michael
Gero, John S.
Giacobazzi, Roberto
Goebel, Randy G.
Goodwin, Scott D.
Goto, Atsuhiro
Goto, Satoshi
Goto, Shigeki
Grama, Ananth
Gregory, Steve
Gunji, Takao
Gupta, Anoop
Hagihara, Kenichi
Hagiya, Masami
Han, Jiawei
Hanks, Steve
Hara, Hirotaka
Harada, Taku
Haraguchi, Makoto
Haridi, Seif
Harland, James
Hasegawa, Ryuzo
Hasida, Kôiti
Hawley, David J.
Hayamizu, Satoru
Hayashi, Hiromu
Henry, Dana S.
Henschen, Lawrence J.
Herath, Jayantha
Hewitt, Carl E.
Hidaka, Yasuo
Higashida, Masanobu
Hiraga, Yuzuru
Hirata, Keiji
Hobbs, Jerry R.
Hogger, Christopher J.
Hong, Se June
Honiden, Shinichi
Hori, Koichi
Horita, Eiichi
Horiuchi, Kenji
Hsiang, Jieh
Iannucci, Robert A.
Ichikawa, Itaru


Ichiyoshi, Nobuyuki
Ida, Tetsuo
Ikeuchi, Katsushi
Inoue, Katsumi
Ishida, Toru
Ishizuka, Mitsuru
Iwasaki, Yumi
Iwayama, Makoto
Jaffar, Joxan
Jayaraman, Bharat
Kahn, Gilles
Kahn, Kenneth M.
Kakas, Antonios C.
Kameyama, Yukiyoshi
Kanade, Takeo
Kanamori, Tadashi
Kaneda, Yukio
Kaneko, Hiroshi
Kanellakis, Paris
Kaplan, Ronald M.
Kasahara, Hironori
Katagiri, Yasuhiro
Katsuno, Hirofumi
Kautz, Henry A.
Kawada, Tsutomu
Kawamura, Tadashi
Kawano, Hiroshi
Keller, Robert
Kemp, David
Kifer, Michael
Kim, Chinhyun
Kim, Hiecheol
Kim, WooYoung
Kimura, Yasunori
Kinoshita, Yoshiki
Kitsuregawa, Masaru
Kiyoki, Yasushi
Kluge, Werner E.
Kobayashi, Shigenobu
Kodratoff, Yves
Kohda, Youji
Koike, Hanpei
Komorowski, Jan
Konagaya, Akihiko
Kono, Shinji
Konolige, Kurt
Korsloot, Mark
Koseki, Yoshiyuki
Kraus, Sarit
Kumar, Vipin

Kunen, Kenneth
Kunifuji, Susumu
Kurita, Shohei
Kurokawa, Toshiaki
Kusalik, Anthony J.
Laird, Philip D.
Lassez, Catherine
Leblanc, Tom
Lescanne, Pierre
Leung, Ho-Fung
Levesque, Hector J.
Levi, Giorgio
Levy, Jean-Jacques
Lieberman, Henry A.
Lindstrom, Gary
Lloyd, John W.
Lusk, Ewing L.
Lytinen, Steven L.
Maher, Michael J.
Makinouchi, Akifumi
Manthey, Rainer
Marek, Victor
Marriott, Kim
Martelli, Maurizio
Maruoka, Akira
Maruyama, Fumihiro
Maruyama, Tsutomu
Masunaga, Yoshifumi
Matsubara, Hitoshi
Matsuda, Hideo
Matsumoto, Yuji
Matsuoka, Satoshi
McCune, William W.
Memmi, Daniel
Mendelzon, Alberto O.
Menju, Satoshi
Meseguer, Jose
Michalski, Richard S.
Michie, Donald
Miller, Dale A.
Millroth, Håkan
Minami, Toshiro
Minker, Jack
Miyake, Nobuhisa
Miyano, Satoru
Miyazaki, Nobuyoshi
Miyazaki, Toshihiko
Mizoguchi, Fumio
Mizoguchi, Riichiro
Mori, Tatsunori

Morishita, Shinichi
Morita, Yukihiro
Motoda, Hiroshi
Montesi, Danilo
Mukai, Kuniaki
Mukouchi, Yasuhito
Murakami, Kazuaki
Murakami, Masaki
Muraki, Kazunori
Muraoka, Yoichi
Nadathur, Gopalan
Naganuma, Jiro
Nagashima, Shigeo
Nakagawa, Hiroshi
Nakagawa, Takayuki
Nakajima, Katsuto
Nakamura, Junichi
Nakano, Miyuki
Nakano, Ryohei
Nakashima, Hideyuki
Nakashima, Hiroshi
Nakata, Toshiyuki
Nakayama, Masaya
Naqvi, Shamim A.
Natarajan, Venkat
Nikhil, Rishiyur S.
Nilsson, Jørgen Fischer
Nilsson, Martin
Nishida, Kenji
Nishida, Toyoaki
Nishikawa, Hiroaki
Nishio, Shojiro
Nitta, Izumi
Nitta, Katsumi
Noyé, Jacques
Numao, Masayuki
Numaoka, Chisato
O'Rorke, Paul V.
Ogura, Takeshi
Ohki, Masaru
Ohmori, Kenji
Ohori, Atsushi
Ohsuga, Akihiko
Ohsuga, Setsuo
Ohwada, Hayato
Oka, Natsuki
Okumura, Manabu
Ono, Hiroakira
Ono, Satoshi
Overbeek, Ross A.



Oyanagi, Shigeru
Palamidessi, Catuscia
Panangaden, Prakash
Pearl, Judea
Pereira, Fernando C.
Pereira, Luís Moniz
Petrie, Charles J.
Plaisted, David A.
Plümer, Lutz
Poole, David
Popowich, Fred P.
Porto, Antonio
Przymusinski, Teodor C.
Raina, Sanjay
Ramamohanarao, Kotagiri
Rao, Anand S.
Reddy, Uday S.
Ringwood, Graem A.
Robinson, John Alan
Rojas, Raul
Rokusawa, Kazuaki
Rossi, Francesca
Rossi, Gianfranco
Russell, Stuart J.
Sadri, Fariba
Saint-Dizier, Patrick
Sakai, Hiroshi
Sakai, Ko
Sakai, Shuichi
Sakakibara, Yasubumi
Sakama, Chiaki
Sakurai, Akito
Sakurai, Takafumi
Sangiorgi, Davide
Santos Costa, Vítor
Saraswat, Vijay A.
Sargeant, John
Sato, Masahiko
Sato, Taisuke
Sato, Yosuke
Satoh, Ken
Schweppe, Heinz
Seki, Hirohisa
Seligman, Jerry M.
Sergot, Marek J.
Sestito, Sabrina
Shanahan, Murray
Shapiro, Ehud
Shibayama, Etsuya
Shibayama, Kiyoshi

Shibayama, Shigeki
Shimada, Kentaro
Shin, Dongwook
Shinohara, Takeshi
Shintani, Toramatsu
Shoham, Yoav
Simonis, Helmut
Sirai, Hidetosi
Smith, Jan Magnus
Smolka, Gert
Sterling, Leon S.
Stickel, Mark E.
Stolfo, Salvatore J.
Subrahmanian, V. S.
Sugano, Hiroyasu
Sugie, Mamoru
Sugiyama, Masahide
Sundararajan, Renga
Suwa, Masaki
Suzuki, Hiroyuki
Suzuki, Norihisa
Takagi, Toshihisa
Takahashi, Mitsuo
Takahashi, Naohisa
Takahashi, Yoshizo
Takayama, Yukihide
Takeda, Masayuki
Takeuchi, Akikazu
Takeuchi, Ikuo
Taki, Kazuo
Tamai, Tetsuo
Tamura, Naoyuki
Tanaka, Hozumi
Tanaka, Jiro
Tanaka, Katsumi
Tanaka, Yuzuru
Taniguchi, Rin-ichiro
Tatemura, Jun'ichi
Tatsuta, Makoto
Terano, Takao
Tick, Evan M.
Toda, Mitsuhiko
Togashi, Atsushi
Tojo, Satoshi
Tokunaga, Takenobu
Tomabechi, Hideto
Tomita, Shinji
Tomiyama, Tetsuo
Touretzky, David S.
Toyama, Yoshihito

Tsuda, Hiroshi
Tsur, Shalom
Tutiya, Syun
Uchihira, Naoshi
Ueda, Kazunori
Uehara, Kuniaki
Ueno, Haruki
van de Riet, Reinder P.
van Emden, Maarten H.
Van Hentenryck, Pascal
Van Roy, Peter L.
Vanneschi, Marco
Wada, Koichi
Wah, Benjamin W.
Walinsky, Clifford
Walker, David
Waltz, David L.
Warren, David H. D.
Warren, David Scott
Watanabe, Takao
Watanabe, Takuo
Watanabe, Toshinori
Watson, Ian
Watson, Paul
Weyhrauch, Richard W.
Wilk, Paul F.
Wolper, Pierre
Yamaguchi, Takahira
Yamamoto, Akihiro
Yamanaka, Kenjiroh
Yang, Rong
Yap, Roland
Yardeni, Eyal
Yasukawa, Hideki
Yokoo, Makoto
Yokota, Haruo
Yokota, Kazumasa
Yokota, Minoru
Yokoyama, Shoichi
Yonezaki, Naoki
Yonezawa, Akinori
Yoo, Namhoon
Yoon, Dae-Kyun
Yoshida, Hiroyuki
Yoshida, Kaoru
Yoshida, Kenichi
Yoshida, Norihiko
Yoshikawa, Masatoshi
Zerubia, Josiane B.


Keynote Speech
Launching the New Era
Kazuhiro Fuchi


General Report on ICOT Research and Development
Overview of the Ten Years of the FGCS Project
Takashi Kurozumi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Summary of Basic Research Activities of the FGCS Project
Koichi Furukawa . . . . . . . . . . . . . . . . . . . . . . . .
Summary of the Parallel Inference Machine and its Basic Software
Shunichi Uchida . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Report on ICOT Research Results
Parallel Inference Machine PIM
Kazuo Taki . . . . . . . . . . . . . . . . . . . . . . .
Operating System PIMOS and Kernel Language KL1
Takashi Chikayama . . . . . . . . . . . . . . . . . .
Towards an Integrated Knowledge-Base Management System: Overview of R&D on
Databases and Knowledge-Bases in the FGCS Project
Kazumasa Yokota and Hideki Yasukawa . . . . . . . . . . . . . . . . . . . . . . . .
Constraint Logic Programming System: CAL, GDCC and Their Constraint Solvers
Akira Aibaand Ryuzo Hasegawa . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Parallel Theorem Provers and Their Applications
Ryuzo Hasegawa and Masayuki Fujita
Natural Language Processing Software
Yuichi Tanaka . . . . . . . . . . . . . . . . . . . . . . .
Experimental Parallel Inference Software
Katsumi Nitta, Kazuo Taki and Nobuyuki Ichiyoshi
Invited Lectures
Formalism vs. Conceptualism: Interfaces between Classical Software Development Techniques
and Knowledge Engineering
Dines Bjørner . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Role of Logic in Computer Science and Artificial Intelligence
J. A. Robinson . . .
Programs are Predicates
C. A. R. Hoare . ..
Panel Discussion: A Springboard for Information Processing in the 21st Century
PANEL: A Springboard for Information Processing in the 21st Century
Robert A. Kowalski (Chairman) . . . . . . . . . . . . . . . . . . . .
Finding the Best Route for Logic Programming
Hervé Gallaire . . . . . . . . . . . . . . . . . .
The Role of Logic Programming in the 21st Century
Ross Overbeek . . . . . . . . . . . . . .
Object-Based Versus Logic Programming
Peter Wegner . . . . . . . . . . . . . . .
Concurrent Logic Programming as a Basis for Large-Scale Knowledge Information Processing
Koichi Furukawa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .









Knowledge Information Processing in the 21st Century
Shunichi Uchida . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232

Parallel VLSI-CAD and KBM Systems
LSI-CAD Programs on Parallel Inference Machine
Hiroshi Date, Yukinori Matsumoto, Kouichi Kimura, Kazuo Taki, Hiroo Kato and
Masahiro Hoshi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Parallel Database Management System: Kappa-P
Moto Kawamura, Hiroyuki Sato, Kazutomo Naganuma and Kazumasa Yokota . . . . . . . . 248
Objects, Properties, and Modules in QUIXOTE
Hideki Yasukawa, Hiroshi Tsuda and Kazumasa Yokota . . . . . . . . . . . . . . . . . . . . . . 257
Parallel Operating System, PIMOS
Resource Management Mechanism of PIMOS
Hiroshi Yashiro, Tetsuro Fujise, Takashi Chikayama, Masahiro Matsuo, Atsushi Hori
and Kumiko Wada . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
The Design of the PIMOS File System
Fumihide Itoh, Takashi Chikayama, Takeshi Mori, Masaki Sato, Tatsuo Kato and
Tadashi Sato . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
ParaGraph: A Graphical Tuning Tool for Multiprocessor Systems
Seiichi Aikawa, Mayumi Kamiko, Hideyuki Kubo, Fumiko Matsuzawa and
Takashi Chikayama . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Genetic Information Processing
Protein Sequence Analysis by Parallel Inference Machine
Masato Ishikawa, Masaki Hoshida, Makoto Hirosawa, Tomoyuki Toya, Kentaro Onizuka
and Katsumi Nitta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Folding Simulation using Temperature Parallel Simulated Annealing
Makoto Hirosawa, Richard J. Feldmann, David Rawn, Masato Ishikawa, Masaki Hoshida
and George Michaels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Toward a Human Genome Encyclopedia
Kaoru Yoshida, Cassandra Smith, Toni Kazic, George Michaels, Ron Taylor,
David Zawada, Ray Hagstrom and Ross Overbeek . . . . . . . . . . . . . . . . . . . . . . . . . .
Integrated System for Protein Information Processing
Hidetoshi Tanaka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .




Constraint Logic Programming and Parallel Theorem Proving
Parallel Constraint Logic Programming Language GDCC and its Parallel Constraint Solvers
Satoshi Terasaki, David J. Hawley, Hiroyuki Sawada, Ken Satoh, Satoshi Menju,
Taro Kawagishi, Noboru Iwayama and Akira Aiba . . . . . . . . . . . . . . . . . 330
cu-Prolog for Constraint-Based Grammar
Hiroshi Tsuda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Model Generation Theorem Provers on a Parallel Inference Machine
Masayuki Fujita, Ryuzo Hasegawa, Miyuki Koshimura and Hiroshi Fujita
Natural Language Processing
On a Grammar Formalism, Knowledge Bases and Tools for Natural Language Processing in
Logic Programming
Hiroshi Sano and Fumiyo Fukumoto . . . . . . . . . . . . . . . . . . . . . . . . . 376


Argument Text Generation System (Dulcinea)
Teruo Ikeda, Akira Kotani, Kaoru Hagiwara and Yukihiro Kubo . . . . . . . . . . . . . . . . . 385
Situated Inference of Temporal Information
Satoshi Tojo and Hideki Yasukawa . . . . . . . . . . . . . . . . . . . . . . . . . . 395
A Parallel Cooperation Model for Natural Language Processing
Shigeichiro Yamasaki, Michiko Turuta, Ikuko Nagasawa and Kenji Sugiyama . . . . . . . . . 405

Parallel Inference Machine (PIM)
Architecture and Implementation of PIM/p
Kouichi Kumon, Akira Asato, Susumu Arai, Tsuyoshi Shinogi, Akira Hattori,
Hiroyoshi Hatazawa and Kiyoshi Hirano . . . . . . . . . . . . . . . . . . . . . . . 414
Architecture and Implementation of PIM/m
Hiroshi Nakashima, Katsuto Nakajima, Seiichi Kondo, Yasutaka Takeda, Yu Inamura,
Satoshi Onishi and Kanae Masuda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Parallel and Distributed Implementation of Concurrent Logic Programming Language KL1
Keiji Hirata, Reki Yamamoto, Akira Imai, Hideo Kawai, Kiyoshi Hirano,
Tsuneyoshi Takagi, Kazuo Taki, Akihiko Nakase and Kazuaki Rokusawa
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i


Reasoning about Programs
Logic Program Synthesis from First Order Logic Specifications
Tadashi Kawamura . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Sound and Complete Partial Deduction with Unfolding Based on Well-Founded Measures
Bern Martens, Danny De Schreye and Maurice Bruynooghe . . . . . . . . . . . . . . . .
A Framework for Analyzing the Termination of Definite Logic Programs with respect to Call Patterns
Danny De Schreye, Kristof Verschaetse and Maurice Bruynooghe . . . . . . . . . . . . . . .
Automatic Verification of GHC-Programs: Termination
Lutz Plümer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Analogical Generalization
Takenao Ohkawa, Toshiaki Mori, Noboru Babaguchi and Yoshikazu Tezuka
Logical Structure of Analogy: Preliminary Report
Jun Arima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Abduction (1)
Consistency-Based and Abductive Diagnoses as Generalised Stable Models
Chris Preist and Kave Eshghi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A Forward-Chaining Hypothetical Reasoner Based on Upside-Down Meta-Interpretation
Yoshihiko Ohta and Katsumi Inoue . . . . . . . . . . . . . . . . . . . . . . . . . .
Logic Programming, Abduction and Probability
David Poole . . . . . . . . . . . . . . . . . . .
Abduction (2)
Abduction in Logic Programming with Equality
P. T. Cox, E. Knill and T. Pietrzykowski
Hypothetico-Deductive Reasoning
Chris Evans and Antonios C. Kakas . . . . . . . . . . . . . . . . . . . . . . . . . . .
Acyclic Disjunctive Logic Programs with Abductive Procedures as Proof Procedure
Phan Minh Dung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Semantics of Logic Programs
Adding Closed World Assumptions to Well Founded Semantics
Luis Moniz Pereira, Jose J. Alferes and Joaquim N. Aparicio . . . . . . . . . . .
Contributions to the Semantics of Open Logic Programs
A. Bossi, M. Gabbrielli, G. Levi and M. C. Meo . . . . . . . . . . . . . . . . . . .
A Generalized Semantics for Constraint Logic Programs
Roberto Giacobazzi, Saumya K. Debray and Giorgio Levi
Extended Well-Founded Semantics for Paraconsistent Logic Programs
Chiaki Sakama . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .




Invited Paper
Formalizing Database Evolution in the Situation Calculus
Raymond Reiter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600


Machine Learning
Learning Missing Clauses by Inverse Resolution
Peter Idestam-Almquist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
A Machine Discovery from Amino Acid Sequences by Decision Trees over Regular Patterns
Setsuo Arikawa, Satoru Kuhara, Satoru Miyano, Yasuhito Mukouchi, Ayumi Shinohara
and Takeshi Shinohara . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Efficient Induction of Version Spaces through Constrained Language Shift
Claudio Carpineto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Theorem Proving
Theorem Proving Engine and Strategy Description Language
Massimo Bruschi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
A New Algorithm for Subsumption Test
Byeong Man Kim, Sang Ho Lee, Seung Ryoul Maeng and Jung Wan Cho . . . . . . . . . . 643
On the Duality of Abduction and Model Generation
Marc Denecker and Danny De Schreye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
Functional Programming and Constructive Logic
Defining Concurrent Processes Constructively
Yukihide Takayama . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Realizability Interpretation of Coinductive Definitions and Program Synthesis with Streams
Makoto Tatsuta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
MLOG: A Strongly Typed Confluent Functional Language with Logical Variables
Vincent Poirriez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A New Perspective on Integrating Functional and Logic Languages
John Darlington, Yi-ke Guo and Helen Pull . . . . . . . . . . . . . . . . . . . . .
Temporal Reasoning
A Mechanism for Reasoning about Time and Belief
Hideki Isozaki and Yoav Shoham . . . . . . . .
Dealing with Time Granularity in the Event Calculus
Angelo Montanari, Enrico Maim, Emanuele Ciapessoni and Elena Ratto



Hardware Architecture and Evaluation
UNIRED II: The High Performance Inference Processor for the Parallel Inference Machine
Kentaro Shimada, Hanpei Koike and Hidehiko Tanaka . . . . . . . . . . . . . . . . . . . . ..
Hardware Implementation of Dynamic Load Balancing in the Parallel Inference Machine
T. Nakagawa, N. Ido, T. Tarui, M. Asaie and M. Sugie . . . . . . . . . . . . . . . . . . .
Evaluation of the EM-4 Highly Parallel Computer using a Game Tree Searching Problem
Yuetsu Kodama, Shuichi Sakai and Yoshinori Yamaguchi . . . . . . . . . . . . . . . . .
OR-Parallel Speedups in a Knowledge Based System: on Muse and Aurora
Khayri A. M. Ali and Roland Karlsson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .



Invited Paper
A Universal Parallel Computer Architecture
William J. Dally . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746


AND-Parallelism and OR-Parallelism
An Automatic Translation Scheme from Prolog to the Andorra Kernel Language
Francisco Bueno and Manuel Hermenegildo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
Recomputation based Implementations of And-Or Parallel Prolog
Gopal Gupta and Manuel V. Hermenegildo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
Estimating the Inherent Parallelism in Prolog Programs
David C. Sehr and Laxmikant V. Kale . . . . . . . . . . . . . . . . . . . . . . . . 783
Implementation Techniques
Implementing Streams on Parallel Machines with Distributed Memory
Koichi Konishi, Tsutomu Maruyama, Akihiko Konagaya, Kaoru Yoshida and
Takashi Chikayama . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
Message-Oriented Parallel Implementation of Moded Flat GHC
Kazunori Ueda and Masao Morita . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Towards an Efficient Compile-Time Granularity Analysis Algorithm
X. Zhong, E. Tick, S. Duvvuru, L. Hansen, A. V. S. Sastry and R. Sundararajan
Providing Iteration and Concurrency in Logic Programs through Bounded Quantifications
Jonas Barklund and Håkan Millroth . . . . . . . . . . . . . . . . . . . . . . . . .
Extension of Logic Programming
An Implementation for a Higher Level Logic Programming Language
Anthony S. K. Cheng and Ross A. Paterson . . . . . . . . . . . . . . . . . . . . 825
Implementing Prolog Extensions: a Parallel Inference Machine
Jean-Marc Alliot, Andreas Herzig and Mamede Lima-Marques . . . . . . . . . . . . . . . . . . 833
Parallel Constraint Solving in Andorra-I
Steve Gregory and Rong Yang . . . . . . . . . . . . . . . . . . . .
A Parallel Execution of Functional Logic Language with Lazy Evaluation
Jong H. Nang, D. W. Shin, S. R. Maeng and Jung W. Cho . . . . . . . . . . . . 851
Task Scheduling and Load Analysis
Self-Organizing Task Scheduling for Parallel Execution of Logic Programs
Zheng Lin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Asymptotic Load Balance of Distributed Hash Tables
Nobuyuki Ichiyoshi and Kouichi Kimura


Constructing and Collapsing a Reflective Tower in Reflective Guarded Horn Clauses
Jiro Tanaka and Fumio Matono . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
CHARM: Concurrency and Hiding in an Abstract Rewriting Machine
Andrea Corradini, Ugo Montanari and Francesca Rossi . . . . . . . . . . . . . .
Less Abstract Semantics for Abstract Interpretation of FGHC Programs
Kenji Horiucbi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
Databases and Distributed SystelTIS
Parallel Optimization and Execution of Large Join Queries
Eileen Tien Lin, Edward Omiecinski and Sudhakar Yalamanchili . . . . . . . . .
Towards an Efficient Evaluation of Recursive Aggregates in Deductive Databases
Alexandre Lefebvre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A Distributed Programming Environment based on Logic Tuple Spaces
Paolo Ciancarini and David Gelernter . . . . . . . . . . . . . . . . . . . . . . . . 926


Programming Environment
Visualizing Parallel Logic Programs with VISTA
E. Tick . . . . . . . . . . . . . . . . . . . . . .
Concurrent Constraint Programs to Parse and Animate Pictures of Concurrent Constraint Programs
Kenneth M. Kahn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Logic Programs with Inheritance
Yaron Goldberg, William Silverman and Ehud Shapiro . . . . . . . . . . . . . . . . . . .
Implementing a Process Oriented Debugger with Reflection and Program Transformation
Munenori Maeda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .



Production Systems
A New Parallelization Method for Production Systems
E. Bahr, F. Barachini and H. Mistelberger . . . . . . . . . . . . . . . . . . . . . . 969
Performance Evaluation of the Multiple Root Node Approach to the Rete Pattern Matcher
for Production Systems
Andrew Sohn and Jean-Luc Gaudiot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977

Constraint Logic
Output in CLP(R)
Joxan Jaffar, Michael J. Maher, Peter J. Stuckey and Roland H. C. Yap
Adapting CLP(R) to Floating-Point Arithmetic
J. H. M. Lee and M. H. van Emden ..
Domain Independent Propagation
Thierry Le Provost and Mark Wallace . . . . . . . . . . . . . . . . . . . . . . . .
A Feature-Based Constraint System for Logic Programming with Entailment
Hassan Aït-Kaci, Andreas Podelski and Gert Smolka . . . . . . . . . . . . . . .
Qualitative Reasoning
Range Determination of Design Parameters by Qualitative Reasoning and its Application to
Electronic Circuits
Masaru Ohki, Eiji Oohira, Hiroshi Shinjo and Masahiro Abe
Logical Implementation of Dynamical Models
Yoshiteru Ishida . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Knowledge Representation
The CLASSIC Knowledge Representation System, or, KL-ONE: The Next Generation
Ronald J. Brachman, Alexander Borgida, Deborah L. McGuinness, Peter F. Patel-Schneider and Lori Alperin Resnick . . . . . . . . . . . . . . . . . . . . . . . . . .
Morphe: A Constraint-Based Object-Oriented Language Supporting Situated Knowledge
Shigeru Watari, Yasuaki Honda and Mario Tokoro . . . . . . . . . . . .
On the Evolution of Objects in a Logic Programming Framework
F. Nihan Kesim and Marek Sergot . . . . . . . . . . . . . . . . . . . . . . . .




Panel Discussion: Future Direction of Next Generation Applications
The Panel on a Future Direction of New Generation Applications
Fumio Mizoguchi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
Knowledge Representation Theory Meets Reality: Some Brief Lessons from the CLASSIC Experience
Ronald J. Brachman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063


Reasoning with Constraints
Catherine Lassez
Developments in Inductive Logic Programming
Stephen Muggleton . . . . . . . . . . . . . . . . . . . . .
Towards the General-Purpose Parallel Processing System
Kazuo Taki . . . . . . . . . . . . . . . . . . . . . . . . . .

Knowledge-Based Systems
A Hybrid Reasoning System for Explaining Mistakes in Chinese Writing
Jacqueline Castaing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Automatic Generation of a Domain Specific Inference Program for Building a Knowledge
Processing System
Takayasu Kasahara, Naoyuki Yamada, Yasuhiro Kobayashi, Katsuyuki Yoshino and
Kikuo Yoshimura . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Knowledge-Based Functional Testing for Large Software Systems
Uwe Nonnenmann and John K. Eddy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A Diagnostic and Control Expert System Based on a Plant Model
Junzo Suzuki, Chiho Konuma, Mikito Iwamasa, Naomichi Sueda, Shigeru Mochiji and
Akimoto Kamiya . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .





Legal Reasoning
A Semiformal Metatheory for Fragmentary and Multilayered Knowledge as an Interactive
Metalogic Program
Andreas Hamfelt and Åke Hansson . . . . . . . . . . . . . . . . . . . . . . . . . 1107
HELIC-II: A Legal Reasoning System on the Parallel Inference Machine
Katsumi Nitta, Yoshihisa Ohtake, Shigeru Maeda, Masayuki Ono, Hiroshi Ohsaki and
Kiyokazu Sakane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
Natural Language Processing
Chart Parsers as Proof Procedures for Fixed-Mode Logic Programs
David A. Rosenblueth . . . . . . . . . . . . . . . . . . . .
A Discourse Structure Analyzer for Japanese Text
K. Sumita, K. Ono, T. Chino, T. Ukita and S. Amano . . . . . . .
Dynamics of Symbol Systems: An Integrated Architecture of Cognition
Koiti Hasida . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Knowledge Support Systems
Mental Ergonomics as Basis for New-Generation Computer Systems
M. H. van Emden .. . . . . . . . . . . . . . . . . . . . . . . . . . .
An Integrated Knowledge Support System
B. R. Gaines, M. Linster and M. L. G. Shaw . . . . . . . . . . . .
Modeling the Generational Infrastructure of Information Technology
B. R. Gaines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .



Parallel Applications
Co-HLEX: Co-operative Recursive LSI Layout Problem Solver on Japan's Fifth Generation
Parallel Inference Machine
Toshinori Watanabe and Keiko Komatsu . . . . . . . . . . . . . . . . . . . . . . 1173
A Cooperative Logic Design Expert System on a Multiprocessor
Yoriko Minoda, Shuho Sawada, Yuka Takizawa, Fumihiro Maruyama and
Nobuaki Kawato . . . . . . . . . . . . . . . . . . . . . . . . . .
A Parallel Inductive Learning Algorithm for Adaptive Diagnosis
Yoichiro Nakakuki, Yoshiyuki Koseki and Midori Tanaka


Parallel Logic Simulator based on Time Warp and its Evaluation
Yukinori Matsumoto and Kazuo Taki . . . . . . . . . . . . . . . . . . . . . . . . 1198

Invited Paper
Applications of Machine Learning: Towards Knowledge Synthesis
Ivan Bratko


Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i


edited by ICOT. © ICOT, 1992


Launching the New Era
Kazuhiro Fuchi
Director, Research Center
Institute for New Generation Computer Technology (ICOT)
4-28, Mita 1-chome, Minato-ku, Tokyo 108, Japan

Thank you for coming to FGCS'92. As you know, we
have been conducting a ten-year research project on fifth
generation computer systems. Today is the tenth anniversary of the founding of our research center, making
it exactly ten years since our project actually started.
The first objective of this international conference is to
show what we have accomplished in our research during
these ten years.
Another objective of this conference is to offer an opportunity for researchers to present the results of advanced research related to Fifth Generation Computer
Systems and to exchange ideas. A variety of innovative
studies, in addition to our own, are in progress in many
parts of the world, addressing the future of computers
and information processing technologies.
I constantly use the phrase "Parallel Inference" as the keyword that simply and precisely describes the technological goal of this project. Our hypothesis is that parallel inference technology will provide the core for those new technologies in the future, technologies that will be able to go beyond the framework of conventional computer technology.
During these ten years I have tried to explain this idea
whenever I have had the chance. One obvious reason why
I have repeated the same thing so many times is that
I wish its importance to be recognized by the public.
However, I have another, less obvious, reason.
When this project started, an exaggerated image of
the project was engendered, which seems to persist even
now. For example, some people believed that we were
trying, in this project, to solve in a mere ten years some
of the most difficult problems in the field of artificial intelligence (AI), or to create a machine translation system
equipped with the same capabilities as humans.
In those days, we had to face criticism, based upon
that false image, that it was a reckless project trying
to tackle impossible goals. Now we see criticism, from
inside and outside the country, that the project has failed
because it has been unable to realize those grand goals.
The reason why such an image was born appears to
have something to do with FGCS'81, a conference we
held one year before the project began. At that confer-

ence we discussed many different dreams and concepts.
The substance of those discussions was reported as sensational news all over the world.
A vision with such ambitious goals, however, can never
be materialized as a real project in its original form.
Even if a project is started in accordance with the original form, it cannot be managed and operated within the
framework of an effective research scheme. Actually, our
plans had become much more modest by the time the
project was launched.
For example, the development of application systems,
such as a machine translation system, was removed from
the list of goals. It is impossible to complete a highly
intelligent system in ten years. A preliminary stage is
required to enhance basic studies and to reform computer technology itself. We decided that we should focus
our efforts on these foundational tasks. Another reason is that, at that time in Japan, some private companies had already begun to develop pragmatic, low-level
machine-translation systems independently and in competition with each other.
Most of the research topics related to pattern recognition were also eliminated, because a national project
called "Pattern Information Processing" had already
been conducted by the Ministry of International Trade
and Industry for ten years. We also found that the stage
of the research did not match our own.
We thus deliberately eliminated most research topics covered by Pattern Information Processing from the
scope of our FGCS project. However, those topics themselves are very important and thus remain major topics
for research. They may become a main theme of another
national project of Japan in the future.
Does all this mean that FGCS'81 was deceptive? I
do not think so. First, in those days, a pessimistic outlook predominated concerning the future development of
technological research. For example, there was a general
trend that research into artificial intelligence would be of
no practical use. In that sort of situation, there was considerable value in maintaining a positive attitude toward
the future of technological research-whether this meant
ten years or fifty. I believe that this was the very reason


why we received remarkable reactions, both positive and
negative, from the public.
The second reason is that the key concept of Parallel
Inference was presented in a clear-cut form at FGCS'81.
Let me show you a diagram (Figure 1). This diagram is
the one I used for my speech at FGCS'81, and is now a
sort of "ancient document." Its draft was completed in
1980, but I had come up with the basic idea four years
earlier. After discussing the concept with my colleagues
for four years, I finally completed this diagram.
Here, you can clearly see our concept that our goal
should be a "Parallel Inference Machine." We wanted
to create an inference machine, starting with study on
a variety of parallel architectures. For this purpose, research into a new language was necessary. We wanted to
develop a 5G-kernel language, what we now call KL1.
The diagram includes these hopes of ours.
The upper part of the diagram shows the research infrastructure. A personal inference machine or workstation for research purposes should be created, as well as a
chip for the machine. We expected that the chip would
be useful for our goal. The computer network should be
consolidated to support the infrastructure. The software
aspects are shown in the bottom part of the diagram.
Starting with the study on software engineering and AI,
we wanted to build a framework for high-level symbol
processing, which should be used to achieve our goal.
This is the concept I presented at the FGCS'81 conference.
I would appreciate it if you would compare this diagram with our plan and the results of the final stage
of this project, when Deputy Director Kurozumi shows
you them later. I would like you to compare the original
structure conceived 12 years ago and the present results
of the project so that you can appreciate what has been
accomplished and criticize what is lacking or what was
immature in the original idea.
Some people tend to make more of the conclusions
drawn by a committee than the concepts and beliefs of
an individual. It may sound a little bit beside the point, but
I have heard that there is a proverb in the West that
goes, "The horse designed by a committee will turn out
to be a camel."
The preparatory committee for this project had a series of enthusiastic discussions for three years before the
project's launching. I thought that they were doing an
exceptional job as a committee. Although the committee's work was great, however, I must say that the plan
became a camel. It seems that their enthusiasm created some extra humps as well. Let me say in passing
that some people seem to adhere to those humps. I am
surprised that there is still such a so-called bureaucratic
view even among academic people and journalists.
This is not the first time I have expressed this opinion
of mine about the goal of the project. I have, at least
in Japanese, been declaring it in public for the past ten

years. I think I could have been discharged at any time
had my opinion been inappropriate.
As the person in charge of this project, I have pushed
forward with the lines of Parallel Inference based upon
my own beliefs. Although I have been criticized as still
being too ambitious, I have always been prepared to take
responsibility for that.
Since the project is a national project, it goes without
saying that it should not be controlled by one person. I
have had many discussions with a variety of people for
more than ten years. Fortunately, the idea of the project
has not remained just a personal belief but has become
a common belief shared by the many researchers and
research leaders involved in the project.
Assuming that this project has proved to be successful,
as I believe it has, this fact is probably the biggest reason
for its success. For a research project to be successful, it
needs to be favored by good external conditions. But the
most important thing is that the research group involved
has a common belief and a common will to reach its
goals. I have been very fortunate to be able to realize
and experience this over the past ten years.
So much for introductory remarks. I wish to outline, in
terms of Parallel Inference, the results of our work conducted over these ten years. I believe that the remarkable
feature of this project is that it focused upon one language and, based upon that language, experimented with
the development of hardware and software on a large scale.
From the beginning, we envisaged that we would take
logic programming and give it a role as a link that connects highly parallel machine architecture and the problems concerning applications and software. Our mission
was to find a programming language for Parallel Inference.
A research group led by Deputy Director Furukawa
was responsible for this work. As a result of their efforts, Ueda came up with a language model, GHC, at
the beginning of the intermediate stage of the project.
The two main precursors of it were Parlog and Concurrent Prolog. He enhanced and simplified them to make
this model. Based upon GHC, Chikayama designed a
programming language called KL1.
KL1, a language derived from the logic programming
concept, provided a basis for the latter half of our
project. Thus, all of our research plans in the final stage
were integrated under a single language, KL1.
For example, we developed a hardware system, the
Multi-PSI, at the end of the intermediate stage, and
demonstrated it at FGCS'88. After the conference we
made copies and have used them as the infrastructure
for software research.
In the final stage, we made a few PIM prototypes, a
Parallel Inference Machine that has been one of our final
research goals on the hardware side. These prototypes
are being demonstrated at this conference.


[Figure 1. Conceptual development diagram presented at FGCS'81. It charts a ten-year path from the research infrastructure (a personal inference machine, a Prolog machine plus extensions, reduced to chips and supported by computer networks) and from parallel architectures (data flow machines, database machines and other ideas for symbol manipulation) through a new kernel language toward the Parallel Inference Machine, with software engineering, knowledge engineering, problem solving and artificial intelligence research (theorem proving, QA and language understanding, knowledge bases, consultations) building up high-level symbol processing. Source: T. Moto-oka (ed.): Fifth Generation Computer Systems (Proc. FGCS'81), JIPDEC: North-Holland, 1982, p. 113.]

Each prototype has a different architecture in its interconnection network and so forth, and the architecture
itself is a subject of research. Viewed from the outside,
however, all of them are KL1 machines.
Division Chief Uchida and Laboratory Chief Taki will
show you details on PIM later. What I want to emphasize here is that all of these prototypes are designed,
down to the level of internal chips, with the assumption
that KL1, a language that could be categorized as a very
high-level language, is a "machine language."
On the software side as well, our research topics were
integrated under the KL1 language. All the application
software, as well as the basic software such as operating
systems, were to be written in KL1.

We demonstrated an operating system called PIMOS
at FGCS'88, which was the first operating system software written in KL1. It was immature at that time, but
has been improved since then. The full-fledged version
of PIMOS now securely backs the demonstrations being
shown at this conference.
Details will later be given by Laboratory Chief
Chikayama, but I wish to emphasize that not only have
we succeeded in writing software as complicated and
huge as an operating system entirely in KL1, but we
have also proved through our own experience that KL1
is much more appropriate than conventional languages
for writing system software such as operating systems.
One of the major challenges in the final stage was to


demonstrate that KL1 is effective not only for basic software, such as operating systems and language implementations, but also for a variety of applications. As Laboratory Chief Nitta will report later, we have been able to
demonstrate the effectiveness of KL1 for various applications including LSI-CAD, genetic analysis, and legal
reasoning. These application systems address issues in
the real world and have a virtually practical scale. But,
again, what I wish to emphasize here is that the objective of those developments has been to demonstrate the
effectiveness of Parallel Inference.
In fact, it was in the initial stage of our project that we
first tried the approach of developing a project around
one particular language. The technology was at the level
of sequential processing, and we adopted ESP, an expanded version of Prolog, as a basis.
Assuming that ESP could play a role of KL0, our kernel language for sequential processing, a Personal Sequential Inference machine, called PSI, was designed as hardware. We decided to use the PSI machine as a workstation for our research. Some 500 PSIs, including modified versions, have so far been produced and used in the project.
SIMPOS, the operating system designed for PSI, is
written solely in ESP. In those days, this was one of
the largest programs written in a logic programming language.
Up to the intermediate stage of the project, we used
PSI and SIMPOS as the infrastructure to conduct research on expert systems and natural language processing.
This kind of approach is indeed the dream of researchers, but some of you may be skeptical about our
approach. Our project, though conducted on a large
scale, is still considered basic research. Accordingly, it is
supposed to be conducted in a free, unrestrained atmosphere so as to bring about innovative results. Some of
you may wonder whether the policy of centering around
one particular language restrains the freedom and diversity of research.
But this policy is also based upon my, or our, philosophy. I believe that research is a process of "assuming
and verifying hypotheses." If this is true, the hypotheses
must be as pure and clear as possible. If not, you cannot
be sure of what you are trying to verify.
A practical system itself could include compromise or,
to put it differently, flexibility to accommodate various
needs. However, in a research project, the hypotheses
must be clear and verifiable. Compromises and the like
could be considered after basic research results have been
obtained. This has been my policy from the very beginning, and that is the reason why I took a rather controversial or provocative approach.
We had a strong belief that our hypothesis of focusing
on Parallel Inference and KL1 had sufficient scope for a
world of rich and free research. Even if the hypothesis

acted as a constraint, we believed that it would act as a
creative constraint.
I would be a liar if I were to say that there was no
resistance among our researchers when we decided upon
the above policy. KL1 and parallel processing were a
completely new world to everyone. It required a lot of
courage to plunge headlong into this new world. But
once the psychological barrier was overcome, the researchers set out to create new parallel programming
techniques one after another.
People may not feel like using new programming languages such as KL1. Using established languages and
systems only, or a kind of conservatism, seems to be the
major trend today. In order to make a breakthrough into
the future, however, we need a challenging and adventuring spirit. I think we have carried out our experiment
with such a spirit throughout the ten-year project.
Among the many other results we obtained in the final stage was a fast theorem-proving system, or a prover.
Details will be given in Laboratory Chief Hasegawa's report, but I think that this research will lead to the resurrection of theorem-proving research.
Conventionally, research into theorem proving by computers has been criticized by many mathematicians who
insisted that only toy examples could be dealt with.
However, very recently, we were able to solve a problem
labelled by mathematicians as an 'open problem' using
our prover, as a result of collaborative research with Australian National University.
The applications of our prover are not limited to mathematical theorem proving; it is also being used as the
inference engine of our legal reasoning system. Thus,
our prover is being used in the mathematics world on
one hand, and the legal world on the other.
The research on programming languages has not ended
with KL1. For example, a constraint logic programming language called GDCC has been developed as a higher-level language than KL1. We also have a language called Quixote.
From the beginning of this project, I have advocated
the idea of integrating three types of languages-logic,
functional, and object-oriented-and of integrating the
worlds of programming and of databases. This idea has
been materialized in the Quixote language; it can be
called a deductive object-oriented database language.
Another language, CIL, was developed by Mukai in the
study of natural language processing. CIL is a semantics
representation language designed to be able to deal with
situation theory. Quixote incorporates CIL in a natural
form and therefore has the characteristics of a semantics
representation language. As a whole, it shows one possible future form of knowledge representation languages.
More details on Quixote, along with the development
of a distributed parallel database management system,
Kappa-P, will be given by Laboratory Chief Yokota.
Thus far I have outlined, albeit briefly, the final results


of our ten-year project. Recalling what I envisaged ten
years ago and what I have dreamed and hoped would
materialize for 15 years, I believe that we have achieved
as much as or more than what I expected, and I am quite satisfied.
Naturally, a national project is not performed for mere
self-satisfaction. The original goal of this project was to
create the core of next-generation computer technologies. Various elemental technologies are needed for future computers and information processing. Although it
is impossible for this project alone to provide all of those
technologies, we are proud to be able to say that we have
created the core part, or at least provided an instance of it.
The results of this project, however, cannot be commercialized as soon as the project is finished, which is
exactly why it was conducted as a national project. I
estimate that it takes us another five years, which could
be called a period for the "maturation of the technologies", for our results to actually take root in society. I
had this prospect in mind when this project started ten
years ago, and have kept declaring it in public right up
until today. Now the project is nearing its end, but my
idea is still the same.
There is often a gap of ten or twenty years between the
basic research stage of a technology and the day it appears in the business world. Good examples are UNIX,
C, and RISC, which has become popular in the current
trend toward downsizing. They appear to be up-to-date
in the business world, but research on them has been
conducted for many years. The frank opinions of the researchers involved will be that industry has finally caught
up with their research.
There is thus a substantial time lag between basic research and commercialization. Our project, from its very
outset, set an eye on technologies for the far distant future. Today, the movement toward parallel computers
is gaining momentum worldwide as a technology leading
into the future. However, skepticism was dominant ten
years ago. The situation was not very different even five
years ago. When we tried to shift our focus on parallel
processing after the initial stage of the project, there was
a strong opinion that a parallel computer was not possible and that we should give it up and be happy with the
successful results obtained in the initial stage.
In spite of the skepticism about parallel computers
that still remains, the trend seems to be changing drastically. Thanks to constant progress in semiconductor
technology, it is now becoming easier to connect five hundred, a thousand, or even more processor chips, as far as
hardware technology is concerned.
Currently, the parallel computers that most people are
interested in are supercomputers for scientific computation. The ideas there tend to still be vague regarding the
software aspects. Nevertheless, a new age is dawning.
The software problem might not be too serious as long

as scientific computation deals only with simple, scaledup matrix calculations, but it will certainly become serious in the future. Now suppose this problem has been
solved and we can nicely deal with all the aspects of
large-scale problems with complicated overall structures.
Then, we would have something like a general-purpose
capability that is not limited to scientific computation.
We might then be able to replace the mainframe computers we are using now.
The scenario mentioned above is one possibility leading to a new type of mainframe computer in the future.
One could start by connecting a number of processor
chips and face enormous difficulties with parallel software.
However, he or she could alternatively start by considering what technologies will be required in the future,
and I suspect that the answer should be the Parallel Inference technology which we have been pursuing.
I am not going to press the above view upon you. However, I anticipate that if anybody starts research without
knowing our ideas, or under a philosophy that he or she
believes is quite different from ours, after many twists
and turns that person will reach more or less the same
concept as ours-possibly with small differences such as
different terminology. In other words, my opinion is that
there are not so many different essential technologies.
It may be valuable for researchers to struggle through
a process of research independently from what has already been done, finally to find that they have followed
the same course as somebody else. But a more efficient
approach would be to build upon what has been done in
this FGCS project and devote energy to moving forward
from that point. I believe the results of this project will
provide important insights for researchers who want to
pursue general-purpose parallel computers.
This project will be finished at the end of this year.
As for "maturation of the Parallel Inference technology", I think we will need a new form of research activities. There is a concept called "distributed cooperative
computing" in the field of computation models. I expect that, in a similar spirit, the seeds generated in this
project will spread both inside and outside the country
and sprout in many different parts of the world.
For this to be realized, the results of this project must
be freely accessible and available worldwide. In the software area, for example, this means that it is essential
to disclose all our accomplishments including the source
codes and to make them "international common public property".
MITI Minister Watanabe and the Director General of the Bureau announced the policy that the results of our
project could be utilized throughout the world. Enormous effort must have been made to formulate such a
policy. I find it very impressive.
We have tried to encourage international collaboration for ten years in this project. As a result, we have


enjoyed opportunities to exchange ideas with many researchers involved in advanced studies in various parts of
the world. They have given us much support and cooperation, without which this project could not have been completed.
In that regard, and also considering that this is a
Japanese national project that aims at making a contribution, though it may only be small, toward the future of
mankind, we believe that we are responsible for leaving
our research accomplishments as a legacy to future generations and to the international community in a most
suitable form. This is now realized, and I believe it is an
important springboard for the future.
Although this project is about to end, the end is just
another starting point. The advancement of computers
and information processing technologies is closely related
to the future of human society. Social thought, ideologies, and social systems that fail to recognize its significance will perish as we have seen in recent world history.
We must advance into a new age now. To launch a new
age, I fervently hope that the circle of those who share
our passion for a bright future will continue to expand.
Thank you.

edited by ICOT. © ICOT, 1992


Overview of the Ten Years of the FGCS Project
Takashi Kurozumi
Institute for New Generation Computer Technology
4-28, Mita 1-chome, Minato-ku, Tokyo 108, Japan
Kurozumi @

This paper introduces how the FGCS Project
started, its overall activities and the results of the
FGCS project. The FGCS Project was launched in
1982 after a three year preliminary study stage.
The basic framework of the fifth generation
computer is parallel processing and inference
processing based on logic programming. Fifth
generation computers were viewed as suitable for
the knowledge information processing needs of the
near future. ICOT was established to promote the
FGCS Project. This paper shows not only ICOT's efforts in promoting the FGCS project, but also the relationship between ICOT and related organizations. I also conjecture on the
parallel inference machines of the near future.


Preliminary Study Stage for
the FGCS Project

The circumstances prevailing during the
preliminary stage of the FGCS Project, from 1979 to
1981, can be summarized as follows.
· Japanese computer technologies had reached the level of the most up-to-date overseas computer technologies.
· A change of the role of the Japanese national
project for computer technologies was being
discussed whereby there would be a move away
from improvement of industrial competitiveness by
catching up with the latest European computer
technologies and toward world-wide scientific
contribution through the risky development of
leading computer technologies.
In this situation, the Japanese Ministry of
International Trade and Industry (MITI) started
study on a new project - the Fifth Generation
Computer Project. This term expressed MITI's will
to develop leading technologies that would progress
beyond the fourth generation computers due to

appear in the near future and which would
anticipate upcoming trends.
The Fifth Generation Computer Research
Committee and its subcommittee (Figure 1-1) were
established in 1979. It took until the end of 1981 to
decide on target technologies and a framework for
the project.

Figure 1-1  Organization of the Fifth Generation Computer Committee

Well over one hundred meetings were held with a
similar number of committee members
participating. The following important near-future
computer technologies were discussed.
· Inference computer technologies for knowledge processing
· Computer technologies to process large-scale databases and knowledge bases
· High performance workstation technologies
· Distributed functional computer technologies
· Super-computer technologies for scientific computation
These computer technologies were investigated and discussed from the standpoints of international contribution by developing original Japanese technologies, the importance of the technologies in the future, social needs, and conformance with Japanese governmental policy for national projects.
Through these studies and discussions, the committee decided on the objectives of the project by the end of 1980, and continued further studies of technical matters, social impact, and project promotion.
The committee's proposals for the FGCS Project
are summarized as follows.
(1) The concept of the Fifth Generation Computer: To have parallel (non-von Neumann) processing and inference processing using knowledge bases as basic mechanisms. In order to have these mechanisms, the hardware and software interface is to be a logic programming language (Figure 1-2).
(2) The objectives of the FGCS project: To develop these innovative computers, capable of knowledge information processing, and to overcome the technical restrictions of conventional computers.



[Figure 1-2: Concept of the Fifth Generation Computer. A computer for knowledge information processing systems (KIPS), serving as an intelligent assistant for human activities; the basic mechanisms of its hardware and software are logical inference processing (based on logic programming) and highly parallel processing.]

(3) The goals of the FGCS project: To research and develop a set of hardware and software technologies for FGCS, and to develop an FGCS prototype system consisting of a thousand element processors with inference execution speeds of between 100 MLIPS and 1 GLIPS (Logical Inferences Per Second).
(4) R&D period for the project: Estimated to be 10 years, divided into three stages:
· a 3-year initial stage for R&D of basic technologies
· a 4-year intermediate stage for R&D of subsystems
· a 3-year final stage for R&D of the total prototype system
MITI decided to launch the Fifth Generation
Computer System (FGCS) project as a national
project for new information processing, and made
efforts to acquire a budget for the project.
At the same time, the international conference FGCS '81 was prepared and held in October 1981 to announce these results and to hold discussions on the topic with foreign researchers.

Stages and Budgeting in the FGCS

The FGCS project was designed to investigate a
large number of unknown technologies that were
yet to be developed. Since this involved a number of
risky goals, the project was scheduled over a
relatively long period of ten years. This ten-year
period was divided into three stages.
- In the initial stage (fiscal 1982-1984), the purpose of R&D was to develop the basic computer technologies needed to achieve the project's goals.
- In the intermediate stage (fiscal 1985-1988), the purpose of R&D was to develop small to medium scale subsystems.
- In the final stage (fiscal 1989-1992), the purpose
of R&D was to develop a total prototype system.
of R&D was to develop a total prototype system.
The final stage was initially planned to be three
years. After reexamination halfway through the
final stage, this stage was extended to four years
to allow evaluation and improvement of the total
system in fiscal year 1992. Consequently, the
total length of this project has been extended to
11 years.


Overview of R&D Activities and Results of the FGCS Project



[Figure 2-1: Budgets for the FGCS project. Preliminary study stage, then initial stage, 3 years (1982-84), total about ¥8.3 billion; intermediate stage, 4 years (1985-88), total about ¥21.6 billion; final stage, 4 years (1989-92), with a 3-year (1989-91) total of about ¥20.7 billion. Yearly budgets grew from ¥400 million in 1982 to ¥7.2 billion in 1991, with ¥3.6 billion planned for 1992. All R&D was carried out under the auspices of MITI, which covered the entire budget (total ¥54.6 billion).]

Each year the budget for the following year's R&D
activities was decided. MITI made great efforts in
negotiating each year's budget with the Ministry of
Finance. The budgets for each year, which are all
covered by MITI, are shown in Figure 2-1. The total
budget for the 3-year initial stage was about 8
billion yen. For the 4-year intermediate stage, it
was about 22 billion yen. The total budget for 1989
to 1991 was around 21 billion yen. The budget for
1992 is estimated to be 3.6 billion yen.


Consequently, the total budget for the 11-year
period of the project will be about 54 billion yen.


R&D subjects of each stage

At the beginning, it was considered that a detailed R&D plan could not be fixed for a period as long as ten years, so the R&D goals and the means to reach them were not decided in detail. During the project, goals were set and methods decided by referring back to the initial plan at the beginning of each stage.
The R&D subjects for each stage, shown in Figure
2-2, were decided by considering the framework and
conditions mentioned below.
We defined 3 groups of 9 R&D subjects at the beginning of the initial stage by analyzing and rearranging the 5 groups of 10 R&D subjects proposed by the Fifth Generation Computer Committee.
At the end of the initial stage, the basic research
themes of machine translation and speech, figure
and image processing were excluded from this
project. These were excluded because computer
vendor efforts on these technologies were recognized
as having become very active.
In the middle of the intermediate stage, the task
of developing a large scale electronic dictionary was
transferred to EDR (Electronic Dictionary Research
Center), and development of CESP (Common ESP
system on UNIX) was started by AIR (AI language
Research Center).
The basic R&D framework for promoting this
project is to have common utilization of developed
software by unifying the software development
environment (especially by unifying programming
languages). By utilizing software development
systems and tools, the results of R&D can be
evaluated and improved. Of course, considering the
nature of this project, there is another reason
making it difficult or impossible to use commercial
products as a software development environment.
In each stage, the languages and the software development environment were unified as follows.
Initial stage: Prolog on DEC machine
Intermediate stage: ESP on PSI and SIMPOS
Final stage: KL1 on Multi-PSI (or PIM) and PIMOS (PSI machines are also used as pseudo multi-PSI systems.)
(Figure 2-6)


Overview of R&D Results of
Hardware System

Hardware system R&D was carried out by the
subjects listed below in each stage.
(1) Initial stage
· Functional mechanism modules and simulators for PIM (Parallel Inference Machine) of the hardware system
· Functional mechanism modules and simulators for KBM (Knowledge Base Machine) of the hardware system
· SIM (Sequential Inference Machine) hardware of the pilot model for software development
(2) Intermediate stage
· Inference subsystem of the hardware system
· Knowledge base subsystem of the hardware system
· Pilot model for parallel software development of the development support system
(3) Final stage
· Prototype hardware system

[Figure 2-2: Transition of R&D subjects in each stage]

[Figure 2-3: Transition of R&D results of hardware]

The major R&D results on SIM were the PSI (Personal Sequential Inference Machine) and CHI (a high-performance back-end inference unit). In the initial stage, PSI-I was developed as the KL0 (Kernel Language Version 0) machine. PSI-I had an execution speed of around 35 KLIPS (kilo logical inferences per second). Around 100 PSI-I machines were used as the main workstations (WSs) for the sequential logic programming language, ESP, in the first half of the intermediate stage. CHI-I achieved around 200 KLIPS by using the WAM instruction set and high-speed devices. In the intermediate stage, PSI was redesigned as the multi-PSI FEP (Front End Processor) and as PSI-II, with performance of around 330-400 KLIPS. CHI was also redesigned as CHI-II, with more than 400 KLIPS performance. PSI-II machines were the main WSs for ESP after the middle of the intermediate stage, and could also be used for KL1 by the last year of the intermediate stage. PSI-III was developed as a commercial product by a computer company, with the permission of MITI, by using PIM/m CPU technologies and UNIX.
R&D on PIM continued throughout the project, as follows.
In the initial stage, experimental PIM hardware simulators and software simulators with 8 to 16 processors were trial-fabricated based on data flow and reduction mechanisms.
In the intermediate stage, we developed multi-PSI V1, constructed from 6 PSI-Is, as the first version of the KL1 machine. The performance of this machine was only several KLIPS because of the KL1 emulator. It did, however, provide evaluation data and experience through the development of a very small parallel OS in KL1. This meant that we could develop multi-PSI V2, with 64 PSI-II CPUs connected by a mesh network. The performance of each CPU for KL1 was around 150 KLIPS, and the average performance of the full multi-PSI V2 was 5 MLIPS. This was a large enough improvement in speed to encourage efforts to develop various parallel KL1 software programs, including a practical OS.
After development of multi-PSI V2, we promoted the design and trial-fabrication of PIM experimental models.
At present, we are completing development of prototype hardware consisting of 3 large-scale PIM modules and 2 small-scale experimental PIM modules. These PIM modules are designed to be equally suited to serving as KL1 machines for inference and knowledge base management, and to be able to run all programs written in KL1, in spite of their different architectures.
The VPIM system is a KL1-b language processing system which gives a common base for PIM firmware for KL1-b, developed on conventional machines.

R&D on KBM continued until the end of the intermediate stage. An experimental relational database machine (Delta) with 4 relational algebra engines was trial-fabricated in the initial stage. During the intermediate stage, a deductive database simulator was developed using PSIs with an accelerator for comparison and searching. An experimental system with multiple name spaces was also developed using CHI. Lastly, a knowledge base hardware simulator with unification engines and multi-port page memory was developed in this stage.
We developed DB/KB management software, called Kappa, under the concurrent basic software themes. At the beginning of the final stage, we judged that the adaptability of PIM with Kappa to the various description forms for knowledge bases was more important than the effectiveness of a KBM with special mechanisms for specific KB forms. In other words, we thought that deductive object-oriented DB technologies were not yet mature enough for a KBM to be designed as part of the prototype system.


Overview of R&D Results of
Software Systems

The R&D of software systems was carried out under the subjects listed below in each stage.
(1) Initial stage
· Basic software
  - 5G kernel languages
  - Problem solving and inference software module
  - Knowledge base management software module
  - Intelligent interface software module
  - Intelligent programming software module
· SIM software of the pilot model for development
(2) Intermediate stage
· Basic software system (as in the initial stage)
· Experimental application system for the basic software modules
(3) Final stage
· Basic software system
  - Inference control module
  - KB management module
· Knowledge programming software
  - Problem solving and programming module
  - Natural language interface module
  - Knowledge construction and utilization module
  - Advanced problem solving inference method
· Experimental parallel application system
To make the R&D results easy to understand, I will
separate the results for languages, basic software,
knowledge programming and application software.

R&D results of Fifth Generation
Computer languages

As the first step in 5G language development, we designed the sequential logic programming languages KL0 and ESP (Extended Self-contained Prolog) and developed their language processors. KL0, designed for the PSI hardware system, is based on Prolog. ESP adds modular programming functions to KL0 and is designed for describing large-scale software such as SIMPOS and application systems.
As a result of research on parallel logic programming languages, Guarded Horn Clauses, or GHC, was proposed as the basic specification for KL1 (Kernel Language Version 1). KL1 was then designed by adding various functions, such as macro descriptions, to GHC. KL1 consists of a machine-level language (KL1-b (base)), a core language (KL1-c) for writing parallel software, and pragmas (KL1-p) to describe the division of parallel processes. The parallel inference machines, multi-PSI and PIM, are based on KL1-b. Various parallel software, including PIMOS, is written in KL1-c and KL1-p.
A'um is an object-oriented language. The results of developing the A'um experimental language processor were reflected in improvements to KL1.
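As a rough illustration of how the KL1 layers fit together, the fragment below shows KL1-c clauses annotated with KL1-p pragmas. The @node() notation follows published KL1 descriptions; the predicate names and node numbers here are invented for this sketch and are not drawn from any ICOT program:

```
% search(L, R, N): count the solutions in two independent subproblems
% in parallel, then combine the counts.
search(L, R, N) :- true |
    count(L, N1)@node(0),    % KL1-p pragma: execute this goal on processor 0
    count(R, N2)@node(1),    % KL1-p pragma: execute this goal on processor 1
    add(N1, N2, N).          % plain KL1-c goal; runs where the caller runs
```

The point of the separation is that removing the pragmas leaves a valid KL1-c program with the same meaning; KL1-p only directs where work is done, so load distribution can be tuned without changing program logic.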
To research higher-level languages, several languages were developed to aid description in specific research fields. CIL (Complex Indeterminate Language) is an extension of Prolog that describes meanings and situations for natural language processing. CRL (Complex Record Language) was developed as a knowledge representation language to be used internally for deductive databases on nested relational DB software. CAL (Contrainte Avec Logique) is a sequential constraint logic language for constraint programming.
Mandala was proposed as a knowledge representation language for parallel processing, but was not adopted, because it lacked a parallel processing environment and we had gained enough experience with it in the initial stage.
Quixote is designed as a knowledge representation and knowledge-base language for parallel processing, based on the results of evaluating CIL and CRL. Quixote is also a deductive object-oriented database language and plays the key role in KBMS. A language processor is currently being developed for Quixote.
GDCC (Guarded Definite Clause with Constraints) is a parallel constraint logic language that builds on the results of CAL.

Figure 2-4

Transition of R&D of 5G Languages

R&D Results of Basic Software (OS)

In the initial stage, we developed a preliminary programming and operating system for PSI, called SIMPOS, using ESP. We continued to improve SIMPOS by adding functions corresponding to evaluation results. We also took into account the opinions of in-house users who had developed software for the PSI machine using SIMPOS.
Since no precedent parallel OS suited to our aims had been developed anywhere in the world, we started to study a parallel OS using our experience of SIMPOS development in the initial stage. A small experimental PIMOS was developed on the multi-PSI V1 system in the first half of the intermediate stage. Then, the first version of PIMOS was developed on the multi-PSI V2 system and was used by KL1 users. PIMOS continued to be improved by the addition of functions such as remote access, file access and debugging support.
The Program Development Support System (PDSS) was also developed by the end of the intermediate stage.
Figure 2-5

Transition of basic software R&D


Paragraph was developed as a parallel programming support system for improving concurrency and load distribution by indicating the results of parallel processing.
In regard to DB/KB management software, Kaiser was developed as experimental relational DB management software in the initial stage. Then, Kappa-I and Kappa-II were developed to provide the construction functions required to build a large-scale DB/KB that could be used for natural language processing, theorem proving and various expert systems. Kappa-I and Kappa-II, based on the nested relational model, are aimed at being the database engine of a deductive object-oriented DBMS.
Recently, a parallel version of Kappa, Kappa-P, is being developed. Kappa-P can manage distributed databases stored on distributed disks in PIM. Kappa-P and Quixote constitute the KBMS.


R&D Results of Problem Solving and
Programming Technologies

Throughout this project, from the viewpoint of the similarity between mathematical theorem proving and program specification, we have been investigating proving technologies. The CAP (Computer Aided Proof) system was experimentally developed in the initial stage. TRS (Term Rewriting System) and Metis were also developed to support specific mathematical reasoning, that is, inference associated with the equals sign.
An experimental program for program verification and composition, Argus, was developed by the end of the intermediate stage. These research themes are concentrated on R&D into the MGTP theorem prover in the final stage.
Meta-programming technologies, partial evaluation technologies and learning mechanisms were investigated as basic research on advanced problem solving and inference methods.


R&D Results on Natural Language
Processing Technologies

Natural language processing tools such as BUP (Bottom-Up Parser) and a miniature electronic dictionary were experimentally developed in the initial stage. These tools were extended, improved and arranged into LTB (Language Tool Box). LTB is a library of Japanese-processing software modules such as LAX (Lexical Analyzer), SAX (Syntactic Analyzer), a text generator and language databases.
An experimental discourse understanding system, DUALS, was implemented to investigate context processing and semantic analysis using these language processing tools. An experimental argument system, called Dulcinea, is being implemented in the final stage.


R&D Results on Knowledge Utilization
Technologies and Experimental
Application Systems

In the intermediate stage we implemented experimental knowledge utilization tools such as APRICOT, based on hypothetical reasoning technology, and Qupras, based on qualitative reasoning technology. At present, we are investigating such inference mechanisms for expert systems as assumption-based reasoning and case-based reasoning, and implementing these as knowledge utilization tools to be applied to the experimental application systems.
As an application system, we developed, in Prolog, an experimental CAD system for logic circuit design support and wiring support in the initial stage. We also developed several experimental expert systems written in ESP, such as a CAD system for layout and logic circuit design, a troubleshooting system, a plant control system and a go-playing system.
Small to medium parallel programs written in KL1 were also developed to test and evaluate parallel systems by the end of the intermediate stage. These were improved for application to PIM in the final stage. These programs include PAX (a parallel semantics analyzer), a Pentomino solver, a shortest-path solver and Tsume-go.
We developed several experimental parallel systems, implemented in KL1 in the final stage, such as an LSI-CAD system (for logic simulation, wire routing, block layout and logic circuit design), a genetic information processing system, a legal inference system based on case-based reasoning, and expert systems for troubleshooting, plant control and go-playing.
Some of these experimental systems were developed from earlier sequential systems in the intermediate stage, while others address new application fields that were started in the final stage.


Infrastructure of the FGCS

As explained in 2.2, the main language used for software implementation in the initial stage was Prolog. In the intermediate stage, ESP was mainly used, and in the final stage KL1 is the principal language.
Accordingly, we used a Prolog processing system on a conventional computer and terminals in the initial stage. SIMPOS on PSI (I and II) was used as the workbench for sequential programming in the intermediate stage. We are using PSI (II and III) as workbenches and remote terminals to the parallel machines (multi-PSIs and PIMs) for parallel programming in the final stage. We have also used conventional machines for simulation in designing PIM and for a communication (E-mail, etc.) system.
In regard to the computer network system, a LAN has been used as the in-house system, and the LAN has been connected to domestic and international networks via gateway systems.

Figure 2-6

Infrastructure for R&D

Promoting Organization of
the FGCS Project

ICOT was established in 1982 as a non-profit core organization for promoting this project, and it began R&D work on fifth generation computers in June 1982, under the auspices of MITI.
The establishment of ICOT was decided in consideration of the following needs for, and advantages of, a centralized core research center for promoting original R&D:
· R&D themes should be directed and selected under strong leadership, in consideration of hardware and software integration, based on a unified framework of fifth generation computers, throughout the ten-year project period.
· It was necessary to develop and nurture researchers who could work together, because of the lack of researchers in this research field.
· A core center was needed to exchange information and to collaborate with other organizations and outside researchers.
ICOT consists of a general affairs office and a
research center (Figure 3-1) .
The organization of the ICOT research center has been changed flexibly depending on the progress being made. In the initial stage, the research center consisted of a research planning department and three research laboratories. The number of laboratories was increased to five at the beginning of the intermediate stage. These laboratories became one research department and seven laboratories in 1990.

Figure 3-1

ICOT Organization
Figure 3-2

Transition of the ICOT research center organization and number of researchers (fiscal 1982-1992): from a research planning department and two, then three, research laboratories in the initial stage, to five laboratories in the intermediate stage, and to one research department and seven laboratories in the final stage.

The number of researchers at the ICOT research
center has increased yearly, from 40 in 1982 to 100
at the end of the intermediate stage.
All researchers at the ICOT research center have been transferred from national research centers, public organizations, computer vendors, and the like. To encourage young creative researchers and promote original R&D, the age of dispatched researchers is limited to 35 years old. Because researchers are normally dispatched to the ICOT research center for three to four years, ICOT has had continually to receive and nurture newly transferred researchers. We have had to make considerable effort to consistently lead R&D in the fifth generation computer field despite this researcher rotation. On the other hand, the rotation has meant that we were able to maintain a staff of researchers in their 30s, and could easily change the structure of the organization of the ICOT research center.
In total, 184 researchers have been transferred to the ICOT research center, with an average transfer period of three years and eight months (including around half of the dispatched researchers, who are presently at ICOT).
The number of organizations dispatching researchers to ICOT has also increased, from 11 to 19. This increase in participating organizations resulted from expanding the scheme, midway through the intermediate stage, so that the supporting companies (around 30 companies) could also dispatch researchers to ICOT.
The themes each laboratory was responsible for
changed occasionally depending on the progress
being made.
Figure 3-3 shows the present assignment of
research themes to each research laboratory.
Figure 3-3

ICOT research center organization (as of 1991): the Research Planning Department & Section handles research planning and management; the research laboratories are assigned basic software (Kappa & Quixote), constraint logic programming software, the prover and its application, natural language interface software, parallel application systems, and knowledge utilization software.

Every year we invited several visiting researchers from abroad for several weeks, at ICOT's expense, to discuss and exchange opinions on specific research themes with ICOT researchers. Up to the present, we have invited 74 researchers from 12 countries under this program.
We also received six long-term (about one year each) visiting researchers from foreign governmental organizations, based on memorandums with the National Science Foundation (NSF) in the United States, the Institut National de Recherche en Informatique et en Automatique (INRIA) in France, and the Department of Trade and Industry (DTI) in the United Kingdom (Figures 3-2 and 3-4).
Figure 3-4 shows the overall structure for promoting this project. The entire cost of the R&D activities of this project is supported by MITI, based on the entrustment contract between MITI and ICOT. Yearly, and at the beginning of each stage, we negotiate our R&D plan with MITI. MITI receives advice on this R&D plan, and evaluations of R&D results and ICOT research activities, from the FGCS project advisory committee.
ICOT executes the core part of the R&D and has contracts with eight computer companies for experimental production of hardware and development of software. Consequently, ICOT can handle all R&D activities, including the development work of the computer companies, towards the goals of this project.

Figure 3-4

Structure for promoting the FGCS project (research staff: invited researchers and researchers dispatched from public organizations and 14 companies; programming and development work: 8 computer companies)
ICOT has set up committees and working groups to discuss and exchange opinions on overall plans, results and specific research themes with researchers and research leaders from universities and other research institutes. Naturally, the composition and themes of the working groups are changed depending on research progress. Each working group has around 10 to 20 members, so the total number of people on the committees and in the working groups is about 150 to 250 each year.
Other programs for information exchange, collaborative research activities and diffusion of research results are described in the following chapter.

Distribution of R&D Results
and International Exchange

Because this project is a national project in which world-wide scientific contribution is very important, we have made every effort to include our R&D ideas, processes and project results when presenting ICOT activities. We also collaborate with outside researchers and other research organizations.
We believe these efforts have contributed to progress in parallel and knowledge processing computer technologies. I feel that R&D efforts in these fields have increased because of the stimulative effect of this project. We hope that R&D efforts will continue to increase through distribution of this project's R&D results. I believe that many outside researchers have also made significant contributions to this project through their discussions and information exchanges with ICOT researchers.

Figure 4-1

Distribution of R&D results and international exchange: accepting dispatched researchers (total: 8); the International Conferences on FGCS ('81, '84, '88, '92); symposiums and workshops co-sponsored with the U.S. (NSF), France (INRIA), Sweden, Italy and the U.K. (IED of DTI); and domestic conferences.
We could, for example, produce GHC, the core language of the parallel system, through discussions with researchers working on Parlog and Concurrent Prolog. We could also improve the performance of the PSI system by introducing the WAM instruction set proposed by Professor Warren.
We have several programs for distributing the R&D results of this project, for exchanging information, and for collaborating with researchers and organizations outside ICOT.
(1) One important way to present R&D activities and results is publication and distribution of ICOT journals and technical papers. We have published and distributed quarterly journals, which contain introductions to ICOT activities, and technical papers to more than 600 locations in 35 countries.
We have periodically published and sent more than 1800 technical papers to around 30 overseas locations. We have sent TRs (Technical Reports) and TMs (Technical Memos) on request to foreign addresses. These technical papers consist of more than 700 TRs and 1100 TMs published since the beginning of this project up to January 1992. A third of these technical papers are written in English.
(2) In the second program, ICOT researchers discuss research matters and exchange information with outside researchers.
ICOT researchers have made more than 450 presentations at international conferences and workshops, and around 1800 presentations at domestic conferences and workshops. They have visited many foreign research organizations to discuss specific research themes and to explain ICOT activities.

Every year, we have welcomed around 150 to 300 foreign researchers and specialists in other fields, to exchange information with them and explain ICOT activities to them. As already described in the previous chapter, we have so far invited 74 active researchers from specific technical fields related to FGCS technologies. We have also received six long-term visiting researchers dispatched from foreign governmental organizations based on agreements. These visiting researchers conducted research at ICOT and published the results of that research.
(3) We sponsored the following symposiums and workshops to disseminate and exchange information on the R&D results and on ICOT activities.
We hosted the International Conference on FGCS'84 in November 1984. Around 1,100 persons participated, and the R&D results of the initial stage were presented. This followed the International Conference on FGCS'81, at which the FGCS project plan was presented. We also hosted the International Conference on FGCS'88 in November 1988. 1,600 persons participated in this symposium, and we presented the R&D results of the intermediate stage.
We have held:
· 7 Japan-Sweden (or Japan-Sweden-Italy) workshops since 1983 (co-sponsored with institutes or universities in Sweden and Italy),
· 4 Japan-France AI symposiums since 1986 (co-sponsored with INRIA of France),
· 4 Japan-U.S. AI symposiums since 1987 (co-sponsored with NSF of the U.S.A.), and
· 2 Japan-U.K. workshops since 1989 (co-sponsored with DTI of the U.K.).
Participating researchers have come to know each other well through presentations and discussions during these symposiums and workshops.
We have also hosted domestic symposiums on this project and logic programming conferences every year.
(4) Because the entire R&D cost of this project has been provided by the government, intellectual property rights (IPR), such as patents, produced in this project belong to the Japanese government. These IPR are managed by AIST (Agency of Industrial Science and Technology). Any company wishing to produce commercial products that use any of these IPR must get permission to use them from AIST. For example, PSI and SIMPOS have already been commercialized by companies licensed by AIST. The framework for managing IPR is intended to ensure that IPR acquired through this project are utilized impartially. That is, AIST makes impartial permission possible for domestic and foreign companies, and for participating companies and others alike.
(5) Software tools developed in this project that are not yet managed as IPR by AIST can be used by other organizations for non-commercial aims. These software tools are distributed by ICOT according to the research tool permission procedure. We now have more than 20 software tools, such as PIMOS, PDSS, Kappa-II, the A'um system, LTB, the CAP system, the cu-Prolog system and the TRS generator.
In other cases, we make the source code of some programs public by printing it in technical papers.
(6) On specific research themes in the logic programming field, we have collaborated with organizations such as Argonne National Laboratory (ANL), the National Institutes of Health (NIH), Lawrence Berkeley Laboratory (LBL), the Swedish Institute of Computer Science (SICS) and the Australian National University (ANU).


Forecast of Some Aspects of 5G Machines

LSI technologies have advanced in accordance with past trends. Roughly speaking, the memory capacity and the number of gates on a single chip quadruple every three years. The number of boards for the CPU of an inference machine was more than ten for PSI-I, but only three for PSI-II and a single board for PIM.
The number of boards for 80M bytes of memory was 16 for PSI-I, but only four for PSI-II and a single board for PIM.
Figure 5-1 shows the anticipated trend in the number of boards per PE (processor element: CPU and memory) and the cost per PE, based on the actual values of the inference machines developed by this project.
The trend shows that, by the year 2000, around ten PEs will fit on one board, around 100 PEs will fit in one desk-side cabinet, and 500 to 1,000 PEs will fit in a large cabinet. This trend also shows that the cost of one PE will halve every three years.
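These rules of thumb are simple geometric trends; a quick sketch (illustrative arithmetic only, not data from the project) shows how they compound over a nine-year span:

```python
def scale(value, years, period, factor):
    """Geometric trend: `value` changes by `factor` every `period` years."""
    return value * factor ** (years / period)

# Chip capacity quadruples every three years, so over nine years a
# single chip gains a factor of 4 ** 3 = 64.
print(scale(1, 9, 3, 4))       # → 64.0

# Cost per PE halves every three years: after nine years it is 1/8.
print(scale(1.0, 9, 3, 0.5))   # → 0.125
```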
Figure 5-2 shows the performance trends of 5G
machines based on the actual performance of
inference machines developed by this project.
The sequential inference processing performance of one PE has quadrupled every three years. The improvement in parallel inference processing performance per PE was not as large as for sequential processing, because PIM performance is estimated at around two and one half times that of multi-PSI.

Figure 5-1

Size and cost trends of 5G machines (boards per PE, relative cost per PE, and DRAM memory capacity)

Furthermore, Figure 5-2 shows the
performance of one board for both sequential and
parallel processing, and the performance of a
conventional micro-processor with CISC and RISC
technology. In this figure, future improvements in
the performance of one PE are estimated to be
rather lower than a linear extension of past values
would indicate because of the uncertainty of
whether future technology will be able to elicit such
performance improvements. Performance for one
board is estimated at about 20 MLIPS, which is 100
times faster than PIM. Thus, a parallel machine
with a large cabinet size could have 1 GLIPS. These
parallel systems will have the processing speeds
needed for various knowledge processing
applications in the near future.










Figure 5-2


Performance trends of 5G machines

Several parallel applications in this project, such as CAD, theorem provers, genetic information processing, natural language processing and legal reasoning, are described in Chapter 2. These applications are spread across various fields and aim at cultivating new parallel processing application fields.
We believe that parallel machine applications will be extended to various areas in industry and society, because parallel technology will become common in computers in the near future. Parallel application fields will expand gradually as functions are extended through the use of advanced parallel processing and knowledge processing technologies.

Final Remarks

I believe that we have shown the basic framework
of the fifth generation computer based on logic
programming to be more than mere hypothesis. By
the end of the initial stage, we had shown the fifth
generation computer to be viable and efficient
through the development of PSI, SIMPOS and
various experimental software systems written in
ESP and Prolog.
I believe that, by the end of the intermediate stage, we had shown the possibility of realizing the fifth generation computer through the development of a parallel logic programming software environment consisting of multi-PSI and PIMOS.
And I hope you can see the possibility of an era of parallel processing arriving in the near future by looking at the prototype system and the R&D results of the FGCS project.

This project has been carried out through the efforts of the researchers at ICOT, and with the support of MITI and many others outside of ICOT. We wish to extend our appreciation to them all for the direct and indirect assistance and co-operation they have given.

edited by ICOT. © ICOT, 1992


Summary of Basic Research Activities of the FGCS Project
Koichi Furukawa
Institute for New Generation Computer Technology
4-28, Mita 1-chome, Minato-ku, Tokyo 108, Japan




The Fifth Generation Computer Project was launched in 1982, with the aim of developing parallel computers dedicated to knowledge information processing. It is commonly believed to be very difficult to parallelize knowledge processing based on symbolic computation. We conjectured that logic programming technology would solve this difficulty.
We conducted our project while stressing two seemingly different aspects of logic programming: one was the establishment of a new information technology, and the other was the pursuit of basic AI and software engineering research.
In the former, we developed a concurrent logic programming language, GHC, and its extension for practical parallel programming, KL1. The invention of GHC/KL1 enabled us to conduct, in parallel, research on the development of software technology and of parallel hardware dedicated to the new language.

In the Fifth Generation Computer Project, two main research targets were pursued: knowledge information processing and parallel processing. Logic programming was adopted as a key technology for achieving both targets simultaneously. At the beginning of the project, we adopted Prolog as our vehicle for promoting the entire research of the project. Since there had been no systematic research attempts based on Prolog before our project, there were many things to do, including the development of a suitable workstation for the research, experimental studies on developing knowledge-based systems in Prolog, and investigation into possible parallel architectures for the language. We rapidly succeeded in promoting research in many directions.
From this research, three achievements are worth noting. The first is the development of our own workstation dedicated to ESP, Extended Self-contained Prolog. We developed an operating system for the workstation completely in ESP [Chikayama 88]. The second is the application of partial evaluation to meta-programming. This enabled us to develop a compiler for a new programming language by simply describing an interpreter for the language and then partially evaluating it. We applied this technique to derive a bottom-up parser for context-free grammars, given a bottom-up interpreter for them. In other words, partial evaluation made meta-programming useful in real applications. The third achievement was the development of constraint logic programming languages. We developed two constraint logic programming languages: CIL and CAL. CIL is for natural language processing and is based on the incomplete data structure for representing "complex indeterminates" in situation theory. It has the capability to represent structured data like Minsky's frames, and any relationship between slot values can be expressed using constraints. CIL was used to develop a natural language understanding system called DUALS. The other constraint logic programming language, CAL, is for non-linear equations. Its inference is done using the Buchberger algorithm for computing Gröbner bases, which is a variant of the Knuth-Bendix completion algorithm for term rewriting systems.
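As a small illustration of the flavor of this computation (a standard textbook example, not taken from CAL itself): for the constraint set {x^2 + y^2 = 1, x = y} under the lexicographic order x > y, Buchberger's algorithm transforms the input polynomials into a Gröbner basis:

```latex
\{\, x^2 + y^2 - 1,\; x - y \,\}
\;\longmapsto\;
\{\, x - y,\; 2y^2 - 1 \,\}
```

From the basis, a solver can read off y = ±1/√2 and x = y directly; the basis plays the same role that a confluent rewriting system plays after Knuth-Bendix completion.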

We also developed several other constraint logic programming languages which are very promising as high-level languages for AI applications. Though most of them are based on sequential Prolog technology, we are now integrating constraint logic programming and concurrent logic programming, developing an integrated language, GDCC.
In the latter, we investigated many fundamental AI and software engineering problems, including hypothetical reasoning, analogical inference, knowledge representation, theorem proving, partial evaluation and program transformation.
As a result, we succeeded in showing that logic programming provides a very firm foundation for many aspects of information processing: from advanced software technology for AI and software engineering, through system programming and parallel programming, to parallel architecture.
The research activities are continuing, and the latest as well as earlier results strongly indicate the truth of our conjecture and the appropriateness of our approach.


We encountered one serious problem inherent in Prolog: the lack of concurrency in its fundamental framework. We recognized the importance of concurrency in developing parallel processing technologies, and we began searching for alternative logic programming languages with a notion of concurrency.
We noticed the work by Keith Clark and Steve Gregory on Relational Language [Clark Gregory 81] and by Ehud Shapiro on Concurrent Prolog [Shapiro 83]. These languages share the feature of committed-choice nondeterminism to introduce concurrency. We devoted our efforts to investigating these languages carefully, and Ueda finally designed a new committed-choice logic programming language called GHC [Ueda 86a] [UedaChikayama 90], which has a simpler syntax than the above two languages and still has similar expressiveness. We recognized the importance of GHC and adopted it as the core of our kernel language, KL1, in this project. The introduction of KL1 made it possible to divide the entire research project into two parts: the development of parallel hardware dedicated to KL1 and the development of software technology for the language. In this respect, the invention of GHC is the most important achievement for the success of the Fifth Generation Computer Systems project.
Besides this language-oriented research, we performed much fundamental research in the fields of artificial intelligence and software engineering based on logic and logic programming. This includes research on non-monotonic reasoning, hypothetical reasoning, abduction, induction, knowledge representation, theorem proving, partial evaluation and program transformation. We expected that this research would become an important application field for our parallel machines because of the affinity of these problems to logic programming and logic-based parallel processing. This is now happening.
In this article, we first describe our research efforts in concurrent logic programming and in constraint logic programming. Then, we discuss our recent research activities in the fields of software engineering and artificial intelligence. Finally, we conclude the paper by stating the direction of future research.


Concurrent Logic Programming

In this section, we take up two important topics in concurrent logic programming research in the project. One is the design principles of our concurrent logic programming language Flat GHC (FGHC) [Ueda 86a] [UedaChikayama 90], on which the aspects of KL1 as a concurrent language are based. The other is search paradigms in FGHC. As discussed later, one drawback of FGHC, viewed as a logic programming language, is the lack of the search capability inherent in Prolog. Since this capability is related to the notion of completeness in logic programming, recovery of the ability is essential.


Design Principles of FGHC

The most important feature of FGHC is that there is only one syntactic extension to Prolog, called the commitment operator and represented by a vertical bar "|". A commitment operator divides an entire clause into two parts, called the guard part (the left-hand side of the bar) and the body part (the right-hand side). The guard of a clause has two important roles: one is to specify a condition for the clause to be selected for the succeeding computation, and the other is to specify the synchronization condition. The general rule of synchronization in FGHC is expressed as dataflow synchronization. This means that computation is suspended until sufficient data for the computation arrives. In the case of FGHC, guard computation is suspended until the caller is sufficiently instantiated to judge the guard condition. For example, consider how a ticket vending machine works. After receiving money, it has to wait until the user pushes a button for the destination. This waiting is described by a clause such as "if the user pushed the 160-yen button, then issue a 160-yen ticket".
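The committed-choice reading of this example can be mimicked in a conventional language: try each clause's guard, commit to the first clause whose guard succeeds, and suspend when no guard can yet be judged. A toy Python sketch (the names and the two-clause machine are illustrative, not FGHC itself; real FGHC also distinguishes suspension from failure, which this sketch glosses over):

```python
def commit(clauses, *args):
    """Committed choice: run the body of the first clause whose guard
    holds; once committed, the remaining clauses are discarded."""
    for guard, body in clauses:
        if guard(*args):
            return body(*args)
    return "suspend"  # no guard is satisfied yet: wait for more input

# A two-clause ticket vending machine, one clause per destination button.
vending_machine = [
    (lambda money, button: button == "160yen",
     lambda money, button: "issue 160-yen ticket"),
    (lambda money, button: button == "200yen",
     lambda money, button: "issue 200-yen ticket"),
]

print(commit(vending_machine, 200, "160yen"))  # → issue 160-yen ticket
print(commit(vending_machine, 200, None))      # → suspend (no button yet)
```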
The important thing is that dataflow synchronization
can be realized by a simple rule governing head unification which occurs when a goal is executed and a corresponding FGHC clause is called: the information flow of
head unification must be one way, from the caller to the
callee. For example, consider a predicate representing
service at a front desk. Two clauses define the predicate: one is for during the day, when more customers are
expected, and another is for after-hours, when no more
customers are expected. The clauses have such definitions as:
serve([First|Rest]) :- true |
    do_service(First), serve(Rest).
serve([]) :- true | true.
Besides the serve process, there should be another process queue which makes a waiting queue for service. The
top level goal looks like:

?- queue(Xs), serve(Xs).
where "?-" is a prompt to the user at the terminal. Note
that the execution of this goal generates two processes,
queue and serve, which share a variable Xs. This shared
variable acts as a channel for data transfer from one process to the other. In the above example, we assume that
the queue process instantiates Xs and the serve process reads the value. In other words, queue acts as a
generator of the value of Xs and serve acts as the consumer. The process queue instantiates Xs either to a


list of servees or to an empty list []. Before the instantiation, the value of Xs remains undefined.
Suppose Xs is undefined. Then, the head unification
invoked by the goal serve(Xs) suspends because the
equations Xs = [First I Rest] and Xs = [] cannot be
solved without instantiating Xs. But such instantiation
violates the rule of one-way unification. Note that the
term [First I Rest] in the head of serve means that
the clause expects a non-empty list to be given as the
value of the argument. Similarly, the term [] expects
an empty list to be given. Now, it is clear that the unidirectionality of information flow realizes dataflow synchronization.
This principle is very important in two aspects: one is
that the language provides a natural tool for expressing
concurrency, and the other is that the synchronization
mechanism is simple enough to realize very efficient parallel implementation.
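FGHC itself cannot be run here, but the behaviour of the queue/serve pair above can be imitated in Python (a sketch only, with hypothetical process bodies): the shared variable Xs becomes a thread-safe channel, and blocking on an empty channel plays the role of suspension on an uninstantiated variable.

```python
import queue
import threading

def producer(ch):
    # the "queue" process: instantiates the shared channel with servees
    for customer in ["ann", "bob", "carol"]:
        ch.put(customer)
    ch.put(None)  # plays the role of the empty list []: no more customers

def serve(ch, served):
    # the "serve" process: blocks until the channel carries data, mimicking
    # suspension of head unification on an unbound variable
    while True:
        customer = ch.get()      # dataflow synchronization point
        if customer is None:     # corresponds to the serve([]) clause
            return
        served.append("served:" + customer)  # do_service(First)

ch = queue.Queue()
served = []
t = threading.Thread(target=producer, args=(ch,))
t.start()
serve(ch, served)
t.join()
print(served)
```

The one-way flow is visible in the sketch: only the producer writes to the channel, and the consumer can do nothing but wait for it.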


Search Paradigms in FGHC

There is one serious drawback to FGHC because of the
very nature of committed choice; that is, it no longer
has an automatic search capability, which is one of the
most important features of Prolog. Prolog achieves its
search capability by means of automatic backtracking.
However, since committed choice uniquely determines a
clause for succeeding computation of a goal, there is no
way of searching for alternative branches other than the
branch selected. The search capability is related to the
notion of completeness of the logic programming computation procedure and the lack of the capability is very
serious in that respect.
One could imagine a seemingly trivial way of realizing search capability by means of or-parallel search:
that is, to copy the current computational environment
which provides the binding information of all variables
that have appeared so far and to continue computations
for each alternative case in parallel. But this does not
work because copying non-ground terms is impossible in
FGHC. The reason why it is impossible is that FGHC
cannot guarantee when actual binding will occur and
there may be a moment when a variable observed at
some processor remains unchanged even after some goal
has instantiated it at a different processor.
One might ask why we did not adopt a Prolog-like
language as our kernel language for parallel computation. There are two main reasons. One is that, as stated
above, Prolog does not have enough expressiveness for
concurrency, which we see as a key feature not only for
expressing concurrent algorithms but also for providing
a framework for the control of physical parallelism. The
other is that the execution mechanism of Prolog-like languages with a search capability seemed too complicated
to develop efficient parallel implementations.

We tried to recover the search capability by devising
programming techniques while keeping the programming
language as simple as possible. We succeeded in inventing several programming methods for computing all solutions of a problem which effectively achieve the completeness of logic programming. Three of them are listed
as follows:
(1) Continuation-based method [Ueda 86b]
(2) Layered stream method [OkumuraMatsumoto 87]
(3) Query compilation method [Furukawa 92]
In this paper, we pick up (1) and (3), which are
complementary to each other. The continuation-based
method is suitable for the efficient processing of rather
algorithmic problems. An example is to compute all ways
of partitioning a given list into two sublists by using
append. This method mimics the computation of OR-parallel Prolog using AND-parallelism of FGHC. AND-serial computation in Prolog is translated to continuation processing which remembers continuation points
in a stack. The intermediate results of computation are
passed from the preceding goals to the next goals through
the continuation stack kept as one of the arguments of
the FGHC goals. This method requires input/output
mode analysis before translating a Prolog program into
FGHC. This requirement makes the method impractical for database applications because there are too many
possible input-output modes for each predicate.
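The list-partitioning example can be sketched in Python (a sketch, not the actual FGHC translation): Prolog's backtracking over append(X,Y,L) is replaced by an explicit stack of pending alternatives, in the spirit of the continuation-based method.

```python
def partition_all(lst):
    """All ways to split lst into two sublists, i.e. all solutions of
    append(X, Y, lst). Instead of Prolog's backtracking, the alternatives
    are kept on an explicit stack (the "continuation stack")."""
    solutions = []
    stack = [([], list(lst))]            # one pending (front, rest) state
    while stack:
        front, rest = stack.pop()
        solutions.append((front, rest))  # report this split as a solution
        if rest:
            # remember the alternative: shift one element and continue
            stack.append((front + rest[:1], rest[1:]))
    return solutions

print(partition_all([1, 2]))
```

A list of length n yields n+1 splits, exactly the solutions Prolog would enumerate by backtracking.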
The query compilation method solves this problem.
This method was first introduced by Fuchi [Fuchi 90]
when he developed a bottom-up theorem prover in KL1.
In his coding technique, the multiple binding problem is
avoided by reversing the role of the caller and the callee in
straightforward implementation of database query evaluation. Instead of trying to find a record (represented
by a clause) which matches a given query pattern represented by a goal, his method represents each query component with a compiled clause, represents a databasae
'with a data structure passed around by goals, and tries
to find a query component clause which matches a goal
representing a record and recurses the process for all potentially applicable records in the database1. Since every record is a ground term, there is no variable in the
caller. Variable instantiation occurs when query component clauses are searched and an appropriate clause
representing a query component is found to match a
currently processed record. Note that, as a result of reversing the representation of queries and databases from
straightforward representation, the information flow is
now from the caller (database) to the callee (a query
component). This inversion of information flow avoids
deadlock in query processing. Another important trick
is that each time a query clause is called, a fresh variable is created for each variable in the query component.
This mechanism is used for making a new environment
1 We need an auxiliary query clause which matches every record
after failing to match the record to all the real query clauses.


for each OR-parallel computation branch. These tricks
make it possible to use KL1 variables to represent object
level variables in database queries and, therefore, we can
avoid different compilation of the entire database and
queries for each input/output mode of queries.
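The role reversal can be illustrated with a toy Python sketch (all names and the database are hypothetical): ground records drive the computation, compiled query components are patterns that receive them, and each candidate match gets a fresh binding environment, which stands in for the fresh-variable trick used per OR-parallel branch.

```python
def match(pattern, record, env):
    """One-way matching: variables ('?'-prefixed strings) occur only in the
    pattern; records are ground, so information flows record -> pattern."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in env:
            return env if env[pattern] == record else None
        new_env = dict(env)          # fresh environment per candidate match,
        new_env[pattern] = record    # one per OR-parallel branch
        return new_env
    return env if pattern == record else None

def run_query(component, database):
    # reversed roles: each ground record is the caller, the compiled query
    # component is the callee it is matched against
    results = []
    for record in database:
        env = {}
        for pat, val in zip(component, record):
            env = match(pat, val, env) if env is not None else None
        if env is not None:
            results.append(env)
    return results

db = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
print(run_query(("parent", "?x", "bob"), db))
```

Because the records are ground, no binding ever flows into the database side, which is the property that avoids the multiple binding problem.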
The new coding method stated above is very general and there are many applications which can be programmed in this way. The only limitation of this approach is that the database must be more instantiated
than queries. In bottom-up theorem proving, this requirement is referred to as the range-restrictedness of
each axiom. Range-restrictedness means that, after successfully finding ground model elements satisfying the
antecedent of an axiom, the new model element appearing as the consequent of the axiom must be ground.
This restriction seems very strong. Indeed, there are
problems in the theorem proving area which do not
satisfy the condition. We need a top-down theorem
prover for such problems. However, many real life problems satisfy the range-restrictedness because they almost always have finite concrete models. Such problems include VLSI-CAD, circuit diagnosis, planning, and
scheduling. We are developing a parallel bottom-up
theorem prover called MGTP (Model Generation Theorem Prover) [FujitaHasegawa 91] based on SATCHMO
developed by Manthey and Bry [MantheyBry 88]. We
are investigating new applications to utilize the theorem
prover. We will give an example of computing abduction
using MGTP in Section 5.
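The bottom-up, range-restricted style of computation used by MGTP can be suggested with a propositional Python sketch (the facts and rules are hypothetical; the real prover handles first-order range-restricted clauses in KL1): ground consequences are added to the model until a fixpoint is reached.

```python
def generate_model(facts, rules):
    """Forward chaining to a fixpoint.
    rules: list of (antecedent_set, consequent); all atoms are ground, so
    every derived model element is ground (range-restrictedness)."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for ante, cons in rules:
            if ante <= model and cons not in model:
                model.add(cons)
                changed = True
    return model

rules = [({"rain"}, "wet"), ({"wet"}, "slippery")]
print(sorted(generate_model({"rain"}, rules)))
```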



Constraint Logic Programming


We began our constraint logic programming research
almost at the beginning of our project, in relation to
the research on natural language processing. Mukai
[MukaiYasukawa 85] developed a language called CIL
(Complex Indeterminates Language) for the purpose of
developing a computational model of situation semantics. A complex indeterminate is a data structure allowing partially specified terms with indefinite arity. During
the design phase of the language, he encountered the idea
of freeze in Prolog II by Colmerauer [Colmerauer 86]. He
adopted freeze as a proper control structure for CIL.
From the viewpoint of constraint satisfaction, CIL only
has a passive way of solving constraints, which means
that there is no active computation for solving constraints such as constraint propagation or solving simultaneous equations. Later, we began our research on
constraint logic programming involving active constraint
solving. The language we developed is called CAL. It
deals with non-linear equations as expressions to specify constraints. Three events triggered the research: one
was our preceding effort on developing a term rewriting system called METIS for a theorem prover of linear
algebra [OhsugaSakai 91]. Another event was our encounter with Buchberger's algorithm for computing the
Gröbner basis for solving non-linear equations. Since the
algorithm is a variant of the Knuth-Bendix completion
algorithm for a term rewriting system, we were able to
develop the system easily from our experience of developing METIS. The third event was the development of
the CLP(X) theory by Jaffar and Lassez which provides
a framework for constraint logic programming languages
[JaffarLassez 86].
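The freeze control structure adopted in CIL can be suggested by a small Python sketch (a hypothetical class, not CIL's actual mechanism): a goal attached to an unbound variable is delayed, and automatically resumed the moment the variable is bound.

```python
class Var:
    """A logic-variable stand-in supporting freeze-style delayed goals."""
    def __init__(self):
        self.value = None
        self.delayed = []          # goals frozen on this variable

    def freeze(self, goal):
        # run immediately if already bound, otherwise delay the goal
        if self.value is None:
            self.delayed.append(goal)
        else:
            goal(self.value)

    def bind(self, value):
        self.value = value
        for goal in self.delayed:  # wake every frozen goal
            goal(value)
        self.delayed = []

log = []
x = Var()
x.freeze(lambda v: log.append(v * 2))  # suspended: x is still unbound
x.bind(21)                             # binding wakes the frozen goal
print(log)
```

This is exactly the passive style described above: the frozen goal only reacts to bindings; it never propagates or solves constraints on its own.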
There is further remarkable research on constraint
logic programming in the field of general symbol processing [Tsuda 92]. Tsuda developed a language called
cu-Prolog. In cu-Prolog, constraints are solved by means
of program transformation techniques called unfold/fold
transformation (these will be discussed in more detail
later in this paper, as an optimization technique in relation to software engineering). The unfold/fold program transformation is used here as a basic operation
for solving combinatorial constraints among terms. Each
time the transformation is performed, the program is
modified to a syntactically less constrained program.
Note that this basic operation is similar to term rewriting, a basic operation in CAL. Both of these operations
try to rewrite programs to get certain canonical forms.
The idea of cu-Prolog was introduced by Hasida during
his work on dependency propagation and dynamical programming [Hasida 92]. They succeeded in showing that
context-free parsing, which is as efficient as chart parsing,
emerges as a result of dependency propagation during the
execution of a program given as a set of grammar rules
in cu-Prolog. Actually, there is no need to construct a
parser. cu-Prolog itself works as an efficient parser.
Hasida [Hasida 92] has been working on a fundamental
issue of artificial intelligence and cognitive science from
the aspect of a computational model. In his computation model of dynamical programming, computation is
controlled by various kinds of potential energies associated with each atomic constraint, clause, and unification.
Potential energy reflects the degree of constraint violation and, therefore, the reduction of energy contributes
to constraint resolution.
Constraint logic programming greatly enriched the
expressiveness of Prolog and is now providing a very
promising programming environment for applications by
extending the domain of Prolog to cover most AI problems.
One big issue in our project is how to integrate constraint logic programming with concurrent logic programming to obtain both expressiveness and efficiency.
This integration, however, is not easy to achieve because (1) constraint logic programming focuses on a control scheme for efficient execution specific to each constraint solving scheme, and (2) constraint logic programming essentially includes a search paradigm which re-


quires some suitable support mechanism such as automatic backtracking.
It turns out that the first problem can be processed efficiently, to some extent, in the concurrent logic programming scheme utilizing the data flow control method. We
developed an experimental concurrent constraint logic
programming language called GDCC (Guarded Definite Clauses with Constraints), implemented in KL1
[HawleyAiba 91]. GDCC is based on an ask-tell mechanism proposed by Maher [Maher 87], and extended by
Saraswat [Saraswat 89]. It extends the guard computation mechanism from a simple one-way unification solving problem to a more general provability check of conditions in the guard part under a given set of constraints
using the ask operation. For the body computation, constraint literals appearing in the body part are added to
the constraint set using the tell operation. If the guard
conditions are not known to be provable because of a
lack of information in the constraint set, then computation is suspended. If the conditions are disproved under the constraint set, then the guard computation fails.
Note that the provability check controls the order of constraint solving execution. New constraints appearing in
the body of a clause are not included in the constraint
set until the guard conditions are known to be provable.
The second problem of realizing a search paradigm in a
concurrent constraint logic programming framework has
not been solved so far. One obvious way is to develop an
OR-parallel search mechanism which uses a full unification engine implemented using ground term representation of logical variables [Koshimura et al. 91]. However,
the performance of the unifier is 10 to 100 times slower
than the built-in unifier and, as such, it is not very practical. Another possible solution is to adopt the new coding
technique introduced in the previous section. We expect
to be able to efficiently introduce the search paradigm by
applying the coding method. The paradigm is crucial if
parallel inference machines are to be made useful for the
numerous applications which require high levels of both
expressive and computational power.


Advanced Software Engineering

Software engineering aims at supporting software development in various dimensions: increasing software productivity, developing high-quality software, pursuing
easily maintainable software, and so on. Logic programming has great potential for many dimensions in
software engineering. One obvious advantage of logic
programming is the affinity for correctness proof when
given specifications. Automatic debugging is a related
issue. Also, there is a high possibility of achieving automatic program synthesis from specifications by applying
proof techniques as well as from examples by applying

induction. Program optimization is another promising
direction where logic programming works very well.
In this section, two topics are picked up: (1) meta
programming and its optimization by partial evaluation,
and (2) unfold/fold program transformation.


Meta Programming and Partial Evaluation

We investigated meta programming technology as a vehicle for developing knowledge-based systems in a logic
programming framework inspired by Bowen and Kowalski's work [BowenKowalski 83]. It was a rather direct
way to realize a knowledge assimilation system using the
meta programming technique by regarding integrity constraints as meta rules which must be satisfied by a knowledge base. One big problem of the approach was its inefficiency due to the meta interpretation overhead of each
object level program. We attacked this problem, and
Takeuchi and Furukawa [TakeuchiFurukawa 86] made a
breakthrough by applying the optimization technique of partial evaluation to meta programs.
We first derived an efficient compiled program for an expert system with uncertainty computation given a meta
interpreter of rules with certainty factors. In this program, we succeeded in getting a threefold speedup over
the original program. Then, we tried a more non-trivial
problem of developing a meta interpreter of a bottom-up
parser and deriving an efficient compiled program given
the interpreter and a set of grammar rules. We succeeded in obtaining an object program known as BUP,
developed by Matsumoto [Matsumoto et al. 83]. The
importance of the BUP meta-interpreter is that it is not
a vanilla meta-interpreter, an obvious extension of the
Prolog interpreter in Prolog, because the control structure is totally different from Prolog's top-down control.
After our first success of applying partial evaluation
techniques in meta programming, we began the development of a self-applicable partial evaluator. Fujita and
Furukawa [FujitaFurukawa 88] succeeded in developing a
simple self-applicable partial evaluator. We showed that
the partial evaluator itself was a meta interpreter very
similar to the following Prolog interpreter in Prolog:
solve(true).
solve((A,B)) :-
    solve(A), solve(B).
solve(A) :-
    clause(A,B), solve(B).

where it is assumed that for each program clause,
H :- B, a unit clause, clause(H,B), is asserted.2 A
goal, solve(G), simulates an immediate execution of the
subject goal, G, and obtains the same result.
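A propositional Python rendering of this vanilla interpreter may help (a sketch: conjunctions become lists of subgoals, the clause/2 table is an ordinary dictionary, and the program shown is hypothetical):

```python
# clause/2 as a table: head -> body (a list of subgoals); a fact has an
# empty body
CLAUSES = {
    "a": ["b", "c"],   # a :- b, c.
    "b": [],           # b.
    "c": ["b"],        # c :- b.
}

def solve(goal):
    # the empty body plays the role of solve(true); the all() over the
    # body plays the role of the conjunction clause solve((A,B))
    if goal not in CLAUSES:
        return False            # no clause for the goal: the call fails
    return all(solve(sub) for sub in CLAUSES[goal])

print(solve("a"))
```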
This simple definition of a Prolog self-interpreter,
solve, suggests the following partial solver, psolve.
2 clause(_,_) is available as a built-in procedure in the
DECsystem-10 Prolog system.


psolve(true,true).
psolve((A,B),(RA,RB)) :-
    psolve(A,RA), psolve(B,RB).
psolve(A,R) :-
    clause(A,B), psolve(B,R).
psolve(A,A) :- '$suspend'(A).
The partial solver, psolve(G,R), partially solves a
given goal, G, returning the result, R. The result, R,
is called the residual goal(s) for the given goal, G. The
residual goal may be true when the given goal is totally
solved; otherwise it will be a conjunction of subgoals,
each of which is a goal, A, suspended to be solved at
'$suspend'(A), for some reason. An auxiliary predicate, '$suspend'(P), is defined for each goal pattern,
P, by the user.
Note that psolve is related to solve as:
solve(G) :- psolve(G,R), solve(R).
That is, a goal, G, succeeds if it is partially solved with
the residual goal, R, and R in turn succeeds (is totally
solved). The total solution for G is thus split into two
tasks: partial solution for G and total solution for the
residual goal, R.
We developed a self-applicable partial evaluator by
modifying the above psolve program. The main modification is the treatment of built-in predicates in Prolog
and those predicates used to define the partial evaluator
itself to make it self-applicable. We succeeded in applying the partial evaluator to itself and generated a compiler by partially evaluating the psolve program with
respect to a given interpreter, using the identical psolve.
We further succeeded in obtaining a compiler generator,
which generates different compilers given different interpreters, by partially evaluating the psolve program with
respect to itself, using itself.
Theoretically, it was known that self-application of
a partial evaluator generates compilers and a compiler
generator [Futamura 71]. There were many attempts
to realize self-applicable partial evaluators in the framework of functional languages for a long time, but no successes were reported until very recently [Jones et al. 85],
[Jones et al. 88], [GomardJones 89]. On the other hand,
we succeeded in developing a self-applicable partial evaluator in a Prolog framework in a very short time and
also in a very elegant way. This demonstrates some merits of
logic programming languages over functional programming languages, especially in their binding scheme based
on unification.
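The psolve idea can be suggested propositionally in Python (a sketch; the rule base and the DYNAMIC set are hypothetical): goals with known clauses are solved away at specialization time, while goals marked dynamic are suspended and returned as residual goals.

```python
RULES = {
    "a": ["b", "c"],   # a :- b, c.
    "b": [],           # b.   (a fact: solved away completely)
}
DYNAMIC = {"c"}        # unknown until run time: '$suspend'/1 territory

def psolve(goal):
    """Partially solve goal, returning the list of residual goals."""
    if goal in DYNAMIC:
        return [goal]              # '$suspend'(goal): keep it for run time
    residual = []
    for sub in RULES[goal]:        # clause(goal, body)
        residual += psolve(sub)
    return residual

print(psolve("a"))   # static subgoals vanish; only dynamic goals remain
```

Composing this with a total solver for the residual goals reproduces the decomposition solve(G) :- psolve(G,R), solve(R) shown above.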


Unfold/Fold Program Transformation

Program transformation provides a powerful methodology for the development of software, especially the
derivation of efficient programs either from their formal

specification or from declarative but possibly inefficient
programs. Programs written in declarative form are often inefficient under Prolog's standard left to right control rule. Typical examples are found in programs based
on a generate and test paradigm. Seki and Furukawa
[SekiFurukawa 87] developed a program transformation
method based on unfolding and folding for such programs. We will explain the idea in some detail. Let
gen_test(L) be a predicate defined as follows:
gen_test(L) :- gen(L), test(L).
where L is a variable representing a list, gen(L) is a generator of the list L, and test(L) is a tester for L. Assume
both gen and test are incremental and are defined as
gen([]).
gen([X|L]) :- gen_element(X), gen(L).

test([]).
test([X|L]) :- test_element(X), test(L).
Then, it is possible to fuse two processes gen and test
by applying unfold/fold transformation as follows:
gen_test([X|L]) :- gen([X|L]), test([X|L]).

    (unfold at gen and test)

gen_test([X|L]) :- gen_element(X), gen(L),
    test_element(X), test(L).

    (fold by gen_test)

gen_test([X|L]) :- gen_element(X),
    test_element(X), gen_test(L).

If the tester is not incremental, the above unfold/fold
transformation does not work. One example is to test
that all elements in the list are different from each other.
In this case, the test predicate is defined as follows:
test([]).
test([X|L]) :- non_member(X,L), test(L).

non_member(_,[]).
non_member(X,[Y|L]) :-
    dif(X,Y), non_member(X,L).
where dif(X,Y) is a predicate judging that X is not equal
to Y. Note that this test predicate is not incremental because a test for the first element X of the list requires the
information of the entire list. The solution we gave to
this problem was to replace the test predicate with an
equivalent predicate with incrementality. Such an equivalent program test' is obtained by adding an accumulator as an extra argument of the test predicate defined
as follows:

test'([],_).
test'([X|L],Acc) :-
    non_member(X,Acc), test'(L,[X|Acc]).
The relationship between test and test' is given by
the following theorem:

    test(L) = test'(L,[])

Now, the original gen_test program becomes
gen_test(L) :- gen(L), test'(L,[]).
We need to introduce the following new predicate to perform the unfold/fold transformation:
gen_test'(L,Acc) :- gen(L), test'(L,Acc).
By applying a similar transformation process as before, we get the following fused recursive program of
gen_test':
gen_test'([],_).
gen_test'([X|L],Acc) :- gen_element(X),
    non_member(X,Acc), gen_test'(L,[X|Acc]).
By symbolically computing the two goals
?- test([X1, ..., Xn]).
?- test'([X1, ..., Xn], []).

and comparing the results, one can find that the reordering of pair-wise comparisons by the introduction of the
accumulator is analogous to the exchange of a double summation, Σi Σj xij = Σj Σi xij. Therefore, we refer
to this property as structural commutativity.
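The gain from the transformation can be illustrated in Python (a sketch with a hypothetical three-element domain): the naive version generates every candidate list and tests it afterwards, while the fused version threads Acc through generation and rejects a duplicate the moment it appears.

```python
DOMAIN = [1, 2, 3]   # hypothetical generator domain

def gen_then_test(n):
    # naive generate-and-test: build every n-list, test distinctness after
    cands = [[]]
    for _ in range(n):
        cands = [c + [x] for c in cands for x in DOMAIN]
    return [c for c in cands if len(set(c)) == len(c)]

def gen_test_fused(n, acc=()):
    # fused gen_test'(L, Acc): non_member(X, Acc) is checked per element,
    # so a duplicate prunes the whole branch immediately
    if n == 0:
        return [[]]
    return [[x] + rest
            for x in DOMAIN
            if x not in acc                       # non_member(X, Acc)
            for rest in gen_test_fused(n - 1, acc + (x,))]

print(sorted(gen_test_fused(2)))
```

Both versions return the same solutions; the fused one simply never builds the candidates that the tester would reject.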
One of the key problems of unfold/fold transformation
is the introduction of a new predicate such as gen_test'
in the last example. Kawamura [Kawamura 91] developed a syntactic rule for finding suitable new predicates.
There were several attempts to find appropriate new
predicates using domain dependent heuristic knowledge,
such as append optimization by the introduction of difference list representation. Kawamura's work provides
some general criteria for selecting candidates for new
predicates. His method first analyzes a given program
to be transformed and makes a list of patterns which
may possibly appear in the definition of new predicates.
This can be done by unfolding a given program and properly generalizing all resulting patterns to represent them
with a finite number of distinct patterns while avoiding over-generalization. One obvious strategy to avoid
over-generalization is to perform least general generalization by Plotkin [Plotkin 70]. Kawamura also introduced another strategy for suppressing unnecessary generalization: the subset of clauses whose heads are unifiable with each pattern is associated with the pattern, and only those patterns having the same associated subset of clauses are generalized. Note that a goal pattern is unfolded only by clauses belonging to the associated subset. Therefore, the suppression of over-generalization also suppresses unnecessary expansion of clauses by unnecessary unfolding.


Logic-based AI Research

For a long time, deduction has played a central role in
research on logic and logic programming. Recently, two
other inferences, abduction and induction, received much
attention and much research has been done in these new
directions. These directions are related to fundamental
AI problems that are open-ended by their nature. They
include the frame problem, machine learning, distributed
problem solving, natural language understanding, common sense reasoning, hypothetical reasoning and analogical reasoning. These problems require non-deductive
inference capabilities in order to solve them.
Historically, most AI research on these problems
adopted ad hoc heuristic methods reflecting problem
structures. There was a tendency to avoid a logic based
formal approach because of a common belief in the limitation of the formalism. However, the limitation of logical formalism comes only from the deductive aspect of
logic. Recently it has been widely recognized that abduction and induction based on logic provide a suitable
framework for such problems requiring open-endedness
in their formalism. There is much evidence to support
this observation.
• In natural language understanding, unification
grammar is playing an important role in integrating syntax, semantics, and discourse understanding.
• In non-monotonic reasoning, logical formalisms such
as circumscription and default reasoning and their
compilation to logic-based programs are studied extensively.
• In machine learning, there are many results based
on logical frameworks such as the Model Inference
System, inverse resolution, and least general generalization.
• In analogical reasoning, analogy is naturally described in terms of a formal inference rule similar to
logical inference. The associated inference is deeply
related to abductive inference.
In the following, three topics related to these issues
are picked up: they are hypothetical reasoning, analogy,
and knowledge representation.



Hypothetical Reasoning

A logical framework of hypothetical reasoning was studied by Poole et al. [Poole et al. 87]. They discussed the
relationship among hypothetical reasoning, default reasoning and circumscription, and argued that hypothetical reasoning is all that is needed because it is simply and
efficiently implemented and is powerful enough to implement other forms of reasoning. Recently, the relationship of these formalisms was studied in more detail and
many attempts were made to translate non-monotonic
reasoning problems into equivalent logic programs with
negation as failure.
Another direction of research was the formulation of
abduction and its relationship to negation as failure.
There was also a study of the model theory of a class
of logic programs, called general logic programs, allowing negation by failure in the definition of bodies in the
clausal form. By replacing negation-by-failure predicates
by corresponding abducible predicates which usually give
negative information, we can formalize negation by failure in terms of abduction [EshghiKowalski 89].
A proper semantics of general logic programs is given
by stable model semantics [GelfondLifschitz 88]. It is a
natural extension of least fixpoint semantics. The difference is that there is no Tp operator to compute the stable model directly, because we need a complete model for
checking the truth value of the literal of negation by failure in bottom-up fixpoint computation. Therefore, we
need to refer to the model in the definition of the model.
This introduces great difficulty in computing stable models. The trivial way is to assume all possible models and
see whether the initial models are the least ones satisfying the programs or not. This algorithm needs to search
for all possible subsets of atoms to be generated by the
programs and is not realistic at all.
Inoue [Inoue et al. 92] developed a much more efficient
algorithm for computing all stable models of general logic
programs. Their algorithm is based on a bottom-up model
generation method. Negation-by-failure literals are used
to introduce hypothetical models: ones which assume
the truth of the literals and the others which assume
that they are false. To express assumed literals, they introduce a modal operator. More precisely, they translate
each rule of the form:

    A1 /\ ... /\ Am /\ not Am+1 /\ ... /\ not An -> A

to the following disjunctive clause which does not contain
any negation-by-failure literals:

    A1 /\ ... /\ Am ->
      (NKAm+1 /\ ... /\ NKAn /\ A) \/ KAm+1 \/ ... \/ KAn.

The reason why we express the clause with the antecedent on the left-hand side is that we intend to use
this clause in a bottom-up way; that is, from left to right.
In this expression, NKA means that we assume that A is
false, whereas KA means that we assume that A is true.
Although K and NK are modal operators, we can treat
KA and NKA as new predicates independent from A by
adding the following constraints:

    KA /\ NKA -> false,  for every atom A,
    NKA /\ A -> false,   for every atom A.



By this translation, we obtain a set of clauses in first
order logic and therefore it is possible to compute all
possible models for the set using a first order bottom-up
theorem prover, MGTP, described in Section 2. After
computing all possible models for the set of clauses, we
need to select only those models M which satisfy the
following condition:
For every ground atom A, if KA ∈ M, then A ∈ M.  (3)
Note that this translation scheme defines a coding
method of original general logic programs which may
contain negation by failure in terms of pure first order
logic. Note also that the same technique can be applied
in computing abduction, which means to find possible
sets of hypotheses explaining the observation and not
contradicting given integrity constraints.
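The translation and the final selection step can be played out in Python on a tiny program (a sketch; the two-rule program p :- not q. q :- not p. and all names are hypothetical): every rule branches into its NK-alternative or its K-alternative, and candidate models violating the constraints, or the condition that KA in M implies A in M, are discarded.

```python
from itertools import product

ATOMS = ["p", "q"]
RULES = [("p", "q"), ("q", "p")]   # each pair is (head, negated atom)

def candidate_models():
    # rule A :- not B becomes: (NK B and A) or (K B) -- branch per rule
    models = []
    for choices in product([0, 1], repeat=len(RULES)):
        m = set()
        for (head, neg), c in zip(RULES, choices):
            if c == 0:
                m |= {("NK", neg), head}  # assume neg false, derive head
            else:
                m.add(("K", neg))         # assume neg true
        models.append(m)
    return models

def stable(m):
    # constraints NKA /\ A -> false, plus the selection KA in M => A in M
    for a in ATOMS:
        if ("NK", a) in m and a in m:
            return False
        if ("K", a) in m and a not in m:
            return False
    return True

stable_models = [sorted(x for x in m if isinstance(x, str))
                 for m in candidate_models() if stable(m)]
print(stable_models)
```

For this program the surviving candidates are exactly the two stable models, {p} and {q}, as stable model semantics prescribes.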
Satoh and Iwayama [SatohIwayama 92] independently
developed a top-down procedure for answering queries to
a general logic program with integrity constraints. They
modified an algorithm proposed by Eshghi and Kowalski
[EshghiKowalski 89] to correctly handle situations where
some proposition must hold in a model, like the requirement of (3).
Iwayama and Satoh [IwayamaSatoh 91] developed a
mixed strategy combining bottom-up and top-down
strategies for computing the stable models of general
logic programs with constraints. The procedure is basically bottom-up. The top-down computation is related
to the requirement of (3) and as soon as a hypothesis of
KA is asserted in some model, it tries to prove A by a
top-down expectation procedure.
The formalization of abductive reasoning has a wide
range of applications including computer aided design
and fault diagnosis. Our approach provides a uniform
scheme for representing such problems and solving them.
It also provides a way of utilizing our parallel inference
machine, PIM, for solving these complex AI problems.


Formal Approach to Analogy

Analogy is an important reasoning method in human
problem solving. Analogy is very helpful for solving
problems which are very difficult to solve by themselves.
Analogy guides the problem solving activities using the
knowledge of how to solve a similar problem. Another
aspect of analogy is to extract good guesses even when
there is not enough information to explain the answer.
There are three major problems to be solved in order
to mechanize analogical reasoning [Arima 92]:


• searching for an appropriate base of analogy with
respect to a given target,
• selecting important properties shared by a base and
a target, and
• selecting properties to be projected through an analogy from a base to a target.
Though there was much work on mechanizing analogy,
most of it only partly addressed the issues listed above.
Arima [Arima 92] proposed an attempt to answer all the
issues at once. Before explaining his idea, we need some
preparation in defining terminology.
Analogical reasoning is expressed as the following inference rule:

    S(B) ∧ P(B)    S(T)
    -------------------
          P(T)

where T represents the target object, B the base object,
S the similarity property between T and B, and P the
projected property.

This inference rule expresses that if we assume an object T is similar to another object B in the sense that
they share a common property S then, if B has another
property P, we can analogically reason that T also has
the same property P. Note the syntactic similarity
of this rule to modus ponens. If we generalize the object B to a universally quantified variable X and replace
the and connective with the implication connective, then
the first expression of the rule becomes S(X) ⊃ P(X),
and thereby the entire rule becomes modus ponens.

Arima [Arima 92] tried to link analogical reasoning to deductive reasoning by modifying the expression
S(B) ∧ P(B) to

∀x.(J(x) ∧ S(x) ⊃ P(x)),    (4)

where J(x) is a hypothesis added to S(x) in order to
logically conclude P(x). If there exists such a J(x), then
analogical reasoning becomes pure deductive reasoning. For example, let us assume that there is a student
(StudentB) who belongs to an orchestra club and also
neglects study. If one happens to know that another
student (StudentT) belongs to the orchestra club, then
we tend to conclude that he also neglects study. The
reason why we derive such a conclusion is that we guess
that the orchestra club is very active and that student members of this busy club tend to neglect study. This reason
is an example of the hypothesis mentioned above.

Arima analyzed the syntactic structure of the above
J(x) by carefully observing the analogical situation.
First, we need to find a proper parameter for the predicate J. Since it depends not only on an object
but also on the similarity property and the projected property, we assume that J has the form J(x,s,p), where s
and p represent the similarity property and the projected
property, respectively.

From the nature of analogy, we do not expect that
there is any direct relationship between the object x and
the projected property p. Therefore, the entire J(x,s,p)
can be divided into two parts:

J(x,s,p) = Jatt(s,p) ∧ Jobj(x,s).

The first component, Jatt(s,p), corresponds to information extracted from a base. The reason why it does not
depend on x comes from the observation that information in the base of the analogy is independent from the
choice of an object x.
The second component, Jobj(x,s), corresponds to information extracted from the similarity and therefore it
does not contain p as its parameter.

Example: Negligent Student

First, let us formally describe the hypothesis described
earlier to explain why an orchestra member is negligent
of study. It is expressed as follows:

∀x,s,p.( Enthusiastic(x,s) ∧ BusyClub(s)
    ∧ Obstructive_to(p,s) ∧ Member_of(x,s)
    ⊃ Negligent_of(x,p) )    (6)


where x, s, and p are variables representing a person, a
club and some human activity, respectively. The meaning of each predicate is easy to understand, so the
explanations are omitted. Since we know that both
StudentB and StudentT are members of an orchestra,
Member_of(x,s) corresponds to the similarity property S(x) in (4). On the other hand, since we want to infer the negligence of a student, the projected property
P(x) is Negligent_of(x,p). Therefore, the rest of the
expression in (6), Enthusiastic(x,s) ∧ BusyClub(s) ∧
Obstructive_to(p,s), corresponds to J(x,s,p). From the
syntactic features of this expression, we can conclude that

Jobj(x,s) = Enthusiastic(x,s),
Jatt(s,p) = BusyClub(s) ∧ Obstructive_to(p,s).

The reason why we need Jobj is that we are not always aware of an important similarity like Enthusiastic.
Therefore, we need to infer an important hidden similarity from the given similarity such as Member_of. This
inference requires an extra effort in order to apply the
above framework of analogy.
The restriction on the syntactic structure of J(x,s,p)
is very important since it can be used to prune the search
space when accessing the right base case given the target. This
function is particularly important when we apply our
analogical inference framework to case-based reasoning.
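The negligent-student deduction above can be run as a tiny piece of code. The predicate and constant names come from the example; the query function below is a hypothetical encoding of rule (6) over a set of ground facts, not any part of the FGCS software:

```python
# Facts for the example: the given similarity (Member_of), the hidden
# similarity Jobj (Enthusiastic), and the base-side information Jatt
# (BusyClub, Obstructive_to). All names are illustrative.
facts = {
    ("Member_of", "StudentB", "orchestra"),
    ("Member_of", "StudentT", "orchestra"),
    ("Enthusiastic", "StudentB", "orchestra"),
    ("Enthusiastic", "StudentT", "orchestra"),
    ("BusyClub", "orchestra"),
    ("Obstructive_to", "study", "orchestra"),
}

def negligent_of(x, p, facts):
    """Rule (6): Enthusiastic(x,s) ∧ BusyClub(s) ∧ Obstructive_to(p,s)
    ∧ Member_of(x,s) ⊃ Negligent_of(x,p), for some club s."""
    clubs = {f[-1] for f in facts}  # candidate bindings for s
    return any(
        ("Enthusiastic", x, s) in facts
        and ("BusyClub", s) in facts
        and ("Obstructive_to", p, s) in facts
        and ("Member_of", x, s) in facts
        for s in clubs)

print(negligent_of("StudentT", "study", facts))  # True
```

With the hypothesis J supplied as facts, the analogical conclusion about StudentT follows by plain deduction, which is exactly Arima's point.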



Knowledge Representation

Knowledge representation is one of the central issues in
artificial intelligence research. Difficulty arises from the
fact that there has been no single knowledge representation scheme for representing various kinds of knowledge
while still keeping the simplicity as well as the efficiency
of their utilization. Logic was one of the most promising
candidates, but it was weak in representing structured
knowledge and the changing world. Our aim in developing a knowledge representation framework based on
logic and logic programming is to solve both of these
problems. From the structural viewpoint, we developed
an extended relational database which can handle nonnormal forms and its corresponding programming language, CRL [Yokota 88a]. This representation allows
users to describe their databases in a structured way in
the logical framework [Yokota et al. 88b].
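To give a flavor of what a non-normal-form (nested) relation looks like, here is a small illustrative sketch in Python. The data and attribute names are invented for the example, and the code is not CRL syntax; it just shows a nested relation and its flattening into first normal form:

```python
# A nested relation: the "projects" attribute is itself relation-valued,
# which an ordinary (first-normal-form) relational model cannot express.
employees = [
    {"name": "sato", "projects": [{"id": "pim", "role": "hw"},
                                  {"id": "pimos", "role": "sw"}]},
    {"name": "tanaka", "projects": [{"id": "kappa", "role": "db"}]},
]

def unnest(relation, attr):
    """Flatten one relation-valued attribute into first normal form."""
    for row in relation:
        for sub in row[attr]:
            flat = {k: v for k, v in row.items() if k != attr}
            flat.update(sub)
            yield flat

flat_rows = list(unnest(employees, "projects"))
print(len(flat_rows))  # 3 flat rows from 2 nested rows
```

An extended relational model lets users keep the structured form on the left instead of being forced into the flattened one.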
Recently, we proposed a new logic-based knowledge
representation language, Quixote [YasukawaYokota 90].
Quixote follows the ideas developed in CRL and CIL:
it inherits object-orientedness from the extended version
of CRL and partially specified terms from CIL. One of
the main characteristics of the object-oriented features
is the notion of object identity. In Quixote, not only
simple data atoms but also complex structures are candidates for object identifiers [Morita et al. 90]. Even circular
structures can be represented in Quixote. The non-well-founded
set theory of Aczel [Aczel 88] was adopted to
characterize them as a mathematical foundation for such
objects, and unification on infinite trees [Colmerauer 82]
was adopted as an implementation method.
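To give a flavor of unification over infinite (rational) trees, here is a minimal sketch in Python; it illustrates the idea cited from [Colmerauer 82], not Quixote's actual implementation. A visited-pair set makes structural comparison terminate on circular terms, where a naive occurs-check unifier would either reject or loop:

```python
class Var:
    def __init__(self):
        self.ref = None  # bound term, or None if free

class Term:
    def __init__(self, functor, args=None):
        self.functor = functor
        self.args = args if args is not None else []

def deref(t):
    while isinstance(t, Var) and t.ref is not None:
        t = t.ref
    return t

def unify(a, b, seen=None):
    if seen is None:
        seen = set()
    a, b = deref(a), deref(b)
    if a is b:
        return True
    if isinstance(a, Var):
        a.ref = b  # no occurs check: cyclic bindings are allowed
        return True
    if isinstance(b, Var):
        b.ref = a
        return True
    if (id(a), id(b)) in seen:  # already comparing this pair: a cycle
        return True
    seen.add((id(a), id(b)))
    return (a.functor == b.functor and len(a.args) == len(b.args)
            and all(unify(x, y, seen) for x, y in zip(a.args, b.args)))

# X = f(X) and Y = f(f(Y)) denote the same infinite tree f(f(f(...))).
x = Term("f"); x.args.append(x)
y_inner = Term("f"); y = Term("f", [y_inner]); y_inner.args.append(y)
print(unify(x, y))  # True
```

The two circular terms unify because they unfold to the same infinite tree, which is the behavior rational-tree unification is meant to capture.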



In this article, we have summarized the basic research activities of the FGCS project. We emphasized two different
directions of logic programming research. One followed
logic programming languages, focusing on constraint logic
programming and concurrent logic programming. The other followed basic research in artificial
intelligence and software engineering based on logic and
logic programming.
This project has been like solving a jigsaw puzzle. It
is like trying to discover the hidden picture in the puzzle
using logic and logic programming as clues. The research
problems to be solved were derived naturally from this
image. There were several difficult problems. For some
problems, we did not even have the right evaluation standard for judging the results. The design of GHC is such
an example. Our entire picture of the project helped in
guiding our research in the right direction.
The picture is not completed yet. We need further
efforts to fill in the remaining spaces. One of the most
important parts to be added to this picture is the integration of constraint logic programming and concurrent

logic programming. We mentioned our preliminary language/system, GDCC, but it is not yet mature. We
need a really useful language which can be efficiently executed on parallel hardware. Another research subject
to be pursued is the realization of a database in KL1.
We are actively constructing a parallel database but it
is still in the preliminary stages. We believe that there
is much affinity between databases and parallelism and
we expect a great deal of parallelism from database applications. The third research subject to be pursued is
the parallel implementation of abduction and induction.
Recently, there has been much work on abduction and
induction based on logic and logic programming frameworks. They are expected to provide a foundation for
many research themes related to knowledge acquisition
and machine learning. Also, both abduction and induction require extensive symbolic computation and, therefore, fit very well with the PIM architecture.
Although further research is needed to make our results really useful in a wide range of large-scale applications, we feel that our approach is in the right direction.

This paper reflects all the basic research activities in the
Fifth Generation Computer Systems project. The author
would like to express his thanks to all the researchers
in ICOT, as well as those in associated companies who
have been working on this project. He especially would
like to thank Akira Aiba, Jun Arima, Hiroshi Fujita,
Kôiti Hasida, Katsumi Inoue, Noboru Iwayama, Tadashi
Kawamura, Ken Satoh, Hiroshi Tsuda, Kazunori Ueda,
Hideki Yasukawa and Kazumasa Yokota for their help in
greatly improving this work. Finally, he would like to
express his deepest thanks to Dr. Fuchi, the director of
ICOT, for providing the opportunity to write this paper.

[Arima 92]

J. Arima, Logical Structure of Analogy. In Proc. of the International Conf.
on Fifth Generation Computer Systems
1992, Tokyo, 1992.

[Aczel 88]

P. Aczel, Non-Well-Founded Sets. CSLI Lecture Notes No. 14, 1988.

[Aiba et al. 88] A. Aiba, K. Sakai, Y. Sato, D. J. Hawley,
and R. Hasegawa, Constraint Logic Programming Language CAL. In Proc. of
the International Conf. on Fifth Generation Computer Systems 1988, Tokyo, 1988.
[Bowen Kowalski 83] K. Bowen and R. Kowalski, Amalgamating Language and Metalanguage


in Logic Programming. In Logic Programming, K. Clark and S. Tarnlund
(eds.), Academic Press, 1983.

[FujitaHasegawa 91] H. Fujita and R. Hasegawa, A
Model Generation Theorem Prover in
KL1 Using a Ramified-Stack Algorithm. In Proc. of the Eighth International Conference on Logic Programming, Paris, 1991.

[ClarkGregory 81] K. L. Clark and S. Gregory, A Relational Language for Parallel Programming. In Proc. ACM Conf. on Functional Programming Languages and
Computer Architecture, ACM, 1981.
[ClarkGregory 86] K. L. Clark and S. Gregory, PARLOG: Parallel Programming in Logic.
Research Report DOC 84/4, Dept. of
Computing, Imperial College of Science
and Technology, London. Also in ACM
Trans. Prog. Lang. Syst., Vol. 8, No. 1, 1986.
[Chikayama 88] T. Chikayama, Programming in ESP - Experiences with SIMPOS -. In Programming of Future Generation Computers, K. Fuchi and M. Nivat (eds.), North-Holland, 1988.
[Colmerauer 82] A. Colmerauer, Prolog and Infinite
Trees. In Logic Programming, K. L.
Clark and S. A. Tarnlund (eds.), Academic Press, 1982.
[Colmerauer 86] A. Colmerauer, Theoretical Model of
Prolog II. In Logic Programming and
Its Applications, M. Van Caneghem and
D. H. D. Warren (eds.), Ablex Publishing Corp., 1986.
[FuchiFurukawa 87] K. Fuchi and K. Furukawa, The
Role of Logic Programming in the Fifth
Generation Computer Project. New

Generation Computing, Vol. 5, No. 1,
Ohmsha/Springer, 1987.
[EshghiKowalski 89] K. Eshghi and R.A. Kowalski, Abduction compared with negation by failure, in: Proceedings of the Sixth International Conference on Logic Programming, Lisbon, Portugal, 1989.

[Fuchi 90]

K. Fuchi, An Impression of KL1 Programming - from my experience with
writing parallel provers -. In Proc.
of KL1 Programming Workshop '90,
ICOT, 1990 (in Japanese).
[FujitaFurukawa 88] H. Fujita and K. Furukawa, A SelfApplicable Partial Evaluator and Its
Use in Incremental Compilation. New

Generation Computing, Vol. 6, Nos. 2,3,
Ohmsha/Springer-Verlag, 1988.

[Furukawa 92]

K. Furukawa, Logic Programming as
the Integrator of the Fifth Generation
Computer Systems Project. Communications of the ACM, Vol. 35, No. 3, 1992.

[Futamura 71]

Y. Futamura, Partial Evaluation of
Computation Process: An Approach to
a Compiler-Compiler. Systems, Computers, Controls 2, 1971.

[GelfondLifschitz 88] M. Gelfond and V. Lifschitz, The
stable model semantics for logic programming, In Proceedings of the Fifth
International Conference and Symposium on Logic Programming, Seattle, 1988.

[GomardJones 89] C. K. Gomard and N. D. Jones, Compiler Generation by Partial Evaluation:
A Case Study. In Proc. of Information
Processing 89, G. X. Ritter (ed.), NorthHolland, 1989.
[Hasida 92]

K. Hasida, Dynamics of Symbol Systems - An Integrated Architecture of
Cognition. In Proc. of the International
Conf. on Fifth Generation Computer
Systems 1992, Tokyo, 1992.

[HawleyAiba 91] D. Hawley and A. Aiba, Guarded Definite Clauses with Constraints - Preliminary Report. Technical Report TR-713,

ICOT, 1991.
[Inoue et al. 92] K. Inoue, M. Koshimura and R.
Hasegawa, Embedding Negation as Failure into a Model Generation Theorem Prover. To appear in CADE-11: The Eleventh International Conference on Automated Deduction, Saratoga
Springs, NY, June 1992.
[IwayamaSatoh 91] N. Iwayama and K. Satoh,


Bottom-up Procedure with Top-down
Expectation for General Logic Programs
with Integrity Constraints. ICOT Tech-

nical Report TR-625, 1991.
[JaffarLassez 86] J. Jaffar and J-L. Lassez, Constraint Logic Programming. Technical
Report, Department of Computer Science, Monash University, 1986.


[Jones et al. 85] N. D. Jones, P. Sestoft, and H.
Søndergaard, An Experiment in Partial
Evaluation: The Generation of a Compiler Generator. In J.-P. Jouannaud
(ed.), Rewriting Techniques and Applications, LNCS-202, Springer-Verlag,
pp. 124-140, 1985.

[Plotkin 70]

G. D. Plotkin, A note on inductive generalization. In B. Meltzer and D. Michie
(eds.), Machine Intelligence 5, 1970.

[Jones et al. 88] N. D. Jones, P. Sestoft and H. Søndergaard, MIX: A Self-Applicable Partial
Evaluator for Experiments in Compiler
Generation. Journal of LISP and Symbolic
Computation, 1988.

[Poole et al. 87] D. Poole, R. Goebel and R. Aleliunas,
Theorist: A Logical Reasoning System
for Defaults and Diagnosis. In N. Cercone
and G. McCalla (eds.), The Knowledge
Frontier: Essays in the Representation
of Knowledge, Springer-Verlag, pp. 331-352, 1987.

[Kawamura 91] T. Kawamura, Derivation of Efficient
Logic Programs by Synthesizing New
Predicates. In Proc. of 1991 International
Logic Programming Symposium, pp. 611-625, San Diego, 1991.
[SakaiAiba 89] K. Sakai and A. Aiba, CAL: A Theoretical Background of Constraint Logic Programming and its Applications. J. Symbolic Computation, Vol. 8, No. 6, pp. 589-603, 1989.

[Koshimura et al. 91] M. Koshimura, H. Fujita and R.
Hasegawa, Utilities for Meta-Programming in KL1.
In Proc. of KL1 Programming Workshop '91, ICOT, 1991 (in Japanese).

[Maher 87] M. J. Maher, Logic semantics for a class
of committed-choice programs. In Proc.
of the 4th Int. Conf. on Logic Programming, MIT Press, 1987.

[MantheyBry 88] R. Manthey and F. Bry, SATCHMO:
A Theorem Prover Implemented in Prolog. In Proc. of CADE-88, Argonne, Illinois, 1988.

[Matsumoto et al. 83] Y. Matsumoto, H. Tanaka, H.
Hirakawa, H. Miyoshi and H. Yasukawa,
BUP: A Bottom-up Parser Embedded
in Prolog. New Generation Computing,
Vol. 1, 1983.

[Morita et al. 90] Y. Morita, H. Haniuda and K. Yokota,
Object Identity in Quixote. Technical
Report TR-601, ICOT, 1990.

[MukaiYasukawa 85] K. Mukai and H. Yasukawa, Complex Indeterminates in Prolog and its
Application to Discourse Models. New
Generation Computing, Vol. 3, No. 4, 1985.

[OhsugaSakai 91] A. Ohsuga and K. Sakai, Metis: A
Term Rewriting System Generator. In
Software Science and Engineering, I.
Nakata and M. Hagiya (eds.), World
Scientific, 1991.

[OkumuraMatsumoto 87] A. Okumura and Y. Matsumoto,
Parallel Programming with Layered Streams. In Proc. 1987 International Symposium on Logic Programming, pp. 224-232, San Francisco,
September 1987.

[Saraswat 89] V. Saraswat, Concurrent Constraint
Programming Languages. PhD thesis,
Carnegie Mellon University, Computer
Science Department, 1989.

[SatohIwayama 92] K. Satoh and N. Iwayama, A Correct Top-down Proof Procedure for a
General Logic Program with Integrity
Constraints. In Proc. of the 3rd International Workshop on Extensions of Logic
Programming, E. Lamma and P. Mello
(eds.), Facoltà di Ingegneria, Università
di Bologna, Italy, 1992.

[SekiFurukawa 87] H. Seki and K. Furukawa, Notes on
Transformation Techniques for Generate and Test Logic Programs. In Proc.
1987 Symposium on Logic Programming, IEEE Computer Society Press, 1987.
[Shapiro 83]

E. Y. Shapiro, A Subset of Concurrent
Prolog and Its Interpreter. Tech. Report
TR-003, Institute for New Generation
Computer Technology, Tokyo, 1983.

[Sugimura et al. 88] R. Sugimura, K. Hasida, K.
Akasaka, K. Hatano, Y. Kubo, T. Okunishi, and T. Takizuka, A Software Environment for Research into Discourse
Understanding Systems. In Proc. of the
International Conf. on Fifth Generation
Computer Systems 1988, Tokyo, 1988.

[TakeuchiFurukawa 86] A. Takeuchi and K. Furukawa,
Partial Evaluation of Prolog Programs
and Its Application to Meta Programming. In Proc. IFIP'86, North-Holland, 1986.

[Taki 88] K. Taki, The Parallel Software Research
and Development Tool: Multi-PSI System. In Programming of Future Generation Computers, K. Fuchi and M. Nivat
(eds.), North-Holland, 1988.
[Taki 89] K. Taki, The FGCS Computing Architecture. In Proc. IFIP'89, North-Holland, 1989.

[TanakaYoshioka 88] Y. Tanaka and T. Yoshioka,
Overview of the Dictionary and Lexical Knowledge Base Research. In Proc.
FGCS'88, Tokyo, 1988.
[Tsuda 92] H. Tsuda, cu-Prolog for Constraint-based Grammar. In Proc. of the International Conf. on Fifth Generation
Computer Systems 1992, Tokyo, 1992.

[Ueda 86a] K. Ueda, Guarded Horn Clauses. In
Logic Programming '85, E. Wada (ed.),
Lecture Notes in Computer Science,
221, Springer-Verlag, 1986.

[Ueda 86b] K. Ueda, Making Exhaustive Search
Programs Deterministic. In Proc. of the
Third Int. Conf. on Logic Programming,
Springer-Verlag, 1986.

[UedaChikayama 90] K. Ueda and T. Chikayama, Design of the Kernel Language for the Parallel Inference Machine. The Computer
Journal, Vol. 33, No. 6, pp. 494-500, 1990.
[Warren 83]

D. H. D. Warren, An Abstract Prolog Instruction Set. Technical Note 304, Artificial Intelligence Center, SRI, 1983.

[YasukawaYokota 90] H. Yasukawa and K. Yokota, Labeled Graphs as Semantics of Objects.

Technical Report TR-600, ICOT, 1990.
[Yokota 88a]

K. Yokota, Deductive Approach for
Nested Relations. In Programming of
Future Generation Computers II, K.
Fuchi and L. Kott (eds.), North-Holland, 1988.

[Yokota et al. 88b] K. Yokota, M. Kawamura and A.
Kanaegami, Overview of the Knowledge
Base Management System (KAPPA). In
Proc. of the International Conf. on Fifth
Generation Computer Systems 1988,
Tokyo, 1988.

edited by ICOT. © ICOT, 1992


Summary of the Parallel Inference Machine and
its Basic Software
Shunichi Uchida
Institute for New Generation Computer Technology
4-28, Mita 1-chome, Minato-ku, Tokyo 108, Japan

This paper aims at a concise introduction to the PIM
and its basic software, including the overall framework
of the project. Now an FGCS prototype system is under
development. Its core is called a parallel inference system which includes a parallel inference machine, PIM,
and its operating system, PIMOS. The PIM includes
five hardware modules containing about 1,000 element
processors in total. On the parallel inference system,
there is a knowledge base management system (KBMS).
The PIMOS and KBMS make a software layer called a
basic software of the prototype system. These systems
are already being run on the PIM. On these systems, a
higher-level software layer is being developed. It is called
knowledge programming software. This is to be used
as a tool for more powerful inference and knowledge processing. It contains language processors for constraint
logic programming languages, parallel theorem provers
and natural language processing systems. Several experimental application programs are also being developed for
both general evaluation of the PIM and the exploration
of new application fields for knowledge processing. These
achievements with the PIM and its basic software easily
surpass the research targets set up at the beginning of
the project.



Since the fifth generation computer systems project
(FGCS) was started in June, 1982, 10 years have passed,
and the project is approaching its goal. This project
assumed that "logic" was the theoretical backbone of
future knowledge information processing, and adopted
logic programming as the kernel programming language
of fifth generation computer systems. In addition to the
adoption of logic programming, highly parallel processing for symbolic computation was considered indispensable for implementing practical knowledge information
processing systems. Thus, the project aimed to create a
new computer technology combining knowledge processing with parallel processing using logic programming.

Figure 1: Framework of FGCS Project. (The figure shows the technical framework of the prototype system in layers: Knowledge and Symbol Processing Applications; Parallel Evaluation and Benchmark Programs; Knowledge Processing; Knowledge Programming; Parallel OS and KBMS; Logic Programming Language KL1; and the Parallel Inference Machine PIM, about 1,000 PEs in total, with the Multi-PSI System of 64 PEs, forming the kernel of FGCS.)
Now an FGCS prototype system is under development. This system integrates the major research achievements of these 10 years so that they can be evaluated
and demonstrated. Its core is called a parallel inference system which includes a parallel inference machine, PIM, and its operating system, PIMOS. The PIM
includes five hardware modules containing about 1,000
element processors in total. It also includes a language
processor for a parallel logic language, KL1.
On the parallel inference system, there is a
knowledge base management system (KBMS).
The KBMS includes a database management system
(DBMS), Kappa-P, as its lower layer. The KBMS
provides a knowledge representation language, Quixote,


based on the deductive object-oriented database.
The PIMOS and KBMS make a software layer called a
basic software of the prototype system. These systems
are already being run on the PIM. The PIM and basic
software are now being used as a new research platform
for building experimental parallel application programs.
They are the most complete of their kind in the world.
On this platform, a higher-level software layer is being
developed. This is to be used as a tool for more powerful inference and knowledge processing. It contains language processors for constraint logic programming languages, parallel theorem provers, natural language processing systems, and so on. These software systems all
include the most advanced knowledge processing techniques, and are at the leading edge of advanced software technology.
Several experimental application programs are also being developed for both general evaluation of the PIM
and the exploration of new application fields for knowledge processing. These programs include a legal reasoning system, genetic information processing systems, and
VLSI CAD systems. They are now operating on the
parallel inference system, and indicate that parallel processing of knowledge processing applications is very effective in shortening processing time and in widening the
scope of applications. However, they also indicate that
more research should be made into parallel algorithms
and load balancing methods for symbol and knowledge
processing. These achievements with the PIM and its
basic software easily surpass the research targets set up
at the beginning of the project.
This paper aims at a concise introduction to the PIM
and its basic software, including the overall framework
of the project. This project is the first Japanese national project that aimed at making a contribution to
world computer science and the promotion of international collaboration. We have published our research
achievements wherever possible, and distributed various
programs from time to time. Through these activities,
we have also been given much advice and help which
was very valuable in helping us to attain our research
targets. Thus, our achievements in the project are also
the results of our collaboration with world researchers on
logic programming, parallel processing and many related


Research Targets and Plan
Scope of R&D

The general target of the project is the development of
a new computer technology for knowledge information processing.
Having "mathematical logic" as its theoretical backbone, various research and development themes were established on software and hardware technologies focusing

on knowledge and symbol processing. These themes are
grouped into the following three categories:

Parallel inference system

The core portion of the project was the research and development of the parallel inference system which contains
the PIM, a KL1 language processor, and the PIMOS. To
make the goal of the project clear, a FGCS prototype
system was considered a major target. This was to be
built by integrating many experimental hardware and
software components developed around logic programming.
The prototype system was defined as a parallel inference system which is intended to have about 1,000 element processors and attain more than 100M LIPS (Logical Inference Per Second) as its execution speed. It was
also intended to have a parallel operating system, PIMOS, as part of the basic software which provides us
with an efficient parallel programming environment in
which we can easily develop various parallel application
programs for symbol and knowledge processing, and run
them efficiently. Thus, this is regarded as the development of a supercomputer for symbol and knowledge processing.
It was intended that overall research and development
activities would be concentrated so that the major research results could be integrated into a final prototype
system, step by step, over the timespan allotted to the project.
KBMS and knowledge programming software

Themes in this category aimed to develop a basic software technology and theory for knowledge processing.
• Knowledge representation and knowledge base management
• High-level problem solving and inference software
• Natural language processing software
These research themes were intended to create new
theories and software technologies based on mathematical logic to describe various knowledge fragments
which are parts of "natural" knowledge bases produced in our social systems. We also intended to store
them in a computer system as components of "artificial" knowledge bases so that they can be used to
build various intelligent systems.
To describe the knowledge fragments, a knowledge representation language has to be provided. It can be regarded as a very high-level programming language executed by a sophisticated inference mechanism which is
much cleverer than the parallel inference system. Natural language processing research is intended to cover


Figure 2: Organization of Prototype System. (The figure shows: Experimental Application Systems, including parallel VLSI-CAD systems, a legal reasoning system, genetic information processing systems, and other parallel expert systems; Knowledge Programming, including a natural language processing system; the Parallel OS, PIMOS, with the KL1 programming environment; and the Parallel KBMS/DBMS, Kappa-P + Quixote.)

research on knowledge representation methods and such
inference mechanisms, in addition to research on easy-to-use man-machine interface functions. Experimental
software building for some of these research themes was
done on the sequential inference machines because the
level of research was so basic that computational power
was not the major problem.

Benchmarking and evaluation systems

• Benchmarking software for the parallel inference system
• Experimental parallel application software
To carry out research on an element technology in computer science, it is essential that an experimental software system is built. Typical example problems can then
be used to evaluate theories or methods invented in the
progress of the research.
To establish general methods and technologies for
knowledge processing, experimental systems should be
developed for typical problems which need to process
knowledge fragments as sets of rules and facts.
These problems can be taken from engineering systems, including machine design and the diagnosis of machine malfunction, or from social systems such as medical
care, government services, and company management.

Generally, the exploitation of computer technology for
knowledge processing is far behind that for scientific calculation. Recent expert systems and machine translation
systems are examples of the most advanced knowledge
processing systems. However, the numbers of rules and
facts in their knowledge bases are several hundred at most.
This scale of knowledge base may not be large enough
to evaluate the maximum power of a parallel inference system having about 1,000 element processors. Thus, research and development on large-scale application systems is necessary not only for knowledge processing research but also for the evaluation of the parallel inference system. Such application systems should be widely
looked for in many new fields.
The scope of research and development in this project
is very wide; however, the parallel inference system is
central to the whole project. It is a very clear research
target. Software research and development should also
cover diverse areas in recent software technology. However, it has "logic" as the common backbone.
It was also intended that major research achievements
should be integrated into one prototype system. This has
made it possible for us to organize all of our research and
development in a coherent way. At the beginning of the
project, only the parallel inference machine was defined
as a target which was described very clearly. The other
research targets described above were not planned at the

beginning of the project. They have been added in the
middle of the intermediate stage or at the final stage.


Overall R&D plan

After three years of study and discussions on determining
our major research fields and targets, the final research
and development plan was determined at the end of fiscal
1981 with the budget for the first fiscal year.
At that time, practical logic programming languages
had begun to be used in Europe mainly for natural language processing. The feasibility and potential of logic
languages had not been recognized by many computer
scientists. Thus, there was some concern that the level
of language was too high to describe an operating system, and that the overhead of executing logic programs
might be too large to use it for practical applications.
This implies that research on logic programming was in
its infancy.
Research on parallel architectures linked with highlevel languages was also in its infancy. Research on
dataflow architectures was the most advanced at that
time. Some dataflow architecture was thought to have
the potential for knowledge and symbol processing. However, its feasibility for practical applications had not yet
been evaluated.
Most of the element technologies necessary to build the
core of the parallel inference system were still in their infancy. We then tried to define a detailed research plan
step by step for the 10-year project period. We divided
the 10-year period into three stages, and defined the research to be done in each stage as follows:
• Initial stage (3 years) :
-Research on potential element technologies
-Development of research tools



Inference System in the Initial Stage
Personal Sequential Inference Machine (PSI-I)

To actually build the parallel inference system, especially
a productive parallel programming environment which is
now provided by PIMOS, we needed to develop various
element technologies step by step to obtain hardware and
software components. On the way toward this development, the most promising methods and technologies had
to be selected from among many alternatives, followed by
appropriate evaluation processes. To make this selection
reliable and successful, we tried to build experimental
systems which were as practical as possible.
In the initial stage, to evaluate the descriptive power
and execution speed of logic languages, a personal sequential machine, PSI, was developed. This was a logic
programming workstation. This development was also
aimed at obtaining a common research tool for software
development. The PSI was intended to attain an execution speed similar to that of DEC-10 Prolog running on a DEC-20 system, which was the fastest logic programming system in the world.
To begin with, a PSI machine language, KL0, was designed based on Prolog. Then a hardware system was designed for KL0. We employed tag architecture for the hardware system. Then we designed a system description language, ESP, a logic language with class and inheritance mechanisms for building program modules efficiently [Chikayama 1984]. ESP was used not only
to write the operating system for PSI, which is named
SIMPOS, but also to write many experimental software
systems for knowledge processing research.

• Final stage (3 years) :
-Second selection of major element technologies for
final targets
-Experimental building of a final full-scale system

The development of the PSI machine and SIMPOS was
successful. We were impressed by the very high software
productivity of the logic language. The execution speed
of the PSI was about 35K LIPS and exceeded its target.
However, we realized that we could improve its architecture by using the optimization capability of a compiler
more effectively. We produced about 100 PSI machines
to distribute as a common research tool. This version of
the PSI is called PSI-I.

At the beginning of the project, we made a detailed
research and development plan only for the initial stage.
We decided to make detailed plans for the intermediate
and final stages at the end of the stage before, so that
the plans would reflect the achievements of the previous
stage. The research budget and manpower were to be
decided depending on the achievements. It was likely
that the project would effectively be terminated at the
end of the initial stage or the intermediate stage.

In conjunction with the development of PSI-I and
SIMPOS, research on parallel logic languages was actively pursued. In those days, pioneering efforts were
being made on parallel logic languages such as PARLOG and Concurrent Prolog [Clark and Gregory 1984, Shapiro 1983]. We learned much from this pioneering research, and aimed to obtain a simpler language better suited as a machine language for a parallel inference machine. Near the end of the initial stage, a new parallel logic language, GHC, was designed [Ueda 1986].
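The flavor of GHC can be illustrated with the classic stream-merge program (a sketch based on the published language design [Ueda 1986]; details of the syntax here are illustrative). Each clause has the form `Head :- Guard | Body.`; a clause commits once its guard succeeds, and a goal suspends until its input arguments are sufficiently instantiated, which provides dataflow synchronization:

```
% Nondeterministic merge of two streams in GHC (illustrative sketch).
% merge(Xs, Ys, Zs) suspends until Xs or Ys is instantiated to a
% list cell or []; output is produced by body unification on Zs.
merge([X|Xs], Ys, Zs) :- true | Zs = [X|Zs1], merge(Xs, Ys, Zs1).
merge(Xs, [Y|Ys], Zs) :- true | Zs = [Y|Zs1], merge(Xs, Ys, Zs1).
merge([], Ys, Zs) :- true | Zs = Ys.
merge(Xs, [], Zs) :- true | Zs = Xs.
```

Because a goal whose inputs are still unbound simply suspends, producers and consumers written in this style synchronize through the data itself, with no explicit locks or message-passing primitives.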

• Intermediate stage (4 years)
-First selection of major element technologies for final targets
-Experimental building of medium-scale systems

Table 1: Development of Inference Systems

[Table 1 is a stage-by-stage chart that cannot be fully reproduced from this copy. It summarizes the development of the inference systems: the sequential logic programming languages KL0 and ESP and the sequential inference machine (PSI with SIMPOS) in the initial stage; and the parallel logic programming languages GHC and KL1, the new PSI model PSI-II, the multi-PSI system, and the parallel OS PIMOS with small application programs in the intermediate stage ('85-'88).]

Effect of PSI development on the
research plan

Efficiency in program production

One of the important questions related to logic languages was whether it was feasible to write an operating system, which must describe finely detailed control mechanisms, in one. Another was their applicability to writing large-scale programs. The development of SIMPOS answered these questions. SIMPOS has a multi-window-based user interface, and consists of more than 100,000 lines of ESP program. It was completed by a team of about 20 software researchers and engineers over about two years. Most of the software engineers were not familiar with logic languages at that time.
We found that logic languages have much higher
productivity and maintainability than conventional von
Neumann languages. This was obvious enough to convince us to describe a parallel operating system also in a
logic language.



The experience gained in the development of PSI-I and
SIMPOS heavily affected the planning of the intermediate stage.



Execution performance

The PSI-I hardware and firmware attained about 35K LIPS. This execution speed was sufficient for most knowledge processing applications. The PSI had an 80 MB main memory. This was a very large memory compared to mainframe computers at that time. We found that this large memory and fast execution speed made a logic language a practical and highly productive tool for software prototyping.
The implementation of the PSI-I hardware required
11 printed circuit boards. As the amount of hardware
became clear, we established that we could obtain an
element processor for a parallel machine if we used VLSI
chips for implementation.
For the KL0 language processor, which was implemented in firmware, we estimated that better optimization of the object code generated by the compiler would greatly improve execution speed. (Later, this optimization was achieved by introducing the "WAM" code [Warren 1983].)
The PSI-I and SIMPOS proved that logic languages
are a very practical and productive vehicle for complex
knowledge processing applications.


Inference Systems in the Intermediate Stage


A parallel inference system


Conceptual design of KL1 and PIMOS

The most important target in the intermediate stage was
a parallel implementation of a KL1 language processor,
and the development of a parallel operating system, PIMOS.
The full version of GHC was still too complex for machine implementation. A simpler version, FGHC (Flat GHC), was designed [Chikayama and Kimura 1985]. Finally, a practical parallel logic language, KL1, was designed based on FGHC.
KL1 is classified as an AND-parallel logic programming language. Its language processor includes an automatic memory management mechanism and a dataflow process synchronization mechanism. These mechanisms were considered essential for writing and compiling large parallel programs. The first problem was whether they could be implemented efficiently. The second was what kind of firmware and hardware support would be possible and effective.
In addition to the problems of implementing the KL1 language processor, the design of PIMOS posed several important problems. The role of PIMOS is different from that of conventional operating systems. PIMOS does not need to do primary process scheduling and memory management, because these tasks are performed by the language processor. It still has to perform resource management for main memory and element processors, and control the execution of user programs. However, a much more difficult role was added: it must allow a user to divide a job into parallel processable processes and distribute them to many element processors. Processor loads must be well balanced to attain better execution performance. In knowledge and symbol processing applications, the dynamic structure of a program is not regular, and is difficult to estimate. It was therefore desirable for PIMOS to offer some support for efficient job division and load balancing.
These problems in the language processor and the operating system were very new, and had not been studied as practical software problems. To solve them, we realized that we needed appropriate parallel hardware as a platform for carrying out practical software experiments by trial and error.

PSI-II and Multi-PSI system

In conjunction with the development of KL1 and PIMOS, we needed to extend our research and develop new
theories and software technologies for knowledge processing using logic programming. This research and development demanded improvement of PSI-I machines in such
aspects as performance, memory size, cabinet size, disk
capacity, and network connection.
We decided to develop a smaller, higher-performance model of PSI, to be called PSI-II. This
was intended to provide a better workstation for use as
a common tool and also to obtain an element processor
for the parallel hardware to be used as a platform for
parallel software development. This hardware was called
a multi-PSI system. It was regarded as a small-scale
experimental version of the PIM. As many PSI-II machines were produced, we anticipated having very stable
element processors for the multi-PSI system.
The PSI-II used VLSI gate array chips for its CPU.
The size of the cabinet was about one sixth that of PSI-I. Its execution speed was 330K LIPS, about 10 times faster than that of PSI-I. This improvement was attained mainly through better compiler optimization techniques and improvements to the machine architecture. The main memory size was also expanded to
320 MB so that prototyping of large applications could
be done quickly.
In the intermediate stage, many experimental systems
were built on PSI-I and PSI-II systems for knowledge
processing research. These included small-to-medium
scale expert systems, a natural language discourse understanding system, constraint logic programming systems, a database management system, and so on. These
systems were all implemented in the ESP language using
about 300 PSI-II machines distributed to the researchers
as their personal tools.
The development of the multi-PSI system was completed in the spring of 1988. It consists of 64 element processors which are connected by an 8 by 8 mesh network.
One element processor is contained in three printed circuit boards. Eight element processors are contained in
one cabinet. Each element processor has an 80 MB main
memory. Thus, a multi-PSI system was to have about 5 GB of memory in total. This hardware was very stable, as
we had expected. We produced 6 multi-PSI systems and
distributed them to the main research sites.

KL1 language processor and PIMOS

This was the first trial implementation of a distributed
language processor of a parallel logic language, and a
parallel operating system on real parallel hardware, used
as a practical tool for parallel knowledge processing applications.
The KL1 distributed language processor was an integration of various complex functional modules, such as a distributed garbage collector for loosely-coupled memories. The automatic process synchronization mechanism based on the dataflow model was also difficult to implement over the distributed element processors. Parts of these mechanisms had to be implemented in combination with PIMOS functions such as a dynamic on-demand loader for object program codes. Other important functions related to the implementation of the language processor were support functions for system debugging, system diagnostics, and system maintenance.
In addition to these functions for the KL1 language processor, many PIMOS functions for resource management and execution control had to be designed and implemented step by step, with repeated partial module building and evaluation.
This partial module building and evaluation was done for the core parts of the KL1 language processor and PIMOS, using not only KL1 but also the ESP and C languages. An appropriate balance between the functions of the language processor and those of PIMOS was sought. The language processor was first implemented in PSI-II firmware. It worked as a pseudo-parallel simulator of KL1, and was used as a PIMOS




• Machine language: KL1-b
• Max. 64PEs and two FEPs (PSI-II) connected to LAN
• Architecture of PE:
- Microprogram control (64 bits/word)
- Machine cycle: 200ns, Reg.file: 64W
- Cache: 4 KW, set associative/write-back
- Data width: 40 bits/word
- Memory capacity: 16MW (80MB)
• Network:
- 2-dimensional mesh
- 5MB/s x 2 directions/ch with 2 FIFO buffers/ch
- Packet routing control function

Figure 3: Multi-PSI System: Main features and Appearance

development tool. It was eventually extended and transported to the multi-PSI system.
In the development of PIMOS, the first partial module building was done using the C language in a Unix environment. This system is a tiny subset of the KL1 language processor and PIMOS, and is called the PIMOS Development Support System (PDSS). It is now distributed and used for educational purposes. The first version of PIMOS was released on the PSI-II with the KL1 firmware language processor. This is called the pseudo multi-PSI system. It is currently used as a personal programming environment for KL1 programs.
With the KL1 language processor fully implemented in firmware, one element processor, or a PSI-II, attained about 150 KLIPS for a KL1 program. It is interesting to compare this speed with that for a sequential ESP program. As a PSI-II attains about 300 KLIPS for a sequential ESP program, the overhead for KL1 caused by automatic process synchronization halves the execution speed. This overhead is compensated for by efficient parallel processing. A full-scale multi-PSI system of 64 element processors could attain 5 to 10 MLIPS. This speed was considered sufficient for building experimental software for symbol and knowledge processing applications. On this system, simple benchmarking programs and applications such as puzzle programs, a natural language parser, and a Go-game program were quickly developed. These programs and the multi-PSI system were demonstrated at FGCS'88 [Uchida et al. 1988]. These proved that KL1 and PIMOS could be used as a

new platform for parallel software research.


Overall design of the parallel inference system
Background of the design

The first question related to the design of the parallel inference system was what kind of functions must be provided for modeling and programming complex problems, and for making them run on large-scale parallel hardware.

When we started this project, research on parallel processing still tended to focus on hardware problems. The
major research and development interest was in SIMD
or MIMD type machines applied for picture processing
or large-scale scientific calculations. Those applications
were programmed in Fortran or C. Control of parallel
execution of those programs, such as job division and
load balancing, was performed by built-in programs or
prepared subroutine libraries, and could not be done by
ordinary users.
Those machines excluded most applications that involve irregular computations and require general parallel programming languages and environments. This tendency still continues. Among these parallel machines, some dataflow machines were exceptional, and had the potential to support functional languages and a general parallel programming environment.
We were confident that a general parallel programming

language and environment is indispensable for writing
parallel programs for large-scale symbol and knowledge
processing applications, and that they must provide functions such as the following:

there were no prior examples, we could not make any reliable quantitative estimation of the overhead caused by
these software systems. This implementation was therefore considered risky too.

1. An automatic memory management mechanism for
distributed memories (parallel garbage collector)

Finally, we decided not to make an element processor too complex, so that our hardware engineers could provide the software researchers with a large-scale hardware platform stable enough for the largest-scale software experiments in the world.

2. An automatic process synchronization mechanism
based on a dataflow scheme
3. Various support mechanisms for attaining the best
job division and load balancing.
The first two are to be embedded in the language processor. The last is to be provided in a parallel operating
system. All of these answer the question of how to write
parallel programs and map them onto parallel machines.
This mapping could be made fully automatic if we
limited our applications to very regular calculations and
processing. However, for the applications we intend, the
mapping process, which includes job division and load balancing, should be done by programmers using the
functions of the language processor and operating system.

A general parallel programming environment

The above mechanisms for mapping should be implemented in the following three layers:
1. A parallel hardware system consisting of element
processors and an interconnection network (PIM hardware)
2. A parallel language processor consisting of run-time
routines, built-in functions, compilers and so on
(KL1 language processor)
3. A parallel operating system including a programming environment (PIMOS)
At the beginning of the intermediate stage, we tried to
determine the roles of the hardware, the language processor and the operating system. This was really the
start of development.
One idea was to aim at hardware with many functions, using high-density VLSI technology, as described in
early papers on dataflow machine research. It was a very
challenging approach. However, we thought it too risky
because changes to the logic circuits in VLSI chips would
have a long turn-around time even if the rapid advance of
VLSI technology was taken into account. Furthermore,
we thought it would be difficult to run hundreds of sophisticated element processors for a few days to a few
weeks without any hardware faults.
Implementation of the language processor and the operating system was thought to be very difficult too. As

However, we tried to add cost-effective hardware support for KL1 to the element processor, in order to attain a higher execution speed. We employed tag architecture to support the automatic memory management mechanism as well as faster execution of KL1 programs. The automatic synchronization mechanism was to be implemented in firmware. The support for job division and load balancing was implemented partially in the firmware, as primitives of the KL1 language, but chiefly by the operating system. In the programming environment of the operating system, we hoped to provide a semi-automatic load balancing mechanism as an ultimate research goal.
PIMOS and KL1 hide from users most of the architectural details of the element processors and network system of the PIM hardware. A parallel program is modeled and programmed according to a parallel model of the application problem and the algorithms designed by the programmer. The programmer has great freedom in dividing programs, because a KL1 program is basically constructed from very fine-grain processes.
As a second step, the programmer can decide the grouping of fine-grain processes to obtain an appropriate granularity for the divided jobs, and then specify how to dispatch them to element processors using a special notation called a "pragma". This two-step approach makes parallel programming easy and productive.
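The two-step approach can be sketched as follows. The goal and predicate names below are hypothetical, and the `@node` and `@priority` pragma notations follow the conventions described in the project's KL1 literature; pragmas affect only placement and scheduling, never the logical meaning of the program:

```
% Hypothetical sketch: divide a tree search over two element processors.
search(Tree, Result) :- true |
    split(Tree, Left, Right),
    search_sub(Left,  R1) @node(1),    % dispatch this goal to processor 1
    search_sub(Right, R2) @node(2),    % dispatch this goal to processor 2
    merge_results(R1, R2, Result).     % runs where the clause body runs
```

The program is first written and debugged as communicating fine-grain goals with no pragmas at all; mapping annotations are then added and tuned separately, without touching the program's logic.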
We decided to implement the memory management mechanism and the synchronization mechanism mainly in the firmware. The job division and load balancing mechanism was to be implemented in the software. We decided not to implement uncertain mechanisms in the hardware.
The role of the hardware system was to provide a stable platform with enough element processors, execution
speed, memory capacity, number of disks and so on. The
demands made on the capacity of a cache and a main
memory were much larger than those of a general-purpose microprocessor of that time. The employment of tag architecture contributed to the simple implementation of the memory management mechanism and also increased the speed of KL1 program execution.


R&D in the final stage


Planning of the final stage

At the end of the intermediate stage, an experimental medium-scale parallel inference system consisting of
the multi-PSI system, the KL1 language processor, and
PIMOS was successfully completed. On this system,
several small application programs were developed and
run efficiently in parallel. This proved that symbol and
knowledge processing problems had sufficient parallelism
and could be written in KL1 efficiently. This success enabled us to enter the final stage.
Based on research achievements and newly developed
tools produced in the intermediate stage, we made a detailed plan for the final stage. One general target was to
make a big jump from the hardware and software technologies for the multi-PSI system to the ones for the
PIM, with hundreds of element processors. Another general target was to take on the challenge of parallel processing of large and complex knowledge processing applications
which had never been tackled anywhere in the world,
using KL1 and the PIM.
Through the research and development directed to
these targets, we expected that a better parallel programming methodology would be established for logic
programming. Furthermore, the development of large
and complex application programs would not only encourage us to create new methods of building more intelligent systems systematically but could also be used
as practical benchmarking programs for the parallel inference system. We intended to develop the following new techniques and methodologies:
1. Efficient parallel software technology
(a) Parallel modeling and programming techniques
-Parallel programming paradigms
-Parallel algorithms
(b) Efficient mapping techniques of parallel processes to parallel processors
-Dynamic load balancing techniques
- Performance debugging support

2. New methodologies to build intelligent systems using the power of the parallel inference system
(a) Development of a higher-level reasoning or inference engine and higher-level programming
(b) Methodologies for knowledge representation
and knowledge base management (methodology for knowledge programming)
The research and development themes in the final stage
were set up as follows:

1. PIM hardware development
We intended to build several models with different architectures so that we could compare mapping problems between the architectures and program models. The total number of element processors across all the modules was planned to be about 1,000.
2. The KL1 language processor for the PIM modules
We planned to develop new KL1 language processors which took into account the architectural differences among the PIM modules.
3. Improvement and extension of PIMOS
We intended to develop an object-oriented language, AYA, on top of KL1, a parallel file system, and extended performance debugging tools for its programming environment.
4. Parallel DBMS and KBMS
We planned to develop a parallel and distributed database management system which, using several disk drives connected to PIM element processors, was intended to attain high throughput and, consequently, high information-retrieval speed. As we had already developed a database management system, Kappa-II, which employed a nested relational model on the PSI machine, we decided to implement a parallel version of Kappa-II. However, we redesigned its implementation, employing the distributed database model and using KL1. This parallel version is called Kappa-P. We planned to develop a knowledge base management system on top of Kappa-P, based on a deductive object-oriented DB with a knowledge representation language, Quixote.
5. Research on knowledge programming software
We intended to continue various basic research activities to develop new theories, methodologies and
tools for building knowledge processing application
systems. These activities were grouped together as
research on knowledge programming software.
This included research themes such as a parallel
constraint logic programming language, mathematical systems including theorem provers, natural language processing systems such as a grammar design
system, and an intelligent sentence generation system for man-machine interfacing.
6. Benchmarking and experimental parallel application systems
To evaluate the parallel inference system and the
various tools and methodologies developed in the
above themes, we decided to make more effort to


explore new applications of parallel knowledge processing. We began research into a legal expert system, a genetic information processing system, and so on.


R&D results in the final stage

The actual research activities into the themes described
above differed according to their characteristics. In the development of the parallel inference system, we focused
on the integration of PIM hardware and some software
components. In our research on knowledge programming
software, we continued basic research and experimental
software building to create new theories and develop parallel software technologies for the future.

PIM hardware and KL1 language processor

One role of the PIM hardware was to provide software researchers with an advanced platform which would allow
large-scale software development for knowledge processing.
Another role was to obtain various evaluation data on the architecture and hardware structure of the element processors and network systems. In particular, we
wanted to analyze the performance of large-scale parallel
programs on various architectures (machine instruction
sets) and hardware structures, so that hardware engineers could design more powerful and cost-effective parallel hardware in the future.
In the conceptual design of the PIM hardware, we realized that there were many alternative designs for the architecture of an element processor and the structure of a
network system. For the architecture of an element processor, we could choose between a CISC type instruction
set implemented in firmware and a RISC type instruction
set. On the interconnection network, there were several
opinions, including a two dimensional mesh network like
the multi-PSI, a cross-bar switch, and a common bus and
coherent cache.
To design the best hardware, we needed to find out the
mapping relationships between program behavior and
the hardware architectures and structures. We had to
establish criteria for the design of the parallel hardware,
reflecting the algorithms and execution structures of application programs.
To gather the basic data needed to establish these design criteria, we tried to categorize our design choices
into five groups and build five PIM modules. The main
features of these five modules are listed in Table 2. The
number of element processors required for each module
was determined depending on the main purpose of the
module. Large modules have 256 to 512 element processors, and were intended to be used for software experiments. Small modules have 16 or 20 element processors

and were built for architectural experiments and evaluation.
All of these modules were designed to support KL1
and PIMOS, so that software researchers could run one
program on the different modules and compare and analyze the behaviors of parallel program execution.
The PIM/m module employed an architecture similar to that of the multi-PSI system. Thus, its KL1 language processor could be developed by simply modifying and extending that of the multi-PSI system. For other modules,
namely PIM/p, PIM/c, PIM/k, and PIM/i, the KL1
language processor had to be newly developed because
all of these modules have a cluster structure. In a cluster, four to eight element processors were tightly coupled
by a shared memory and a common bus with coherent
caches. While communication between element processors is done through the common bus and shared memory, communication between clusters is done via a packet
switching network. These four PIM modules have different machine instruction sets.
We intended to avoid duplicating development work on the KL1 language processor. We used the KL1-C language to write PIMOS and ordinary application programs. A KL1-C program is compiled into the KL1-B language, which is similar to the "WAM", as shown in Figure 5. We defined an additional layer between the KL1-B language and the real machine instructions. This layer is called the virtual hardware layer. It has a virtual machine instruction set called "PSL". The specification of the KL1-B interpreter is described in PSL. This specification is semi-automatically converted into a real interpreter or runtime routines dedicated to each PIM module. The specification in PSL is called the virtual PIM processor (the VPIM processor for short) and is common to the four PIM modules.
PIM/p, PIM/m and PIM/c are intended to be used
for large software experiments; the other modules were
intended for architectural evaluations. We plan to produce a PIM/p with 512 element processors, and a PIM/m
with 384 element processors. Now, at the beginning of
March 1992, a PIM/m of 256 processors has just started
to run a couple of benchmarking programs.
We aimed at a processing speed of more than 100
MLIPS for the PIM modules. The PIM/m with 256 processors will attain more than 100 MLIPS as its peak performance. However, for a practical application program,
this speed may be much reduced, depending on the characteristics of the application program and the programming technique. To obtain better performance, we must
attempt to augment the effect of compiler optimization
and to implement a better load balancing scheme. We
plan to run various benchmarking programs and experimental application programs to evaluate the gain and
loss of implemented hardware and software functions.


• Parallel VLSI-CAD system
• Legal inference system
• Parallel Go playing system
• Natural language analysis
• Genetic information analysis tool

Software group for functional demonstration and parallel application:
• Parallel expert systems
  - Logic design
  - Equipment diagnosis
• Parallel software development support
  - Parallel algorithms
  - Intelligent programming
• Constraint programming
  - Parallel constraint processing system
• Automatic parallel theorem-proving system
  - MGTP prover
• Discourse processing system
  - Contextual analysis
  - Language knowledge base
• General-purpose Japanese language processing system
• Parallel natural language analysis experimental system
• Parallel programming
  - Visualization tools
• Deductive/object-oriented DB
  - Knowledge representation language Quixote
• Gene DB/KB application

Figure 4: Research Themes in the Final Stage
Table 2: Features of PIM modules







[Table 2 cannot be fully reconstructed from this copy. It compares the five PIM modules on: machine instructions (RISC-type with extensions, or microprogrammed CISC-type); target cycle time (60-100 nsec); LSI devices (standard cell, gate array, or cell base); process technology (0.96-1.2 µm line width); machine configuration (multicluster connections of 8 PEs plus coherent caches linked to a shared memory, joined by a hypercube, crossbar, or mesh network, with parallel or two-level caches); and the number of PEs connected (512, 256, 16, 256, and 16, respectively).]


[Figure 5, not fully reproducible here, shows the KL1 language processor's compilation path: a KL1 program is compiled into an intermediate language, KL1-B (similar to the WAM of Prolog); the KL1-B abstract machine is then transformed, by methods corresponding to the hardware architectures, into runtime libraries, microprograms, or object codes for the virtual hardware (shared-memory multiprocessors + loosely-coupled network) and the real hardware (PIM/p, PIM/m, PIM/c, PIM/i, PIM/k, Multi-PSI).]

Figure 5: KL1 Language Processor and VPIM

Multiple Hypercube Network













• Machine language: KL1-b
• Architecture of PE and cluster
- RISC + HLIC (microprogrammed)
- Machine cycle: 60ns, Reg.file: 40bits x 32W
- 4 stage pipeline for RISC inst.
- Internal Inst. Mem: 50 bits x 8 KW
- Cache: 64 KB, 256 column, 4 sets, 32B/block
- Protocol: Write-back, Invalidation
- Data width: 40 bits/word
- Shared Memory capacity: 256 MB
• Max. 512 PEs, 8 PE/cluster and 4 clusters/cabinet
• Network:
- Double hyper-cube (Max 6 dimensions)
- Max. 20MB/sec in each link

Figure 6: PIM model P: Main Features and Appearance of a Cabinet


• Machine language: KL1-b
• Architecture of PE:
- Microprogram control (64 bits/word x 32 KW)
- Data width: 40 bits/word
- Machine cycle: 60 ns, Reg. file: 40 bits x 64W
- 5-stage pipeline
- Cache: 1 KW for inst., 4 KW for data
- Memory capacity: 16MW x 40 bits (80 MB)
• Max. 256 PEs, 32 PE/cabinet
• Network:
- 2-dimensional mesh
- 4.2MB/s x 2 directions/ch

Figure 7: PIM model M: Main Features and Appearance of four Cabinets

Development of PIMOS

PIMOS was intended to be a standard parallel operating
system for large-scale parallel machines used in symbol
and knowledge processing. It was designed as an independent, self-contained operating system with a programming environment suitable for KL1. Its functions for resource management and execution control of user programs were designed to be independent of the architectural details of the PIM hardware. They were implemented on an almost completely decentralized management scheme, so that the design could be applied to a parallel machine with one million element processors [Chikayama 1992].
PIMOS is written entirely in KL1. Its management and control mechanisms are implemented using the "meta-call" primitive of KL1. The KL1 language processor embeds an automatic memory management mechanism and a dataflow synchronization mechanism; the management and control mechanisms are implemented on top of these two mechanisms.
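The dataflow synchronization mentioned above suspends a consumer goal until the producer binds the shared logic variable. As a loose analogy only (Python threads and a queue standing in for KL1 streams; none of this is ICOT code):

```python
from concurrent.futures import ThreadPoolExecutor
import queue

def producer(out_q):
    # "binds" values one by one; a waiting consumer resumes on each binding
    for x in [3, 1, 4]:
        out_q.put(x)
    out_q.put(None)  # end of stream (closed list)

def consumer(in_q):
    total = 0
    while True:
        x = in_q.get()   # suspends until the producer supplies a value
        if x is None:
            return total
        total += x

q = queue.Queue()
with ThreadPoolExecutor(max_workers=2) as ex:
    ex.submit(producer, q)
    result = ex.submit(consumer, q).result()
print(result)  # 8
```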
The resource management function manages the memory and processor resources allocated to user processes and to input and output devices. The program execution control function starts and stops user processes, controls their order of execution according to the priorities given to them, and protects system programs from user program bugs, as in conventional sequential operating systems.
PIMOS supports multiple users, access via networks, and so on. It also has an efficient KL1 programming environment. This environment has new tools for debugging parallel programs, such as visualization programs that show the programmer the status of load balancing in graphical form, and other monitoring and measurement facilities.

Knowledge base management system

The knowledge base management system consists of two layers. The lower layer is a parallel database management system, Kappa-P, which is based on a nested relational model. It is more flexible than the usual relational database management system in processing data of irregular sizes and structures, such as natural language dictionaries and biological databases.
The upper layer is a knowledge base management system based on a deductive object-oriented database [Yokota and Nishio 1989]. This provides us with a knowledge representation language, Quixote [Yokota and Yasukawa 1992]. These upper and lower layers are written in KL1 and are now operational on PIMOS.
The development of the database layer, Kappa, was
started at the beginning of the intermediate stage.

Kappa aimed to manage the "natural databases" accumulated in society, such as natural language dictionaries.
It employed a nested relational model so that it could
easily handle data sets with irregular record sizes and
nested structures. Kappa is suitable not only for natural language dictionaries but also for DNA databases,
rule databases such as legal data and contract conditions, and other "natural databases" produced in our social activities.
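In a nested relational model, an attribute may itself be a relation, so records of irregular size fit naturally. A minimal sketch with invented dictionary data (not Kappa's actual schema or API):

```python
# One nested-relational tuple per dictionary entry: the "senses" attribute
# is itself a relation, so entries may have any number of senses.
dictionary = [
    {"word": "bank",
     "senses": [{"pos": "noun", "gloss": "edge of a river"},
                {"pos": "noun", "gloss": "financial institution"}]},
    {"word": "run",
     "senses": [{"pos": "verb", "gloss": "move quickly"}]},
]

# "Unnesting" flattens it back to an ordinary first-normal-form relation.
flat = [(e["word"], s["pos"], s["gloss"])
        for e in dictionary for s in e["senses"]]
print(len(flat))  # 3 flat tuples from 2 nested tuples
```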
The first and second versions of Kappa were developed
on a PSI machine using the ESP language. The second
version was completed at the end of the intermediate
stage, and was called Kappa-II [Yokota et al. 1988].
In the final stage, a parallel and distributed implementation of Kappa was begun. It is written in KL1 and is called Kappa-P. Kappa-P is intended to use the large PIM main memories to implement a main-memory database scheme, and to obtain a very high throughput rate for disk input and output by using many disks connected in parallel to the element processors.
In conjunction with the development of Kappa-II and Kappa-P, research on a knowledge representation language and a knowledge base management system was conducted. After repeated experiments in design and implementation, a deductive object-oriented database was adopted in this research.
At this point the design of the knowledge representation language, Quixote, was completed. Its language processor, which is the knowledge base management system itself, is under development and is being built on top of Kappa-P. Using Quixote, a knowledge base can be constructed incrementally from a simple database: construction starts with the accumulation of passive fact data, gradually adds active rule data, and finally yields a complete knowledge base.
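The progression from passive facts to active rules can be illustrated with a naive bottom-up deductive-database evaluator. This is a generic sketch of the idea, not Quixote syntax:

```python
# Facts: a parent relation.  Rules: ancestor(X,Y) <- parent(X,Y)
#                            and    ancestor(X,Z) <- parent(X,Y), ancestor(Y,Z)
facts = {("parent", "a", "b"), ("parent", "b", "c")}

def derive(db):
    # naive bottom-up evaluation to a fixpoint
    while True:
        new = set()
        parents = {f for f in db if f[0] == "parent"}
        ancestors = {f for f in db if f[0] == "ancestor"}
        for (_, x, y) in parents:
            new.add(("ancestor", x, y))                  # base rule
            for (_, y2, z) in ancestors:
                if y == y2:
                    new.add(("ancestor", x, z))          # recursive rule
        if new <= db:
            return db
        db = db | new

db = derive(facts)
print(("ancestor", "a", "c") in db)  # True
```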
The Quixote and Kappa-P system is a new knowledge base management system with a high-level knowledge representation language and a parallel, distributed database management system as the base of its language processor. The first versions of Kappa-P and Quixote are now almost complete. It will be interesting to see how this big system operates and how large its overhead is.

Knowledge programming software

This software consists of various experimental programs and tools built through theoretical research and development on element technologies for knowledge processing. Most of these programs and tools are written in KL1; they can therefore be regarded as application programs for the parallel inference system.
1. Constraint logic programming system
In the final stage, a parallel constraint logic programming language, GDCC, is being developed. This language is a high-level logic language with a constraint solver as part of its language processor. The language processor is implemented in KL1 and is intended to use parallel processing to shorten execution time. GDCC is being evaluated with experimental application programs, such as a program for designing a simple handling robot [Aiba and Hasegawa 1992].
2. Theorem proving and program transformation
A model generation theorem prover, MGTP, is being implemented in KL1. For this application, the optimization of load balancing has been carried out successfully: the power of parallel processing is almost proportional to the number of element processors used. This prover is being used as a rule-based reasoner for a legal reasoning system. It enables that system to use knowledge representation based on first-order logic, and contributes to easy knowledge programming.
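Model generation theorem proving works bottom-up: starting from an empty model, clauses whose antecedents are satisfied fire and extend the model until a fixpoint is reached. A toy propositional sketch of the principle (not the MGTP implementation):

```python
# Ground clauses as (antecedents, consequent); a fact has no antecedents.
clauses = [
    ((), "p"),                 # fact: p
    (("p",), "q"),             # p -> q
    (("p", "q"), "r"),         # p & q -> r
]

def generate_model(clauses):
    model = set()
    changed = True
    while changed:
        changed = False
        for ante, conseq in clauses:
            # fire a clause whose antecedents all hold and whose
            # consequent is not yet in the model
            if all(a in model for a in ante) and conseq not in model:
                model.add(conseq)
                changed = True
    return model

print(sorted(generate_model(clauses)))  # ['p', 'q', 'r']
```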
3. Natural language processing
Software tools and linguistic databases are being developed for use in implementing natural language interfaces. The tools are integrated into a library called the Language Tool Box (LTB). The LTB includes natural language parsers, sentence generators, and linguistic databases and dictionaries including syntactic rules and so on.

Benchmarking and experimental parallel
application software

This software includes benchmarking programs for the
parallel inference system, and experimental parallel application programs which were built for developing parallel programming methodology, knowledge representation
techniques, higher-level inference mechanisms and so on.
In the final stage, we extended the application area
to include larger-scale symbol and knowledge processing
applications such as genetic information processing and
legal expert systems. This was in addition to engineering
applications such as VLSI-CAD systems and diagnostic
systems for electronic equipment. [Nitta 1992]
1. VLSI CAD programs

Several VLSI CAD programs are being developed
for use in logic simulation, routing, and placement.
These programs are aimed at developing various parallel algorithms and load balancing methods. As there are sequential programs with functions similar to these, we can compare the performance of the PIM with that of conventional machines.


2. Genetic information processing programs
Sequence alignment programs for proteins and a
protein folding simulation program are being developed. Research on an integrated database for biological data is also being conducted using Kappa.
3. A legal reasoning system
This system infers possible judgments on a crime using legal rules and past case histories. It uses the parallel theorem prover, MGTP, as the core of its rule-based reasoner. This system makes full use of important research results of this project, namely the PIM, PIMOS, MGTP, and high-level inference and knowledge representation techniques.
4. A Go game playing system
The search space of a Go game is too large for any exhaustive search method. For human players, there are many textbooks showing typical stone-placement sequences, called "joseki" patterns. This system has some of the joseki patterns and some heuristic rules in its knowledge base, with which it tries to win against a human player. It aims to attain a 5 to 10 "kyuu" level.
The applications described above all employ symbol and knowledge processing. The parallel programs were written in KL1 in a short time. For the CAD and sequence alignment programs in particular, the processing speed has improved almost proportionally to the number of element processors.
However, as we can see in the Go playing system, which is a very sophisticated program, the power of the parallel inference system cannot always increase its intelligence effectively. This implies that we cannot effectively transcribe the "natural" knowledge bases written in textbooks on Go into data or rules in the "artificial" knowledge base of the system that would make the system "clever". We need to make more effort to find better program structures and algorithms to make full use of the merits of parallel processing.



Evaluation of the parallel inference system
General purpose parallel programming environment

Practical problems in symbol and knowledge processing applications have been written efficiently in KL1 and solved quickly using a PIM with several hundred element processors. The productivity of parallel software in KL1 has proved to be much higher than in any conventional language. This high productivity is apparently a result of the automatic memory management mechanism and the automatic dataflow synchronization mechanism.
Our method of specifying job division and load balancing has been evaluated and proved successful. KL1 programming takes a two-step approach. In the first step, a programmer writes a program concentrating only on the algorithm and a model. When the program is completed, the programmer adds, as the second step, the specifications for job division and load balancing using a notation called a "pragma". This separation makes the programming work simple and productive.
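The two-step style can be mimicked in any language: write the algorithm first, then attach the mapping afterwards without touching the algorithm. In the sketch below, a decorator plays the role of a pragma; the `on_node` helper is hypothetical, for illustration only:

```python
# Step 1: the plain algorithm, with no distribution concerns.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Step 2: add a mapping "pragma" afterwards; here a decorator that a real
# system could use to ship the goal to a given processor.
def on_node(node_id):
    def wrap(f):
        def g(*args):
            # a real runtime would route the call to processor `node_id`;
            # this sketch just runs it locally
            return f(*args)
        return g
    return wrap

fib_distributed = on_node(3)(fib)
print(fib_distributed(10))  # 55
```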
The specification of the KL1 language has been evaluated as practical and adequate for researchers. However, we realize that application programmers need a simpler and higher-level language specification which is a subset of KL1. In the future, several application-oriented KL1 language specifications should be provided, just as the von Neumann language family has a variety of languages such as Fortran, Pascal and Cobol.


Evaluation of KL1 and PIMOS

The functions of PIMOS, some of which are implemented as KL1 functions, have proved effective for running and debugging user programs on parallel hardware. The resource management and execution mechanisms in particular work as we expected. For instance, priority control of user processes permits programmers to use about 4,000 priority levels and enables them to write various search algorithms and speculative computations very easily. We are convinced that KL1 and PIMOS will be the best practical example for general-purpose parallel operating systems in the future.
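Priority-based goal scheduling of this kind can be pictured as a priority queue from which the scheduler always takes the most urgent goal. A minimal illustrative sketch, not PIMOS code:

```python
import heapq

def run(goals):
    # goals: (priority, label); a lower number means a higher priority,
    # mirroring a scheduler that always picks the most urgent goal first.
    heap = list(goals)
    heapq.heapify(heap)
    order = []
    while heap:
        _, label = heapq.heappop(heap)
        order.append(label)
    return order

# e.g. promising search branches get high priority, speculative ones low
print(run([(4000, "speculative"), (1, "urgent"), (200, "normal")]))
# ['urgent', 'normal', 'speculative']
```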


Evaluation of hardware support
for language functions

In designing the PIM hardware and the KL1 language processor, we thought it more important to provide a usable and stable platform with a sufficient number of element processors for parallel software experiments than to build many dedicated functions into the element processor. The only dedicated hardware support built into the element processor was the tag architecture. Instead, we added more support for the interconnection between element processors, such as message routing hardware and a coherent cache chip.
We did not embed complex hardware support, such as the matching store of a dataflow machine or a content-addressable memory. We thought it risky because implementing such complex hardware would have a long turnaround time even with very advanced VLSI technology. We also considered that we would have to create new compiler optimization techniques dedicated to the embedded complex hardware support, and that this would not be easy either.
The completion of the PIM hardware is now one year behind the original schedule, mainly because we had many unexpected problems in the design of the random logic circuits and in submicron chip fabrication. If we had employed a more complex design for the element processor, the PIM hardware would have been even further from completion.

Comparison of PIM hardware with commercially available technology

Rapid advances have been made in RISC processors recently, and a few MIMD parallel machines which use a RISC processor as their element processor have appeared on the market. When we began to design the PIM element processor, the performance of both RISC and CISC processors was as low as a few MIPS; at that time, a dedicated processor with tag architecture could attain better performance. However, some RISC processors have now attained more than 50 MIPS. It would be interesting to evaluate these RISC processors for KL1 program execution speed.
We usually compare the execution speed of a PIM element processor with that of a general-purpose microprocessor by regarding 1 LIPS as approximately equivalent to 100 IPS. This means that a 500 KLIPS PIM element processor should be comparable to a 50 MIPS microprocessor. However, the characteristics of KL1 program execution are very different from those of the usual benchmark programs for general-purpose microprocessors.
The locality of memory access patterns in practical KL1 programs is lower than in standard programs, and since object code for a RISC instruction set has to be longer than for a CISC or dedicated-instruction-set processor, the cache miss ratio will be greater. A simple comparison between the PIM element processor and recent RISC chips using announced peak performance is therefore not meaningful; a practical implementation of the KL1 language processor on a typical RISC processor is necessary.
Most of the MIMD machines currently on the market lack a general parallel programming environment. Porting the KL1 language processor may allow them to support new scientific applications as well as symbol and knowledge processing applications.
In future processor designs, we believe that a general-purpose microprocessor should have tag architecture support as a part of its standard functions.

Evaluation of high-level programming

Parallel programming in KL1 is very productive, especially for large-scale and complex problems. The control of job division and load balancing works well for hundreds of element processors; no conventional language is so productive. However, if we compare the processing speed of a KL1 program with that of a conventional-language program with similar functions within a single element processor, we find that the KL1 overhead is not so small. This is a common trade-off between high-level and low-level programming.
One straightforward way to compensate is to provide a simple subroutine call mechanism to link C language programs to KL1 programs. Another is to improve the optimization techniques of compilers. This method is more elegant than the first, and further research on optimization techniques should be undertaken.



It is obvious that a general-purpose parallel programming language and environment are indispensable for solving practical problems of knowledge and symbol processing. A straightforward extension of conventional von Neumann languages will not allow the use of hundreds of element processors except for regular scientific calculations.
We anticipated difficulties in efficiently implementing the automatic memory management and synchronization mechanisms. However, this has now been achieved. The productivity and maintainability of KL1 are much higher than we expected, which more than compensates for the overhead of high-level language programming.
Several experimental parallel application programs on the parallel inference system have proved that most large-scale knowledge processing applications contain potential parallelism. However, to make full use of this parallelism, we need more parallel algorithms and paradigms for actually programming the applications.
The research and development targets of this FGCS project have been achieved, especially as regards the parallel inference system. We plan to distribute the KL1 language processor and PIMOS as free software or public domain software, expecting that they will be ported to many MIMD machines and will provide a research platform for future knowledge processing technology.

The development of the FGCS prototype system was conducted jointly by many people at ICOT, cooperating manufacturers, and many researchers in many countries. The author would like to express his gratitude to all the people who have given us so much advice and help for more than 10 years.


[Uchida 1987] S. Uchida. "Inference Machines in FGCS
Project", TR 278, ICOT, 1987.
[Uchida et al. 1988] S. Uchida, K. Taki, K. Nakajima, A.
Goto and T. Chikayama, "Research and Development
of The Parallel Inference System in The Intermediate Stage of The Project", Proc. Int. Conf. on Fifth Generation Computer Systems, Tokyo, Nov.28-Dec.2, 1988.
[Goto et al. 1988] A. Goto, M. Sato, K. Nakajima, K. Taki, and A. Matsumoto, "Overview of the Parallel Inference Machine Architecture (PIM)", In Proc. Int. Conf. on Fifth Generation Computer Systems 1988, Tokyo, Japan, November 1988.
[Taki 1992] K. Taki, "Parallel Inference Machine, PIM",
Proc. Int. Conf. on Fifth Generation Computer Systems, Tokyo, Jul.1-5, 1992.
[Chikayama 1984] T. Chikayama, "Unique Features of
ESP", In Proc. Int. Conf. on Fifth Generation Computer Systems 1984, ICOT, 1984, pp. 292-298.

[Chikayama 1992] T. Chikayama, "Operating System
PIMOS and Kernel Language KL1", Proc. Int. Conf. on Fifth Generation Computer Systems, Tokyo, Jul.1-5, 1992.
[Uchida et al. 1988] S. Uchida, "The Research and Development of Natural Language Processing Systems in
the Intermediate Stage of the FGCS Project", Proc.
Int. Conf. on Fifth Generation Computer Systems,
Tokyo, Nov.28-Dec.2, 1988.
[Yokota et al. 1988] K. Yokota, M. Kawamura, and A.
Kanaegami, "Overview of the Knowledge Base Management System (KAPPA)", Proc. Int. Conf. on Fifth Generation Computer Systems, Tokyo, Nov.28-Dec.2, 1988.
[Yokota and Nishio 1989] K. Yokota and S. Nishio, "Towards Integration of Deductive Databases and Object-Oriented Databases - A Limited Survey", Proc. Advanced Database System Symposium, Kyoto, Dec. 1989.

[Warren 1983] D.H.D. Warren, "An Abstract Prolog Instruction Set", Technical Note 309, Artificial Intelligence Center, SRI, 1983.

[Yokota and Yasukawa 1992] K. Yokota and H. Yasukawa, "Towards an Integrated Knowledge-Base Management System", Proc. Int. Conf. on Fifth Generation Computer Systems, Tokyo, Jul.1-5, 1992.

[Clark and Gregory 1983] K. L. Clark and S. Gregory, "Parlog: A Parallel Logic Programming Language", Research Report TR-83-5, Imperial College, March 1983.

[Aiba and Hasegawa 1992] A. Aiba and R. Hasegawa,
"Constraint Logic Programming System", Proc. Int.
Conf. on Fifth Generation Computer Systems, Tokyo,
Jul.1-5, 1992.

[Clark and Gregory 1984] K. L. Clark and S. Gregory, "Notes on Systems Programming in PARLOG", In Proc. Int. Conf. on Fifth Generation Computer Systems 1984, ICOT, 1984, pp. 299-306.

[Nitta 1992] K. Nitta, K. Taki, and N. Ichiyoshi, "Development of Parallel Application Programs of the Parallel Inference Machine", Proc. Int. Conf. on Fifth Generation Computer Systems, Tokyo, Jul.1-5, 1992.

[Shapiro 1983] E. Y. Shapiro, "A Subset of Concurrent Prolog and Its Interpreter", TR 003, ICOT, 1983.
[Ueda 1986] K. Ueda, "Guarded Horn Clauses", In Logic Programming '85, E. Wada (ed.), Lecture Notes in Computer Science 221, Springer-Verlag, 1986, pp. 168-179.
[Ueda 1986] K. Ueda, "Introduction to Guarded Horn Clauses", TR 209, ICOT, 1986.
[Chikayama and Kimura 1987] T. Chikayama and Y. Kimura, "Multiple Reference Management in Flat GHC", In Proc. Fourth Int. Conf. on Logic Programming, MIT Press, 1987, pp. 276-293.
[Chikayama et al. 1988] T. Chikayama, H. Sato and T. Miyazaki, "Overview of the Parallel Inference Machine Operating System (PIMOS)", In Proc. Int. Conf. on Fifth Generation Computer Systems 1988, ICOT, 1988, pp. 230-251.

edited by ICOT. © ICOT, 1992


Parallel Inference Machine PIM
Kazuo Taki
First Research Laboratory
Institute for New Generation Computer Technology
4-28, Mita 1-chome, Minato-ku, Tokyo 108, JAPAN

The parallel inference machine, PIM, is the prototype hardware system in the Fifth Generation Computer Systems (FGCS) project. The PIM system aims at establishing the basic technologies for large-scale parallel machine architecture, efficient kernel language implementation, and the many aspects of parallel software that are required for high-performance knowledge information processing in the 21st century. The PIM system also supports an R&D environment for parallel software, which must extract the full power of the PIM hardware.
The parallel inference machine PIM is a large-scale parallel machine with a distributed memory structure, designed to execute a concurrent logic programming language very efficiently. The features of the concurrent logic language, its implementation, and the machine architecture are suitable not only for knowledge processing, but also for more general large problems that involve dynamic and non-uniform computation. Such problems have not been covered by commercial parallel machines and their software systems, which target scientific computation. The PIM system focuses on this new domain of parallel processing.
There are two purposes to this paper. One is to report
an overview of the research and development of the PIM
hardware and its language system. The other is to clarify
and itemize the features and advantages of the language,
its implementation and the hardware structure with the
view that the features are strong and indispensable for
efficient parallel processing of large problems with dynamic and non-uniform computation.



The Fifth Generation Computer Systems (FGCS)
project aims at establishing basic software and hardware
technologies that will be needed for high-performance
knowledge information processing in the 21st century.
The parallel inference machine PIM is the prototype
hardware system and offers gigantic computation power

Figure 1: Overview of the PIM System
(Diagram: the KL1 language and its parallel implementation form the interface between parallel software and the PIM hardware.)
to knowledge information processing. The PIM system includes an efficient implementation of KL1, which is the kernel language and the unique interface between hardware and software.
Logic programming was chosen as the common basis of
research and development for the project. The primary
working hypothesis was as follows: "Many problems of future computing, such as execution efficiency (of parallel processing), descriptive power of languages, software productivity, etc., will be solved dramatically through the total reconstruction of those technologies based on logic programming."
Following the working hypothesis, R&D on the PIM
system started from scratch with the construction of
hardware, a system software, a language system, application software and programming paradigms, all based
on logic programming. Figure 1 gives an overview of the
system structure.
The kernel language KL1 was first designed for efficient concurrent programming and parallel execution of knowledge processing problems. Then, R&D on the PIM hardware with its distributed-memory MIMD architecture and on the KL1 language implementation were carried out, both aiming at efficient KL1 execution in


parallel. A machine with roughly 1,000 processors was the primary target. Each of these processors was to be a high-speed processor with hardware support for symbolic processing. The PIM system also focused on realizing a useful R&D environment for parallel software which could extract the real computing power of the PIM. The preparation of a good R&D environment was an important project policy.
KL1 is a concurrent logic programming language primarily targeting knowledge processing. Since the language had to be a common basis for various types of knowledge processing, it became a general-purpose concurrent language suitable for symbolic processing, without being tied to a specific reasoning mechanism or a particular knowledge representation paradigm.
Our R&D showed the language features of KL1 to be very suitable for the dynamic and non-uniform large problems that are not covered by commercial parallel computers and their software systems for scientific computation. Most knowledge processing problems fall within this problem domain of dynamic and non-uniform computation. The PIM hardware and the KL1
language implementation support the efficiency of the
language features. Thus, the PIM system covers this
new domain of parallel processing.
This paper focuses on two subjects. One is the R&D
report of the PIM hardware and the KL1 language implementation on it. The other is to clarify and itemize the features and advantages of the language, its implementation and the hardware structure, with the view that these features are strong and indispensable for efficient parallel processing of large problems with dynamic and non-uniform computation. Any parallel processing system
targeting this problem domain must consider those features.
Section 2 scans the R&D history of parallel processing systems in the FGCS project, with explanation of
some of the keywords. Section 3 characterizes the PIM
system. Many advantageous features of the language, its
parallel implementation and hardware structure are described with the view that the features are strong and
indispensable for efficient programming and execution of
the dynamic and non-uniform large problems. Section
4 presents the machine architecture of PIM. Five different models have been developed for both research use
and actual software development. Some hardware specifications are also reported. Section 5 briefly describes
the language implementation methods and techniques,
to give a concrete image of several key features of the
KL1 implementation. Section 6 reports some measurements and evaluation mainly focusing on a low-cost implementation of small-grain concurrent processes and remote synchronization, which support the advantageous
features of KLl. Overall efficiency, as demonstrated by
a few benchmark programs, is shown, including the most
recent measurements on PIM/m. Then, section 7 con-

cludes this paper.
Several important research issues of parallel software
are reported in other papers: the parallel operating system PIMOS is reported in [Chikayama 1992] and the
load balancing techniques controlled by software are reported in [Nitta et al. 1992].


R&D History

This section shows the R&D history of parallel processing systems in the FGCS project. Important research items and products of the R&D are described
briefly, with explanations of several keywords. There
are related reports for further information [Uchida 1992]
[Uchida et al. 1988].


Start of the Mainstream of R&D

The mainstream of R&D on parallel processing systems started at the beginning of the intermediate stage of the FGCS project, in 1985. Just before that time, the concurrent logic language GHC [Ueda 1986] had been designed, and it was chosen as the kernel language of the R&D. Its language features are described in section 3.4.
Development of small hardware and software systems was started, based on the kernel language GHC as the hardware/software interface. The hardware system was used as a testbed for parallel software research. Experience and evaluation results were fed back into the next R&D of larger hardware and software systems; this was the bootstrapping of R&D.
It started with the development of the Multi-PSI [Taki 1988]. The purpose of the hardware development was not only architectural research on knowledge processing hardware, but also the preparation of a testbed for efficient implementation of the kernel language. The Multi-PSI was also intended to be a useful tool and environment for parallel software research and development. That is, the hardware was not just an experimental machine, but a reliable system developed in a short period, with measurement and debugging facilities for software development. After construction of the Multi-PSI/V1 and /V2 with their language implementations, various parallel programs and much technology and know-how of parallel software were accumulated [Nitta et al. 1992] [Chikayama 1992]. The systems have been used as the advanced software development environment for the parallel inference machines.



The first hardware was the Multi-PSI/V1 [Taki 1988] [Masuda et al. 1988], which started operation in spring 1986. The personal sequential inference machine PSI [Taki et al. 1984], a development result of the initial stage of the project, was used for its processing elements. Six PSI machines were connected by a mesh network which supported so-called wormhole routing. The first distributed implementation of GHC was built on it [Ichiyoshi et al. 1987]. (Distributed implementation means a parallel implementation on distributed-memory hardware.) Execution speed was slow (1K LIPS = 1,000 logical inferences per second) because the system was an interpreter written in ESP (the system description language of the PSI). However, the basic algorithms and techniques of distributed GHC implementation were investigated on it. Several small parallel programs were written and executed on it for evaluation, and preliminary experiments in load balancing were also carried out.


From GHC to KL1

Since GHC had only the basic functions that a kernel concurrent logic language must support, language extensions were needed for the next, more practical system. The kernel language KL1 was designed with consideration of execution efficiency, operating system support, and some built-in functions [Ueda and Chikayama 1990] [Chikayama 1992]. An intermediate language KL1-B, the target language of the KL1 compiler, was also designed [Kimura and Chikayama 1987]. In the Multi-PSI/V2 and one PIM model, binary code of KL1-B is directly interpreted by microprogram; that is, KL1-B is the machine language itself. In the other PIM models, KL1-B code is converted to lower-level machine instruction sequences and executed by hardware.



The second hardware system was the Multi-PSI/V2 [Takeda et al. 1988] [Nakajima 1992], improved enough in performance and functions to be called the first experimental parallel inference machine. It started operation in 1988 and was demonstrated at the FGCS'88 international conference.
The Multi-PSI/V2 included 64 processors, each equivalent to the CPU of the PSI-II [Nakashima and Nakajima 1987], a smaller and faster model of the PSI. Processors were connected by a two-dimensional mesh network with improved speed (10M bytes/s, full duplex, in each channel). KL1-B was the machine language of the system, executed by microprogram. Almost all the runtime functions of KL1 were implemented in microprogram. Compared with the Multi-PSI/V1, the KL1 implementation was much improved in execution efficiency, reduction of inter-processor communication messages, efficient garbage collection, etc. It attained 130K LIPS (on KL1 append) in single-processor speed. Tables 1 to 4 include specifications of the Multi-PSI/V2. Since 1988, more than 15 systems, large systems with 64 processors and small ones with 32 or 16 processors, have been in operation for parallel software R &

D in ICOT and in cooperating companies.
A powerful simulator of the Multi-PSI/V2, called the pseudo Multi-PSI, was also developed as a software development environment; it ran on the Prolog workstation PSI-II. A very special feature arose from the similarity between the PSI-II CPU and the processing element of the Multi-PSI/V2. Usually, the PSI-II executed the ESP language with a dedicated microprogram; however, it loaded the KL1 microprogram dynamically when the simulator system was activated. The simulator executed KL1 programs at a speed similar to that of a single Multi-PSI/V2 processor. Since PIMOS could also be executed on the simulator, programmers could use the simulator as an environment similar to the real Multi-PSI/V2, except for the speedup from multiple processors and process scheduling. The pseudo Multi-PSI was a valuable system for initial debugging of KL1 programs.


Software Development



The parallel operating system PIMOS (the first version) and four small application programs (benchmark programs) [Ichiyoshi 1989] had been developed by FGCS'88. Much effort was put into PIMOS development to realize a good environment for programming, debugging, execution, and measurement of parallel programs. In the development of the small application programs, several important research topics of parallel software were investigated, such as concurrent algorithms with large concurrency but no increase in complexity, programming paradigms and techniques for efficient KL1 programs, and dynamic and static load balancing schemes for dynamic and non-uniform computation.
PIMOS has been improved through several versions, and was ported to the PIM by 1992. The small application programs, pentomino [Furuichi et al. 1990], bestpath [Wada and Ichiyoshi 1990], PAX (a natural language parser) and tsume-go (a board game), were improved, measured, and analyzed until 1989. They are still used as test and benchmark programs on the PIM.
These developments showed that the KL1 system on the Multi-PSI/V2 with PIMOS has reached a performance level sufficient for practical use, and has realized sufficient functions for describing complex concurrent programs and for experiments in software-controlled load balancing.
Several large-scale parallel application programs have been developed since late 1989 [Nitta et al. 1992], and development is still continuing. Some of them have been ported to the PIM.



Parallel Inference Machine PIM


Five PIM Models

Design of the parallel inference machine PIM started concurrently with manufacturing of the Multi-PSI/V2. Some research items in hardware architecture had been omitted in the development of the Multi-PSI/V2 because of the short development time needed to start parallel software development. So, PIM took an ambitious R&D plan, focusing on both architectural research and realization of a software development environment.
The first step toward the novel architecture was multiple clusters. A small number of tightly-coupled processors with shared memory formed a cluster. Many clusters were connected by a high-speed network to construct a PIM system with several hundred processors. Benefits of the architecture will be discussed in section 3.7.
Many component technologies had to be developed or improved to realize the new system, such as parallel cache memory suitable for frequent inter-processor communications, high-speed processors for symbolic processing, improvement of the network, etc. For R&D of better component technologies and their combinations, a development plan of five PIM models was made, so that different component architectures and their combinations could be investigated, with independent research topics or roles assigned to each model.
Two models, PIM/p [Kumon et al. 1992] and PIM/c [Nakagawa et al. 1992], took the multi-cluster structure. They include several hundred processors, with a maximum of 512 in PIM/p and 256 in PIM/c. They were developed both for architectural research and for software R&D. Each investigated different network and processor architectures.
Two other models, PIM/k [Sakai et al. 1991] and PIM/i [Sato et al. 1992], were developed for experiments on intra-cluster architecture. A two-layered coherent cache memory, which enabled a larger number of processors in a cluster, a broadcast-type coherent cache memory, and a processor with an LIW-type instruction set were tested.
The remaining model, PIM/m [Nakashima et al. 1992], did not take the multi-cluster structure, but focused on rigid compatibility with the Multi-PSI/V2, with improved processor speed and a larger number of processors. The maximum number of processors will be 256. The performance of a processor will be four to five times that of the Multi-PSI/V2 at peak speed, and 1.5 to 2.5 times on average. The processor is similar to the CPU of the PSI-UX, the most recent version of the PSI machine. A simulator, the pseudo-PIM/m, was also prepared, like the pseudo Multi-PSI. Among the models, PIM/m was mainly targeted as the parallel software development machine.
The architecture and specifications of each model will be reported in section 4.
Experimental implementations of some LSIs for these models started in 1989. The final design was almost fixed in 1990, and manufacturing of the whole systems proceeded in 1991. From 1991 to spring 1992, assembly and testing of the five models were carried out.

Software Compatibility

The KL1 language is common to all five PIM models. Except for execution efficiency, any KL1 program, including PIMOS, can run on all the models. The hardware architecture differs between two groups: the Multi-PSI and PIM/m on one side, and the other PIM models on the other. However, from the programmers' view, the abstract architectures are designed to be similar, as follows.
Load allocation to processors is fully controlled by programs on the Multi-PSI and the PIM/m. It is sometimes written by programmers directly, and sometimes specified by load allocation libraries. The programmers are often researchers of load balancing techniques. On the other hand, on the PIM models with the multi-cluster structure, load balancing within a cluster is completely controlled by the KL1 runtime system (not by KL1 programs). That is, programmers do not have to think about multiple processors in a cluster, but specify load allocation to each cluster in their programs. This means that a processor of the Multi-PSI or PIM/m corresponds to a cluster of the PIM models with the multi-cluster structure, which simplifies porting of KL1 programs.


KL1 Implementation for PIM

The KL1 system may well be the first full-scale system in the world that can execute large-scale parallel symbolic processing programs very efficiently. Execution mechanisms and algorithms of the KL1 language had been sufficiently developed for distributed memory architectures on the Multi-PSI/V2. Some mechanisms and algorithms had to be extended for the multi-cluster architecture of PIM. Ease of porting the KL1 system to four different PIM models was also considered in the language implementation method. Only the PIM/m inherited the KL1 implementation method directly from the Multi-PSI/V2.
To extend the execution mechanisms and algorithms for the multi-cluster architecture, several technical topics were focused on, such as avoiding data-update contention among processors in a cluster, automatic load balancing in a cluster, extension of the inter-cluster message protocol to allow message outstripping, parallel garbage collection in a cluster, etc. [Hirata et al. 1992].
For ease of porting the KL1 system to the four different PIM models, a common specification of the KL1 system, "VPIM (virtual PIM)", was written in a "C"-like description language, "PSL", targeting a common virtual hardware. VPIM was an executable specification of the KL1 execution algorithms, which was translated into the C language and executed to examine the algorithms. VPIM has been translated into lower-level machine languages or microprograms, automatically or by hand, for each PIM model.
Preparation of the description language started in 1988. Study of efficient execution mechanisms and algorithms continued until 1991, when VPIM was completed. Porting of VPIM to the four PIM models started partially in autumn 1990, and continued until spring 1992.
Now, the KL1 system with PIMOS is available on each PIM model. On the other hand, the KL1 system on the PIM/m, which was implemented in microprogram, was made by converting the Multi-PSI/V2 microprogram, by hand or partially by automatic translation. Prior to the other PIM models, PIM/m started operation with the KL1 system and PIMOS in summer 1991.


Performance and System Evaluation

Measurements, analysis, and evaluation should be done on the various levels of the system shown below.
1. Hardware architecture and implementations

2. Execution mechanisms or algorithms of the KL1 implementation
3. Concurrent algorithms of applications (algorithms for problem solving, independent of mapping) and their implementations
4. Mapping (load allocation) algorithms
5. Total system performance of a certain application program on a certain system
These have been done on the Multi-PSI/V2. Levels 1 and 2 were reported in [Masuda et al. 1988] and [Nakajima 1992]. Levels 3 to 5 were reported in [Nitta et al. 1992], [Furuichi et al. 1990], [Ichiyoshi 1989] and [Wada and Ichiyoshi 1990].
Preliminary measurements have just started on each PIM model. Some intermediate results are included in [Nakashima et al. 1992] and [Kumon et al. 1992].
Total evaluation of the PIM system will be done in the near future; however, some observations and discussions are included in section 6.


Characterizing the PIM and KL1 System

The PIM and KL1 system have many advantageous features for very efficient parallel execution of large-scale knowledge processing, which often shows very dynamic runtime characteristics and non-uniform computation, much different from numerical applications on vector processors and SIMD machines.

This section briefly clarifies the characteristics of the targeted problem domain, and describes the various advantageous features of the PIM and KL1 system that are dedicated to efficient programming and processing in that problem domain. They give a total system image and help to clarify the differences and similarities between this system and other large-scale multiprocessors recently available in the market.


Summary of Features

The total image of the PIM and KL1 system is briefly surveyed as follows. Detailed features, their benefits, and the reasons why they were chosen are presented in the following sections.
Distributed memory MIMD machine:
The global structure of the PIM is a distributed memory MIMD machine in which hundreds of computation nodes are connected by a high-speed network. Scalability and ease of implementation are the focus. Each computation node includes a single processor or several tightly-coupled processors, and a large memory. The processors are dedicated to efficient symbolic processing.
Logic programming language: The kernel language KL1 is a concurrent logic programming language, which is the single language for system and application descriptions. The language implementation and hardware design are based on the language specification.
KL1 is not a high-level knowledge representation language nor a language for a certain type of reasoning, but a general-purpose language for concurrent and parallel programming, especially suitable for symbolic computation.
KL1 has many beneficial features for writing parallel programs in those application domains, described later.
Application domain: The primary applications are large-scale knowledge processing and symbolic computation. However, large numerical computations with dynamic features, or with non-uniform data and non-uniform computation (non-data-parallel computation), are also targeted.
Language implementation: One KL1 system is implemented on the distributed memory hardware; it is not a collection of many KL1 systems implemented on each processing node. A global name space is supported for code, logical variables, etc. Communication messages between computation nodes are handled implicitly in the KL1 system, not by KL1 programs. An efficient implementation of small-grain concurrent processes is taken.

These implementations focus on realizing the beneficial features of the KL1 language for the application domains described before.

Policy of load balancing: Load balancing between computation nodes should be controlled by KL1 programs, not by hardware nor automatically by the language system. The language system has to support enough functions and efficiency for experiments with various load balancing schemes in software.


Basic Choices

(1) Logic programming: The first choice was to adopt logic programming as the basis of the kernel language. The decision is mainly due to the insights of the ICOT founders, who expected that logic programming would be suitable for both knowledge processing and parallel processing. The history, from vague expectations of logic programming to the concrete design of the KL1 language, is explained in [Chikayama 1992].
(2) Middle-out approach: A middle-out approach to R&D was taken, placing the KL1 language as the central layer. Based on the language specification, design of the hardware and the language implementation proceeded downward, and writing of the PIMOS operating system and parallel software proceeded upward.
(3) MIMD machine: The other choices concerned basic hardware architecture.
Dataflow architecture before the mid 1980s was considered not to provide enough performance for its hardware cost, according to observations of research results in the initial stage of the project.
SIMD architecture seemed inefficient for applications with dynamic characteristics or low data-parallelism, which are often seen in knowledge processing.
MIMD architecture remained without major demerits and was most attractive from the viewpoint of ease of implementation with standard components.
(4) Distributed memory structure: A distributed memory structure is suitable for constructing a very large system, and is easy to implement.
Recent large-scale shared memory machines with directory-based cache coherency mechanisms claim good scalability. However, when the block size (the coherency management unit) is large, inter-processor communication with frequent small data transfers seems inefficient, and KL1 programs require frequent small data transfers. When the block size becomes small, a large directory memory is needed, which increases the hardware cost.
Moreover, single assignment languages need special memory management such as dynamic memory allocation and garbage collection. This management should be done as locally as possible for the sake of efficiency. Local garbage collection requires separation of local and global address spaces with some indirect referencing mechanism or address translation, even in a scalable shared memory architecture. The merits of low-cost communication in a shared memory architecture decrease significantly in such a case.
These are the reasons for choosing the distributed memory structure.

Characterizing the Applications

(1) Characterization: The characteristics of knowledge processing and symbolic computation are often much different from those of numerical computation on vector processors and SIMD machines. Problem formalizations for those machines are usually based on data-parallelism: parallelism for regular computation on uniform data.
However, the characteristics of knowledge and symbolic computations on parallel machines tend to be very dynamic and non-uniform. The contents and amount of computation vary dynamically in time and space. For example, when a heuristic search problem is mapped onto a parallel machine, the workload of each computation node changes drastically depending on expansion and pruning of the search tree. Also, when a knowledge processing system is constructed from many heterogeneous objects, each object gives rise to non-uniform computation. The computation loads of these problems can hardly be estimated before execution.
Some classes of large numerical computation without data-parallelism also show these dynamic and non-uniform characteristics.
Problems with such dynamism and non-uniformity of computation are called the dynamic and non-uniform problems in this paper, covering not only knowledge processing and symbolic computation but also large numerical computation without data-parallelism.
The dynamic and non-uniform problems tend to involve programs with more complex program structure than the data-parallel problems.

(2) Requirements for the system: Most of the software systems on recent commercial MIMD machines with hundreds of processors target data-parallel computation, and pay little attention to other paradigms.

The dynamic and non-uniform problems raise new requirements, mainly on software systems and a few on hardware systems, which are listed below.

1. Descriptive power for complex concurrent programs
2. Ease of removing bugs
3. Ease of dynamic load balancing
4. Flexibility in changing the load allocation and scheduling schemes, to cope with the difficulty of estimating actual computation loads before execution


Characterizing the Language

This subsection itemizes several advantageous features of KL1 that satisfy the requirements listed in the previous section. The features and characteristics of the concurrent logic programming language KL1 are described in detail in [Chikayama 1992].
The first three features were already present in GHC, the basis of the KL1 specification. These features make the descriptive power of the language large enough to write complex concurrent programs. They are features for concurrent programming, describing logical concurrency independent of the mapping to actual processors.

(1) Dataflow synchronization: Communication and synchronization between KL1 processes are performed entirely implicitly, within the framework of the usual unification. This is based on the dataflow model. The implicitness holds even for remote synchronization. This feature drastically reduces bugs in synchronization and communication compared with explicit description using separate primitives. The single-assignment property of logic variables supports the feature.
(2) Small-grain concurrent processes: The unit of concurrent execution in KL1 is each body goal of a clause, which can be regarded as a process invocation. KL1 programs can thus involve a large amount of concurrency implicitly.
(3) Indeterminacy: A goal (or process) can test and wait for the instantiation of multiple variables concurrently. The first instantiation resumes the goal execution, and when a clause is committed (selected from the clauses that succeed in executing their guard goals), the other wait conditions are thrown away. This function is valuable for describing "non-rigid" processing within the framework of a side-effect-free language. Speculative computation can be dealt with, and dynamic load distribution can also be written.

The next features have been included in KL1 as extensions to GHC. (4) was introduced to describe mapping (load allocation) and scheduling. These are features for parallel programming, to control actual parallelism among processing nodes. (5) is prepared for operating system support. (6) is for the efficiency of practical programs.

(4) Pragma: A pragma is a notation to specify goal allocation to processing nodes or to specify the execution priority of goals. A pragma does not affect the semantics of a program, but controls the parallelism and efficiency of actual parallel execution. Pragmas are usually attached to goals after making sure that the program is correct. A pragma can be changed very easily because it is syntactically separated from the correctness aspect of a program.

Pragma for load allocation: Goal allocation is specified with a pragma, @node(X), where X can be calculated in the program. Coupled with (1) and (2), the load allocation pragma can realize very flexible load allocation. Coupled also with (3), KL1 can describe a dynamic load balancing program within the framework of a pure logic programming language without side effects. Dynamic load balancing programs are hard to write in pure functional languages without indeterminacy.
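As an illustration only (the node count, goal names, and hash policy here are invented, and Python merely models the idea), the point of @node(X) is that the destination node is an ordinary value computed by program logic:

```python
NUM_NODES = 4   # assumed node count, for illustration only

def node_of(goal):
    """Compute a destination node for a goal, as program-level policy
    might when attaching @node(X); here a simple deterministic hash."""
    return sum(map(ord, goal)) % NUM_NODES

placement = {}
for g in [f"goal{i}" for i in range(8)]:
    placement.setdefault(node_of(g), []).append(g)

print({n: len(gs) for n, gs in sorted(placement.items())})   # → {0: 2, 1: 2, 2: 2, 3: 2}
```

Any other policy, e.g. one based on subtree depth or runtime load estimates, could replace the hash without touching the correctness of the program, which is exactly the separation the pragma provides.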
Pragma for execution priority: Execution priority is specified with a pragma, @priority(Y). Thousands of priority levels are supported to control goal scheduling in detail, without rigid ordering. The combination of (3) and the priority pragma realizes efficient control of speculative computations. The large number of priority levels can be utilized, e.g., in parallel heuristic search to expand good branches of the search tree first.
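A minimal sketch of priority-driven goal scheduling in the spirit of @priority(Y), using a Python heap as a stand-in for the runtime's ready queue (the goal names and priority values are invented):

```python
import heapq

ready = []   # the runtime's priority-ordered pool of ready goals

def spawn(goal, priority):
    # heapq is a min-heap, so negate: higher priority pops first
    heapq.heappush(ready, (-priority, goal))

spawn("weak branch", 10)
spawn("promising branch", 4000)   # thousands of levels are available
spawn("speculative goal", 1)

order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
print(order[0])   # → promising branch
```

With many levels available, a heuristic search can grade branches finely, so promising subtrees run earlier while speculative goals stay schedulable at low priority rather than being cancelled outright.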

(5) Shoen function (meta-control for goal groups): The shoen function is designed to handle a set of goals as a task, a unit of execution and resource management. It is mainly used in PIMOS. Start, stop, and abortion of tasks can be controlled. A limit on resource consumption can be specified. When errors or exceptional conditions occur, the status is frozen and reported outside the shoen.
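A toy model of this behavior, not the actual mechanism: a group object meters its goals' resource consumption, and exceeding the limit freezes the task so the condition can be reported outward (the class name, limit, and cost unit are all invented for illustration):

```python
class Shoen:
    """Illustrative goal group: shared status and a resource limit."""
    def __init__(self, limit):
        self.limit = limit
        self.consumed = 0
        self.status = "running"

    def reduce(self, cost=1):
        """Account for one unit of work; False means the task stopped."""
        if self.status != "running":
            return False
        self.consumed += cost
        if self.consumed >= self.limit:
            self.status = "frozen"   # condition reported outside the shoen
            return False
        return True

    def abort(self):
        self.status = "aborted"      # all member goals stop together

task = Shoen(limit=3)
steps = 0
while task.reduce():
    steps += 1
print(task.status, steps)   # → frozen 2
```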

(6) Functions for efficiency: KL1 has several built-in functions and data types whose semantics are understood within the framework of GHC but which have been provided for the sake of efficiency. These functions hide the demerits of side-effect-free languages, and also avoid an increase in computational complexity compared with sequential programs.



Characterizing the Language Implementation

The language features described in the previous section satisfy the requirements placed on a system by the dynamic and non-uniform problems discussed in section 3.3. Most of the special features of the language implementation focus on strengthening those advantageous features of the KL1 language.
(1) Implicit communication:
Communication and synchronization among concurrent processes are done implicitly by unifications on shared logical variables. They are supported both within a computation node and between nodes. It is especially beneficial that remote synchronization is done implicitly, just as local synchronization is.
A process (goal) can migrate between computation nodes simply by being given a pragma, @node(X). When the process holds reference pointers, remote references are generated implicitly between the computation nodes. The remote references are used for remote synchronization and communication.
These functions hide the distributed memory hardware from the "concurrent programming". That is, programmers can design concurrent processes and their communications independent of whether they are allocated to the same computation node or to different nodes. Only the "parallel programming" with pragmas, the design of load allocation and scheduling, has to be concerned with the hardware structure and the network.
The implementation features of these functions are summarized below, including the features for efficiency.
• Global name space on the distributed memory hardware, in which implicit pointer management among computation nodes is supported for logical variables, structured data and program code
• Implicit data transfer caused by unifications and goal (process) migration
• Implicit message sending and receiving invoked by data transfer and goal sending, including message composition and decomposition
• Message protocols able to reduce the number of messages, and protocols applicable to message outstripping
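The implicit dataflow synchronization on shared logical variables can be sketched with a single-assignment variable whose readers suspend until a producer binds it. This is a hedged Python model of the idea only (the class name `LogicVar` and the thread-based suspension are invented stand-ins for the runtime's mechanism, which also works transparently across nodes):

```python
import threading

class LogicVar:
    """Illustrative single-assignment variable: readers block until bound."""
    def __init__(self):
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        # Single assignment: a second bind would be a unification failure.
        if self._bound.is_set():
            raise ValueError("variable already bound")
        self._value = value
        self._bound.set()    # wake all suspended readers

    def read(self):
        self._bound.wait()   # dataflow suspension until a producer binds
        return self._value

x = LogicVar()
results = []
consumer = threading.Thread(target=lambda: results.append(x.read() + 1))
consumer.start()             # consumer suspends: x is still unbound
x.bind(41)                   # producer binds; consumer resumes implicitly
consumer.join()
print(results[0])            # → 42
```

No explicit send or receive appears in either process; the synchronization is carried entirely by the shared variable, which is what removes a whole class of communication bugs.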
(2) Small-grain concurrent processes: An efficient implementation of small-grain concurrent processes is realized, coupled with low-cost communication and synchronization among them.
Process scheduling with low-cost suspension and resumption, and priority management, are supported.

The efficient implementation allows the actual use of a lot of small-grain processes to realize large concurrency. A large number of processes also gives flexibility for mapping and load balancing.
Automatic load balancing within a cluster is also supported. It is a process (goal) scheduling function in a cluster, implemented with the priority management. This feature hides the multiprocessors in a cluster from programmers: they do not have to think about load allocation within a cluster, but only have to provide enough concurrency.
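The in-cluster scheduling can be pictured as the cluster's processors all drawing goals from one shared, priority-ordered ready pool, so no program ever names an individual processor. A toy model under invented assumptions (2 workers, 4 goals, fixed priorities; real goal dispatch is inside the runtime):

```python
import heapq
import itertools

# Shared ready pool: (negated priority, tiebreaker, goal name)
ready = [(-prio, i, f"goal{i}") for i, prio in enumerate([5, 1, 9, 3])]
heapq.heapify(ready)

executed_by = {}   # goal -> worker that happened to run it
for worker in itertools.cycle(range(2)):   # two workers taking turns
    if not ready:
        break
    _, _, goal = heapq.heappop(ready)      # highest-priority goal first
    executed_by[goal] = worker

print(sorted(executed_by))   # every goal ran, on some worker
```

Which worker runs which goal is invisible to the program; only the supply of enough concurrent goals matters, which is the property the text describes.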
(3) Memory management: The following garbage collection mechanisms are supported.
• Combination of incremental garbage collection with a subset of reference counting, and stop-and-collect copying garbage collection
• Incremental release of remote reference pointers between computation nodes with a weighted reference counting scheme
Dynamic memory management, including garbage collection, is essential both for symbolic processing and for parallel processing of the dynamic and non-uniform problems, because the single assignment feature, strongly needed for these problems, requires dynamic memory allocation and reclamation. The efficiency of garbage collectors is one of the key features of a practical language system for parallel symbolic processing.
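The weighted scheme for remote references can be sketched as follows; the class names and the initial weight are illustrative, not the actual implementation. The key property is that exporting a reference splits its weight locally instead of sending an increment message to the owner, and the object is reclaimable once the full weight has been returned:

```python
TOTAL = 64   # illustrative initial weight carried by the first reference

class RemoteObject:
    def __init__(self):
        self.returned = 0        # weight that has come back to the owner

class RemoteRef:
    def __init__(self, obj, weight):
        self.obj, self.weight = obj, weight

    def split(self):
        """Export to another node: halve the weight, no message to the owner."""
        half = self.weight // 2
        self.weight -= half
        return RemoteRef(self.obj, half)

    def drop(self):
        """Release: send the carried weight back to the owning node."""
        self.obj.returned += self.weight
        self.weight = 0

obj = RemoteObject()
r1 = RemoteRef(obj, TOTAL)
r2 = r1.split()                  # copied to another node, zero messages
r1.drop()
r2.drop()
print(obj.returned == TOTAL)     # → True  (owner may now reclaim obj)
```

Because only drops generate messages, reference copying between nodes is free of increment traffic, which suits the frequent small transfers of KL1 programs.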
(4) Implementation of the shoen function: A shoen represents a group of goals (processes), as presented in the previous subsection. The shoen mechanism is implemented not only within a computation node but also among nodes. Namely, the processes of a task can be distributed among computation nodes and still be controlled all together with shoen functions.
(5) Built-in functions for efficiency: Several built-in functions and data types are implemented to keep up with the efficiency of sequential languages.
(6) Inclusion as kernel functions: Figure 2 shows the relation of the KL1 implementation and the operating system functions. The KL1 implementation includes so-called OS kernel functions such as memory management, process management and scheduling, communication and synchronization, a virtual single name space, message composition and decomposition, etc., while PIMOS includes upper OS functions like the programming environment and user interface.
The reason why the OS kernel functions are included in the KL1 implementation is that the implementation needs to use those functions at as light a cost as possible. The cost of those functions affects the actual




(Figure 2 is a layer diagram: PIMOS, written in the KL1 language, runs on the KL1 parallel implementation, which contains the OS kernel functions and runs on the PIM hardware.)

PIMOS functions:
- Load distribution libraries, etc.
- Utility programs (e.g. shell)
- Programming environment (e.g. compiler, tracer, performance analyzer)
- Program code management
- User task management
- Resource management (e.g. I/O resources)

OS kernel functions in the KL1 implementation:
- Memory management
- Process management
- Communication, synchronization, and scheduling
- Single name space on the distributed memory
- Network message composition and decomposition

Figure 2: KL1 Implementation and OS Functions

execution efficiency of the advantageous features of the KL1 language, such as the large number of small-grain concurrent processes, implicit synchronization and communication among them (even between remote processes), indeterminacy, scheduling control with a large number of priority levels, process migration specified with pragmas, etc. Those features are indispensable for concurrent and parallel programming and for efficient parallel execution of large-scale symbolic computation with dynamic characteristics, or of large-scale non-data-parallel numerical computations.
Considering the construction of a similar-purpose parallel processing system on a standard operating system, the interface level to the OS kernel may be too high (or may cause too much overhead). Some reconstruction of the OS implementation layers might be needed for standard parallel operating systems to support such large-scale computation with dynamic characteristics.


Policy of Load Balancing