Knowledge Based Software Assistant

The Knowledge Based Software Assistant (KBSA) was a research program funded by the United States Air Force. The goal of the program was to apply concepts from artificial intelligence to the problem of designing and implementing computer software. Software would be described by models in very high level languages (essentially equivalent to first order logic), and transformation rules would then transform the specification into efficient code. The Air Force hoped to be able to generate the software to control weapons systems and other command and control systems using this method. As software was becoming ever more critical to USAF weapons systems, it was realized that improving the quality and productivity of the software development process could have significant benefits for the military, as well as for information technology in other major US industries.

History

In the early 1980s the United States Air Force realized that it had received significant benefits from applying artificial intelligence technologies to expert problems such as the diagnosis of faults in aircraft. The Air Force commissioned a group of researchers from the artificial intelligence and formal methods communities to develop a report on how such technologies might be used to aid in the more general problem of software development.

The report described a vision for a new approach to software development. Rather than defining specifications with diagrams and manually transforming them into code, as was the practice at the time, the Knowledge Based Software Assistant (KBSA) vision was to define specifications in very high level languages and then use transformation rules to gradually refine the specification into efficient code on heterogeneous platforms.

Each step in the design and refinement of the system would be recorded as part of an integrated repository. In addition to the artifacts of software development, the processes themselves (the various definitions and transformations) would be recorded in a way that allowed them to be analyzed and replayed later as needed. The idea was that each step would be a transformation that took into account various non-functional requirements for the implemented system, for example requirements to use specific programming languages such as Ada or to harden code for real-time, mission-critical fault tolerance.[1]

The Air Force decided to fund further research on this vision through its Rome Air Development Center laboratory at Griffiss Air Force Base in New York. The majority of the early research was conducted at the Kestrel Institute in Northern California (with Stanford University) and the Information Sciences Institute (ISI) in Southern California (with USC and UCLA). The Kestrel Institute focused primarily on the provably correct transformation of logical models into efficient code. ISI focused primarily on the front end of the process: defining specifications that could map to logical formalisms but were expressed in formats intuitive and familiar to systems analysts. In addition, Raytheon did a project to investigate informal requirements gathering, and Honeywell and Harvard University did work on underlying frameworks, integration, and activity coordination.

Although not primarily funded by the KBSA program, the MIT Programmer's Apprentice project also had many of the same goals and used the same techniques as KBSA.[2]

In the later stages of the KBSA program (starting in 1991), researchers developed prototypes that were used on medium to large scale software development problems. Also in these later stages, the emphasis shifted from a pure KBSA approach to more general questions of how to use knowledge-based technology to supplement and augment existing and future computer-aided software engineering (CASE) tools. There was significant interaction between the KBSA community and the object-oriented and software engineering communities. For example, KBSA concepts and researchers played an important role in the mega-programming and user-centered software engineering programs sponsored by the Defense Advanced Research Projects Agency (DARPA).[3] In these later stages the program changed its name to Knowledge-Based Software Engineering (KBSE). The name change reflected a different research goal: no longer to create a totally new, all-encompassing tool covering the complete software life cycle, but to gradually work knowledge-based technology into existing tools. Companies such as Andersen Consulting (one of the largest system integrators and, at the time, a vendor of its own CASE tool) played a major role in the program in these later stages.

Key concepts

Transformation rules

The transformation rules that KBSA used differed from traditional rules for expert systems. Transformation rules matched against specification and implementation languages rather than against facts in the world. It was possible to specify transformations using patterns, wildcards, and recursion on both the right and left hand sides of a rule. The left-hand expression specified patterns in the existing knowledge base to search for. The right-hand expression specified a new pattern to transform the left-hand side into, for example transforming a set-theoretic data type into code that uses an Ada set library.[4]
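
As a rough illustration (written in Python rather than KBSA's actual rule language), the following sketch shows the flavor of such a pattern-directed rule: the left-hand side matches a fragment of a specification term using wildcards, and the right-hand side is instantiated with the resulting bindings. The term representation, the rule, and the Set_Library name are all hypothetical.

    # Minimal sketch of a pattern-directed transformation rule, in the spirit of
    # KBSA's refinement rules. Illustrative only; not KBSA's actual rule language.

    def match(pattern, term, bindings=None):
        """Match a pattern (with '?x' wildcards) against a term; return bindings or None."""
        bindings = {} if bindings is None else dict(bindings)
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in bindings and bindings[pattern] != term:
                return None
            bindings[pattern] = term
            return bindings
        if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
            for p, t in zip(pattern, term):
                bindings = match(p, t, bindings)
                if bindings is None:
                    return None
            return bindings
        return bindings if pattern == term else None

    def substitute(template, bindings):
        """Instantiate a right-hand-side template using the bindings from the match."""
        if isinstance(template, str) and template.startswith("?"):
            return bindings[template]
        if isinstance(template, tuple):
            return tuple(substitute(t, bindings) for t in template)
        return template

    # Rule: refine a set-theoretic membership test into a call on a set library.
    lhs = ("member", "?elem", "?set")
    rhs = ("call", "Set_Library.Contains", "?set", "?elem")

    spec_fragment = ("member", "x", "open_orders")
    bindings = match(lhs, spec_fragment)
    if bindings is not None:
        print(substitute(rhs, bindings))
        # -> ('call', 'Set_Library.Contains', 'open_orders', 'x')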

The initial purpose of transformation rules was to refine a high level logical specification into well designed code for a specific hardware and software platform. This was inspired by early work on theorem proving and automatic programming. However, researchers at the Information Sciences Institute (ISI) developed the concept of evolution transformations.[5] Rather than transforming a specification into code, an evolution transformation was meant to automate various stereotypical changes at the specification level, for example developing a new superclass by extracting various capabilities from an existing class so that they could be shared more generally. Evolution transformations were developed at approximately the same time as the emergence of the software patterns community, and the two groups shared concepts and technology. Evolution transformations were essentially what is known as refactoring in the object-oriented software patterns community.[6]
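
For illustration, the kind of change such a transformation would automate corresponds to the familiar extract-superclass refactoring. The hand-written before/after sketch below uses hypothetical Python classes; an evolution transformation would perform the equivalent restructuring mechanically at the specification level.

    # Hand-written sketch of the change an "extract superclass" evolution
    # transformation would perform automatically. The classes are hypothetical.

    # Before: two independent classes duplicate a shared capability.
    class FighterAircraftBefore:
        def __init__(self, tail_number):
            self.tail_number = tail_number
        def report_position(self):
            return f"{self.tail_number} reporting position"

    class TankerAircraftBefore:
        def __init__(self, tail_number):
            self.tail_number = tail_number
        def report_position(self):
            return f"{self.tail_number} reporting position"

    # After: the shared capability is pulled up into a new superclass,
    # which both original classes now extend.
    class Aircraft:
        def __init__(self, tail_number):
            self.tail_number = tail_number
        def report_position(self):
            return f"{self.tail_number} reporting position"

    class FighterAircraft(Aircraft):
        pass

    class TankerAircraft(Aircraft):
        pass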

Knowledge-based repository

A key concept of KBSA was that all artifacts (requirements, specifications, transformations, designs, code, process models, etc.) were represented as objects in a knowledge-based repository. The original KBSA report describes what was called a Wide Spectrum Language. The requirement was for a knowledge representation framework that could support the entire life cycle: requirements, specification, and code, as well as the software process itself. The core representation for the knowledge base was meant to use the same framework throughout, although various layers could be added to support specific presentations and implementations.
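
A minimal sketch of the repository idea, using hypothetical Python classes and artifact names: every life-cycle artifact is an object of the same underlying meta-model, and refinement steps are recorded as links so that a derivation can later be analyzed or traced.

    # Minimal sketch of a knowledge-based repository in which requirements,
    # specifications, code, and process steps are all objects of one underlying
    # meta-model. The classes and artifact names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Artifact:
        kind: str                      # e.g. "requirement", "specification", "code"
        name: str
        content: str
        derived_from: list = field(default_factory=list)  # recorded refinement history

    repository = {}

    def record(artifact):
        repository[artifact.name] = artifact
        return artifact

    req = record(Artifact("requirement", "REQ-1", "Track all open orders"))
    spec = record(Artifact("specification", "SPEC-1",
                           "open_orders : set of Order", derived_from=[req]))
    code = record(Artifact("code", "orders.adb",
                           "Open_Orders : Order_Sets.Set;", derived_from=[spec]))

    # Because every step is an object with links, the derivation can be replayed
    # or analyzed later, e.g. tracing code back to its originating requirement.
    step = code
    while step.derived_from:
        step = step.derived_from[0]
    print(step.name)   # -> REQ-1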

These early knowledge-base frameworks were developed primarily by ISI and Kestrel, building on top of Lisp and Lisp machine environments. The Kestrel environment was eventually turned into a commercial product called Refine, which was developed and supported by a spin-off company from Kestrel called Reasoning Systems Incorporated.

The Refine language and environment also proved to be applicable to the problem of software reverse engineering: taking legacy code that is critical to the business but lacks proper documentation, and using tools to analyze it and transform it into a more maintainable form. With growing concern over the Y2K problem, reverse engineering became a major business priority for many large US corporations, and it was a focus area for KBSA research in the 1990s.[7][8]

There was significant interaction between the KBSA communities and the Frame language and object-oriented communities. The early KBSA knowledge bases were implemented in object-based rather than object-oriented languages: objects were represented as classes and subclasses, but it was not possible to define methods on the objects. In later versions of KBSA, such as the Andersen Consulting Concept Demo, the specification language was expanded to support message passing as well.

Intelligent Assistant

KBSA took a different approach from traditional expert systems when it came to how to solve problems and work with users. In the traditional expert system approach, the user answers a series of interactive questions and the system provides a solution. The KBSA approach left the user in control. Whereas an expert system tried, to some extent, to replace and remove the need for the expert, the intelligent assistant approach in KBSA sought to re-invent the process with technology. This led to a number of innovations at the user interface level.

An example of the collaboration between the object-oriented community and KBSA was the architecture used for KBSA user interfaces. KBSA systems utilized a model-view-controller (MVC) user interface, an idea incorporated from Smalltalk environments.[9] The MVC architecture was especially well suited to the KBSA user interface because KBSA environments featured multiple heterogeneous views of the knowledge base: it might be useful to look at an emerging model from the standpoint of entities and relations, object interactions, class hierarchies, dataflow, and many other possible views. With the MVC architecture, the underlying model was always the knowledge base, which was a meta-model description of the specification and implementation languages. When an analyst made a change via a particular diagram (e.g., adding a class to the class hierarchy), that change was made at the underlying model level, and the various views of the model were all automatically updated.[10]
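
A minimal observer-style sketch of this arrangement, with hypothetical view classes: the knowledge base is the single model, and every registered view re-renders whenever an analyst changes the model through any one of them.

    # Minimal sketch of the MVC arrangement described above: the knowledge base
    # is the single model, and each registered view re-renders when it changes.
    # The view classes and the change being made are hypothetical.

    class KnowledgeBaseModel:
        def __init__(self):
            self.classes = {"Aircraft": []}     # class name -> subclasses
            self.views = []

        def attach(self, view):
            self.views.append(view)

        def add_subclass(self, parent, child):
            self.classes.setdefault(parent, []).append(child)
            self.classes.setdefault(child, [])
            for view in self.views:             # every view updates automatically
                view.render(self)

    class ClassHierarchyView:
        def render(self, model):
            print("Class hierarchy:", model.classes)

    class EntityListView:
        def render(self, model):
            print("Entities:", sorted(model.classes))

    model = KnowledgeBaseModel()
    model.attach(ClassHierarchyView())
    model.attach(EntityListView())

    # An analyst adds a class via one diagram; all views refresh from the model.
    model.add_subclass("Aircraft", "TankerAircraft")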

One of the benefits of using a transformation was that many aspects of the specification and implementation could be modified at once. For small scale prototypes the resulting diagrams were simple enough that basic layout algorithms, combined with reliance on users to clean up diagrams, were sufficient. However, when a transformation could radically redraw models with tens or even hundreds of nodes and links, the constant updating of the various views became a task in itself. Researchers at Andersen Consulting incorporated work on graph theory from the University of Illinois to automatically update the various views associated with the knowledge base and to generate graphs with minimal intersection of links while taking into account domain- and user-specific layout constraints.

Another concept used to provide intelligent assistance was automatic text generation. Early research at ISI investigated the feasibility of extracting formal specifications from informal natural language text documents. The researchers determined that the approach was not viable: natural language is by nature simply too ambiguous to serve as a good format for defining a system. However, natural language generation was seen as feasible for producing textual descriptions that could be read by managers and non-technical personnel. This was especially appealing to the Air Force, since by law it required all contractors to generate various reports that describe the system from different points of view. Researchers at ISI, and later Cogentext and Andersen Consulting, demonstrated the viability of the approach by using their own technology to generate the documentation required by their Air Force contracts.[11]
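
As a toy illustration of the idea (far simpler than the generation technology actually used in KBSA), a template-driven generator can produce a readable description directly from a model; the model contents and templates here are hypothetical.

    # Toy sketch of generating descriptive text from a model. The model and
    # templates are hypothetical and much simpler than KBSA's text generation.
    model = {
        "name": "Order Tracking",
        "requirements": ["Track all open orders"],
        "classes": {"Order": ["OpenOrder", "ClosedOrder"]},
    }

    def describe(model):
        lines = [f"The {model['name']} system addresses {len(model['requirements'])} requirement(s)."]
        for req in model["requirements"]:
            lines.append(f"  - {req}")
        for parent, children in model["classes"].items():
            lines.append(f"The class {parent} has subclasses: {', '.join(children)}.")
        return "\n".join(lines)

    print(describe(model))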

References

  1. Green, Cordell; D. Luckham; R. Balzer; T. Cheatham; C. Rich (Aug 1983). "Report on a Knowledge-Based Software Assistant". Kestrel Institute A996431: 78. ftp://ftp.kestrel.edu/pub/papers/green/KBSA.pdf. Retrieved 4 January 2014. 
  2. Rich, Charles; Richard C. Waters (November 1988). "The Programmer's Apprentice Project: A Research Overview". Computer 21 (11): 10–25. doi:10.1109/2.86782. ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-1004.pdf. Retrieved 26 December 2013. 
  3. DeBellis, Michael; Christine Haapala (February 1995). "User-Centric Software Engineering". IEEE Expert 10 (1): 34–41. doi:10.1109/64.391959. 
  4. Smith, Doug (1991). "KIDS - A Knowledge-Based Software Development System". in Michael Lowry, Robert McCartney. Automating Software Design. AAAI/MIT Press. pp. 483–514. ISBN 978-0262620802. 
  5. Johnson, Lewis; M.S. Feather (1991). "Using Evolution Transformations to Construct Specifications". Automating Software Design (AAAI Press): 65–92. 
  6. Fowler, Martin (1999). Refactoring: Improving the Design of Existing Code. Addison Wesley. ISBN 0201485672. https://archive.org/details/isbn_9780201485677. 
  7. Boehm, Barry; Prasanta Bose (1998-08-15). "KBSA Life Cycle Evaluation: Final Technical Report". Contract No: F30602-96-C-0274 (USC Center for Software Engineering) I. http://csse.usc.edu/csse/event/1999/ARR/volumeI.pdf. Retrieved 4 January 2014. "As the program has proceeded toward its ultimate objectives, it has spawned a number of productivity-enhancing spinoffs such as the Refine-based software reengineering and testing tools". 
  8. Welty, Chris. "Summary of KBSE-93: The Eighth Annual Knowledge-Based Software Engineering Conference". ase-conferences.org. http://ase-conferences.org/ase/past/kbse-8.txt. Retrieved 4 January 2014. "REFINE/COBOL Object Modeling Workbench provides a set of reengineering tools, Refine is the language of the KBSA concept demo." 
  9. Harris, Dave; A. Czuchry (1988). "The Knowledge-Based Requirements Assistant". IEEE Expert 3 (4). 
  10. Johnson, Lewis; David R. Harris; Kevin M. Benner; Martin S. Feather (October 1992). "Aries: The Requirements/Specification Facet for KBSA". Rome Laboratory Final Technical Report RL-TR-92-248. 
  11. DeBellis, Michael; Kanth Miriyala; Sudin Bhat; William C. Sasso; Owen Rambo (April 1993). "KBSA Concept Demo: Final Technical Report". USAF Rome Laboratory Technical Report RL-TR-93-38. https://www.academia.edu/33220432. Retrieved 25 October 2021.