Re: A miscellany


> I don't think you described the proof obligation for a claim of
> well-foundedness?

To establish "well-founded R", with R: S,S->Bool, the user has to
prove that

  (forall s1:S ((forall s2:S  R(s2,s1) -> P(s2)) -> P(s1)))

implies

  (forall s:S P(s))

where P:S->Bool is a brand-new symbol.  This is a "generic" instance
of the complete induction scheme over R.
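
This can be rendered in Lean 4 as a quick sanity check of my reading of
the obligation; all names below are mine, not from any spec library:

```lean
-- Discharging the generic instance of complete induction (with P a
-- fresh predicate symbol) is exactly what it takes to establish
-- well-foundedness of R.
variable {S : Type} (R : S → S → Prop)

-- The proof obligation, stated for an arbitrary ("brand-new") P:
def CompleteInduction : Prop :=
  ∀ P : S → Prop,
    (∀ s1, (∀ s2, R s2 s1 → P s2) → P s1) → ∀ s, P s

-- Adequacy: the generic scheme really does yield well-foundedness
-- (instantiate P with accessibility under R)...
theorem scheme_gives_wf (h : CompleteInduction R) : WellFounded R :=
  ⟨h (Acc R) fun s1 ih => Acc.intro s1 ih⟩

-- ...and conversely, well-foundedness yields every instance of the scheme.
theorem wf_gives_scheme (h : WellFounded R) : CompleteInduction R :=
  fun P step s => h.induction s step
```
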

> I guess I need to think more about your item 3.  Right off the bat, it
> doesn't grab me.
> I suspect that what you lose with the A-interpretation is the ability to
> implement any specification whose data types aren't fully specified (i.e.,
> admit non-isomorphic models), but I'm not real sure of this.  If so, the
> reason the problems with compositionality disappear is that there are no
> implementations to compose.

The only way to settle this, of course, is by sifting through lots of
examples.  My anecdotal impression has been that when I want to prove
the correctness of the implementation of some abstract data type I
always seem to have enough firepower available to prove the
A-specification.  What seems to happen quite naturally is that the
mathematical part nails down the algebra in pretty good detail, while
the kind of underspecification one typically desires is easily
expressed in the interface notation.  (The one case in which I
sometimes think I might want an E-specification is when I want to do
something like underspecify the result of an operation, but insist
that it be nonetheless deterministic.  That case is easy to handle
with a special notation in the interface language.)
> Regarding separate implementations and compositionality: I don't see any
> sensible way to use a module if the client and the implementation do not
> agree on the representations of the data types they share.  So, in your
> example, the client that wants to use both Abstraction1 and Abstraction2 in
> a module to implement some other specification must have some way of
> ensuring that it is using the same representation as Abstraction1 and the
> same representation as Abstraction2.  Then the proof that Abstraction1 and
> Abstraction2 agree is quite trivial.  What have I missed?

My example was not very clear, and maybe "compositionality" was not
the right word.  Let me try again with something a little more
concrete.  You may not be terribly happy with this example, but the
whole discussion seems to revolve around marginal cases (and the
possibility of formulating some proof procedure that is both simple
and sound).

Suppose we want to define a package integer_lists containing

  - an abstract data type int_list whose elements are lists of integers
  - a procedure for sorting elements of int_list that takes advantage
       of the details of the implementation of int_list, and therefore
       has to be an export of the package

Suppose that my library contains a trait IntegerCombinatorics, which
develops a lot of interrelated combinatorial mathematics about sets,
lists, and bags of integers, including the definition of
(perm:int_list,int_list->Bool), a relation that holds when its two
arguments are permutations of one another.  The trait
IntegerCombinatorics is a real example of something that one of our
guys decided it would be useful to put into a trait library.  This
trait also introduces an operation mapping each list into the
corresponding bag, and defines two lists as being permutations of one
another iff they map into the same bag.  Assuming I understand your
objection correctly, the key point about this example is that the
abstraction of bags is an unobservable intermediary notion introduced
for the sake of the specification: it is not being implemented as an
abstract data type of the package.

(Digression: One might object that there is something wrong or, at
least, tasteless about introducing all the extra specification
machinery of IntegerCombinatorics and, in particular, in defining perm
using the nonobservable intermediate notion of bags; but that's an objection
I don't find persuasive.  People want to write specifications in
whatever way is easiest to understand, which is not necessarily the
most logically parsimonious way.  Making free use of the normal
machinery of mathematics -- that, it seems to me, is one of the
attractions of Z -- is a good thing, not a bad thing.  Finally, of
course, sound proof procedures have to be sound even in the hands of
the tasteless.)

So we have (using Larch/Ada notation):

   --| with IntegerCombinatorics
   package integer_lists is

     type int_list is private;
     --| based on int_list;

     procedure sort(x: in out int_list);
     --| OUT sorted(x) and perm(x, IN x);

   end integer_lists;
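
The mathematical part can be given a quick executable reading.  Here is
a sketch in Python, with bags modelled as multisets; to_bag plays the
role of the trait's list-to-bag operation, and all names are my own,
not taken from IntegerCombinatorics:

```python
from collections import Counter

def to_bag(xs):
    # The trait's list-to-bag operation: forget order, keep multiplicities.
    return Counter(xs)

def perm(xs, ys):
    # Two lists are permutations of one another iff they map to the same bag.
    return to_bag(xs) == to_bag(ys)

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def sort(x):
    # A possible implementation of integer_lists.sort; the annotation
    #   OUT sorted(x) and perm(x, IN x)
    # is checked at exit ("IN x" being the value of x on entry).
    x_in = list(x)
    x.sort()
    assert is_sorted(x) and perm(x, x_in)
```

Note that a client observes perm only through such postconditions; the
bags themselves never cross the package boundary, which is just the
"unobservable intermediary" point above.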

Now consider some other package P, the mathematical part of whose
specification similarly uses IntegerCombinatorics.

Finally, let's suppose that we now have

   with integer_lists, P;
   package whatever is .. end whatever;

Now comes the problem: The unwary user -- and maybe even the wary user
-- would expect to identify the notion of int_bags occurring in the
annotations of package P with that of int_bags occurring in the
annotations of integer_lists.
There may be reasoning about the joint use of integer_lists and P to
implement whatever that relies on identifying some of the
"unobservable" sorts and the operations on them.  If the package
specifications are given E-interpretations, that identification is not
justified (in the absence of a lot of extra meta-theoretical proofs
about the trait IntegerCombinatorics).  I'm claiming that, as a matter
of fact, the correctness proof for the abstract data type typically
proves that it satisfies the A-specification and that the
identification is legitimate.  

(I'm also willing to argue that when a client implements operations
that *use* an ADT, rather than *create* an ADT, understanding the
specifications of the client's operations as A-specifications is much
more natural than understanding them as E-specifications.  Finally,
I've considered a specification notion that permitted mixtures of A-
and E-specifications, and that became a terrific mess.)

(Final digression: Lots of super-complex specification notations try
to capture the notion of "mathematics that comes for free" in terms
of parameterized data types that define free functors, etc.  Despite
the theoretical/theological interest of that stuff, I'm not persuaded
that all that complexity buys enough to be worthwhile.  I think a
notation like Larch/Ada accepts certain theoretical flaws as the price
of being actually usable.)

- David

(Feel free, of course, to pass this message along to anyone who might
be interested.)