From lusk Mon Jun 20 00:57:30 1994
Received: from uqcspe.cs.uq.oz.au (uqcspe.cs.uq.oz.au [130.102.192.8]) by antares.mcs.anl.gov (8.6.4/8.6.4) with SMTP id AAA15302 for <qed@mcs.anl.gov>; Mon, 20 Jun 1994 00:53:43 -0500
Received: from everest.cs.uq.oz.au by uqcspe.cs.uq.oz.au 
	id <AA12504@uqcspe.cs.uq.oz.au>; Mon, 20 Jun 94 15:53:03 +1000
Received: by everest (5.0/SMI-SVR4)
	id AA05728; Mon, 20 Jun 1994 15:53:01 --1000
Date: Mon, 20 Jun 1994 15:53:00 +1000 (EST)
From: John Staples <staples@cs.uq.oz.au>
Subject: Re: Formalising variable bindings/Show Me
To: qed@mcs.anl.gov
Cc: staples@cs.uq.oz.au
Message-Id: <Pine.3.89.9406201516.A5672-0100000@everest>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Content-Length: 3324

In arguing that lambda calculus/HOL `completely' solves the problem of 
formalising variable bindings, I believe John Harrison is
overlooking the distinction between syntax and semantics.

Syntax:
------
Lambda calculus syntax provides only the simplest sort of quantifier
syntax - as John H. says, \x.t[x] - or at best, a small extension of
that to include a type for the bound variable.  Such a typed quantifier
syntax is easily definable (given a definition mechanism!) in any form
of set theory which has a Hilbert epsilon or iota quantifier (`some x
such that', or `the unique x such that').  I agree with John H. that some 
such Hilbert quantifier is a practical necessity.
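To make this concrete (my rendering, not from the original messages): the standard epsilon-calculus equivalences show how quantifiers, including typed ones, become definable once a Hilbert choice operator is present:

```latex
% Standard epsilon-calculus definitions of the quantifiers:
\exists x.\,P(x) \;\equiv\; P(\varepsilon x.\,P(x))
\qquad
\forall x.\,P(x) \;\equiv\; P(\varepsilon x.\,\neg P(x))
% A typed quantifier is then definable in turn, e.g.
(\forall x{:}T.\,P) \;\stackrel{\mathrm{def}}{=}\; \forall x.\,(x \in T \rightarrow P)
```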

Semantics:
---------
Yes, any reasonably expressive mathematical framework can define semantics
for a range of quantifiers, as in John H's example.  (First order) set 
theory can do that at least as comprehensively as higher order logic, since 
the latter is semantically a part of set theory.

But an adequate semantics is not enough for convenient practical use.
People want practical support, as provided by relevant syntax and syntax
checking.  

A simple example: suppose new quantifiers q are defined by declarations
of the form
		q x A = Term
What conditions on A, and on Term, are necessary and sufficient for q to
have an appropriate semantics and for the declaration to be a correct axiom 
scheme?  I believe a good answer is:

*	A is a metavariable.

*	Term is an extended term in the sense that occurrences of the 
	metavariable A (but no other metavariables) are allowed.

*	Term has no free occurrences of object variables (unless q has
	arguments - but here I'm looking at the simplest case, where
	q has no arguments).

*	Every occurrence of A in Term is within a binding of x, and is
	not within a binding of any other object variable.
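These conditions can be checked mechanically.  Here is a minimal sketch in Python for the simplest case described above (the term datatype and all names are my own invention, not part of any existing system):

```python
from dataclasses import dataclass

# Hypothetical object-level syntax: object variables, metavariables,
# applications, and a generic binder "bind x. body".
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Meta:
    name: str

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

@dataclass(frozen=True)
class Bind:
    var: str
    body: object

def admissible(term, meta='A', qvar='x', bound=()):
    """Check the conditions on Term for a declaration `q x A = Term`
    (simplest case, q without arguments): Term has no free object
    variables, contains no metavariable other than A, and every
    occurrence of A lies within a binding of x and of no other
    object variable."""
    if isinstance(term, Var):
        return term.name in bound          # no free object variables
    if isinstance(term, Meta):
        # A must occur under a binding of exactly the quantified variable.
        return term.name == meta and set(bound) == {qvar}
    if isinstance(term, App):
        return (admissible(term.fun, meta, qvar, bound) and
                admissible(term.arg, meta, qvar, bound))
    if isinstance(term, Bind):
        return admissible(term.body, meta, qvar, bound + (term.var,))
    return False
```

For instance, the body \x.(A x) is admissible, while a bare A (no binding of x) or \y.\x.A (an extra binding of y) is not.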

I think John H. would say something like the following (I'd be pleased
to be corrected):

	define the constant q by introducing the axiom
	q = \a.HTerm
	where a is an object level variable (not a meta variable)
	and where HTerm can be obtained from Term by replacing all
	occurrences of A by (a x).  Then, sugar q(\x.A[x]) to q x A.
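As a rough mechanical sketch of that construction (all names here are mine, and the naive string replacement is purely illustrative; a real implementation would of course work on abstract syntax):

```python
def hol_style_definition(term: str, meta: str = "A",
                         fun: str = "a", bvar: str = "x") -> str:
    """Build the right-hand side \\a.HTerm of the definition q = \\a.HTerm,
    where HTerm is Term with every occurrence of the metavariable `meta`
    replaced by the application (a x).  Illustrative only: plain string
    replacement would also hit an 'A' inside longer identifiers."""
    hterm = term.replace(meta, f"({fun} {bvar})")
    return f"\\{fun}.{hterm}"

# e.g. a quantifier body \x.~A yields the definition body \a.\x.~(a x)
```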

Here are two limitations of this latter approach.

*	It introduces lambda abstractions and applications of higher
	order terms, which must then be eliminated by lambda evaluation
	(which leads to a more complicated notion of unification) and by
	sugaring.

*	The use of A[x], rather than just A, in John H's \x.A[x] illustrates
	that his approach is less abstract.  That is because for each occurrence
	of a metavariable A, one must specify the object variables (x in the
	example) which are bound in the surrounding Term.  Consequently,
	for example, \x.A[x] does not (meta-) unify with the body of
	\y. \x. A[x,y].  In the approach I suggest, \x.A does unify with
	(indeed, is identical with) the body of \y. \x.A.  Thus, more
	powerful unification is achieved using a less powerful unification
	algorithm, merely by adopting a more direct formalisation of ordinary
	mathematical discourse.
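To make the contrast concrete, here is a minimal first-order matcher in the suggested style (my invented representation; alpha-conversion is deliberately ignored, since \x.A is literally identical with the body of \y.\x.A):

```python
# First-order terms as tuples: ('var', name), ('meta', name),
# ('app', fun, arg), ('bind', var, body).  A metavariable is solved
# by plain assignment, so matching needs no lambda evaluation.

def match(pattern, term, subst=None):
    """Match a pattern containing metavariables against a term;
    return a substitution dict, or None on failure."""
    subst = {} if subst is None else subst
    if pattern[0] == 'meta':
        prev = subst.setdefault(pattern[1], term)
        return subst if prev == term else None   # repeated metas must agree
    if pattern[0] != term[0]:
        return None
    if pattern[0] == 'var':
        return subst if pattern == term else None
    if pattern[0] == 'bind':
        if pattern[1] != term[1]:                # no alpha-conversion here
            return None
        return match(pattern[2], term[2], subst)
    if pattern[0] == 'app':
        s = match(pattern[1], term[1], subst)
        return None if s is None else match(pattern[2], term[2], s)
    return None

lam_x_A = ('bind', 'x', ('meta', 'A'))           # \x.A
outer   = ('bind', 'y', lam_x_A)                 # \y.\x.A

# \x.A matches the body of \y.\x.A immediately -- the two are identical.
```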

Bottom line:
-----------
Fascinating as this discussion is, QED does *not* depend on John H. agreeing
with me!  We need only to agree that QED be broad and flexible enough to
provide a framework in which different formalisms can interchange knowledge,
and can evolve.

John

