Re: [mizar] Re: proof objects for mizar: already available?
On 10/9/11, Jesse Alama <jesse.alama@gmail.com> wrote:
>>>> I totally agree that having detailed proof objects for all of MML
>>>> would be a great resource for study and data-mining of mathematical
>>>> proofs. The ATP path to this turned out to be very cheap once I had
>>>> practically complete ATP export and strong ATP methods.
>>>>
>>>> Doing it the hard way inside Mizar is certainly possible (I have
>>>> partially done it in 2000), but it will be a serious amount of work on
>>>> Mizar, and even if you do it, it will be in a constant danger of
>>>> becoming obsolete by later re-implementations of parts of Mizar. Bill
>>>> McCune told me in 2004 that the detailed Otter proof objects were so
>>>> much added code that it led him to reimplement the whole thing as
>>>> Prover9. My experience from 2000 with the Mizar kernel was similar: it
>>>> was a large blow-up of the code, and a cleaner complete rewrite would
>>>> be needed.
>>>
>>> I agree that it probably would be a rather complex project to have bona
>>> fide proof objects for mizar. In addition to the problem of exposing
>>> "by"/"from" steps, there remains the problem of exporting a mizar proof
>>> -- even taking for granted that its "by" and "from" steps can be filled
>>> in some acceptable way, treating them as black boxes (or as axioms, in
>>> a natural deduction/sequent calculus context) -- into a vanilla natural
>>> deduction format. I think there are lots of design problems here. For
>>> example, consider the text fragment:
>>>
>>> set X = the set;
>>> theorem X = X;
>>>
>>> Do we translate this as the universal formula:
>>>
>>> for X : set (X = the set --> X = X)
>>>
>>> Or as the equation:
>>>
>>> the set = the set
>>
>> There are indeed various choice points when translating from Mizar to
>> pure first-order logic, but their default solutions have been provided
>> in MPTP since 2005. See
>> http://www.springerlink.com/content/kw9075426q26pu32/ . "set foo =
>> bar" is just a syntactic macro in Mizar, so at the Mizar semantic
>> layer this is already "the set = the set"; there is not really much
>> of a choice here. The real choice comes when translating "the"
>> (Hilbert's epsilon) to
>> first-order logic. This is not described in the paper above, because
>> "the" came later in Mizar (2009?). But its default translation is very
>> similar to the translation of Fraenkel terms (de-anonymization).
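For illustration, the de-anonymized translation of the fragment could look roughly like this in TPTP. (The predicate and constant names here are made up for illustration; they are not actual MPTP output, and the real encoding adds proper type guards.)

```tptp
% "the set" is de-anonymized into a fresh constant c1_the_set,
% with an axiom fixing its type, analogously to Fraenkel terms:
fof(dt_c1_the_set, axiom, is_a_set(c1_the_set)).

% The theorem "X = X", after macro expansion of "set X = the set;":
fof(t1, theorem, c1_the_set = c1_the_set).
```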
>
> Well, semantically the fragment
>
> set X = the set;
> X = X
>
> may simply be the equation "the set = the set", but it isn't the case
> that one can simply eliminate all 'set' statements from the MML by
> substituting the right-hand sides for free occurrences of the left-hand
> side.
This is exactly what Mizar macro expansion does. I can imagine that
this is done in a peculiar way in some cases, and that if you do it
yourself, some things might break at the parsing level. It is even
possible that some macros are sometimes used as hints, for example in
scheme inferences and type analysis. But things like notational
overloading, parser limits, etc., do not live in TPTP, and I am quite
happy not to emulate them there, focusing on the semantics instead.
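To illustrate what uniform substitution amounts to, here is a toy textual sketch. (This is nothing like the actual Mizar implementation, which expands macros on the parsed article, respects scoping and overloading, and so on; the function name and the regular expressions are mine.)

```python
import re

def expand_set_macros(text):
    """Toy model of Mizar-style macro expansion: collect each
    'set NAME = TERM;' line and substitute TERM for later
    whole-word occurrences of NAME.  Purely textual; real Mizar
    works on the parsed article, not on raw source."""
    macros = {}
    out_lines = []
    for line in text.splitlines():
        m = re.match(r"\s*set\s+(\w+)\s*=\s*(.+?);\s*$", line)
        if m:
            # Record the macro; the 'set' line itself disappears.
            macros[m.group(1)] = m.group(2)
            continue
        for name, rhs in macros.items():
            line = re.sub(r"\b%s\b" % re.escape(name), rhs, line)
        out_lines.append(line)
    return "\n".join(out_lines)
```

On Jesse's fragment this turns "set X = the set; theorem X = X;" into "theorem the set = the set;", i.e., exactly the equation at the semantic layer.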
> If we really were to have natural deductions for mizar proof
> objects, I'd prefer the approach of introducing a universal formula to
> deal with 'set' statements, because then one could have a label for the
> equation
>
> X = the set
>
> and obtain fine-grained information about where this equation is
> needed.
Then you are introducing new objects that do not semantically exist in Mizar.
> Simply uniformly plugging in the right-hand side for the
> left-hand side would, in general, produce unreadable proofs.
But this is a different issue of proof presentation, which in the case
of Mizar proofs is achieved by macros, just as it is in C and Lisp
with their macros.
> I'd like
> ideal proof objects which don't yet exist to record this information.
> Working with "semantic mizar proofs" (for lack of a better term) --
> ones where macro substitution is absent because it has already been
> carried out -- is (at least) one step too far, to my taste, since we've
> lost some information in the mizar text.
You will not get there by smuggling macro mechanisms into the semantic
layer. It seems that what you want is full documentation of Mizar's
notational mechanisms. Good luck with that. It is much more than I
care about for semantic applications like ATP.
>
>>
>> To sum up: there is a default TPTP translation both of the Mizar
>> natural deduction proof format and of the Mizar extensions over
>> first-order logic, and both have been around and productively used for
>> a couple of years now.
>
> From my point of view, this is just the first step. To my knowledge,
> we still lack even one bona fide natural deduction proof -- consisting
> exclusively of axioms of set theory, equality, and introduction and
> elimination of connectives -- of any (non-trivial) mizar theorem.
> Would you agree, or am I missing something?
The Prover9 proof objects use the cut rule and equality rewriting, and
Jaskowski-style ND is not exactly your vanilla Gentzen ND. But it does
not look like a big deal to get the latter from the former.
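For illustration, removing the cut steps is just proof composition. (This is the standard textbook observation, not a description of Prover9's actual output format.)

```latex
% Cut, as used in resolution-style proof objects:
%   from  Gamma |- A   and   Gamma, A |- B   conclude   Gamma |- B.
\[
\frac{\Gamma \vdash A \qquad \Gamma, A \vdash B}
     {\Gamma \vdash B}\;(\mathrm{cut})
\]
% In natural deduction no new rule is needed: graft the given
% derivation of A onto every use of the open assumption A in the
% derivation of B.
```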
>
>>
>> But I'd be happy to see a continuation of the 2008 cross-verification
>> paper that would attempt ATP-based cross-verification of the full MML
>> (not just large random parts of it). Based on that (on the full proof
>> objects), it would be probably fairly easy to import MML into HOL
>> Light, Isabelle, Coq, ..., and that could in turn trigger further work
>> on communication between the libraries of these systems.
>
> Portability of the library is certainly one ripe fruit of such an
> export. My own interest is in foundations. There are all sorts of
> experiments that I'd like to carry out on a library of natural
> deduction proofs of genuine mathematical results, which, as far as I
> can see, I can't carry out with the TPTP derivation notation. (I'm
> happy to concede that some of this is simply due to my ignorance of the
> details of TPTP derivations.)
Well, then transform it to the ND format that you need :-).
Josef
>
> --
> Jesse Alama
> http://centria.di.fct.unl.pt/~alama/