
Re: [mizar] Re: Urelements?



There is an obvious disadvantage to the 'let us try something else' approach:
if the inference is incorrect, then trying different methods to check it results in slow processing. We are at a disadvantage while writing; once the article is ready, it no longer seems to be a problem.

However, utilities such as RELPREM systematically check 'incorrect' inferences,
so they will be quite slow.

Also, when we revise the MML, we will get some incorrect inferences, sometimes many
of them.

We already observe this with resolution. It is now handled in a similar way: if an inference is not classically obvious, try resolution! So, for every inference that turns out
not to be obvious, the system tries resolution.
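
For illustration, here is a minimal sketch of that fallback scheme in Python; the helpers classically_obvious and try_resolution are hypothetical stand-ins for the checker's internal procedures, not actual Mizar code:

def classically_obvious(premises, conclusion):
    """Hypothetical stand-in for the classical obviousness test."""
    return False

def try_resolution(premises, conclusion):
    """Hypothetical stand-in for the resolution-based back end."""
    return False

def check_inference(premises, conclusion):
    # Fast path: accept if the inference is classically obvious.
    if classically_obvious(premises, conclusion):
        return True
    # Fallback: only non-obvious inferences reach the more expensive
    # resolution step, so an incorrect inference pays for both attempts.
    return try_resolution(premises, conclusion)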

Regards,
Andrzej


Quoting Jesse Alama <jesse.alama@gmail.com>:

I don't know exactly how the checker works, so this may be out of
line, but here is one simple idea that doesn't even use heuristics.
One could try a breadth-first approach to expansion: first don't do
any definitional expansion, and try running the inference through the
checker.  If that fails, uniformly expand once anything that could be
expanded, and retry the inference with a bigger premise set.  And so
forth.  That's how SAD, for example, handles definitions.  This
preserves the complete/exhaustive property of the checker.  This would
probably not deliver the big speedups that Artur finds when expansion
is switched off, but it might help nonetheless.  And with this
approach, one needn't introduce a new primitive type.
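
For concreteness, here is a minimal sketch of the breadth-first expansion loop described in the quoted message; checker_accepts and expand_definitions_once are hypothetical placeholders, not part of the actual Mizar checker:

def checker_accepts(premises, conclusion):
    """Hypothetical stand-in for one run of the ordinary checker."""
    return False

def expand_definitions_once(premises):
    """Hypothetical stand-in: unfold every defined notion by one level."""
    return premises

def check_with_iterative_expansion(premises, conclusion, max_depth=3):
    # Depth 0 does no definitional expansion; each further round uniformly
    # expands everything that can still be expanded, so the checker retries
    # the inference with a progressively bigger premise set.
    current = list(premises)
    for depth in range(max_depth + 1):
        if checker_accepts(current, conclusion):
            return True   # accepted at this expansion depth
        current = expand_definitions_once(current)
    return False          # expansion budget exhausted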