A person wants to experiment with some variation on extant material, such as changing a definition or deleting a rule of inference. Because the FDL tracks dependencies, they find it easy to discover which results are affected by those definitions or rules. They post the variant to the FDL, where it can coexist with the original.
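A minimal sketch of the kind of dependency tracking this scenario presumes (the object names and the `deps` table are hypothetical, not part of any actual FDL): each stored object records the objects it directly depends on, and the impact of a change is the set of objects that transitively depend on it.

```python
from collections import defaultdict, deque

# Hypothetical FDL objects: each maps to the names it directly depends on.
deps = {
    "thm:assoc":      ["def:plus", "rule:induction"],
    "thm:comm":       ["def:plus", "thm:assoc"],
    "def:plus":       ["def:nat"],
    "def:nat":        [],
    "rule:induction": [],
}

def affected_by(changed, deps):
    """Return every object whose proof may break when `changed` is edited,
    i.e. all objects that transitively depend on it."""
    rdeps = defaultdict(set)          # reverse dependency edges
    for obj, ds in deps.items():
        for d in ds:
            rdeps[d].add(obj)
    seen, queue = set(), deque([changed])
    while queue:
        cur = queue.popleft()
        for dependent in rdeps[cur]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(affected_by("def:plus", deps)))  # → ['thm:assoc', 'thm:comm']
```

A reverse traversal like this is what lets a person see, before posting a variant definition, exactly which theorems would need rechecking.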

Transplanting. A person wants to try some proofs in a context other than the one they were developed in; perhaps they want to "transplant" some results from constructive type theory to a subset of higher-order logic. Variants are built by replacing expressions and lemma citations specific to the old context with purported analogs from the new context, and the proofs are rerun (we think of this as transplanting the roots and seeing what thrives). Unsurprisingly, many of the proof variants check in the new context, since they used rather high-level forms of reasoning. Surprisingly and informatively, a few proofs depended on some unexpected detail, and these are studied.
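The transplanting step can be sketched as follows. This is a toy model, with made-up citation names: real rechecking would replay the proof's inferences, whereas here "checking" is reduced to whether every rewritten citation resolves in the new context.

```python
# Hypothetical mapping from old-context citations to purported analogs;
# None marks a lemma with no analog in the new context.
old_to_new = {
    "ctt:nat_ind":  "hol:nat_induction",
    "ctt:add_comm": "hol:plus_comm",
    "ctt:pi_intro": None,
}

new_context = {"hol:nat_induction", "hol:plus_comm"}

def transplant(proof_citations):
    """Replace each citation with its analog; None if any analog is missing
    (dict.get also yields None for citations absent from the map)."""
    new_cites = []
    for c in proof_citations:
        analog = old_to_new.get(c)
        if analog is None:
            return None
        new_cites.append(analog)
    return new_cites

def recheck(proof_citations):
    """A transplanted proof 'checks' only if every citation resolves."""
    cites = transplant(proof_citations)
    return cites is not None and all(c in new_context for c in cites)

print(recheck(["ctt:nat_ind", "ctt:add_comm"]))  # → True
print(recheck(["ctt:pi_intro"]))                 # → False
```

The proofs that fail here are precisely the interesting ones: they pinpoint which old-context details the argument secretly relied on.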

A programmer wants evidence that a program they intend to download works and can be applied to their data. This might be direct evidence such as a proof, or it might be indirect: a certification that a proprietary proof exists for a program whose source is unavailable (only the I/O specifications and compiled code are available; the source and proof, though in the FDL, are withheld from this programmer). In this case the person must understand which logic the proof employs, and be persuaded that the FDL can be trusted to have adhered to those proof methods.

A commercial concern has developed a package of programs whose sources it wants to keep secret. How can it assure customers of facts about the programs? There must be a trusted, impartial third party to certify the claims. This party would be an FDL process trusted to implement its published policies for certifying proofs. The commercial concern would maintain its own private FDL of source code, object code, and proofs of correctness. The impartial FDL process would certify it by employing a public logic: it uploads the source code, object code, and proofs, then itself checks the inferences with the public inference engine. If the check succeeds, it deletes the source code and proof, and creates a certificate that refers to the object code, to the statement of correctness, and to the public inference engine, claiming that there once was a proof of the statement about the object code which the impartial FDL checked (then deleted).
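This certification protocol might be sketched as below. Everything here is illustrative: `public_engine_checks` is a stand-in for a real public inference engine, and the certificate format is invented; the point is only that after a successful check, the certificate retains references to the public artifacts (here, a hash of the object code) while the source and proof are discarded.

```python
import hashlib

def public_engine_checks(source, proof, statement):
    # Stand-in for the public inference engine; a real engine would
    # replay the proof's inferences against the statement.
    return proof.endswith("QED")

def certify(source, object_code, proof, statement, engine_id):
    """The impartial FDL process: check, then delete source and proof,
    keeping only a certificate about the object code."""
    if not public_engine_checks(source, proof, statement):
        return None
    # Source and proof are not retained; the certificate ties the claim
    # to the object code, the statement, and the engine that checked it.
    return {
        "object_code_hash": hashlib.sha256(object_code).hexdigest(),
        "statement": statement,
        "engine": engine_id,
        "claim": "a proof of this statement about this object code "
                 "was checked by the named engine, then deleted",
    }

cert = certify(b"secret source", b"\x7fELF...", "...steps...QED",
               "output is sorted", "public-engine-v1")
print(cert is not None)  # → True
```

A customer who trusts the FDL process and the named public engine can accept the certificate without ever seeing the source or the proof.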

It is discovered that an inference engine supplied by a group is defective. All certifications of proofs developed with that engine can be located and discounted. If the engine is repaired, the inferences that depended on the old version can be retried with the new one.
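This recovery scenario works because certificates record which engine produced them. A small sketch, with hypothetical certificate records and a `recheck` callback standing in for rerunning the repaired engine:

```python
# Hypothetical certificate records; each names its producing engine.
certificates = [
    {"proof": "thm:assoc", "engine": "engine-v1", "status": "valid"},
    {"proof": "thm:comm",  "engine": "engine-v2", "status": "valid"},
    {"proof": "thm:distr", "engine": "engine-v1", "status": "valid"},
]

def discount(certs, defective_engine):
    """Locate and mark every certification made by the defective engine."""
    hit = [c for c in certs if c["engine"] == defective_engine]
    for c in hit:
        c["status"] = "discounted"
    return [c["proof"] for c in hit]

def retry(certs, repaired_engine, recheck):
    """Recheck discounted inferences with the repaired engine."""
    for c in certs:
        if c["status"] == "discounted" and recheck(c["proof"]):
            c["engine"], c["status"] = repaired_engine, "valid"

print(discount(certificates, "engine-v1"))  # → ['thm:assoc', 'thm:distr']
retry(certificates, "engine-v1.1", recheck=lambda p: True)
print([c["status"] for c in certificates])  # → ['valid', 'valid', 'valid']
```

Note that certificates produced by other engines are untouched; only the suspect lineage is discounted and retried.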

A person has doubts about whether a particular inference method is correct, then happily discovers that someone else had the same doubts and has rechecked all the inferences made by the suspect method with an independent inference verifier. This is possible because a single inference can have attached to it multiple certificates of verification. 
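The enabling data structure is simply a one-to-many association from inferences to certificates, so an independent recheck can sit alongside the original. A sketch with invented verifier names:

```python
# Hypothetical: one inference, several certificates from distinct verifiers.
certificates = {
    "inf:step-17": [
        {"verifier": "suspect-method",      "result": "ok"},
        {"verifier": "independent-checker", "result": "ok"},
    ],
    "inf:step-18": [
        {"verifier": "suspect-method", "result": "ok"},
    ],
}

def independently_verified(inference, distrusted):
    """True if some verifier other than the distrusted one certified it."""
    return any(c["verifier"] != distrusted and c["result"] == "ok"
               for c in certificates.get(inference, []))

print(independently_verified("inf:step-17", "suspect-method"))  # → True
print(independently_verified("inf:step-18", "suspect-method"))  # → False
```

The doubter's query is then a lookup, not a recomputation: the independent verification, once recorded, answers everyone who later shares the same doubt.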