We address interpretability, that is, the ability of machines to explain their reasoning. We formalize it for the task of textual similarity as graded typed alignment between two sentences. We release an annotated dataset, and we build and evaluate a high-performing system that computes such alignments. We show that the output of the system can be used to produce explanations of the similarity judgments. Two user studies provide preliminary evidence that these explanations help humans perform better.