We present examples of inliers and outliers acquired using our pairing method (Sec. 3.5 in the paper).
[Image table, example 1 — columns: Source Image | Inliers | Outliers]
Rejected images:
[Image grid of rejected images]
[Image table, example 2 — columns: Source Image | Inliers | Outliers]
Rejected images:
[Image grid of rejected images]
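For illustration only, below is a minimal sketch of one plausible way to produce such inlier/outlier splits: ranking candidate images by the cosine similarity of their global DINO-ViT `[CLS]` embeddings to the source image and thresholding. The hub model, the preprocessing, the `split_pairs` helper, and the threshold value are illustrative assumptions, not the exact procedure of Sec. 3.5.

```python
# Illustrative sketch (not the paper's exact pairing procedure):
# split candidate images into inliers/outliers by cosine similarity
# of global DINO-ViT [CLS] embeddings to a source image.
import torch
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Official DINO torch.hub entry point; ViT-S/16 returns a 384-d [CLS] feature.
dino = torch.hub.load("facebookresearch/dino:main", "dino_vits16").to(device).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

@torch.no_grad()
def cls_embedding(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    return dino(img).squeeze(0)  # DINO ViT forward() returns the [CLS] feature

def split_pairs(source_path, candidate_paths, thresh=0.5):
    """Return (inliers, outliers) as lists of (path, similarity). Threshold is an assumption."""
    src = cls_embedding(source_path)
    inliers, outliers = [], []
    for p in candidate_paths:
        sim = torch.nn.functional.cosine_similarity(src, cls_embedding(p), dim=0).item()
        (inliers if sim >= thresh else outliers).append((p, sim))
    return (sorted(inliers, key=lambda x: -x[1]),
            sorted(outliers, key=lambda x: -x[1]))
```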
We show results generated by SpliceNet (i) trained with dataset distillation and (ii) trained without dataset distillation. Evidently, the model trained with our distillation method transfers semantic regions in a more coherent manner.
We show results generated by SpliceNet (i) receiving the [CLS] token as input and (ii) receiving the appearance image as input (i.e., the CNN baseline). Evidently, the model conditioned on the [CLS] token transfers more complex textures (e.g., fur, or different colors in different regions) than the CNN baseline.
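For illustration, the following sketch contrasts the two conditioning variants compared above: a conditioning vector derived from a frozen ViT's global `[CLS]` token versus one produced by a small CNN encoder applied to the appearance image. The module names, dimensions, and layer choices are illustrative assumptions and do not reproduce SpliceNet's actual architecture.

```python
# Illustrative sketch of the two appearance-conditioning variants; all
# module names and sizes are assumptions, not SpliceNet's architecture.
import torch
import torch.nn as nn

class CLSCondition(nn.Module):
    """Variant (i): condition on a precomputed global [CLS] token from a frozen ViT."""
    def __init__(self, cls_dim=384, cond_dim=256):
        super().__init__()
        self.proj = nn.Linear(cls_dim, cond_dim)

    def forward(self, cls_token):            # (B, cls_dim), e.g. from DINO ViT-S/16
        return self.proj(cls_token)          # (B, cond_dim)

class CNNCondition(nn.Module):
    """Variant (ii): condition on the raw appearance image via a CNN encoder (baseline)."""
    def __init__(self, cond_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, cond_dim),
        )

    def forward(self, appearance_image):      # (B, 3, H, W)
        return self.encoder(appearance_image)  # (B, cond_dim)

# Both variants produce a conditioning vector a generator could consume
# (e.g. via modulation layers); only the source of appearance information differs.
cls_vec = torch.randn(2, 384)
img = torch.randn(2, 3, 224, 224)
print(CLSCondition()(cls_vec).shape, CNNCondition()(img).shape)
```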