Freditor: High-Fidelity and Transferable NeRF Editing by Frequency Decomposition

ECCV 2024


1Alibaba Group
2Fudan University 3The University of Texas at Austin

Abstract


Freditor enables high-fidelity, transferable NeRF editing by frequency decomposition. Recent NeRF editing pipelines lift 2D stylization results to 3D scenes, but they suffer from blurry results and fail to capture detailed structures because the 2D edits are inconsistent across views. Our critical insight is that the low-frequency components of images are more multiview-consistent after editing than their high-frequency parts. Moreover, the appearance style is exhibited mainly in the low-frequency components, while the content details reside especially in the high-frequency parts. This motivates us to perform editing on the low-frequency components, which results in high-fidelity edited scenes. In addition, the editing is performed in the low-frequency feature space, enabling stable intensity control and transfer to novel scenes. Comprehensive experiments on photorealistic datasets demonstrate superior performance in high-fidelity and transferable NeRF editing.

Methodology

Our pipeline comprises two primary branches. The high-frequency branch, reconstructed from the multiview images, preserves view-consistent scene details. The low-frequency branch filters the low-frequency components out of the full scene feature fields, performs the style transfer, and decodes both the original and the edited low-frequency images. Finally, the high-frequency details are reintegrated into the edited low-frequency image, yielding a high-fidelity edited scene.
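The decompose-edit-recompose idea can be sketched in image space. This is a minimal illustration only: the paper performs the editing in a low-frequency *feature* space, and the color scaling below is a hypothetical stand-in for an actual 2D stylizer.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    """2D Gaussian kernel used as the low-pass filter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def low_pass(img, kernel):
    """Per-channel 2D convolution with edge padding (naive, for clarity)."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(img)
    h, w, _ = img.shape
    k = kernel.shape[0]
    for c in range(img.shape[2]):
        for i in range(h):
            for j in range(w):
                out[i, j, c] = np.sum(padded[i:i + k, j:j + k, c] * kernel)
    return out

# A rendered view (random stand-in for an actual NeRF rendering).
rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))

kernel = gaussian_kernel()
low = low_pass(img, kernel)   # low frequency: style lives here, more view-consistent
high = img - low              # high frequency: view-consistent content details

# Hypothetical stand-in for a 2D style edit, applied only to low frequencies.
low_edited = np.clip(low * np.array([1.1, 1.0, 0.9]), 0.0, 1.0)

edited = low_edited + high    # reintegrate the untouched details
```

Because the high-frequency detail term is reconstructed once from the multiview images and never passed through the 2D editor, the cross-view inconsistency of the edits is confined to the smooth low-frequency component.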

Video

Results

We enable high-fidelity NeRF editing (top four rows) and transferable NeRF editing (bottom three rows).

We enable intensity control of the edited NeRF during network inference.
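Because the edit lives entirely in the low-frequency component, its strength can be controlled at inference time by interpolating between the original and edited low-frequency parts before adding the details back. A minimal sketch, assuming image-space components (the paper performs this blending in the low-frequency feature space):

```python
import numpy as np

def controlled_edit(low_orig, low_edited, high, alpha):
    """Blend original and edited low frequencies, then restore shared details.
    alpha = 0 reproduces the original scene; alpha = 1 is the full edit."""
    return (1.0 - alpha) * low_orig + alpha * low_edited + high

# Toy low/high components (stand-ins for the paper's feature-space quantities).
rng = np.random.default_rng(1)
low_orig = rng.random((8, 8, 3))
low_edited = np.clip(low_orig + 0.2, 0.0, 1.0)
high = rng.normal(scale=0.05, size=(8, 8, 3))

original = controlled_edit(low_orig, low_edited, high, alpha=0.0)
half = controlled_edit(low_orig, low_edited, high, alpha=0.5)
full = controlled_edit(low_orig, low_edited, high, alpha=1.0)
```

Since the high-frequency details are identical at every `alpha`, the interpolation stays sharp at all intensities instead of blurring intermediate results.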

BibTeX

@article{He2024freditor,
    title   = {Freditor: High-Fidelity and Transferable NeRF Editing by Frequency Decomposition},
    author  = {He, Yisheng and Yuan, Weihao and Zhu, Siyu and Dong, Zilong and Huang, Qixing and Bo, Liefeng},
    journal = {arXiv preprint},
    year    = {2024}
}