Light-field microscopy (LFM) captures 3D biological dynamics in single 2D snapshots but suffers from limited resolution and artifacts during 2D-to-3D inversion. Here, we introduce light-field meta neural representation (LFMNR), a self-supervised paradigm that combines a physics-informed light-field implicit neural representation (LFINR) with meta-learning for high-quality 3D reconstruction in Fourier-LFM. By developing a physics-based hybrid-rendering model, LFINR achieves artifact-free light-field reconstruction with enhanced spatial resolution (>1.4-fold improvement). In addition, integrating meta-learning with a progressive sampling strategy mitigates INRs' intrinsic limitation of slow reconstruction caused by scene-specific optimization, enabling a ~100-fold acceleration in representing consecutive volumes and facilitating the visualization of sustained 3D dynamics. Together, these advances enable LFMNR to deliver superior imaging with high spatiotemporal resolution and low phototoxicity, as demonstrated by capturing instantaneous voltage signals in C. elegans at 100 volumes per second and recording 6,000 time points of organelle dynamics over 25 hours.
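To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' LFMNR implementation) of the two ingredients the abstract names: a coordinate-based implicit neural representation with a sinusoidal activation, and a Reptile-style meta-learned initialization so that each new "volume" can be fitted in only a few gradient steps. The toy 1D sinusoid tasks, network sizes, and learning rates are all illustrative assumptions standing in for per-volume 3D intensity fields.

```python
import numpy as np

# Illustrative sketch only: a tiny coordinate-based INR trained with a
# Reptile-style meta-update. All names, sizes, and the toy 1D "volumes"
# are hypothetical and do not reproduce LFMNR's actual model or physics.

rng = np.random.default_rng(0)

def init_params(hidden=32):
    # Two-layer coordinate network mapping position -> intensity.
    return {
        "W1": rng.normal(0, 1.0, (1, hidden)), "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, 1)), "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.sin(x @ p["W1"] + p["b1"])  # sinusoidal activation (SIREN-like)
    return h @ p["W2"] + p["b2"]

def sgd_step(p, x, y, lr=1e-2):
    # Manual backprop for the 2-layer net under mean-squared-error loss.
    h_pre = x @ p["W1"] + p["b1"]
    h = np.sin(h_pre)
    pred = h @ p["W2"] + p["b2"]
    g_pred = 2 * (pred - y) / len(x)
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = (g_pred @ p["W2"].T) * np.cos(h_pre)
    gW1, gb1 = x.T @ g_h, g_h.sum(0)
    return {
        "W1": p["W1"] - lr * gW1, "b1": p["b1"] - lr * gb1,
        "W2": p["W2"] - lr * gW2, "b2": p["b2"] - lr * gb2,
    }

def mse(p, x, y):
    return float(np.mean((forward(p, x) - y) ** 2))

def sample_task():
    # A toy 1D "volume": a random sinusoid standing in for an intensity field.
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    x = rng.uniform(-1, 1, (64, 1))
    return x, amp * np.sin(3 * x + phase)

# Reptile meta-training: nudge the shared init toward each task's adapted weights.
meta = init_params()
for _ in range(200):
    x, y = sample_task()
    adapted = meta
    for _ in range(10):
        adapted = sgd_step(adapted, x, y)
    meta = {k: meta[k] + 0.1 * (adapted[k] - meta[k]) for k in meta}

# For a new "volume", only a few gradient steps from the meta-init are needed.
x, y = sample_task()
p = meta
before = mse(p, x, y)
for _ in range(10):
    p = sgd_step(p, x, y)
after = mse(p, x, y)
```

The meta-update here (interpolating the initialization toward task-adapted weights) is the simplest first-order variant of meta-learning; it illustrates why consecutive, similar volumes can share a warm start and thus fit far faster than training each INR from scratch.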