Asset production in the game industry is time-consuming, and since the release of “The Vanishing of Ethan Carter,” photogrammetry has gained traction. While assets produced by photogrammetry achieve incredible detail, the illumination is baked into the texture maps. This makes the assets inflexible and limits their use in games and movies without manual post-processing.

In this talk, I will present our recent work on decomposing an object into its shape, reflectance, and illumination. This highly ill-posed problem becomes even more challenging when the illumination is not a single light source under laboratory conditions but unconstrained environmental illumination. Decomposing an object under this ambiguous setup enables the automated creation of relightable 3D assets from online images for AR/VR applications, enhanced shopping experiences, games, and movies.

Specifically, I will present our recent methods for reflectance decomposition using Neural Fields. Our methods build a neural volumetric reflectance decomposition from unconstrained image collections. Contrary to most recent works, which require images captured under the same illumination, our input images are taken under varying illumination. This practical setup enables the decomposition of images gathered from online searches and the automated creation of relightable 3D assets. Our techniques handle complex geometries with non-Lambertian surfaces, and we also extract 3D meshes with material properties from the learned reflectance volumes, enabling their use in existing graphics engines. Our most recent method additionally enables the decomposition of unposed image collections: most reconstruction methods require posed collections, yet common pose recovery methods fail under highly varying illuminations or capture locations.
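To make the ambiguity concrete, the decomposition inverts an image formation model of roughly the following form, in which the observed outgoing radiance L_o at a surface point x in direction omega_o couples the reflectance (BRDF) f_r with the incident illumination L_i:

$$
L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
$$

Because only L_o is observed in the input images, many pairs of f_r and L_i reproduce the same pixels; observing the object under several different illuminations constrains which explanations remain plausible.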