Rendering synthetic objects into legacy photographs
A group of students from the University of Illinois at Urbana-Champaign has demonstrated a ground-breaking system that allows a user to realistically insert synthetic objects into legacy photographs quickly and easily.
The system requires that the user first annotate an image, marking out light sources and object boundaries with software-based tools. Once an image is annotated, the system accurately renders synthetic objects placed 'into' the photograph, accounting for lighting, shading, reflection and illumination, as well as individual object properties such as transparency and diffuse or specular materials.
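The compositing step in this kind of object insertion is commonly handled with differential rendering: the annotated scene is rendered twice, once with and once without the synthetic objects, and only the difference between the two renders is applied to the original photo, so that cast shadows and reflections carry over without replacing the photograph's own pixels. The sketch below is a simplified illustration of that idea under those assumptions, not the authors' code; the function and parameter names are our own.

    import numpy as np

    def composite_differential(background, render_with_objects,
                               render_without_objects, object_mask):
        # All inputs are float arrays in [0, 1] with the same HxWx3 shape;
        # object_mask is HxWx1, 1.0 where a synthetic object covers a pixel.
        #
        # Where the object is visible, take the rendered pixel directly.
        # Everywhere else, add the difference between the two renders to the
        # original photograph, which transfers the object's cast shadows and
        # reflections onto the scene.
        diff = render_with_objects - render_without_objects
        composite = (object_mask * render_with_objects
                     + (1.0 - object_mask) * (background + diff))
        return np.clip(composite, 0.0, 1.0)

In practice the two renders would come from a physically based renderer driven by the user's annotated geometry and light sources; the difference term is what lets the inserted object visibly affect the photograph beyond its own silhouette.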
The system has obvious applications in the movie and gaming industries, as well as in interior design and user content creation. The video below speaks for itself.
Comments
Very cool! I am assuming the objects would be modelled in a different program and inserted? How are the new materials scaled and degraded to match the original image quality? I am interested to follow the development of this program and see the far-reaching effects that it can and will have.
Great post.