Rendering synthetic objects into legacy photographs
A group of students from the University of Illinois at Urbana-Champaign has demonstrated a ground-breaking system that allows a user to quickly and realistically insert synthetic objects into legacy photographs.
The system requires that the user first annotate an image, marking light sources and object boundaries with software-based tools. Once an image is annotated, the system accurately renders synthetic objects placed ‘into’ the photograph, accounting for lighting, shading, and reflections, as well as per-object material properties such as transparency, diffuseness, and specularity.
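The article doesn't spell out the compositing math, but work like this typically relies on differential rendering: the local scene is rendered twice, once with and once without the synthetic object, and the difference (shadows, bounced light, reflections) is added onto the original photo. Below is a minimal sketch of that idea under that assumption; all function and variable names are illustrative, not from the actual system.

```python
# A minimal sketch of differential-rendering compositing, a common way to
# insert a rendered object into a photograph. All names are illustrative.

def composite_pixel(photo, with_obj, without_obj, mask):
    """Composite one pixel value (floats in [0, 1]).

    photo       -- original photograph pixel
    with_obj    -- rendered local scene *with* the synthetic object
    without_obj -- rendered local scene *without* the object
    mask        -- 1.0 where the object covers the pixel, else 0.0
    """
    # Where the object is visible, use the rendered object directly;
    # elsewhere, add the rendered *change* (shadows, reflections,
    # bounced light) on top of the original photo.
    return mask * with_obj + (1.0 - mask) * (photo + (with_obj - without_obj))

def composite_image(photo, with_obj, without_obj, mask):
    """Apply the per-pixel rule to flat lists of pixel values."""
    return [
        min(max(composite_pixel(p, w, e, m), 0.0), 1.0)  # clamp to [0, 1]
        for p, w, e, m in zip(photo, with_obj, without_obj, mask)
    ]
```

Note how a pixel outside the object mask where the object casts a shadow (rendered darker with the object than without it) ends up darker than in the original photo, which is exactly the effect the video shows.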
The system has obvious applications in the movie and gaming industries, as well as in interior design and user content creation. The video below speaks for itself.
Comments
Very cool! I am assuming the objects would be modeled in a different program and inserted? How are the new materials scaled/degraded to match the original image quality? I am interested to follow the development of this program and see the far-reaching effects that it can and will have.