by Staff Writers Providence RI (SPX) Aug 13, 2014
We may not be able to control the weather outside, but thanks to a new algorithm being developed by Brown University computer scientists, we can control it in photographs. The new program enables users to change a suite of "transient attributes" of outdoor photos - the weather, time of day, season, and other features - with simple, natural language commands. To make a sunny photo rainy, for example, just input a photo and type "more rain." A picture taken in July can be made to look a bit more like January simply by typing "more winter." All told, the algorithm can edit photos according to 40 commonly changing outdoor attributes.

The idea behind the program is to make photo editing easy for people who might not be familiar with the ins and outs of complex photo editing software. "It's been a longstanding interest of mine to make image editing easier for non-experts," said James Hays, Manning Assistant Professor of Computer Science at Brown. "Programs like Photoshop are really powerful, but you basically need to be an artist to use them. We want anybody to be able to manipulate photographs as easily as you'd manipulate text."

A paper describing the work will be presented next week at SIGGRAPH, the world's premier computer graphics conference. The team is continuing to refine the program and hopes to release a consumer version soon. The paper is available at http://transattr.cs.brown.edu/. Hays's coauthors on the paper were postdoctoral researcher Pierre-Yves Laffont and Brown graduate students Zhile Ren, Xiaofeng Tao, and Chao Qian.
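To make the command style concrete, here is a minimal sketch - not the Brown team's code - of how a text command such as "more rain" might be parsed into one of the transient attributes plus a direction. The attribute list shown is a small subset, and the function name and strength values are hypothetical.

    # Hypothetical sketch of parsing "<more|less> <attribute>" commands.
    TRANSIENT_ATTRIBUTES = {"rain", "winter", "sunny", "fog", "gloomy", "calm"}

    def parse_command(command: str):
        """Split a command like 'more rain' into (attribute, strength)."""
        direction, _, attribute = command.strip().lower().partition(" ")
        if direction not in {"more", "less"} or attribute not in TRANSIENT_ATTRIBUTES:
            raise ValueError(f"unrecognized command: {command!r}")
        # Positive strength increases the attribute, negative decreases it.
        strength = 1.0 if direction == "more" else -1.0
        return attribute, strength

    # Example: ('rain', 1.0) would then drive the color transforms.
    print(parse_command("more rain"))

In a real system, the resulting attribute and strength would select and scale the learned color transforms described below.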
Editing by machine learning

To start the project, Hays and his team defined a list of transient attributes that users might want to edit. They settled on 40 attributes that range from the simple - cloudy, sunny, snowy, rainy, or foggy - to the subjective - gloomy, bright, sentimental, mysterious, or calm.

The next step was to teach the algorithm what these attributes look like. To do that, the researchers compiled a database consisting of thousands of photos taken by 101 stationary webcams around the world. The cameras took pictures of the same scenes in varying conditions - different times of day, different seasons, and all kinds of weather. The researchers then asked workers on Mechanical Turk - a crowdsourcing marketplace operated by Amazon - to annotate more than 8,000 photos according to which of the 40 attributes are present in each. Those annotated photos were then fed through a machine learning algorithm. "Now the computer has data to learn what it means to be sunset or what it means to be summer or what it means to be rainy - or at least what it means to be perceived as being those things," Hays explained.

Armed with the knowledge of what each attribute looks like, the algorithm can apply that knowledge to new photos. It does so by making what Hays refers to as "local color transforms." It splits the picture into regions - clusters of pixels - and draws on the database to determine how colors in those regions should change to express a given attribute. "If you wanted to make a picture rainier, the computer would know that parts of the picture that look like sky need to become grayer and flatter," Hays explained. "In regions that look like ground, the colors become shinier and more saturated. It does this for hundreds of different regions in the photo." A minimal sketch of this region-by-region idea appears below.

The results are convincing. In a lab study, the researchers asked participants to rate the manipulated photos on how well they expressed given attributes. The participants preferred the new results around 70 percent of the time over the output of traditional automated editing approaches, which apply more uniform color changes across the entire photo.

There are limits to what the program can do at this point, however. It can't reproduce attributes that require new structures to be added to the photo. "We can't turn winter into summer generally, because that would involve adding structure - putting grass where there's snow," Hays said. "We can't synthesize that detail at this point."

Nonetheless, Hays says he's pleased that advances in his field of computer vision have helped make this kind of application possible. "To be able to manipulate an image better, you need to be able to understand the image better - to understand the material objects in the image and the boundaries of those objects," he said. "All the progress in computer vision helps us do these things, and enables this progress in image editing."
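The sketch below, assuming only NumPy, illustrates the local color transform idea in its simplest form: each region of the image gets its own affine color adjustment rather than one global change. A fixed grid stands in for the paper's learned pixel clusters, and the per-region gains and biases are invented placeholders; the actual system looks these up from webcam frames that match the requested attribute.

    import numpy as np

    def local_color_transform(image: np.ndarray, grid: int = 8) -> np.ndarray:
        """Apply a separate affine color transform to each grid region.

        image: float array of shape (H, W, 3) with values in [0, 1].
        """
        out = image.copy()
        h, w, _ = image.shape
        rng = np.random.default_rng(0)  # placeholder for learned parameters
        for i in range(grid):
            for j in range(grid):
                ys = slice(i * h // grid, (i + 1) * h // grid)
                xs = slice(j * w // grid, (j + 1) * w // grid)
                # Per-region gain and bias; a real system would derive these
                # from database photos expressing the target attribute.
                gain = 1.0 + 0.1 * rng.standard_normal(3)
                bias = 0.05 * rng.standard_normal(3)
                out[ys, xs] = np.clip(image[ys, xs] * gain + bias, 0.0, 1.0)
        return out

    # Example: a uniform gray test image, transformed region by region.
    result = local_color_transform(np.full((64, 64, 3), 0.5))

The point of the per-region structure is exactly what Hays describes: sky-like regions and ground-like regions can receive different color shifts for the same command, which a single global transform cannot do.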
Related Links Brown University