If you have ever wondered what your toddler will look like when he or she grows up, there may soon be an app for that. With new illumination-aware age progression software developed by Ira Kemelmacher-Shlizerman and her colleagues in CSE’s GRAIL Group, we now have the capability to generate remarkably accurate images of an individual’s face at multiple ages based on a single photograph.
The software, which runs on any standard computer, leverages thousands of photos gathered from the Internet to compute the average pixel arrangement of different parts of the face at various ages. It then applies the computed differences in shape and texture to the input photo. The software corrects for variations in lighting and facial expression, and it can generate age-progressed images up to age 80 in about 30 seconds.
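The core idea, averaging many faces per age group and transferring the difference between group averages onto a single input photo, can be sketched in a few lines. This is a heavily simplified toy illustration, not the GRAIL pipeline: the function name, the age-group labels, and the assumption that all images are already aligned are illustrative.

```python
import numpy as np

def age_progress(face, group_averages, src_group, dst_group):
    """Shift an aligned face image from one age group toward another by
    applying the difference between the groups' average faces.
    `face` and each average are HxWx3 float arrays with values in [0, 1]."""
    texture_delta = group_averages[dst_group] - group_averages[src_group]
    return np.clip(face + texture_delta, 0.0, 1.0)

# Toy example with uniform 2x2 "images" standing in for real averages.
averages = {
    "toddler": np.full((2, 2, 3), 0.6),
    "senior":  np.full((2, 2, 3), 0.4),
}
photo = np.full((2, 2, 3), 0.7)
aged = age_progress(photo, averages, "toddler", "senior")
# Every pixel shifts by the average difference: 0.7 + (0.4 - 0.6) = 0.5
```

The actual system additionally warps the face geometry and normalizes illumination before transferring texture differences; the sketch above captures only the "apply the average difference" step.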
“Aging the faces of very young children from a single photo is considered the most difficult of all scenarios,” noted Kemelmacher-Shlizerman. “Our method generates results so convincing that the majority of people who participated in our user studies could not distinguish between the age-progressed photo and the real one.”
Her work will do more than satisfy people’s curiosity about their future selves. It could also change the face of missing child cases by providing families and law enforcement with a more accurate tool for determining what victims look like years after their disappearance. According to Kemelmacher-Shlizerman, aging is one of many factors that interest her team, which also includes Supasorn Suwajanakorn and Steven M. Seitz.
“I am intrigued by the prospect of finding a representation of everyone in the world,” she said. “The massive amount of facial photos captured digitally presents exciting possibilities for the future of computer vision research.”
Cat got your tongue? Someday, it may not matter if you’re at a loss for words, because your brain may be able to communicate without them.
A team of UW researchers led by CSE professor Raj Rao recently replicated its groundbreaking 2013 experiment, which established a direct brain-to-brain connection between two people. The results of the latest, more comprehensive demonstration, which involved six people, were published this fall in the journal PLOS ONE.
In the demonstration, each pair of participants, a sender and a receiver, was separated into two different locations on the UW campus. The sender was connected to an electroencephalography (EEG) machine, which read the sender's brain activity and sent electrical pulses over the Web to the receiver, who wore a transcranial magnetic stimulation (TMS) coil positioned near the part of the brain that controls hand movements.
Participants were asked to cooperate in playing a computer game using only the link between their brains. The sender could see the game but could not physically control the gameplay, while the receiver could not see the game but had control of the touchpad that operated it. The only way the sender could meet the game's objective of firing a cannon was to think about moving his or her hand, which in turn caused the receiver's hand to twitch on the touchpad across campus. While accuracy varied among pairs, one pair achieved an accuracy rate of 83 percent.
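The round-trip described above, motor imagery decoded on the sender's side, a signal sent over the network, and a pulse evoking a twitch on the receiver's side, can be simulated as a simple loop. This is a toy sketch of the protocol's logic only; the function names, the power values, and the decision threshold are all illustrative and not taken from the study.

```python
# Toy simulation of one brain-to-brain game round.
# Names, values, and the 0.8 threshold are illustrative assumptions.

def decode_intent(motor_imagery_power, threshold=0.8):
    """Sender side: classify EEG motor-imagery power as a 'fire' intent."""
    return motor_imagery_power >= threshold

def deliver_pulse(intent):
    """Receiver side: a TMS pulse evokes a hand twitch only on 'fire'."""
    return "twitch" if intent else "no-op"

# The sender imagines moving a hand (high power) or rests (low power);
# the decoded intent travels across campus and drives the receiver's hand.
trial_powers = [0.9, 0.3, 0.85]
outcomes = [deliver_pulse(decode_intent(p)) for p in trial_powers]
# outcomes == ["twitch", "no-op", "twitch"]
```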
In addition to assessing how often each pair successfully executed the "fire" command, the researchers were able to quantify the amount of information conveyed between the two brains. Next, the researchers want to move beyond quantity and focus on the quality of that information.
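To give a feel for what "amount of information" means for a binary command, here is one standard way to put a number on it: model the link as a binary symmetric channel and compute the mutual information implied by a given per-trial accuracy. This is an illustrative calculation under that assumed model, not the metric the study itself used.

```python
import math

def binary_channel_info(accuracy):
    """Bits conveyed per binary decision, modeling the link as a
    binary symmetric channel with the given per-trial accuracy."""
    p = accuracy
    if p in (0.0, 1.0):
        return 1.0  # a perfect (or perfectly inverted) channel carries 1 bit
    entropy = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - entropy

# At the best pair's 83 percent accuracy, each trial carries well under
# one bit, since errors erode the information in every decision.
bits_per_trial = binary_channel_info(0.83)
```

Under this model, 83 percent accuracy corresponds to roughly a third of a bit per trial, which illustrates why the researchers distinguish raw success rate from the information actually conveyed.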
UW students Darby Losey, top, and Jose Ceballos are positioned in two different buildings on campus, as they would be during a brain-to-brain interface demonstration.
“We are still a long way from being able to directly communicate abstract knowledge, thoughts, feelings, or skills,” said Rao, who was lead author on the study. “But we hope our simple demonstration will serve as a stepping stone for achieving more complex types of brain-to-brain interaction in the future, and as an inspiration for using this new paradigm to better understand brain function.”
The study was coauthored by assistant professors Andrea Stocco and Chantel Prat of UW's Institute for Learning & Brain Sciences; CSE student Joseph Wu; Devapratim Sarma and Tiffany Youngquist of UW Bioengineering; and CSE alumnus Matthew Bryan. Initial funding was provided by grants from the UW Royalty Research Fund and the U.S. Army Research Office; the project recently received a new $1 million grant from the W.M. Keck Foundation.