JPL News: A WiFi Reflector Chip To Speed Up Wearables

 
Whether you're tracking your steps, monitoring your health or sending photos from a smart watch, you want the battery life of your wearable device to last as long as possible. If the power necessary to transmit and receive information from a wearable to a computer, cellular or Wi-Fi network were reduced, you could get a lot more mileage out of the technology you're wearing before having to recharge it.

Adrian Tang of NASA's Jet Propulsion Laboratory in Pasadena, California, is working on a technology to do just that.

Read the full story from JPL News
Read More »

Mosquitoes Use Smell to See Their Hosts

On summer evenings, we try our best to avoid mosquito bites by dousing our skin with bug repellents and lighting citronella candles. These efforts may keep the mosquitoes at bay for a while, but no solution is perfect because the pests have evolved to use a triple threat of visual, olfactory, and thermal cues to home in on their human targets, a new Caltech study suggests.


The study, conducted by researchers in the laboratory of Michael Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering, appears in the July 17 online version of the journal Current Biology.

When an adult female mosquito needs a blood meal to feed her young, she searches for a host—often a human. Many insects, mosquitoes included, are attracted by the odor of the carbon dioxide (CO2) gas that humans and other animals naturally exhale. However, mosquitoes can also pick up other cues that signal a human is nearby. They use their vision to spot a host and thermal sensory information to detect body heat.

But how do the mosquitoes combine this information to map out the path to their next meal?

To find out how and when the mosquitoes use each type of sensory information, the researchers released hungry, mated female mosquitoes into a wind tunnel in which different sensory cues could be independently controlled. In one set of experiments, a high-concentration CO2 plume was injected into the tunnel, mimicking the signal created by the breath of a human. In control experiments, the researchers introduced a plume consisting of background air with a low concentration of CO2. For each experiment, researchers released 20 mosquitoes into the wind tunnel and used video cameras and 3-D tracking software to follow their paths.

When a concentrated CO2 plume was present, the mosquitoes followed it within the tunnel as expected, whereas they showed no interest in a control plume consisting of background air.

"In a previous experiment with fruit flies, we found that exposure to an attractive odor led the animals to be more attracted to visual features," says Floris van Breugel, a postdoctoral scholar in Dickinson's lab and first author of the study. "This was a new finding for flies, and we suspected that mosquitoes would exhibit a similar behavior. That is, we predicted that when the mosquitoes were exposed to CO2, which is an indicator of a nearby host, they would also spend a lot of time hovering near high-contrast objects, such as a black object on a neutral background."

To test this hypothesis, van Breugel and his colleagues did the same CO2 plume experiment, but this time they provided a dark object on the floor of the wind tunnel. They found that in the presence of the carbon dioxide plumes, the mosquitoes were attracted to the dark high-contrast object. In the wind tunnel with no CO2 plume, the insects ignored the dark object entirely.

While it was no surprise to see the mosquitoes tracking a CO2 plume, "the new part that we found is that the CO2 plume increases the likelihood that they'll fly toward an object. This is particularly interesting because there's no CO2 down near that object—it's about 10 centimeters away," van Breugel says. "That means that they smell the CO2, then they leave the plume, and several seconds later they continue flying toward this little object. So you could think of it as a type of memory or lasting effect."

Next, the researchers wanted to see how a mosquito factors thermal information into its flight path. It is difficult to test, van Breugel says. "Obviously, we know that if you have an object in the presence of a CO2 plume—warm or cold—they will fly toward it because they see it," he says. "So we had to find a way to separate the visual attraction from the thermal attraction."

To do this, the researchers constructed two glass objects that were coated with a clear chemical substance that made it possible to heat them to any desired temperature. They heated one object to 37 degrees Celsius (approximately human body temperature) and allowed the other to remain at room temperature. They then placed both objects on the floor of the wind tunnel, with and without CO2 plumes, and observed mosquito behavior. They found that mosquitoes showed a preference for the warm object. But contrary to the mosquitoes' visual attraction to objects, the preference for warmth was not dependent on the presence of CO2.

"These experiments show that the attraction to a visual feature and the attraction to a warm object are separate. They are independent, and they don't have to happen in order, but they do often happen in this particular order because of the spatial arrangement of the stimuli: a mosquito can see a visual feature from much further away, so that happens first. Only when the mosquito gets closer does it detect an object's thermal signature," van Breugel says.

Information gathered from all of these experiments enabled the researchers to create a model of how the mosquito finds its host over different distances. They hypothesize that from 10 to 50 meters away, a mosquito smells a host's CO2 plume. As it flies closer—to within 5 to 15 meters—it begins to see the host. Then, guided by visual cues that draw it even closer, the mosquito can sense the host's body heat. This occurs at a distance of less than a meter.
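
The staged model amounts to a simple set of distance-gated rules. Below is a minimal sketch in Python of that logic (an illustration of the distance ranges reported above, not the authors' actual model; the function and its structure are hypothetical):

def active_cues(distance_m, plume_encountered):
    """Which cues the staged model predicts a mosquito is using at a given
    distance from a host. The distance ranges (10-50 m for odor, 5-15 m for
    vision, under 1 m for heat) come from the article; the rest is illustrative."""
    cues = []
    if distance_m <= 50 and plume_encountered:
        cues.append("CO2 plume (olfaction)")
    if distance_m <= 15 and plume_encountered:
        cues.append("high-contrast visual features")   # visual attraction is gated by CO2
    if distance_m <= 1:
        cues.append("body heat (thermal)")              # thermal attraction is CO2-independent
    return cues

print(active_cues(30, plume_encountered=True))    # odor only
print(active_cues(10, plume_encountered=True))    # odor plus vision
print(active_cues(0.5, plume_encountered=False))  # heat still detected without CO2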

"Understanding how brains combine information from different senses to make appropriate decisions is one of the central challenges in neuroscience," says Dickinson, the principal investigator of the study. "Our experiments suggest that female mosquitoes do this in a rather elegant way when searching for food. They only pay attention to visual features after they detect an odor that indicates the presence of a host nearby. This helps ensure that they don't waste their time investigating false targets like rocks and vegetation. Our next challenge is to uncover the circuits in the brain that allow an odor to so profoundly change the way they respond to a visual image."

The work provides researchers with exciting new information about insect behavior and may even help companies design better mosquito traps in the future. But it also paints a bleak picture for those hoping to avoid mosquito bites.

"Even if it were possible to hold one's breath indefinitely," the authors note toward the end of the paper, "another human breathing nearby, or several meters upwind, would create a CO2 plume that could lead mosquitoes close enough to you that they may lock on to your visual signature. The strongest defense is therefore to become invisible, or at least visually camouflaged. Even in this case, however, mosquitoes could still locate you by tracking the heat signature of your body . . . The independent and iterative nature of the sensory-motor reflexes renders mosquitoes' host seeking strategy annoyingly robust."

These results were published in a paper titled "Mosquitoes use vision to associate odor plumes with thermal targets." In addition to Dickinson and van Breugel, the other authors are Jeff Riffell and Adrienne Fairhall from the University of Washington. The work was funded by a grant from the National Institutes of Health.

Written by Jessica Stoller-Conrad

Read More »

Stanford research shows pitfalls of homework

A Stanford researcher found that students in high-achieving communities who spend too much time on homework experience more stress, physical health problems, a lack of balance and even alienation from society. More than two hours of homework a night may be counterproductive, according to the study.

A Stanford researcher found that too much homework can negatively affect kids, especially their lives away from school, where family, friends and activities matter.

"Our findings on the effects of homework challenge the traditional assumption that homework is inherently good," wrote Denise Pope, a senior lecturer at the Stanford Graduate School of Education and a co-author of a study published in the Journal of Experimental Education.

The researchers used survey data to examine perceptions about homework, student well-being and behavioral engagement in a sample of 4,317 students from 10 high-performing high schools in upper-middle-class California communities. Along with the survey data, Pope and her colleagues used open-ended answers to explore the students' views on homework.

Median household income exceeded $90,000 in these communities, and 93 percent of the students went on to college, either two-year or four-year.

Students in these schools averaged about 3.1 hours of homework each night.

"The findings address how current homework practices in privileged, high-performing schools sustain students' advantage in competitive climates yet hinder learning, full engagement and well-being," Pope wrote.

Pope and her colleagues found that too much homework can diminish its effectiveness and even be counterproductive. They cite prior research indicating that homework benefits plateau at about two hours per night, and that 90 minutes to two and a half hours is optimal for high school.

Their study found that too much homework is associated with:

• Greater stress: 56 percent of the students considered homework a primary source of stress, according to the survey data. Forty-three percent viewed tests as a primary stressor, while 33 percent put the pressure to get good grades in that category. Less than 1 percent of the students said homework was not a stressor.

• Reductions in health: In their open-ended answers, many students said their homework load led to sleep deprivation and other health problems. The researchers asked students whether they experienced health issues such as headaches, exhaustion, sleep deprivation, weight loss and stomach problems.

• Less time for friends, family and extracurricular pursuits: Both the survey data and student responses indicate that spending too much time on homework meant that students were "not meeting their developmental needs or cultivating other critical life skills," according to the researchers. Students were more likely to drop activities, not see friends or family, and not pursue hobbies they enjoy.

A balancing act
The results offer empirical evidence that many students struggle to find balance between homework, extracurricular activities and social time, the researchers said. Many students felt forced or obligated to choose homework over developing other talents or skills.

Also, there was no relationship between the time spent on homework and how much the student enjoyed it. The research quoted students as saying they often do homework they see as "pointless" or "mindless" in order to keep their grades up.

"This kind of busy work, by its very nature, discourages learning and instead promotes doing homework simply to get points," Pope said.

She said the research calls into question the value of assigning large amounts of homework in high-performing schools. Homework should not be simply assigned as a routine practice, she said.

"Rather, any homework assigned should have a purpose and benefit, and it should be designed to cultivate learning and development," wrote Pope.

High-performing paradox
In places where students attend high-performing schools, too much homework can reduce their time to foster skills in the area of personal responsibility, the researchers concluded. "Young people are spending more time alone," they wrote, "which means less time for family and fewer opportunities to engage in their communities."

Student perspectives
The researchers say that while their open-ended or "self-reporting" methodology to gauge student concerns about homework may have limitations – some might regard it as an opportunity for "typical adolescent complaining" – it was important to learn firsthand what the students believe.

The paper was co-authored by Mollie Galloway from Lewis and Clark College and Jerusha Conner from Villanova University.

BY CLIFTON B. PARKER

Read More »

A second minor planet may possess Saturn-like rings

Researchers detect features around Chiron that may signal rings, jets, or a shell of dust.


There are only five bodies in our solar system that are known to bear rings. The most obvious is the planet Saturn; to a lesser extent, rings of gas and dust also encircle Jupiter, Uranus, and Neptune. The fifth member of this haloed group is Chariklo, one of a class of minor planets called centaurs: small, rocky bodies that possess qualities of both asteroids and comets.

Scientists only recently detected Chariklo’s ring system — a surprising finding, as it had been thought that centaurs are relatively dormant. Now scientists at MIT and elsewhere have detected a possible ring system around a second centaur, Chiron.

In November 2011, the group observed a stellar occultation in which Chiron passed in front of a bright star, briefly blocking its light. The researchers analyzed the star’s light emissions, and the momentary shadow created by Chiron, and identified optical features that suggest the centaur may possess a circulating disk of debris. The team believes the features may signify a ring system, a circular shell of gas and dust, or symmetric jets of material shooting out from the centaur’s surface.

“It’s interesting, because Chiron is a centaur — part of that middle section of the solar system, between Jupiter and Pluto, where we originally weren’t thinking things would be active, but it’s turning out things are quite active,” says Amanda Bosh, a lecturer in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

Bosh and her colleagues at MIT — Jessica Ruprecht, Michael Person, and Amanda Gulbis — have published their results in the journal Icarus.

Catching a shadow

Chiron, discovered in 1977, was the first planetary body categorized as a centaur, after the mythological Greek creature — a hybrid of man and beast. Like their mythological counterparts, centaurs are hybrids, embodying traits of both asteroids and comets. Today, scientists estimate there are more than 44,000 centaurs in the solar system, concentrated mainly in a band between the orbits of Jupiter and Pluto.

While most centaurs are thought to be dormant, scientists have seen glimmers of activity from Chiron. Starting in the late 1980s, astronomers observed patterns of brightening from the centaur, as well as activity similar to that of a streaking comet.

In 1993 and 1994, James Elliot, then a professor of planetary astronomy and physics at MIT, observed a stellar occultation of Chiron and made the first estimates of its size. Elliot also observed features in the optical data that looked like jets of water and dust spewing from the centaur’s surface.

Now MIT researchers — some of them former members of Elliot’s group — have obtained more precise observations of Chiron, using two large telescopes in Hawaii: NASA’s Infrared Telescope Facility, on Mauna Kea, and the Las Cumbres Observatory Global Telescope Network, at Haleakala.

In 2010, the team started to chart the orbits of Chiron and nearby stars in order to pinpoint exactly when the centaur might pass across a star bright enough to detect. The researchers determined that such a stellar occultation would occur on Nov. 29, 2011, and reserved time on the two large telescopes in hopes of catching Chiron’s shadow.

“There's an aspect of serendipity to these observations,” Bosh says. “We need a certain amount of luck, waiting for Chiron to pass in front of a star that is bright enough. Chiron itself is small enough that the event is very short; if you blink, you might miss it.”

The team observed the stellar occultation remotely, from MIT’s Building 54. The entire event lasted just a few minutes, and the telescopes recorded the fading light as Chiron cast its shadow over the telescopes.

Rings around a theory

The group analyzed the resulting light, and detected something unexpected. A simple body, with no surrounding material, would create a straightforward pattern, blocking the star’s light entirely. But the researchers observed symmetrical, sharp features near the start and end of the stellar occultation — a sign that material such as dust might be blocking a fraction of the starlight.

The researchers observed two such features, each about 300 kilometers from the center of the centaur. Judging from the optical data, the features are 3 and 7 kilometers wide, respectively.  The features are similar to what Elliot observed in the 1990s.
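
The conversion from timing to distance in such an occultation is simple geometry: a feature's projected distance from the body's center is the shadow's sky-plane velocity multiplied by the feature's time offset from mid-occultation, and its width is that velocity multiplied by the dip's duration. A minimal sketch (the 20 km/s shadow velocity is an assumed, illustrative value, not the one used in the study):

# Illustrative conversion of occultation timing to projected distance.
shadow_velocity_km_s = 20.0   # assumed sky-plane velocity, for illustration only

def projected_distance_km(time_offset_s, velocity_km_s=shadow_velocity_km_s):
    """Distance of a light-curve feature from the body's center, given its
    time offset from mid-occultation and the shadow's sky-plane velocity."""
    return velocity_km_s * time_offset_s

def feature_width_km(dip_duration_s, velocity_km_s=shadow_velocity_km_s):
    """Radial width of a feature from the duration of its dip."""
    return velocity_km_s * dip_duration_s

# At 20 km/s, a dip 15 s from mid-occultation lies ~300 km out,
# and a 0.35 s dip corresponds to a ~7 km wide feature.
print(projected_distance_km(15.0))   # 300.0
print(feature_width_km(0.35))        # 7.0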

In light of these new observations, the researchers say that Chiron may still possess symmetrical jets of gas and dust, as Elliot first proposed. However, other interpretations may be equally valid, including the “intriguing possibility,” Bosh says, of a shell or ring of gas and dust.

Ruprecht, who is a researcher at MIT’s Lincoln Laboratory, says it is possible to imagine a scenario in which centaurs may form rings: For example, when a body breaks up, the resulting debris can be captured gravitationally around another body, such as Chiron. Rings can also be leftover material from the formation of Chiron itself.

“Another possibility involves the history of Chiron’s distance from the sun,” Ruprecht says. “Centaurs may have started further out in the solar system and, through gravitational interactions with giant planets, have had their orbits perturbed closer in to the sun. The frozen material that would have been stable out past Pluto is becoming less stable closer in, and can turn into gases that spray dust and material off the surface of a body.”
  
An independent group has since combined the MIT group’s occultation data with other light data, and has concluded that the features around Chiron most likely represent a ring system. However, Ruprecht says that researchers will have to observe more stellar occultations of Chiron to truly determine which interpretation — rings, shell, or jets — is the correct one.

“If we want to make a strong case for rings around Chiron, we’ll need observations by multiple observers, distributed over a few hundred kilometers, so that we can map the ring geometry,” Ruprecht says. “But that alone doesn’t tell us if the rings are a temporary feature of Chiron, or a more permanent one. There’s a lot of work that needs to be done.”

Nevertheless, Bosh says the possibility of a second ringed centaur in the solar system is an enticing one.

“Until Chariklo’s rings were found, it was commonly believed that these smaller bodies don’t have ring systems,” Bosh says. “If Chiron has a ring system, it will show it’s more common than previously thought.”

Matthew Knight, an astronomer at the Lowell Observatory in Flagstaff, Arizona, says the possibility that Chiron possesses a ring system “makes the solar system feel a bit more intimate.”

“We have a pretty good feel for what most of the inner solar system is like from spacecraft missions, but the small, icy worlds of the outer solar system are still mysterious,” says Knight, who was not involved in the research. “At least to me, being able to picture a centaur having a ring around it makes it seem more tangible.”

This research was funded in part by NASA and the National Research Foundation of South Africa.

Read More »

Life on an aquaplanet

MIT study finds an exoplanet, tilted on its side, could still be habitable if covered in ocean.

 

 
Nearly 2,000 planets beyond our solar system have been identified to date. Whether any of these exoplanets are hospitable to life depends on a number of criteria. Among these, scientists have thought, is a planet’s obliquity — the angle of its axis relative to its orbit around a star.

Earth, for instance, has a relatively low obliquity, rotating around an axis that is nearly perpendicular to the plane of its orbit around the sun. Scientists suspect, however, that exoplanets may exhibit a host of obliquities, resembling anything from a vertical spinning top to a horizontal rotisserie. The more extreme the tilt, the less habitable a planet may be — or so the thinking has gone.

Now scientists at MIT have found that even a high-obliquity planet, with a nearly horizontal axis, could potentially support life, so long as the planet were completely covered by an ocean. In fact, even a shallow ocean, about 50 meters deep, would be enough to keep such a planet at relatively comfortable temperatures, averaging around 60 degrees Fahrenheit year-round.
 
David Ferreira, a former research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), says that on the face of it, a planet with high obliquity would appear rather extreme: Tilted on its side, its north pole would experience daylight continuously for six months, and then darkness for six months, as the planet revolves around its star.

“The expectation was that such a planet would not be habitable: It would basically boil, and freeze, which would be really tough for life,” says Ferreira, who is now a lecturer at the University of Reading, in the United Kingdom. “We found that the ocean stores heat during summer and gives it back in winter, so the climate is still pretty mild, even in the heart of the cold polar night. So in the search for habitable exoplanets, we're saying, don't discount high-obliquity ones as unsuitable for life.”

Details of the group’s analysis are published in the journal Icarus. The paper’s co-authors are Ferreira; Sara Seager, the Class of 1941 Professor in EAPS and MIT’s Department of Physics; John Marshall, the Cecil and Ida Green Professor in Earth and Planetary Sciences; and Paul O’Gorman, an associate professor in EAPS.

Tilting toward a habitable exoplanet

Ferreira and his colleagues used a model developed at MIT to simulate a high-obliquity “aquaplanet” — an Earth-sized planet, at a similar distance from its sun, covered entirely in water. The three-dimensional model is designed to simulate circulations among the atmosphere, ocean, and sea ice, taking into account the effects of winds and heat in driving a 3,000-meter-deep ocean. For comparison, the researchers also coupled the atmospheric model with simplified, motionless “swamp” oceans of various depths: 200 meters, 50 meters, and 10 meters.

The researchers used the detailed model to simulate a planet at three obliquities: 23 degrees (representing an Earth-like tilt), 54 degrees, and 90 degrees.

For a planet with an extreme, 90-degree tilt, they found that a global ocean — even one as shallow as 50 meters — would absorb enough solar energy throughout the polar summer and release it back into the atmosphere in winter to maintain a rather mild climate. As a result, the planet as a whole would experience spring-like temperatures year round.
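
The buffering role of the ocean can be illustrated with a toy slab-ocean energy balance, far simpler than the MIT general circulation model used in the study; every parameter value below is an assumption chosen only to show why a deeper mixed layer damps the seasonal swing, and the sketch omits the ice-albedo feedback that drives the runaway discussed below:

import math

# Toy slab-ocean energy balance at the pole of a 90-degree-obliquity planet.
# All numbers here are illustrative assumptions, not inputs of the study.
rho, c_p = 1025.0, 3990.0      # seawater density (kg/m^3) and specific heat (J/kg/K)
A, B = 203.0, 2.1              # linearized outgoing radiation A + B*T (W/m^2, T in deg C)
S_max = 700.0                  # assumed peak absorbed sunlight during polar summer (W/m^2)
year = 365.25 * 86400.0
dt = 86400.0                   # one-day time step

def seasonal_range(depth_m, n_years=50):
    """Integrate dT/dt = (absorbed - emitted) / (rho*c_p*depth) and return the
    minimum and maximum temperature over the last simulated year."""
    C = rho * c_p * depth_m    # heat capacity per unit area (J/m^2/K)
    T, last_year = 0.0, []
    steps = int(n_years * year / dt)
    for step in range(steps):
        t = step * dt
        absorbed = max(0.0, S_max * math.sin(2.0 * math.pi * t / year))  # 6 months light, 6 dark
        T += dt * (absorbed - (A + B * T)) / C
        if step >= steps - int(year / dt):
            last_year.append(T)
    return min(last_year), max(last_year)

for depth in (10.0, 50.0, 200.0):
    print(depth, seasonal_range(depth))   # deeper ocean -> smaller seasonal temperature swing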

“We were expecting that if you put an ocean on the planet, it might be a bit more habitable, but not to this point,” Ferreira says. “It’s really surprising that the temperatures at the poles are still habitable.”

A runaway “snowball Earth”

In general, the team observed that life could thrive on a highly tilted aquaplanet, but only to a point. In simulations with a shallower ocean, Ferreira found that waters 10 meters deep would not be sufficient to regulate a high-obliquity planet’s climate. Instead, the planet would experience a runaway effect: As soon as a bit of ice forms, it would quickly spread across the dark side of the planet. Even when this side turns toward the sun, according to Ferreira, it would be too late: Massive ice sheets would reflect the sun’s rays, allowing the ice to spread further into the newly darkened side, and eventually encase the planet.

“Some people have thought that a planet with a very large obliquity could have ice just around the equator, and the poles would be warm,” Ferreira says. “But we find that there is no intermediate state. If there’s too little ocean, the planet may collapse into a snowball. Then it wouldn’t be habitable, obviously.”

Darren Williams, a professor of physics and astronomy at Pennsylvania State University, says past climate modeling has shown that a wide range of climate scenarios are possible on extremely tilted planets, depending on the sizes of their oceans and landmasses. Ferreira’s results, he says, reach similar conclusions, but with more detail.

“There are one or two terrestrial-sized exoplanets out of a thousand that appear to have densities comparable to water, so the probability of an all-water planet is at least 0.1 percent,” Williams says. “The upshot of all this is that exoplanets at high obliquity are not necessarily devoid of life, and are therefore just as interesting and important to the astrobiology community.”

Read More »

New NSF-Funded Physics Frontiers Center Expands Hunt for Gravitational Waves

The search for gravitational waves—elusive ripples in the fabric of space-time predicted to arise from extremely energetic and large-scale cosmic events such as the collisions of neutron stars and black holes—has expanded, thanks to a $14.5-million, five-year award from the National Science Foundation for the creation and operation of a multi-institution Physics Frontiers Center (PFC) called the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).

The NANOGrav PFC will be directed by Xavier Siemens, a physicist at the University of Wisconsin–Milwaukee and the principal investigator for the project, and will fund the NANOGrav research activities of 55 scientists and students distributed across the 15-institution collaboration, including the work of four Caltech/JPL scientists—Senior Faculty Associate Curt Cutler; Visiting Associates Joseph Lazio and Michele Vallisneri; and Walid Majid, a visiting associate at Caltech and a JPL research scientist—as well as two new postdoctoral fellows at Caltech to be supported by the PFC funds. JPL is managed by Caltech for NASA.

"Caltech has a long tradition of leadership in both the theoretical prediction of sources of gravitational waves and experimental searches for them," says Sterl Phinney, professor of theoretical astrophysics and executive officer for astronomy in the Division of Physics, Mathematics and Astronomy. "This ranges from waves created during the inflation of the early universe, which have periods of billions of years; to waves from supermassive black hole binaries in the nuclei of galaxies, with periods of years; to a multitude of sources with periods of minutes to hours; to the final inspiraling of neutron stars and stellar mass black holes, which create gravitational waves with periods less than a tenth of a second."

The detection of the high-frequency gravitational waves created in this last set of events is a central goal of Advanced LIGO (the next-generation Laser Interferometer Gravitational-Wave Observatory), scheduled to begin operation later in 2015. LIGO and Advanced LIGO, funded by NSF, are comanaged by Caltech and MIT.

"This new Physics Frontier Center is a significant boost to what has long been the dark horse in the exploration of the spectrum of gravitational waves: low-frequency gravitational waves," Phinney says. These gravitational waves are predicted to have such a long wavelength—significantly larger than our solar system—that we cannot build a detector large enough to observe them. Fortunately, the universe itself has created its own detection tool, millisecond pulsars—the rapidly spinning, superdense remains of massive stars that have exploded as supernovas. These ultrastable stars appear to "tick" every time their beamed emissions sweep past Earth like a lighthouse beacon. Gravitational waves may be detected in the small but perceptible fluctuations—a few tens of nanoseconds over five or more years—they cause in the measured arrival times at Earth of radio pulses from these millisecond pulsars.

NANOGrav makes use of the Arecibo Observatory in Puerto Rico and the National Radio Astronomy Observatory's Green Bank Telescope (GBT), and will obtain other data from telescopes in Europe, Australia, and Canada. The team of researchers at Caltech will lead NANOGrav's efforts to develop the approaches and algorithms for extracting the weak gravitational-wave signals from the minute changes in the arrival times of pulses from radio pulsars that are observed regularly by these instruments.
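
One way to get a feel for how such a signal would be recognized: for an isotropic background of gravitational waves, the expected correlation between the timing residuals of two pulsars depends only on their angular separation on the sky, following the Hellings-Downs curve. The sketch below simply evaluates that curve; it illustrates the general idea and is not NANOGrav's analysis code:

import math

def hellings_downs(theta_deg):
    """Expected correlation between the timing residuals of two pulsars
    separated by theta_deg on the sky, for an isotropic gravitational-wave
    background (Hellings & Downs, 1983)."""
    x = (1.0 - math.cos(math.radians(theta_deg))) / 2.0
    if x == 0.0:
        return 0.5   # limiting value for coincident sight lines (pulsar term excluded)
    return 1.5 * x * math.log(x) - 0.25 * x + 0.5

for angle in (10, 50, 90, 130, 180):
    print(angle, round(hellings_downs(angle), 3))
# Pairs separated by roughly 80-90 degrees are expected to be weakly anti-correlated,
# a distinctive signature that helps separate a gravitational-wave background from noise.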

Written by Kathy Svitil

Read More »

Physicist explores life’s 'most beautiful phenomena'

Are there theoretical principles that have the power and generality of physics, yet encompass the full complexity and diversity of life’s “most beautiful phenomena” – phenomena such as sounds that cause our eardrums to vibrate by less than the diameter of an atom?

Theoretical physicist William Bialek is the 2015 Hans Bethe Lecturer in Physics.

Theoretical physicist William Bialek will explore this question as the 2015 Hans Bethe Lecturer in Physics in a public lecture, “More Perfect than We Imagined: A Physicist's View of Life,” Wednesday, March 18, at 7:30 p.m. in Schwartz Auditorium, Rockefeller Hall.

Known for his contributions to the understanding of coding and computation in the brain, Bialek and his collaborators have shown that aspects of brain function can be described as essentially optimal strategies for adapting to the complex dynamics of the world, making the most of available signals in the face of fundamental physical constraints and limitations. He has followed these ideas into early events of embryonic development and processes by which all cells decide when to read information stored in their genes.

Recently, Bialek and his colleagues have shown how the collective states of biological systems – the activity in a network of neurons, or the flight directions in a flock of birds – can be described using ideas from statistical physics, connecting quantitative detail with new experimental data.

Bialek is the John Archibald Wheeler/Battelle Professor in Physics and a member of the multidisciplinary Lewis-Sigler Institute for Integrative Genomics at Princeton University. He also serves as Visiting Presidential Professor of Physics at The Graduate Center of the City University of New York, where he has helped launch the Initiative for the Theoretical Sciences.

He received his doctorate in 1983 in biophysics from the University of California, Berkeley. The author of the textbook “Biophysics: Searching for Principles” (2012), Bialek is a member of the National Academy of Sciences and a fellow of the American Physical Society. He has received the Presidential Award for Distinguished Teaching and the Phi Beta Kappa Prize for undergraduate education at Princeton.

As part of the Hans Bethe Lecture series, Bialek will also present the physics colloquium, “Are Biological Networks Poised at Criticality?” Monday, March 16, at 4 p.m. in Schwartz Auditorium; and a Laboratory of Atomic and Solid State Physics seminar, “Predictive Information and the Problem of Long Time Scales in the Brain,” Tuesday, March 17, at 4 p.m. in 700 Clark Hall.

The Hans Bethe Lectures, established by the Department of Physics and the College of Arts and Sciences, honor Bethe, Cornell professor of physics from 1936 until his death in 2005. Bethe won the Nobel Prize in physics in 1967 for his description of the nuclear processes that power the sun.
 
Linda B. Glaser is a writer for the College of Arts and Sciences.

Read More »

Study helps understand why a material’s behavior changes as it gets smaller

To fully understand how nanomaterials behave, one must also understand the atomic-scale deformation mechanisms that determine their structure and, therefore, their strength and function.

Researchers at the University of Pittsburgh, Drexel University and the Georgia Institute of Technology have engineered a new way to observe and study these mechanisms and, in doing so, have revealed an interesting phenomenon in a well-known material, tungsten. The group is the first to observe atomic-level deformation twinning in body-centered cubic (BCC) tungsten nanocrystals.

The team used a high-resolution transmission electron microscope (TEM) and sophisticated computer modeling to make the observation. This work, published March 9 in the journal Nature Materials, represents a milestone in the in-situ study of mechanical behaviors of nanomaterials.

Deformation twinning is a type of deformation that, in conjunction with dislocation slip, allows materials to permanently deform without breaking. In the process of twinning, the crystal reorients, which creates a region in the crystal that is a mirror image of the original crystal. Twinning has been observed in large-scale BCC metals and alloys during deformation. However, whether twinning occurs in BCC nanomaterials or not remained unknown.
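
The mirror-image relationship at the heart of twinning can be illustrated with a few lines of linear algebra: reflecting the lattice across the twin plane (a {112}-type plane in BCC metals) yields the twin orientation. A minimal sketch, using tungsten's lattice constant of roughly 3.165 angstroms; this is a geometric illustration only, not the group's simulation code:

import numpy as np

# Geometric illustration of deformation twinning in a BCC crystal:
# the twinned region is the mirror image of the parent lattice across
# the twin plane, taken here as (112).
a = 3.165                                  # approximate lattice constant of tungsten (angstroms)
n = np.array([1.0, 1.0, 2.0])
n /= np.linalg.norm(n)                     # unit normal of the (112) twin plane
R = np.eye(3) - 2.0 * np.outer(n, n)       # reflection (Householder) matrix across that plane

# Two-atom BCC basis: corner atom and body-centered atom, in angstroms.
parent = a * np.array([[0.0, 0.0, 0.0],
                       [0.5, 0.5, 0.5]])
twin = parent @ R                          # mirror-image (twin-oriented) positions

print(parent)
print(twin)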

“To gain a deep understanding of deformation in BCC nanomaterials, we combined atomic-scale imaging and simulations to show that twinning activities dominated for most loading conditions, due to the lack of other shear deformation mechanisms in nanoscale BCC lattices,” said Scott Mao, a professor in the Swanson School of Engineering at the University of Pittsburgh.

The team chose tungsten as a typical BCC crystal. The most familiar application of tungsten is its use as filaments for light bulbs.

The observation of atomic-scale twinning was made inside a TEM. This kind of study has not been possible in the past, due to difficulties of making BCC samples less than 100 nanometers in size, as required by TEM imaging. Jiangwei Wang, a graduate student at the University of Pittsburgh, and Mao, the lead author of the paper, developed a clever way of making the BCC tungsten nanowires. Under a TEM, Wang welded together two small pieces of individual nanoscale tungsten crystals to create a wire about 20 nanometers in diameter. This wire was durable enough to stretch and compress while Wang observed the twinning phenomenon in real time using a high-resolution TEM.

To better understand the phenomenon observed by Mao and Wang’s team at the University of Pittsburgh, Christopher Weinberger, an assistant professor in Drexel’s College of Engineering, developed computer models that show the mechanical behavior of the tungsten nanostructure – at the atomic level. His modeling allowed the team to see the physical factors at play during twinning. This information will help researchers theorize why it occurs in nanoscale tungsten and plot a course for examining this behavior in other BCC materials.

“We’re trying to see if our atomistic-based model behaves in the same way as the tungsten sample used in the experiments, which can then help to explain the mechanisms that allow it to behave that way,” Weinberger said. “Specifically, we’d like to explain why it exhibits this twinning ability as a nanostructure, but not as a bulk metal.”

In concert with Weinberger’s modeling, Ting Zhu, an associate professor in the Woodruff School of Mechanical Engineering at Georgia Tech, worked with a graduate student, Zhi Zeng, to conduct advanced computer simulations, using molecular dynamics to study deformation processes in 3-D.

Zhu’s simulation revealed that tungsten’s “smaller is stronger” behavior is not without drawbacks when it comes to application.

“If you reduce the size to the nanometer scale, you can increase strength by several orders of magnitude,” Zhu said. “But the price you pay is a dramatic decrease in the ductility. We want to increase the strength without compromising the ductility in developing these nanostructured metals and alloys. To reach this objective, we need to understand the controlling deformation mechanisms.”

The twinning mechanism, Mao added, contrasts with the conventional wisdom of dislocation nucleation-controlled plasticity in nanomaterials. The results should motivate further experimental and modeling investigation of deformation mechanisms in nanoscale metals and alloys, ultimately enabling the design of nanostructured materials to fully realize their latent mechanical strength.

"Our discovery of the twinning dominated deformation also opens up possibilities of enhancing ductility by engineering twin structures in nanoscale BCC crystals" Zhu said.

Original Article: http://www.news.gatech.edu/2015/03/09/study-helps-understand-why-material%E2%80%99s-behavior-changes-it-gets-smaller
Read More »

Early humans adapted to living in rainforests much sooner than thought

An international research team has shed new light on the diet of some of the earliest recorded humans in Sri Lanka. The researchers, from Oxford University working with a team from Sri Lanka and the University of Bradford, analysed the carbon and oxygen isotopes in the teeth of 26 individuals, the oldest dating back 20,000 years, and found that nearly all of the teeth suggest a diet largely sourced from the rainforest.

This study, published in the early online edition of the journal Science, shows that early modern humans adapted to living in the rainforest for long periods of time. Previously it was thought that humans did not occupy tropical forests for any length of time until some 12,000 years later, and that the tropical forests were largely 'pristine', human-free environments until the Early Holocene, 8,000 years ago. Scholars reasoned that, compared with more open landscapes, humans might have found rainforests too difficult to navigate, with less available food to hunt or catch.

The Science paper also notes, however, that previous archaeological research provides 'tantalising hints' of humans possibly occupying rainforest environments around 45,000 years ago. This earlier research is unclear as to whether those early human dwellers of the rainforest were engaging in a specialised activity or whether they entered the rainforest for only limited periods of time in certain seasons rather than remaining there all year round.

Co-author Professor Julia Lee-Thorp from Oxford University said: 'The isotopic methodology applied in our study has already been successfully used to study how primates, including African great apes, adapt to their forest environment. However, this is the first time scientists have investigated ancient human fossils in a tropical forest context to see how our earliest ancestors survived in such a habitat.'

The researchers studied the fossilised teeth of 26 humans of a range of dates – from 20,000 to 3,000 years ago. All of the teeth were excavated from three archaeological sites in Sri Lanka, which are today surrounded by either dense rainforest or more open terrain. The analysis of the teeth showed that nearly all of the humans had a diet sourced from slightly open 'intermediate rainforest' environments. Only two of them showed a recognisable signature of a diet found in open grassland. However, these two teeth were dated to around 3,000 years ago, at the start of the Iron Age, when agriculture developed in the region. The authors argue that this new evidence shows just how adaptable our earliest ancestors were.

Lead author, Patrick Roberts, a doctoral student specialising in the investigation of early human adaptations from Oxford’s Research Laboratory for Archaeology and the History of Art, said: 'This is the first study to directly test how much early human forest foragers depended on the rainforest for their diet. The results are significant in showing that early humans in Sri Lanka were able to live almost entirely on food found in the rainforest without the need to move into other environments. Our earliest human ancestors were clearly able to successfully adapt to different extreme environments.'

Co-author Professor Mike Petraglia from Oxford University said: 'Our research provides a clear timeline showing the deep level of interaction that early humans had with the rainforest in South Asia. We need further research to see if this pattern was also followed in other similar environments in Southeast Asia, Melanesia, Australasia and Africa.'

Read More »

Welcome to the neighbourhood: new dwarf galaxies discovered in orbit around the Milky Way

Astronomers have discovered a ‘treasure trove’ of rare dwarf satellite galaxies orbiting our own Milky Way. The discoveries could hold the key to understanding dark matter, the mysterious substance which holds our galaxy together.

A team of astronomers from the University of Cambridge have identified nine new dwarf satellites orbiting the Milky Way, the largest number ever discovered at once. The findings, from newly-released imaging data taken from the Dark Energy Survey, may help unravel the mysteries behind dark matter, the invisible substance holding galaxies together.

The new results also mark the first discovery of dwarf galaxies – small celestial objects that orbit larger galaxies – in a decade, after dozens were found in 2005 and 2006 in the skies above the northern hemisphere. The new satellites were found in the southern hemisphere near the Large and Small Magellanic Clouds, the largest and best-known dwarf galaxies in the Milky Way’s orbit.

The Cambridge findings are being jointly released today with the results of a separate survey by astronomers with the Dark Energy Survey, headquartered at the US Department of Energy’s Fermi National Accelerator Laboratory. Both teams used the publicly available data taken during the first year of the Dark Energy Survey to carry out their analysis.

The newly discovered objects are a billion times dimmer than the Milky Way, and a million times less massive. The closest is about 95,000 light years away, while the most distant is more than a million light years away.
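
For scale, astronomers usually express such brightness ratios as magnitude differences, where a factor of a billion in brightness corresponds to 2.5 log10(10^9) = 22.5 magnitudes; a quick check (illustrative arithmetic only):

import math

def magnitude_difference(brightness_ratio):
    """Magnitude difference corresponding to a given ratio of brightnesses."""
    return 2.5 * math.log10(brightness_ratio)

print(magnitude_difference(1e9))   # 22.5 magnitudes fainter than the Milky Way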

According to the Cambridge team, three of the discovered objects are definite dwarf galaxies, while others could be either dwarf galaxies or globular clusters – objects with similar visible properties to dwarf galaxies, but not held together with dark matter.

“The discovery of so many satellites in such a small area of the sky was completely unexpected,” said Dr Sergey Koposov of Cambridge’s Institute of Astronomy, the study’s lead author. “I could not believe my eyes.”


Dwarf galaxies are the smallest galaxy structures observed, the faintest of which contain just 5000 stars – the Milky Way, in contrast, contains hundreds of billions of stars. Standard cosmological models of the universe predict the existence of hundreds of dwarf galaxies in orbit around the Milky Way, but their dimness and small size makes them incredibly difficult to find, even in our own ‘backyard’.

“The large dark matter content of Milky Way satellite galaxies makes this a significant result for both astronomy and physics,” said Alex Drlica-Wagner of Fermilab, one of the leaders of the Dark Energy Survey analysis.

Since they contain up to 99 percent dark matter and just one percent observable matter, dwarf galaxies are ideal for testing whether existing dark matter models are correct. Dark matter – which makes up 25 percent of all matter and energy in our universe – is invisible, and only makes its presence known through its gravitational pull.

“Dwarf satellites are the final frontier for testing our theories of dark matter,” said Dr Vasily Belokurov of the Institute of Astronomy, one of the study’s co-authors. “We need to find them to determine whether our cosmological picture makes sense. Finding such a large group of satellites near the Magellanic Clouds was surprising, though, as earlier surveys of the southern sky found very little, so we were not expecting to stumble on such treasure.”

The closest of these pieces of ‘treasure’ is 97,000 light years away, about halfway to the Magellanic Clouds, and is located in the constellation of Reticulum, or the Reticle. Due to the massive tidal forces of the Milky Way, it is in the process of being torn apart.

The most distant and most luminous of these objects is 1.2 million light years away in the constellation of Eridanus, or the River. It is right on the fringes of the Milky Way, and is about to get pulled in. According to the Cambridge team, it looks to have a small globular cluster of stars, which would make it the faintest galaxy to possess one.

“These results are very puzzling,” said co-author Wyn Evans, also of the Institute of Astronomy. “Perhaps they were once satellites that orbited the Magellanic Clouds and have been thrown out by the interaction of the Small and Large Magellanic Cloud. Perhaps they were once part of a gigantic group of galaxies that – along with the Magellanic Clouds – are falling into our Milky Way galaxy.”

The Dark Energy Survey is a five-year effort to photograph a large portion of the southern sky in unprecedented detail. Its primary tool is the Dark Energy Camera, which – at 570 megapixels – is the most powerful digital camera in the world, able to see galaxies up to eight billion light years from Earth. Built and tested at Fermilab, the camera is now mounted on the four-metre Victor M Blanco telescope at the Cerro Tololo Inter-American Observatory in the Andes Mountains in Chile. The camera includes five precisely shaped lenses, the largest nearly a yard across, designed and fabricated at University College London (UCL) and funded by the UK Science and Technology Facilities Council (STFC).

The Dark Energy Survey is supported by funding from the STFC, the US Department of Energy Office of Science; the National Science Foundation; funding agencies in Spain, Brazil, Germany and Switzerland; and the participating institutions.

The Cambridge research, funded by the European Research Council, will be published in The Astrophysical Journal.

Inset image: The Magellanic Clouds and the Auxiliary Telescopes at the Paranal Observatory in the Atacama Desert in Chile. Only 6 of the 9 newly discovered satellites are present in this image. The other three are just outside the field of view. The insets show images of the three most visible objects (Eridanus 1, Horologium 1 and Pictoris 1) and are 13x13 arcminutes on the sky (or 3000x3000 DECam pixels). Credit: V. Belokurov, S. Koposov (IoA, Cambridge). Photo: Y. Beletsky (Carnegie Observatories)

Original Article: http://www.cam.ac.uk/research/news/welcome-to-the-neighbourhood-new-dwarf-galaxies-discovered-in-orbit-around-the-milky-way
Read More »

Caltech Biochemist Sheds Light on Structure of Key Cellular 'Gatekeeper'

 
Facing a challenge akin to solving a 1,000-piece jigsaw puzzle while blindfolded—and without touching the pieces—many structural biochemists thought it would be impossible to determine the atomic structure of a massive cellular machine called the nuclear pore complex (NPC), which is vital for cell survival.

But after 10 years of attacking the problem, a team led by André Hoelz, assistant professor of chemistry, recently solved almost a third of the puzzle. The approach his team developed to do so also promises to speed completion of the remainder.

In an article published online February 12 by Science Express, Hoelz and his colleagues describe the structure of a significant portion of the NPC, which is made up of many copies of about 34 different proteins, perhaps 1,000 proteins in all and a total of 10 million atoms. In eukaryotic cells (those with a membrane-bound nucleus), the NPC forms a transport channel in the nuclear membrane. The NPC serves as a gatekeeper, essentially deciding which proteins and other molecules are permitted to pass into and out of the nucleus. The survival of cells is dependent upon the accuracy of these decisions.

Understanding the structure of the NPC could lead to new classes of cancer drugs as well as antiviral medicines. "The NPC is a huge target of viruses," Hoelz says. Indeed, pathogens such as HIV and Ebola subvert the NPC as a way to take control of cells, rendering them incapable of functioning normally. Figuring out just how the NPC works might enable the design of new drugs to block such intruders.

"This is an incredibly important structure to study," he says, "but because it is so large and complex, people thought it was crazy to work on it. But 10 years ago, we hypothesized that we could solve the atomic structure with a divide-and-conquer approach—basically breaking the task into manageable parts—and we've shown that for a major section of the NPC, this actually worked."

To map the structure of the NPC, Hoelz relied primarily on X-ray crystallography, which involves shining X-rays on a crystallized sample and using detectors to analyze the pattern of rays reflected off the atoms in the crystal.
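
The geometry underlying that diffraction pattern is Bragg's law, n·λ = 2d·sin(θ), which relates the angles at which the scattered X-rays reinforce one another to the spacing d of atomic planes in the crystal. A minimal illustration of that relation (the wavelength and angle below are arbitrary example values, and the study's full crystallographic analysis is far more involved):

import math

def bragg_spacing(wavelength_angstrom, theta_deg, order=1):
    """Atomic plane spacing d from Bragg's law, n*lambda = 2*d*sin(theta)."""
    return order * wavelength_angstrom / (2.0 * math.sin(math.radians(theta_deg)))

# A first-order reflection of 1.0-angstrom X-rays at theta = 15 degrees
# implies planes spaced about 1.93 angstroms apart.
print(bragg_spacing(1.0, 15.0))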

It is particularly challenging to obtain X-ray diffraction images of the intact NPC for several reasons, including that the NPC is both enormous (about 30 times larger than the ribosome, a large cellular component whose structure wasn't solved until the year 2000) and complex (with as many as 1,000 individual pieces, each composed of several smaller sections). In addition, the NPC is flexible, with many moving parts, making it difficult to capture in individual snapshots at the atomic level, as X-ray crystallography aims to do. Finally, despite being enormous compared to other cellular components, the NPC is still vanishingly small (only 120 nanometers wide, or about 1/900th the thickness of a dollar bill), and its highly flexible nature prohibits structure determination with current X-ray crystallography methods.

To overcome those obstacles, Hoelz and his team chose to determine the structure of the coat nucleoporin complex (CNC)—one of the two main complexes that make up the NPC—rather than tackling the whole structure at once (in total the NPC is composed of six subcomplexes, two major ones and four smaller ones, see illustration). He enlisted the support of study coauthor Anthony Kossiakoff of the University of Chicago, who helped to develop the engineered antibodies needed to essentially "superglue" the samples into place to form an ordered crystalline lattice so they could be properly imaged. The X-ray diffraction data used for structure determination was collected at the General Medical Sciences and National Cancer Institutes Structural Biology Beamline at the Argonne National Laboratory.

With the help of Caltech's Molecular Observatory—a facility, developed with support from the Gordon and Betty Moore Foundation, that includes a completely automated X-ray beamline at the Stanford Synchrotron Radiation Laboratory that can be controlled remotely from Caltech—Hoelz's team refined the antibody adhesives required to generate the best crystalline samples. This process alone took two years to get exactly right.

Hoelz and his team were able to determine the precise size, shape, and position of all atoms of the CNC, as well as its location within the entire NPC.

The CNC is not the first component of the NPC to be fully characterized, but it is by far the largest. Hoelz says that once the other major component—known as the adaptor–channel nucleoporin complex—and the four smaller subcomplexes are mapped, the NPC's structure will be fully understood.

The CNC that Hoelz and his team evaluated comes from baker's yeast—a commonly used research organism—but the CNC structure is the right size and shape to dock with the NPC of a human cell. "It fits inside like a hand in a glove," Hoelz says. "That's significant because it is a very strong indication that the architecture of the NPC in both is probably the same and that the machinery is so important that evolution has not changed it in a billion years."

Being able to successfully determine the structure of the CNC makes mapping the remainder of the NPC an easier proposition. "It's like climbing Mount Everest. Knowing you can do it lowers the bar, so you know you can now climb K2 and all these other mountains," says Hoelz, who is convinced that the entire NPC will be characterized soon. "It will happen. I don't know if it will be in five or 10 or 20 years, but I'm sure it will happen in my lifetime. We will have an atomic model of the entire nuclear pore."

Still, he adds, "My dream actually goes much farther. I don't really want to have a static image of the pore. What I really would like—and this is where people look at me with a bit of a smile on their face, like they're laughing a little bit—is to get an image of how the pore is moving, how the machine actually works. The pore is not a static hole, it can open up like the iris of a camera to let something through that's much bigger. How does it do it?"

To understand that machine in motion, he adds, "you don't just need one snapshot, you need multiple snapshots. But once you have one, you can infer the other ones much quicker, so that's the ultimate goal. That's the dream."

Along with Hoelz, additional Caltech authors on the paper, "Architecture of the Nuclear Pore Complex Coat," include postdoctoral scholars Tobias Stuwe and Ana R. Correia, and graduate student Daniel H. Lin. Coauthors from the University of Chicago Department of Biochemistry and Molecular Biology include Anthony Kossiakoff, Marcin Paduch and Vincent Lu. The work was supported by Caltech startup funds, the Albert Wyrick V Scholar Award of the V Foundation for Cancer Research, the 54th Mallinckrodt Scholar Award of the Edward Mallinckrodt, Jr. Foundation, and a Kimmel Scholar Award of the Sidney Kimmel Foundation for Cancer Research.

Written by Jon Nalick

Read More »

Size Matters: The Importance of Building Small Things

Strong materials, such as concrete, are usually heavy, and lightweight materials, such as rubber (for latex gloves) and paper, are usually weak and susceptible to tearing and damage. Julia R. Greer, professor of materials science and mechanics in Caltech's Division of Engineering and Applied Science, is helping to break that linkage. In Caltech's Beckman Auditorium at 8 p.m. on Wednesday, January 21, Greer will explain how we can give ordinary materials superpowers. Admission is free.

Q: What do you do?

A: I'm a materials scientist, and I work with materials whose dimensions are at the nanoscale. A nanometer is one-billionth of a meter, or about one-hundred-thousandth the diameter of a hair. At those dimensions, ordinary materials such as metals, ceramics, and glasses take on properties quite unlike their bulk-scale counterparts. Many materials become 10 or more times stronger. Some become damage-tolerant. Glass shatters very easily in our world, for example, but at the nanoscale, some glasses become deformable and less breakable. We're trying to harness these so-called size effects to create "meta-materials" that display these properties at scales we can see.

We can fabricate essentially any structure we like with the help of a special instrument that is like a tabletop microprinter, but uses laser pulses to "write" a three-dimensional structure into a tiny droplet of a polymer. The laser "sets" the polymer into our three-dimensional design, creating a minuscule plastic scaffold. We rinse away the unset polymer and put our scaffold in another machine that essentially wraps it in a very thin, nanometers-thick ribbon of the stuff we're actually interested in—a metal, a semiconductor, or a biocompatible material. Then we get rid of the plastic, leaving just the interwoven hollow tubular structure. The final structure is hollow, and it weighs nothing. It's 99.9 percent air.

We can even make structures nested within other structures. We recently started making hierarchical nanotrusses—trusses built from smaller trusses, like a fractal.

Q: How big can you make these things, and where might that lead us?

A: Right now, most of them are about 100 by 100 by 100 microns cubed. A micron is a millionth of a meter, so that is very small. And the unit cells, the individual building blocks, are very, very small—a few microns each. I recently asked my graduate students to create a demo big enough to be visible, so I could show it at seminars. They wrote me an object about 6 millimeters by 6 millimeters by about 100 microns tall. It took them about a week just to write the polymer, never mind the ribbon deposition and all the other steps.

The demo piece looks like a little white square from the top, until you hold it up to the light. Then a rainbow of colors plays across its surface, and it looks like a fine opal. That's because the nanolattices and the opals are both photonic crystals, which means that their unit cells are the right size to interact with light. Synthetic three-dimensional photonic crystals are relatively hard to make, but they could be extremely useful as high-speed switches for fiber-optic networks.
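
As a rough rule of thumb (ours, not a formula from the article), a periodic lattice reflects light strongly when the Bragg condition is satisfied; at normal incidence,

    \[ m\,\lambda \approx 2\, n_{\mathrm{eff}}\, d \]

where \(\lambda\) is the wavelength, \(d\) is the lattice spacing, \(n_{\mathrm{eff}}\) is the effective refractive index of the mostly-air structure (close to 1), and \(m\) is a small integer. For spacings from a few hundred nanometers up to a few microns, low-order reflections land in or near the visible band, and the reflected wavelength shifts with viewing angle, which is why the demo piece shimmers like an opal.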

Our goal is to figure out a way to mass produce nanostructures that are big enough to see. The possibilities are endless. You could make a soft contact lens that can't be torn, for example. Or a very lightweight, very safe biocompatible material that could go into someone's body as a scaffold on which to grow cells. Or you could use semiconductors to build 3-D logic circuits. We're working with Assistant Professor of Applied Physics and Materials Science Andrei Faraon [BS '04] to try to figure out how to simultaneously write a whole bunch of things that are all 1 centimeter by 1 centimeter.

Q: How did you get into this line of work? What got you started?

A: When I first got to Caltech, I was working on metallic nanopillars. That was my bread and butter. Nanopillars are about 50 nanometers to 1 micron in diameter, and about three times taller than they are wide. They were what we used to demonstrate, for example, that smaller becomes stronger—the pillars were stronger than the bulk metal by an order of magnitude, which is nothing to laugh at.

Nanopillars are awesome, but you can't build anything out of them. And so I always wondered if I could use something like them as nano-LEGOs and construct larger objects, like a nano-Eiffel Tower. The question I asked myself was: if each individual component had that very, very high strength, would the whole structure be incredibly strong? That was always in the back of my mind. Then I met some people from DARPA (Defense Advanced Research Projects Agency) and from HRL (formerly Hughes Research Laboratories) who were interested in some similar questions, specifically about using architecture in material design. My HRL colleagues were making microscale structures called micro-trusses, so we started a very successful DARPA-funded collaboration to make even smaller trusses with unit cells in the micron range. These structures were still far too big for my purposes, but they brought this work closer to reality.

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Written by Douglas Smith

Read More »

How the Brain Learns from the Past and Makes Good Decisions for the Future: A Tour of Neural Reinforcement Learning

It is often said that people who do not learn from history are doomed to repeat it. Not being one of those people requires a network of different brain regions to work in concert. On Wednesday, February 4 at 8 p.m. in Caltech's Beckman Auditorium, John P. O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center, will discuss our current understanding of how we learn from experience. Admission is free.

Q: What do you do?

A: I study how we learn from experience. Humans and other animals have to make decisions all the time to maximize their benefits and minimize danger. These decisions range from what to have for dinner, or whether to cross the road—which could have life-changing consequences if I'm wrong—to the selection of a life partner. I don't claim that "Who should I marry?" is equivalent to "Carrots or Brussels sprouts?" but we do think that many decisions share certain commonalities. So we look at very simple tasks that give us a window into how the brain solves problems to maximize future rewards.

We study brain activity by putting your head in an fMRI scanner. "MRI" stands for magnetic resonance imaging, and you've probably had one if you've had a sports injury. The "f" stands for "functional," and an fMRI scan detects changes in the oxygenation levels in the blood. If a certain part of the brain is active, its oxygen supply increases. We map those increases onto the brain's anatomy in 3-D while our volunteers perform some task that involves learning.

A task might be playing virtual slot machines. You have a choice of three machines, and we tell you one machine pays better than the others. So you choose one, press the button, and get instant feedback—you win or you lose. As you try to work out which machine is better, we monitor the patterns of activity in various parts of your brain. One of our goals is to find the part of the brain that represents the experienced value of the things we meet in the world—how good it feels to win, or how bad to lose.

We're also interested in how the brain changes its expectations. As you play the machines, you're constantly revising your estimate of which machine is better. We have computational models that we think represent how the brain internalizes feedback, and we're trying to find brain areas where the activity matches those models.
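
One standard model of this kind of trial-by-trial updating, offered here as an illustrative sketch rather than as the specific model used in O'Doherty's lab, is the delta rule from reinforcement learning: after each outcome, the learner nudges its value estimate for the chosen machine toward what it just experienced, in proportion to the "prediction error" (how much better or worse the outcome was than expected).

    import math, random

    # Illustrative delta-rule (Rescorla-Wagner-style) learner for the three-slot-machine
    # task described above. The payout probabilities and parameters are assumptions.
    true_payout_prob = [0.3, 0.5, 0.7]   # hypothetical win probabilities of the machines
    value = [0.0, 0.0, 0.0]              # the learner's running estimate of each machine's value
    alpha = 0.1                          # learning rate: how strongly one outcome revises the estimate
    beta = 3.0                           # how reliably the learner favors the higher-valued machine

    def choose(values):
        # softmax choice: better-valued machines are picked more often, but not always
        weights = [math.exp(beta * v) for v in values]
        r = random.uniform(0, sum(weights))
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                return i
        return len(values) - 1

    for trial in range(200):
        machine = choose(value)
        reward = 1.0 if random.random() < true_payout_prob[machine] else 0.0
        prediction_error = reward - value[machine]   # better or worse than expected?
        value[machine] += alpha * prediction_error   # revise the estimate toward the outcome

    print("learned values:", [round(v, 2) for v in value])

In studies of this general kind, trial-by-trial quantities produced by such a model, the prediction error in particular, are compared against the recorded brain signal to find regions whose activity tracks them.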

We think that understanding the neural circuits and computations that underpin our decision-making capacity may shed some light on certain psychiatric disorders, such as obsessive-compulsive disorder, depression, and addiction. On some level, all of these can be seen as decision-making gone wrong. Addiction, for example, involves a choice—voluntary or otherwise—to engage in a certain pattern of behavior.

Q: Setting aside clinical disorders, why do people make garden-variety bad decisions? What leads us to cross a busy road and almost not make it?

A: First, it's important to emphasize that humans are collectively pretty good at making decisions. That's why we've been so successful as a species. But there could be all sorts of reasons why an individual might make a poor decision. For example, you might underestimate how fast the traffic is moving.

My lab is particularly interested in how two distinct decision-making mechanisms may interact to produce bad outcomes. One mechanism is "goal-directed," in which you evaluate the consequences of your action in light of the goal you're pursuing. This requires a lot of mental energy. In contrast, "habit-controlled" decision-making is basically stimulus-response—you react to some cue from the environment. Habits can be very beneficial, because you can execute them quickly without thinking deeply. Once you learn to ride a bicycle, for example, you don't have to concentrate on keeping your balance. It becomes routine, and you can focus your mental energy on other things. Poor decisions can result when the habit system drives your behavior in situations where you really should be solving things in a goal-directed manner. This may be how addiction becomes compulsive. The goal-directed system says, "I don't want to take this drug any more," but the habitual system overrides it.
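
One common computational framing of this two-system picture, sketched here in simplified form under our own assumptions rather than as the lab's model, treats the habit system as "model-free" (it looks up a cached value for each action, built from past feedback) and the goal-directed system as "model-based" (it consults an internal model of what each action leads to and scores those outcomes against the current goal):

    # Simplified contrast between habitual ("model-free") and goal-directed ("model-based")
    # action selection. The scenario and all numbers are illustrative assumptions.

    # Habit system: a cached value per action, accumulated from past reinforcement.
    habit_values = {"take_drug": 0.9, "abstain": 0.2}

    def habitual_choice():
        # fast stimulus-response lookup; blind to what the agent currently wants
        return max(habit_values, key=habit_values.get)

    # Goal-directed system: an internal model of outcomes, evaluated against current goals.
    outcome_model = {"take_drug": "short_term_relief", "abstain": "long_term_health"}
    goal_value = {"short_term_relief": 0.1, "long_term_health": 1.0}

    def goal_directed_choice():
        # slower: consider each action's predicted outcome and score it against the goal
        return max(outcome_model, key=lambda action: goal_value[outcome_model[action]])

    print("habit system picks:   ", habitual_choice())       # take_drug
    print("goal-directed picks:  ", goal_directed_choice())  # abstain

When the cached habit values and the goal-directed evaluation disagree, whichever system is controlling behavior at that moment wins, which is the structure of the drug example above.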

Q: How did you get into this line of work?

A: Even as a kid I was interested in science and its unsolved mysteries. I was actually keen on astronomy as a teenager and really considered going in that direction. Then I started getting interested in how computers work, which led me to start wondering about how the most complex computer that we know of works, namely our brain. So I basically had a career choice between studying the universe or studying the brain, which are probably the world's two greatest outstanding mysteries. I decided to take my chances on the brain.

At the time, the field of cognitive neuroscience was based on the paradigm that the brain is like a digital computer, and brain processes were modeled in essentially the same way. There were lots of studies of memory, such as recalling lists of words, but very little was known about how the brain assigns greater value to some things than to others. But it's a really fundamental question, because the ability to work out whether something is good or bad—and to maximize behaviors that lead to good things and avoid bad things—is critical for survival. Digital computers typically don't make value judgments of that sort unless they are programmed to do so. So that's what excited me, trying to unlock how it is that the brain assigns value to things in the world.

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Written by Douglas Smith

Original Article: http://www.caltech.edu/news/how-brain-learns-past-and-makes-good-decisions-future-tour-neural-reinforcement-learning-45565


Read More »

A new sleep study may open your eyes to meditation

Focusing on the present has positive effects on daytime fatigue and depression, two conditions that often result from the poor sleep of older adults

Having trouble turning in? Perhaps a new Keck Medicine of USC study will help you sleep.

Older adults experiencing sleep disturbances found more relief using a mindfulness meditation program than by using a sleep hygiene education program teaching sleep improvement skills, researchers have found.

In a randomized clinical trial of 49 older adults, scientists from USC and UCLA discovered that participants in the group-based mindfulness meditation program reported better outcomes than those enrolled in a group-based sleep hygiene program.

The research indicates that focusing attention and awareness on the present moment without judgment or reacting to thoughts — as taught through mindfulness meditation — has positive effects not just on sleep but on daytime fatigue and depression, two conditions that often result from poor sleep.

"We were surprised to find that the effect of mindfulness meditation on sleep quality was large." -David Black

“We were surprised to find that the effect of mindfulness meditation on sleep quality was large and above and beyond the effect of the sleep hygiene education program,” said David Black, corresponding author of the study and assistant professor of preventive medicine at the Keck School of Medicine and director of the American Mindfulness Research Association.

“Mindfulness meditation appears to have clinical importance by serving to reduce sleep problems among the growing population of older adults,” Black concluded, “and this effect on sleep appears to carry over into reducing daytime fatigue and depression symptoms.”



Do not disturb

Fifty percent of adults over the age of 55 will experience sleep disturbances, which include trouble falling asleep and waking in the middle of the night.

According to the National Sleep Foundation, the sleep needs of older adults do not diminish with age, and many older adults report dissatisfaction with their sleep and tiredness during the day.

Black’s team compared two structured conditions: the Mindful Awareness Practice program at UCLA, a six-week program of two-hour weekly sessions that introduces mindfulness meditation to participants, and a sleep hygiene program providing improvement strategies such as relaxing before bedtime, monitoring sleep behavior, and not eating before sleeping. Outcomes were measured through self-reported surveys.

Black’s future research will focus on combining mindfulness meditation with a sleep hygiene program to determine the usefulness of meshing aspects of both programs.

The study was published online in JAMA Internal Medicine.

The research team includes Gillian O’Reilly, a doctoral student in the Department of Preventive Medicine at the Keck School, and Richard Olmstead, Elizabeth Breen, and Michael Irwin of UCLA.

Funding was provided by the National Institutes of Health, the National Institute of Mental Health, the UCLA Older Americans Independence Center, the Cousins Center for Psychoneuroimmunology at UCLA, the Pettit Family Foundation, and the Furlotti Family Foundation.


Read More »

Student racers go high-tech in search for speed


An all-girl team of high school students competing in the F1 in Schools Technology Challenge has consulted with University of Queensland neuroscientists ahead of the national final at the Australian Grand Prix next week.

The four Redcliffe State High School students who form the Infinite Racing team turned to UQ’s Queensland Brain Institute (QBI) to use high-speed cameras capable of filming at 1,000 frames per second to see how their model car was performing.

QBI’s Professor Mandyam Srinivasan, a visual and sensory neuroscientist studying the flight behaviour of animals, said the students hoped to make their car design more aerodynamic and optimise the launch.

“They are being supported by Boeing to help make their design more aerodynamic, and our lab is also working with the company, so the team got in touch with us because they knew we use high speed cameras to film our birds,” Professor Srinivasan said.

Scientists at the Neuroscience of Vision and Aerial Robotics Laboratory at QBI are studying animal flight to determine how to improve aircraft flight efficiency, much like the high school team is improving its car’s performance.

Teams race cars 20 cm long, powered by a small canister of compressed CO2; the cars can accelerate to 80 km/h in just 0.4 seconds on a short run down a 25-metre track.

“There’s only one instant, right at the start, when thrust is generated from the car’s CO2 canister to propel the car, so maximising the efficiency of the launch from the starting gate is critical for their speed,” Professor Srinivasan said.
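
A quick back-of-envelope check (ours, assuming roughly constant thrust while the canister fires) shows why those first fractions of a second dominate the race:

    # Back-of-envelope check on the launch figures quoted above (80 km/h in 0.4 s, 25 m track).
    top_speed = 80 / 3.6                 # 80 km/h in metres per second (~22.2 m/s)
    thrust_time = 0.4                    # seconds of usable thrust, per the figures above
    track_length = 25.0                  # metres

    acceleration = top_speed / thrust_time               # assumes roughly constant thrust
    g_force = acceleration / 9.81
    thrust_distance = 0.5 * acceleration * thrust_time**2

    print(f"acceleration ~ {acceleration:.0f} m/s^2 (about {g_force:.1f} g)")
    print(f"~{thrust_distance:.1f} m of the {track_length:.0f} m track is covered under power;")
    print("the rest of the run is essentially a coast, so launch losses are never recovered")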

The team is led by year 12 student Freya King and will compete in the national final at the Australian Grand Prix in Melbourne from 12-15 March.


Miss King said QBI’s help improved the team’s understanding of how the car performs under the forces generated by its rapid acceleration at launch.

“This testing helped us understand how the car and the suspension system react to the initial force at the start of the race,” Miss King said.

“We have been able to tweak our launch to help transfer some of the many forces acting on our car at this part of the race, to maximise our acceleration down the track,” she said.

The Infinite Racing team, formed in 2013, finished second in the Queensland state final to make this year’s national final for the second consecutive year. The winner of the national event will go to the world final.

“It is very exciting knowing we are going to go to the track every day to compete, and we also get to watch the race; it’s an amazing experience that the whole team will never forget,” Miss King said.

The competition sees students using a range of science, technology, engineering and mathematics skills to make the cars.

“I have always wanted to be an engineer and doing this program has made me more excited to become one,” Miss King said.

Original Article: http://www.uq.edu.au/news/article/2015/03/student-racers-go-high-tech-search-speed
Read More »