Blogger Themes

Sunday, 24 June 2012

Energy: Novel Power Plants Could Clean Up Coal

Engineerblogger
June 24, 2012


Cleaner coal: This pilot plant in Italy uses pressurized oxygen to help reduce emissions from burning coal. Credit: Unity Power Alliance

A pair of new technologies could reduce the cost of capturing carbon dioxide from coal plants and help utilities comply with existing and proposed environmental regulations, including requirements to reduce greenhouse-gas emissions. Both involve burning coal in the presence of pure oxygen rather than air, which is mostly nitrogen. Major companies including Toshiba, Shaw, and Itea have announced plans to build demonstration plants for the technologies in coming months.

The basic idea of burning fossil fuels in pure oxygen isn't new. The drawback is that it's more expensive than conventional coal plant technology, because it requires additional equipment to separate oxygen and nitrogen. The new technologies attempt to offset at least some of this cost by improving efficiency and reducing capital costs in other areas of a coal plant. Among other things, they simplify the after-treatment required to meet U.S. Environmental Protection Agency regulations.

One of the new technologies, which involves pressurizing the oxygen, is being developed by a partnership between ThermoEnergy, based in Worcester, Massachusetts, and the major Italian engineering firm Itea. A version of it has been demonstrated at a small plant in Singapore that can generate about 15 megawatts of heat (enough for about five megawatts of electricity).

The technology simplifies the clean-up of flue gases; for example, some pollutants are captured in a glass form that results from high-temperature combustion. It also has the ability to quickly change power output, going from 10 percent to 100 percent of its generating capacity in 30 minutes, says Robert Marrs, ThermoEnergy's VP of business development. Conventional coal plants take several hours to do that. More flexible power production could accommodate changes in supply from variable sources of power like wind turbines and solar panels.

Marrs says that these advantages, along with the technology's higher efficiency at converting the energy in coal into electricity, could make it roughly as cost-effective as retrofitting a coal plant with new technology to meet current EPA regulations, while producing a stream of carbon dioxide that's easy to capture. The technology also reduces net energy consumption at coal plants, because the water produced by combustion is captured and can be recycled. This makes it attractive for use in drought-prone areas, such as some parts of China.

The other technology, being developed by the startup Net Power along with Toshiba, the power producer Exelon, and the engineering firm Shaw, is more radical, and it's designed to make coal plants significantly more efficient than they are today—over 50 percent efficient, versus about 30 percent. The most efficient power plants today use a pair of turbines: a gas turbine and a steam turbine that runs off the gas turbine's exhaust heat. The new technology makes use of the exhaust by directing part of the carbon dioxide in the exhaust stream back into the gas turbine, doing away with the steam turbine altogether. That helps offset the cost of the oxygen separation equipment. The carbon dioxide that isn't redirected to the turbine is relatively pure compared to exhaust from a conventional plant, and it is already highly pressurized, making it suitable for sequestering underground.

The technology was originally conceived to work with gasified coal, but the company is planning to demonstrate it first with natural gas, which is simpler because it doesn't require a gasifier. The company says the technology will cost about the same as conventional natural gas plants. Shaw is funding a 25-megawatt demonstration power plant that is scheduled to be completed by mid-2014. Net Power plans to sell the carbon dioxide to oil companies to help improve oil production.
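A back-of-the-envelope comparison, using only the efficiency figures quoted above, shows why the jump matters: at 30 percent efficiency, one kilowatt-hour of electricity requires about 1/0.30 ≈ 3.3 kilowatt-hours of fuel energy, while at 50 percent it requires 1/0.50 = 2.0 kilowatt-hours, so fuel use and raw carbon dioxide production per kilowatt-hour of electricity fall by roughly 40 percent before any capture takes place.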

The technologies may be "plausible on paper," says Ahmed Ghoniem, a professor of mechanical engineering at MIT, but questions remain "until things get demonstrated." (Ghoniem has consulted for ThermoEnergy.) The economics are still a matter of speculation. For one thing, it is "an open question" how much money the technologies could save over conventional pollution control techniques, he says. As a rule, "any time you add carbon dioxide capture, you increase costs," he points out. "The question is by how much." Selling the carbon dioxide to enhance oil recovery can help justify the extra costs, he says, and retrofitting old power plants might help create an initial market. But he says the new technologies won't become widespread unless a price on carbon dioxide emissions is widely adopted.

Ghoniem adds that even if the technology for capturing carbon proves economical, it's still necessary to demonstrate that it's feasible and safe to permanently sequester carbon underground. The challenges of doing that were highlighted by a recent study suggesting that earthquakes could cause carbon dioxide to leak out.

Source: Technology Review

Science: Breaking the limits of classical physics

Engineerblogger
June 24, 2012


In the quantum optical laboratories at the Niels Bohr Institute, researchers have conducted experiments showing that light breaks with classical physical principles. The studies show that light can have both an electric and a magnetic field, but not at the same time. That is to say, light has quantum mechanical properties.

With simple arguments, researchers show that nature is complicated! Researchers from the Niels Bohr Institute have carried out a simple experiment that demonstrates that nature violates common sense – the world is different than most people believe. The experiment illustrates that light does not behave according to the principles of classical physics, but that light has quantum mechanical properties. The new method could be used to study whether other systems behave quantum mechanically. The results have been published in the scientific journal Physical Review Letters.

In physics there are two categories: classical physics and quantum physics. In classical physics, objects, e.g. a car or a ball, have a position and a velocity. This is how we classically look at our everyday world. In the quantum world objects can also have a position and a velocity, but not at the same time. At the atomic level, quantum mechanics says that nature behaves quite differently than you might think. It is not just that we do not know the position and the velocity, rather, these two things simply do not exist simultaneously. But how do we know that they do not exist simultaneously? And where is the border of these two worlds? Researchers have found a new way to answer these questions.
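The standard way to formalize that claim (textbook quantum mechanics rather than anything specific to this experiment) is through non-commuting operators: position and momentum satisfy [x̂, p̂] = iħ, which forces the uncertainty relation Δx·Δp ≥ ħ/2, so no state of a particle assigns sharp values to both quantities at once. The experiment described here makes the analogous statement for light, whose electric and magnetic field amplitudes likewise correspond to non-commuting observables.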

Light on quantum mechanics

“Our goal is to use quantum mechanics in a new way. It is therefore important for us to know that a ‘system’ really behaves in a way that has no classical explanation. To this end, we first examined light,” explains Eran Kot, PhD student in the Quantum Optics research group at the Niels Bohr Institute at the University of Copenhagen.

Based on a series of experiments in the quantum optics laboratories, they examined the state of light. In classical physics, light possesses both an electric and a magnetic field.

“What our study demonstrated was that light can have both an electric and a magnetic field, but not at the same time. We thus provide a simple proof that an experiment breaks the classical principles. That is to say, we showed light possesses quantum properties, and we can expand this to other systems as well,” says Eran Kot.

Classical and non-classical mechanics

The aim of the research is partly to understand the world at a fundamental level, but there is also a practical challenge in being able to exploit quantum mechanics in larger contexts. For light it is no great surprise that it behaves quantum mechanically, but the methods that have been developed can also be used to study other systems.

“We are endeavoring to develop future quantum computers and we therefore need to understand the borders for when something behaves quantum mechanically and when it is classical mechanics,” says professor of quantum physics Anders S. Sørensen, explaining that quantum computing must necessarily be comprised of systems with non-classical properties.

Source: University of Copenhagen

Saturday, 23 June 2012

Mold-making technology could speed up product development

Engineerblogger
June 23, 2012

3D solid model for a vacuum form tool created in Mechanical Desktop. Credit: porenstein.com

 A new way of rapidly producing prototype molds for vacuum-forming processes could help make product development quicker and cheaper.

The system uses a grid of adjustable pins to quickly create different shapes that function as molds for plastic vacuum forming, rather than the now-conventional method of rapid prototyping that involves building molds layer by layer (additive manufacturing).

The inventor of the prototype technology, Brunel University student Patrick Bion, said the system would enable product developers to make changes to prototypes much more quickly than with additive manufacturing.

‘Many companies now use additive manufacturing for making molds, but rather than having four hours of making a mould to then vacuum form onto, and then needing to change something and repeat that process, we’re looking for instantaneous molding,’ he said.

‘You can update the CAD data and have the pins reconfigured in a matter of 18 minutes. We’re trying to redefine “rapid” in rapid prototyping while making the technology attainable to industry.’

Once commercialized, the system should be much cheaper than existing technology that functions in a similar way but positions the pins hydraulically rather than electrically, Bion added.

‘That system weighs about 20 tonnes and costs millions of pounds. The whole objective was to develop a system architecture that would bridge the gap between reconfigurable tooling technology and commerce.

‘So we’re trying to develop a low-cost, compact system that can compete financially with additive manufacturing systems.’

The 2mm pins are adjusted by a linear actuator driven by CAD data and held in place using polyethylene foam, then clamped into place for the vacuum forming before being covered by a neoprene or silicone interpolation material to smooth the surface of the mold.

Bion hopes to develop the scalable system to produce larger and more detailed molds using smaller pins.

‘The next step is finding a more technically advanced foam that will enable us to position those pins with higher accuracy,’ he said. ‘At the moment, it can position them within 0.5mm.’
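As a rough illustration of the pin-setting step (a hypothetical sketch, not Bion's software; the pin pitch and height resolution below are simply taken from the figures quoted in the article), CAD surface data can be sampled at each pin position and snapped to the actuator's smallest height increment:

import numpy as np

PIN_PITCH_MM = 2.0      # spacing between pin centres (from the 2mm pins mentioned above)
HEIGHT_STEP_MM = 0.5    # assumed smallest height increment, matching the quoted 0.5mm accuracy

def pin_heights(surface, size_mm=(100.0, 100.0)):
    """Sample a target surface z = f(x, y) at each pin and quantize the heights."""
    nx, ny = int(size_mm[0] / PIN_PITCH_MM), int(size_mm[1] / PIN_PITCH_MM)
    x, y = np.meshgrid(np.arange(nx) * PIN_PITCH_MM, np.arange(ny) * PIN_PITCH_MM, indexing="ij")
    z = surface(x, y)                                       # ideal heights from the CAD model
    return np.round(z / HEIGHT_STEP_MM) * HEIGHT_STEP_MM    # snap to the actuator resolution

# Example: a shallow 20mm-high dome centred on a 100mm x 100mm pin bed
dome = lambda x, y: 20.0 * np.exp(-((x - 50.0)**2 + (y - 50.0)**2) / 800.0)
print(pin_heights(dome).shape)    # a 50 x 50 grid of quantized pin heights

Reconfiguring the tool for a design change then amounts to recomputing this grid from the updated CAD data and driving each pin to its new quantized height, which is the quick reconfiguration the article describes.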

Source: The Engineer

Megapixel Camera? Try Gigapixel: Engineers develop revolutionary camera

Engineerblogger
June 23, 2012


The camera

By synchronizing 98 tiny cameras in a single device, electrical engineers from Duke University and the University of Arizona have developed a prototype camera that can create images with unprecedented detail.

The camera’s resolution is five times better than 20/20 human vision over a 120-degree horizontal field.

The new camera has the potential to capture up to 50 gigapixels of data, which is 50,000 megapixels. By comparison, most consumer cameras are capable of taking photographs with sizes ranging from 8 to 40 megapixels. Pixels are individual “dots” of data – the higher the number of pixels, the better the resolution of the image.

The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public.

Details of the new camera were published online in the journal Nature. The team’s research was supported by the Defense Advanced Research Projects Agency (DARPA).

The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke’s Pratt School of Engineering, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp.

“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later."

“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”

The software that combines the input from the microcameras was developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.

“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive."

“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” Gehm said. “A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
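Gehm's overlap-and-stitch idea can be pictured with a toy mosaic. The sketch below is only an illustration of the principle, not the Duke camera's actual processing pipeline (real microcamera images also need registration, distortion correction and per-camera calibration); it simply accumulates overlapping tiles onto a shared canvas and averages wherever they overlap:

import numpy as np

def stitch(tiles, positions, canvas_shape):
    """Accumulate overlapping tiles onto one canvas and average where they overlap."""
    canvas = np.zeros(canvas_shape)
    weight = np.zeros(canvas_shape)
    for tile, (r, c) in zip(tiles, positions):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    return canvas / np.maximum(weight, 1.0)   # averaging blends the overlap regions

# Toy example: a 100 x 100 "scene" covered by four 60 x 60 tiles with 20-pixel overlaps
scene = np.random.rand(100, 100)
positions = [(r, c) for r in (0, 40) for c in (0, 40)]
tiles = [scene[r:r + 60, c:c + 60] for r, c in positions]
print(np.allclose(stitch(tiles, positions, scene.shape), scene))   # True

In the real instrument each tile comes from a different microcamera behind the shared objective lens, and the processor's job is the scaled-up version of this reassembly.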

The prototype camera itself is two and a half feet square and 20 inches deep. Interestingly, only about three percent of the camera is made of the optical elements, while the rest is made of the electronics and processors needed to assemble all the information gathered. Obviously, the researchers said, this is the area where additional work to miniaturize the electronics and increase their processing ability will make the camera more practical for everyday photographers.

“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said. “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”

Co-authors of the Nature report with Brady and Gehm include Steve Feller, Daniel Marks, and David Kittle from Duke; Dathon Golish and Esteban Vera from Arizona; and Ron Stack from Distant Focus Corp.



Source: Duke University

Nanotechnology: Bringing down the cost of fuel cells

Engineerblogger
June 23, 2012


Zhen (Jason) He, assistant professor of mechanical engineering (left), and Junhong Chen, professor of mechanical engineering, display a strip of carbon that contains the novel nanorod catalyst material they developed for microbial fuel cells. (Photo by Troye Fox)

Engineers at the University of Wisconsin-Milwaukee (UWM) have identified a catalyst that provides the same level of efficiency in microbial fuel cells (MFCs) as the currently used platinum catalyst, but at 5% of the cost.

Since more than 60% of the investment in making microbial fuel cells is the cost of platinum, the discovery may lead to much more affordable energy conversion and storage devices.

The material – nitrogen-enriched iron-carbon nanorods – also has the potential to replace the platinum catalyst used in hydrogen-producing microbial electrolysis cells (MECs), which use organic matter to generate a possible alternative to fossil fuels.

“Fuel cells are capable of directly converting fuel into electricity,” says UWM Professor Junhong Chen, who created the nanorods and is testing them with Assistant Professor Zhen (Jason) He. “With fuel cells, electrical power from renewable energy sources can be delivered where and when required, cleanly, efficiently and sustainably.”

The scientists also found that the nanorod catalyst outperformed a graphene-based alternative being developed elsewhere. In fact, the pair tested the material against two other contenders to replace platinum and found the nanorods’ performance consistently superior over a six-month period.

The nanorods have been proved stable and are scalable, says Chen, but more investigation is needed to determine how easily they can be mass-produced. More study is also required to determine the exact interaction responsible for the nanorods’ performance.

The work was published in March in the journal Advanced Materials.

The right recipe

MFCs generate electricity while removing organic contaminants from wastewater. On the anode electrode of an MFC, colonies of bacteria feed on organic matter, releasing electrons that create a current as they break down the waste.

On the cathode side, the most important reaction in MFCs is the oxygen reduction reaction (ORR). Platinum speeds this slow reaction, increasing efficiency of the cell, but it is expensive.
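For reference, the reaction in question is standard electrochemistry rather than anything specific to this study: in its acidic form the oxygen reduction reaction is O₂ + 4H⁺ + 4e⁻ → 2H₂O (in alkaline or neutral media, O₂ + 2H₂O + 4e⁻ → 4OH⁻). It is this sluggish four-electron transfer that the cathode catalyst, whether platinum or the new nanorod material, has to accelerate.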

Microbial electrolysis cells (MECs) are related to MFCs. However, instead of electricity, MECs produce hydrogen. In addition to harnessing microorganisms at the anode to decompose organic matter, MECs also rely on a platinum catalyst at their cathodes.

Chen and He’s nanorods incorporate the best characteristics of other reactive materials, with nitrogen attached to the surface of the carbon rod and a core of iron carbide. Nitrogen’s effectiveness at improving the carbon catalyst is already well known. Iron carbide, also known for its catalytic capabilities, interacts with the carbon on the rod surface, providing “communication” with the core. Also, the material’s unique structure is optimal for electron transport, which is necessary for ORR.

When the nanorods were tested for potential use in MECs, the material did a better job than the graphene-based catalyst material, but it was still not as efficient as platinum.

“But it shows that there could be more diverse applications for this material, compared to graphene,” says He. “And it gave us clues for why the nanorods performed differently in MECs.”

Research with MECs was published in June in the journal Nano Energy.

Source: University of Wisconsin-Milwaukee


Basque scientists control light at a nanometric scale with graphene

Engineerblogger
June 23, 2012

Lab at CIC nanoGUNE

Basque research groups are part of the scientific team which has, for the first time, trapped and confined light in graphene, an achievement that makes graphene the most promising candidate for processing optical information at nanometric scales and that could open the door to a new generation of nano-sensors with applications in medicine, energy and computing.

The Cooperative Research Centre nanoGUNE, along with the Institute of Physical Chemistry “Rocasolano” (Madrid) and the Institute of Photonic Sciences (Barcelona), has led a study which opens an entirely new field of research and provides a viable avenue for manipulating light in an ultra-rapid manner, something that was not possible until now.

Other Basque research centres, like the Physical Materials centre CFM-CSIC-UPV/EHU and the Donostia International Physics Center (DIPC), as well as the Ikerbasque Foundation and the Graphenea company, have also collaborated in the research, which has been published in the journal Nature.

The scientists involved in this study have managed, for the first time, to see guided light with nanometric precision on graphene, a material made up of a single layer of carbon only one atom thick. This demonstration proves what theoretical physicists had predicted for some time: that it is possible to trap and manipulate light very efficiently using graphene as a new platform for processing optical information and for ultra-sensitive detection.

This ability to trap light in extraordinarily small volumes could lead to a new generation of nano-sensors with applications in several areas such as medicine, bio-detection, solar cells and light sensors, as well as quantum information processors.

Source: nanoBasque


New technique allows simulation of noncrystalline materials

Engineerblogger
June 23, 2012


Disordered materials, such as this slice of amorphous silicon (a material often used to make solar cells), have been very difficult to model mathematically. New mathematical methods developed at MIT should help with such modeling. Image: Wikimedia commons/Asad856

A multidisciplinary team of researchers at MIT and in Spain has found a new mathematical approach to simulating the electronic behavior of noncrystalline materials, which may eventually play an important part in new devices including solar cells, organic LED lights and printable, flexible electronic circuits.

The new method uses a mathematical technique that has not previously been applied in physics or chemistry. Even though the method uses approximations rather than exact solutions, the resulting predictions turn out to match the actual electronic properties of noncrystalline materials with great precision, the researchers say. The research is being reported in the journal Physical Review Letters, published June 29.

Jiahao Chen, a postdoc in MIT’s Department of Chemistry and lead author of the report, says that finding this novel approach to simulating the electronic properties of “disordered materials” — those that lack an orderly crystal structure — involved a team of physicists, chemists and mathematicians at MIT, and a computer scientist at the Universidad Autónoma de Madrid. The work was funded by a grant from the National Science Foundation aimed specifically at fostering interdisciplinary research.

The project used a mathematical concept known as free probability applied to random matrices — previously considered an abstraction with no known real-world applications — that the team found could be used as a step toward solving difficult problems in physics and chemistry. “Random-matrix theory allows us to understand how disorder in a material affects its electrical properties,” Chen says.

Typically, figuring out the electronic properties of materials from first principles requires calculating certain properties of matrices — arrays of numbers arranged in columns and rows. The numbers in the matrix represent the energies of electrons and the interactions between electrons, which arise from the way molecules are arranged in the material.

To determine how physical changes, such as shifting temperatures or adding impurities, will affect such materials would normally require varying each number in the matrix, and then calculating how this changes the properties of the matrix. With disordered materials, where the values of the numbers in the matrix are not precisely known to begin with, this is a very difficult mathematical problem to solve. But, Chen explains, “Random-matrix theory gives a way to short-circuit all that,” using a probability distribution instead of deriving all the precise values.

The new method makes it possible to translate basic information about the amount of disorder in the molecular structure of a material — that is, just how messy its molecules are — into a prediction of its electrical properties.

“There is a lot of interest in how organic semiconductors can be used to make solar cells” as a possible lower-cost alternative to silicon solar cells, Chen says. In some types of these devices, “all the molecules, instead of being perfectly ordered, are all jumbled up.” These disordered materials are very difficult to model mathematically, but this new method could be a useful step in that direction, he says.

Essentially, what the method developed by Chen and his colleagues does is take a matrix problem that is too complex to solve easily by traditional mathematical methods and “approximates it with a combination of two matrices whose properties can be calculated easily,” thus sidestepping the complex calculations that would be required to solve the original problem, he explains.
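A minimal numerical sketch of that idea (an illustration of free convolution in general, not the MIT team's published algorithm or code): for large matrices, the eigenvalue distribution of A + QBQᵀ, with Q a random rotation, approximates the free convolution of the separate spectra of A and B. Taking A to be an ordered one-dimensional hopping matrix and B random on-site disorder, the "free" estimate can be compared directly with exact diagonalization of the disordered Hamiltonian A + B:

import numpy as np

rng = np.random.default_rng(0)
N = 500
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # ordered part: nearest-neighbour hopping
B = np.diag(rng.uniform(-2.0, 2.0, N))                          # disordered part: random site energies

exact = np.linalg.eigvalsh(A + B)        # eigenvalues of the full disordered Hamiltonian

# Free-probability-style estimate: decouple A and B with a Haar-random rotation Q
Q, R = np.linalg.qr(rng.standard_normal((N, N)))
Q = Q * np.sign(np.diag(R))              # sign fix so Q is Haar-distributed
free_estimate = np.linalg.eigvalsh(A + Q @ B @ Q.T)

# Compare the two densities of states with coarse histograms
bins = np.linspace(-4.5, 4.5, 40)
h_exact, _ = np.histogram(exact, bins, density=True)
h_free, _ = np.histogram(free_estimate, bins, density=True)
print("mean absolute difference in density of states:", np.abs(h_exact - h_free).mean())

Both eigenvalue problems are cheap at this size; the practical appeal described in the article is that the spectra of the two simpler pieces can often be characterized without full diagonalization, so the combined density of states can be predicted from a probability distribution for the disorder rather than from any particular disordered matrix.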

Amazingly, the researchers found that their method, although it yields an approximation instead of the real solution, turns out to be highly accurate. When the approximation is plotted on a graph along with the exact solution, “you couldn’t tell the difference with the naked eye,” Chen says.

While mathematicians have used such methods in the abstract, “to our knowledge, this is the first application of this theory to chemistry,” Chen says. “It’s been very much in the domain of pure math, but we’re starting to find real applications. It’s exciting for the mathematicians as well.”

The incredible accuracy of the method, which uses a technique called free convolution, led the team to investigate why it was so accurate, which has led in turn to new mathematical discoveries in free probability theory. The method derived for estimating the amount of deviation between the precise calculation and the approximation is new, Chen says, “driven by our questions” for the mathematicians on the team. “It’s a happy accident that it worked out as well as it did,” he adds.

“Our results are a promising first step toward highly accurate solutions of much more sophisticated models,” Chen says. Ultimately, an extension of such methods could lead to “reducing the overall cost of computational modeling of next-generation solar materials and devices.”

David Leitner, a professor of theoretical and biophysical chemistry and chemical physics at the University of Nevada at Reno who was not involved in this work, says the potential practical impact of this research “is great, given the challenge faced in calculating the electronic structure of disordered materials and their practical importance.” He adds that the key test will be to see if this approach can be extended beyond the one-dimensional systems described in this paper to systems more applicable to actual devices. “Extension to higher dimensions is critical in assessing the work’s significance,” he says.

Such calculations “remain a big challenge,” Leitner says, and further work on this approach to the problem “could be very fruitful.”

In addition to Chen, the team included MIT associate professor of chemistry Troy Van Voorhis, chemistry graduate students Eric Hontz and Matthew Welborn and postdoc Jeremy Moix, MIT mathematics professor Alan Edelman and graduate student Ramis Movassagh, and computer scientist Alberto Suárez of the Universidad Autónoma de Madrid.

Source: MIT

Thursday, 21 June 2012

Stars, Jets and Batteries – multi-faceted magnetic phenomenon confirmed in the laboratory for the first time

Engineerblogger
June 21, 2012



Magnetic instabilities play a crucial role in the emergence of black holes; they regulate the rotation rate of collapsing stars and influence the behavior of cosmic jets. In order to improve understanding of the underlying mechanisms, laboratory experiments on earth are necessary. At the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), confirmation of such a magnetic instability – the Tayler instability – was successfully achieved for the first time in collaboration with the Leibniz Institute for Astrophysics in Potsdam (AIP). The findings should be able to facilitate construction of large liquid-metal batteries, which are under discussion as cheap storage facilities for renewable energy.

The Tayler instability is being discussed by astrophysicists in reference to, among other things, the emergence of neutron stars. Neutron stars, according to theory, would have to rotate much faster than they actually do. The mysterious braking effect has meanwhile been attributed to the influence of the Tayler instability, which reduces the rotation rate from 1,000 rps down to approximately 10 to 100 rps. Structures similar in appearance to the double helix of DNA have occasionally been observed in cosmic jets, i.e. streams of matter which emanate vertically out of the rotating accretion discs near black holes.

Liquid Metal Batteries – Energy Storage Facilities for the Future?

The Tayler instability also affects large-scale liquid metal batteries, which, in the future, could be used for renewable energy storage.

The magnetic phenomenon, observed for the first time in the laboratory at the Helmholtz-Zentrum Dresden-Rossendorf, was predicted in theory by R.J. Tayler in 1973. The Tayler instability always appears when a sufficiently strong current flows through an electrically conductive liquid. Starting from a certain magnitude, the interaction of the current with its own magnetic field creates a vortical flow structure. Ever since their involvement with liquid-metal batteries, HZDR scientists have been aware of the fact that this phenomenon can take effect not only in space but on earth as well. The future use of such batteries for renewable energy storage would be more complicated than originally thought due to the emergence of the Tayler instability during charging and discharging.

American scientists have developed the first prototypes and assume that the system could be easily scaled up. The HZDR physicist Dr. Frank Stefani is skeptical: “We have calculated that, starting at a certain current density and battery dimension, the Tayler instability emerges inevitably and leads to a powerful fluid flow within the metal layers. This stirs the liquid layers, and eventually a short circuit occurs.” In the current edition of Physical Review Letters, the team directed by Stefani – together with colleagues from AIP led by Prof. Günther Rüdiger – reported on the first successful experiment demonstrating the Tayler instability in a liquid metal. A room-temperature liquid alloy of indium, gallium and tin was used, through which currents as high as 8,000 amps were sent. In order to exclude other causes for the observed instability, such as irregularities in conductivity, the researchers intentionally omitted velocity sensors; instead, they used 14 highly sensitive magnetic field sensors. The collected data indicate the growth rate and critical flow effects of the Tayler instability, and they correspond remarkably well to the numerical predictions.

How liquid batteries work

Working principle of a liquid metal battery (pictures: Tom Weier, HZDR)

In the smaller American prototypes the Tayler instability does not occur at all, but liquid batteries have to be quite large in order to be economically feasible. Frank Stefani explains: “I believe that liquid-metal batteries with a base area measured in square meters are entirely possible. They can be manufactured quite easily in that one simply pours the liquids into a large container. They then independently organize their own layer structure and can be recharged and discharged as often as necessary. This makes them economically viable. Such a system can easily cope with highly fluctuating loads.” Liquid-metal batteries could thus release stored surplus power whenever the sun is not shining or the wind turbines are standing still.

The basic principle behind a liquid-metal battery is quite simple: since liquid metals are conductive, they can serve directly as anodes and cathodes. When one pours two suitable metals into a container so that the heavy metal is below and the lighter metal above, and then separates the two metals with a layer of molten salt, the arrangement becomes a galvanic cell. The metals have a tendency to form an alloy, but the molten salt in the middle prevents them from direct mixing. Therefore, the atoms of one metal are forced to release electrons. The ions thus formed wander through the molten salt. Arriving at the site of the other metal, these ions accept electrons and alloy with the second metal. During the charging process, this process is reversed and the alloy is broken up into its original components. In order to avoid the Tayler instability within big batteries – meaning a short circuit – Stefani suggests an internal tube through which the electrical current can be guided in reverse direction. This allows the capacity of the batteries to be considerably increased.
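Written schematically (a generic textbook picture, not tied to any particular metal pair, since the article does not name a specific chemistry), the discharge of such a cell looks like this, with A the lighter metal on top and B the heavier metal at the bottom:

At the top electrode: A → A^z+ + z e⁻ (the lighter metal gives up electrons)
Through the molten salt: the A^z+ ions migrate downwards
At the bottom electrode: A^z+ + z e⁻ → A (alloyed into B)

Charging drives the same steps in reverse, pulling A back out of the alloy; Stefani's point is that the large charging and discharging currents needed for a big cell are exactly what can trigger the Tayler instability.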

Cosmic magnetic fields in a laboratory experiment

Lab simulation of the Tayler instability: magnetic field sensors detecting the magnetic fields. The Tayler instability occurs whenever the electrical current sent through a liquid metal is high enough. (picture: AIP/HZDR)

In 1999, Rossendorf researchers, together with colleagues from Riga, achieved the first-ever experimental proof of the homogeneous dynamo effect, which is responsible for the creation of the magnetic fields of both the earth and the sun. In a joint project with the Leibniz-Institut für Astrophysik Potsdam, the so-called magneto-rotational instability, which enables the growth of stars and black holes, was recreated in the laboratory in 2006. In the context of the future project DRESDYN, the researchers are currently preparing two large experiments with liquid sodium, with which the dynamo effect is to be examined under the influence of precession, on the one hand, and a combination of magnetic instabilities on the other.
 
Source: The Institute of Fluid Dynamics at Helmholtz-Zentrum Dresden-Rossendorf

Additional Information:

Publications
  • Frank Stefani et al.: How to circumvent the size limitation of liquid metal batteries due to the Tayler instability, in: Energy Conversion and Management 52 (2011), 2982-2986, DOI: 10.1016/j.enconman.2011.03.003

Asymmetry may provide clue to superconductivity: Iron-based high-temp superconductors show unexpected electronic asymmetry

Engineerblogger
June 21, 2012

This image shows a microscopic sample of a high-temperature superconductor glued to the tip of a cantilever. To study the magnetic properties of the sample, scientists applied a magnetic field and measured the torque that was transferred from the sample to the cantilever. CREDIT: Shigeru Kasahara/Kyoto University

Physicists from Rice University, Kyoto University and the Japan Synchrotron Radiation Research Institute (JASRI) are offering new details this week in the journal Nature regarding intriguing similarities between the quirky electronic properties of a new iron-based high-temperature superconductor (HTS) and its copper-based cousins.

While investigating a recently discovered iron-based HTS, the researchers found that its electronic properties were different in the horizontal and vertical directions. This electronic asymmetry was measured across a wide range of temperatures, including those where the material is a superconductor. The asymmetry was also found in materials that were “doped” differently. Doping is a process of chemical substitution that allows both copper- and iron-based HTS materials to become superconductors.

“The robustness of the reported asymmetric order across a wide range of chemical substitutions and temperatures is an indication that this asymmetry is an example of collective electronic behavior caused by quantum correlation between electrons,” said study co-author Andriy Nevidomskyy, assistant professor of physics at Rice.

The study by Nevidomskyy and colleagues offers new clues to scientists studying the mystery of high-temperature superconductivity, one of physics’ greatest unsolved mysteries.

Superconductivity occurs when electrons form a quantum state that allows them to flow freely through a material without electrical resistance. The phenomenon only occurs at extremely cold temperatures, but two families of layered metal compounds — one based on copper and the other on iron — perform this mind-bending feat just short of or above the temperature of liquid nitrogen — negative 321 degrees Fahrenheit — an important threshold for industrial applications. Despite more than 25 years of research, scientists are still debating what causes high-temperature superconductivity.

Copper-based HTSs were discovered more than 20 years before their iron-based cousins. Both materials are layered, but they are strikingly different in other ways. For example, the undoped parent compounds of copper HTSs are nonmetallic, while their iron-based counterparts are metals. Due to these and other differences, the behavior of the two classes of HTSs is as dissimilar as it is similar — a fact that has complicated the search for answers about how high-temperature superconductivity arises.

One feature that has been found in both compounds is electronic asymmetry — properties like resistance and conductivity are different when measured up and down rather than side to side. This asymmetry, which physicists also call “nematicity,” has previously been found in both copper-based and iron-based high-temperature superconductors, and the new study provides the strongest evidence yet of electronic nematicity in HTSs.

In the study, the researchers used the parent compound barium iron arsenide, which can become a superconductor when doped with phosphorus. The temperature at which the material becomes superconducting depends upon how much phosphorus is used. By varying the amount of phosphorus and measuring electronic behavior across a range of temperatures, physicists can probe the causes of high-temperature superconductivity.

Prior studies have shown that as HTS materials are cooled, they pass through a series of intermediate electronic phases before they reach the superconducting phase. To help see these “phase changes” at a glance, physicists like Nevidomskyy often use graphs called “phase diagrams” that show the particular phase an HTS will occupy based on its temperature and chemical doping.

“With this new evidence, it is clear that the nematicity exists all the way into the superconducting region and not just in the vicinity of the magnetic phase, as it had been previously understood,” said Nevidomskyy, in reference to the line representing the boundary of the nematic order. “Perhaps the biggest discovery of this study is that this line extends all the way to the superconducting phase.”

He said another intriguing result is that the phase diagram for the barium iron arsenide bears a striking resemblance to the phase diagram for copper-based high-temperature superconductors. In particular, the newly mapped region for nematic order in the iron-based material is a close match for a region dubbed the “pseudogap” in copper-based HTSs.

“Physicists have long debated the origins and importance of the pseudogap as a possible precursor of high-temperature superconductivity,” Nevidomskyy said. “The new results offer the first hint of a potential analog for the pseudogap in an iron-based high-temperature superconductor.”

The nematic order in the barium iron arsenide was revealed during a set of experiments at Kyoto University that measured the rotational torque of HTS samples in a strong magnetic field. These findings were further corroborated by the results of X-ray diffraction performed at JASRI and aided by Nevidomskyy’s theoretical analysis. Nevidomskyy and his collaborators believe that their results could help physicists determine whether electronic nematicity is essential for HTS.

Nevidomskyy said he expects similar experiments to be conducted on other varieties of iron-based HTS. He said additional experiments are also needed to determine whether the nematic order arises from correlated electron behavior.

Nevidomskyy, a theoretical physicist, specializes in the study of correlated electron effects, which occur when electrons lose their individuality and behave collectively.

“One way of thinking about this is to envision a crowded stadium of football fans who stand up in unison to create a traveling ‘wave,’” he said. “If you observe just one person, you don’t see ‘the wave.’ You only see the wave if you look at the entire stadium, and that is a good analogy for the phenomena we observe in correlated electron systems.”

Nevidomskyy joined the research team on the new study after meeting the lead investigator, Yuji Matsuda, at the Aspen Center for Physics in Aspen, Colo., in 2011. Nevidomskyy said Matsuda’s data offers intriguing hints about a possible connection between nematicity and high-temperature superconductivity.

“It could just be serendipity that nematicity happens in both the superconducting and the nonsuperconducting states of these materials,” Nevidomskyy said. “On the other hand, it could be that superconductivity is like a ship riding on a wave, and that wave is created by electrons in the nematic collective state.”

Study co-authors include S. Kasahara, H.J. Shi, K. Hashimoto, S. Tonegawa, Y. Mizukami, T. Shibauchi and T. Terashima, all of Kyoto University; K. Sugimoto of JASRI; T. Fukuda of the Japan Atomic Energy Agency. The research was funded by the Japanese Society for the Promotion of Science, the Japanese Ministry of Education, Culture, Sports, Science and Technology, and the collaboration was made possible by the Aspen Center for Physics.

Source: Rice University


Nano-infused paint can detect strain: Fluorescent nanotube coating can reveal stress on planes, bridges, buildings

Engineerblogger
June 21, 2012

A new type of paint made with carbon nanotubes at Rice University can help detect strain in buildings, bridges and airplanes.

The Rice scientists call their mixture “strain paint” and are hopeful it can help detect deformations in structures like airplane wings. Their study, published online this month by the American Chemical Society journal Nano Letters, details a composite coating they invented that could be read by a handheld infrared spectrometer.

This method could tell where a material is showing signs of deformation well before the effects become visible to the naked eye, and without touching the structure. The researchers said this provides a big advantage over conventional strain gauges, which must be physically connected to their read-out devices. In addition, the nanotube-based system could measure strain at any location and along any direction.

Rice chemistry professor Bruce Weisman led the discovery and interpretation of near-infrared fluorescence from semiconducting carbon nanotubes in 2002, and he has since developed and used novel optical instrumentation to explore nanotubes’ physical and chemical properties.

Satish Nagarajaiah, a Rice professor of civil and environmental engineering and of mechanical engineering and materials science, and his collaborators led the 2004 development of strain sensing for structural integrity monitoring at the macro level using the electrical properties of carbon nanofilms – dense networks/ensembles of nanotubes. Since then he has continued to investigate novel strain sensing methods using various nanomaterials.

But it was a stroke of luck that Weisman and Nagarajaiah attended the same NASA workshop in 2010. There, Weisman gave a talk on nanotube fluorescence. As a flight of fancy, he said, he included an illustration of a hypothetical system that would use lasers to reveal strains in the nano-coated wing of a space shuttle.

“I went up to him afterward and said, ‘Bruce, do you know we can actually try to see if this works?’” recalled Nagarajaiah.

Nanotube fluorescence shows large, predictable wavelength shifts when the tubes are deformed by tension or compression. The paint — and therefore each nanotube, about 50,000 times thinner than a human hair — would suffer the same strain as the surface it’s painted on and give a clear picture of what’s happening underneath.
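The read-out arithmetic this implies is straightforward, although every number below is a placeholder rather than a Rice calibration value (the size and even the sign of the shift depend on the nanotube type, so the slope here is purely hypothetical): if the emission peak shifts roughly linearly with strain, the strain at a painted spot follows from the measured peak position and a calibration slope.

# Hypothetical read-out sketch; not the Rice group's method or calibration values
K_NM_PER_PERCENT_STRAIN = -1.0   # assumed slope: nm of peak shift per 1% strain (placeholder)
UNSTRAINED_PEAK_NM = 1273.0      # assumed zero-strain emission peak (placeholder)

def strain_percent(measured_peak_nm):
    """Convert a measured fluorescence peak position into an estimated strain (in %)."""
    shift_nm = measured_peak_nm - UNSTRAINED_PEAK_NM
    return shift_nm / K_NM_PER_PERCENT_STRAIN

print(strain_percent(1272.5))    # 0.5 (% strain) for a hypothetical 0.5 nm blue-shift

Applied point by point as the laser scans across the coating, the same conversion is what would turn a set of spectra into the strain map described below.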

“For an airplane, technicians typically apply conventional strain gauges at specific locations on the wing and subject it to force vibration testing to see how it behaves,” Nagarajaiah said. “They can only do this on the ground and can only measure part of a wing in specific directions and locations where the strain gauges are wired. But with our non-contact technique, they could aim the laser at any point on the wing and get a strain map along any direction.”

Rice University Professor Bruce Weisman introduced the idea of strain paint for finding weaknesses in materials with this slide from a presentation to NASA in 2010. (Credit: Bruce Weisman/Rice University)

He said strain paint could be designed with multifunctional properties for specific applications. “It can also have other benefits,” Nagarajaiah said. “It can be a protective film that impedes corrosion or could enhance the strength of the underlying material.”

Weisman said the project will require further development of the coating before such a product can go to market. “We’ll need to optimize details of its composition and preparation, and find the best way to apply it to the surfaces that will be monitored,” he said. “These fabrication/engineering issues should be addressed to ensure proper performance, even before we start working on portable read-out instruments.”

“There are also subtleties about how interactions among the nanotubes, the polymeric host and the substrate affect the reproducibility and long-term stability of the spectral shifts. For real-world measurements, these are important considerations,” Weisman said.

But none of those problems seem insurmountable, he said, and construction of a handheld optical strain reader should be relatively straightforward.

“There are already quite compact infrared spectrometers that could be battery-operated,” Weisman said. “Miniature lasers and optics are also readily available. So it wouldn’t require the invention of new technologies, just combining components that already exist.


An illustration shows how polarized light from a laser and a near-infrared spectrometer could read levels of strain in a material coated with nanotube-infused paint invented at Rice University. (Credit: Bruce Weisman/Rice University)

“I’m confident that if there were a market, the readout equipment could be miniaturized and packaged. It’s not science fiction.”

Lead author of the paper is Paul Withey, an associate professor of physics at the University of Houston-Clear Lake, who spent a sabbatical in Weisman’s lab at Rice studying the fluorescence of nanotubes in polymers.

Co-authors are Rice civil engineering graduate student Venkata Srivishnu Vemuru in Nagarajaiah’s group and Sergei Bachilo, a research scientist in Weisman’s group.

Support for the research came from the National Science Foundation, the Welch Foundation, the Air Force Research Laboratory and the Infrastructure-Center for Advanced Materials at Rice.



Nanotube-infused paint developed at Rice University can reveal strain in materials by its fluorescence. The material holds promise for detecting strain in aircraft, bridges and buildings.

Source: Rice University


Monday, 18 June 2012

Nanotechnology: Thinner than a pencil trace

Engineerblogger
June 19, 2012


Jari Kinaret

Energy-efficient, high-speed electronics on a nanoscale and screens for mobile telephones and computers that are so thin they can be rolled up. Just a couple of examples of what the super-material graphene could give us. But is European industry up to making these visions a reality?

Seldom has a Nobel Prize in physics sparked the imagination of gadget nerds to such an extent. When Andrej Geim and Konstantin Novoselov at the University of Manchester were rewarded in 2010 for their graphene experiments, it was remarkably easy to provide examples of future applications, mainly in the form of consumer electronics with a level of performance that up to now was virtually inconceivable.

It's not just the IT sector that is watering at the mouth at the thought of graphene. Even in the energy, medical and material technology sectors there are high hopes of using these spectacular properties. Perhaps talk of a future carbon-based technical revolution was no exaggeration.

Even if graphene has not attracted a great deal of attention in the media recently, the research world has been working feverishly behind the scenes. Last year, around 6,000 scientific articles were published worldwide in which the focus was on graphene. About six months ago, new research results were published that reinforced more than ever the idea of graphene as a potential replacement for silicon in the electronics of the future.

"As late as last autumn this was still a long-term goal bearing in mind the major challenges that are involved," explains Professor Jari Kinaret, Head of the Nanoscience Area of Advance at Chalmers. "Then a pioneering publication appeared from Manchester showing that graphene could be combined with other similar two-dimensional materials in a sandwich structure."
"The power consumption of a transistor built using this principle would be just one millionth or so compared to previous prototypes."

Jari Kinaret also heads Graphene Coordinated Action, an initiative to reinforce and bring together graphene research within the EU.

In line with the growing interest in graphene throughout the world, the EU is at risk of losing ground – particularly in applied research.

"Integrating the whole chain, from basic research to product, is something that we are by tradition not particularly skilled at in Europe compared with the Asians or the Americans," explains Jari Kinaret. He presents a pie graph on the computer to illustrate his point.
The first graph shows that to date academic research into graphene has been split fairly evenly split between the USA, Asia and Europe. However, the pie graph showing patent applications from each region is strikingly similar to the size relationship between Jupiter, Saturn and Mars.

"Something is wrong here and we're going to fix it," states Jari Kinaret.
The idea is that the research groups that are currently working independently of each other will be linked in a network and will be able to benefit from each other's results.
This planned European gathering of strengths, however, presupposes more funding, which is on the horizon in the form of "scientific flagships" – the EU Commission designation for the high-profile research initiatives with ten-year funding due to be launched next year.

Last year, Graphene Coordinated Action was named as one of the six pilot projects with a chance of being raised to flagship status. This would mean a budget of around SEK 10 billion throughout the whole period.

The downside is that only two flagships will be launched, leaving four pilots standing.

"If we are selected, it would mean a substantial increase in grants for European graphene research – up to 50 per cent more than at present," states Jari Kinaret.

"If we are unsuccessful, then hopefully we will at least retain our present financial framework."
Jari Kinaret has recently submitted the project's final report to the Commission. He is optimistic about their chances.
"One of our obvious strengths is the level of scientific excellence. Nobel Prize Winners Geim and Novoselov are members of our strategy committee along with a further two Nobel Prize Winners. That's hard to beat."
Alongside aspects bordering on science fiction, there is a very tangible side to graphene.

The fact is that now and then most people produce a little graphene – inadvertently of course. And some even eat graphene.

The link between nanoscience and daily life is the lead pencil. From its tip, a layer of soft graphite is transferred onto the surface of the paper when we draw and write. (At the same time, some of us chew the other end as we think.)

If we were to study a strongly magnified pencil trace, a layer of graphite would be seen that is perhaps 100 atom layers thick. However, the outer edge of the trace becomes thinner and increasingly transparent, and at some point the layer becomes so thin it comprises just one single layer of carbon atoms.

That's where it is – the graphene. It is also the background to the motto adopted by Graphene Coordinated Action: The future in a pencil trace.

At the stroke of a pencil, the future of this planned research initiative will be decided towards the end of this year, when the EU Commission jury will decide which two of the pilots will share the billions available for research.


ABOUT GRAPHENE
Graphene is a form of graphite, i.e. carbon, which comprises one single cohesive layer of atoms. It is super-thin, super-strong and transparent. It can be bent and stretched and it has a singular capacity to conduct both electricity and heat.
The existence of graphene has been known for a long time, although in 2004 Geim and Novoselov succeeded in producing flakes of the material in an entirely new way – by breaking it away from the graphite with the aid of standard household tape.
Graphene nowadays is also produced using other methods.
The centre of Swedish graphene research is Chalmers.


SOON ON TOUCH SCREENS AND IN MOBILE PHONES
The emphasis in Graphene Coordinated Action is on applied research. Ultimately, there is the potential somewhere on the horizon to build up a European industry around graphene and similar two-dimensional materials – both as components and finished products. Consequently, several large companies are included in the network, including mobile phone manufacturer Nokia.

"As graphene is both transparent and conductive, it is obviously of interest for use in the touchscreens and displays of the future. But graphene could also be used in battery technology or as reinforcement in the shell of mobile telephones," states Claudio Marinelli at the Nokia Research Department in Cambridge, England.

At Nokia, research has been conducted for a couple of years on potential applications for graphene within mobile communication. Claudio Marinelli estimates that by 2015 at the latest Nokia will be using graphene in one application or another in its telephones.

"Even when it comes to identification and other data transfer via the screen, technology based on graphene is conceivable," he says.

Farther down the line, he believes that the bendability and flexibility of graphene could become part of mobile communication and be used in products that at present we might find a little difficult to imagine.

"We believe that graphene technology will have a major impact on our business area. That is why it was an obvious move for us to be involved in this research project."

Source: Chalmers University of Technology

Green Energy: The Great German Energy Experiment

Technology Review
June 19, 2012
 
These wind turbines under construction in Görmin, Germany, are among more than 22,000 installed in that country. Credit: Sean Gallup | Getty

Germany has decided to pursue ambitious greenhouse-gas reductions—while closing down its nuclear plants. Can a heavily industrialized country power its economy with wind turbines and solar panels?

Along a rural road in the western German state of North Rhine–Westphalia lives a farmer named Norbert Leurs. An affable 36-year-old with callused hands, he has two young children and until recently pursued an unremarkable line of work: raising potatoes and pigs. But his newest businesses point to an extraordinary shift in the energy policies of Europe's largest economy. In 2003, a small wind company erected a 70-meter turbine, one of some 22,000 in hundreds of wind farms dotting the German countryside, on a piece of Leurs's potato patch. Leurs gets a 6 percent cut of the electricity sales, which comes to about $9,500 a year. He's considering adding two or three more turbines, each twice as tall as the first.

The profits from those turbines are modest next to what he stands to make on solar panels. In 2005 Leurs learned that the government was requiring the local utility to pay high prices for rooftop solar power. He took out loans, and in stages over the next seven years, he covered his piggery, barn, and house with solar panels—never mind that the skies are often gray and his roofs aren't all optimally oriented. From the resulting 690-kilowatt installation he now collects $280,000 a year, and he expects over $2 million in profits after he pays off his loans.
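As a rough cross-check of those figures (the yield number is an assumption, not something stated in the article): a German rooftop array typically produces on the order of 900 kilowatt-hours per installed kilowatt per year, so 690 kilowatts × ~900 kWh/kW ≈ 620,000 kWh annually, and $280,000 ÷ 620,000 kWh ≈ $0.45 per kilowatt-hour, which is the scale of guaranteed feed-in payment described in the next paragraph.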

Stories like Leurs's help explain how Germany was able to produce 20 percent of its electricity from renewable sources in 2011, up from 6 percent in 2000. Germany has guaranteed high prices for wind, solar, biomass, and hydroelectric power, tacking the costs onto electric bills. And players like Leurs and the small power company that built his turbine have installed off-the-shelf technology and locked in profits. For them, it has been remarkably easy being green.

What's coming next won't be so easy. In 2010, the German government declared that it would undertake what has popularly come to be called an Energiewende—an energy turn, or energy revolution. This switch from fossil fuels to renewable energy is the most ambitious ever attempted by a heavily industrialized country: it aims to cut greenhouse-gas emissions 40 percent from 1990 levels by 2020, and 80 percent by midcentury. The goal was challenging, but it was made somewhat easier by the fact that Germany already generated more than 20 percent of its electricity from nuclear power, which produces almost no greenhouse gases. Then last year, responding to public concern over the post-tsunami nuclear disaster in Fukushima, Japan, Chancellor Angela Merkel ordered the eight oldest German nuclear plants shut down right away. A few months later, the government finalized a plan to shut the remaining nine by 2022. Now the Energiewende includes a turn away from Germany's biggest source of low-carbon electricity.

Germany has set itself up for a grand experiment that could have repercussions for all of Europe, which depends heavily on German economic strength. The country must build and use renewable energy technologies at unprecedented scales, at enormous but uncertain cost, while reducing energy use. And it must pull it all off without undercutting industry, which relies on reasonably priced, reliable power. "In a sense, the Energiewende is a political statement without a technical solution," says Stephan Reimelt, CEO of GE Energy Germany. "Germany is forcing itself toward innovation. What this generates is a large industrial laboratory at a size which has never been done before. We will have to try a lot of different technologies to get there."

The major players in the German energy industry are pursuing several strategies at once. To help replace nuclear power, they are racing to install huge wind farms far off the German coast in the North Sea; new transmission infrastructure is being planned to get the power to Germany's industrial regions. At the same time, companies such as Siemens, GE, and RWE, Germany's biggest power producer, are looking for ways to keep factories humming during lulls in wind and solar power. They are searching for cheap, large-scale forms of power storage and hoping that computers can intelligently coördinate what could be millions of distributed power sources.
To read more click here...

Vehicle suspension improves handling and maintains comfort

Engineerblogger
June 19, 2012



MIRA has developed a new vehicle suspension system that is claimed to offer improved handling without compromising comfort.

The suspension has been developed by vehicle dynamicists at MIRA over the past two years and builds on the existing double-wishbone suspension (DWS).

The DWS is most commonly seen in high-performance sports cars — including those made by Lamborghini and Aston Martin — thanks to its ability to provide a low bonnet line and improved handling. However, it has always struggled to achieve the comfort offered by softer suspensions, such as the MacPherson strut suspension.

Ian Willows, a MIRA consultant, told The Engineer: ‘The problem with the existing DWS is that there’s an inherent compromise with its design.’

He explained that there is a trade-off between the longitudinal compliance and the castor compliance of the suspension. The longitudinal compliance allows the wheel to be displaced rearwards if a force is applied in that direction. Meanwhile, the castor compliance relates to the rotational displacement of the wheel when braking is applied while cornering.

‘The reason we’d like some longitudinal compliance is because it gives the ability to absorb the longitudinal force input to a pothole or a ridge in the tarmac,’ said Willows. ‘But we don’t want the associated castor compliance because that reduces the stability of the steering as the steering axis rotates backwards.’

The interlinked DWS overcomes the compromise by effectively decoupling the castor and the longitudinal compliances of the traditional suspension — creating a solution that delivers the cornering, handling and steering performance of a double-wishbone design but with the longitudinal isolation associated with a more comfortable suspension design.
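
The decoupling can be pictured with a toy compliance model, sketched below in Python. It is a minimal illustration only: the stiffness and coupling numbers are invented and are not MIRA's data.

import numpy as np

# Toy model of one suspension corner (illustrative numbers only).
# Output: [rearward deflection (mm), castor rotation (deg)] for a given
# rearward force at the contact patch (kN).
def wheel_response(force_kn, compliance):
    return compliance @ np.array([force_kn])

# Conventional DWS: longitudinal compliance brings castor compliance with it.
coupled = np.array([[1.2],    # mm of rearward deflection per kN (wanted)
                    [0.35]])  # deg of castor loss per kN (unwanted)

# Interlinked DWS: same longitudinal compliance, coupling term removed.
decoupled = np.array([[1.2],
                      [0.0]])

brake_force = 4.0  # kN, e.g. braking over a ridge while cornering
for name, c in (("conventional", coupled), ("interlinked", decoupled)):
    dx, dcastor = wheel_response(brake_force, c)
    print(f"{name:12s}: {dx:.1f} mm rearward, {dcastor:.2f} deg castor change")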

‘When you apply a braking force, there is no loss in castor angle for the new suspension,’ said Willows. ‘This will maintain the stability of the steering axis and eliminate any variation in steering through the wheel when you’re cornering and braking.’

He explained that suspensions now also have to compensate for the reduced isolation offered by the tyres on new vehicles, as the trend towards lower-profile tyres continues.

The concept is ready to go but the suspension would have to be customised for each individual vehicle model.

‘We’re aiming to advertise the concept advantages and then get some manufacturers or OEM suppliers to develop it for their next model,’ said Willows.

The Society of Motor Manufacturers and Traders (SMMT) told The Engineer: ‘MIRA’s suspension development is another clear example of the high level of innovation and R&D capability within UK industry.’

Source: The Engineer


Solar nanowire array may increase percentage of sun’s frequencies available for energy conversion

Engineerblogger
June 18, 2012





Cross-sectional images of the indium gallium nitride nanowire solar cell. (Image courtesy of Sandia National Laboratories)

Researchers creating electricity through photovoltaics want to convert as many of the sun’s wavelengths as possible to achieve maximum efficiency. Otherwise, they’re eating only a small part of a shot duck: wasting time and money by using only a tiny bit of the sun’s incoming energies.

For this reason, they see indium gallium nitride as a valuable future material for photovoltaic systems. Changing the concentration of indium allows researchers to tune the material’s response so it collects solar energy from a variety of wavelengths. The more variations designed into the system, the more of the solar spectrum can be absorbed, leading to increased solar cell efficiencies. Silicon, today’s photovoltaic industry standard, is limited in the wavelength range it can ‘see’ and absorb.

But there is a problem: Indium gallium nitride, part of a family of materials called III-nitrides, is typically grown on thin films of gallium nitride. Because gallium nitride atomic layers have different crystal lattice spacings from indium gallium nitride atomic layers, the mismatch leads to structural strain that limits both the layer thickness and percentage of indium that can be added. Thus, increasing the percentage of indium added broadens the solar spectrum that can be collected, but reduces the material’s ability to tolerate the strain.
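
The tuning can be sketched with the standard band-gap interpolation for alloys, Vegard's law plus a bowing term. The end-point gaps and bowing parameter below are commonly quoted literature values, not numbers from the Sandia paper, and the bowing parameter in particular varies between reports.

# Approximate band gap of In(x)Ga(1-x)N via bowing interpolation (Python).
# Literature-typical constants, not Sandia data.
EG_GAN = 3.4   # eV, band gap of GaN
EG_INN = 0.7   # eV, band gap of InN
BOWING = 1.4   # eV, bowing parameter (reported values vary)

def ingan_bandgap(x):
    """Band gap in eV for indium fraction x (0 <= x <= 1)."""
    return x * EG_INN + (1 - x) * EG_GAN - BOWING * x * (1 - x)

for x in (0.0, 0.1, 0.2, 0.33):
    print(f"In fraction {x:.2f}: Eg ~ {ingan_bandgap(x):.2f} eV")
# At x ~ 0.33 this lands near 2.2 eV, in the neighborhood of the ~2.1 eV
# absorption edge reported for the nanowire cell.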

Sandia National Laboratories scientists Jonathan Wierer Jr. and George Wang reported in the journal Nanotechnology that if the indium mixture is grown on a phalanx of nanowires rather than on a flat surface, the small surface areas of the nanowires allow the indium shell layer to partially “relax” along each wire, easing strain. This relaxation allowed the team to create a nanowire solar cell with indium percentages of roughly 33 percent, higher than any other reported attempt at creating III-nitride solar cells.

This initial attempt also lowered the absorption base energy from 2.4eV to 2.1 eV, the lowest of any III-nitride solar cell to date, and made a wider range of wavelengths available for power conversion. Power conversion efficiencies were low — only 0.3 percent compared to a standard commercial cell that hums along at about 15 percent — but the demonstration took place on imperfect nanowire-array templates. Refinements should lead to higher efficiencies and even lower energies.

Several unique techniques were used to create the III-nitride nanowire array solar cell. A top-down fabrication process was used to create the nanowire array by masking a gallium nitride (GaN) layer with a colloidal silica mask, followed by dry and wet etching. The resulting array consisted of nanowires with vertical sidewalls and of uniform height.

Next, shell layers containing the higher indium percentage of indium gallium nitride (InGaN) were formed on the GaN nanowire template via metal organic chemical vapor deposition. Lastly, In0.02Ga0.98N was grown in such a way as to cause the nanowires to coalesce. This process produced a canopy layer at the top, facilitating simple planar processing and making the technology manufacturable.

The results, says Wierer, although modest, represent a promising path forward for III-nitride solar cell research. The nano-architecture not only enables higher indium proportion in the InGaN layers but also increased absorption via light scattering in the faceted InGaN canopy layer, as well as air voids that guide light within the nanowire array.

The research was funded by DOE’s Office of Science through the Solid State Lighting Science Energy Frontier Research Center, and Sandia’s Laboratory Directed Research and Development program.



Source: Sandia National Laboratories

Sunday, 17 June 2012

Robotic assistants may adapt to humans in the factory

Engineerblogger
June 17, 2012



Professor Julie Shah observes while grad students Ron Wilcox (left) and Matthew Gombolay coordinate human-robotic interaction.  Photo: William Litant/MIT

In today’s manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail.

But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Shah envisions robotic assistants performing tasks that would otherwise hinder a human’s efficiency, particularly in airplane manufacturing.

“If the robot can provide tools and materials so the person doesn’t have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person,” says Shah, who leads the Interactive Robotics Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “It’s really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory.”

A robot working in isolation has to simply follow a set of preprogrammed instructions to perform a repetitive task. But working with humans is a different matter: For example, each mechanic working at the same station at an aircraft assembly plant may prefer to work differently — and Shah says a robotic assistant would have to effortlessly adapt to an individual’s particular style to be of any practical use.

Now Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual’s preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney in July.

“It’s an interesting machine-learning human-factors problem,” Shah says. “Using this algorithm, we can significantly improve the robot’s understanding of what the person’s next likely actions are.”

Taking wing

As a test case, Shah’s team looked at spar assembly, a process of building the main structural element of an aircraft’s wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, a mechanic applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualized: For example, one mechanic may choose to apply sealant to every hole before hammering in bolts, while another may like to completely finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes.

The researchers say robots such as FRIDA, designed by Swiss robotics company ABB, may be programmed to help in the spar-assembly process. FRIDA is a flexible robot with two arms capable of a wide range of motion that Shah says can be manipulated to either fasten bolts or paint sealant into holes, depending on a human’s preferences.

To enable such a robot to anticipate a human’s actions, the group first developed a computational model in the form of a decision tree. Each branch along the tree represents a choice that a mechanic may make — for example, continue to hammer a bolt after applying sealant, or apply sealant to the next hole?

“If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt?” Shah says. “There are many branches.”

Using the model, the group performed human experiments, training a laboratory robot to observe an individual’s chain of preferences. Once the robot learned a person’s preferred order of tasks, it then quickly adapted, either applying sealant or fastening a bolt according to a person’s particular style of work.
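
The article does not spell out the algorithm itself, so the sketch below only illustrates the general idea: a simple count-based model that learns which action a particular mechanic tends to take after each observed action and predicts the next step. The task names and traces are hypothetical, and this is not the published MIT method.

from collections import defaultdict, Counter

class PreferenceModel:
    """Counts observed action-to-action transitions for one worker."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        # Record each consecutive pair of actions in one work sequence.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict_next(self, last_action):
        # Most frequently seen follow-up action, or None if unseen.
        counts = self.transitions.get(last_action)
        return counts.most_common(1)[0][0] if counts else None

# Mechanic A seals every hole before bolting; mechanic B finishes each
# hole before moving on (hypothetical traces).
model_a = PreferenceModel()
model_a.observe(["seal_1", "seal_2", "seal_3", "bolt_1", "bolt_2", "bolt_3"])
model_b = PreferenceModel()
model_b.observe(["seal_1", "bolt_1", "seal_2", "bolt_2", "seal_3", "bolt_3"])

print(model_a.predict_next("seal_2"))  # seal_3: keep sealing
print(model_b.predict_next("seal_2"))  # bolt_2: finish the hole first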

Working side by side

Shah says in a real-life manufacturing setting, she envisions robots and humans undergoing an initial training session off the factory floor. Once the robot learns a person’s work habits, its factory counterpart can be programmed to recognize that same person, and initialize the appropriate task plan. Shah adds that many workers in existing plants wear radio-frequency identification (RFID) tags — a potential way for robots to identify individuals.

Steve Derby, associate professor and co-director of the Flexible Manufacturing Center at Rensselaer Polytechnic Institute, says the group’s adaptive algorithm moves the field of robotics one step closer to true collaboration between humans and robots.

“The evolution of the robot itself has been way too slow on all fronts, whether on mechanical design, controls or programming interface,” Derby says. “I think this paper is important — it fits in with the whole spectrum of things that need to happen in getting people and robots to work next to each other.”

Shah says robotic assistants may also be programmed to help in medical settings. For instance, a robot may be trained to monitor lengthy procedures in an operating room and anticipate a surgeon’s needs, handing over scalpels and gauze, depending on a doctor’s preference. While such a scenario may be years away, robots and humans may eventually work side by side, with the right algorithms.

“We have hardware, sensing, and can do manipulation and vision, but unless the robot really develops an almost seamless understanding of how it can help the person, the person’s just going to get frustrated and say, ‘Never mind, I’ll just go pick up the piece myself,’” Shah says.

This research was supported in part by Boeing Research and Technology and conducted in collaboration with ABB.

Source: MIT

Aircraft engineered with failure in mind may last longer: New design approach tailors planes to fly in the face of likely failures

Engineerblogger
June 17, 2012


AeroAstro professor Olivier de Weck surveys aircraft blueprints in MIT's Neumann Hangar. With de Weck's new approach, engineers may design airplanes to fly in the face of likely failures. Photo: Dominick Reuter

Complex systems inhabit a “gray world” of partial failures, MIT’s Olivier de Weck says: While a system may continue to operate as a whole, bits and pieces inevitably degrade. Over time, these small failures can add up to a single catastrophic failure, incapacitating the system.

“Think about your car,” says de Weck, an associate professor of aeronautics and astronautics and engineering systems. “Most of the things are working, but maybe your right rearview mirror is cracked, and maybe one of the cylinders in your engine isn’t working well, and your left taillight is out. The reality is that many, many real-world systems have partial failures.”

This is no less the case for aircraft. De Weck says it’s not uncommon that, from time to time, a plane’s sensors may short-circuit, or its rudders may fail to respond: “And then the question is, in that partially failed state, how will the system perform?”

The answer to that question is often unclear — partly because of how systems are initially designed. When deciding on the configuration of aircraft, engineers typically design for the optimal condition: a scenario in which all components are working perfectly. However, de Weck notes that much of a plane’s lifetime is spent in a partially failed state. What if, he reasoned, aircraft and other complex systems could be designed from the outset to operate not in the optimal scenario, but for suboptimal conditions?

De Weck and his colleagues at MIT and the Draper Laboratory have created a design approach that tailors planes to fly in the face of likely failures. The method, which the authors call a “multistate design approach,” determines the likelihood of various failures over an airplane’s lifetime. Through simulations, the researchers changed a plane’s geometry — for example, making its tail higher, or its rudder smaller — and then observed its performance under various failure scenarios. De Weck says engineers may use the approach to design safer, longer-lasting aerial vehicles. The group will publish a paper describing its approach in the Journal of Aircraft.

“If you admit ahead of time that the system will spend most of its life in a degraded state, you make different design decisions,” de Weck says. “You can end up with airplanes that look quite different, because you’re really emphasizing robustness over optimality.”

De Weck collaborated with Jeremy Agte, formerly at Draper Laboratory and now an assistant professor of aeronautics and astronautics at the Air Force Institute of Technology, and Nicholas Borer, a systems design engineer at MIT. Agte says making design changes based on likely failures may be particularly useful for vehicles engineered for long-duration missions.

“As our systems operate for longer and longer periods of time, these changes translate to significantly improved mission completion rates,” Agte says. “For instance, an Air Force unmanned aerial vehicle that experiences a failure would have inherent stability and control designed to ensure adequate performance for continued mission operation, rather than having to turn around and come home.”

The weight of failure

As a case study, the group analyzed the performance of a military twin-engine turboprop plane — a small, 12-seater aircraft that has been well-studied in the past. The researchers set about doing what de Weck calls “guided brainstorming”: essentially drawing up a list of potential failures, starting from perfect condition and branching out to consider various possible malfunctions.

“It looks kind of like a tree where initially everything is working perfectly, and then as the tree opens up, different failure trajectories can happen,” de Weck says.

The group then used an open-source flight simulator to model how the plane would fly — following certain branches of the tree, as it were. The researchers modified the simulator to change the shape of the plane under different failure conditions, and analyzed the plane’s resulting performance. They found that for certain scenarios, changing the geometry of the plane significantly improved its safety, or robustness, following a failure.

For example, the group studied the plane’s operation during a maneuver called the “Dutch roll,” in which the plane rocks from side to side, its wingtips rolling in a figure-eight motion. The potentially dangerous motion is much more pronounced when a plane’s rudder is faulty, or one of its engines isn’t responding. Using their design approach, the group found that in such partially failed conditions, if the plane’s tail was larger, it could damp the motion, and steady the aircraft.

Of course, a plane’s shape can’t morph in midflight to accommodate an engine sputter or a rudder malfunction. To arrive at a plane’s final shape — a geometry that can withstand potential failures — de Weck and his researchers weighed the likelihood of each partial failure, using that data to inform their decisions on how to change the plane’s shape in a way that would address the likeliest failures.

Beyond perfection

De Weck says that while the group’s focus on failure represents a completely new approach to design, there is also a psychological element with which engineers may have to grapple.

“Many engineers are perfectionists, so deliberately designing something that’s not going to be fully functional is hard,” de Weck says. “But we’re showing that by acknowledging imperfection, you can actually make the system better.”

Jaroslaw Sobieski, a distinguished research associate at NASA Langley Research Center, views the new design approach as a potential improvement in the overall safety of aircraft. He says engineering future systems with failure in mind will ensure that “even if failure occurs, the flight operation will continue” — albeit with some loss in performance — “but sufficient to at least [achieve] a safe landing. In practice, that alternative may actually increase the safety level and reduce the aircraft cost,” when compared with other design approaches.

The team is using its approach to evaluate the performance of an unmanned aerial vehicle (UAV) that flies over Antarctica continuously for six months at a time, at high altitudes, to map its ice sheets. This vehicle must fly, even in the face of inevitable failures: It’s on a remote mission, and grounding the UAV for repairs is impossible. Using their method, de Weck and his colleagues are finding that the vehicle’s shape plays a crucial role in its long-term performance.

In addition to lengthy UAV missions, de Weck says the group’s approach may be used to design other systems that operate remotely, without access to regular maintenance — such as undersea sensor networks and possible colonies in space.

“If we look at the space station, the air-handling system, the water-recycling system, those systems are really important, but their components also tend to fail,” de Weck says. “So applying this [approach] to the design of habitats, and even long-term planetary colonies, is something we want to look at.”

Source: MIT

Radiation-Resistant Circuits from Mechanical Parts

Engineerblogger
June 16, 2012


Microscopic images of two “logic gates” made of microscopic mechanical parts and thus designed to resist ionizing radiation that fries conventional silicon electronics. The top gate performs the logic function named “exclusive or” and the gate in the bottom image performs the function “and.” These devices, designed at the University of Utah, are so small that four of them would fit in the cross section of a single human hair.
Photo Credit: Massood Tabib-Azar, University of Utah

University of Utah engineers designed microscopic mechanical devices that withstand intense radiation and heat, so they can be used in circuits for robots and computers exposed to radiation in space, damaged nuclear power plants or nuclear attack.

The researchers showed the devices kept working despite intense ionizing radiation and heat by dipping them for two hours into the core of the University of Utah’s research reactor. They also built simple circuits with the devices.

Ionizing radiation can quickly fry electronic circuits, so heavy shielding must be used on robots such as those sent to help contain the meltdowns at the Fukushima Daiichi nuclear power plant after Japan’s catastrophic 2011 earthquake and tsunami.

“Robots were sent to control the troubled reactors, and they ceased to operate after a few hours because their electronics failed,” says Massood Tabib-Azar, a professor of electrical and computer engineering at the University of Utah and the Utah Science Technology and Research initiative.

“We have developed a unique technology that keeps on working in the presence of ionizing radiation to provide computation power for critical defense infrastructures,” he says. “Our devices also can be used in deep space applications in the presence of cosmic ionizing radiation, and can help robotics to control troubled nuclear reactors without degradation.”

The new devices are “logic gates” that perform logical operations such as “and” or “not” and are a type of device known as MEMS or micro-electro-mechanical systems. Each gate takes the place of six to 14 switches made of conventional silicon electronics.

Development of the new logic gates and their use to build circuits such as adders and multiplexers is reported in a study set for online publication this month in the journal Sensors and Actuators. The research was conducted by Tabib-Azar, University of Utah electrical engineering doctoral student Faisal Chowdhury and computer engineer Daniel Saab at Case Western Reserve University in Cleveland.

Tabib-Azar says that if he can obtain more research funding, “then the next stage would be to build a little computer” using the logic gates and circuits.

The study was funded by the Defense Advanced Research Projects Agency.

“Its premier goal is to keep us ready,” says Tabib-Azar. “If there is a nuclear event, we need to be able to have control systems, say for radars, to be working to protect the nation. There are lots of defense applications both in peacetime and wartime that require computers that can operate in the presence of ionizing radiation.”

In April, the Defense Advanced Research Projects Agency issued a call for the development of robots to deal with stricken nuclear reactors to reduce human exposure to deadly radiation. In May, NASA said it was seeking proposals for new shields or materials able to resist radiation in space. Circuits built with the new devices also could resist intense heat in engines to monitor performance, Tabib-Azar says.

MEMS: Ability to Withstand Radiation Overcomes Drawbacks

Current radiation-resistant technologies fall into two categories: conventional complementary metal-oxide semiconductor (CMOS) electronics shielded with lead or other metals, and the use of different materials that inherently resist radiation.

“Electronic materials and devices by their nature require a semiconducting channel to carry current, and the channel is controlled by charges,” Tabib-Azar says. Radiation creates current inside the semiconductor channel, and “that disrupts the ability of the normal circuitry to control the current, so the signal gets lost.”

He says the MEMS logic gates are not degraded by ionizing radiation because they lack semiconducting channels. Instead, electrical charges make electrodes move to touch each other, thus acting like a switch.

MEMS have their drawbacks, which Tabib-Azar believes is why no one until now has thought to use them for radiation-resistant circuits. Silicon electronics are 1,000 times faster, much smaller, and more reliable because they have no moving parts.

But by having one MEMS device act as a logic gate, instead of using separate MEMS switches, the number of devices needed for a computer is reduced by a factor of 10, and reliability and speed increase, Tabib-Azar says.

Also, “mechanical switches usually require large voltages for them to turn on,” Tabib-Azar says. “What we have done is come up with a technique to form very narrow gaps between the bridges in the logic gates, and that allows us to activate these devices with very small voltages, namely 1.5 volts” versus 10 or 20 volts. Unlike conventional electronics, which get hot during use, the logic gates leak much less current and run cooler, so they would last longer if battery-operated.
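
That gap dependence follows from standard electrostatic-actuator theory: the pull-in voltage of a parallel-plate switch scales with the gap to the 3/2 power. The sketch below uses the textbook pull-in formula with made-up spring and electrode values, not the Utah devices' actual parameters; only the scaling is the point.

import math

# Textbook parallel-plate pull-in voltage:
#   V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A))
EPS0 = 8.854e-12   # F/m, permittivity of free space
K = 5.0            # N/m, assumed effective spring constant of the bridge
AREA = (10e-6)**2  # m^2, assumed 10 um x 10 um electrode overlap

def pull_in_voltage(gap_m):
    return math.sqrt(8 * K * gap_m**3 / (27 * EPS0 * AREA))

for gap_nm in (1000, 300, 100):
    print(f"gap {gap_nm:4d} nm -> pull-in ~ {pull_in_voltage(gap_nm * 1e-9):5.1f} V")
# Shrinking the gap from ~1 micron to ~100 nm cuts the actuation voltage
# by roughly a factor of 30, the kind of reduction (tens of volts down
# to ~1.5 V) described in the article.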

Design and Reactor Testing of the Logic Gates

Each logic gate measures about 25-by-25 microns, or millionths of a meter, “so you could put four of these on the cross section of a human hair,” says Tabib-Azar. Each gate is only a half-micron thick.

The logic gates each have two “bridges,” which look somewhat like two tiny microscope slides crossing each other to form a tic-tac-toe pattern, with tungsten electrodes in the center square. Each bridge is made of a glass-like silicon nitride insulator with polysilicon under it to give rigidity. The insulator is etched and covered by metallic strips of tungsten that serve as electrodes.

“When you charge them, they attract each other and they move and contact each other. Then current flows,” says Tabib-Azar.

He and his colleagues put the logic gates and conventional silicon switches to the test, showing the logic gates kept working as they were repeatedly turned on and off under extreme heat and radiation, while the silicon switches “shorted out in minutes.”

The devices were placed on a hot plate in a vacuum chamber and heated to 277 degrees Fahrenheit for an hour.

Three times, the researchers lowered the devices for two hours into the core of the university’s 90-kilowatt TRIGA research reactor, with wires extending to the control room so the researchers could monitor their operation. The logic gates did not fail.

The researchers also tested the logic gates outside the reactor and oven, running them for some two months and more than a billion cycles without failure. But to be useful, Tabib-Azar wants to improve that reliability a millionfold.

Two Kinds of Logic Gates

For the study, Tabib-Azar and colleagues built two kinds of logic gate, each with two inputs (0 or 1) and thus four possible combinations of inputs (0-0, 0-1, 1-0, 1-1). The input and output are electrical voltages:

– An AND gate, which means “and.” If both inputs – A and B – are true (or worth 1 each), then the output is true (or equal to 1). If input A or B or both are false (worth 0), then the output is false (or equal to 0).

– An XOR gate, which means “exclusive or.” If input A doesn’t equal B (so A is 0 and B is 1 or A is 1 and B is 0), the output is true (equal 1). If both A and B are either true (1) or false (0), the output is false (0).
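
Written out in software, the two functions are one-liners; a trivial Python sketch of the same truth tables follows. The actual devices compute them mechanically, with voltages standing in for the 0s and 1s.

# Truth tables for the two mechanical logic gates described above.
def and_gate(a, b):
    return int(a and b)

def xor_gate(a, b):
    return int(a != b)

print(" A B | AND XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} {b} |  {and_gate(a, b)}   {xor_gate(a, b)}")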

“In a sense, you can say these are switches with multiple outcomes,” rather than just off-on (0-1), says Tabib-Azar. “But instead of using six [silicon] switches separately, you have one structure that gives you the same logic functionality.”

“Let’s say you want to decide whether to go to dinner tonight, and that depends on if the weather is nice, if you feel like it,” he says. “In order to make that decision, you have a bunch of ‘or’ statements and a bunch of ‘and’ statements: ‘I’ll go to dinner if the weather is nice and I feel like it.’ ‘I like to eat Italian or French.’ You put these statements together and then you can make a decision.”

“To analyze this using silicon computers,” Tabib-Azar says, “you need a bunch of on-off switches that have to turn on or off in a particular sequence to give you the output, whether you go to dinner or not. But just a single one of these [MEMS logic gate] devices can be designed to perform this computation for you.”

Source: University of Utah