
Saturday, 8 September 2012

Researchers create tiny, wirelessly powered cardiac device

Engineerblogger
Sept 9, 2012

Ada Poon, assistant professor of electrical engineering, led the research. (Photo: Linda A. Cicero / Stanford News Service)

Stanford electrical engineers overturn existing models to demonstrate the feasibility of a millimeter-sized, wirelessly powered cardiac device. The findings, say the researchers, could dramatically alter the scale of medical devices implanted in the human body.

A team of engineers at Stanford has demonstrated the feasibility of a super-small, implantable cardiac device that gets its power not from batteries but from radio waves transmitted from a small power device on the surface of the body.

The implanted device is contained in a cube just 0.8 millimeter on a side. It could fit on the head of a pin.

The findings were published in the journal Applied Physics Letters. In their paper, the researchers demonstrated wireless power transfer to a millimeter-sized device implanted 5 centimeters inside the chest on the surface of the heart – a depth once thought out of reach for wireless power transmission.

The engineers say the research is a major step toward a day when all implants are driven wirelessly. Beyond the heart, they believe such devices might include swallowable endoscopes – so-called "pillcams" that travel the digestive tract – permanent pacemakers and precision brain stimulators – virtually any medical application where device size and power matter.
A revolution in the body

Implantable medical devices in the human body have revolutionized medicine. Hundreds of thousands if not millions of pacemakers, cochlear implants and drug pumps are today helping patients live relatively normal lives, but these devices are not without engineering challenges.

First, they require power, which means batteries, and batteries are bulky. In a device like a pacemaker, the battery alone accounts for as much as half the volume of the device. Second, batteries have finite lives. New surgery is needed when they wane.

"Wireless power solves both challenges," said Ada Poon, assistant professor of electrical engineering, who headed up the research. She was assisted by Sanghoek Kim and John Ho, both doctoral candidates in her lab.

Last year, Poon made headlines when she demonstrated a wirelessly powered, self-propelled device capable of swimming through the bloodstream. To get there she needed to overturn some long-held assumptions about delivery of wireless power through the human body.

Her latest device works by a combination of inductive and radiative transmission of power. Both are types of electromagnetic transfer in which a transmitter sends radio waves to a coil of wire inside the body. The radio waves produce an electrical current in the coil sufficient to operate a small device.

There is an inverse relationship between the frequency of the transmitted radio waves and the size of the receiving antenna: to deliver a desired level of power, lower-frequency waves require bigger coils, while higher-frequency waves can work with smaller coils.
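
A back-of-the-envelope way to see this (an idealized textbook relation, not the team's full tissue model) is Faraday's law: the peak voltage induced in a receiving coil of area A by a magnetic field of amplitude B oscillating at frequency f is

\[
V_{\text{peak}} = 2\pi f B A,
\]

so for a fixed required voltage, the coil area can shrink roughly in proportion as the frequency rises.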

"For implantable medical devices, therefore, the goal is a high-frequency transmitter and a small receiver, but there is one big hurdle," Kim said.
Ignoring consensus

Existing mathematical models have held that high-frequency radio waves do not penetrate far enough into human tissue, necessitating the use of low-frequency transmitters and large antennas – too large to be practical for implantable devices.

Ignoring the consensus, Poon proved the models wrong. Human tissues dissipate electric fields quickly, it is true, but radio waves can travel in a different way – as alternating waves of electric and magnetic fields. With the correct equations in hand, she discovered that high-frequency signals travel much deeper than anyone suspected.

"In fact, to achieve greater power efficiency, it is actually advantageous that human tissue is a very poor electrical conductor," said Kim. "If it were a good conductor, it would absorb energy, create heating and prevent sufficient power from reaching the implant."

According to their revised models, the researchers found that the maximum power transfer through human tissue occurs at about 1.7 billion cycles per second, much higher than previously thought.

"In this high-frequency range, we can increase power transfer by about 10 times over earlier devices," said Ho, who honed the mathematical models.

The discovery meant that the team could shrink the receiving antenna by a factor of 10 as well, to a scale that makes wireless implantable devices feasible. At the optimal frequency, a millimeter-radius coil is capable of harvesting more than 50 microwatts of power, well in excess of the needs of a recently demonstrated 8-microwatt pacemaker.
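
Putting the article's own numbers together gives the power margin directly:

\[
\frac{P_{\text{harvested}}}{P_{\text{needed}}} \gtrsim \frac{50\ \mu\text{W}}{8\ \mu\text{W}} \approx 6,
\]

so the millimetre-scale coil leaves ample headroom over the demonstrated pacemaker's draw.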
Engineering challenges

With the dimensional challenges solved, the team found itself bound by other engineering constraints. First, electronic medical devices must meet stringent health standards established by IEEE (Institute of Electrical and Electronics Engineers), particularly with regard to tissue heating. Second, the team found that the receiving and transmitting antennas had to be optimally oriented to achieve maximum efficiency. Differences in alignment of just a few degrees could produce troubling drops in power.

"This can't happen medical devices," said Poon. "As the human heart and body are in constant motion, solving this issue was critical to the success of our research." The team responded by designing an innovative slotted transmitting antenna structure. It delivers consistent power efficiency regardless of orientation of the two antennas.

The new design serves additionally to focus the radio waves precisely at the point inside the body where the implanted device rests on the surface of the heart – increasing the electric field where it is needed most, but canceling it elsewhere. This helps reduce overall tissue heating to levels well within the IEEE standards. Poon has applied for a patent on the antenna structure.

This research was made possible by funding from the C2S2 Focus Center, one of six research centers funded under the Focus Center Research Program, a Semiconductor Research Corporation entity. Lisa Chen also contributed to this study.

Source: Stanford University

Thursday, 19 July 2012

Why platinum is the wrong material for fuel cells

Engineerblogger
July 19, 2012



Professor Alfred Anderson

Fuel cells are inefficient because the catalyst most commonly used to convert chemical energy to electricity is made of the wrong material, a researcher at Case Western Reserve University argues. Rather than continue the futile effort to tweak that material—platinum—to make it work better, Chemistry Professor Alfred Anderson urges his colleagues to start anew.

“Using platinum is like putting a resistor in the system,” he said. Anderson freely acknowledges he doesn’t know what the right material is, but he’s confident researchers’ energy would be better spent seeking it out than persisting with platinum.

“If we can find a catalyst that will do this [more efficiently],” he said, “it would reach closer to the limiting potential and get more energy out of the fuel cell.”

Anderson’s analysis and a guide for a better catalyst have been published in a recent issue of Physical Chemistry Chemical Physics and in Electrocatalysis online.

Even in the best of circumstances, Anderson explained, the chemical reaction that produces energy in a fuel cell—like those being tested by some car companies—ends up wasting a quarter of the energy that could be transformed into electricity. This point is well recognized in the scientific community, but, to date, efforts to address the problem have proved fruitless.

Anderson blames the failure on a fundamental misconception as to the reason for the energy waste. The most widely accepted theory says impurities are binding to the platinum surface of the cathode and blocking the desired reaction.

“The decades-old surface-poisoning explanation is lame because there is more to the story,” Anderson said.

To understand the loss of energy, Anderson used data derived from oxygen-reduction experiments to calculate the optimal bonding strengths between platinum and intermediate molecules formed during the oxygen-reduction reaction. The reaction takes place at the platinum-coated cathode.

He found the intermediate molecules bond too tightly or too loosely to the cathode surface, slowing the reaction and causing a drop in voltage. The result is the fuel cell produces about 0.93 volts instead of the potential maximum of 1.23 volts.
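
For context, the reaction in question is the standard four-electron oxygen reduction, whose textbook equilibrium potential is the 1.23-volt ceiling quoted above (the half-reaction itself is general chemistry, not taken from the article):

\[
\mathrm{O_2 + 4H^+ + 4e^- \longrightarrow 2H_2O}, \qquad E^\circ = 1.23\ \text{V}.
\]

The voltage figures then reproduce the energy loss mentioned earlier: 0.93 V / 1.23 V ≈ 0.76, so roughly a quarter of the theoretically available energy is lost.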

To eliminate the loss, calculations show, the catalyst should have bonding strengths tailored so that all reactions taking place during oxygen reduction occur at or as near to 1.23 volts as possible.

Anderson said the use of volcano plots, which are a statistical tool for comparing catalysts, has actually misguided the search for the best one. “They allow you to grade a series of similar catalysts, but they don’t point to better catalysts.”

He said a catalyst made of copper laccase, a material found in trees and fungi, has the desired bonding strength but lacks stability. Finding a catalyst that has both is the challenge.

Anderson is working with other researchers exploring alternative catalysts as well as an alternative reaction pathway in an effort to increase efficiency.


Source: Case Western Reserve University

Tuesday, 10 July 2012

How do you turn 10 minutes of power into 200? Efficiency, efficiency, efficiency.

Engineerblogger
July 10, 2012




DARPA seeks revolutionary advances in the efficiency of robotic actuation; fundamental research into biology, physics and electrical engineering could benefit all engineered, actuated systems

A robot that drives into an industrial disaster area and shuts off a valve leaking toxic steam might save lives. A robot that applies supervised autonomy to dexterously disarm a roadside bomb would keep humans out of harm’s way. A robot that carries hundreds of pounds of equipment over rocky or wooded terrain would increase the range warfighters can travel and the speed at which they move. But a robot that runs out of power after ten to twenty minutes of operation is limited in its utility. In fact, use of robots in defense missions is currently constrained in part by power supply issues. DARPA has created the M3 Actuation program, with the goal of achieving a 2,000 percent increase in the efficiency of power transmission and application in robots, to improve performance potential.
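
The arithmetic behind the headline is simple once the 2,000 percent figure is read as the twentyfold factor DARPA uses in its endurance target below:

\[
10\ \text{minutes} \times 20 = 200\ \text{minutes}.
\]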

Humans and animals have evolved to consume energy very efficiently for movement. Bones, muscles and tendons work together for propulsion using as little energy as possible. If robotic actuation can be made to approach the efficiency of human and animal actuation, the range of practical robotic applications will greatly increase and robot design will be less limited by power plant considerations.

M3 Actuation is an effort within DARPA’s Maximum Mobility and Manipulation (M3) robotics program, and adds a new dimension to DARPA’s suite of robotics research and development work.

“By exploring multiple aspects of robot design, capabilities, control and production, we hope to converge on an adaptable core of robot technologies that can be applied across mission areas,” said Gill Pratt, DARPA program manager. “Success in the M3 Actuation effort would benefit not just robotics programs, but all engineered, actuated systems, including advanced prosthetic limbs.”

Proposals are sought in response to a Broad Agency Announcement (BAA). DARPA expects that solutions will require input from a broad array of scientific and engineering specialties to understand, develop and apply actuation mechanisms inspired in part by humans and animals. Technical areas of interest include, but are not limited to: low-loss power modulation, variable recruitment of parallel transducer elements, high-bandwidth variable impedance matching, adaptive inertial and gravitational load cancellation, and high-efficiency power transmission between joints.

Research and development will cover two tracks of work:
  • Track 1 asks performer teams to develop and demonstrate high-efficiency actuation technology that will allow robots similar to the DARPA Robotics Challenge (DRC) Government Furnished Equipment (GFE) platform to have twenty times longer endurance than the DRC GFE when running on untethered battery power (currently only 10-20 minutes). Using Government Furnished Information about the GFE, M3 Actuation performers will have to build a robot that incorporates the new actuation technology. These robots will be demonstrated at, but not compete in, the second DRC live competition scheduled for December 2014.
  • Track 2 will be tailored to performers who want to explore ways of improving the efficiency of actuators, but at scales both larger and smaller than applicable to the DRC GFE platform, and at technical readiness levels insufficient for incorporation into a platform during this program. Essentially, Track 2 seeks to advance the science and engineering behind actuation without the requirement to apply it at this point.

While separate efforts, M3 Actuation will run in parallel with the DRC. In both programs DARPA seeks to develop the enabling technologies required for expanded practical use of robots in defense missions. Thus, performers on M3 Actuation will share their design approaches at the first DRC live competition scheduled for December 2013, and demonstrate their final systems at the second DRC live competition scheduled for December 2014.

Source: DARPA

New chip captures power from multiple sources

Engineerblogger
July 10, 2012


Graphic: Christine Daniloff

Researchers at MIT have taken a significant step toward battery-free monitoring systems — which could ultimately be used in biomedical devices, environmental sensors in remote locations and gauges in hard-to-reach spots, among other applications.

Previous work from the lab of MIT professor Anantha Chandrakasan has focused on the development of computer and wireless-communication chips that can operate at extremely low power levels, and on a variety of devices that can harness power from natural light, heat and vibrations in the environment. The latest development, carried out with doctoral student Saurav Bandyopadhyay, is a chip that could harness all three of these ambient power sources at once, optimizing power delivery.

The energy-combining circuit is described in a paper being published this summer in the IEEE Journal of Solid-State Circuits.

“Energy harvesting is becoming a reality,” says Chandrakasan, the Keithley Professor of Electrical Engineering and head of MIT’s Department of Electrical Engineering and Computer Science. Low-power chips that can collect data and relay it to a central facility are under development, as are systems to harness power from environmental sources. But the new design achieves efficient use of multiple power sources in a single device, a big advantage since many of these sources are intermittent and unpredictable.

“The key here is the circuit that efficiently combines many sources of energy into one,” Chandrakasan says. The individual devices needed to harness these tiny sources of energy — such as the difference between body temperature and outside air, or the motions and vibrations of anything from a person walking to a bridge vibrating as traffic passes over it — have already been developed, many of them in Chandrakasan’s lab.

Combining the power from these variable sources requires a sophisticated control system, Bandyopadhyay explains: Typically each energy source requires its own control circuit to meet its specific requirements. For example, circuits to harvest thermal differences typically produce only 0.02 to 0.15 volts, while low-power photovoltaic cells can generate 0.2 to 0.7 volts and vibration-harvesting systems can produce up to 5 volts. Coordinating these disparate sources of energy in real time to produce a constant output is a tricky process.

So far, most efforts to harness multiple energy sources have simply switched among them, taking advantage of whichever one is generating the most energy at a given moment, Bandyopadhyay says, but that can waste the energy being delivered by the other sources. “Instead of that, we extract power from all the sources,” he says. The approach combines energy from multiple sources by switching rapidly between them.

Another challenge for the researchers was to minimize the power consumed by the control circuit itself, to leave as much as possible for the actual devices it’s powering — such as sensors to monitor heartbeat, blood sugar, or the stresses on a bridge or a pipeline. The control circuits optimize the amount of energy extracted from each source.

The system uses an innovative dual-path architecture. Typically, power sources would be used to charge up a storage device, such as a battery or a supercapacitor, which would then power an actual sensor or other circuit. But in this control system, the sensor can either be powered from a storage device or directly from the source, bypassing the storage system altogether. “That makes it more efficient,” Bandyopadhyay says. The chip uses a single time-shared inductor, a crucial component to support the multiple converters needed in this design, rather than separate ones for each source.
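
The paper describes the scheme at the circuit level; purely as a conceptual illustration, the sketch below mimics the two ideas in Python – extracting from all sources through one time-shared path rather than switching to whichever source is strongest, and powering the load directly from the sources when they suffice, falling back to storage otherwise. Every name and number here is invented for the illustration; the real chip does this in analog hardware.

# Conceptual sketch of the dual-path, time-shared harvesting scheme
# described above. All values and names are hypothetical; the actual
# chip implements this in mixed-signal hardware, not software.

SOURCES = {                      # instantaneous harvested power (watts)
    "thermoelectric": 20e-6,     # thermal gradients: low voltage, steady
    "photovoltaic":   50e-6,     # light: varies with illumination
    "vibration":      10e-6,     # motion: bursty
}

LOAD_DEMAND = 60e-6              # sensor power draw (watts)
TICK = 1e-3                      # one time-sharing slot (seconds)

storage = 1e-3                   # energy in the storage element (joules)

def step(sources, demand, storage):
    """One control tick: extract from every source via the shared
    inductor (rather than switching to the single best source), then
    route the energy along the dual path."""
    harvested = sum(sources.values()) * TICK      # all sources, not max()
    needed = demand * TICK
    if harvested >= needed:
        # direct path: the load runs straight off the sources and the
        # surplus tops up storage (bypassing storage saves conversion loss)
        storage += harvested - needed
    else:
        # storage path: make up the shortfall from the battery/supercap
        storage -= needed - harvested
    return storage

for _ in range(1000):            # simulate one second of operation
    storage = step(SOURCES, LOAD_DEMAND, storage)

print(f"storage after 1 s: {storage * 1e3:.3f} mJ")

Running it, the load draws 60 microwatts while the three sources together supply 80, so storage slowly gains charge; zero out the photovoltaic entry and the shortfall is drawn from storage instead.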

David Freeman, chief technologist for power-supply solutions at Texas Instruments, who was not involved in this work, says, “The work being done at MIT is very important to enabling energy harvesting in various environments. The ability to extract energy from multiple different sources helps maximize the power for more functionality from systems like wireless sensor nodes.”

Only recently, Freeman says, have companies such as Texas Instruments developed very low-power micro-controllers and wireless transceivers that could be powered by such sources. “With innovations like these that combine multiple sources of energy, these systems can now start to increase functionality,” he says. “The benefits from operating from multiple sources not only include maximizing peak energy, but also help when only one source of energy may be available.”

The work has been funded by the Interconnect Focus Center, a combined program of the Defense Advanced Research Projects Agency and companies in the defense and semiconductor industries.

Source: MIT


Tuesday, 3 July 2012

Researcher offers new insights into power-generating windows

Engineerblogger
July 2, 2012


(Image: Eric Verdult, Kennis in Beeld)

On 5 July Jan Willem Wiegman is graduating from TU Delft with his research into power-generating windows. The Applied Physics Master’s student calculated how much electricity can be generated using so-called luminescent solar concentrators. These are windows which have been fitted with a thin film of material that absorbs sunlight and directs it to narrow solar cells at the perimeter of the window. Wiegman shows the relationship between the colour of the material used and the maximum amount of power that can be generated. Such power-generating windows offer potential as a cheap source of solar energy. Wiegman’s research article, which he wrote together with his supervisor at TU Delft, Erik van der Kolk, has been published in the journal Solar Energy Materials and Solar Cells ("Building integrated thin film luminescent solar concentrators: Detailed efficiency characterization and light transport modelling").

Windows and glazed facades of office blocks and houses can be used to generate electricity if they are used as luminescent solar concentrators. This entails applying a thin layer (for example a foil or coating) of luminescent material to the windows, with narrow solar cells at the perimeters. The luminescent layer absorbs sunlight and guides it to the solar cells at the perimeter, where it is converted into electricity. This enables a large surface area of sunlight to be concentrated on a narrow strip of solar cells.
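
The appeal of the geometry can be quantified with the standard figure of merit for such concentrators (a textbook relation, not from Wiegman's paper): light collected over the full face of a square window of side L and thickness t must exit through an edge area of only 4Lt, for a geometric gain of

\[
G = \frac{L^2}{4Lt} = \frac{L}{4t}.
\]

A window a metre across and a few millimetres thick thus concentrates light onto the edge cells by a factor of several tens.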

The new stained glass

Luminescent solar concentrators are capable of generating dozens of watts per square metre. The exact amount of power produced by the windows depends on the colour and quality of the light-emitting layer and the performance of the solar cells. Wiegman’s research shows for the first time the relationship between the colour of the film or coating and the maximum amount of power.

A transparent film produces a maximum of 20 watts per square metre, which is an efficiency of 2%. To power your computer you would need a window measuring 4 square metres. The efficiency increases if the film is able to absorb more light particles. This can be achieved by using a foil that absorbs light particles from a certain part of the solar spectrum. A foil that mainly absorbs the blue, violet and green light particles will give the window a red colour. Another option is to use a foil that absorbs all the colours of the solar spectrum equally. This would give the window a grey tint. Both the red and the grey film have an efficiency of 9%, which is comparable to the efficiency of flexible solar cells.
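
These figures are mutually consistent if one assumes the standard peak insolation of about 1,000 watts per square metre (our assumption; the article states only the output figures):

\[
\frac{20\ \mathrm{W/m^2}}{1000\ \mathrm{W/m^2}} = 2\%, \qquad 4\ \mathrm{m^2} \times 20\ \mathrm{W/m^2} = 80\ \mathrm{W},
\]

roughly the draw of a desktop computer.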

Wiegman’s research has also shown the importance of a smooth film surface for the efficient transport of light particles to the perimeter of the window as they are then not impeded by scattering between the film and the window surface.

The research into power-generating windows is in keeping with the European ambition to make buildings as energy neutral as possible. Luminescent solar concentrators are a good way of producing cheap solar energy.

Source: TU Delft

Additional Information:

  • Visit the research website for more information about research into luminescent materials

Research targets next-generation electric motors for luxury automobiles

Engineerblogger
July 2, 2012




Cobham, Jaguar Land Rover and Ricardo will carry out research into the design of economical electric motors that avoid expensive magnet materials.

Next-generation electric motors for low carbon emission vehicles are the target of a new collaborative research programme to be led by Cobham Technical Services. The project, ‘Rapid Design and Development of a Switched Reluctance Traction Motor’, will also involve partners Jaguar Land Rover and engineering consultancy Ricardo UK, and is co-funded by the Technology Strategy Board.

As part of its work in the project, Cobham will develop multi-physics software and capture the other partners’ methodology in order to design, simulate and analyze the performance of high efficiency, lightweight electric traction motors that eliminate the use of expensive magnetic materials. Using these new software tools JLR and Ricardo will design and manufacture a prototype switched reluctance motor that addresses the requirements of luxury hybrid vehicles.

The project is one of 16 collaborative R&D programmes to have won funding from the UK government-backed Technology Strategy Board and the Department for Business, Innovation and Skills (BIS), which have agreed to invest £10 million aimed at achieving significant cuts in CO2 emissions for vehicle-centric technologies. The total value of this particular motor project is £1.5 million, with half the amount funded by the Technology Strategy Board/BIS, and the rest by the project partners.

According to Kevin Ward, Director of Cobham Technical Services - Vector Fields Software, “Design software for switched reluctance motors is at about the same level as diesel engine design software when it was first introduced. Cobham will develop its existing SRM capabilities to provide the consortium with enhanced tools based on the widely used Opera suite for design, finite element simulation and analysis. In addition to expanding various facets of Opera’s electromagnetic capabilities, we will investigate advanced integration with our other multi-physics software, to obtain more accurate evaluation of model related performance parameters such as vibration. Design throughput will also be enhanced via more extensive parallelization of code and developing an environment which captures the workflow of the design process.”

Tony Harper, Jaguar Land Rover Head of Research: “It is important to understand the capability of switched reluctance motors in the context of the vehicle as a whole so that we can set component targets that will deliver the overall vehicle experience. Jaguar Land Rover will apply its expertise in designing and producing world class vehicles to this project, with the aim of developing the tools and technology for the next generation of electric motors.”

Dr Andrew Atkins, chief engineer – innovation, at Ricardo UK, said: “The development of technologies enabling the design of electric vehicle motors that avoid the use of expensive and potentially carbon-intensive rare-earth metals, is a major focus for the auto industry. Ricardo is pleased to be involved in this innovative programme and we look forward to working with Cobham and Jaguar Land Rover to develop this important new technology. This will further build upon our growth plans for electric drives capability and capacity.”

The project has a three year timetable, at the end of which improved design tools and processes will be in place to support rapid design, helping to accelerate the uptake of this technology into production. Aside from the need to further reduce CO2 emissions from hybrid vehicles by moving to more efficient and lower weight electric motors, there is an urgent requirement to eliminate the use of rare earth elements, which are in increasingly short supply and have risen ten-fold in cost in recent years. Virtually all electric traction motors currently used in such applications employ permanent magnets made from materials such as neodymium-iron-boron and samarium-cobalt. Since switched reluctance motors do not use permanent magnets, they are likely to provide the ideal replacement technology. However, one of the main challenges of the project will be to produce a torque-dense motor that is also quiet enough for use in luxury vehicles.

Source: Ricardo


Sustainable energy solution developed by rubbish collection

Engineerblogger
July 3, 2012


The Pyroformer overcomes many of the problems other renewable energy solutions have generated

As fuel prices continue to increase, researchers from the European Bioenergy Research Institute (EBRI) at Aston University have developed an innovative bioenergy solution that uses waste products to generate cost-effective heat and power and that could reduce the world’s reliance on fossil fuels.

The market opportunities of the equipment – a Pyroformer, developed by Professor Andreas Hornung, of EBRI – also offer business benefits to the West Midlands region. It is anticipated that 35 jobs will be directly safeguarded or created and over 1,000 indirect jobs created in the West Midlands by 2022 as a result. This would see an increase in the turnover of the West Midlands’ regional bioenergy industry and will result in an increase in Net Regional GVA of £105 million by the same date.

The Pyroformer overcomes many of the problems other renewable energy solutions have generated. Tests have shown that unlike other bioenergy plants, the Pyroformer has no negative environmental or food security impacts. It can use multiple waste sources and therefore does not require the destruction of rainforests or the use of agricultural land for the growth of specialist bioenergy crops. In fact biochar - one of its by-products - can even be used as a fertiliser to increase crop yields.

As well as generating heat and power, the Pyroformer also dramatically reduces the amount of material sent to landfill.

Professor Andreas Hornung, Head of the European Bioenergy Research Institute at Aston University, said: “This Pyroformer is the first of its kind in the UK and the first industrial scale plant is now up and running at Harper Adams University College before it is permanently installed on the Aston campus later this year. We are delighted with the tests taking place at Harper Adams which are demonstrating that this really is a low carbon, renewable and sustainable energy source.

“However, this is about more than just energy provision. We believe this bioenergy technology could be a key stimulator of growth and jobs in the region and the reaction of the business community so far has been very enthusiastic. If you are looking for a clean energy source that ensures energy security without damaging people or planet, we already have the solution.”

The Pyroformer is capable of processing up to 100 kg/h of biomass feed and when coupled with a gasifier it will have an output of 400 kWe – the equivalent of providing power for 800 homes[1]. It is currently being tested at Harper Adams University College in Shropshire before moving to its permanent home at EBRI’s new £16.5m ERDF-funded laboratories later this year. This facility will showcase the Pyroformer to industry and demonstrate how real-life solutions for tackling biomass-based residues and waste can be achieved, with both environmental and financial benefits for households, businesses and local authorities.

Source: Aston University (European Bioenergy Research Institute (EBRI))

Additional Information: 
  • [1] 800 homes based on a consumption of approximately 3,000 kWh per home.

Sunday, 24 June 2012

Energy: Novel Power Plants Could Clean Up Coal

Engineerblogger
June 24, 2012


Cleaner coal: This pilot plant in Italy uses pressurized oxygen to help reduce emissions from burning coal. Credit: Unity Power Alliance

A pair of new technologies could reduce the cost of capturing carbon dioxide from coal plants and help utilities comply with existing and proposed environmental regulations, including requirements to reduce greenhouse-gas emissions. Both involve burning coal in the presence of pure oxygen rather than air, which is mostly nitrogen. Major companies including Toshiba, Shaw, and Itea have announced plans to build demonstration plants for the technologies in coming months.

The basic idea of burning fossil fuels in pure oxygen isn't new. The drawback is that it's more expensive than conventional coal plant technology, because it requires additional equipment to separate oxygen and nitrogen. The new technologies attempt to offset at least some of this cost by improving efficiency and reducing capital costs in other areas of a coal plant. Among other things, they simplify the after-treatment required to meet U.S. Environmental Protection Agency regulations.

One of the new technologies, which involves pressurizing the oxygen, is being developed by a partnership between ThermoEnergy, based in Worcester, Massachusetts, and the major Italian engineering firm Itea. A version of it has been demonstrated at a small plant in Singapore that can generate about 15 megawatts of heat (enough for about five megawatts of electricity).

The technology simplifies the clean-up of flue gases; for example, some pollutants are captured in a glass form that results from high-temperature combustion. It also has the ability to quickly change power output, going from 10 percent to 100 percent of its generating capacity in 30 minutes, says Robert Marrs, ThermoEnergy's VP of business development. Conventional coal plants take several hours to do that. More flexible power production could accommodate changes in supply from variable sources of power like wind turbines and solar panels.

Marrs says that these advantages, along with the technology's higher efficiency at converting the energy in coal into electricity, could make it roughly as cost-effective as retrofitting a coal plant with new technology to meet current EPA regulations, while producing a stream of carbon dioxide that's easy to capture. The technology also reduces net energy consumption at coal plants, because the water produced by combustion is captured and can be recycled. This makes it attractive for use in drought-prone areas, such as some parts of China.

The other technology, being developed by the startup Net Power along with Toshiba, the power producer Exelon, and the engineering firm Shaw, is more radical, and it's designed to make coal plants significantly more efficient than they are today—over 50 percent efficient, versus about 30 percent. The most efficient power plants today use a pair of turbines: a gas turbine and a steam turbine that runs off the gas turbine's exhaust heat. The new technology makes use of the exhaust by directing part of the carbon dioxide in the exhaust stream back into the gas turbine, doing away with the steam turbine altogether. That helps offset the cost of the oxygen separation equipment. The carbon dioxide that isn't redirected to the turbine is relatively pure compared to exhaust from a conventional plant, and it is already highly pressurized, making it suitable for sequestering underground. The technology was originally conceived to work with gasified coal, but the company is planning to demonstrate it first with natural gas, which is simpler because it doesn't require a gasifier. The company says the technology will cost about the same as conventional natural gas plants. Shaw is funding a 25-megawatt demonstration power plant that is scheduled to be completed by mid-2014. Net Power plans to sell the carbon dioxide to oil companies to help improve oil production.

The technologies may be "plausible on paper," says Ahmed Ghoniem, a professor of mechanical engineering at MIT, but questions remain "until things get demonstrated." (Ghoniem has consulted for ThermoEnergy.) The economics are still a matter of speculation. For one thing, it is "an open question" how much money the technologies could save over conventional pollution control techniques, he says. As a rule, "any time you add carbon dioxide capture, you increase costs," he points out. "The question is by how much." Selling the carbon dioxide to enhance oil recovery can help justify the extra costs, he says, and retrofitting old power plants might help create an initial market. But he says the new technologies won't become widespread unless a price on carbon dioxide emissions is widely adopted.

Ghoniem adds that even if the technology for capturing carbon proves economical, it's still necessary to demonstrate that it's feasible and safe to permanently sequester carbon underground. The challenges of doing that were highlighted by a recent study suggesting that earthquakes could cause carbon dioxide to leak out.

 Source: Technology Review

Saturday, 23 June 2012

Nanotechnology: Bringing down the cost of fuel cells

Engineerblogger
June 23, 2012


Zhen (Jason) He, assistant professor of mechanical engineering (left), and Junhong Chen, professor of mechanical engineering, display a strip of carbon that contains the novel nanorod catalyst material they developed for microbial fuel cells. (Photo by Troye Fox)

Engineers at the University of Wisconsin-Milwaukee (UWM) have identified a catalyst that provides the same level of efficiency in microbial fuel cells (MFCs) as the currently used platinum catalyst, but at 5% of the cost.

Since more than 60% of the investment in making microbial fuel cells is the cost of platinum, the discovery may lead to much more affordable energy conversion and storage devices.
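
Taken together, the two figures imply a substantial saving on the whole cell, assuming the rest of the cost is unchanged (a back-of-envelope reading on our part, not a claim from the paper):

\[
0.40 + 0.60 \times 0.05 = 0.43,
\]

that is, the same cell for roughly 43 percent of its former cost.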

The material – nitrogen-enriched iron-carbon nanorods – also has the potential to replace the platinum catalyst used in hydrogen-producing microbial electrolysis cells (MECs), which use organic matter to generate a possible alternative to fossil fuels.

“Fuel cells are capable of directly converting fuel into electricity,” says UWM Professor Junhong Chen, who created the nanorods and is testing them with Assistant Professor Zhen (Jason) He. “With fuel cells, electrical power from renewable energy sources can be delivered where and when required, cleanly, efficiently and sustainably.”

The scientists also found that the nanorod catalyst outperformed a graphene-based alternative being developed elsewhere. In fact, the pair tested the material against two other contenders to replace platinum and found the nanorods’ performance consistently superior over a six-month period.

The nanorods have been proved stable and are scalable, says Chen, but more investigation is needed to determine how easily they can be mass-produced. More study is also required to determine the exact interaction responsible for the nanorods’ performance.

The work was published in March in the journal Advanced Materials.

The right recipe

MFCs generate electricity while removing organic contaminants from wastewater. On the anode electrode of an MFC, colonies of bacteria feed on organic matter, releasing electrons that create a current as they break down the waste.

On the cathode side, the most important reaction in MFCs is the oxygen reduction reaction (ORR). Platinum speeds this slow reaction, increasing efficiency of the cell, but it is expensive.

Microbial electrolysis cells (MECs) are related to MFCs. However, instead of electricity, MECs produce hydrogen. In addition to harnessing microorganisms that decompose organic matter at the anode, MECs also use platinum in a catalytic process at their cathodes.

Chen and He’s nanorods incorporate the best characteristics of other reactive materials, with nitrogen attached to the surface of the carbon rod and a core of iron carbide. Nitrogen’s effectiveness at improving the carbon catalyst is already well known. Iron carbide, also known for its catalytic capabilities, interacts with the carbon on the rod surface, providing “communication” with the core. Also, the material’s unique structure is optimal for electron transport, which is necessary for ORR.

When the nanorods were tested for potential use in MECs, the material did a better job than the graphene-based catalyst material, but it was still not as efficient as platinum.

“But it shows that there could be more diverse applications for this material, compared to graphene,” says He. “And it gave us clues for why the nanorods performed differently in MECs.”

Research with MECs was published in June in the journal Nano Energy.

Source:  University of Wisconsin - Milwaukee


Basque scientists control light at a nanometric scale with graphene

Engineerblogger
June 23, 2012

Lab at CIC nanoGUNE

Basque research groups are part of the scientific team which has, for the first time, trapped and confined light in graphene, an achievement that makes graphene the most promising candidate for processing optical information at nanometric scales and that could open the door to a new generation of nano-sensors with applications in medicine, energy and computing.

The Cooperative Research Centre nanoGUNE, along with the Institute of Physical Chemistry “Rocasolano” (Madrid) and the Institute of Photonic Sciences (Barcelona), have led a study which opens an entirely new field of research and provides a viable avenue to manipulate light in an ultra rapid manner, something that was not possible until now.

Other Basque research centres, like the Physical Materials centre CFM-CSIC-UPV/EHU, the Donostia International Physics Center (DIPC), as well as the Ikerbasque Foundation and the Graphenea company, have also collaborated in the research, which has been published in the prestigious journal Nature.

The scientists involved in this study have managed, for the first time, to see light guided with nanometric precision on graphene, a material made up of a single layer of carbon just one atom thick. This demonstration proves what theoretical physicists had predicted for some time: that it is possible to trap and manipulate light very efficiently, using graphene as a new platform for processing optical information and for ultra-sensitive detection.

This ability to trap light in extraordinarily small volumes could lead to a new generation of nano-sensors with applications in areas such as medicine, bio-detection, solar cells and light sensors, as well as quantum information processors.

Source:  nanoBasque


Thursday, 21 June 2012

Stars, Jets and Batteries – multi-faceted magnetic phenomenon confirmed in the laboratory for the first time

Engineerblogger
June 21, 2012



Magnetic instabilities play a crucial role in the emergence of black holes, they regulate the rotation rate of collapsing stars and influence the behavior of cosmic jets. In order to improve understanding of the underlying mechanisms, laboratory experiments on earth are necessary. At the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), confirmation of such a magnetic instability – the Tayler instability – was successfully achieved for the first time in collaboration with the Leibniz Institute for Astrophysics in Potsdam (AIP). The findings should be able to facilitate construction of large liquid-metal batteries, which are under discussion as cheap storage facilities for renewable energy.

The Tayler instability is discussed by astrophysicists in connection with, among other things, the formation of neutron stars. According to theory, neutron stars would have to rotate much faster than they actually do. The mysterious braking effect has meanwhile been attributed to the influence of the Tayler instability, which reduces the rotation rate from 1,000 revolutions per second down to approximately 10 to 100. Structures similar in appearance to the double helix of DNA have occasionally been observed in cosmic jets, i.e. streams of matter which emanate vertically from the rotating accretion discs near black holes.

Liquid Metal Batteries – Energy Storage Facilities for the Future?

The Tayler instability also affects large-scale liquid metal batteries, which, in the future, could be used for renewable energy storage.

The magnetic phenomenon, observed for the first time in the laboratory at the Helmholtz-Zentrum Dresden-Rossendorf, was predicted in theory by R.J. Tayler in 1973. The Tayler instability always appears when a sufficiently strong current flows through an electrically conductive liquid. Starting from a certain magnitude, the interaction of the current with its own magnetic field creates a vortical flow structure. Ever since their involvement with liquid-metal batteries, HZDR scientists have been aware of the fact that this phenomenon can take effect not only in space but on earth as well. The future use of such batteries for renewable energy storage would be more complicated than originally thought due to the emergence of the Tayler instability during charging and discharging.

American scientists have developed the first prototypes and assume that the system could be easily scaled up. The HZDR physicist Dr. Frank Stefani is skeptical: “We have calculated that, starting at a certain current density and battery dimension, the Tayler instability emerges inevitably and leads to a powerful fluid flow within the metal layers. This stirs the liquid layers, and eventually a short circuit occurs.” In the current edition of the “Physical Review Letters”, the team directed by Stefani – together with colleagues from AIP led by Prof. Günther Rüdiger – reported on the first successful experiment to prove the Tayler instability in a liquid metal. Here, a room-temperature liquid alloy of indium, gallium and tin was used, through which currents as high as 8,000 amperes were sent. In order to exclude other causes for the observed instability such as irregularities in conductivity, the researchers intentionally omitted the implementation of velocity sensors; instead, they used 14 highly-sensitive magnetic field sensors. The data collected indicate the growth rate and critical flow effects of the Tayler instability, and these data correspond remarkably well to the numerical predictions.
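
For readers who want the governing parameter: in the literature on this experiment, the onset of the Tayler instability is characterized by a Hartmann number built on the current's own azimuthal field (the standard dimensionless form; the precise critical value for this setup is in the cited publications):

\[
\mathrm{Ha} = B_\phi(R)\,R\,\sqrt{\frac{\sigma}{\rho\nu}}, \qquad B_\phi(R) = \frac{\mu_0 I}{2\pi R},
\]

where R is the radius of the fluid column, σ its electrical conductivity, ρ its density and ν its kinematic viscosity; the instability grows once Ha exceeds a critical value of order tens.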

How liquid batteries work

Working principle of a liquid metal battery (pictures: Tom Weier, HZDR)

In the context of the smaller American prototypes the Tayler instability does not occur at all, but liquid batteries have to be quite large in order to make them economically feasible. Frank Stefani explains: “I believe that liquid-metal batteries with a base area measured in square meters are entirely possible. They can be manufactured quite easily in that one simply pours the liquids into a large container. They then independently organize their own layer structure and can be recharged and discharged as often as necessary. This makes them economically viable. Such a system can easily cope with highly fluctuating loads.” Liquid-metal batteries could thus always release excessive-supply current when the sun is not shining or the wind turbines are standing still.

The basic principle behind a liquid-metal battery is quite simple: since liquid metals are conductive, they can serve directly as anodes and cathodes. When one pours two suitable metals into a container so that the heavy metal is below and the lighter metal above, and then separates the two metals with a layer of molten salt, the arrangement becomes a galvanic cell. The metals have a tendency to form an alloy, but the molten salt in the middle prevents them from direct mixing. Therefore, the atoms of one metal are forced to release electrons. The ions thus formed wander through the molten salt. Arriving at the site of the other metal, these ions accept electrons and alloy with the second metal. During the charging process, this process is reversed and the alloy is broken up into its original components. In order to avoid the Tayler instability within big batteries – meaning a short circuit – Stefani suggests an internal tube through which the electrical current can be guided in reverse direction. This allows the capacity of the batteries to be considerably increased.
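
As a concrete example of these reactions, the American prototypes mentioned above have used magnesium as the light metal and antimony as the heavy one (our illustration; the article itself names no metals). On discharge:

\[
\text{top electrode:}\quad \mathrm{Mg \rightarrow Mg^{2+} + 2e^-}, \qquad \text{bottom electrode:}\quad \mathrm{Mg^{2+} + 2e^- \rightarrow Mg\ (alloyed\ in\ Sb)}.
\]

The Mg²⁺ ions migrate across the molten-salt layer, and charging simply reverses both half-reactions, pulling the magnesium back out of the alloy.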

Cosmic magnetic fields in a laboratory experiment

Lab simulation of the Tayler instability: magnetic field sensors detecting the magnetic fields. The Tayler instability occurs whenever the electrical current sent through a liquid metal is high enough. (picture: AIP/HZDR)

Rossendorf researchers, together with colleagues from Riga, achieved a comparable first in 1999 with the experimental proof of the homogeneous dynamo effect, which is responsible for the creation of the magnetic fields of both the earth and the sun. In a joint project with the Leibniz-Institut für Astrophysik Potsdam, it was possible in 2006 to recreate the so-called magneto-rotational instability in the laboratory, which enables the growth of stars and black holes. In the context of the future project DRESDYN, the researchers are currently preparing two large experiments with liquid sodium, with which the dynamo effect is to be examined under the influence of precession, on the one hand, and a combination of magnetic instabilities on the other.
 
Source: The Institute of Fluid Dynamics at Helmholtz-Zentrum Dresden-Rossendorf

Additional Information:

Publications
  • Frank Stefani et al.: How to circumvent the size limitation of liquid metal batteries due to the Tayler instability, in: Energy Conversion and Management 52 (2011), 2982-2986, DOI: 10.1016/j.enconman.2011.03.003

Monday, 18 June 2012

Nanotechnology: Thinner than a pencil trace

Engineerblogger
June 19, 2012


Jari Kinaret

Energy-efficient, high-speed electronics on a nanoscale and screens for mobile telephones and computers that are so thin they can be rolled up. Just a couple of examples of what the super-material graphene could give us. But is European industry up to making these visions a reality?

Seldom has a Nobel Prize in physics sparked the imagination of gadget nerds to such an extent. When Andre Geim and Konstantin Novoselov at the University of Manchester were rewarded in 2010 for their graphene experiments, it was remarkably easy to provide examples of future applications, mainly in the form of consumer electronics with a level of performance that up to now was virtually inconceivable.

It's not just the IT sector whose mouth waters at the thought of graphene. The energy, medical and material technology sectors also have high hopes of exploiting its spectacular properties. Perhaps talk of a future carbon-based technical revolution was no exaggeration.

Even if graphene has not attracted a great deal of attention in the media recently, the research world has been working feverishly behind the scenes. Last year, around 6,000 scientific articles were published worldwide in which the focus was on graphene. About six months ago, new research results were published that reinforced more than ever the idea of graphene as a potential replacement for silicon in the electronics of the future.

"As late as last autumn this was still a long-term goal bearing in mind the major challenges that are involved," explains Professor Jari Kinaret, Head of the Nanoscience Area of Advance at Chalmers. "Then a pioneering publication appeared from Manchester showing that graphene could be combined with other similar two-dimensional materials in a sandwich structure."
"The power consumption of a transistor built using this principle would be just one millionth or so compared to previous prototypes."

Jari Kinaret also heads Graphene Coordinated Action, an initiative to reinforce and bring together graphene research within the EU.
As interest in graphene grows throughout the world, the EU is at risk of losing ground – particularly in applied research.

"Integrating the whole chain, from basic research to product, is something that we are by tradition not particularly skilled at in Europe compared with the Asians or the Americans," explains Jari Kinaret. He presents a pie graph on the computer to illustrate his point.
The first graph shows that to date academic research into graphene has been split fairly evenly between the USA, Asia and Europe. However, the pie graph showing patent applications from each region is strikingly similar to the size relationship between Jupiter, Saturn and Mars.

"Something is wrong here and we're going to fix it," states Jari Kinaret.
The idea is that the research groups that are currently working independently of each other will be linked in a network and will be able to benefit from each other's results.
This planned European gathering of strengths, however, presupposes more funding, which is on the horizon in the form of "scientific flagships" – the EU Commission designation for the high-profile research initiatives with ten-year funding due to be launched next year.

Last year, Graphene Coordinated Action was named as one of the six pilot projects with a chance of being raised to flagship status. This would mean a budget of around SEK 10 billion throughout the whole period.
The downside is that only two flagships will be launched, leaving four pilots standing.
"If we are selected, it would mean a substantial increase in grants for European graphene research – up to 50 per cent more than at present," states Jari Kinaret.

"If we are unsuccessful, then hopefully we will at least retain our present financial framework."
Jari Kinaret has recently submitted the project's final report to the Commission. He is optimistic about their chances.
"One of our obvious strengths is the level of scientific excellence. Nobel Prize Winners Geim and Novoselov are members of our strategy committee along with a further two Nobel Prize Winners. That's hard to beat."
Alongside aspects bordering on science fiction, there is a very tangible side to graphene.
The fact is that now and then most people produce a little graphene – inadvertently, of course. And some even eat graphene.

The link between nanoscience and daily life is the lead pencil. From its tip, a layer of soft graphite is transferred onto the surface of the paper when we draw and write. (At the same time, some of us chew the other end as we think).
If we were to study a strongly magnified pencil trace, a layer of graphite would be seen that is perhaps 100 atom layers thick. However, the outer edge of the trace becomes thinner and increasingly transparent and at some point the layer becomes so thin it comprises just one single layer of carbon atoms.
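
Taking the well-known interlayer spacing of graphite, about 0.335 nanometres, those 100 atomic layers work out to

\[
100 \times 0.335\ \mathrm{nm} \approx 34\ \mathrm{nm},
\]

already less than a tenth of the wavelength of visible light before the trace thins to a single layer at its edge.
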
That's where it is – the graphene. It is also the background to the motto adopted by
Graphene Coordinated Action: The future in a pencil trace.
At the stroke of a pencil, the future of this planned research initiative will be decided towards the end of this year, when the secret EU Commission jury will decide which two of the pilots will share the billions available for research.


ABOUT GRAPHENE
Graphene is a form of graphite, i.e. carbon, which comprises one single cohesive layer of atoms. It is super-thin, super-strong and transparent. It can be bent and stretched and it has a singular capacity to conduct both electricity and heat.
The existence of graphene has been known for a long time, although in 2004 Geim and Novoselov succeeded in producing flakes of the material in an entirely new way – by breaking them away from graphite with the aid of standard household tape.
Graphene nowadays is also produced using other methods.
The centre of Swedish graphene research is Chalmers.


SOON ON TOUCH SCREENS AND IN MOBILE PHONES
The emphasis in Graphene Coordinated Action is on applied research. Ultimately, there is the potential somewhere on the horizon to build up a European industry around graphene and similar two-dimensional materials – both as components and finished products. Consequently, several large companies are included in the network, including mobile phone manufacturer Nokia.
"As graphene is both transparent and conductive, it is obviously of interest for use in the touchscreens and displays of the future. But graphene could also be used in battery technology or as reinforcement in the shell of mobile telephones," states Claudio Marinelli at the Nokia Research Department in Cambridge, England.
At Nokia, research has been conducted for a couple of years on potential applications for graphene within mobile communication. Claudio Marinelli estimates that by 2015 at the latest Nokia will be using graphene in one application or another in its telephones.
"Even when it comes to identification and other data transfer via the screen, technology based on graphene is conceivable," he says.
Farther down the line, he believes that the bendability and flexibility of graphene could become part of mobile communication and be used in products that at present we might find a little difficult to imagine.
"We believe that graphene technology will have a major impact on our business area. That is why it was an obvious move for us to be involved in this research project."

Source: Chalmers University of Technology

Green Energy: The Great German Energy Experiment

Technology Review
June 19, 2012
 
These wind turbines under construction in Görmin, Germany, are among more than 22,000 installed in that country. Credit: Sean Gallup | Getty

Germany has decided to pursue ambitious greenhouse-gas reductions—while closing down its nuclear plants. Can a heavily industrialized country power its economy with wind turbines and solar panels?

Along a rural road in the western German state of North Rhine–Westphalia lives a farmer named Norbert Leurs. An affable 36-year-old with callused hands, he has two young children and until recently pursued an unremarkable line of work: raising potatoes and pigs. But his newest businesses point to an extraordinary shift in the energy policies of Europe's largest economy. In 2003, a small wind company erected a 70-meter turbine, one of some 22,000 in hundreds of wind farms dotting the German countryside, on a piece of Leurs's potato patch. Leurs gets a 6 percent cut of the electricity sales, which comes to about $9,500 a year. He's considering adding two or three more turbines, each twice as tall as the first.

The profits from those turbines are modest next to what he stands to make on solar panels. In 2005 Leurs learned that the government was requiring the local utility to pay high prices for rooftop solar power. He took out loans, and in stages over the next seven years, he covered his piggery, barn, and house with solar panels—never mind that the skies are often gray and his roofs aren't all optimally oriented. From the resulting 690-kilowatt installation he now collects $280,000 a year, and he expects over $2 million in profits after he pays off his loans.
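
As a rough sanity check on those figures, assume a typical German yield of about 900 kilowatt-hours per year per kilowatt of installed solar capacity (our assumption; the article gives only the totals):

\[
690\ \mathrm{kW} \times 900\ \mathrm{\tfrac{kWh}{kW \cdot yr}} \approx 620{,}000\ \mathrm{kWh/yr}, \qquad \frac{\$280{,}000}{620{,}000\ \mathrm{kWh}} \approx \$0.45/\mathrm{kWh},
\]

consistent with the generous guaranteed feed-in rates Germany offered for rooftop solar installed in the mid-2000s.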

Stories like Leurs's help explain how Germany was able to produce 20 percent of its electricity from renewable sources in 2011, up from 6 percent in 2000. Germany has guaranteed high prices for wind, solar, biomass, and hydroelectric power, tacking the costs onto electric bills. And players like Leurs and the small power company that built his turbine have installed off-the-shelf technology and locked in profits. For them, it has been remarkably easy being green.

What's coming next won't be so easy. In 2010, the German government declared that it would undertake what has popularly come to be called an Energiewende—an energy turn, or energy revolution. This switch from fossil fuels to renewable energy is the most ambitious ever attempted by a heavily industrialized country: it aims to cut greenhouse-gas emissions 40 percent from 1990 levels by 2020, and 80 percent by midcentury. The goal was challenging, but it was made somewhat easier by the fact that Germany already generated more than 20 percent of its electricity from nuclear power, which produces almost no greenhouse gases. Then last year, responding to public concern over the post-tsunami nuclear disaster in Fukushima, Japan, Chancellor Angela Merkel ordered the eight oldest German nuclear plants shut down right away. A few months later, the government finalized a plan to shut the remaining nine by 2022. Now the Energiewende includes a turn away from Germany's biggest source of low-carbon electricity.

Germany has set itself up for a grand experiment that could have repercussions for all of Europe, which depends heavily on German economic strength. The country must build and use renewable energy technologies at unprecedented scales, at enormous but uncertain cost, while reducing energy use. And it must pull it all off without undercutting industry, which relies on reasonably priced, reliable power. "In a sense, the Energiewende is a political statement without a technical solution," says Stephan Reimelt, CEO of GE Energy Germany. "Germany is forcing itself toward innovation. What this generates is a large industrial laboratory at a size which has never been done before. We will have to try a lot of different technologies to get there."

The major players in the German energy industry are pursuing several strategies at once. To help replace nuclear power, they are racing to install huge wind farms far off the German coast in the North Sea; new transmission infrastructure is being planned to get the power to Germany's industrial regions. At the same time, companies such as Siemens, GE, and RWE, Germany's biggest power producer, are looking for ways to keep factories humming during lulls in wind and solar power. They are searching for cheap, large-scale forms of power storage and hoping that computers can intelligently coördinate what could be millions of distributed power sources.

Sunday, 17 June 2012

Green Fuel from Carbon Dioxide: Freiburg Research Team Develops Method for Sustainable Use of CO2

Engineerblogger
June 16, 2012



Doctoral candidate Elias Frei controls the temperature in the reactor of the catalyst test device. Source: FMF

It is beyond dispute that carbon dioxide (CO2) acts as a greenhouse gas and contributes to global warming, yet we still pump tons and tons of CO2 into the atmosphere every day. A research team at the Freiburg Materials Research Center (FMF) led by the chemist Prof. Dr. Ingo Krossing has now developed a new system for producing methanol from CO2 and hydrogen. Methanol can, for example, be used as an environmentally friendly alternative to gasoline. The goal of the scientists is to capture CO2 on a large scale and integrate it into the utilization cycle as part of a sustainable energy supply.

In order to produce methanol, Krossing’s doctoral candidates combine the carbon dioxide with hydrogen in a high-pressure environment, a process known as hydrogenation. Doctoral candidate Elias Frei has already been conducting research on methanol for several years. “Our goal is to develop new catalyst systems and methods for accelerating the chemical reaction even more,” explains Frei. The researchers at FMF use a mixed catalyst of copper, zinc oxide, and zirconium dioxide, which enables the reaction to happen at lower temperatures so that the gases don’t have to be heated as much. Together the components form a so-called mixed system: a porous solid with a large surface area and defined properties. If the catalysts consist of nanoparticles, their activity increases even further.
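The underlying chemistry is the catalytic hydrogenation of carbon dioxide:

    CO2 + 3 H2 → CH3OH + H2O

One molecule of CO2 and three molecules of hydrogen yield one molecule of methanol plus one of water; the catalyst's role is to make this reaction proceed at a useful rate under milder conditions.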

Frei and his colleague Dr. Marina Artamonova are also testing techniques in which the catalysts are impregnated with ionic liquids, salts in a liquid state that cover the catalyst like a thin film. These help to fix CO2 and hydrogen to the catalyst and to remove the products, methanol and water, from it. The conversion yields pure methanol, which is used as a feedstock in the chemical industry and as a fuel. Used as an alternative to gasoline, it is less dangerous and less harmful to the environment than conventional fuels. Within around two years, the researchers aim to be able to produce methanol on a mass scale using this technique: CO2 would be filtered out of the waste-gas stream of a combined heat and power plant and converted to methanol. When the methanol is burned in a motor, the CO2 is released again, but because each molecule has then been used twice, the same amount of energy comes, in theory, with half the net CO2 emissions. The amount of methanol that could be made from 10 percent of Germany's yearly CO2 emissions would cover the country's yearly fuel needs.
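A quick molar-mass calculation, offered here as an illustrative sketch rather than a figure from the study, shows how much methanol a given amount of captured CO2 can yield:

    # Mass balance for CO2 hydrogenation to methanol: CO2 + 3 H2 -> CH3OH + H2O.
    # The 1:1 molar ratio of CO2 to methanol is standard stoichiometry,
    # not a result specific to the Freiburg work.
    M_CO2 = 44.01    # molar mass of CO2, g/mol
    M_CH3OH = 32.04  # molar mass of methanol, g/mol

    def methanol_from_co2(tonnes_co2):
        """Tonnes of methanol obtainable from tonnes_co2 at full conversion."""
        return tonnes_co2 * (M_CH3OH / M_CO2)

    print(f"{methanol_from_co2(1.0):.2f} t methanol per t CO2")  # ~0.73
    # The hydrogen input is ignored here; in a sustainable scheme it must
    # itself be produced from low-carbon electricity.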

Methanol is also used as a chemical means of hydrogen storage and could thus also be used to power the fuel cells of automobiles in the future. “There is enough energy out there, but it needs to be stored,” says Frei. “As a sustainable means of energy storage, methanol has potential in a wide range of areas. We want to use that potential, because the storage and conversion of energy are important topics for the future.”

Source:  University of Freiburg

Electrified graphene a shutter for light: Researchers tune material to control terahertz, infrared waves

Engineerblogger
June 16, 2012


Experiments at Rice University showed that voltage applied to a sheet of graphene on a silicon-based substrate can turn it into a shutter for both terahertz and infrared wavelengths of light. Changing the voltage alters the Fermi energy (Ef) of the graphene, which controls the transmission or absorption of the beam. The Fermi energy divides the conduction band (CB), which contains electrons that absorb the waves, and the valence band (VB), which contains the holes to which the electrons flow. (Credit: Lei Ren/Rice University)


An applied electric voltage can prompt a centimeter-square slice of graphene to control the transmission of electromagnetic radiation at wavelengths from the terahertz to the midinfrared.

The experiment at Rice University advances the science of manipulating particular wavelengths of light in ways that could be useful in advanced electronics and optoelectronic sensing devices.

In previous work, the Rice lab of physicist Junichiro Kono found a way to use arrays of carbon nanotubes as a near-perfect terahertz polarizer. This time, the team led by Kono is working on an even more basic level; the researchers are wiring a sheet of graphene – the one-atom-thick form of carbon – to apply an electric voltage and thus manipulate what’s known as Fermi energy. That, in turn, lets the graphene serve as a sieve or a shutter for light.

The discovery by Kono and his colleagues at Rice and the Institute of Laser Engineering at Osaka University was reported online this month in the American Chemical Society journal Nano Letters.

In graphene, “electrons move like photons, or light. It’s the fastest material for moving electrons at room temperature,” said Kono, a professor of electrical and computer engineering and of physics and astronomy. He noted many groups have investigated the exotic electrical properties of graphene at zero or low frequencies.

“There have been theoretical predictions about the unusual terahertz and midinfrared properties of electrons in graphene in the literature, but almost nothing had been done in this range experimentally,” Kono said.

Key to the new work, he said, are the words “large area” and “gated.”

“Large because infrared and terahertz have long wavelengths and are difficult to focus on a small area,” Kono said. “Gated simply means we attached electrodes, and by applying a voltage between the electrodes and (silicon) substrate, we can tune the Fermi energy.”

Fermi energy is the energy of the highest occupied quantum state of electrons within a material. In other words, it defines a line that separates quantum states that are occupied by electrons from the empty states. “Depending on the value of the Fermi energy, graphene can be either p-type (positive) or n-type (negative),” he said.
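For readers who want the numbers, the gate-to-Fermi-energy relation follows from the standard parallel-plate capacitor model and graphene's linear band structure. The sketch below assumes a back gate through 300 nanometers of SiO2, a common device geometry that the article does not confirm for this particular experiment:

    # Estimate graphene Fermi energy from back-gate voltage.
    # Uses the textbook parallel-plate gate model and the Dirac dispersion
    # E_F = hbar * v_F * sqrt(pi * n); device parameters are illustrative
    # assumptions, not taken from the Rice experiment.
    import math

    E0 = 8.854e-12        # vacuum permittivity, F/m
    EPS_SIO2 = 3.9        # relative permittivity of SiO2
    T_OX = 300e-9         # assumed gate oxide thickness, m
    E_CHARGE = 1.602e-19  # elementary charge, C
    HBAR = 1.055e-34      # reduced Planck constant, J*s
    V_FERMI = 1.0e6       # graphene Fermi velocity, m/s

    def fermi_energy_eV(v_gate, v_cnp=0.0):
        """Fermi energy (eV) induced by a gate voltage measured from the
        charge-neutrality point v_cnp."""
        n = E0 * EPS_SIO2 * abs(v_gate - v_cnp) / (E_CHARGE * T_OX)  # carriers/m^2
        return HBAR * V_FERMI * math.sqrt(math.pi * n) / E_CHARGE

    for v in (5, 15, 30):
        print(f"Vg = {v:>2} V -> Ef ~ {fermi_energy_eV(v) * 1000:.0f} meV")
    # Tens of volts on a back gate shift Ef by one to two hundred meV,
    # enough to move the absorption edge across the infrared.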

Making fine measurements required what is considered in the nano world to be a very large sheet of graphene, even though it was a little smaller than a postage stamp. The square centimeter of atom-thick carbon was grown in the lab of Rice chemist James Tour, a co-author of the paper, and gold electrodes were attached to the corners.

Raising or lowering the applied voltage tuned the Fermi energy in the graphene sheet, which in turn changed the density of free carriers that are good absorbers of terahertz and infrared waves. This gave the graphene sheet the ability to either absorb some or all of the terahertz or infrared waves or let them pass. With a spectrometer, the team found that terahertz transmission peaked at near-zero Fermi energy, around plus-30 volts; with more or less voltage, the graphene became more opaque. For infrared, the effect was the opposite, he said, as absorption was large when the Fermi energy was near zero.

“This experiment is interesting because it lets us study the basic terahertz properties of free carriers with electrons (supplied by the gate voltage) or without,” Kono said. The research extended to analysis of the two methods by which graphene absorbs light: through interband (for infrared) and intraband (for terahertz) absorption. Kono and his team found that varying the wavelength of light containing both terahertz and infrared frequencies enabled a transition from the absorption of one to the other. “When we vary the photon energy, we can smoothly transition from the intraband terahertz regime into the interband-dominated infrared. This helps us understand the physics underlying the process,” he said.
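There is a simple selection rule behind the transition Kono describes: interband absorption is Pauli-blocked unless the photon energy exceeds twice the Fermi energy (ħω > 2Ef). Infrared photons, with large ħω, are therefore absorbed most strongly when Ef sits near zero, while terahertz photons are absorbed by free carriers only once gating has filled the bands, which matches the opposite voltage dependences the team measured.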

They also found that thermal annealing – heating – of the graphene cleans it of impurities and alters its Fermi energy, he said.

Kono said his lab will begin building devices while investigating new ways to manipulate light, perhaps by combining graphene with plasmonic elements that would allow a finer degree of control.

Co-authors of the paper include former Rice graduate students Lei Ren, Jun Yao and Zhengzong Sun; Rice graduate student Qi Zhang; Rice postdoctoral researchers Zheng Yan and Sébastien Nanot; former Rice postdoctoral researcher Zhong Jin; and graduate student Ryosuke Kaneko, assistant professor Iwao Kawayama and Professor Masayoshi Tonouchi of the Institute of Laser Engineering, Osaka University.

The research was supported by the Department of Energy, the National Science Foundation, the Robert A. Welch Foundation and the Japan Society for the Promotion of Science Core-to-Core Program. Support for the Tour Group came from the Office of Naval Research and the Air Force Office of Scientific Research.

Source: Rice University

Ionic liquid improves speed and efficiency of hydrogen-producing catalyst

Engineerblogger
June 16, 2012



Combined with an acidic ionic liquid, this catalyst can make hydrogen gas fast and efficiently.

The design of a nature-inspired material that can make energy-storing hydrogen gas has gone holistic. Usually, tweaking the design of this particular catalyst — a work in progress for cheaper, better fuel cells — results in either faster or more energy-efficient production, but not both. Now, researchers have found a condition that creates hydrogen faster without a loss in efficiency.

And, holistically, it requires the entire system — the hydrogen-producing catalyst and the liquid environment in which it works — to overcome the speed-efficiency tradeoff. The results, published online June 8 in the Proceedings of the National Academy of Sciences, provide insights into making better materials for energy production.

"Our work shows that the liquid medium can improve the catalyst's performance," said chemist John Roberts of the Center for Molecular Electrocatalysis at the Department of Energy's Pacific Northwest National Laboratory. "It's an important step in the transformation of laboratory results into useable technology."

The results also provide molecular details into how the catalytic material converts electrical energy into the chemical bonds between hydrogen atoms. This information will help the researchers build better catalysts, ones that are both fast and efficient, and made with the common metal nickel instead of expensive platinum.

A Solution Solution

The work explores a type of dissolvable nickel-based catalyst, which is a material that eggs on chemical reactions. Catalysts that dissolve are easier to study than fixed catalysts, but fixed catalysts are needed for most real-world applications, such as a car's pollution-busting catalytic converter. Studying the catalyst comes first; affixing it to a surface comes later.

In their search for a better catalyst to produce hydrogen to feed into fuel cells, the team of PNNL chemists modeled this dissolvable catalyst after a protein called a hydrogenase. Such a protein helps tie two hydrogen atoms together with electrons, storing energy in their chemical bond in the process. They modeled the catalytic center after the protein's important parts and built a chemical scaffold around it.

In previous versions, the catalyst was either efficient but slow, making about a thousand hydrogen molecules per second; or inefficient yet fast — clocking in at 100,000 molecules per second. (Efficiency is based on how much electricity the catalyst requires.) The previous work didn't get around this pesky relation between speed and efficiency in the catalysts — it seemed they could have one but not the other.
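Efficiency here is essentially overpotential: the extra voltage, beyond the thermodynamic minimum, that the catalyst needs to drive the reaction. A small illustrative calculation (the overpotential values below are assumptions chosen for illustration, not measurements from this study) shows why that matters:

    # Electrical energy cost of hydrogen evolution (2 H+ + 2 e- -> H2)
    # as a function of overpotential. Overpotential values are illustrative,
    # not measurements from the PNNL catalysts.
    FARADAY = 96485.0  # coulombs per mole of electrons

    def wasted_kj_per_mol_h2(overpotential_v):
        """Extra electrical energy (kJ) per mole of H2 at a given overpotential."""
        return 2 * FARADAY * overpotential_v / 1000.0  # two electrons per H2

    for eta in (0.1, 0.3, 0.6):
        print(f"overpotential {eta:.1f} V -> {wasted_kj_per_mol_h2(eta):.0f} kJ lost per mol H2")
    # Every 100 mV of overpotential wastes about 19 kJ per mole of hydrogen,
    # which is why a fast but inefficient catalyst carries a real energy penalty.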

Hoping to uncouple the two, Roberts and colleagues put the slow catalyst in a medium called an acidic ionic liquid. Ionic liquids are liquid salts and contain molecules or atoms with negative or positive charges mixed together. They are sometimes used in batteries to allow for electrical current between the positive and negative electrodes.

The researchers mixed the catalyst, the ionic liquid, and a drop of water. The catalyst, with the help of the ionic liquid and an electrical current, produced hydrogen molecules, stuffing some of the electrons coming in from the current into the hydrogen's chemical bonds, as expected.

As they continued to add more water, they expected the catalyst to speed up briefly then slow down, as the slow catalyst in their previous solvent did. But that's not what they saw.

"The catalyst lights up like a rocket when you start adding water," said Roberts.

The rate continued to increase as they added more and more water. With the largest amount of water they tested, the catalyst produced up to 53,000 hydrogen molecules per second, almost as fast as their fast and inefficient version.

Importantly, the speedy catalyst stayed just as efficient when it was cranking out hydrogen as when it produced the gas more slowly. Being able to separate the speed from the efficiency means the team might be able to improve both aspects of the catalyst.

Liquid Protein

The team also wanted to understand how the catalyst worked in its liquid salt environment. The speed of hydrogen production suggested that the catalyst moved electrons around fast. But something also had to be moving protons around fast, because protons are the positively charged hydrogen ions that electrons follow around. Just like on an assembly line, protons move through the catalyst or a protein such as hydrogenase, pick up electrons, form bonds between pairs to make hydrogen, then fall off the catalyst.
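Underlying all of this is the hydrogen evolution half-reaction,

    2 H+ + 2 e− → H2

so the catalyst's job is to bring two protons and two electrons together at the same site. Anything in the surroundings that delivers protons to that site faster, whether a protein scaffold or an ionic liquid, can raise the overall rate.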

Additional tests hinted how this catalyst-ionic liquid set-up works. Roberts suspects the water and the ionic liquid collaborated to mimic parts of the natural hydrogenase protein that shuffled protons through. In these proteins, the chemical scaffold holding the catalytic center also contributes to fast proton movement. The ionic liquid-water mixture may be doing the same thing.

Next, the team will explore the hints they gathered about why the catalyst works so fast in this mixture. They will also need to attach it to a surface. Lastly, this catalyst produces hydrogen gas. To create a fuel technology that converts electrical energy to chemical bonds and back again, they also plan to examine ionic liquids that will help a catalyst take the hydrogen molecule apart.


This rooster-tail graphic illustrates how the catalyst picks up speed as water is added to the ionic liquid (more water means a taller, faster-rising current).

Source:  Pacific Northwest National Laboratory


Additional Information:
 

  • Reference: Douglas H. Pool, Michael P. Stewart, Molly O'Hagan, Wendy J. Shaw, John A. S. Roberts, R. Morris Bullock, and Daniel L. DuBois (2012). "An Acidic Ionic Liquid/Water Solution as Both Medium and Proton Source for Electrocatalytic H2 Evolution by [Ni(P2N2)2]2+ Complexes." Proc. Natl. Acad. Sci. USA, Early Edition (online the week of June 8), DOI: 10.1073/pnas.1120208109. (http://www.pnas.org/content/early/2012/06/07/1120208109)

Monday, 11 June 2012

ICECool to Crack Thermal Management Barrier, Enable Breakthrough Electronics

Engineerblogger
June 11, 2012

DARPA is behind new microfluidics miniaturization technology that embeds microchannels directly into computer chips, helping to cool them down.

The continued miniaturization and the increased density of components in today’s electronics have pushed heat generation and power dissipation to unprecedented levels. Current thermal management solutions, usually involving remote cooling, are unable to limit the temperature rise of today’s complex electronic components. Such remote cooling solutions, where heat must be conducted away from components before rejection to the air, add considerable weight and volume to electronic systems. The result is complex military systems that continue to grow in size and weight due to the inefficiencies of existing thermal management hardware.

Recent advances from the DARPA Thermal Management Technologies (TMT) program enable a different approach: embedded thermal management. DARPA’s Intrachip/Interchip Enhanced Cooling (ICECool) program seeks to crack the thermal management barrier and overcome the limitations of remote cooling. ICECool will explore ‘embedded’ thermal management by bringing microfluidic cooling inside the substrate, chip or package, making thermal management part of the earliest stages of electronics design.

“Think of current electronics thermal management methods as the cooling system in your car,” said Avram Bar-Cohen, DARPA program manager. “Water is pumped directly through the engine block and carries the absorbed heat through hoses back to the radiator to be cooled. By analogy, ICECool seeks technologies that would put the cooling fluid directly into the electronic ‘engine’. In DARPA’s case this embedded cooling comes in the form of microchannels designed and built directly into chips, substrates and/or packages as well as research into the thermal and fluid flow characteristics of such systems at both small and large scales.”
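A rough sense of scale helps here. The following is a minimal single-phase sketch, assuming a 100-micrometer water-filled channel and a 40 K wall-to-coolant temperature difference (both assumed values; ICECool itself also targets evaporative cooling, which can remove considerably more heat):

    # Rough heat-flux estimate for a water-filled microchannel etched into a chip.
    # Geometry and temperatures are illustrative assumptions.
    NU_LAMINAR = 3.66  # Nusselt number, fully developed laminar flow, constant wall T
    K_WATER = 0.6      # thermal conductivity of water, W/(m*K)
    D_H = 100e-6       # assumed hydraulic diameter of the channel, m
    DELTA_T = 40.0     # assumed wall-to-coolant temperature difference, K

    h = NU_LAMINAR * K_WATER / D_H  # convective coefficient, W/(m^2*K)
    q_flux = h * DELTA_T            # removable heat flux, W/m^2

    print(f"h ~ {h:,.0f} W/(m^2*K)")          # ~22,000
    print(f"q ~ {q_flux / 1e4:.0f} W/cm^2")   # ~90 W/cm^2
    # Chip-scale heat fluxes from plain laminar water flow, before the extra
    # capacity of two-phase (evaporative) cooling is even exploited.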

The ICECool Fundamentals solicitation released today seeks proposals to research and demonstrate the microfabrication and evaporative cooling techniques needed to implement embedded cooling. Proposals are sought for intrachip/interchip solutions that bring microchannels, micropores, etc. into the design and fabrication of chips. Interchip solutions for chip stacks are also sought.

“Thermal management is key for advancing Defense electronics,” said Thomas Lee, director, Microsystems Technology Office. “Embedded cooling may allow for smaller electronics, enabling a more mobile, versatile force. Reduced thermal resistance would improve performance of DoD electronics and may result in breakthrough capabilities we cannot yet envision.”

Source: DARPA

Engineers Devise New Way to Split Water: Nontoxic, noncorrosive, "low-temperature" method makes use of wasted heat

Engineerblogger
June 11, 2012


Providing a possible new route to hydrogen-gas production, researchers at the California Institute of Technology (Caltech) have devised a series of chemical reactions that allows them, for the first time, to split water in a nontoxic, noncorrosive way, at relatively low temperatures.

A research group led by Mark Davis, the Warren and Katharine Schlinger Professor of Chemical Engineering at Caltech, describes the new, four-reaction process in the early edition of the Proceedings of the National Academy of Sciences (PNAS).

Hydrogen is a coveted gas: industry uses it for everything from removing sulfur from crude oil to manufacturing vitamins. Since its combustion does not emit carbon dioxide into the atmosphere, there is some belief that it could even fuel a potential "hydrogen economy"—an energy-delivery system based entirely on this one gas. But since there is no abundant supply of hydrogen gas that can be simply tapped into, this lighter-than-air gas has to be mass-produced.

One way to make hydrogen is by using heat to split water, yielding pure hydrogen and oxygen. Known as thermochemical water splitting, this method is appealing because it can take advantage of excess heat given off by other processes. Thus far, it has been attempted in two ways: using two steps and taking advantage of high temperatures (above 1000°C) associated with solar collectors; or through multiple steps at "lower temperatures"—those below 1000°C—where, for example, the excess heat from nuclear reactors could drive the chemistry.
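The energetics explain the appeal. Splitting a mole of liquid water,

    H2O(l) → H2(g) + ½ O2(g),   ΔH° ≈ 286 kJ/mol

takes the same total energy no matter how it is done; what high-temperature operation changes is how much of that energy can be supplied as heat rather than as work, since the TΔS share grows with temperature. That is why waste heat from solar collectors or nuclear reactors is such an attractive input.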

Davis is interested in this latter approach, which actually takes him back to his academic roots: his first paper as a graduate student dealt with a low-temperature water-splitting cycle, called the sulfur-iodine system, which has since been piloted for use around the world. Although that cycle operates at a maximum temperature of 850°C, it also produces a number of toxic and corrosive liquid intermediates that have to be dealt with. The cycle's high-temperature counterparts typically involve simpler reactions and solid intermediates—but there are very few processes that produce excess heat at such high temperatures.

"We wanted to combine the best of both worlds," Davis says. "We wanted to use solids, as they do in the high-temperature cycles, so we could avoid these toxicity and corrosion issues. But we also wanted to learn how to lower the temperature."

The first thing postdoctoral scholar and lead author Bingjun Xu and graduate student Yashodhan Bhawe did was to prove via thermodynamic arguments that a two-step, low-temperature cycle for water splitting will not be practical. "Nature's telling you 'No way,'" Davis says. "It was really a key point that told us we had to go away from looking for a two-step process, and that guidance directed us down another pathway that turned out to be quite fruitful."

The four-reaction cycle the team came up with begins with manganese oxide and sodium carbonate, and it is a completely closed system: the water that enters the system in the second step comes out completely converted into hydrogen and oxygen during each cycle. That's important because it means that none of the hydrogen or oxygen is lost, and the cycle can run over and over, splitting water into the two gases. In the current paper, the researchers ran their newly created cycle five times to show reproducibility. To be practical, however, the cycle would have to be shown to run thousands of times, and experiments of that scale are beyond the current capabilities of the Davis lab.

"We're excited about this new cycle because the chemistry works, and it allows you to do real thermochemical water splitting with temperatures of 850°C without producing any of the halides or other types of corrosive acids that have been problems in the past," Davis says. Still, he is careful to point out that the implementation of the cycle as a functioning water-splitting system will require clever engineering. For example, for practical purposes, engineers will want some of the reactions to go faster, and they would also need to build processing reactors that have efficient-energy flows and recycling amongst the different stages of the cycle.

Going forward, the team plans to study further the chemistry of the cycle at the molecular level. They have already learned that shuttling sodium in and out of the manganese oxide is critical in lowering the operating temperature, but they want to know more about what exactly is happening during those steps. They hope that the enhanced understanding will allow them to devise cycles that could operate at even lower maximum temperatures.

Figuring out ways to decrease the operating temperatures is at the heart of Davis's interest in this project. "What we're trying to ask is, 'Where are the places around the world where people are just throwing away energy in the form of heat?'" he says. He speculates that there could be a day when water-splitting plants run on the heat given off by a variety of industries, from steelmaking and aluminum production to petrochemicals and traditional power generation. "The lower the temperature that we can use for driving these types of water-splitting processes," he says, "the more we can make use of energy that people are currently just wasting."

 Source: Caltech