Blogger Themes

Wednesday, 29 February 2012

Total Immersion: Immersive Engineering equals Improved Ergonomics

Engineerblogger
Feb 29, 2012


Technicians at Lockheed Martin wear motion tracking sensors (above) as they mime aircraft carrier deck tasks. The information captured animates digital avatars (below) in simulations.

A large and growing part of safety engineering in factories—a.k.a. human factors—is a sharp focus on ergonomics and what it can tell engineers about injuries. The emphasis is on eliminating over-exertion and awkward work postures in repetitive factory jobs.

The solution is immersive engineering, which integrates virtual reality (VR), digital video and related 3-D technologies, computer-aided design (CAD), simulation and analysis, and solid modeling. These theater-like systems surround problem solvers with real-time engineering data presented digitally in life-sized displays with ergonomically accurate, motion-tracked avatars—digital humans.

Computerization and Ergonomics

These efforts mark a new safety push that comes on top of avoiding workplace accidents, especially around machinery, and preventing illnesses due to chemical exposure and excess noise. This new focus within factory safety is a direct extension of longstanding efforts to eliminate repetitive stress injuries such as lower back pain and carpal tunnel syndrome related to computerized office tasks. After four decades of office automation, nearly every office job has been computerized.

Computerization has revolutionized factory work, too, along with myriad mechanical assistance devices, from simple counter-balanced lifters to programmable industrial robots in foundries, welding and painting. Hundreds of thousands of formerly onerous jobs have been made easier, even though so many jobs have been outsourced to low-labor-cost countries.

Much of the reason for the early ergonomic success of immersive engineering relates to a unique strength of the technology: it lets ergonomists and other safety experts solve workplace problems in the virtual world of the computer, working directly with engineers (mechanical, industrial, and manufacturing), productivity managers, and even cost-control staff.

Information captured by the Lockheed Martin technicians animates digital avatars in simulations.

Reaping the Benefits

The results are dramatic, as shown by data from vehicle assembly operations of Ford Motor Co. in Dearborn, Mich. Ford has documented simultaneous gains: fewer injuries, fewer compensation claims, shorter learning curves in getting new vehicles into production, lower costs for tooling changes, reduced production costs in general, and higher workplace productivity. The United Auto Workers and other unions support these efforts.

Ford's premiums for worker's compensation insurance have fallen by about 55% since 2000, to under $15 million in 2007 from an average of $40 million in the early 1990s. By far the biggest portion of the drop was in the repetitive-stress injuries that ergonomic analyses play such a big role in preventing. This is backed up by company medical records that show dramatic reductions in injuries related to spinal compression, back and upper body strains, and shoulder/rotator cuff injuries.


Allison Stephens directs a study of the physical exertion of an assembly task—installing a console between a vehicle's two front seats—at Ford's Dearborn Ergonomics Laboratory in Michigan.

At the same time, new-vehicle quality has improved at five times the industry average; Ford now matches Honda, and the two exceed all other manufacturers. Product development times have shrunk by eight to 14 months during the past five years. Cost details have not been released, but across the industry such costs track product development time. In 2007 alone, Ford new-vehicle quality improved an unprecedented 11%, measured three months after sale, against a North American industry average of just 2%. In 2009, Ford added an immersive engineering system to its European operations.

A similar system was installed late in 2010 at the Lockheed Martin Space Systems Co. in Denver, Colo., to generate gains in the final assembly of satellites. That is Lockheed Martin's third immersive engineering system.

Key elements of these systems include Jack (Tecnomatix) and Delmia ergonomic and analysis software. The developers (respectively) are Siemens PLM in Ann Arbor, Mich., and Dassault Systemes in Auburn Hills, Mich. The leading developer of motion tracking and analysis systems is Motion Analysis Corp. in Santa Rosa, Calif. The leading systems integrator for immersive engineering is Mechdyne Corp. in Marshalltown, Iowa.

Source: ASME

Battery to Take On Diesel and Natural Gas

Technology Review
Feb 29, 2012


Battery building: Aquion Energy recently announced plans to retrofit this factory—which used to make Sony televisions—to make large batteries for use with solar power plants. Credit: RIDC Westmoreland

Aquion Energy, a company that's making low-cost batteries for large-scale electricity storage, has selected a site for its first factory and says it's lined up the financing it needs to build it.

The company hopes its novel battery technology could allow some of the world's 1.4 billion people without electricity to get power without having to hook up to the grid.

The site for Aquion's factory is a sprawling former Sony television factory near Pittsburgh. The initial production capacity will be "hundreds" of megawatt-hours of batteries per year—the company doesn't want to be specific yet. It also isn't saying how much funding it's raised or where the money comes from, except to mention that some of it comes from the state of Pennsylvania, and that $5 million, in the form of an R&D grant, comes from the federal government.

The first applications are expected to be in countries like India, where hundreds of millions of people in communities outside major cities don't have a connection to the electrical grid or any other reliable source of electricity. Most of these communities use diesel generators for power, but high prices for oil and low prices for solar panels are making it cheaper to install solar in some cases.

To store power generated during the day for use at night, these communities need battery systems that can handle anything from tens of kilowatt-hours to a few megawatt-hours, says Scott Pearson, Aquion's CEO. Such a system could make long-distance transmission lines unnecessary, in much the same way that cell-phone towers have allowed such communities access to cellular service before they had land lines.
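To make that scale concrete, here is a minimal back-of-the-envelope sizing sketch in Python. Every number in it (household count, per-home consumption, night-time share of the load, usable depth of discharge) is an illustrative assumption, not an Aquion figure:

```python
# Rough sizing of an off-grid solar-plus-storage system for a village.
# All input values are assumptions for illustration only.

homes = 200                  # households served (assumed)
daily_kwh_per_home = 2.0     # modest daily consumption per home (assumed)
night_fraction = 0.6         # share of the load served after sunset (assumed)
depth_of_discharge = 0.8     # usable fraction of battery capacity (assumed)

daily_load = homes * daily_kwh_per_home        # 400 kWh/day
night_load = daily_load * night_fraction       # 240 kWh drawn from storage
battery_kwh = night_load / depth_of_discharge  # ~300 kWh nameplate capacity

print(f"Daily load:     {daily_load:.0f} kWh")
print(f"Overnight draw: {night_load:.0f} kWh")
print(f"Battery needed: {battery_kwh:.0f} kWh nameplate")
```

Even this modest village lands in the hundreds of kilowatt-hours, squarely inside the tens-of-kilowatt-hours-to-megawatt-hours window Pearson describes.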

Eventually Aquion plans to sell stacks of batteries in countries that have electrical grids. They could provide power during times of peak demand and make up for fluctuations in power that big wind farms and solar power plants contribute to the grid. Those applications require tens to hundreds of gigawatt-hours' worth of storage, so to supply them, Aquion needs to increase its manufacturing capacity. Competing with natural-gas power plants—especially in the United States, where natural gas is so cheap—will mean waiting until economies of scale bring costs down.

The company has said that it initially hopes to make batteries for under $300 per kilowatt-hour, far cheaper than conventional lithium-ion batteries. Lead-acid batteries can be cheaper than Aquion's, but they last only two or three years. Aquion's batteries, which can be recharged 5,000 times, could last for over a decade in situations in which they're charged once a day (the company has tested the batteries for a couple of years so far).
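The lifetime claim follows directly from the cycle count. A quick sketch of the arithmetic, using only the figures quoted above and ignoring degradation, financing and round-trip losses:

```python
cost_per_kwh = 300   # target pack cost, $/kWh (from the article)
cycle_life = 5000    # rated charge-discharge cycles (from the article)
cycles_per_day = 1   # one solar day-night cycle

lifetime_years = cycle_life / (cycles_per_day * 365)  # ~13.7 years
cost_per_kwh_cycled = cost_per_kwh / cycle_life       # $0.06 per kWh delivered

print(f"Lifetime at one cycle per day: {lifetime_years:.1f} years")
print(f"Capital cost per kWh cycled:   ${cost_per_kwh_cycled:.2f}")
```

On these numbers, each kilowatt-hour stored and released costs about six cents in capital over the battery's life, which is the metric that ultimately decides the contest with diesel and natural gas.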

Seeking Cheaper, Nimbler Satellites and Safer Disposal of Space Debris

Engineerblogger
Feb 29, 2012


Credit: RPI


A new research program at Rensselaer Polytechnic Institute seeks to define the next generation of low-orbit satellites: more maneuverable, cheaper to launch, easier to hide, and longer lived. Additionally, this research holds the promise of guiding dead satellites and other space debris more safely to the Earth’s surface.

Led by Rensselaer faculty member Riccardo Bevilacqua, the research team is challenged with developing new theories for exploiting the forces of atmospheric drag to maneuver satellites in low-Earth orbits. Atmospheric drag is present at altitudes up to about 500 kilometers. Using this drag to alter a satellite’s trajectory removes the need to burn propellant for such maneuvers, and decreasing the amount of required propellant makes satellites weigh less, which reduces the overall cost of launching them into orbit.
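The underlying physics is the standard aerodynamic drag relation; the expressions below are textbook formulas, not equations from the RPI announcement. A satellite of mass m, cross-sectional area A and drag coefficient C_d moving at speed v through air of density ρ decelerates at

\[ a_d = \frac{F_d}{m} = \frac{\rho\, v^2 C_d A}{2m}. \]

Two spacecraft flying in formation therefore feel a differential deceleration of

\[ \Delta a = \frac{\rho\, v^2}{2}\left(\frac{C_{d,1} A_1}{m_1} - \frac{C_{d,2} A_2}{m_2}\right), \]

so modulating the exposed area A of one of them, for example by deploying or retracting a panel, steers their relative along-track motion without spending any propellant.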

Additionally, this new research holds the promise of using drag to control and maneuver dead satellites that are inoperable or have run out of propellant.

This project, titled “Propellant-free Spacecraft Relative Maneuvering via Atmospheric Differential Drag,” is funded by the Air Force Office of Scientific Research (AFOSR) Young Investigator Research Program with an expected three-year, $334,000 grant.

“Using differential drag to maneuver multi-spacecraft systems in low-Earth orbit is a new, non-chemical way to potentially reduce or even eliminate the need for propellant,” said Bevilacqua, assistant professor in the Department of Mechanical, Aerospace, and Nuclear Engineering (MANE) at Rensselaer. “Reducing the satellite’s overall mass at launch, by carrying less propellant, allows for easier, cheaper, and faster access to space. In addition, the ability to maneuver without expulsion of gases enables spacecraft missions that are harder to detect.”

Satellites experience drag while in low-Earth orbits, and this drag causes their orbits to decay—sending the satellites closer and closer to Earth. Bevilacqua wants to take advantage of this drag by attaching large retractable panels to satellites. When deployed, these panels would work like a parachute and create more drag in order to slow down or maneuver the satellite.

This type of system could be built into new satellites, or even designed as a separate device that could be attached to existing satellites already in orbit. The drag panel system would use electrical power—which can be recharged via solar panels—to perform its maneuvers. The system would not require any fuel or propellant. Bevilacqua said such a device could be attached to a dead satellite already in freefall, in order to help control where the satellite will land on the Earth’s surface.

This new project is a key component of Bevilacqua’s overall research portfolio, which focuses on the guidance, navigation, and control of multiple spacecraft. The overall trend in spacecraft design is to go smaller and smaller, he said. Today’s satellites are generally one big unit. In the future, satellite systems likely will be made up of many smaller satellites that join together and form one larger device. This type of modular system allows for individual components to be replaced or upgraded while the overall system remains functional in orbit. One of the major challenges to realizing this vision is developing a propellant-free means to maneuver small satellites so they’re able to rendezvous and join with one another. Differential drag could be one such way to accomplish this, Bevilacqua said.

Bevilacqua joined the Rensselaer School of Engineering faculty in 2010, before which he served as a lecturer and researcher at the Naval Postgraduate School in Monterey, Calif. He earned his laurea degree in aerospace engineering, and his doctoral degree in mathematical methods and models for applied sciences, both from the Sapienza University of Rome.

He is also a faculty member of the Center for Automation Technologies and Systems (CATS) at Rensselaer.


Source: Rensselaer Polytechnic Institute

Tiny 3D chips: Researchers develop a new approach to producing microchips

MIT News
Feb 29, 2012
A new approach helps researchers make tiny three-dimensional structures. Pictured are two packaged microchips, each with tiny bridges fabricated on their surfaces.

Microelectromechanical systems, or MEMS, are small devices with huge potential. Typically made of components less than 100 microns in size — the diameter of a human hair — they have been used as tiny biological sensors, accelerometers, gyroscopes and actuators.

For the most part, existing MEMS devices are two-dimensional, with functional elements engineered on the surface of a chip. It was thought that operating in three dimensions — to detect acceleration, for example — would require complex manufacturing and costly merging of multiple devices in precise orientations.

Now researchers at MIT have come up with a new approach to MEMS design that lets engineers create 3-D configurations using existing fabrication processes; with this approach, the researchers built a MEMS device that enables 3-D sensing on a single chip. The silicon device, not much larger than Abraham Lincoln’s ear on a U.S. penny, contains microscopic elements about the width of a red blood cell that can be engineered to reach heights of hundreds of microns above the chip’s surface.

Fabio Fachin, a postdoc in the Department of Aeronautics and Astronautics, says the device may be outfitted with sensors, placed atop and underneath the chip’s minuscule bridges, to detect three-dimensional phenomena such as acceleration. Such a compact accelerometer may be useful in several applications, including autonomous space navigation, where extremely accurate resolution of three-dimensional acceleration fields is key.

“One of the main driving factors in the current MEMS industry is to try to make fully three-dimensional devices on a single chip, which would not only enable real 3-D sensing and actuation, but also yield significant cost benefits,” Fachin says. “A MEMS accelerometer could give you very accurate acceleration [measurements] with a very small footprint, which in space is critical.”

Fachin collaborated with Brian Wardle, an associate professor of aeronautics and astronautics at MIT, and Stefan Nikles, a design engineer at MEMSIC, an Andover, Mass., company that develops wireless-sensor technology. The team outlined the principles behind their 3-D approach in a paper accepted for publication in the Journal of Microelectromechanical Systems.

New laser can point the way to new energy harvesting

Engineerblogger
Feb 29, 2012


Ismael Heisler next to a diffractive optic polarisation spectrometer. Credit: EPSRC


New ultrafast laser equipment, capable of generating intense pulses of light as short as a few femtoseconds from the UV to the infrared, will help scientists at the University of East Anglia (UEA) measure how energy is transferred from molecule to molecule and point the way to molecular structures for exploiting solar radiation.

Funded by a £466,000 grant from the Engineering and Physical Sciences Research Council, the new laser will be used for 2D electronic spectroscopy experiments that look at the very fastest reactions. By studying how energy transfers in natural and artificial systems such as proteins and molecular materials, researchers will in turn be able to help the design of new nanomachines and solar power collectors.

Steve Meech, Professor of Chemistry at UEA, said:

"With this equipment we will be able to develop experiments which probe in exquisite detail the link between the efficiency of light driven processes in natural and synthetic systems and the underlying molecular architecture."

2D electronic spectroscopy is in many ways analogous to the much better known 2D nuclear magnetic resonance (NMR) method. It uses ultrafast visible light pulses to reveal coupling between electronic states, whereas NMR uses radio-frequency pulses to measure couplings between nuclear spins.

Twenty years ago most ultrafast experiments relied upon amplified dye lasers. These difficult-to-use and unstable devices severely limited the range of experiments possible. Starting with the discovery of the titanium-sapphire laser, a whole new family of experiments became possible.

"It is because of the amazing stability and reliability of these modern devices that we can even consider 2D optical experiments, which may take days to run", added Meech.

Lesley Thompson, EPSRC’s Director of Research Base, said:

"The grant for equipment made by our strategic equipment panel will give UEA the tools they need, but EPSRC has also allocated a further £613,000 for staff and collaborations to drive this research forward."

The announcement coincides with the inaugural lecture by Professor Alf Adams at the Royal Society in London, to mark the 25th anniversary of his work on strained quantum well lasers, recently named as one of the Top Ten greatest UK scientific breakthroughs of all time.

The lecture, entitled Semiconductor Lasers Take The Strain, is the first in a series named in his honour.

Source: Engineering and Physical Sciences Research Council (EPSRC)

Tuesday, 28 February 2012

Experimental smart outlet brings flexibility, resiliency to grid architecture

Engineerblogger
Feb 28, 2012


Anthony Lentine with the smart outlet. Photo by Randy Montoya

Sandia National Laboratories has developed an experimental “smart outlet” that autonomously measures, monitors and controls electrical loads with no connection to a centralized computer or system. The goal of the smart outlet and similar innovations is to make the power grid more distributed and intelligent, capable of reconfiguring itself as conditions change.

Decentralizing power generation and controls would allow the grid to evolve into a more collaborative and responsive collection of microgrids, which could function individually as an island or collectively as part of a hierarchy or other organized system.

“A more distributed architecture can also be more reliable because it reduces the possibility of a single-point failure. Problems with parts of the system can be routed around or dropped on and off the larger grid system as the need arises,” said smart outlet co-inventor Anthony Lentine.

Such flexibility could make better use of variable-output energy resources such as wind and solar, because devices such as the smart outlet can vary their load demand to compensate for variations in energy production.

“This new distributed, sensor-aware, intelligent control architecture, of which the smart outlet is a key component, could also identify malicious control actions and prevent their propagation throughout the grid, enhancing the grid’s cyber security profile,” Lentine said.

Anatomy of a smart outlet

The outlet includes four receptacles, each with voltage/current sensing; actuation (switching); a computer for implementing the controls; and an Ethernet bridge for communicating with other outlets and sending data to a collection computer.

The outlet measures power usage and the direction of power flow, which is normally one-way but could be bi-directional if something like a photovoltaic system is connected to send power onto the grid. Bi-directional monitoring and control could allow each location with its own energy production, such as photovoltaic or wind, to become an “island” when the main power grid goes down. Currently, such islanding rarely occurs because the equipment needed to keep power from flowing back toward the grid is lacking.

The outlet also measures real power and reactive power, which provides a more accurate measurement of the power potentially available to drive the loads, allowing the outlets to better adapt to changing energy needs and production.
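As a rough illustration of what that measurement involves, the Python sketch below computes real, apparent and reactive power from one sampled AC cycle. It is a hypothetical example with assumed sampling parameters, not Sandia's implementation:

```python
import numpy as np

fs = 6000                                # sampling rate, Hz (assumed)
f = 60                                   # line frequency, Hz
t = np.arange(0, 1 / f, 1 / fs)          # one full cycle of samples

v = 170 * np.sin(2 * np.pi * f * t)              # ~120 V RMS line voltage
i = 10 * np.sin(2 * np.pi * f * t - np.pi / 6)   # current lagging 30 degrees

p = np.mean(v * i)                                   # real power, W
s = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))  # apparent power, VA
q = np.sqrt(s**2 - p**2)                             # reactive power, VAR

print(f"P = {p:.0f} W, S = {s:.0f} VA, Q = {q:.0f} VAR")
# -> P = 736 W, S = 850 VA, Q = 425 VAR
```

Real power is what actually does work in the load; the gap between it and the apparent power is what an outlet must know about to match loads to the power actually available.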

Similar technology could be built into energy-intensive appliances and connected to a home monitoring system, allowing the homeowner greater control of energy use. What is different about the smart outlet is that distributed autonomous control allows a homeowner with little technical expertise to manage loads and the utility to manage loads with less hands-on, and costly, human intervention.

Utilities currently use mostly fossil fuels and nuclear reactors to generate baseload electric power, the amount needed to meet the minimum requirements of power users. Utilities know how much power they need based on decades of usage data, so they can predict demand under normal conditions.

“With the increased use of variable renewable resources, such as wind and solar, we need to develop new ways to manage the grid in the presence of a significant generation that can no longer supply arbitrary power on demand,” Lentine said. “The smart outlet is a small, localized approach to solving that problem.”

Source: Sandia National Laboratories

The Asian Research Network: a Collaboration to Boost Science

Engineerblogger
Feb 29, 2012


Prof. Haiwon Lee: “Giving is better than taking. So I thought to myself, what about giving something to the other people in Asia? I want to give something as long as I have something to give.” Credit: Asian Research Network

Hanyang University of Korea and RIKEN of Japan, along with other Asian research institutes, are launching the Asian Research Network (ARN). Recently ARN members succeeded in producing transparent touch sensors using carbon nanotubes and ink solutions that can print electronic circuits or change colour in exposure to heat or UV radiation.

“I say to people, ‘I’m a small, skinny guy and I have a dream, I want to do something for Asia,’” beams Prof. Haiwon Lee, Director of the Institute of Nanoscience and Technology at Hanyang University in South Korea.

Small as his stature may be, Lee’s wit, enthusiasm and intelligence make up for it in fair measure. He holds more professorships, directorships and editorial posts than there is space to mention here, yet it is immediately clear that this is a man who defines himself not by those titles but by his actions. In particular, it is the Asian Research Network that he speaks of with a passion rare in professors who are comfortably at the top of their game.

In 1989, of his own accord, Lee started yearly trips to Japan—a step made all the more significant by the historical tensions between the two countries. He sought to establish relationships with other researchers and institutes, integrating science in Asia for a better future. It was a slow process. Apart from exchanges on a company or government level it was, and perhaps still is, highly unusual for a South Korean individual to be promoting research, development and educational cooperation across borders.

Step by step, Lee built a performance-based relationship with RIKEN. Nevertheless, it was not until 2003 that an alliance between RIKEN and Hanyang was formally established. The significance was profound: never before had Japan opened its doors to a private research university.

Next Lee sought to obtain funding for a cooperative research laboratory to give tangible structure to the Asian Research Network. In 2008, following grants from the Korean Ministry of Education, Science and Technology, Seoul’s mayor and Samsung Electronics, the Hanyang-RIKEN Collaboration Centre was established. Here researchers from both institutions could work side by side to produce world-class research.

Many would be satisfied with these achievements. For Lee, however, it is just the start: the alliance needs to go across Asia. “The idea is to exchange information and relationships at a high level,” he explains. ARN is starting with tangible goals, initially focusing on the areas of nanoscience and nanotechnology. Lee points to a poster advertising a recent joint Hanyang-RIKEN nanoscience conference. However, as it expands, ARN is to encompass all science and technology and include other Asian partners such as China, India and Singapore.

“Our aim is to build a borderless research environment,” says Lee. He stresses that this is not just for Korea, but also for Asia and ultimately he aims to go global. The reason that Lee has made his dream a reality is due to his insistence on a pragmatic approach. He looks to innovate, change and truly engage rather than go through set patterns and motions.

“In the beginning, I was talking to government people who would always say, ‘Show me the MOU,’” said Lee. A ‘memorandum of understanding’ or ‘MOU’ is a traditional document indicating a multilateral agreement between parties. MOUs are popular across Asia, so Lee took me by surprise when he continued matter-of-factly: “MOUs don’t mean anything – it’s just politics.”

He continued, “It took five years to get people onboard. They always wanted to wait and consider things endlessly, it was very difficult.” If there is one thing that is clear about Lee, it is that he is a man of deeds, not just words, who does not shy away from getting things done.

But why put so much effort into this? I asked. Of course there are huge benefits, but most academics are more concerned with climbing the citation league table (and it is clear that Lee has spent at least a hundred papers’ worth of time establishing ARN!). He looks at me with thoughtful eyes and stares into the distance. “I was born in 1954, right after the Korean War,” he says. “I was one of eight children, there was nothing left of Korea and it was miserable. Our parents sacrificed everything for our education. They did not waste even a single penny. I am not from a rich family, my mother only went to elementary school, but because of their efforts four of us are now professors. They knew how to save material, how to manage, how to change their country. This is the strength and spirit of our parents.”

And the spirit of cooperation is certainly helping the research productivity and output of ARN members. Take for example Choi Eunsuk and colleagues; they recently announced they had made a transparent touch sensor using carbon nanotube thin films (Journal of Nanoscience and Nanotechnology, vol. 11, 2011). These films are optically transparent and electrically conductive in thin layers. The applications are enormous: think of flexible electronic interfaces such as “e-paper”, or television screens that you can roll up.

Similarly, Jong-Man Kim and his team have managed to devise an ink solution that can repeatedly change colour upon exposure to heat or UV radiation. Their results in Advanced Materials (vol. 23, 2011) open the possibility of printing electronic circuits on paper. Being able to integrate such circuitry into lightweight, disposable materials such as paper using simple ‘inkjet’ technology is of great interest to manufacturers.

Prof. Lee meanwhile revels in this spirit of collaboration: “Giving is better than taking. So I thought to myself, what about giving something to the other people in Asia? I want to give something as long as I have something to give.”


Source: Asian Research Network



How to measure solar cell efficiency correctly

Engineerblogger
Feb 29, 2012


Photographs of liquid electrolyte-based dye-sensitised solar cells with different masking configurations, including no mask and the cell set on its side. The active area of ‘None’ is taken to be the area of the screen-printed dye-sensitised TiO2 dot; ‘Mask’ and ‘Mask + Edge’ are taken to be the area of the square mask aperture; ‘Side-on’ is the same as ‘None’.

The significance of new solar cell technologies tends to rest heavily on their measured efficiency. But compounding small mistakes in measuring that efficiency can lead to values up to five times higher than the true reading, says Henry Snaith from the University of Oxford, UK.

Snaith has therefore set out a guide that illustrates the factors that should be taken into consideration when measuring efficiency, and outlines the potential sources of error. It is an attempt to restore confidence in literature claims and make them more easily comparable - both within fields and across different types of cells including dye-sensitised solar cells (DSSCs), organic photovoltaics and hybrid solar cells. The guidance includes how to mask cells to get an accurate measure of the test area; the type of lamps to use and how to calibrate them; and the importance of positioning the cell in exactly the same place as the calibration reference.
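The core calculation is simple, which is exactly why the area term is so easy to get wrong. A minimal sketch with made-up numbers, not values from Snaith's guide:

```python
# Efficiency from a measured maximum power point under a calibrated lamp.
p_in = 100.0   # irradiance, mW/cm^2 (the standard AM1.5G level)
area = 0.16    # illuminated area defined by the mask, cm^2 (assumed)
p_max = 1.2    # measured maximum power output, mW (assumed)

print(f"Efficiency: {p_max / (p_in * area) * 100:.1f}%")  # 7.5%

# Quoting a smaller "active" area than the light actually reaching the
# cell (e.g., ignoring edge illumination) silently inflates the result:
print(f"Overstated: {p_max / (p_in * 0.08) * 100:.1f}%")  # 15.0%
```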

'There's an ongoing stream of papers in which it's not entirely clear exactly how the measurements have been made,' says Snaith. And worse than that, some papers claim values that appear to be grossly overinflated. That has an impact on genuine claims, Snaith explains. 'If, for example, someone claims their hybrid solar cell has an efficiency of 4% when it's really more like 1%, that makes it problematic for someone else to write an exciting paper when they've genuinely improved something to 1.5%.'

However, Snaith is quick to point out that his intention is not to point the finger of blame. 'The field has grown rapidly, so there are a lot of people coming in - without much device experience - who want to be able to make a solar cell and test it to see if their systems have made improvements,' he adds. This influx brings new ideas and approaches, which is definitely to be encouraged. Unfortunately, there are some easy-to-make mistakes that can have drastic effects on measurements. 'There's nothing particularly new or complex in the paper - the idea is to provide a clear protocol for how to get a value that accurately reflects the efficiency of the solar cell, and to point out the common pitfalls that can occur.'

Nicolas Tétreault, who develops DSSCs at the Swiss Federal Institute of Technology in Lausanne (EPFL), agrees that having a single reference point for best practice will be very useful, especially one showing the possibility of such huge variations and illustrating how they relate to what's going on in the cell. 'One of the benefits of showing these extremes is that it shows that the consequence of not doing it correctly can introduce errors that border on cheating!' Tétreault adds that accurate measurements are even more important when trying to claim a new efficiency record. Snaith agrees, although in that case, he says, measurements should be independently certified by one of the national laboratories such as the US National Renewable Energy Laboratory.

Source: Royal Society of Chemistry - Chemistry World



Monday, 27 February 2012

New energy storage device based on water: Solution for increasing energy demand

Engineerblogger
Feb 27, 2012


The “Semiconductor and Energy Conversion” group (pictured left to right): Alberto Battistel (Ph.D. student), Dr. Edyta Madej (postdoc), Dr. Fabio La Mantia (junior group leader), Dr. Jelena Stojadinovic (postdoc), Mu Fan (Ph.D. student)

Global energy demand is still increasing, yet today’s concepts for power generation cannot deliver the amount of electricity that will be needed in the future. Dr. Fabio La Mantia, junior group leader of the “Semiconductor and Energy Conversion” group (Center for Electrochemical Sciences) at Ruhr-Universität Bochum, is working on a solution to the problem. In March he and his team will start a project with the ambition of developing an aqueous lithium-ion battery: an accumulator that works at two volts and costs about a third as much as conventional ones. The Federal Ministry of Education and Research will support the project with 1,424,000 euros over five years.

Renewable energies fall short

Experts predict that worldwide consumption will rise from 13 to 25 terawatts by 2050. Renewable energies can supply only about ten percent of that need, because they are expensive and not always available to the same extent; this applies especially to solar and wind energy. “Fast and economical systems to buffer the electricity are in demand,” explains La Mantia. The idea is to produce batteries suitable for use in the power grid.

Higher performance and lifespan

Conventional lithium-ion batteries are based on organic solvents and are the standard for all portable devices. For use in power supply systems, however, they are too expensive and unsafe: they overheat too quickly, which can cause short circuits. To improve the performance, lifespan, energy density and price-performance ratio, the young scientists are concentrating on the combination of appropriate materials, separators, cells and aqueous electrolytes (liquid conductors of electricity).

Source: Ruhr-University Bochum

Reduction in U.S. carbon emissions attributed to cheaper natural gas

Harvard University
Feb 27, 2012

Changes in carbon dioxide emissions from the power sector in the nine census regions of the contiguous United States, 2008-2009. Image courtesy of Xi Lu.

In 2009, when the United States fell into economic recession, greenhouse gas emissions also fell, by 6.59 percent relative to 2008.

In the power sector, however, the recession was not the main cause.

Researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have shown that the primary explanation for the reduction in CO2 emissions from power generation that year was that a decrease in the price of natural gas reduced the industry's reliance on coal.

According to their econometric model, emissions could be cut further by the introduction of a carbon tax, with negligible impact on the price of electricity for consumers.

A regional analysis, assessing the long-term implications for energy investment and policy, appears in the journal Environmental Science and Technology.

In the United States, the power sector is responsible for 40 percent of all carbon emissions. In 2009, CO2 emissions from power generation dropped by 8.76 percent. The researchers attribute that change to the new abundance of cheap natural gas.

"Generating 1 kilowatt-hour of electricity from coal releases twice as much CO2 to the atmosphere as generating the same amount from natural gas, so a slight shift in the relative prices of coal and natural gas can result in a sharp drop in carbon emissions," explains Michael B. McElroy, Gilbert Butler Professor of Environmental Studies at SEAS, who led the study.

"That's what we saw in 2009," he says, "and we may well see it again."

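The arithmetic behind that fuel-switching effect is straightforward. The sketch below uses round emission factors consistent with the two-to-one ratio McElroy cites; the amount of generation shifted is an arbitrary assumption, not a figure from the study:

```python
ef_coal = 1.0  # kg CO2 per kWh generated from coal (round value)
ef_gas = 0.5   # kg CO2 per kWh from natural gas (half of coal, per the study)

shifted_kwh = 100e9  # 100 TWh of generation shifted from coal to gas (assumed)

saved_tonnes = shifted_kwh * (ef_coal - ef_gas) / 1000
print(f"CO2 avoided: {saved_tonnes / 1e6:.0f} million tonnes")  # 50
```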
Patterns of electricity generation, use, and pricing vary widely across the United States. In parts of the Midwest, for instance, almost half of the available power plants (by capacity) were built to burn coal. Electricity production can only switch over to natural gas to the extent that gas-fired plants are available to meet the demand. By contrast, the Pacific states and New England barely rely on coal, so price differences there might make less of an impact.

To account for the many variables, McElroy and his colleagues at SEAS developed a model that considers nine regions separately.

Mechanism Behind Capacitor’s High-Speed Energy Storage Discovered

Engineerblogger
Feb 27, 2012



Researchers at North Carolina State University have discovered the means by which a polymer known as PVDF enables capacitors to store and release large amounts of energy quickly. Their findings could lead to much more powerful and efficient electric cars.

Capacitors are like batteries in that they store and release energy. However, capacitors use separated electrical charges, rather than chemical reactions, to store energy. The charged particles enable energy to be stored and released very quickly. Imagine an electric vehicle that can accelerate from zero to 60 miles per hour at the same rate as a gasoline-powered sports car. There are no batteries that can power that type of acceleration because they release their energy too slowly. Capacitors, however, could be up to the job – if they contained the right materials.
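The contrast shows up directly in the standard textbook relations (general formulas, not results from the NC State work). A capacitor of capacitance C charged to voltage V stores

\[ E = \tfrac{1}{2}\, C V^2, \]

and the rate at which it can deliver that energy into a matched load is limited by its internal resistance R, roughly

\[ P_{\max} \approx \frac{V^2}{4R}. \]

Because a capacitor's internal resistance is orders of magnitude lower than a battery's effective resistance, it can dump its stored energy in seconds or less; the historical trade-off has been far less energy stored per kilogram, which is what better dielectrics such as PVDF aim to remedy.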

NC State physicist Dr. Vivek Ranjan had previously found that capacitors which contained the polymer polyvinylidene fluoride, or PVDF, in combination with another polymer called CTFE, were able to store up to seven times more energy than those currently in use.

“We knew that this material makes an efficient capacitor, but wanted to understand the mechanism behind its storage capabilities,” Ranjan says.

In research published in Physical Review Letters, Ranjan, fellow NC State physicist Dr. Jerzy Bernholc and Dr. Marco Buongiorno-Nardelli from the University of North Texas, did computer simulations to see how the atomic structure within the polymer changed when an electric field was applied. Applying an electric field to the polymer causes atoms within it to polarize, which enables the capacitor to store and release energy quickly. They found that when an electrical field was applied to the PVDF mixture, the atoms performed a synchronized dance, flipping from a non-polar to a polar state simultaneously, and requiring a very small electrical charge to do so.

“Usually when materials change from a polar to non-polar state it’s a chain reaction – starting in one place and then moving outward,” Ranjan explains. “In terms of creating an efficient capacitor, this type of movement doesn’t work well – it requires a large amount of energy to get the atoms to switch phases, and you don’t get out much more energy than you put into the system.

“In the case of the PVDF mixture, the atoms change their state all at once, which means that you get a large amount of energy out of the system at very little cost in terms of what you need to put into it. Hopefully these findings will bring us even closer to developing capacitors that will give electric vehicles the same acceleration capabilities as gasoline engines.”

Source: North Carolina State University


Graphyne May Be Better than Graphene

Engineerblogger
Feb 27, 2012


Stretched honeycomb. The carbon lattice in this 6,6,12-graphyne has a rectangular symmetry, unlike the hexagonal symmetry of graphene. Credit: APS

Sheets of single-layer carbon with a variety of bonding patterns may have properties similar to the wonder material graphene, according to new computer simulations.

Super-strong, highly conducting graphene is the hottest ticket in physics, but new computer simulations suggest that materials called graphynes could be just as impressive. Graphynes are one-atom-thick sheets of carbon that resemble graphene, except in the type of atomic bonds. Only small pieces of graphyne have so far been fabricated, but the new simulations, described in Physical Review Letters, may inspire fresh efforts to construct larger samples. The authors show that three different graphynes have a graphenelike electronic structure, which results in effectively massless electrons. The unique symmetry in one of these graphynes may potentially lead to new uses in electronic devices, beyond those of graphene.

The single-atom-thick structure of carbon atoms arranged in a honeycomb pattern, known as graphene, was first isolated in a lab in 2004, but many of its remarkable electronic properties were predicted by theorists 60 years before. The most striking aspect of graphene is that its electronic energy levels, or “bands,” produce conduction electrons whose energies are directly proportional to their momentum. This is the energy-momentum relationship exhibited by photons, which are massless particles of light. Electrons and other particles of matter normally have energies that depend on the square of their momentum.

When the bands are plotted in three dimensions, the photonlike energy-momentum relationship appears as an inverted cone, called a Dirac cone. This unusual relationship causes conduction electrons to behave as though they were massless, like photons, so that all of them travel at roughly the same speed (about 0.3 percent of the speed of light). This uniformity leads to a conductivity greater than that of copper.
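Stated as equations (standard condensed-matter results, not findings specific to this paper): near the Dirac point, graphene's conduction electrons obey the linear, photonlike dispersion

\[ E(\mathbf{k}) = \pm\, \hbar v_F |\mathbf{k}|, \qquad v_F \approx 1 \times 10^6\ \mathrm{m/s} \approx 0.003\,c, \]

whereas electrons in an ordinary conductor follow the quadratic relation

\[ E(\mathbf{k}) = \frac{\hbar^2 k^2}{2 m^*} \]

with effective mass m*. The slope v_F plays the same role for these electrons that the speed of light plays for photons.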

Graphynes differ from their carbon cousin graphene in that their 2D framework contains triple bonds in addition to double bonds. These triple bonds open up a potentially limitless array of different geometries beyond the perfect hexagonal lattice of graphene, although only small pieces of graphynes have been synthesized so far. Still, this hasn’t stopped theorists from exploring their properties [1]. Recent work gave an indication that certain graphynes might have Dirac cones [2]. To verify this, Andreas Görling of the University of Erlangen-Nürnberg in Germany and his colleagues have now performed a more rigorous investigation of graphyne using state-of-the-art methods.

The team selected three graphynes to study: two with hexagonal symmetry and a third with rectangular symmetry. The researchers first checked that these graphynes were stable by simulating their vibrations and checking that they returned to their original shape. They then determined the band structure using density-functional theory, the gold standard for dealing with the hopelessly large number of electron-electron interactions inside a material. The simulations showed that all three graphynes had Dirac cones. This was surprising in the case of the rectangular graphyne, Görling says, because most people assumed this sort of electronic structure was tied to hexagonal symmetry. The implication is that many other materials (some containing atoms other than carbon) could have Dirac cones.

On closer examination of the rectangularly symmetric graphyne, the team discovered that the Dirac cones were not perfectly conical. A vertical slice in the direction of the “short side” of the rectangular lattice gave an inverted triangle as would be expected, but in the perpendicular direction, parallel to the “long side,” the cross section was curved, like a triangle bent towards a parabola. This distortion should lead to a conductance that depends on the direction of the current, a property not found in graphene but one that could be exploited in nanoscale electronic devices, Görling says. Another potentially useful property of this graphyne is that it should naturally contain conducting electrons and should not require noncarbon “dopant” atoms to be added as a source of electrons, as is required for graphene.

The big challenge now is to make large graphyne samples. “Organic chemists like myself can synthesize (often with difficulty) complex molecular subunits,” but these small sections of graphyne do not exhibit the expected properties of a large lattice, says Michael Haley of the University of Oregon in Eugene. Andre Geim of the University of Manchester, UK, who was awarded the 2010 Nobel Prize for his experimental work on graphene, says that graphyne is “an extremely interesting material, and this report adds to the excitement.” He only hopes it won’t take 60 years for experimentalists to make the excitement a reality this time.

Source: American Physical Society

Additional Information:
  • [1] R. H. Baughman, H. Eckhardt, and M. Kertesz, “Structure-property predictions for new planar forms of carbon: Layered phases containing sp² and sp atoms,” J. Chem. Phys. 87, 6687 (1987).

Saturday, 25 February 2012

Team’s efficient unmanned aircraft jetting toward commercialization

Engineerblogger
Feb 25, 2012


CU-Boulder Assistant Professor Ryan Starkey, left, with some members of his team, looks over engine model nozzles for a first-of-its-kind supersonic unmanned aerial vehicle, visible in the rendering on the computer screen. From left are Starkey; Sibylle Walter, doctoral degree student; Josh Fromm, master's degree graduate; and Greg Rancourt, master's degree student. (Photo by Glenn Asakawa/University of Colorado)


Propulsion by a novel jet engine is the crux of the innovation behind a University of Colorado Boulder-developed aircraft that’s accelerating toward commercialization.

Jet engine technology can be small, fuel-efficient and cost-effective, at least with Assistant Professor Ryan Starkey’s design. The CU-Boulder aerospace engineer, with a team of students, has developed a first-of-its-kind supersonic unmanned aerial vehicle, or UAV. The UAV, currently at the prototype stage, is expected to fly farther and faster -- using less fuel -- than anything remotely similar to date.

The fuel efficiency of the engine that powers the 50-kilogram UAV is already double that of similar-scale engines, and Starkey says he hopes to double that efficiency again through further engineering.

A rendering, created by master's degree student Greg Rancourt, of the UAV. (Courtesy Ryan Starkey)

Starkey says his UAV could be used for everything from penetrating and analyzing storms to military reconnaissance missions -- both expeditions that can require the long-distance, high-speed travel his UAV will deliver -- without placing human pilots in danger. The UAV also could be used for testing low-sonic-boom supersonic transport aircraft technology, which his team is working toward designing.

The UAV is intended to shape the next generation of flight experimentation, now that post-World War II rocket-powered research aircraft like the legendary North American X-15 have long been retired.

“I believe that what we’re going to do is reinvigorate the testing world, and that’s what we’re pushing to do,” said Starkey. “The group of students who are working on this are very excited because we’re not just creeping into something with incremental change, we’re creeping in with monumental change and trying to shake up the ground.”

Its thrust capacity makes the aircraft capable of reaching Mach 1.4, about 40 percent faster than the speed of sound. Starkey says that regardless of the speed reached by the UAV, the aircraft will break the world record for speed in its weight class.

Its compact airframe is about 5 feet wide and 6 feet long. The aircraft costs between $50,000 and $100,000 -- a relatively small price tag in a field that can advance only through testing, which sometimes means equipment loss.

Starkey’s technology -- three years in the making at CU-Boulder -- is transitioning into a business venture through his weeks-old Starkey Aerospace Corp., called Starcor for short. The company was incubated by eSpace, which is a CU-affiliated nonprofit organization that supports entrepreneurial space companies. Starkey’s UAV already has garnered interest from the U.S. Army, Navy, Defense Advanced Research Projects Agency and NASA. The acclaimed Aviation Week publication also has highlighted Starkey’s UAV.

Starkey says technology transfer is important because it parlays university research into real-life applications that advance societies and contribute to local and global economies.

It also can provide job tracks for undergraduate and graduate students, says Starkey, who’s bringing some of the roughly 50 students involved in UAV development into his budding Starcor.

“There are great students everywhere, but one of the reasons why I came to CU was because of how the students are trained. We definitely make sure they understand everything from circuit board wiring to going into the shop and building something,” said Starkey. “It makes them very effective and powerful even as fresh engineers with bachelor’s degrees. They’re very good students to hire. That’s a piece that I’m interested in embracing -- finding the really good talent that we have right here in Colorado and pulling it into the company.”

Starkey and his students are currently creating a fully integrated and functioning engineering test unit of the UAV, which will be followed by a critical design review after resolving any problems. The building of the aircraft and process of applying for FAA approval to test it in the air will carry into next year.

Starkey’s fascination with speed first began to burn when he visited Kennedy Space Center at the age of 5. “When I teach I tell my class, ‘If it goes fast and gets hot, I’m in it.’ That’s what I want to do. There needs to be fire involved somewhere.”

Source: University of Colorado at Boulder 

Aircraft of the future could capture and re-use some of their own power

Engineerblogger
Feb 25, 2012



Credit: University of Lincoln

Tomorrow's aircraft could contribute to their power needs by harnessing energy from the wheel rotation of their landing gear to generate electricity, according to research by the University of Lincoln.

Planes could use this to power their taxiing to and from airport buildings, reducing the need to use their jet engines. This would save on aviation fuel, cut emissions and reduce noise pollution at airports.

The feasibility of this has been confirmed by a team of engineers from the University of Lincoln with funding from the Engineering and Physical Sciences Research Council (EPSRC).

The energy produced by a plane's braking system during landing – currently wasted as heat produced by friction in the aircraft's disc brakes - would be captured and converted into electricity by motor-generators built into the landing gear. The electricity would then be stored and supplied to the in-hub motors in the wheels of the plane when it needed to taxi.

'Engine-less taxiing' could therefore become a reality. ACARE (the Advisory Council for Aeronautics Research in Europe) has made engine-less taxiing one of the key objectives beyond 2020 for the European aviation industry.

"Taxiing is a highly fuel-inefficient part of any trip by plane with emissions and noise pollution caused by jet engines being a huge issue for airports all over the world," said Professor Paul Stewart, who led the research.
"If the next generation of aircraft that emerges over the next 15 to 20 years could incorporate this kind of technology, it would deliver enormous benefits, especially for people living near airports. Currently, commercial aircraft spend a lot of time on the ground with their noisy jet engines running. In the future this technology could significantly reduce the need to do that."

The University of Lincoln's research formed part of a project that aimed to assess the basic feasibility of as many ways of capturing energy from a landing aircraft as possible.

"When an Airbus 320 lands, for example, a combination of its weight and speed gives it around three megawatts peak available power," Professor Stewart explained. "We explored a wide variety of ways of harnessing that energy, such as generating electricity from the interaction between copper coils embedded in the runway and magnets attached to the underside of the aircraft, and then feeding the power produced into the local electricity grid."

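That three-megawatt figure is easy to sanity-check with basic kinematics. The sketch below uses rough public values for an A320-class aircraft, not numbers from the EPSRC project:

```python
# Back-of-the-envelope check of the power available on landing.
mass = 60000.0    # approximate A320 landing mass, kg (assumed)
v = 70.0          # touchdown speed, m/s (~135 knots, assumed)
rollout_s = 40.0  # braking time during the rollout, s (assumed)

kinetic_j = 0.5 * mass * v**2        # kinetic energy at touchdown
avg_power_w = kinetic_j / rollout_s  # average dissipation while braking

print(f"Kinetic energy: {kinetic_j / 1e6:.0f} MJ")           # ~147 MJ
print(f"Average braking power: {avg_power_w / 1e6:.1f} MW")  # ~3.7 MW
```

Within the roughness of these assumptions, a megawatt-scale result is consistent with the "around three megawatts" Professor Stewart quotes.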
Unfortunately, most of the ideas weren't technically feasible or simply wouldn't be cost-effective. But the study showed that capturing energy direct from a plane's landing gear and recycling it for the aircraft's own use really could work, particularly if integrated with new technologies emerging from current research related to the more-electric or all-electric aircraft.

A number of technical challenges would need to be overcome. For example, weight would be a key issue, so a way of minimising the amount of conductors and electronic power converters used in an on-board energy recovery system would need to be identified.

The project was carried out under the auspices of the EPSRC-funded Airport Energy Technologies Network (AETN), established in 2008 to undertake low-carbon research in the field of aviation, and was undertaken in collaboration with researchers at Loughborough University.

Source: University of Lincoln

Thursday, 23 February 2012

Making droplets drop faster: nanopatterned surfaces could improve the efficiency of powerplants and desalination systems

Engineerblogger
Feb 23, 2012


The condensation of water is crucial to the operation of most of the powerplants that provide our electricity — whether they are fueled by coal, natural gas or nuclear fuel. It is also the key to producing potable water from salty or brackish water. But there are still large gaps in the scientific understanding of exactly how water condenses on the surfaces used to turn steam back into water in a powerplant, or to condense water in an evaporation-based desalination plant.

New research by a team at MIT offers important new insights into how these droplets form, and ways to pattern the collecting surfaces at the nanoscale to encourage droplets to form more rapidly. These insights could enable a new generation of significantly more efficient powerplants and desalination plants, the researchers say.

The new results were published online this month in the journal ACS Nano, a publication of the American Chemical Society, in a paper by MIT mechanical engineering graduate student Nenad Miljkovic, postdoc Ryan Enright and associate professor Evelyn Wang.

Although analysis of condensation mechanisms is an old field, Miljkovic says, it has re-emerged in recent years with the rise of micro- and nanopatterning technologies that shape condensing surfaces to an unprecedented degree. The key property of surfaces that influences droplet-forming behavior is known as “wettability,” which determines whether droplets stand high on a surface like water drops on a hot griddle, or spread out quickly to form a thin film.

It’s a question that’s key to the operation of powerplants, where water is boiled using fossil fuel or the heat of nuclear fission; the resulting steam drives a turbine attached to a dynamo, producing electricity. After exiting the turbine, the steam needs to cool and condense back into liquid water, so it can return to the boiler and begin the process again. (That’s what goes on inside the giant cooling towers seen at powerplants.)

Typically, on a condensing surface, droplets gradually grow larger while adhering to the material through surface tension. Once they get so big that gravity overcomes the surface tension holding them in place, they rain down into a container below. But it turns out there are ways to get them to fall from the surface — and even to “jump” from the surface — at much smaller sizes, long before gravity takes over. That reduces the size of the removed droplets and makes the resulting transfer of heat much more efficient, Miljkovic says.
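The crossover scale is set by the balance between surface tension and gravity, the capillary length; this is a textbook estimate, not a figure from the paper. For water,

\[ \ell_c = \sqrt{\frac{\gamma}{\rho g}} = \sqrt{\frac{0.072\ \mathrm{N/m}}{1000\ \mathrm{kg/m^3} \times 9.8\ \mathrm{m/s^2}}} \approx 2.7\ \mathrm{mm}, \]

so on a plain surface a droplet must grow to roughly millimeter size before gravity can shed it. Surfaces that trigger coalescence-induced jumping remove droplets well below that scale.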

One mechanism is a surface pattern that encourages adjacent droplets to merge together. As they do so, energy is released, which “causes a recoil from the surface, and droplets will actually jump off,” Miljkovic says. That mechanism has been observed before, he notes, but the new work “adds a new chapter to the story. Few researchers have looked at the growth of the droplets prior to the jumping in detail.”

That’s important because even if the jumping effect allows droplets to leave the surface faster than they would otherwise, if their growth lags, you might actually reduce efficiency. In other words, it’s not just the size of the droplet when it gets released that matters, but also how fast it grows to that size.

“This has not been identified before,” Miljkovic says. And in many cases, the team found, “you think you’re getting enhanced heat transfer, but you’re actually getting worse heat transfer.”

In previous research, “heat transfer has not been explicitly measured,” he says, because it’s difficult to measure and the field of condensation with surface patterning is still fairly young. By incorporating measurements of droplet growth rates and heat transfer into their computer models, the MIT team was able to compare a variety of approaches to the surface patterning and find those that actually provided the most efficient transfer of heat.

One approach has been to create a forest of tiny pillars on the surface: Droplets tend to sit on top of the pillars while only locally wetting the surface rather than wetting the whole surface, minimizing the area of contact and facilitating easier release. But the exact sizes, spacing, width-to-height ratios and nanoscale roughness of the pillars can make a big difference in how well they work, the team found.

“We showed that our surfaces improved heat transfer up to 71 percent [compared to flat, non-wetting surfaces currently used only in high-efficiency condenser systems] if you tailor them properly,” Miljkovic says. With more work to explore variations in surface patterns, it should be possible to improve even further, he says.

The enhanced efficiency could also improve the rate of water production in plants that produce drinking water from seawater, or even in proposed new solar-power systems that rely on maximizing evaporator (solar collector) surface area and minimizing condenser (heat exchanger) surface area to increase the overall efficiency of solar-energy collection. A similar system could improve heat removal in computer chips, which is often based on internal evaporation and recondensation of a heat-transfer liquid through a device called a heat pipe.

Chuan-Hua Chen, an assistant professor of mechanical engineering and materials science at Duke University who was not involved in this work, says, “It is intriguing to see the coexistence of both sphere- and balloon-shaped condensate drops on the same structure. … Very little is known at the scales resolved by the environmental electron microscope used in this paper. Such findings will likely influence future research on anti-dew materials and … condensers.”

The next step in the research, underway now, is to build on the findings from the droplet experiments and computer modeling to find even more efficient configurations, and ways of manufacturing them rapidly and inexpensively on an industrial scale, Miljkovic says.

This work was supported as part of the MIT S3TEC Center, an Energy Frontier Research Center funded by the U.S. Department of Energy.

Source: MIT News

SPIDERS microgrid project secures military installations

Engineerblogger
Feb 23, 2012


Bill Waugaman is the SPIDERS operational lead at Sandia National Laboratories. Credit: Randy Montoya

When the lights go out, most of us find flashlights, dig out board games and wait for the power to come back. But that’s not an option for hospitals and military installations, where lives are on the line. Power outages can have disastrous consequences for such critical organizations, and it’s especially unsettling that they rely on the nation’s aging, fragile and fossil-fuel-dependent grid.

A three-phase, $30 million, multi-agency project known as SPIDERS, or the Smart Power Infrastructure Demonstration for Energy Reliability and Security, is focused on lessening those risks by building smarter, more secure and robust microgrids that incorporate renewable energy sources.

Sandia was selected as the lead designer for SPIDERS, the first major project under a Memorandum of Understanding (MOU) signed by the Department of Energy (DOE) and the Department of Defense (DoD) to accelerate joint innovations in clean energy and national energy security. The effort builds on Sandia’s decade of experience with microgrids – localized, closed-circuit grids that both generate and consume power and can run either connected to or independent of the larger utility grid.

The goal for SPIDERS microgrid technology is to provide secure control of on-base generation.

“If there is a disruption to the commercial utility power grid, a secure microgrid can isolate from the grid and provide backup power to ensure continuity of mission-critical loads. The microgrid can allow time for the commercial utility to restore service and coordinate reconnection when service is stabilized,” said Col. Nancy Grandy, oversight executive of the SPIDERS Joint Capability Technology Demonstration (JCTD). “This capability provides much-needed energy security for our vital military missions.”

SPIDERS is addressing the challenge of tying intermittent clean energy sources such as solar and wind to a grid. “People run single diesel generators all the time to support buildings, but they don’t run interconnected diesels with solar, hydrogen fuel cells and so on, as a significant energy source. It’s not completely unheard of, but it’s a real integration challenge,” said Jason Stamp, Sandia’s lead project engineer for SPIDERS.

Currently, when power is disrupted at a military base, individual buildings switch to backup diesel generators, but that approach has several limitations. A generator might fail to start, and when one building’s backup fails, there is no way to route power from another building’s generator. Most generators are oversized for their loads and run at less-than-optimal capacity, consuming excess fuel. Furthermore, safety requirements state that all renewable energy sources on base must disconnect when off-site power is lost.
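The fuel penalty of the oversized-generator approach follows from the roughly linear fuel curve of a diesel genset, which burns a fixed overhead proportional to its rating plus a term proportional to the power actually delivered. A minimal sketch with representative coefficients; the values are illustrative, not measurements from SPIDERS hardware:

```python
# Common linear fuel-curve approximation for a diesel genset:
#   fuel [L/h] ~ a * P_rated + b * P_out
# with representative coefficients a ~ 0.08 and b ~ 0.25 L/h per kW.
A, B = 0.08, 0.25

def fuel_lph(p_rated_kw, p_out_kw):
    """Hourly fuel burn for a genset of a given rating at a given output."""
    return A * p_rated_kw + B * p_out_kw

load = 100.0  # kW of actual building load
print(fuel_lph(500.0, load))  # oversized 500 kW genset at 20% load: 65 L/h
print(fuel_lph(125.0, load))  # right-sized 125 kW genset at 80% load: 35 L/h
```

Pooling loads on a microgrid lets a few generators run near their efficient operating points instead of many running lightly loaded, which is where much of the projected fuel savings comes from.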

A smart, cybersecure microgrid addresses these issues by allowing renewable energy sources to stay connected and run in coordination with diesel generators, which can all be brought online as needed. Such a system would dramatically help the military increase power reliability, lessen its need for diesel fuel and reduce its “carbon bootprint.”

“The military has indicated it wants to be protected against disruptions, to integrate renewable energy sources and to reduce petroleum demand,” Stamp said. “SPIDERS is focused on accomplishing those tasks, and the end result is having better energy delivery for critical mission support, and that is important for every American.”

SPIDERS uses existing, commercially available technologies for implementation, so the individual technologies are not novel. “What’s novel is the system integration of the various technologies, and demonstrating them in an operational field environment. Microgrid concepts are still fairly new, and that’s where Sandia’s microgrid design expertise is coming into play,” said Sandia researcher Bill Waugaman, SPIDERS operational lead.

Connecting diesel generators to buildings is common practice; integrating significant amounts of energy from intermittent clean sources such as solar and wind into that system is not, and that is the challenge Sandia and SPIDERS are working to address.

Such integration requires data to determine the most efficient and effective way to operate, but that can open system vulnerabilities, so cybersecurity is paramount. SPIDERS addresses that issue by incorporating an unprecedented level of cybersecurity into the system from the outset.

“Any perturbation of information flow by an adversary would possibly cause an interruption to electrical service, which can have significant consequences,” Stamp said. “It’s important that if we build a microgrid system that depends explicitly on greater information flow, that it operate as intended: reliably and securely.”

SPIDERS is funded and managed through the DoD’s JCTD, which joins the efforts of other government organizations and companies to rapidly develop, assess and transition needed capabilities to support DoD missions. With the DOE’s support, the SPIDERS transition plan includes civilian facilities.

“The SPIDERS approach has many applications beyond military uses. Our interest in SPIDERS extends to organizations, like hospitals, that are critical to our nation’s functionality, especially in times of emergency,” said Merrill Smith, DOE program manager.

Sandia’s microgrid expertise spans the past decade, beginning when Sandia designed microgrids for the DOE’s Federal Energy Management Program (FEMP) and the DOE’s Office of Electricity Delivery and Energy Reliability (OE). The DOE initially asked Sandia to develop a conceptual design for a microgrid at Fort Carson in Colorado Springs, Colo., and another for Camp H.M. Smith in Hawaii.

After Sandia conducted a feasibility analysis and modeling and simulation work for the two bases, U.S. Pacific Command (USPACOM) and U.S. Northern Command (USNORTHCOM) asked Sandia to prove the concept through field work under a JCTD. The two commands pulled together a team of national labs and defense organizations, and selected Sandia to lead the development of the initial designs for three separate microgrids, each more complex than the last.

The Army Construction Engineering Research Laboratory will use the Sandia designs as a basis for developing contracts with potential system integrators, who will construct the actual microgrids. Other partners in the SPIDERS JCTD include the National Renewable Energy Laboratory for renewable energy and electric vehicle expertise, Pacific Northwest National Laboratory for testing and transition, Oak Ridge National Laboratory to assist with control system development and Idaho National Laboratory for cybersecurity.

The first SPIDERS microgrid will be implemented at Joint Base Pearl Harbor-Hickam in Honolulu, and will take advantage of several existing generation assets, including a 146-kW photovoltaic solar power system and up to 50 kW of wind power. The integrator for the project has been selected, and the final design and construction process is underway.

The second installation, at Fort Carson, is much larger and more complex and will integrate an existing 2 MW of solar power, several large diesel generators and electric vehicles. Large-scale electrical energy storage will also be implemented to ensure microgrid stability and to reduce the effects of PV variability on the system. Camp H.M. Smith, the most ambitious project, will rely on solar and diesel generators to power the entire base, which will be its own self-sufficient 5 MW microgrid when the national grid is unavailable.
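How storage tames PV variability can be sketched in a few lines: the battery serves the difference between a smoothed target and the raw, cloud-chopped solar output, so the rest of the microgrid sees gentle ramps. The control scheme and numbers below are invented for illustration and are not the SPIDERS design:

```python
# Raw PV output: made-up one-minute samples in kW (a passing cloud)
pv_kw = [2000, 1950, 600, 1800, 500, 1700, 1900, 2000]

def moving_average(series, window=3):
    """Trailing moving average; the window grows from 1 at the start."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return out

target = moving_average(pv_kw)
battery_kw = [t - p for t, p in zip(target, pv_kw)]  # + discharge, - charge
for p, t, b in zip(pv_kw, target, battery_kw):
    print(f"PV {p:5.0f} kW -> grid sees {t:6.0f} kW, battery {b:+7.0f} kW")
```

A real controller also respects the battery's energy and power limits, but even this crude filter shows why modest storage can blunt the sharp swings that would otherwise destabilize a small grid.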

Integration and implementation are scheduled through 2014. The goal is to install the circuit-level demonstration at Pearl Harbor-Hickam and Fort Carson next year, with Camp Smith installed in 2013.

Source: Sandia National Laboratories

An Early Start on Innovation: Corporations inspire the next generation of researchers to embrace science and innovation

R&D Magazine
Feb 23, 2012



Girl Scouts in Phoenix work on the Electronic Matching Game, one of 22 Agilent After School kits. Photo: Agilent

In order for technology companies to bring innovative products to market, they need enthusiastic, educated scientists and engineers to drive the process. To inspire the next generation of researchers, some industrial developers are going back to school.

A 2011 Harvard University study found that U.S. students ranked behind 31 other countries in math and science proficiency, and fewer than one in three students are proficient in science after high school.

A teleconference held in October by STEM Connects, a curriculum and career development resource for teachers from Discovery Education, reported that 10 to 15% of students in the U.S. enter college as science, technology, engineering, or mathematics (STEM) majors; in China that figure is 30 to 40%, paving the way for a scientific and technological advantage for that nation.

"I think people need to realize that a lack of students going into STEM fields not only affects the learning curves in schools, but it also affects our global competitiveness and our ability as a nation to innovate," says Jennifer Harper-Taylor, president of the Siemens Foundation, Iselin, N.J. "If we don’t have a smart workforce, we are not going to have sophisticated R&D happening."

To drive more interest in these fields, industrial companies are helping students understand the importance of science and mathematics, and are promoting STEM education and innovation to the next generation.

From school to scientific discovery
"One of the keys to innovation is engaging the future scientists and engineers of our nation," says Tom Buckmaster, president of Honeywell Hometown Solutions, Morris Township, N.J. "The more students that have an interest in science and math means the possibility of more scientists and engineers our society could have, which will expand our nation’s capacity for innovation."

Agilent Technologies Inc., Santa Clara, Calif., promotes hands-on learning to enhance understanding of basic science concepts. The company created the Agilent After School program, a hands-on, experimental science program targeted at children ages 9 to 13. The program has reached 550,000 students globally, and Agilent has invested around $3.5 million in it over the past 10 years.

The program features 22 kits, or projects, ranging from simple experiments for elementary school students to more complex experiments requiring advanced critical thinking and measurement skills for high school students. Projects include creating electronic circuit boards and balloon- or solar-powered cars, learning how to clean up oil spills, and solving a crime scene mystery. At sessions held at universities and other local facilities, Agilent employees teach the students the basics of their projects and what they are creating, giving them knowledge they can take back to their classrooms.

"Students really love the hands-on aspect of the projects, and in turn love leaving with what they built," says Terry Lincoln, Agilent Technologies’ global signature programs manager. "They also love the engagement between themselves and the employee running the program and talking about their project, making them want to take their projects outside of the program and into the classroom."

Science hits the road
Morris Township, N.J.-based Honeywell International has partnered with NASA to create FMA Live!, a program that explains Sir Isaac Newton's laws of motion in an exciting and entertaining way. The MTV-style interactive traveling show teaches basic science concepts and engages future engineers and scientists in the seventh to ninth grades.

FMA Live! features high-energy actors, music, videos, and demonstrations to teach Newton's laws of motion and the process of scientific inquiry. During each performance, students, teachers, and school administrators interact with three professional actors on stage in front of a live audience.

The actors use a large Velcro wall to demonstrate inertia when a student jumps off a springboard and is immediately stuck to the wall. Go-carts race across the stage to illustrate action and reaction. Extreme wrestling and a giant soccer ball show how force equals mass multiplied by acceleration. All three laws are shown simultaneously when a participant—usually a teacher or administrator—rides a futuristic hover chair and collides face first with a gigantic cream pie, exciting the students and providing lasting and memorable lessons, says Buckmaster.
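The show's namesake formula takes one line of code. Here it is with hypothetical numbers for the giant soccer ball demo (the mass and acceleration are invented for illustration):

```python
def force_newtons(mass_kg, accel_mps2):
    # Newton's second law: F = m * a
    return mass_kg * accel_mps2

# An assumed 5 kg ball pushed to 20 m/s^2 takes a 100 N shove:
print(force_newtons(5.0, 20.0))  # 100.0
```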

Smaller antennas for smaller wireless devices and still smaller micro-air vehicles

Engineerblogger
Feb 23, 2012




Supported by a Presidential Early Career Award for Scientists and Engineers through the Air Force Office of Scientific Research, Dr. Anthony Grbic utilizes an innovative fabrication process to produce small, efficient antennas.

Just when you thought our handheld electronic devices could not get any smaller or more efficient, along comes Dr. Anthony Grbic and his research team from the Department of Electrical Engineering and Computer Science at the University of Michigan, with an antenna the size of a quarter.

You may ask: why is this significant? Dr. Grbic and his colleague Dr. Stephen Forrest point out that in most cases the size of the antenna within a wireless device is actually the limiting factor in the minimum achievable size of the device itself. As such, manufacturers must "build up" the device to the required antenna size. Dr. Grbic's team gives manufacturers a way either to "build down" to a much smaller device, or to use the room freed by a smaller antenna for additional capabilities and built-in options.

The key to the new design is the antenna's hemispherical shape, which takes advantage of volume—just imagine the top half of a sphere with a spiral antenna winding down to the base—for instant miniaturization. Dr. Grbic notes that the hemispherical antenna concept has been around for several years, but there was no practical way to mass-produce the spiral antenna pattern. The Grbic and Forrest teams overcame this obstacle with a simple metallic stamping process that is quick, efficient and potentially inexpensive, and the resulting antennas maintain the same bandwidth as their larger counterparts.
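Why filling a volume matters can be seen from the Chu limit, the classical lower bound on the radiation Q (and hence upper bound on bandwidth) of any antenna that fits inside a sphere of radius a. Below is a minimal sketch using McLean's form of the bound; the quarter-coin radius and the 2.4 GHz operating frequency are assumptions for illustration, not the Michigan team's published figures:

```python
import math

def chu_min_q(freq_hz, radius_m):
    """McLean's form of the Chu lower bound on radiation Q for a
    single-polarized antenna enclosed by a sphere of the given radius."""
    ka = 2.0 * math.pi * freq_hz / 3.0e8 * radius_m  # electrical size
    return 1.0 / ka**3 + 1.0 / ka

# Quarter-coin-scale hemisphere (assumed radius ~12 mm) at 2.4 GHz:
q = chu_min_q(2.4e9, 0.012)
print(f"minimum Q ~ {q:.1f}, fractional bandwidth at best ~ {1.0 / q:.0%}")
```

An antenna that actually uses the enclosing sphere's volume, as the hemispherical spiral does, can approach this bound, whereas a flat printed antenna of the same footprint sits far from it; that is the sense in which the design miniaturizes without sacrificing bandwidth.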

Currently this antenna design operates in only one frequency band, so the next step is to make it operate in multiple frequency bands for use in multiple applications. Talks are also underway with Bluetooth and WiFi communications manufacturers to use the new technology. Of particular interest to the Air Force is integrating these small, highly efficient antennas on autonomous micro-air vehicles; taking the process one step further, the technique could be applied to manufacturing conformal antennas integrated onto the surface of an air vehicle, conforming to its low-profile stealth design.


Source: Air Force Office of Scientific Research

Value Stream Analysis Improves Processes, Saves Money

Engineerblogger
Feb 23, 2012


An example Pareto chart from a value stream analysis shows the potential benefit of implementing VSA findings. (AFRL Graphic)


Engineers from the Air Force Research Laboratory have stimulated industrial base investments in infrastructure and technology by leveraging the value stream analysis (VSA) process to identify significant process improvement opportunities.

As a result, General Electric Aviation, Pratt & Whitney and Rolls-Royce, together with some of their suppliers, invested in process improvements expected to produce $34 million in cost avoidance for current and future products. Because many of the manufacturing technologies are applicable to advanced turbine engine performance improvements, there is potential for an additional $126 million in cost avoidance on current projects.

For the last five years, AFRL's Manufacturing Technology Division (AFRL/RXM), in cooperation with General Dynamics Information Technology and TechSolve, Inc., has been conducting VSAs within the advanced turbine engine industrial base. Each VSA generated a list of potential process improvements, projected their costs, and assessed the risks associated with achieving the anticipated benefits.
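The Pareto chart in the graphic above is the natural way to present such a list: rank the candidate improvements by projected savings and track the cumulative share of the total. A minimal sketch with invented line items follows; they are not the actual VSA findings, though they sum to the $34 million cited above:

```python
# Hypothetical VSA line items: projected savings in $M (invented data)
items = {
    "CMC machining": 12.0,
    "3D airfoil inspection": 8.0,
    "Tooling changeover": 6.0,
    "Scrap reduction": 5.0,
    "Other": 3.0,
}
total = sum(items.values())
cumulative = 0.0
# Sort descending by savings, then report each item's cumulative share:
for name, savings in sorted(items.items(), key=lambda kv: -kv[1]):
    cumulative += savings
    print(f"{name:22s} ${savings:4.1f}M  cumulative {cumulative / total:5.1%}")
```

A typical reading of such a chart: the top two or three opportunities capture most of the benefit, which is what lets AFRL and its partners focus investment where the return is highest.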

AFRL/RXM used the data from these VSAs to develop successful ManTech programs, including one for the advanced machining of ceramic matrix composites (CMCs). That program increased material removal rates and reduced cutting-tool costs by two orders of magnitude. Additionally, a 3D airfoil inspection process cut the dimensional inspection of complex shapes from 60 minutes to 3 minutes, a 20-fold speedup.

Industry has used this data to pursue lower risk process improvements. These process improvements have been implemented and are anticipated to yield benefits of $27 million. Process improvements that have been partially implemented through industry investment are anticipated to yield an additional $7 million. When implemented, processes that are still maturing could provide an additional $126 million in benefits. For industry, the return on investment is about 15 to 1 and it is even greater for the Air Force, at 28 to 1.

Source: Air Force Office of Scientific Research

Materials: Graphene Is Thinnest Known Anti-Corrosion Coating

Engineerblogger
Feb 23, 2012




New research has established the "miracle material" called graphene as the world's thinnest known coating for protecting metals against corrosion. The study on this potential new use of graphene appears in ACS Nano.

In the study, Dhiraj Prasai and colleagues point out that rusting and other corrosion of metals is a serious global problem, and intense efforts are underway to find new ways to slow or prevent it. Corrosion results from contact of the metal's surface with air, water or other substances. One major approach involves coating metals with materials that shield the surface, but currently used materials have limitations. The scientists decided to evaluate graphene as a new coating. Graphene is a single layer of carbon atoms; many such layers stacked together make up the graphite in lead pencils and charcoal. It is the thinnest, strongest known material, which is why it is called the miracle material. Its carbon atoms are arranged like a chicken-wire fence in a layer so thin that it is transparent, and an ounce would cover 28 football fields.

They found that graphene, whether made directly on copper or nickel or transferred onto another metal, provides protection against corrosion. Copper coated with a single layer of graphene grown by chemical vapor deposition (CVD) corroded seven times slower than bare copper, and nickel coated with multiple graphene layers corroded 20 times slower than bare nickel. Remarkably, a single layer of graphene provides the same corrosion protection as conventional organic coatings more than five times thicker. Graphene coatings could be ideal corrosion-inhibiting coatings wherever a thin coating is favorable, such as in microelectronic components (e.g., interconnects), aircraft components and implantable devices, the scientists say.

The researchers acknowledge funding from the National Science Foundation.


Source: American Chemical Society (ACS)



Ferroelectric Nanotubes: “Soft Template Infiltration” Technique Fabricates Free-Standing Piezoelectric Nanostructures from PZT Material

Georgia Tech 
Feb 23, 2012
Composite scanning electron microscope (SEM) image of PZT nanotube arrays and their piezoelectric response as measured by band-excitation piezoresponse force microscopy (BE-PFM). Image courtesy of Ashley Bernal and Nazanin Bassiri-Gharb

Researchers have developed a “soft template infiltration” technique for fabricating free-standing piezoelectrically active ferroelectric nanotubes and other nanostructures from PZT – a material that is attractive because of its large piezoelectric response. Developed at the Georgia Institute of Technology, the technique allows fabrication of ferroelectric nanostructures with user-defined shapes, location and pattern variation across the same substrate.

The resulting structures, which are 100 to 200 nanometers in outer diameter with thickness ranging from 5 to 25 nanometers, show a piezoelectric response comparable to that of PZT thin films of much larger dimensions. The technique could ultimately lead to production of actively-tunable photonic and phononic crystals, terahertz emitters, energy harvesters, micromotors, micropumps and nanoelectromechanical sensors, actuators and transducers – all made from the PZT material.

Using a novel characterization technique developed at Oak Ridge National Laboratory, the researchers for the first time made high-accuracy in-situ measurements of the nanoscale piezoelectric properties of the structures.

“We are using a new nano-manufacturing method for creating three-dimensional nanostructures with high aspect ratios in ferroelectric materials that have attractive piezoelectric properties,” said Nazanin Bassiri-Gharb, an assistant professor in Georgia Tech’s Woodruff School of Mechanical Engineering. “We also leveraged a new characterization method available through Oak Ridge to study the piezoelectric response of these nanostructures on the substrate where they were produced.”

The research was published online on Jan. 26, 2012, and is scheduled for publication in the print edition (Vol. 24, Issue 9) of the journal Advanced Materials. The research was supported by Georgia Tech new faculty startup funds.

Ferroelectric materials at the nanometer scale are promising for a wide range of applications, but processing them into useful devices has proven challenging – despite success at producing such devices at the micrometer scale. Top-down manufacturing techniques, such as focused ion beam milling, allow accurate definition of devices at the nanometer scale, but the process can induce surface damage that degrades the ferroelectric and piezoelectric properties that make the material interesting.

Until now, bottom-up fabrication techniques have been unable to produce structures with both high aspect ratios and precise control over location. The technique reported by the Georgia Tech researchers allows production of nanotubes made from PZT (Pb(Zr0.52Ti0.48)O3) with aspect ratios of up to 5 to 1.