Erik Schlangen: A "self-healing" asphalt
Engineerblogger
Feb 18, 2013
Paved roads are nice to look at, but they’re easily damaged and costly to repair. Erik Schlangen demos a new type of porous asphalt made of simple materials with an astonishing feature: when cracked, it can be “healed” by induction heating.
Source: TED
Additional Information:
The Engineering Economist
Miguel Nicolelis: A monkey that controls a robot with its thoughts. No, really.
Engineerblogger
Feb 18, 2013
Can we use our brains to directly control machines -- without requiring a body as the middleman? Miguel Nicolelis talks through an astonishing experiment, in which a clever monkey in the US learns to control a monkey avatar, and then a robot arm in Japan, purely with its thoughts. The research has big implications for quadriplegic people -- and maybe for all of us.
Source: TED
Additional Information:
Friday, 15 February 2013
New carbon films improve prospects of solar energy devices
Engineerblogger
Feb 15, 2013
New research by Yale University scientists helps pave the way for the next generation of solar cells, a renewable energy technology that directly converts solar energy into electricity.
In a pair of recent papers, Yale engineers report a novel and cost-effective way to improve the efficiency of crystalline silicon solar cells through the application of thin, smooth carbon nanotube films. These films could be used to produce hybrid carbon/silicon solar cells with far greater power-conversion efficiency than reported in this system to date.
“Our approach bridges the cost-effectiveness and excellent electrical and optical properties of novel nanomaterials with well-established, high efficiency silicon solar cell technologies,” said André D. Taylor, assistant professor of chemical and environmental engineering at Yale and a principal investigator of the research.
The researchers reported their work in two papers published in December, one in the journal Energy and Environmental Science and one in Nano Letters (Record High Efficiency Single-Walled Carbon Nanotube/Silicon p–n Junction Solar Cells). Mark A. Reed, a professor of electrical engineering and applied physics at Yale, is also a principal investigator.
Silicon, an abundant element, is an ideal material for solar cells because its optical properties make it an intrinsically efficient energy converter. But the high cost of processing single-crystalline silicon at necessarily high temperatures has hindered widespread commercialization.
Organic solar cells — an existing alternative to high-cost crystalline silicon solar cells — allow for simpler, room-temperature processing and lower costs, researchers said, but they have low power-conversion efficiency.
Instead of using only organic substitutes, the Yale team applied thin, smooth carbon nanotube films with superior conductance and optical properties to the surface of single crystalline silicon to create a hybrid solar cell architecture. To do it, they developed a method called superacid sliding.
As reported in the papers, the approach allows them to take advantage of the desirable photovoltaic properties of single-crystalline silicon through a simpler, low-temperature, lower-cost process. It allows for both high light absorption and high electrical conductivity.
“This is striking, as it suggests that the superior photovoltaic properties of single-crystalline silicon can be realized by a simple, low-temperature process,” said Xiaokai Li, a doctoral student in Taylor’s lab and a lead author on both papers. “The secret lies in the arrangement and assembly of these carbon nanotube thin films.”
In previous work, Yale scientists successfully developed a carbon nanotube composite thin film that could be used in fuel cells and lithium-ion batteries. The recent research suggests how to extend the film’s application to solar cells by optimizing its smoothness and durability.
“Optimizing this interface could also serve as a platform for many next-generation solar cell devices, including carbon nanotube/polymer, carbon/polymer, and all carbon solar cells,” said Yeonwoong (Eric) Jung, a postdoctoral researcher in Reed’s lab and also a lead author of the papers.
All authors are listed on the papers (links above).
The National Science Foundation, NASA, the U.S. Department of Energy, and the Yale Institute for Nanoscience and Quantum Engineering provided support for the research.
Source: Yale University
Additional Information:
Forget about leprechauns, engineers are catching rainbows
Engineerblogger
Feb 15, 2013
University at Buffalo engineers have created a more efficient way to catch rainbows, an advancement in photonics that could lead to technological breakthroughs in solar energy, stealth technology and other areas of research.
Qiaoqiang Gan, PhD, an assistant professor of electrical engineering at UB, and a team of graduate students described their work in a paper called “Rainbow Trapping in Hyperbolic Metamaterial Waveguide,” published in the online journal Scientific Reports.
They developed a “hyperbolic metamaterial waveguide,” which is essentially an advanced microchip made of alternate ultra-thin films of metal and semiconductors and/or insulators. The waveguide halts and ultimately absorbs each frequency of light, at slightly different places in a vertical direction (see the above figure), to catch a “rainbow” of wavelengths.
Gan is a researcher within UB’s new Center of Excellence in Materials Informatics.
“Electromagnetic absorbers have been studied for many years, especially for military radar systems,” Gan said. “Right now, researchers are developing compact light absorbers based on optically thick semiconductors or carbon nanotubes. However, it is still challenging to realize the perfect absorber in ultra-thin films with tunable absorption band.
“We are developing ultra-thin films that will slow the light and therefore allow much more efficient absorption, which will address the long existing challenge.”
Light is made of photons that, because they move extremely fast (i.e., at the speed of light), are difficult to tame. In their initial attempts to slow light, researchers relied upon cryogenic gases. But because cryogenic gases are very cold – roughly 240 degrees below zero Fahrenheit – they are difficult to work with outside a laboratory.
Before joining UB, Gan helped pioneer a way to slow light without cryogenic gases. He and other researchers at Lehigh University made nano-scale-sized grooves in metallic surfaces at different depths, a process that altered the optical properties of the metal. While the grooves worked, they had limitations. For example, the energy of the incident light cannot be transferred onto the metal surface efficiently, which hampered its use for practical applications, Gan said.
The hyperbolic metamaterial waveguide solves that problem because it is a large-area patterned film that collects incident light efficiently. It is an artificial medium with subwavelength features whose isofrequency surface is a hyperboloid, which allows it to capture a wide range of wavelengths across different bands, including visible, near-infrared, mid-infrared, terahertz and microwave.
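For readers who want the underlying picture: the "hyperboloid" description comes from the standard effective-medium dispersion relation for a uniaxial layered structure (this is textbook background, not a formula from the paper). With in-plane permittivity ε_xx and out-of-plane permittivity ε_zz, extraordinary waves satisfy

\[
\frac{k_x^2 + k_y^2}{\varepsilon_{zz}} + \frac{k_z^2}{\varepsilon_{xx}} = \frac{\omega^2}{c^2}
\]

When ε_xx and ε_zz have opposite signs, as in alternating metal/dielectric films, this isofrequency surface becomes a hyperboloid instead of the usual ellipsoid, which is what lets the structure support propagating modes over an unusually broad range of wavevectors and wavelengths.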
It could lead to advancements in an array of fields.
For example, in electronics there is a phenomenon known as crosstalk, in which a signal transmitted on one circuit or channel creates an undesired effect in another circuit or channel. The on-chip absorber could potentially prevent this.
The on-chip absorber may also be applied to solar panels and other energy-harvesting devices. It could be especially useful in the mid-infrared spectral region as a thermal absorber for devices that recycle heat after sundown, Gan said.
Technology such as the Stealth bomber involves materials that make planes, ships and other devices invisible to radar, infrared, sonar and other detection methods. Because the on-chip absorber has the potential to absorb different wavelengths at a multitude of frequencies, it could be useful as a stealth coating material.
Additional authors of the paper include Haifeng Hu, Dengxin Ji, Xie Zeng and Kai Liu, all PhD candidates in UB’s Department of Electrical Engineering. The work was sponsored by the National Science Foundation and UB’s electrical engineering department.
Source: University at Buffalo
Additional Information:
Related Article:
Tuesday, 12 February 2013
Stem cell breakthrough could lead to new bone repair therapies on nanoscale surfaces
Engineerblogger
Feb 12, 2013
Scientists at the University of Southampton have created a new method to generate bone cells which could lead to revolutionary bone repair therapies for people with bone fractures or those who need hip replacement surgery due to osteoporosis and osteoarthritis.
The research, carried out by Dr Emmajayne Kingham at the University of Southampton in collaboration with the University of Glasgow and published in the journal Small, cultured human embryonic stem cells on to the surface of plastic materials and assessed their ability to change.
Scientists were able to use the nanotopographical patterns on the biomedical plastic to manipulate human embryonic stem cells towards bone cells. This was done without any chemical enhancement.
The materials include the biomedically implantable polycarbonate plastic, a versatile material used in everything from bulletproof windows to CDs. They offer an accessible and cheaper way of culturing human embryonic stem cells and present new opportunities for future medical research in this area.
Professor Richard Oreffo, who led the University of Southampton team, explains: “To generate bone cells for regenerative medicine and further medical research remains a significant challenge. However we have found that by harnessing surface technologies that allow the generation and ultimately scale up of human embryonic stem cells to skeletal cells, we can aid the tissue engineering process. This is very exciting.
“Our research may offer a whole new approach to skeletal regenerative medicine. The use of nanotopographical patterns could enable new cell culture designs, new device designs, and could herald the development of new bone repair therapies as well as further human stem cell research,” Professor Oreffo adds.
The study was funded by the Biotechnology and Biological Sciences Research Council (BBSRC).
This latest discovery expands on the close collaborative work previously undertaken by the University of Southampton and the University of Glasgow. In 2011 the team successfully used plastic with embossed nanopatterns to grow and spread adult stem cells while keeping their stem cell characteristics; a process which is cheaper and easier to manufacture than previous ways of working.
Dr Nikolaj Gadegaard, Institute of Molecular, Cell and Systems Biology at the University of Glasgow, says: "Our previous collaborative research showed exciting new ways to control mesenchymal stem cell – stem cells from the bone marrow of adults – growth and differentiation on nanoscale patterns.
“This new Southampton-led discovery shows a totally different stem cell source, embryonic, also respond in a similar manner and this really starts to open this new field of discovery up. With more research impetus, it gives us the hope that we can go on to target a wider variety of degenerative conditions than we originally aspired to. This result is of fundamental significance."
Source: University of Southampton
Monday, 11 February 2013
Humans and robots work better together following cross-training
Engineerblogger
Feb 11, 2013
Spending a day in someone else’s shoes can help us to learn what makes them tick. Now the same approach is being used to develop a better understanding between humans and robots, to enable them to work together as a team.
Robots are increasingly being used in the manufacturing industry to perform tasks that bring them into closer contact with humans. But while a great deal of work is being done to ensure robots and humans can operate safely side-by-side, more effort is needed to make robots smart enough to work effectively with people, says Julie Shah, an assistant professor of aeronautics and astronautics at MIT and head of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“People aren’t robots, they don’t do things the same way every single time,” Shah says. “And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people.”
Most existing research into making robots better team players is based on the concept of interactive reward, in which a human trainer gives a positive or negative response each time a robot performs a task.
However, human studies carried out by the military have shown that simply telling people they have done well or badly at a task is a very inefficient method of encouraging them to work well as a team.
So Shah and PhD student Stefanos Nikolaidis began to investigate whether techniques that have been shown to work well in training people could also be applied to mixed teams of humans and robots. One such technique, known as cross-training, sees team members swap roles with each other on given days. “This allows people to form a better idea of how their role affects their partner and how their partner’s role affects them,” Shah says.
In a paper to be presented at the International Conference on Human-Robot Interaction in Tokyo in March, Shah and Nikolaidis will present the results of experiments they carried out with a mixed group of humans and robots, demonstrating that cross-training is an extremely effective team-building tool.
To allow robots to take part in the cross-training experiments, the pair first had to design a new algorithm to allow the devices to learn from their role-swapping experiences. So they modified existing reinforcement-learning algorithms to allow the robots to take in not only information from positive and negative rewards, but also information gained through demonstration. In this way, by watching their human counterparts switch roles to carry out their work, the robots were able to learn how the humans wanted them to perform the same task.
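The combination of reward-driven updates and learning from demonstration can be sketched roughly as follows. This is a hypothetical illustration of the general idea, not the authors' actual algorithm; the state names, action names, and parameters are all invented:

```python
from collections import defaultdict

# Sketch: tabular Q-learning driven by interactive rewards, plus an extra
# update that learns from (state, action) pairs the human demonstrates
# while performing the robot's role during cross-training.

ALPHA, GAMMA, DEMO_BONUS = 0.5, 0.9, 1.0
Q = defaultdict(float)  # Q[(state, action)] -> value estimate

def update_from_reward(state, action, reward, next_state, actions):
    """Classic interactive-reward update: the human scores the robot's action."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def update_from_demonstration(state, action):
    """Cross-training update: nudge Q toward the action the human demonstrated."""
    Q[(state, action)] += ALPHA * (DEMO_BONUS - Q[(state, action)])

# Usage: the robot watches the human choose "fetch_part" in state "bin_empty"
update_from_demonstration("bin_empty", "fetch_part")
```

The design point is that demonstration gives the robot dense information about *which* action the human prefers in each state, while reward alone only tells it whether its last action was good or bad.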
Each human-robot team then carried out a simulated task in a virtual environment, with half of the teams using the conventional interactive reward approach, and half using the cross-training technique of switching roles halfway through the session. Once the teams had completed this virtual training session, they were asked to carry out the task in the real world, but this time sticking to their own designated roles.
Shah and Nikolaidis found that the period in which human and robot were working at the same time — known as concurrent motion — increased by 71 percent in teams that had taken part in cross-training, compared to the interactive reward teams. They also found that the amount of time the humans spent doing nothing — while waiting for the robot to complete a stage of the task, for example — decreased by 41 percent.
What’s more, when the pair studied the robots themselves, they found that the learning algorithms recorded a much lower level of uncertainty about what their human teammate was likely to do next — a measure known as the entropy level — if they had been through cross-training.
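The entropy measure mentioned above can be made concrete with a short sketch. This assumes the robot maintains a probability distribution over its teammate's likely next actions; the distributions below are invented for illustration:

```python
import math

def policy_entropy(probs):
    """Shannon entropy (in bits) of a distribution over a teammate's next actions.
    Lower entropy means the robot is more certain about what the human will do."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical predicted distributions over four possible human actions:
before = [0.25, 0.25, 0.25, 0.25]  # maximally uncertain: 2.0 bits exactly
after = [0.85, 0.05, 0.05, 0.05]   # after cross-training, one action dominates

print(policy_entropy(before), policy_entropy(after))
```

A drop in this quantity after cross-training is what the researchers interpreted as the robot having learned a more predictable model of its human teammate.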
Finally, when responding to a questionnaire after the experiment, human participants in cross-training were far more likely to say the robot had carried out the task according to their preferences than those in the reward-only group, and reported greater levels of trust in their robotic teammate. “This is the first evidence that human-robot teamwork is improved when a human and robot train together by switching roles, in a manner similar to effective human team training practices,” Nikolaidis says.
Shah believes this improvement in team performance could be due to the greater involvement of both parties in the cross-training process. “When the person trains the robot through reward it is one-way: The person says ‘good robot’ or the person says ‘bad robot,’ and it’s a very one-way passage of information,” Shah says. “But when you switch roles the person is better able to adapt to the robot’s capabilities and learn what it is likely to do, and so we think that it is adaptation on the person’s side that results in a better team performance.”
The work shows that strategies that are successful in improving interaction among humans can often do the same for humans and robots, says Kerstin Dautenhahn, a professor of artificial intelligence at the University of Hertfordshire in the U.K. “People easily attribute human characteristics to a robot and treat it socially, so it is not entirely surprising that this transfer from the human-human domain to the human-robot domain not only made the teamwork more efficient, but also enhanced the experience for the participants, in terms of trusting the robot,” Dautenhahn says.
Source: MIT
Additional Information:
Feb 11, 2013
Julie Shah, assistant professor of aeronautics and astronautics and head of the Interactive Robotics Group at MIT |
Spending a day in someone else’s shoes can help us to learn what makes them tick. Now the same approach is being used to develop a better understanding between humans and robots, to enable them to work together as a team.
Robots are increasingly being used in the manufacturing industry to perform tasks that bring them into closer contact with humans. But while a great deal of work is being done to ensure robots and humans can operate safely side-by-side, more effort is needed to make robots smart enough to work effectively with people, says Julie Shah, an assistant professor of aeronautics and astronautics at MIT and head of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“People aren’t robots, they don’t do things the same way every single time,” Shah says. “And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people.”
Most existing research into making robots better team players is based on the concept of interactive reward, in which a human trainer gives a positive or negative response each time a robot performs a task.
However, human studies carried out by the military have shown that simply telling people they have done well or badly at a task is a very inefficient method of encouraging them to work well as a team.
So Shah and PhD student Stefanos Nikolaidis began to investigate whether techniques that have been shown to work well in training people could also be applied to mixed teams of humans and robots. One such technique, known as cross-training, sees team members swap roles with each other on given days. “This allows people to form a better idea of how their role affects their partner and how their partner’s role affects them,” Shah says.
In a paper to be presented at the International Conference on Human-Robot Interaction in Tokyo in March, Shah and Nikolaidis will present the results of experiments they carried out with a mixed group of humans and robots, demonstrating that cross-training is an extremely effective team-building tool.
To allow robots to take part in the cross-training experiments, the pair first had to design a new algorithm to allow the devices to learn from their role-swapping experiences. So they modified existing reinforcement-learning algorithms to allow the robots to take in not only information from positive and negative rewards, but also information gained through demonstration. In this way, by watching their human counterparts switch roles to carry out their work, the robots were able to learn how the humans wanted them to perform the same task.
Each human-robot team then carried out a simulated task in a virtual environment, with half of the teams using the conventional interactive reward approach, and half using the cross-training technique of switching roles halfway through the session. Once the teams had completed this virtual training session, they were asked to carry out the task in the real world, but this time sticking to their own designated roles.
Shah and Nikolaidis found that the period in which human and robot were working at the same time — known as concurrent motion — increased by 71 percent in teams that had taken part in cross-training, compared to the interactive reward teams. They also found that the amount of time the humans spent doing nothing — while waiting for the robot to complete a stage of the task, for example — decreased by 41 percent.
What’s more, when the pair studied the robots themselves, they found that the learning algorithms recorded a much lower level of uncertainty about what their human teammate was likely to do next — a measure known as the entropy level — if they had been through cross-training.
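The entropy measure mentioned above can be made concrete with a short example. Given the robot's predicted probability distribution over the human's next action, Shannon entropy quantifies its uncertainty; the two distributions below are invented for illustration.

```python
import math

def entropy(probs):
    """Shannon entropy in bits; lower means less uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

before_training = [0.25, 0.25, 0.25, 0.25]  # robot has no idea what comes next
after_training = [0.85, 0.05, 0.05, 0.05]   # robot is fairly sure

print(entropy(before_training))  # 2.0 bits
print(entropy(after_training))   # roughly 0.85 bits
```

A drop like this is what the researchers observed after cross-training: the robot's model of its teammate became sharply more confident.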
Finally, when responding to a questionnaire after the experiment, human participants in cross-training were far more likely to say the robot had carried out the task according to their preferences than those in the reward-only group, and reported greater levels of trust in their robotic teammate. “This is the first evidence that human-robot teamwork is improved when a human and robot train together by switching roles, in a manner similar to effective human team training practices,” Nikolaidis says.
Shah believes this improvement in team performance could be due to the greater involvement of both parties in the cross-training process. “When the person trains the robot through reward it is one-way: The person says ‘good robot’ or the person says ‘bad robot,’ and it’s a very one-way passage of information,” Shah says. “But when you switch roles the person is better able to adapt to the robot’s capabilities and learn what it is likely to do, and so we think that it is adaptation on the person’s side that results in a better team performance.”
The work shows that strategies that are successful in improving interaction among humans can often do the same for humans and robots, says Kerstin Dautenhahn, a professor of artificial intelligence at the University of Hertfordshire in the U.K. “People easily attribute human characteristics to a robot and treat it socially, so it is not entirely surprising that this transfer from the human-human domain to the human-robot domain not only made the teamwork more efficient, but also enhanced the experience for the participants, in terms of trusting the robot,” Dautenhahn says.
Source: MIT
Additional Information:
Quality control at the point of a finger
Engineerblogger
Feb 11, 2013
For production operations, quality assurance over the process chain is indispensable: it is the only way to detect problems at an early stage and avoid additional costs. Fraunhofer researchers have developed an efficient form of quality control: with a pointing gesture, employees can enter any defects detected on car body parts into the inspection system and document them there. The non-contact gesture-detection process will be on display at the 2013 Hannover Messe from 8 to 12 April.
With utter meticulousness, the quality control inspector examines a car bumper for defects in the paint work – ultimately, only impeccable body parts get sent to final assembly. If he finds a defect in the paint, a point of the finger is all it takes to send the defect to the QS inspection system, store it and document it. The employee obtains visual feedback through a monitor that displays a 3D reconstruction of the bumper. At first glance, it might seem completely futuristic, though soon enough, it could become an everyday part of quality assurance: researchers at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB in Karlsruhe engineered the intelligent gesture control system on behalf of the BMW Group. In the future, it should supersede today’s time-consuming test procedures. “Previously, the inspector had to note all defects that were detected, leave his workstation, go to the PC terminal, operate multiple input screens and then label the position of the defect and the defect type. That approach is laborious, time-intensive and prone to error,” says Alexander Schick, a scientist at IOSB. The gesture control system, by contrast, improves the inspector’s working conditions considerably and triggers substantial time savings – the employee can remain at his workstation and interact directly with the test object. “If the bumper is fine, then he swipes over it from left to right. In the event of damage, he points to the location of the defect,” says Schick.
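The two gestures Schick describes – a left-to-right swipe for "part OK" and a pointing gesture to flag a defect – could be distinguished from tracked hand positions roughly as follows. This is an illustrative sketch only; the thresholds and the `classify_gesture` logic are assumptions, not the IOSB implementation.

```python
# Hypothetical gesture classifier over a short window of tracked
# hand positions. A broad horizontal sweep means "part OK"; a hand
# held still is treated as pointing at a defect location.

def classify_gesture(hand_positions):
    """hand_positions: list of (x, y) hand coordinates over time (metres)."""
    xs = [p[0] for p in hand_positions]
    travel = xs[-1] - xs[0]
    if travel > 0.4:          # broad left-to-right motion: approve the part
        return "part_ok"
    spread = max(xs) - min(xs)
    if spread < 0.05:         # hand held still: pointing at a defect
        return ("defect_at", hand_positions[-1])
    return "unknown"

print(classify_gesture([(0.0, 1.0), (0.3, 1.0), (0.6, 1.0)]))   # part_ok
print(classify_gesture([(0.2, 1.1), (0.21, 1.1), (0.2, 1.1)]))  # defect_at
```

In the real system the defect position would then be projected onto the 3D reconstruction of the bumper so it can be stored against the part.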
3D tracking records people and objects in real time
This non-contact gesture-detection system is based on 3D data. Hence, the entire workstation must first be reconstructed in 3D. That includes the individual as well as the object with which he is working. “What does the inspector look like? Where is he situated? How does he move? What is he doing? Where is the object? – all of these data are required so that the pointing gesture can properly link to the bumper,” explains the researcher. In order to enable gesture control, the experts apply 3D body tracking, which records the individual’s posture in real time. Even the car body parts are “tracked.” The hardware requirements for this are minimal: a standard PC and two Microsoft Kinect systems – consisting of camera and 3D sensors – suffice to realize the reconstruction. Schick and his team developed the corresponding algorithms, which fuse multiple 2D and 3D images together, specifically for this kind of application, and adapted them to the standards of the BMW Group.
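Part of what "fusing" two depth views involves can be shown in miniature: each Kinect reports 3D points in its own coordinate frame, and a calibrated rigid transform (rotation plus translation) maps them into one shared frame. The calibration values below are invented for illustration and do not come from the IOSB setup.

```python
# Minimal sketch of registering a point from a second depth camera
# into the first camera's coordinate frame via a rigid transform.

def apply_rigid_transform(point, rotation, translation):
    """Rotate then translate one 3D point (rotation: 3x3 row-major matrix)."""
    x, y, z = point
    rotated = tuple(r[0] * x + r[1] * y + r[2] * z for r in rotation)
    return tuple(c + t for c, t in zip(rotated, translation))

# Assumed extrinsic calibration: the second camera faces the first
# (180-degree rotation about the y-axis) and sits 2 m away along z.
R = [(-1, 0, 0), (0, 1, 0), (0, 0, -1)]
t = (0.0, 0.0, 2.0)

point_in_cam2 = (0.5, 1.0, 1.2)
print(apply_rigid_transform(point_in_cam2, R, t))  # (-0.5, 1.0, 0.8)
```

Once every camera's points live in one frame, the merged cloud gives the full workstation reconstruction the tracking algorithms operate on.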
“The breeding ground for this technology is our Smart Control Room, where people can interact with the room quite naturally. They can use pointing gestures to operate remote displays – without any additional equipment. The room recognizes what actions are taking place at that moment, and offers the appropriate information and tools. Since gesture detection does not depend on display screens, this means we can implement applications that use no monitors, like the gesture interaction here with real objects,” explains Schick. “It makes no difference what kind of object we are dealing with. Instead of a bumper, we could also track a different part.”
The technology can subsequently be integrated into existing production systems at little expense. The scientists were able to incorporate their process into the BMW Group’s system through a specialized interface module. The gesture-detection system will be presented at the 2013 Hannover Messe, from 8 to 12 April, at the Fraunhofer joint exhibition booth in Hall 2, Booth D18.
Plans call for the installation of a prototype at the BMW plant in Landshut in January 2013. In cooperation with quality control inspectors, the system will be fine-tuned onsite before it is deployed to production.
Source: Fraunhofer-Gesellschaft
Labels: Education, Europe, Germany, Manufacturing, Technology