Squeezed R&D budgets in the EU, Japan and the U.S. are reducing the weight of advanced economies in science and technology research, patent applications and scientific publications, leaving China on track to be the world’s top R&D spender by around 2019, according to an OECD report.
The OECD Science, Technology and Industry Outlook 2014 finds that with R&D spending by most OECD governments and businesses yet to recover from the economic crisis, the OECD’s share in global R&D spending has slipped from 90% to 70% in a decade.
Annual growth in R&D spending across OECD countries was 1.6% over 2008-12, half the rate of 2001-08 as public R&D budgets stagnated or shrank in many countries and business investment was subdued. China’s R&D spending meanwhile doubled from 2008 to 2012.
Gross domestic expenditure on R&D (GERD) in 2012 was USD 257 billion in China, USD 397 billion in the United States, USD 282 billion for the EU28 and USD 134 billion in Japan.
The report warns that with public finances still tight in many countries, the ability of governments to compensate for lower business R&D with public funding, as they did during the worst of the economic downturn, has become more limited. Other key findings include:
2012 R&D spending surpassed USD 1.1 trillion in OECD countries and stood at USD 330 billion in the BRIICS (Brazil, Russia, India, Indonesia, China and South Africa).
Korea became the world’s most R&D-intensive country in 2012, spending 4.36% of GDP on R&D, overtaking Israel (3.93%); the OECD average was 2.40%.
The BRIICS produced around 12% of top-quality scientific publications in 2013, almost twice their share of a decade ago, compared with 28% for the United States.
China and Korea are now the main destinations of scientific authors from the United States and experienced a net “brain gain” over 1996-2011.
European countries are diverging in R&D as some move closer to their R&D/GDP targets (Denmark, Germany) and others (Portugal, Spain) fall further behind.
In most countries, 10% to 20% of business R&D is funded with public money, using various investment instruments and government targets.
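The R&D intensity figures quoted above are simple ratios of gross domestic expenditure on R&D (GERD) to GDP. The sketch below shows the calculation using hypothetical round numbers, not the report's data:

```python
def rd_intensity(gerd_billions, gdp_billions):
    """R&D intensity as the report defines it: GERD as a percentage of GDP."""
    return 100.0 * gerd_billions / gdp_billions

# Hypothetical example: a country spending USD 50 billion on R&D
# out of a USD 2,000 billion GDP has an intensity of 2.5%.
example = rd_intensity(50, 2000)
```

Comparing intensities rather than absolute spending is what lets a small economy like Korea or Israel top the ranking while the U.S. remains the largest spender in dollar terms.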
Source: AVS: Science & Technology of Materials, Interfaces, and Processing
A team of University of Maryland researchers is growing vertically aligned “forests” of carbon nanotubes on three-dimensional (3-D) conductive substrates to explore their potential use as a cathode in next-gen lithium batteries.
During the AVS 61st International Symposium & Exhibition, being held November 9-14, 2014, in Baltimore, Md., the team will describe their process for creating lithium-oxygen (Li-O2) battery cells.
Carbon nanotubes are typically grown on two-dimensional or planar substrates, but the structure developed by the team is considered “3-D” because the carbon nanotubes are grown on a porous, “sponge-like” foam structure made of nickel coated with aluminum oxide ceramic.
Batteries usually consist of an anode, cathode and electrolyte; the researchers’ 3-D structure forms the “cathode” part of the battery.
“Our team developed self-standing, catalyst-decorated carbon nanotube cathodes for Li-O2 batteries using atomic layer deposition (ALD) and electrochemical deposition methods,” said Marshall Schroeder, a member of the Rubloff Research Group in Materials Science and Engineering at the University of Maryland. “And we also have unique capabilities for in situ characterization via scanning electron microscopy and X-ray photoelectron spectroscopy for elemental analysis of pristine electrodes and at different points during cycling.”
How does the team build their battery cathode? First, they use ALD to deposit a thin layer (~5 nm) of aluminum oxide on a nickel foam current collector. This is followed by a sputtered layer of iron, which serves as a growth catalyst for chemical vapor deposition (CVD) of carbon.
The ALD layer “acts as a diffusion barrier to keep the growth catalyst from diffusing into the nickel foam during the high-temperature carbon growth process,” Schroeder explained. “The type of carbon growth is heavily dependent on the CVD process parameters — catalyst ripening temperature/time, growth time/temperature, precursor type, and flow rate, etc. — so optimization of the growth process was required to achieve a vertically aligned carbon nanotube architecture.”
These structures were put to the test as cathodes in lithium oxygen cells, and the team discovered that the optimized growth process resulted in a hierarchical pore structure featuring dense carpets of vertically aligned carbon nanotubes on a 3-D current collector scaffold.
Preliminary studies of this cathode structure show promising results for oxygen reduction reaction (ORR) performance, according to Schroeder. “For the oxygen evolution reaction (OER), continued studies will focus on optimization of the electrode performance via decoration with ALD-deposited catalysts,” he adds. “We’ve also started studying the catalyst performance on other carbon nanotube substrates and now have a preliminary fundamental understanding of the catalyst chemistries developed by our team.”
The team’s work shows that combining their ALD capabilities with the unique structure of the 3-D cathode may “significantly improve the performance of one of the most promising next-generation lithium battery technologies,” Schroeder noted.
Water treatment is the collective name for a group of mainly industrial processes that make water more suitable for its application, which may be drinking, medical use, industrial use and more. A water treatment process is designed to remove or reduce existing water contaminants to the point where water reaches a level that is fit for use. Specific processes are tailored according to intended use – for example, treatment of greywater (from bath, dishwasher etc.) will require different measures than black water (from toilets) treatment.
Main types of water treatments
All water treatments involve the removal of solids (usually by filtration and sedimentation), bacteria, algae and inorganic compounds. Used water can be converted into environmentally acceptable water, or even drinking water through various treatments.
Water treatments roughly divide into industrial and domestic/municipal.
Industrial water treatments include boiler water treatment (removal or chemical modification of substances that are damaging to boilers), cooling water treatment (minimization of damage to industrial cooling towers) and wastewater treatment (both from industrial use and sewage).
Wastewater treatment is the process that removes most of the contaminants from wastewater or sewage, producing a liquid that can be disposed to the natural environment and a sludge (semi-solid waste). Wastewater is used water, and includes substances like food scraps, human waste, oils and chemicals. Home uses create wastewater in sinks, bathtubs, toilets and more, and industry contributes its fair share as well. Wastewater and sewage need to be treated before being released to the environment. This is done in plants that reduce pollutants to a level nature can handle, usually through repeatedly separating solids and liquids, which progressively increases water purity.
Wastewater treatment usually consists of three levels. In the primary (mechanical) level, solids are removed from raw sewage by screening and sedimentation; this level removes about 50-60% of the solids. It is followed by secondary (biological) treatment, in which dissolved organic matter that escaped primary treatment is removed by microbes that consume it as food, converting it into carbon dioxide, water and energy. Tertiary treatment removes any remaining impurities, producing an effluent of almost drinking-water quality; the technology required for this stage is usually expensive and sophisticated, and demands a steady energy supply and specific chemicals. Disinfection, typically with chlorine, is sometimes an additional step before discharge of the effluent, but it is not always performed, due to the high price of chlorine as well as concern over the health effects of chlorine residuals.
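The staged removal described above compounds multiplicatively: each level acts on what the previous one left behind. The toy mass balance below illustrates this, using the article's ~50-60% figure for the primary stage; the secondary and tertiary efficiencies are hypothetical placeholders, not figures from the article:

```python
def remaining_load(initial, removal_fractions):
    """Apply each stage's removal efficiency in sequence to a contaminant load."""
    load = initial
    for frac in removal_fractions:
        load *= 1.0 - frac  # each stage removes a fraction of what remains
    return load

# Primary ~55% (mid-range of the article's 50-60%); secondary 85% and
# tertiary 95% are illustrative assumptions.
residual = remaining_load(100.0, [0.55, 0.85, 0.95])
```

Even with modest per-stage efficiencies, the compounding is why three-level treatment can bring effluent close to drinking-water quality.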
Municipal water consists of surface water and groundwater. Surface water, such as lakes and rivers, usually requires more treatment than groundwater (water located under the ground). Municipal/community water is treated by public or private water utility companies to ensure that the water is potable (safe for drinking), palatable (having no unusual or disturbing taste) and sufficient for the needs of the community.
Water flows or is pumped to a central treatment facility, where it is pumped into a distribution system. Initial screening is performed to remove large objects and then the water undergoes a series of processes like: pre-chlorination (for algae control), aeration (removal of dissolved iron and manganese), coagulation (removal of colloids), sedimentation (solids separation), desalination (removal of salt) and disinfection (killing bacteria). Other processes that may be used are: lime softening (the addition of lime to precipitate calcium and magnesium ions), activated carbon adsorption (to remove chemicals that cause taste and odor) and fluoridation (increasing the concentration of fluoride to prevent dental cavities).
As water is both vital for life and in limited supply, much effort goes into finding technologies that can help ensure the sustainability of water resources. Among the innovative methods that have been researched and developed are:
nanotechnology – the use of nanotechnology to purify drinking water can help remove microbes and bacteria. Many nano-water treatment technologies use composite nanoparticles that emit silver ions to destroy contaminants.
membrane chemistry – membranes through which water passes, filtering and purifying it. The pores of membranes used in ultrafiltration can be remarkably fine. This technology is already in use, and efforts are constantly being made to make it more dependable, cost-efficient and widespread. Membranes’ selective separation grants filtration abilities that can serve as alternatives to processes like flocculation, adsorption and more.
seawater desalination – processes that extract salt from saline water, to produce fresh water suitable for drinking or irrigation. While this technology is in use and holds much promise for the future, it is still expensive, with reverse osmosis consuming a vast amount of energy (the core desalination process is based on reverse-osmosis membrane technology).
Innovative wastewater processing – new technologies aim to transform wastewater into a resource for energy generation as well as drinking water. Modular hybrid activated sludge digesters, for example, can remove nutrients for use as fertilizers, decreasing almost by half the amount of energy traditionally required for this treatment in the process.
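The desalination entry above notes the energy cost of reverse osmosis. A rough sense of why comes from the osmotic pressure that must be overcome, which the van 't Hoff relation estimates. The seawater approximation below (0.6 M NaCl) is a common textbook simplification, not a figure from this article:

```python
def osmotic_pressure_bar(molarity, ions_per_formula, temp_k):
    """van 't Hoff estimate: pi = i * M * R * T, with R in L*bar/(mol*K)."""
    R = 0.08314  # gas constant in L*bar/(mol*K)
    return ions_per_formula * molarity * R * temp_k

# Seawater approximated as 0.6 M NaCl (2 ions per formula unit) at 298 K.
pi_seawater = osmotic_pressure_bar(0.6, 2, 298.0)  # roughly 30 bar
```

A reverse-osmosis plant must pump feedwater above this pressure before any fresh water permeates at all, which sets a hard floor on the energy bill.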
What is graphene?
Graphene is a two dimensional mesh of carbon atoms arranged in the form of a honeycomb lattice. It has earned the title “miracle material” thanks to a startlingly large collection of incredible attributes – this thin, one atom thick substance (it is so thin in fact, that you’ll need to stack around three million layers of it to make a 1mm thick sheet!) is the lightest, strongest, thinnest, best heat-and-electricity conducting material ever discovered, and the list does not end there. Graphene is the subject of relentless research and is thought to be able to revolutionize whole industries, as researchers work on many different kinds of graphene-based materials, each one with unique qualities and designation.
Graphene and water treatment
Water is an invaluable resource and the intelligent use and maintenance of water supplies is one of the most important and crucial challenges that stand before mankind. New technologies are constantly being sought to lower the cost and footprint of processes that make use of water resources, as potable water (as well as water for agriculture and industry) are always in desperate demand. Much research is focused on graphene for different water treatment uses, and nanotechnology also has great potential for elimination of bacteria and other contaminants.
Among graphene’s host of remarkable traits, its hydrophobicity is probably the one most useful for water treatment. Graphene naturally repels water, but when narrow pores are made in it, rapid water permeation is allowed. This has sparked interest in using graphene for water filtration and desalination, especially as techniques for fabricating such nanoscale pores mature. Graphene sheets perforated with miniature holes are being studied as a method of water filtration, because they let water molecules pass while blocking contaminants and other substances. Graphene’s low weight and thinness could contribute to a lightweight, energy-efficient and environmentally friendly generation of water filters and desalinators.
It has been discovered that thin membranes made from graphene oxide are impermeable to all gases and vapors except water, and further research revealed that an accurate mesh can be made that allows ultrafast separation of atomic species very similar in size – enabling super-efficient filtering. This opens the door to using seawater as a drinking-water resource in a fast and relatively simple way.
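Size-based sieving of the kind described above can be illustrated with a toy cutoff model. The pore size and species dimensions below are hypothetical placeholders chosen for illustration, not measurements from the graphene-oxide work:

```python
# Hypothetical effective pore cutoff and species sizes (nm), illustration only.
PORE_CUTOFF_NM = 0.45

species_size_nm = {
    "water": 0.28,          # small enough to pass the pore
    "hydrated Na+": 0.72,   # ion plus hydration shell: blocked
    "hydrated Cl-": 0.66,   # likewise blocked
}

# Only species smaller than the cutoff appear in the permeate.
permeate = [name for name, size in species_size_nm.items()
            if size < PORE_CUTOFF_NM]
```

The key point the model captures is that salt ions carry hydration shells that make them effectively larger than water molecules, so a pore can be tuned to pass one and reject the other.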
Recent research activity in the field of graphene water treatments
In September 2013, researchers from China’s Nanjing University of Aeronautics and Astronautics announced that graphyne, a carbon allotrope related to graphene, is a promising material for water desalination that may even outperform graphene: its high throughput and rejection of ions and pollutants give it great potential for this purpose, and it should require lower energy use than traditional technologies. Also in September 2013, researchers from Korea suggested a simple, high-yield method of synthesizing a new graphene-carbon nanotube-iron oxide (G-CNT-Fe) 3D functional nanostructure, reporting that these structures can function as excellent arsenic adsorbents.
Kansas State University researchers have developed a patented method of keeping mosquitoes and other insect pests at bay.
U.S. Patent 8,841,272, “Double-Stranded RNA-Based Nanoparticles for Insect Gene Silencing,” was recently awarded to the Kansas State University Research Foundation, a nonprofit corporation responsible for managing technology transfer activities at the university. The patent covers microscopic, genetics-based technology that can help safely kill mosquitoes and other insect pests.
Kun Yan Zhu, professor of entomology; Xin Zhang, research associate in the Division of Biology; and Jianzhen Zhang, visiting scientist from Shanxi University in China, developed the technology: nanoparticles comprising a nontoxic, biodegradable polymer matrix and insect-derived double-stranded ribonucleic acid, or dsRNA. Double-stranded RNA is a synthesized molecule that can trigger a biological process known as RNA interference, or RNAi, which shuts down a targeted insect gene in a sequence-specific manner.
The technology is expected to have great potential for safe and effective control of insect pests, Zhu said.
“For example, we can buy cockroach bait that contains a toxic substance to kill cockroaches. However, the bait could potentially harm whatever else ingests it,” Zhu said. “If we can incorporate dsRNA specifically targeting a cockroach gene in the bait rather than a toxic substance, the bait would not harm other organisms, such as pets, because the dsRNA is designed to specifically disable the function of the cockroach gene.”
Researchers developed the technology while looking at how to disable gene functions in mosquito larvae. After testing a series of unsuccessful genetic techniques, the team turned to a nanoparticle-based approach.
Once ingested, the nanoparticles act as a Trojan horse, releasing the loosely bound dsRNA into the insect gut. The dsRNA then triggers a genetic chain reaction that destroys specific messenger RNA, or mRNA, in the developing insects. Messenger RNA carries important genetic information.
In the studies on mosquito larvae, researchers designed dsRNA to target the mRNA encoding the enzymes that help mosquitoes produce chitin, the main component in the hard exoskeleton of insects, crustaceans and arachnids.
Researchers found that the developing mosquitoes produced less chitin. As a result, the mosquitoes were more susceptible to insecticides, as they no longer had a sufficient amount of chitin for a normally functioning protective shell. If the production of chitin can be further reduced, the insects can be killed without using any toxic insecticides.
While mosquitoes were the primary insect for which the nanoparticle-based method was developed, the technology can be applied to other insect pests, Zhu said.
“Our dsRNA molecules were designed based on specific gene sequences of the mosquito,” Zhu said. “You can design species-specific dsRNA for the same or different genes for other insect pests. When you make baits containing gene-specific nanoparticles, you may be able to kill the insects through the RNAi pathway. We see this having really broad applications for insect pest management.”
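The sequence specificity Zhu describes rests on base-pairing: a dsRNA guide can direct silencing only where its complement occurs in a transcript. A minimal sketch of that matching logic, using toy sequences rather than the team's actual designs:

```python
def reverse_complement(rna):
    """Reverse complement of an RNA sequence (A-U and G-C pairing)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna))

def silences(guide, transcript):
    """A guide strand can pair with a transcript only where its reverse
    complement occurs -- the source of species/gene specificity."""
    return reverse_complement(guide) in transcript

# Toy example: this guide pairs with the first transcript but not the second.
hit = silences("AUGGC", "AAGCCAUAA")
miss = silences("AUGGC", "AAAAAAA")
```

Because a real guide is designed against one species' gene sequence, an organism whose transcripts lack the complementary stretch is simply not affected, which is the safety argument made in the article.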
On this Day .. This Special Day, we have an opportunity to say “Thank You” to all our men and women who serve in our Armed Forces. So many countless sacrifices .. so many Fathers, Mothers, Brothers, Sisters .. Loved Ones .. away from home, putting in long, long hours – Paying ‘The Price of Freedom’ so that we might remain FREE to enjoy and Treasure those Freedoms. THANK YOU, THANK YOU, THANK YOU, THANK YOU, THANK YOU!
We share with you 2 music videos and a commencement speech by Admiral William H. McRaven (U.S. Special Forces) which we have shared with our oldest grandchildren as a way to demonstrate the dedication required and to Honor and Embrace ‘The Code’ General MacArthur intoned those many years ago: “Duty, Honor, Country” and our way of Remembering and Thanking You – Veterans!
Light striking the retina in the back of the eye is the first major step in the vision process. But when the photoreceptors in the retina degenerate, as occurs in macular degeneration, the retina no longer responds to light, and the person loses some or all of their sight. However, if the retina can be made sensitive to light with the help of some type of optoelectronic implant, then vision may be restored.
The development of artificial retinas still faces many challenges: the implants should provide long-term light sensitivity, should have high spatial resolution, should not contain wires, and should be made of materials that are biocompatible and mechanically flexible. Candidate materials include conducting polymers and quantum dot films, with each having its own advantages and disadvantages in these areas.
Carbon nanotubes combined with nanorods are used to create a light-sensitive film, potentially replacing damaged photoreceptors in the retina. Charge separation at the nanorod-nanotube interface elicits a neuronal response that could then be interpreted by the brain.
Another approach to restoring light sensitivity involves optogenetics, in which light-sensitive proteins (bacterial opsins) are introduced into neurons in the retina. However, this method still requires an electrode to assist in light-induced stimulation of these neurons.
In a new paper published in Nano Letters, researchers at Tel Aviv University, The Hebrew University of Jerusalem, and Newcastle University have found that a film containing carbon nanotubes and nanorods is particularly effective for wire-free retinal photostimulation.
“The greatest significance of our work is in demonstrating how new materials (quantum rods combined with carbon nanotubes) can yield a new system suitable for efficient stimulation of a neuronal system,” coauthor Yael Hanein, Professor at Tel Aviv University, told Phys.org.
The researchers showed that, when the film is attached to a chick retina at 14 days of development (at a time when the retinas are not yet light-sensitive, and so completely blind), the retinas produce a photogenerated current—a neuronal signal that can then be interpreted by the brain.
In the new film structure, the nanorods are interspersed throughout a 3D porous carbon nanotube matrix, and the resulting film is then patterned onto a flexible substrate for implantation. The researchers explain that the 3D structure of the new film provides several advantages, which include high light absorbance, strong binding to neurons, and efficient charge transfer. While other candidate materials for artificial retinas, such as silicon, are rigid, nontransparent, and require an external power source, the new material does not have these problems.
With these advantages, the new films look very promising for use in future artificial retina applications. The researchers also expect that the films could be improved even more with further research.
“At present, we are studying the new implants in vivo, attempting to demonstrate their performance over long-term implantation,” Hanein said. “We teamed up with a retina surgeon to develop implantation and testing procedures compatible with conventional surgical practices, working toward human trials in the future.”
Center researchers aim to understand how quantum systems can store, transport, process information
The University of Maryland (UMD) and the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced today the creation of the Joint Center for Quantum Information and Computer Science (QuICS), with the support and participation of the Research Directorate of the National Security Agency/Central Security Service (NSA/CSS). Scientists at the center will conduct basic research to understand how quantum systems can be effectively used to store, transport and process information.
This new center complements the fundamental quantum research performed at the Joint Quantum Institute (JQI), which was established in 2006 by UMD, NIST and the NSA. Focusing on one of JQI’s original objectives, to fully understand quantum information, QuICS will bring together computer scientists—who have expertise in algorithms, computational complexity theory and computer architecture—with quantum information scientists and communications scientists.
“This new endeavor builds on an already successful and fruitful collaboration at JQI,” said Acting Under Secretary of Commerce for Standards and Technology and Acting Director of NIST Willie May. “The new center will be a venue for groundbreaking basic research that will help to build our capacity for quantum research and train the next generation of researchers.”
UMD and NIST have a shared history of collaboration and cooperation in education, research and public service. They have long cooperated in building collaborative research consortia and programs that have resulted in extensive personal, professional and institutional relationships.
“By deepening our partnership with NIST, we now have all the ingredients in place to make major advances in quantum science,” said UMD President Wallace Loh. “This superb, world-class quantum program will team some of the best minds in physics, computer science and engineering to overcome the limitations of current computing systems.”
Dianne O’Leary, Distinguished University Professor Emerita in computer science at UMD, and Jacob Taylor, a NIST physicist and JQI Fellow, will serve as co-directors of the new center. Like the JQI, QuICS will be located on the UMD campus in College Park, Md.
The capabilities of today’s embedded and high-performance computer architectures have limited advances in critical areas, such as modeling the physical world, improving sensors and securing communications. Quantum computing could enable us to break through some of these barriers.
QuICS’ objectives will be to:
Develop a world-class research center that will build the scientific foundation for quantum information science to enable understanding of the relationships between information theory, computational complexity theory and nature, as well as the advances in computer science necessary to support potential quantum computing and communication devices and systems;
Maintain and enhance the nation’s leading role in quantum information science by expanding an already-powerful collaboration between UMD, NIST and NSA/CSS; and
Establish a unique, interdisciplinary center for the interchange of ideas among computer scientists, physicists and quantum information researchers.
Some of the topics QuICS researchers will initially examine include understanding how quantum mechanics informs computation and communication theories, determining what insights computer science can shed on quantum computing, investigating the consequences of quantum information theory for fundamental physics, and developing practical applications for theoretical advances in quantum computation and communication.
QuICS is expected to train scientists for future industrial and academic opportunities and provide U.S. industry with cutting-edge research results. By combining the strengths of UMD and NIST, QuICS will become an international center for excellence in quantum computer and information science.
QuICS will be the newest of 16 centers and labs within the University of Maryland Institute for Advanced Computer Studies (UMIACS). The center will bring together researchers from UMIACS; the UMD Departments of Physics and Computer Science; and the UMD Applied Mathematics & Statistics, and Scientific Computation program with NIST’s Information Technology and Physical Measurement laboratories.
About the University of Maryland
The University of Maryland is home to three quantum science research centers: the Joint Center for Quantum Information and Computer Science, the Joint Quantum Institute, and the Quantum Engineering Center. UMD has nation-leading computer science, physics and math departments, with particular strengths in the areas relevant to quantum science research.
In the 2015 Best Graduate Schools ranking by U.S. News & World Report, UMD’s Department of Physics ranked 14th, the Department of Computer Science ranked 15th, and the Department of Mathematics ranked 17th. The atomic/molecular/optical physics specialty ranked 6th, the quantum physics specialty ranked 8th, and the applied math specialty ranked 10th. Visit UMD’s website to learn more.
As a non-regulatory agency of the U.S. Department of Commerce, NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life. Visit NIST’s website for more information.
Inspired by the unique optical and electronic properties of graphene, two-dimensional layered materials have been intensively investigated in recent years, driven by their potential applications in future high-speed, broadband electronic and optoelectronic devices. Layers of molybdenum disulfide (MoS2), one kind of transition metal chalcogenide, have proven to be a very interesting semiconducting material.
The basic structure of molybdenum disulfide is a single atomic layer of molybdenum sandwiched between two adjacent atomic layers of sulfur. The compound exists in nature as molybdenite, a crystalline material found in rocks around the world, frequently taking the characteristic form of silver-colored hexagonal plates. For decades, molybdenite has been used in the manufacture of lubricants and metal alloys. As with graphite, the properties of single-atom-thick sheets of MoS2 long went unnoticed.
From the viewpoint of applications in electronics, molybdenum disulfide sheets exhibit a significant advantage over graphene: they have an energy gap – an energy range within which no electron states can exist. By applying an electric field, the sheets can be switched between a state that conducts electricity and one that behaves like an insulator. In theory, a switched-off molybdenum disulfide transistor could consume several hundred thousand times less energy than a silicon transistor.
Graphene, on the other hand, has no energy gap, so transistors made of graphene cannot be fully switched off. More importantly, the relatively weak absorption coefficient of graphene (2.3% of incident light per layer) may significantly limit its light-modulation ability in optical communication devices such as detectors, modulators and absorbers.
Molybdenum disulfide’s semiconducting ability, strong light-matter interaction and similarity to carbon-based graphene make it of interest to scientists as a viable alternative to graphene in the manufacture of electronics, particularly photoelectronics. Scientists have found that the physical properties of MoS2 change markedly when it is thinned down to two-dimensional (2D) form.
A slab of MoS2 even a micron thick has an “indirect” bandgap, while a two-dimensional sheet of molybdenum disulfide has a “direct” bandgap. This thickness-dependent band-gap behavior allows the production of tunable optoelectronic devices with diversified spectral operation. To push 2D MoS2 toward practical optical applications, an essential gap in understanding its nonlinear optical response and how it interacts with light must be filled. Now, a research group working on photonics based on 2D materials, from Shenzhen University, reports a breakthrough in the light-matter interaction of 2D MoS2 and the fabrication of a novel optical device using few-layer molybdenum disulfide (see paper in Optics Express: “Molybdenum disulfide (MoS2) as a broadband saturable absorber for ultra-fast photonics”).
Thanks to the direct band structure and ultrafast response of few-layer MoS2, its optical absorbance can become saturated under high-power excitation, as a result of band filling in the conduction band. A saturable absorber is an important element for pulsed operation in a laser cavity: it absorbs low-intensity light while letting high-intensity light pass. After millions of round trips in the laser cavity, ultrashort pulses (picoseconds or femtoseconds in duration) with high peak power can be generated. MoS2 has an indirect bandgap of ∼1.2 eV in bulk and a direct bandgap of ∼1.9 eV in the monolayer, so it might seem that few-layer MoS2 would have a limited operation bandwidth and fail to work as a broadband saturable absorber.
However, in careful experimental studies the team found that few-layer MoS2 still possesses wavelength-insensitive saturable absorption, which they attribute to the particular structure of their few-layer MoS2 samples. It is worth commenting on the broadband performance of graphene versus MoS2. The broadband performance of graphene is intrinsic, owing to its gapless nature. The situation is more complex in the exfoliated MoS2 nanoparticle samples they used (see paper in Scientific Reports: “Ytterbium-doped fiber laser passively mode locked by few-layer Molybdenum Disulfide (MoS2) saturable absorber functioned with evanescent field interaction”) due to the mixture of 1T (metallic) and 2H (semiconducting) phases present. The 1T phase usually predominates in as-exfoliated samples due to doping by impurities, giving rise to broadband performance similar to graphene’s. If the MoS2 can be rendered predominantly 2H, its absorption at the resonance energy will be stronger.
This means that at specific wavelengths in resonance with the band gap, a MoS2 saturable absorber can potentially give a stronger saturable absorption response than graphene, in view of its strong bulk-like photon absorption and exciton generation owing to Van Hove singularities.
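The saturable absorption discussed above is commonly modeled with a simple two-level expression in which absorption falls as intensity rises. The sketch below uses that generic form with illustrative parameters, not the paper's fitted values:

```python
def saturable_absorption(intensity, alpha0=0.10, i_sat=10.0, alpha_ns=0.005):
    """Generic two-level saturable-absorber model:
    alpha(I) = alpha0 / (1 + I / I_sat) + alpha_ns,
    where alpha0 is the saturable loss, I_sat the saturation intensity,
    and alpha_ns the non-saturable background loss. Parameters here are
    illustrative assumptions, not measured values."""
    return alpha0 / (1.0 + intensity / i_sat) + alpha_ns

# Absorption drops as intensity rises, so the cavity favors intense,
# short pulses -- the mechanism behind passive mode locking.
weak, strong = saturable_absorption(0.1), saturable_absorption(1000.0)
```

Because low-intensity light sees nearly the full loss alpha0 + alpha_ns while intense peaks see only alpha_ns, noise is suppressed and the circulating pulse sharpens on every round trip.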
Fig. 1: The broadband saturable absorption of few-layer MoS2 and the performance of mode locked operation. (click on image to enlarge)
The enhanced, broadband and ultrafast nonlinear optical response of 2D semiconducting transition metal dichalcogenides (TMDs) indicates unprecedented potential for ultrafast photonics, ranging from high-speed light modulation and ultra-short pulse generation to ultrafast optical switching. However, the stability and robustness of TMDs become a significant problem when they are exposed to high-power laser illumination. Unlike graphene, which has extremely high thermal conductivity, flexibility and mechanical stability, TMDs may show a much lower optical damage threshold because of their poorer thermal and mechanical properties, even as their advantages continue to fuel exploration of their photonic applications.
It is worth mentioning that poly(methyl methacrylate) (PMMA) is indispensable for protecting few-layer MoS2 in a direct-transmission geometry under strong optical power density. In our experiment, MoS2 adhered to a fiber end face with a mode field diameter of a few micrometers could not withstand laser illumination above 100 mW (bare material) or 500 mW (with PMMA protection), which could seriously limit its use in practical optical devices. Tapered fibers inspired us to solve this challenge, as shown schematically in Fig. 2. Few-layer MoS2 was coupled to the waist of the tapered fiber, where it interacted with the evanescent field of the propagating light. In this approach, the material does not need to bear the full optical power.
This optical device could withstand 1 W of injected laser power without damage, and could also serve as a saturable absorber to achieve mode-locked operation in a fiber laser.
Fig. 2: Schematic diagram of the taper fiber and the ytterbium-doped fiber laser passively mode locked by the MoS2-taper-fiber-saturable absorber.
“By depositing few-layer MoS2 upon the tapered fiber, we can employ a ‘lateral interaction scheme’ that exploits the strong optical response of 2D MoS2, through which not only can the light-matter interaction be significantly enhanced owing to the long interaction distance, but the drawback of optical damage to MoS2 can also be mitigated. This MoS2-taper-fiber device can withstand strong laser illumination up to 1 W. Considering that other layered TMDs face similar problems to MoS2, our findings may provide an effective approach to solving the optical damage problem in these layered semiconductor materials,” concludes Prof. Han Zhang from the Key Laboratory for Micro-Nano Optoelectronic Devices at Hunan University.
“Beyond MoS2, we anticipate that a number of MoS2-like layered TMDs (such as WSe2, MoSe2, TaS2, etc.) can also be developed into promising optoelectronic devices with high power tolerance, offering inroads to more practical applications, such as large-energy laser mode-locking, nonlinear optical modulation and signal processing.”
This work provides a convenient yet practical way to overcome a key disadvantage of 2D semiconducting TMDs (their very low optical damage threshold), simply by adopting a ‘lateral interaction scheme’. Stimulated by this innovation, we anticipate that researchers will propose new modes of light interaction with 2D materials, particularly the integration of 2D materials with various waveguide structures, such as silicon waveguides. This would not only solve the problem of easy optical damage, but could also lead to new physics on how light propagates along and interacts with a 2D semiconducting surface in the presence of waveguides. Eventually, it might revolutionize our view of 2D optoelectronics and open up a new test-bed with unprecedented opportunities for conceptually new optoelectronic devices.
By Dr. Feng Luan, Assistant Professor, Division of Communication Engineering, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
The solar industry is abuzz over a relative newcomer that burst onto the scene less than a decade ago and has risen rapidly through the ranks. The all-star rookie has also been published in high-impact academic journals in the last few years, but it isn’t a newly minted professor or a hot solar startup. It’s a material known as perovskite.
Materials scientists started testing perovskite's sun-capturing qualities in the 2000s, and by 2009, a team led by Tsutomu Miyasaka from Toin University of Yokohama in Japan had produced a solar cell that converted 3.8% of the sun's light into electricity, a respectable amount for such a new material. Just last fall, another group, led by Henry Snaith of the University of Oxford, published a breakthrough: their perovskite solar cells were 15.4% efficient.
One of Henry Snaith and colleagues’ perovskite solar cells
In a world where gains of fractions of a percent are lauded, such a leap was unprecedented. “Very few come in out of the cold and have a 15% conversion efficiency,” says David Ginley, a research fellow at the National Renewable Energy Laboratory.
“It’s exciting,” says Michael McGehee, a professor of materials science at Stanford University. “It’s a new material with a lot of potential.”
That excitement is evident in recent news coverage. Even Nature, a well-respected academic journal, hailed Snaith as one of 2013's “ten people who mattered.” “This year, Snaith amazed materials researchers by massively boosting the efficiency of solar cells made with perovskite semiconductors,” they wrote.
Those plaudits come with a small catch—they tacitly presume that perovskite will continue its rapid ascent. If it does, the material truly could be revolutionary. Currently, photovoltaics cost between $2 and $5 per watt depending on the scale of the installation. That’s significantly lower than just five years ago, though it’s still not competitive with coal or natural gas. But if perovskite continues to gain efficiency, it could tilt the playing field solidly in favor of solar power.
Many photovoltaic materials operate on similar principles. Learn what makes a solar cell tick.
The target is 25% efficiency. Very few types of cells exceed that goal, and even fewer are commercially available. “A lot of people think that you need the efficiency of the cells to be up near 25% because if the efficiency is lower, you need a larger area to get the power, and the larger area, the more the installation costs are,” McGehee says. Perovskite made waves with how quickly it broke 15% efficiency, and the unspoken assumption in many articles is that the material could breach 25% in a matter of years, not decades.
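McGehee's point is simple arithmetic: for a fixed power target, the panel area required scales inversely with cell efficiency. A rough illustration in Python (the 1000 W/m² figure is the standard peak-sun test irradiance; the 5 kW target is an arbitrary example):

```python
# Panel area needed to produce a target power under peak sun, as a
# function of cell efficiency. 1000 W/m^2 is the standard test-condition
# irradiance; the 5 kW target below is an arbitrary example.

IRRADIANCE = 1000.0  # W/m^2 at peak sun

def area_needed(target_watts, efficiency):
    """Area (m^2) required to generate target_watts at the given efficiency."""
    return target_watts / (IRRADIANCE * efficiency)

# A 5 kW residential system:
area_15 = area_needed(5000, 0.15)  # ~33.3 m^2 at 15% efficiency
area_25 = area_needed(5000, 0.25)  # 20.0 m^2 at 25% efficiency
```

Going from 15% to 25% efficiency cuts the required area by two-fifths, and with it the racking, wiring, and labor costs that scale with area.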
Snaith, whose team achieved the recent perovskite milestone, seems convinced that perovskite already has commercial potential. He has founded a company that’s striving to produce perovskite solar cells in mass quantity, which he says will happen in “three to five years.”
Snaith’s compressed timeline mirrors the great strides perovskite has taken as a photovoltaic material. But the road from the laboratory to the rooftop can be filled with unexpected speed bumps, something known all too well by researchers and manufacturers of copper indium gallium selenide, or CIGS, a photovoltaic material that’s just recently become available on the market. In fact, the story of CIGS could be viewed as a cautionary tale, one that might temper some of the excitement surrounding perovskite.
CIGS began life as CIS, or copper indium selenide. It, too, is a semiconductor, originally discovered in 1953 by Harry Hahn and his team at the University of Heidelberg. They published their discovery in Zeitschrift für anorganische und allgemeine Chemie, a German-language chemistry journal. It wasn't uncommon at the time for chemists to publish in German, though that may be partly why the material was overlooked as a photovoltaic until 1974, when Sigurd Wagner, a young Austrian scientist and fresh face at Bell Labs, and his team published an article showing that their lab-grown crystals could capture the sun's rays.
An entry gate at Bell Labs in Murray Hill, New Jersey, where the first solar cell was demonstrated.
CIS crystals were expensive and proved difficult to grow, though, which was part of the reason why Larry Kazmerski, then a professor at the University of Maine, started searching for a better technique. It didn't take him long. Shortly after Wagner's first paper came out, Kazmerski told colleagues how he had deposited CIS as a thin film on a piece of glass. His first cells were between 4% and 5% efficient.
It was a promising development, but work on CIS was only one part of a larger government investment in solar power. In the 1970s, the National Science Foundation was directing large investments in solar power research for the U.S. government. Much of the money was going toward developing silicon-based solar cells. “Silicon, they knew, would do well eventually. That was the known semiconductor,” says Kannan Ramanathan, head of the CIGS team at the NREL. “Yet they wanted to divest, take risks, and nurture thin films.”
Work on CIS trundled along until 1981, when Boeing scientists Reid Mickelsen and Wen Chen announced at a conference in Orlando, Florida, that they had doubled Kazmerski's efficiency by depositing the material in a new way. Thin films had arrived.
Though silicon remained the favored material, a handful of companies grew interested in thin-film cells and CIS in particular. They wagered that if they could get the chemistry right, thin-film cells would be vastly cheaper to produce than silicon cells, which had to be grown as crystals. Plus, CIS could be deposited on inexpensive glass, reducing weight and materials costs. For Boeing, which used solar cells on spacecraft, lightweight panels would translate into cheaper launch costs.
Meanwhile, the aerospace company’s continued investment was yielding dividends. Chen and another colleague, John Stewart, figured out in the late-1980s that they could substitute gallium for some of the indium, further raising the efficiency. (That was what put the G in CIGS.)
Earlier that decade, oil company Arco had also begun exploring CIS and other thin-film technologies. During the energy crisis in 1979, the company had become a serious player in the nascent solar power industry. After throwing its weight behind CIS research, it quickly developed an alternative to Boeing’s production technique. It wasn’t quite as efficient, but was considered easier to manufacture. By 1988, the Southern California-based Arco Solar produced a four-square-foot module with 11% efficiency. That same year, they offered to permanently light the Hollywood sign using solar power.
Despite the bravado, things weren't going well for the solar pioneer. Development problems plagued the run-up to production, frustrating its parent company. Plus, the solar power market wasn't growing as quickly as they had hoped. Looking to cut costs, Arco sold its solar division to Siemens in 1989.
Boeing had also lost interest, and left their work to NREL. Researchers in academia and industry had to go back to the drawing board in an attempt to resolve the issues that plagued previous manufacturing efforts. But without the major players, the material that had shown so much promise in the 1970s and 1980s stumbled. It would be almost 10 years before the CIGS industry would recover.
Out from the Shadows
By the late 1990s, Siemens was feeling confident in its progress on CIGS and spooled up a pilot production line. The results of an early run were tested at NREL and scored higher than 10% efficiency. They were the first thin-film photovoltaics made outside of a lab to reach that landmark. But just as Arco had dropped its solar division after it made the 11% module, Siemens started looking for a buyer for the California-based division shortly thereafter. It eventually ended up with another oil company, Shell. (The division ended up being a hot potato; Shell would only own it for four years before selling it to Germany-based SolarWorld in 2006.)
The 2000s could have been another lost decade for CIGS, but then, in 2003, Germany began offering generous subsidies for solar power. That encouraged a number of universities and small companies to jump into the game; along with NREL, they would end up carrying the torch when, a few years later, Shell “walked away” from its solar division, Ramanathan says.
The handful of smaller companies kept at it, encouraged by government subsidies and an influx of venture capital, fine-tuning their materials and lowering their production costs. Then, as so many times before, they ran into a series of unexpected problems. While many companies had become adept at producing cells in the lab, they couldn’t replicate that success on a large scale. Some of these delays were blamed on an incomplete scientific understanding of the CIGS material. William N. Shafarman, a professor at the University of Delaware, and Lars Stolt, a professor at Uppsala University, wrote in 2003 that the “lack of a science base has been perhaps the biggest hindrance to the maturation of Cu(InGa)Se2 solar cell technology as most of the progress has been empirical.” At many companies, the cart had gotten in front of the horse.
Close-up of a CIGS solar panel at Biosphere 2 in Arizona
That lack of understanding would catch up with manufacturers a few years later, when product-testing company TÜV Rheinland documented a sharp spike in the number of failures among thin-film panels, including CIGS and other types, during the damp-heat test, in which panels are subjected to 1,000 hours of 85˚ C and 85% humidity. Between 2005 and 2007, 70% of thin-film panels failed, more than double the failure rate for 1997-2005. They had to go back to the drawing board, again.
Meanwhile, manufacturers also had to perfect how the cells would be packaged and connected. Each wire, sheet of glass, and piece of aluminum had to be tested for durability and reliability. They had to simulate everything from snafus that might take place during installation to 20 years of heat and moisture. Thanks to accelerated testing, the process doesn’t take 20 years, but it can still take many months to several years.
Bert Haskell, the CTO at Pecan Street, oversaw these tests in an earlier job as director of product development at Heliovolt, an Austin, Texas-based CIGS company. There, he and his team would subject completed panels to a grueling regimen of abuse. They’d yank on connecting cables, drop one-and-a-half-pound ball bearings onto the glass, and fire chunks of ice at the panels at 50 mph. They’d subject them to high humidity and drastic fluctuations in temperature. They’d bake them and they’d freeze them. “Those tests, you might run those for 90 days or six months before you get results back,” Haskell says. It was quicker than waiting 20 years, but it wasn’t instantaneous.
CIGS solar panels covered the exterior walls of Germany’s winning entry in the 2009 Solar Decathlon.
Add it all up, and you quickly realize that just testing the non-photovoltaic part of the module took several years. Some tests could occur in parallel with work on the CIGS cells themselves, but in the end, the entire package still had to be tested and certified.
It wasn’t until the mid-2000s that CIGS-based solar panels began to trickle into the market, more than 30 years after the material’s initial discovery as a photovoltaic. Today, CIGS cells remain costly relative to silicon cells and have captured just a few percent of the market. The future could still be bright, but it will require many more years of sustained funding, research and development.
A Long Road Ahead
Judging by the challenges CIGS confronted, it’s likely that perovskite solar cells have a long road in front of them. Though the material has shown great promise, moving out of the lab and into production isn’t the same as producing high-efficiency cells in the lab. It takes time. “The development time for most technologies is 20 to 30 years,” says Ginley, the NREL scientist. “That’s pretty damn canonical.”
Haskell agrees. “When a scientist discovers a new material in the lab that has some kind of unique property, going from that to the point where it’s applied in a useful product, it just takes a long time.” (I followed up with Snaith regarding his three-to-five-year commercial timeline for perovskite solar cells, but haven’t heard back.)
Inexpensive solar power has the potential to transform our energy system.
Perovskite’s biggest stumbling block could be water. While most solar cells don’t react well to water, perovskite’s current formulation is an ionic salt, which means it’s highly susceptible to water damage, both McGehee and Ginley tell me. Solar manufacturers work hard to keep their products sealed, but water has a tendency to work its way into the smallest of gaps, including those cracks that happen during installation or any of the many heating and cooling cycles solar panels endure. Reformulating the material while keeping the basic chemical structure could reduce the potential for water damage, but that would require years more research.
“There’s still a lot of questions that need to be answered,” McGehee says of perovskite. “It is exciting and I don’t want to take away from it in any way, but we still need to have a wait and see attitude before we’ll know if this is going to be a commercial success.”
John Lienhard leads coordinated interdisciplinary research efforts to confront resource challenges at the Abdul Latif Jameel World Water and Food Security Lab.
MIT Industrial Liaison Program
November 4, 2014
As world population continues to grow, so does the need for water and food. It would be easy if the fix were laying down more pipes and cultivating more crops. But it's not that simple. The global climate is becoming unevenly warmer, and more people are moving into cities. Both conditions put stress on already-limited resources. These complex issues need complex solutions, and, for that, MIT has created the Abdul Latif Jameel World Water and Food Security Lab.
Started in the fall of 2014 under the direction of Professor John Lienhard, the lab will be able to support and coordinate research all over campus, helping at once industries trying to improve their productivity and localities trying to thrive. As Lienhard says, it’s the interdisciplinary approach, coupled with MIT’s unique capabilities, that will set the lab apart and bring innovative solutions to bear.
Taking on each region
The lab was established through a gift from Mohammed Abdul Latif Jameel ’78, a civil engineering graduate, with the intent of tackling world food and water issues and the interplay of factors that affect them. As an example, in the Arabian Gulf States, conditions are arid with little agricultural capacity. Most of the water comes from desalinated seawater, and much of the food is imported. It’s an area that will become warmer and drier and be subjected to extreme weather in the coming years, with a population that is rapidly growing, Lienhard says.
Along the equator, climate change will particularly affect agricultural regions. Some of these areas are going to warm faster, but Lienhard says that the bigger issue is that food productivity will shift, making some crops less viable in equatorial areas and more productive closer to the poles, changing what can be grown, and turning strong producers into weaker ones and vice versa. Since food always requires water, one question is whether changing management practices can be the answer to increased production. Fertilizer is a known commodity and would be an easy solution, but, as Lienhard says, it brings with it runoff into waterways and resulting damage to ecosystems.
These specific considerations are reflective of the inherent nature of what the lab faces. “Each of these issues is a regional problem that needs to be looked at in its own context,” says Lienhard, adding, “There is no single answer that’s going to come from a neat invention and a new technology.”
The lab will address this complexity by engaging faculty from across schools, including science, engineering, architecture and urban planning, humanities, arts and social sciences, and management, and by drawing upon work being done in various labs — for example, graphene membranes that can be used for desalination and wireless communication signals that can identify pipe leaks. “When we put people from different disciplines together, we get radically new ideas and approaches to the problems,” he says.
The entry into food
One particular opportunity the lab will provide MIT is having a clear presence in solving global food needs. The impact of population growth is a central issue. In 1960, the world had 3 billion people. Today, it’s 7 billion, and in 2050, the estimate is 9 billion. With that three-fold increase and ongoing development, 50, possibly 70, percent more food will be needed by 2050 than is produced today, Lienhard says. The challenge is that more than one-third of the world’s ice-free land is already being used for farming. Since converting more land to farms through practices such as cutting down rainforests isn’t viable, the answer may lie in more efficient production techniques or different food choices. As he says, one-third of all crops are used for livestock, and producing beef takes 15 times more water than producing an equivalent amount of grain.
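The beef-versus-grain comparison can be made concrete with back-of-the-envelope arithmetic. The 15:1 water ratio comes from the text above; the per-kilogram grain baseline below is an illustrative assumption, not a figure from the article:

```python
# Rough water-footprint comparison based on the ~15x beef-to-grain water
# ratio cited above. The grain baseline (~1500 liters of water per kg)
# is an illustrative assumption, not a figure from the article.

WATER_PER_KG_GRAIN = 1500.0   # liters/kg (assumed baseline)
BEEF_TO_GRAIN_RATIO = 15.0    # from the article

water_grain = WATER_PER_KG_GRAIN                        # 1500 L per kg grain
water_beef = WATER_PER_KG_GRAIN * BEEF_TO_GRAIN_RATIO   # 22500 L per kg beef

# Shifting 1 kg of beef consumption to an equivalent amount of grain saves:
savings = water_beef - water_grain  # 21000 liters
```

Whatever the exact baseline, the 15:1 ratio means dietary shifts move water demand far more than incremental efficiency gains in either crop.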
Another issue is the rise in urbanization. More than 50 percent of the population already lives in cities. By 2050, it’s estimated that 86 percent of the developed world and 64 percent of the developing one will be there, Lienhard says. Most food, accordingly, is consumed in cities, and so another question is whether urban agriculture can be developed as a water and energy efficient approach to some portion of the food supply.
Many of these issues are known and studied, but a course of action hasn’t been established, let alone enacted. While the lab will be able to identify already-existing food technology on campus to address a problem, one other benefit is it can help identify work that wasn’t conceived for food-related uses but which nonetheless can be applied.
Take food spoilage: One MIT program in nanotechnology has developed sensors that can detect chemical weapons. But these sensors can also be used to detect ripening or rotting food. This could provide the chance to improve food distribution and reduce waste and spoilage along the supply chain. If that can be done, a significant obstacle can be cleared, since estimates suggest that wasted food is four times the amount needed to feed the world’s hungry people, Lienhard says.
In search of partners
The next step, and the essential one, is collaboration, not only within the university but also with industry. Lienhard says the lab is looking for partners around the world who can develop and implement new water and food technologies and approaches. But more than that, the lab will help partners address their own business challenges. Some companies want to shrink their environmental footprints. Others, such as beverage and bottled-water companies, face struggles in international markets, where they must contend with differing water quality while also competing with local users for the resource. Lienhard says the lab can help find an equitable balance between commerce and sharing resources for domestic use.
Because the lab is new, Lienhard says there’s an unknown element to what the work will look like. But for potential partners, there is also a certainty. “They get MIT,” says Lienhard. They know, in other words, that they’ll be working in a context where there are world-recognized faculty members, a large population of graduate and postdoctoral researchers, approximately 120 United States patents issued to Institute-related projects annually, and 20 spinoff companies per year, he says.
There is also the overall guiding philosophy of MIT’s approach. It’s a place that doesn’t keep its work in the lab but instead focuses on translating research to real-world use. Supplying sufficient water and food as the population grows and the climate changes is a large task, but Lienhard says that’s precisely the nature of what MIT does. “We take basic science. We apply it to human needs, and we solve problems.”
Sep 22, 2018 – MIT: New battery technology gobbles up carbon dioxide – Ultimately may help reduce the emission of the greenhouse gas to the atmosphere + Could Carbon Dioxide Capture Batteries Replace Phone and EV Batteries?