Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light?
What size square corners should be cut from a square piece of paper to make a box with the largest possible volume?
This game challenges you to locate hidden triangles in The White Box by firing rays and observing where the rays exit the Box.
Many natural systems appear to be in equilibrium until suddenly a critical point is reached, setting up a mudslide or an avalanche or an earthquake. In this project, students will use a simple...
Can you make a hypothesis to explain these ancient numbers?
Can you guess the colours of the 10 marbles in the bag? Can you develop an effective strategy for reaching 1000 points in the least number of rounds?
Can you coach your rowing eight to win?
Can you find the values at the vertices when you know the values on the edges?
Imagine a machine with four coloured lights which respond to different rules. Can you find the smallest possible number which will make all four colours light up?
How does the time of dawn and dusk vary? What about the Moon, how does that change from night to night? Is the Sun always the same? Gather data to help you explore these questions.
A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know?
Can you decode the mysterious markings on this ancient bone tool?
This article explores the process of making and testing hypotheses.
Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think?
Imagine you have a large supply of 3kg and 8kg weights. How many of each weight would you need for the average (mean) of the weights to be 6kg? What other averages could you have?
Can you find rectangles where the value of the area is the same as the value of the perimeter?
This problem offers you two ways to test reactions - use them to investigate your ideas about speeds of reaction.
“It might explain why we’re here at all,” said David Radford, who oversees specific ORNL activities in the Majorana Demonstrator research effort. “It could help explain why the matter that we are made of exists.”
The Majorana Demonstrator is being assembled and stored 4,850 feet beneath the earth's surface in enriched copper to limit the amount of background interference from cosmic rays and radioactive isotopes.
Radford, a researcher in ORNL's Physics Division and an expert in germanium detectors, has been delivering germanium-76 to the Sanford Underground Research Facility (SURF) in Lead, S.D., for the project. After navigating a Valentine's Day blizzard on the first two-day drive from Oak Ridge, Radford made a second delivery in March.
ORNL serves as the lead laboratory for the Majorana Demonstrator research effort, a collaboration of research institutions representing the United States, Russia, Japan and Canada. The project is managed by the University of North Carolina’s Prof. John Wilkerson, who also has a joint faculty appointment with ORNL.
Research at SURF is being conducted 4,850 feet beneath the earth’s surface with the intention of building a 40-kilogram germanium detector, capable of detecting the theorized neutrinoless double beta decay. Detection might help to explain the matter-antimatter imbalance.
Before the detection of the unobserved decay can begin, however, the germanium must first be processed, refined and enriched. Radford coordinated the multistep process, which includes an essential pit stop in Oak Ridge.
The 42.5 kilograms of 86-percent enriched white germanium oxide powder required for the project is valued at $4 million and was transported from a Russian enrichment facility to a secure underground ORNL facility in a specially designed container. The container’s special shielding and underground storage limited exposure of the germanium to cosmic rays.
Without such preventative measures, Radford says, “Cosmic rays transmute germanium atoms into long-lived radioactive atoms, at the rate of about two atoms per day per kilogram of germanium. Even those two atoms a day will add to the background in our experiment. So we use underground storage to reduce the exposure to cosmic rays by a factor of 100.”
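Radford's figures make the benefit of underground storage easy to quantify. A back-of-the-envelope sketch (the one-year storage period below is a hypothetical assumption for illustration, not a figure from the article):

```python
# Rough cosmogenic-activation estimate from the quoted rate of about two
# transmuted atoms per day per kilogram. The storage duration is assumed.
SURFACE_RATE = 2.0       # atoms per day per kg (quoted in the article)
SHIELD_FACTOR = 100.0    # reduction from underground storage (quoted)
mass_kg = 42.5           # mass of the enriched germanium oxide shipment
days = 365               # hypothetical one year of storage

surface_atoms = SURFACE_RATE * mass_kg * days
underground_atoms = surface_atoms / SHIELD_FACTOR
print(surface_atoms, underground_atoms)  # 31025.0 310.25
```

At the surface, the shipment would accumulate tens of thousands of long-lived radioactive atoms in a year; underground storage cuts that by two orders of magnitude.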
The germanium must further undergo a reduction and purification process at two Oak Ridge companies, Electrochemical Systems, Inc. (ESI) and Advanced Measurement Technology (AMETEK), before being moved to its final destination in South Dakota. ESI works to reduce the powdered germanium oxide to metal germanium bars. ORTEC, a division of AMETEK, further purifies the bars, using the material to grow large single crystals of germanium, and turning those into one-kilogram cylindrical germanium detectors that will be used in the Demonstrator. Once they leave AMETEK, Radford and his team transport the detectors to SURF.
The enrichment process is lengthy. The Majorana Demonstrator project began the partnership with ESI four years ago. To date, ORNL has delivered -- via Radford's two trips -- nine of the enriched detectors, which are valued at about $2 million including the original cost of the enriched germanium oxide powder.
Requiring a total of 30 enriched detectors, the Majorana Demonstrator is not expected to be fully complete and operational until 2015.
Those involved in the Majorana research effort believe its completion and anticipated results will help pave the way for a next-generation detector using germanium-76 with unprecedented sensitivity. The future one-ton detector will help to determine the ratio and masses of conserved and annihilated lepton particles that are theorized to cause the initial imbalance of matter and antimatter from the Big Bang.
“The research effort is the first major step towards building a one-ton detector — a potentially Nobel-Prize-worthy project,” Radford says.
ORNL’s partner institutions in the Majorana Demonstrator project are Black Hills State University, Duke University, Institute for Theoretical and Experimental Physics (Russia), Joint Institute for Nuclear Research (Russia), Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, North Carolina State University, Osaka (Japan) University, Pacific Northwest National Laboratory, South Dakota School of Mines and Technology, Triangle Universities Nuclear Laboratory, Centre for Particle Physics (Canada), University of Chicago, University of North Carolina, University of South Carolina, University of South Dakota, University of Tennessee and the Center for Experimental Nuclear Physics and Astrophysics.
The Majorana Demonstrator research project is funded by the National Science Foundation and the Department of Energy’s Office of Nuclear Physics.
ORNL is managed by UT-Battelle for the Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.
Joshua Haston | Newswise
First evidence on the source of extragalactic particles
13.07.2018 | Technische Universität München
Simpler interferometer can fine tune even the quickest pulses of light
12.07.2018 | University of Rochester
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
In 1967, the world changed and time became a matter of atoms rather than stars.
The National Physical Laboratory (NPL) made the first successful atomic clock in 1955. In 1967, the length of a second was redefined as an atomic measurement, rather than an astronomical one.
This particular clock, by Hewlett Packard, was used at the NPL in the 1990s for its contribution to the new global timescale: Coordinated Universal Time (UTC), which is the basis for civil time today. This 24-hour time standard is kept using highly precise atomic clocks combined with the Earth's rotation.
Probably the most exciting thing happening in science right now is the restart of the Large Hadron Collider (commonly referred to as the LHC)! A massive explosion required about a year to fix, and to re-engineer the machine to try to prevent such a problem happening again - those events, and the current status, are documented.
The LHC is an enormous apparatus - mostly a ring 27 kilometres in circumference, half in France and half in Switzerland - designed to smash together the nuclei of atoms to produce as yet undetected elementary particles.
The ring has 8 major experimental stations, such as the CMS and ATLAS experiments, which are 'sort of' general-purpose detectors with different characteristics, each weighing over ten thousand tonnes.
The Director of the LHC, Steve Myers, paid tribute to the huge amount of dedicated team work involved.
They are starting at relatively low energies, as they need to carefully bring the machine up to its design level. So at the moment they can only see the types of particle they already know about.
At full strength they hope to detect the particle that a lot of physicists think is responsible for why other particles, such as the proton and electron have mass. This particle is called the Higgs Boson, named after Peter Higgs. If they don't find it, and can demonstrate it does not exist at the energies that the LHC can reach, then this is also interesting as it will lead to new physics.
Apart from the Higgs Boson, there is a potential doubling of the number of particles needed to explain the world according to the Super Symmetric Theory (affectionately known as SUSY); these are expected to be discovered by the LHC. Plus there are the 'expected unexpected' discoveries.
Some good URLs to look at are:
Pretty pictures of the LHC
Official LHC website
Unofficial LHC website
LHC on Twitter
Collision events as they happen at the CMS experiment
New Zealand scientists have proved for the first time that Marine Protected Areas are effective in protecting endangered animals.
The findings have come from a 21-year study of Hector's dolphins at a South Island marine sanctuary.
The sanctuary covers 1170 square kilometres off the coast of Christchurch.
Researcher Dr Liz Slooten from Otago University says the dolphins' survival increased by 5.4%. However, she says if the reserve is not enlarged, the species is still in danger of extinction.
Each year, 23 Hector's dolphins die in commercial gill nets off the east coast of the South Island. The sustainable limit is about one death a year.
Conservation group Nabu International says the reserves could also be crucial for the survival of the critically endangered Maui's dolphin, which has only 55 adults surviving, all on the west coast of the North Island.
A study of the content of rare earth elements in U.S. coal ashes shows that coal mined from the Appalachian Mountains could be the proverbial golden goose for hard-to-find materials critical to clean energy and other emerging technologies.
In the wake of a 2014 coal ash spill into North Carolina's Dan River from a ruptured Duke Energy drainage pipe, the question of what to do with the nation's aging retention ponds and future coal ash waste has been a highly contested topic.
One particularly entrepreneurial idea is to extract so-called "critical" rare earth elements such as neodymium, europium, terbium, dysprosium, yttrium and erbium from the burned coal. The Department of Energy has identified these globally scarce metals as a priority for their uses in clean energy and other emerging technologies. But exactly how much of these elements are contained in different sources of coal ash in the U.S. had never been explored.
Researchers from Duke University measured the content of rare earth elements in samples of coal ash representing every major coal source in the United States. They also looked at how much of these elements could be extracted from ash using a common industrial technique.
The results, published online on May 26 in the journal Environmental Science and Technology, showed that coal from the Appalachian Mountains contains the most rare earth elements. However, if extraction technologies were cheap enough, there are plenty of rare earth elements to be found in other sources as well.
"If a program were to move forward, they'd clearly want to pick the coal ash with the highest amount of extractable rare earth elements, and our work is the first comprehensive study to begin surveying the options," Hsu-Kim said.
The researchers took coal ash samples from power plants located mostly in the American Midwest that burn coal sourced from all over the country, including the three largest sources: the Appalachian Mountains, southern and western Illinois, and the Powder River Basin in Wyoming and Montana. The content of rare earth elements was then tested using hydrofluoric acid, which is much stronger and more efficient than industrial methods, but is too hazardous to use on a large scale.
The results showed that ash collected from Appalachian Mountain coal has the highest amount of rare earth elements at 591 milligrams per kilogram (or parts per million). Ash from Illinois and the Powder River Basin contain 403 mg/kg and 337 mg/kg, respectively.
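Since mg/kg is parts per million by mass, the reported concentrations translate directly into tonnage. A small sketch (the one-million-ton stockpile is a hypothetical assumption for illustration, not a figure from the study):

```python
# Convert the reported rare earth concentrations (mg/kg, i.e. ppm by mass)
# into metric tons of rare earth elements per quantity of ash, assuming a
# hypothetical 100% recovery.
PPM = {"Appalachian": 591, "Illinois": 403, "Powder River Basin": 337}
ash_tons = 1_000_000  # hypothetical stockpile size

ree_tons = {source: ash_tons * ppm / 1_000_000 for source, ppm in PPM.items()}
print(ree_tons)  # {'Appalachian': 591.0, 'Illinois': 403.0, 'Powder River Basin': 337.0}
```

In practice the recoverable fraction is far lower and depends on the extraction chemistry.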
The researchers then used a common industrial extraction technique featuring nitric acid to see how much of the rare earth elements could be recovered. Coal ash from the Appalachian Mountains saw the lowest extraction percentages, while ash from the Powder River Basin saw the highest. Hsu-Kim thinks this might be because the rare earth elements in the Appalachian Mountain coal ash are encapsulated within a glassy matrix of aluminum silicates, which nitric acid doesn't dissolve very well.
"One reason to pick coal ash from the Appalachian Mountains would be for its high rare earth element content, but you'd have to use a recovery method other than nitric acid," said Hsu-Kim, who also holds an appointment in Duke's Nicholas School of the Environment. "For any future venture to begin an extraction program, the recovery method will need to be tailored to the specific chemistry of the coal ash being used."
The Duke researchers also tried "roasting" the coal ash with an alkali agent before dissolving it with nitric acid. Even though the process hadn't been optimized for recovery purposes, the tests showed a marked improvement in extraction efficiency.
"The reagents we used are probably too expensive to use on an industrial scale, but there are many similar chemicals," said Hsu-Kim. "The trick will be exploring our options and developing technologies to drive the costs down. That way we can tap into this vast resource that is currently just sitting around in disposal ponds."
The above post is reprinted from materials provided by Duke University.
1. What is the volume of a quantity of a gas at 172 Celsius, if its volume is 262 mL at -25 Celsius and the pressure remains constant?
2. What is the pressure, in inches of mercury, of a gas that originally had a volume of 252 mL, under a pressure of 0.834 atmosphere and a temperature of 45 Celsius, if the volume is reduced to 167 mL at a temperature of 175 Celsius?
3. A helium balloon has a volume of 45 liters at 15.2 Celsius and 728 mm Hg of pressure. To what volume will the balloon expand if it rises to an altitude where the pressure is 0.114 atm and the temperature is -34.5 Celsius?
4. A cylinder with a fixed volume of 685 mL exerts a pressure of 11.5 psi at 38 Celsius. What will the pressure, in psi, become if the temperature is raised to 549 Fahrenheit?
Assume this is an ideal gas problem.
PV = nRT (universal gas constant: http://scienceworld.wolfram.com/physics/UniversalGasConstant.html)
V1/V2 = T1/T2 - temperature has to be expressed in ...
This solution provides an explanation for determining answers but does not include calculations. | <urn:uuid:1287a38b-1ce1-4027-84ae-a1aea687cac1> | 3.484375 | 291 | Q&A Forum | Science & Tech. | 78.384988 | 95,575,077 |
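Since the posted solution deliberately omits the arithmetic, here is a sketch of how problems 1 and 3 could be checked numerically (the function names are illustrative, not from the original solution):

```python
# Gas-law checks for problems 1 and 3. Temperatures must be converted to kelvin.

def to_kelvin(t_celsius):
    """Convert a Celsius temperature to kelvin."""
    return t_celsius + 273.15

def charles_volume(v1, t1_c, t2_c):
    """Charles's law at constant pressure: V2 = V1 * T2 / T1."""
    return v1 * to_kelvin(t2_c) / to_kelvin(t1_c)

def combined_volume(v1, p1, t1_c, p2, t2_c):
    """Combined gas law: V2 = V1 * (P1 / P2) * (T2 / T1); pressures in the same units."""
    return v1 * (p1 / p2) * (to_kelvin(t2_c) / to_kelvin(t1_c))

# Problem 1: 262 mL at -25 Celsius warmed to 172 Celsius at constant pressure.
v_problem1 = charles_volume(262, -25, 172)

# Problem 3: 45 L at 728 mm Hg and 15.2 Celsius, rising to 0.114 atm and -34.5 Celsius.
MMHG_PER_ATM = 760.0
v_problem3 = combined_volume(45, 728, 15.2, 0.114 * MMHG_PER_ATM, -34.5)

print(f"{v_problem1:.0f} mL, {v_problem3:.0f} L")  # about 470 mL and 313 L
```

The same pattern (convert to kelvin, then apply the ratio form of the gas law) handles problems 2 and 4.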
- Meeting report
- Open Access
Follow that plant!
© BioMed Central Ltd 2001
Published: 7 February 2001
A report on the talks presented at the Cold Spring Harbor 2000 Meeting on Arabidopsis Genomics, New York, 7-10 December, 2000.
It may be difficult to convince a lay person that the genome sequence of a little weed called Arabidopsis thaliana is not only providing an invaluable resource for understanding plant biology but is also serving as a model system for improvement of economically important crop species. An even more difficult task is to persuade people that the Arabidopsis Genome Initiative became an example of technology development, later used for the Human Genome Project, and that the sequence of the Arabidopsis genome might be useful as a model to study (and eventually, even help cure) human diseases. Researchers called to the National Science Foundation headquarters to give a press conference highlighted these issues when making the public announcement of the completion of the Arabidopsis genome sequence. One week before this announcement, these issues were discussed at the Cold Spring Harbor 2000 Meeting on Arabidopsis Genomics.
Databases and centralized resources
As for other complete genomes, annotation of the Arabidopsis genome generated substantial discussion. The case of Drosophila melanogaster was described by the plenary speaker Michael Ashburner (EMBL-EBI, Cambridge, UK). Because a whole-genome shotgun-sequencing approach was used for Drosophila, the annotation process was different from the procedure used for Arabidopsis. Initially, a 3 Mbp region of the Drosophila genome was annotated independently by several groups over 18 months. This genome annotation assessment project (GASP) allowed the consortium to decide which tools would be most useful for annotating the whole genome when it was finished.
In the case of Arabidopsis, the annotation will be now centralized at The Institute for Genome Research (TIGR, Bethesda, USA), the Munich Information Center for Protein Sequences (MIPS, Martinsried, Germany) and the Kazusa DNA Research Institute (KDRI, Japan). These institutes have received accurate sequences from bacterial artificial chromosome (BAC) clones, often annotated in a detailed way after manual curation. This strategy resulted in richer and more precise information than would a fully automated approach carried out on the complete genome. The current annotation is heterogeneous, posing a problem for global electronic analysis of the annotated data. A more automated system to add uniformity to the annotation is being developed, although a complete 're-annotation' will not be possible. An alternative to the centralized annotation was proposed by Lincoln Stein (Cold Spring Harbor Laboratories (CSHL), New York, USA). The distributed annotation system (DAS) incorporates expert information on each gene or genomic feature derived from all members of the community. In this system, each investigator curates the annotation of his or her favorite gene using a unified format and all this information is collated in a reference computer server.
The Arabidopsis Information Resource (TAIR) [http://www.arabidopsis.org/home.html], as discussed by Margarita Garcia-Hernandez (Carnegie Institution, Stanford, USA) is an attempt to provide the Arabidopsis community with a comprehensive and integrated database. TAIR contains extensive genomic information including clones, genes, both older genetic and visible marker maps and more recent AGI sequence maps, as well as some plant literature. In the future, additional features to be incorporated are gene function, gene and protein expression data, and so on. The idea of collecting, in a single website, data from many different labs, institutes and resource centers, raised the issue of intellectual property. In this regard, a concern was expressed by TIGR and MIPS, who feel their effort should receive the proper credit if used by others.
Over the past few years, a growing number of plant populations mutated with T-DNA, transposons or ethylmethane sulfonate (EMS) have been generated to facilitate functional genomics. Many of these are available through the Arabidopsis Biological Resource Center (ARBC, Ohio State University, Columbus, USA), as outlined by Randy Scholl (Ohio State University). Resources created more recently were also presented. Ottoline Leyser (University of York, UK), introduced GARNet [http://garnet.arabidopsis.org.uk], the UK Arabidopsis functional genomics network. Michel Caboche (INRA, Versailles, France) described GENOPLANTE, a comprehensive program that includes genomic sequencing as well as functional genomics involving several plant species. Rob Martienssen (CSHL) introduced a database for transposon-based enhancer- and gene-trap systems, which includes systematic sequencing of transposon insertion sites.
Tools for gene identification
One powerful tool for gene identification is transposon tagging. Using this technique, Michael Snyder (Yale University, New Haven, USA) showed that 600 previously non-annotated genes were found in the genome of Saccharomyces cerevisiae, many of which had less than 100 amino acids. Such small open reading frames are typically excluded from annotation routines, and this criterion was also used during Arabidopsis annotation. As in yeast, transposon tagging is an important tool for the identification of new genes in Arabidopsis, and a number of collections of transposon lines are available. Moreover, Dick McCombie (CSHL) proposes to sequence other Brassica species, which will provide a resource for gene discovery. Still to be decided is which Brassica species would be most useful to sequence. For evolutionarily distant species, however, comparative genomics may not be so promising. In comparing the partial tomato sequence with that of Arabidopsis, Steve Tanskley (Cornell University, Ithaca, USA) found that a high proportion of tomato genes have a significant match in the Arabidopsis genome. But most of the 'hits' correspond to members of gene families, which complicates the identification of orthologous genes. Orthology could be determined for only 4% of the genes analyzed, and little colinearity was observed between the two genomes with respect to these genes.
As pointed out by Hans-Werner Mewes (MIPS, Martinsried, Germany), somewhat unexpectedly, the sequence of the Arabidopsis genome showed a high degree of gene duplication. Some genes are found in tandem duplications or multiple copies, and large chromosomal regions are found more than once in the same or different chromosomes. This no doubt contributes a degree of genetic redundancy. Ashburner argued, however, that truly redundant genes (ones with identical function) are unlikely, because they would not be maintained by natural selection. Redundancy may be observed in the case of recent evolutionary events. Whatever the cause, Owen White (TIGR, Bethesda, USA) proposed taking advantage of gene duplication to correct gene modeling annotation errors.
Arabidopsis is now well into the 'post-genome' era, as shown by the substantial number of presentations describing the use of genomic tools to study a wide variety of biological processes. For example, Jeffery Dangl (University of North Carolina, Chapel Hill, USA) used Arabidopsis DNA microarrays to identify groups of genes coordinately regulated during the onset of systemic acquired resistance and then, taking advantage of the genome sequence, determined the regulatory sequences in the promoters responsible for each pattern of expression. Stacey Harmer (Scripps Research Institute, La Jolla, USA) applied similar resources to identify the mechanism of circadian control of genes and their regulatory regions. Phil Benfey (New York University, USA) developed an algorithm to draw transcription factor networks using microarrays and sequence data; this will be applied to different stages of root development. Pam Green and Rodrigo Gutiérrez (Michigan State University, East Lansing, USA) are using cDNA microarrays from the Arabidopsis Functional Genomics Consortium (AFGC [http://afgc.stanford.edu], which includes a facility providing Arabidopsis cDNA microarrays containing 11,000 genes) to classify genes regulated by mRNA stability. Daphne Preuss (University of Chicago, USA) could identify centromeric genes and analyze in detail their expression and methylation patterns thanks to the availability (unique to Arabidopsis) of deep coverage of heterochromatic sequence. David Galbraith (University of Arizona, Tucson, USA) is attempting to assign a function to each of the cytochrome P450 proteins, which are encoded by a family of nearly 300 genes in Arabidopsis, by reverse genetics and microarray expression profiling under different biotic and abiotic treatments.
The private sector presented a focus on the implementation of functional genomics. Ken Feldman (Ceres Inc., Malibu, USA) described progress on obtaining full-length cDNA sequences from Arabidopsis, which are being used for several functional genomics approaches. Information from a database of 8,000 cDNA sequences has provided knowledge of general sequence features and will be useful for modifying gene-prediction programs, as sequence from cDNAs indicates that only 60% of genes are correctly predicted. In another cDNA sequencing program, Gary Temple (Life Technologies/Invitrogen Corp., Rockville, USA) in collaboration with GENOSCOPE (Evry, France) described a versatile system to normalize a cDNA library and generate full-length cDNA sequences, which will be publicly available. In order to overcome the problem that many mutations may only create subtle phenotypic effects, Keith Davis (Paradigm Genetics, Inc., Research Triangle Park, USA) described a high-throughput platform collating phenotypic data from 100 traits measured at predetermined stages of plant growth and development. Jack Okamuro (Ceres) presented a similarly detailed phenotypic analysis of fruit development.
The complete genome sequence has made it possible to generate many more markers for mapping quantitative trait loci, and a number of groups are identifying loci that contribute to natural variation among Arabidopsis strains and close relatives, with the expectation that the genes identified will provide information not readily derived by mutagenesis. Insect resistance is one such variable among naturally occurring populations of Arabidopsis. Thomas Mitchell-Olds (Max Planck Institute, Jena, Germany) described genotypic variation in enzymes of the glycosinolate biosynthetic pathway, products of which confer insect resistance. One enzyme in this pathway, which controls glycosinolate chain length, was found to be absent in the Landsberg erecta strain and may have undergone gene conversion or recombination with a closely related gene. Another trait that varies significantly between different strains is hypocotyl length, as described by Detlef Weigel (Salk Institute, La Jolla, USA). Interestingly, the variation in hypocotyl length of different strains under various light conditions could be correlated with the incident sunlight where a strain typically grows. Cluster analysis also showed that several determinants of hypocotyl length map to genes known from their mutant phenotypes to affect hypocotyl length. Ben Bowen (Lynx Therapeutics Inc., Hayward, USA) described quantitative trait loci associated with nitrogen utilization. Several candidate genes were defined using massively parallel signature sequencing (MPSS) technology, in which short signature sequences are obtained from cDNAs and attached to microbeads. All these studies should contribute more to our understanding of evolutionary processes, defining whether mechanisms of adaptation principally involve changes in enhancers or protein coding regions and whether such changes predominantly occur in regulatory or basic cellular components.
Keeping up with the leadership of Arabidopsis in technology development, two new technologies are being applied to it. One, presented by Michael Sussman (University of Wisconsin, Madison, USA), is a microarray technology called the maskless array synthesizer (MAS). It is an oligonucleotide microarray construction system based on a digital micromirror device designed by Texas Instruments. This device generates successive virtual masks on a slide for solid-phase oligonucleotide synthesis. These arrays are faster to build and cheaper than commercially available ones. The second is called targeting induced local lesions in genomes (TILLING) and was presented by Steve Henikoff (Fred Hutchinson Cancer Research Center, Seattle, USA). This EMS mutagenesis system allows identification of point mutations in known genes after denaturing high-performance liquid chromatography of denatured and re-annealed PCR fragments from mutant and wild-type plants. Henikoff is carrying out high-throughput TILLING and will provide the results to the community.
Undoubtedly, the Arabidopsis genome sequence is, so far, the most comprehensive compared with other higher eukaryotes for which the genome sequence has been completed. Although its sequence shows the deepest coverage of heterochromatic regions, it was repeatedly referred to as 'almost' or 'nearly' complete during the meeting. This was not because of the few remaining sequence gaps, but because of the determination of the researchers to hold their excitement until the release of the 14 December 2000 issue of Nature in which the annotated sequence was to be published. Holding back the celebration of the achievement of this milestone for five days was not too difficult, considering that the job was completed five years earlier than had been initially planned.
The milestone achieved by this group of plant biologists will certainly change the way we study not only plant biology but also biology in general. The approach for sequencing the Arabidopsis genome resulted in the availability of the BAC-based physical map, which was an invaluable tool long before the genome sequence was finished. The approach proved so successful that it became an example followed by the Human Genome Project. The idea that a weed can improve human health is not far fetched. Plants are not only a source of food but a source of drugs for treating diseases. Moreover, the discovery of Arabidopsis genes homologous to mammalian cancer-related genes opens up the possibility of using a plant as a model to study the basis of human diseases as complex as cancer. The outstanding success of the Arabidopsis Genome Initiative is to be followed by sequencing projects aimed at increasingly complex plant genomes. | <urn:uuid:ed432db6-db72-463a-a18f-0c88cf54ca69> | 2.609375 | 2,938 | Academic Writing | Science & Tech. | 9.547222 | 95,575,091 |
|Unsolved problem in mathematics:|
Does the Beal conjecture holds true for all positive integers?
(more unsolved problems in mathematics)
- where A, B, C, x, y, and z are positive integers with x, y, z > 2, then A, B, and C have a common prime factor.
- There are no solutions to the above equation in positive integers A, B, C, x, y, z with A, B, and C being pairwise coprime and all of x, y, z being greater than 2.
The conjecture was formulated in 1993 by Andrew Beal, a banker and amateur mathematician, while investigating generalizations of Fermat's last theorem. Since 1997, Beal has offered a monetary prize for a peer-reviewed proof of this conjecture or a counterexample. The value of the prize has increased several times and is currently $1 million.
To illustrate, the solution has bases with a common factor of 3, the solution has bases with a common factor of 7, and has bases with a common factor of 2. Indeed the equation has infinitely many solutions where the bases share a common factor, including generalizations of the above three examples, respectively
Furthermore, for each solution (with or without coprime bases), there are infinitely many solutions with the same set of exponents and an increasing set of non-coprime bases. That is, for solution
we additionally have
Any solutions to the Beal conjecture will necessarily involve three terms all of which are 3-powerful numbers, i.e. numbers where the exponent of every prime factor is at least three. It is known that there are an infinite number of such sums involving coprime 3-powerful numbers; however, such sums are rare. The smallest two examples are:
What distinguishes Beal's conjecture is that it requires each of the three terms to be expressible as a single power.
Relation to other conjectures
Fermat's Last Theorem established that has no solutions for n > 2 for positive integers A, B, and C. If any solutions had existed to Fermat's Last Theorem, then by dividing out every common factor, there would also exist solutions with A, B, and C coprime. Hence, Fermat's Last Theorem can be seen as a special case of the Beal conjecture restricted to x = y = z.
The Fermat–Catalan conjecture is that has only finitely many solutions with A, B, and C being positive integers with no common prime factor and x, y, and z being positive integers satisfying Beal's conjecture can be restated as "All Fermat–Catalan conjecture solutions will use 2 as an exponent."
The abc conjecture would imply that there are at most finitely many counterexamples to Beal's conjecture.
In the cases below where 2 is an exponent, multiples of 2 are also proven, since a power can be squared. Similarly, where n is an exponent, multiples of n are also proven.
- The case gcd(x, y, z) > 2 is implied by Fermat's Last Theorem.
- The case (x, y, z) = (2, 4, 4) and all its permutations were proven to have no solutions by Pierre de Fermat in the 1600s. (See one proof here for the x = 2 or y = 2 case.)
- A potential class of solutions to the equation, namely those with A, B, C also forming a Pythagorean triple, were considered by L. Jesmanowicz in the 1950s. J. Jozefiak proved that there are an infinite number of primitive Pythagorean triples that cannot satisfy the Beal equation. Further results are due to Chao Ko.
- The case x = y = z is Fermat's Last Theorem, proven to have no solutions by Andrew Wiles in 1994.
- The cases (x, y, z) = (2, n, n) and (3, n, n) and all their permutations were proved by Darmon and Merel in 1995.
- The case (x, y, z) = (n, 4, 4) and all its permutations have been proven for n ≥ 2.
- The impossibility of the case A = 1 or B = 1 is implied by Catalan's conjecture, proven in 2002 by Preda Mihăilescu. (Notice C cannot be 1, or one of A and B must be 0, which is not permitted.)
- The case (x, y, z) = (2, 3, 7) and all its permutations were proven to have only five solutions, none of them involving an even power greater than 2, by Bjorn Poonen, Edward F. Schaefer, and Michael Stoll in 2005.
- The case (x, y, z) = (2, 3, 8) and all its permutations are known to have only three solutions, none of them involving an even power greater than 2.
- The case (x, y, z) = (2, 3, 9) and all its permutations are known to have only two solutions, neither of them involving an even power greater than 2.
- The case (x, y, z) = (2, 3, 10) and all its permutations were proved by David Brown in 2009.
- The case (x, y, z) = (2, 4, n) and all its permutations were proved for n ≥ 4 by Michael Bennet, Jordan Ellenberg, and Nathan Ng in 2009.
- The case (x, y, z) = (2, 3, 15) and all its permutations were proved by Samir Siksek and Michael Stoll in 2013.
- The case (x, y, z) = (3, 3, n) and all its permutations have been proven for 3 ≤ n ≤ 10000 except n = 7, 11, and 13.
- The cases (5, 5, 7), (5, 5, 19), and (7, 7, 5) and all their permutations were proved by Sander R. Dahmen and Samir Siksek in 2013.
- The Darmon–Granville theorem uses Faltings's theorem to show that for every specific choice of exponents (x, y, z), there are at most finitely many solutions.:p. 64
- Peter Norvig, Director of Research at Google, reported having conducted a series of numerical searches for counterexamples to Beal's conjecture. Among his results, he excluded all possible solutions having each of x, y, z ≤ 7 and each of A, B, C ≤ 250,000, as well as possible solutions having each of x, y, z ≤ 100 and each of A, B, C ≤ 10,000.
For a proof or counterexample published in a refereed journal, banker Andrew Beal initially offered a prize of US $5,000 in 1997, raising it to $50,000 over ten years, but has since raised it to US $1,000,000.
The American Mathematical Society (AMS) holds the $1 million prize in a trust until the Beal conjecture is solved. It is supervised by the Beal Prize Committee (BPC), which is appointed by the AMS president.
The counterexamples and show that the conjecture would be false if one of the exponents were allowed to be 2. The Fermat–Catalan conjecture is an open conjecture dealing with such cases. If we allow that at most one of the exponents is 2, then there may be only finitely many solutions (except the case 1+2^3=3^2).
A variation of the conjecture asserting that x, y, z (instead of A, B, C) must have a common prime factor is not true. A counterexample is in which 4, 3, and 7 have no common prime factor. (In fact, the maximum common prime factor of the exponents that is valid is 2; a common factor greater than 2 would be a counterexample to Fermat's Last Theorem.)
- Euler's sum of powers conjecture
- Jacobi–Madden equation
- Prouhet–Tarry–Escott problem
- Taxicab number
- Pythagorean quadruple
- Sums of powers, a list of related conjectures and theorems
- Distributed computing
- "Beal Conjecture". American Mathematical Society. Retrieved 21 August 2016.
- "Beal Conjecture". Bealconjecture.com. Retrieved 2014-03-06.
- R. Daniel Mauldin (1997). "A Generalization of Fermat's Last Theorem: The Beal Conjecture and Prize Problem" (PDF). Notices of the AMS. 44 (11): 1436–1439.
- "Beal Prize". Ams.org. Retrieved 2014-03-06.
- Bennett, Michael A.; Chen, Imin; Dahmen, Sander R.; Yazdani, Soroosh (June 2014). "Generalized Fermat Equations: A Miscellany" (PDF). Simon Fraser University. Retrieved 1 October 2016.
- "Mauldin / Tijdeman-Zagier Conjecture". Prime Puzzles. Retrieved 1 October 2016.
- Elkies, Noam D. (2007). "The ABC's of Number Theory" (PDF). The Harvard College Mathematics Review. 1 (1).
- Michel Waldschmidt (2004). "Open Diophantine Problems". Moscow Mathematics. 4: 245–305.
- Crandall, Richard; Pomerance, Carl (2000). Prime Numbers: A Computational Perspective. Springer. p. 417. ISBN 978-0387-25282-7.
- Nitaj, Abderrahmane (1995). "On A Conjecture of Erdos on 3-Powerful Numbers". Bulletin of the London Mathematical Society. 27 (4): 317–318. doi:10.1112/blms/27.4.317.
- Wacław Sierpiński, Pythagorean triangles, Dover, 2003, p. 55 (orig. Graduate School of Science, Yeshiva University, 1962).
- "Billionaire Offers $1 Million to Solve Math Problem | ABC News Blogs – Yahoo". Gma.yahoo.com. 2013-06-06. Retrieved 2014-03-06.
- H. Darmon and L. Merel. Winding quotients and some variants of Fermat’s Last Theorem, J. Reine Angew. Math. 490 (1997), 81–100.
- Frits Beukers (January 20, 2006). "The generalized Fermat equation" (PDF). Staff.science.uu.nl. Retrieved 2014-03-06.
- Poonen, Bjorn; Schaefer, Edward F.; Stoll, Michael (2005). "Twists of X(7) and primitive solutions to x2 + y3 = z7". Duke Mathematical Journal. 137: 103–158. arXiv: . doi:10.1215/S0012-7094-07-13714-1.
- Brown, David (2009). "Primitive Integral Solutions to x2 + y3 = z10". arXiv: [math.NT].
- "The Diophantine Equation" (PDF). Math.wisc.edu. Retrieved 2014-03-06.
- Siksek, Samir; Stoll, Michael (2013). "The Generalised Fermat Equation x2 + y3 = z15". Archiv der Mathematik. 102: 411–421. arXiv: [math.NT]. doi:10.1007/s00013-014-0639-z.
- Dahmen, Sander R.; Siksek, Samir (2013). "Perfect powers expressible as sums of two fifth or seventh powers". arXiv: [math.NT].
- Darmon, H.; Granville, A. (1995). "On the equations zm = F(x, y) and Axp + Byq = Czr". Bulletin of the London Mathematical Society. 27: 513–43. doi:10.1112/blms/27.6.513.
- Norvig, Peter. "Beal's Conjecture: A Search for Counterexamples". Norvig.com. Retrieved 2014-03-06.
- Walter Hickey (5 June 2013). "If You Can Solve This Math Problem, Then A Texas Banker Will Give You $1 Million". Business Insider. Retrieved 8 July 2016.
- "$1 Million Math Problem: Banker D. Andrew Beal Offers Award To Crack Conjecture Unsolved For 30 Years". International Science Times. 5 June 2013. Retrieved 8 July 2016.
- "Neglected Gaussians". Mathpuzzle.com. Retrieved 2014-03-06. | <urn:uuid:f33ce089-7913-4e96-97e3-0c908f1d5aef> | 2.625 | 2,778 | Knowledge Article | Science & Tech. | 74.9317 | 95,575,092 |
WCF binding is a set of binding elements, and each element specifies how the service and client will communicate with each other. Each binding must have at least one transport element and one message encoding element.
Different types of WCF bindings
WCF has a number of built-in bindings which are designed to fulfill specific needs. You can also define your own custom binding in WCF to fulfill your needs. All built-in bindings are defined in the System.ServiceModel namespace. Here is the list of 10 built-in bindings in WCF which are commonly used:
Basic binding
This binding is provided by the BasicHttpBinding class. It is designed to expose a WCF service as an ASMX web service, so that old clients (which are still using an ASMX web service) can consume the new service. By default, it uses the Http protocol for transport and encodes the message in UTF-8 text format. You can also use Https with this binding.
Web binding
This binding is provided by the WebHttpBinding class. It is designed to expose WCF services as Http requests using HTTP-GET and HTTP-POST. It is used with REST-based services which may give output in XML or JSON format. This is widely used with social networks for implementing syndication feeds.
Web Service (WS) binding
This binding is provided by the WSHttpBinding class. It is similar to Basic binding and uses the Http or Https protocols for transport. But it is designed to offer various WS-* specifications such as WS-ReliableMessaging, WS-Transactions, WS-Security and so on, which are not supported by Basic binding.
wsHttpBinding = basicHttpBinding + WS-* specifications
WS Dual binding
This binding is provided by the WSDualHttpBinding class. It is similar to wsHttpBinding except that it supports bi-directional communication, meaning both clients and services can send and receive messages.
TCP binding
This binding is provided by the NetTcpBinding class. It uses the TCP protocol for communication between two machines within an intranet (i.e., the same network). It encodes the message in binary format. This is a faster and more reliable binding as compared to the Http protocol bindings. It is only used when communication is WCF-to-WCF, meaning both client and service should have WCF.
IPC (Named Pipe) binding
This binding is provided by the NetNamedPipeBinding class. It uses named pipes for communication between two services on the same machine. This is the most secure and fastest binding among all the bindings.
MSMQ binding
This binding is provided by the NetMsmqBinding class. It uses MSMQ for transport and offers support for disconnected message queuing. It provides a solution for disconnected scenarios in which the service processes the message at a different time than when the client sends it.
Federated WS binding
This binding is provided by the WSFederationHttpBinding class. It is a specialized form of WS binding and provides support for federated security.
Peer Network binding
This binding is provided by the NetPeerTcpBinding class. It uses the TCP protocol but uses peer networking as transport. In this networking, each machine (node) acts as both a client and a server to the other nodes. This is used in file sharing systems like torrents.
MSMQ Integration binding
This binding is provided by the MsmqIntegrationBinding class. It offers support for communicating with existing systems that communicate via MSMQ.
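As a quick illustration, a service can expose several of these bindings side by side through endpoints in its configuration file. The sketch below is a minimal app.config fragment (the service name, contract name, addresses and ports are hypothetical examples, not from any real project) exposing the same service over basicHttpBinding for interoperability and netTcpBinding for WCF-to-WCF speed:

```xml
<!-- Minimal sketch with hypothetical names; a real config would also define
     behaviors, base addresses, and possibly a metadata (mex) endpoint. -->
<system.serviceModel>
  <services>
    <service name="MyApp.OrderService">
      <!-- Interoperable endpoint for non-WCF (e.g. old ASMX-style) clients -->
      <endpoint address="http://localhost:8080/OrderService"
                binding="basicHttpBinding"
                contract="MyApp.IOrderService" />
      <!-- Binary TCP endpoint for WCF-to-WCF communication within the intranet -->
      <endpoint address="net.tcp://localhost:8090/OrderService"
                binding="netTcpBinding"
                contract="MyApp.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

A client then picks whichever endpoint suits it; an old ASMX-style consumer uses the Http endpoint, while another WCF application on the same network prefers the faster TCP endpoint.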
Choosing an Appropriate WCF binding
Depending upon your requirements, you can choose a binding for your service as shown in the diagram below:
WCF bindings comparison
What do you think?
I hope you will enjoy these tips while programming with WCF. I would like to have feedback from my blog readers. Your valuable feedback, questions, or comments about this article are always welcome.
DESCRIPTION: The principle of superposition builds on the principle of original horizontality. Radioactive decay: an unstable isotope emits radiation from its atomic nucleus. Thermoluminescence dating: a dating method that uses heat to measure the amount of radioactivity accumulated by a rock or stone tool since it was last heated.
Dating Rocks and Fossils Using Geologic Methods | Learn Science at Scitable
Daughter isotope: the isotope that forms as a result of radioactive decay. Optically stimulated luminescence: a dating method that uses light to measure the amount of radioactivity accumulated by crystals in sand grains or bones since the time they were buried. Mountains have been built and eroded, continents and oceans have moved great distances, and the Earth has fluctuated from being extremely cold and almost completely covered with ice to being very warm and ice-free. Fossil assemblage B includes the index fossils the orange ammonite and the blue ammonite, meaning that assemblage B must have been deposited during the interval of time indicated by the red box. Thus, radiocarbon dating is only useful for measuring things that were formed in the relatively recent geologic past. If there is three times less 14C than 14N in the bone, two half-lives have passed and the sample is 11,460 years old. Remanent magnetization in ancient rocks records the orientation of the earth's magnetic field and can be used to determine the location of the magnetic poles and the latitude of the rocks at the time the rocks were formed.
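The half-life arithmetic in the paragraph above can be sketched in a few lines. This is only an illustration, assuming the commonly used 14C half-life of 5,730 years; the age follows directly from the fraction of the original 14C that remains:

```python
import math

C14_HALF_LIFE = 5730.0  # years; the commonly used half-life value

def radiocarbon_age(fraction_remaining):
    """Age in years from the fraction of the original 14C still present."""
    # Each half-life halves the remaining 14C: fraction = (1/2) ** (age / half_life)
    return -C14_HALF_LIFE * math.log2(fraction_remaining)

# Two half-lives have passed when only 1/4 of the 14C remains,
# i.e. when there is three times as much daughter 14N as parent 14C.
print(radiocarbon_age(0.25))  # -> 11460.0
```

Because the decay is exponential, the method saturates once the remaining fraction becomes too small to measure, which is why radiocarbon dating only works for relatively young samples.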
Luminescence Dating: Basics, Methods and Applications
Accordingly, the oldest rocks in a sequence are at the bottom and the youngest rocks are at the top. The atomic nucleus in 14C is unstable, making the isotope radioactive.
- Using radiometric dates and measurements of the ancient magnetic polarity in volcanic and sedimentary rocks (termed paleomagnetism), geologists have been able to determine precisely when magnetic reversals occurred in the past. However, once rocks or fossils become much older than that, all of the "traps" in the crystal structures become full and no more electrons can accumulate, even if they are dislodged.
Geologists can measure the paleomagnetism of rocks at a site to reveal its record of ancient magnetic reversals.
- Luminescence dating: basics, methods and applications. Eiszeitalter und Gegenwart (Quaternary Science Journal) 57/1–2, p. 95 ff., Hannover.
Unlike relative dating methods, absolute dating methods provide chronological estimates of the age of certain geological materials associated with fossils, and even direct age measurements of the fossil material itself. The principle of faunal succession allows scientists to use the fossils to understand the relative age of rocks and fossils. For example, in the rocks exposed in the walls of the Grand Canyon (Figure 1) there are many horizontal layers, which are called strata. As these changes have occurred, organisms have evolved, and remnants of some have been preserved as fossils.
At a recent Kavli Futures Symposium, 19 experts from a diverse range of fields discussed the promise of using the lab to understand and exploit the evolution of organisms -- progress that may one day lead to new vaccines or other biotechnology products.
Now, three of the participants have joined in a discussion of the issues and topics raised during the meeting: Michael Brenner, Professor of Applied Mathematics and Applied Physics at the School of Engineering and Applied Sciences and member of the Kavli Institute for Bionano Science and Technology, Harvard University; Stephen Quake, Professor of Applied Physics and Bioengineering at Stanford University and Investigator, Howard Hughes Medical Institute; and Mark Martindale, Director of the Kewalo Marine Laboratory, University of Hawaii.
In the dialogue, the researchers discuss how investigators in several different scientific fields are now exploring how organisms evolve new functions in a much more detailed way. They also discuss how new experimental methods and tools are expected to greatly aid those explorations by enabling the quick, inexpensive and complex analyses that are needed for laboratory investigations of evolution.
The hope is that the synergy of all these fields can one day lead to a better understanding of how complex new structures, such as the eye or even the entire nervous system, evolved and enabled new functions. These findings are likely to further advances in directed evolution, with such practical applications as improved vaccines or bacteria engineered to produce oil from sugar, or to carry out other useful new functions. "All of the same principles and concepts that apply to studying evolution over the hundred-million-year time scale should also describe what goes on in your immune system over the course of much briefer periods -- years, months, weeks," said Quake. "I'm very excited about trying to take general concepts and apply them to areas that haven't previously been explored as evolutionary models."
Brenner concurred on this point. "Every method people have for thinking about how to combat disease or anything else is developed under an intellectual paradigm. If one could invent new concepts for how evolutionary change occurs, then they could really change the way you think about those problems."
Read story: http://www.kavlifoundation.org/kavli-futures-symposium-evolution-new-functions-main
James Cohen | EurekAlert!
posted by jake
A class of 11 students taking an exam has a power output per student of 122 W. Assume that the initial temperature of the room is 18.8oC and that its dimensions are 6.40 m by 14.5 m by 3.50 m. What is the temperature (in oC, do not enter units) of the room at the end of 54.0 min if all the heat remains in the air in the room and none is added by an outside source? The specific heat of air is 840 J/kg*oC, and its density is about 1.25E-3 g/cm3
** I keep getting different answers ranging from 30-40 and nothing is correct! HelP!
Now change the density to kg/m^3 (1.25E-3 g/cm^3 = 1.25 kg/m^3).
Tf = Ti + Q/(m*c), which works out to about 12.7 C higher, i.e. roughly 31.5 C. Check my calcs
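For reference, here is the arithmetic carried through with the given values (a sketch assuming, as the problem states, that all the heat stays in the room air and none escapes or is added):

```python
# Heat released by the students warms the air in the room: Q = P*t, dT = Q/(m*c)
power = 11 * 122.0            # total power output of the class, W
t = 54.0 * 60.0               # exam duration, s
volume = 6.40 * 14.5 * 3.50   # room volume, m^3
rho = 1.25                    # air density, kg/m^3 (= 1.25E-3 g/cm^3)
c = 840.0                     # specific heat of air, J/(kg*C)

Q = power * t                 # total heat added to the air, J
mass = rho * volume           # mass of air in the room, kg
dT = Q / (mass * c)           # temperature rise, C
T_final = 18.8 + dT

print(round(dT, 2), round(T_final, 2))  # -> 12.75 31.55
```

If the grader still rejects a value near 31.5 C, double-check the input numbers against the assigned problem, since these constants are taken verbatim from the post above.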
A smartphone application helps Ducks Unlimited Canada tackle invasive Spartina plants.
The fight to rescue habitat from a beautiful and destructive invader.
Researchers shut out an invasive fish species to help restore Delta Marsh.
One strategy birds use to evade collisions.
It’s hard to stump a DUC biologist when it comes to identifying waterfowl, but a group of Inuvialuit came close.
Radar technology informs the weather forecast…and plays an important role in conservation
Aerial surveys today could influence what Canada's boreal forest looks like in the future
Study to help quantify how the boreal helps fight climate change
Rescue Our Wetlands contest winner gets caught up in the research with DUC’s science team
Win an all-expenses paid trip to Manitoba’s historic Delta Marsh this summer, where you’ll spend three days as part of a DUC research team
Research projects will show what’s entering Prairie watersheds, and the role of nature in protecting water quality | <urn:uuid:b28a9a1f-471d-499e-a160-411e92cfcb60> | 2.671875 | 211 | Content Listing | Science & Tech. | 29.293444 | 95,575,136 |
Visual Orbits of A- and F-stars in Spectroscopic Binaries
The fundamental parameters of eclipsing binary stars are used to test stellar evolutionary models by comparing the observed and predicted stellar parameters, such as mass, radius and temperature. However, most eclipsing binaries have short orbital periods, which implies that the stars probably interacted in their early phases and are currently subject to tidal forces. So, it is not clear how applicable the parameters of close binaries are to evolutionary models of single stars. The solution to this problem is to expand binary star studies to longer period systems that are widely separated and not interacting. This requires the determination of a visual orbit to estimate the orbital inclination, which is then combined with the spectroscopic elements to find masses. For my thesis project, we plan to determine the visual and spectroscopic orbits for 14 double-lined spectroscopic binaries by combining echelle spectroscopy from APO with long baseline interferometry from CHARA.
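To illustrate that last step: the spectroscopic elements (period P, eccentricity e, and velocity semi-amplitudes K1, K2) of a double-lined binary yield only the minimum masses M sin³i; the inclination i from the visual (interferometric) orbit then converts these to true masses. The sketch below uses the standard double-lined relation with the usual numerical constant (≈1.0361×10⁻⁷ for P in days, K in km/s, masses in solar units); the example system at the end is entirely made up:

```python
import math

def min_masses(P_days, K1, K2, e):
    """M1 sin^3 i and M2 sin^3 i (solar masses) for a double-lined binary.

    P_days: orbital period in days; K1, K2: velocity semi-amplitudes in km/s.
    """
    A = 1.036149e-7 * (1.0 - e**2) ** 1.5 * (K1 + K2) ** 2 * P_days
    return A * K2, A * K1  # the more massive star has the smaller semi-amplitude

def true_masses(P_days, K1, K2, e, incl_deg):
    """Convert minimum masses to true masses using the visual-orbit inclination."""
    m1s, m2s = min_masses(P_days, K1, K2, e)
    s3 = math.sin(math.radians(incl_deg)) ** 3
    return m1s / s3, m2s / s3

# Hypothetical system: P = 10 d, K1 = 50 km/s, K2 = 100 km/s, e = 0, i = 60 deg
m1, m2 = true_masses(10.0, 50.0, 100.0, 0.0, 60.0)
```

Note that the mass ratio M1/M2 = K2/K1 comes from spectroscopy alone; the interferometric inclination is what pins down the individual masses, which is why combining CHARA visual orbits with echelle radial velocities is so powerful.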
Image Credit: the CHARA array
A Photometric, Spectroscopic and Apsidal Motion Analysis of BW Aqr
Eclipsing binaries are important tools for studying stellar evolution and stellar interiors. Their accurate fundamental parameters are used to test evolutionary models, and systems showing apsidal motion can also be used to test the model's internal structure predictions. For this purpose, we present a photometric and spectroscopic analysis of the eclipsing binary BW Aquarii, an evolved F-type binary with slow apsidal motion. We model the K2 C3 light curve using the Eclipsing Light Curve code to determine several orbital and stellar parameters, as well as measure the eclipse times to determine updated apsidal motion parameters for the system. Furthermore, we obtain high-resolution spectra of BW Aqr using the CHIRON echelle spectrograph on the CTIO 1.5m for radial velocity analysis. We then reconstruct the spectra of each component using Doppler tomography in order to determine the atmospheric parameters. We find that both components of BW Aqr are late F-type stars with M1 = 1.365 +/- 0.008 Msun, M2 = 1.483 +/- 0.009 Msun, and R1 = 1.782 +/- 0.021 Rsun, R2 = 2.053 +/- 0.020 Rsun. We then compare these results to the predictions of several stellar evolution models, finding that the models cannot reproduce the observed properties of both components at the same age.
Accepted for publication in AJ.
K2 C0 Eclipsing Binary near EPIC 202062176
We completed a photometric and light curve analysis of an eccentric eclipsing
binary in the K2 Campaign 0 feld that resides in Sh 2-252E, a
young star cluster embedded in an H II region. Because there dozens of stars in this embedded
cluster fall within the Kepler aperture, we obtained spectra of the three brightest stars in the
crowded aperture to identify which is the binary system. We found that none of these stars are
components of the eclipsing binary system, which must
be one of the fainter nearby stars. However, these bright cluster members
all have remarkable spectra: Sh 2-252a (EPIC 202062176) is a B0.5 V star with
razor sharp absorption lines, Sh 2-252b is a Herbig A0 star with disk-like emission
lines, and Sh 2-252c is a pre-main sequence star with very red color.
Lester, K.V., Gies, D.R., & Guo, Z. 2016, AJ, 152, 194
All of my publications can be found HERE on NASA/ADS. | <urn:uuid:3bac622a-0ae5-4324-9ab4-f70b3a2532b0> | 2.765625 | 794 | Academic Writing | Science & Tech. | 46.325612 | 95,575,175 |
- Open Access
They fought the law and the law won
© BioMed Central Ltd 2007
Published: 2 November 2007
Australia used to have a cane beetle problem. The cane beetle was slowly destroying the country's sugar cane crops, and there seemed to be no way to get rid of it. Then, in 1935, someone had the bright idea to import a box of cane toads from the Hawaiian Islands, where the large frogs (which were 25 cm long and up to 4 kg in weight) supposedly kept the pest in check. So, 102 cane toads were delivered to Gordonvale, just south of Cairns, where after a few rounds of captive breeding to increase their numbers, they were released into the sugar cane fields. Then the fun began. It turned out that cane toads can't jump very high so they did not eat the cane beetles, which tended to reside on the upper stalks of the cane plants. But they were able to eat just about anything else: dog food, mice, the insects that native Australian frogs eat, the native Australian frogs themselves, and so on. They bred like flies: a pair of cane toads can lay 33,000 eggs per spawning. They proved resistant to herbicides that would normally kill frogs and tadpoles. And they are deadly poisonous; so they had no natural predators. (Australian museums have exhibits of snakes that were killed by toad toxin so fast that the toads are still in their mouths.) The cane toad has turned out to be one of Australia's worst environmental disasters. Since 1935, it has spread across most of Queensland, the entire Northern Territory, and down the coast of New South Wales. So now Australia has a cane toad problem. Oh yes, and it still has a cane beetle problem.
The cane toad is one of the more spectacular examples of the only scientific law for which there is no exception: The Law of Unintended Consequences. Loosely stated, the Law says that all human actions can produce unforeseen effects, and these are often more momentous, and frequently damaging, than the original problem those actions were meant to solve. It has been expressed colloquially in many forms; my favorite is "when you are up to your ass in alligators, it is difficult to remember that your initial objective was to drain the swamp." The late, great sociologist Robert K Merton was fascinated by it; in his book On Social Structure and Science (The University of Chicago Press, 1996), he listed five causes of the law:
1. Ignorance (it is impossible to anticipate everything, thereby leading to incomplete analysis).
2. Error (incorrect analysis of the problem, or following habits that worked in the past but may not apply to the current situation).
3. Immediate interest, which may override long-term interests.
4. Basic values may require or prohibit certain actions, even if the long-term result might be unfavorable (these long-term consequences may eventually cause changes in basic values).
5. Self-defeating prophecy (fear of some consequence drives people to find solutions before the problem occurs; thus, the non-occurrence of the problem is unanticipated).
He left out the most significant one besides ignorance: arrogance, our persistent belief that we are smart enough to plan for all possible consequences.
The Law of Unintended Consequences shows up in all aspects of human endeavor. A familiar example would be the attempt by moral reformers in the 1920s to curb the evil of alcohol consumption by banning all such beverages in the United States ('prohibition'), which neither curbed excessive drinking nor increased public morality. What it did increase, of course, was crime: organized crime was born in the 1920s to cash in on the lucrative market for illegal drink. The law also abounds in time of war - look at how the disastrous invasion of Iraq, which was intended to improve the security of Western nations, has actually turned that land into a breeding ground for terrorists. But where it really seems to come into play is whenever mankind monkeys around with the environment or the ecosystem. Australia's cane toad story is by no means the only example. In the US, gypsy moth caterpillars were imported into New England by one Leopold Trouvelot in the hope of starting a new silk industry. That idea failed, but some of the moths escaped, and over the past 150 years their periodic outbreaks have led to the deforestation of millions of acres of trees and shrubs.
You'd think that after a couple of centuries of disasters like this, we would know enough not to tamper with the natural order. But hubris has no sense of history. The power of genomics has led to numerous bioengineering projects to improve food crop yields, increase disease and pest resistance in many plants, and express foreign proteins in farm animals and tobacco. The altered organisms have been carefully confined for the most part, but I'm sure that's what Trouvelot would have said about his gypsy moths. I'm not fond of quoting Stephen Spielberg - I think he's an antiscience opportunist philosophically - but he's right when he has the character Ian Malcolm state, in the movie Jurassic Park, that "if there is one thing the history of evolution has taught us, it's that life will not be contained. Life breaks free. It expands to new territories, it crashes through barriers, painfully, maybe even dangerously, but, uh, well, there it is!" (Now, don't get me wrong; I'm not opposed to genetic engineering of crops or to genetically modified foods. I think both can have important benefits, especially in countries where agriculture is difficult and famine is frequent. But given the difficulty of containment, I would argue that it behooves us to do everything possible to perform such activities with as much foresight as possible.)
So I hope you will understand why the new science of geo-engineering gives me the willies. Geo-engineering doesn't try to alter a few corn plants; it aims to tinker with the entire planet. It was born out of a desire to do something about global warming. You're going to be hearing a lot more about it, I'm afraid, because it could mean a lot of money for some companies and it is very appealing to conservatives, who have always had an exaggerated faith in our ability to manage the environment. Geo-engineering involves using deliberate human acts, based on novel technologies, to slow down or reverse the climate change being driven by technology-produced greenhouse gasses. Unlike conservation efforts, which are motivated by a desire to roll back the damaging effects of human activities, geo-engineering is based on the notion that ultimately we can actively manipulate the planet to have any climate pattern we want. Some of the more astounding ideas that geo-engineers have put forward lately include fertilizing the sea with iron particles to create explosions of plankton, which take CO2 out of the atmosphere; erecting giant mirrors above the earth to reflect the sun's energy; and dropping clouds of sulfur particles from high-altitude balloons to do the same. You may laugh, but this is no laughing matter - people are really serious about doing these things. A scientific meeting on iron fertilization was held at the end of September at the Woods Hole Oceanographic Institute, and while no one there could agree on the likely consequences of such intervention, no one was laughing about doing it, either. It isn't clear that a company or private organization that wanted to try this on a massive scale could even be prevented from doing so - the maritime laws don't really cover such things and there's an awful lot of water to patrol. 
It might well be profitable, too, since a company that seeded the production of lots of plankton could, in theory, sell carbon sequestration credits to other, polluting companies.
But the Law of Unintended Consequences makes any such efforts frightening, to say the least. Some of the long-term consequences of a massive, engineered plankton bloom might actually be an increase in global warming, since the dead plankton may give off nitrous oxide, which is an even worse greenhouse gas than CO2. Iron particles also will react with oxygen dissolved in the seawater; the resulting oxygen depletion may kill off countless fish, although no one knows for sure. The problem is that a number of people are getting very serious about trying this and other massive environmental engineering projects, and it's a sure bet that genome biologists are going to be asked to join such efforts (creating, for example, plankton that are more efficient in utilizing iron, or in absorbing carbon dioxide).
I think we should resist such siren calls, and indeed, campaign for a moratorium on all such geo-engineering projects. Some scientists are already calling for that, until an assessment of the likely consequences can be produced. But I would argue that there is no way we can ever assess all of the likely consequences; that the history of environmental tinkering should convince us that the probability of disaster is so high as to require that we prohibit this sort of nonsense forever. I would feel differently if there were no Law of Unintended Consequences. But Australia used to have a cane beetle problem, and now it has a cane toad problem and a cane beetle problem because there is such a law, and that law constantly winks at us, from those dark corners where our ignorance and our arrogance meet. | <urn:uuid:aa15c6eb-ffa4-433e-a8cf-e5068546dfaf> | 3.296875 | 1,932 | Truncated | Science & Tech. | 41.625868 | 95,575,179 |
Primary production data from the south-eastern Weddell Sea
- 61 Downloads
Phytoplankton production for three size classes (<20 μm, 20–100 μm, >100 μm), total primary production and qualitative composition of phytoplankton populations were recorded from 18 stations in the south-eastern Weddell Sea in February/March 1983. Total primary production ranged between 80 and 1670 mg C m-2 d-1 with an average of 670 mg C m-2 d-1, nearly 70% of which was contributed by the <20 μm size fraction (usually pennate and/or centric diatoms). Production of phytoplankton was in the higher range of values reported by other authors for the same region. Variations in primary production could not be attributed to composition of populations, ambient light levels or concentrations of macronutrients (N, P, Si). Phytoplankton populations had a higher diversity in the deeper parts of the Weddell Sea and coincided with different oceanographic situations. Three zones (along the shelf-ice edge from Atka Bay to Halley Bay, west of Halley Bay and off the Filchner/Rønne Ice Shelf) with different communities could be clearly distinguished.
KeywordsPhytoplankton Size Classis Size Fraction Production Data Light Level
Unable to display preview. Download preview PDF.
- Bodungen B von, Tilzer MM, Zeitzschel B (1982) Phytoplankton growth dynamics during spring bloom in Antarctic waters. Joint Oceanogr Assoc, Halifax Canada, Abstracts of invited papers, pp 59Google Scholar
- Brökel K von (1981) The importance of nanoplankton within the pelagic Antarctic ecosystem. Kiel Meeresforsch, Sonderh 5:61–67Google Scholar
- El-Sayed S (1971) Observations on phytoplankton bloom in the Weddell Sea. Antarct Res Ser 17:301–312Google Scholar
- El-Sayed S, Mandelli E (1965) Primary production and standing crop of phytoplankton in the Weddell Sea and Drake Passage. Antarct Res Ser 5:87–106Google Scholar
- El-Sayed S, Taguchi S (1981) Primary production and standing crop of phytoplankton along the ice-edge in the Weddell Sea. Deep-Sea Res 28:1017–1032Google Scholar
- Gammelsrød T, Slotsvik N (1981) Hydrographic and current measurements in the Southern Weddell Sea 1979/80. Polarforsch 51:101–111Google Scholar
- Gill AE (1973) Circulation and bottom water production in the Weddell Sea. Deep-Sea Res 20:111–140Google Scholar
- Holm-Hansen O, El-Sayed S, Franceschini GA, Cuhel RL (1977) Primary production and factors controlling phytoplankton growth in the Southern Ocean. In: Llano GA (ed) Adaptations within Antarctic ecosystems. 3. SCAR Symp, Antarct Biol Wash 1974. Gulf Publ Comp, Houston, pp 11–50Google Scholar
- Steemann-Nielsen E (1952) The use of radio-active carbon (14C) for measuring organic production in the sea. J Cons Int Expl Mer 19:309–328Google Scholar | <urn:uuid:48d918cd-1ebb-428e-b660-9c0c96b9b9b0> | 2.828125 | 735 | Academic Writing | Science & Tech. | 52.782652 | 95,575,180 |
Volume 17, Number 1, Jan 1995, p.23
Reprinted with permission from The Torch, Dec. 1994, p.3. The Torch is published monthly by the Office of Public Affairs of the Smithsonian Institution for distribution to Smithsonian employees.
For decades, museums have kept their thermostats at a steady 21 degrees Celsius (70 degrees Fahrenheit ), with a relative humidity of 50 percent. Now, a team of Conservation Analytical Laboratory researchers has found that most museum objects can safely tolerate a wider range of both temperature and relative humidity.
In fact, according to the teams research, there can be as much as plus or minus 15 percent fluctuation in relative humidity and as much as 10C (50 F) difference in temperature. Within that range the scientists say, any object -- whether it's Leonardo daVinci's painting "Mona Lisa" or an installation of Jeff Koons' vacuum cleaners -- may be safely stored or placed on exhibit.
The researchers' insights could save museums, archives and libraries millions of dollars in construction and energy costs necessary to maintain rigid environmental controls.
The CAL researchers -- Marion Mecklenburg, Charles Tumosa, David Erhardt, and Mark McCormick-Goodhart -- reached their conclusions during a series of investigations of the chemical, physical, and mechanical properties of materials common to a wide variety of museum objects. The objects ranged from natural history specimens and archaeological artifacts, for example, to 19th century landscape paintings and photographic prints and film.
In the past year, the researchers have presented their work in a variety of papers and presentations for organizations such as the Materials Research Society, the American Chemical Society, and, most recently, at a meeting in Ottawa, Canada, of the International Institute for Conservation of Historic and Artistic Work.
"As scientists, we don't work from the idea that each object in a museum is unique," Mecklenburg says. "Rather, we start by looking at the whole picture -- examining and understanding all of the materials found in the vast majority of museum objects."
Through informal discussions of their work, the researchers say, came the understanding that materials such as wood, cellulose, various polymer coating, fibers, minerals, pigments and the like share an overlapping range of tolerance to temperature and relative humidity.
"Up to 50 percent of construction costs for new museums and archival storage facilities may go toward highly overbuilt heating and cooling systems," Mecklenburg says. "Our research shows that such specialized systems are unnecessary. Most museums can adequately protect their collections with commercially available technology, such as the heating and cooling systems used in grocery or retail stores."
Moreover, Mecklenburg says, specialized heating and cooling systems that keep temperature and humidity stable can be expensive to operate. Seasonal variations in outdoor temperature and relative humidity, particularly in temperate climates, he says, can mean monthly energy costs that soar to tens of thousands of dollars in order to maintain strict environmental controls.
For older or historic buildings, Mecklenburg adds, making use of conventional equipment avoids the structural damage that might result from installing more elaborate heating and cooling systems.
The materials research at CAL that has let to the new insights about temperature and relative humidity involves laboratory tests of the properties (physical, mechanical, and chemical) of materials commonly found in museums. The overall goal of the CAL researchers is to apply the best scientific knowledge about various materials to the treatment and conservation of cultural, historic, artistic, and scientific artifacts.
Chemist Tumosa has measured the effects of changes in relative humidity on acrylic paints. For example, he has cooled and dried samples of acrylic paint on canvas to document responses to lowered temperature and humidity (if temperature and humidity are too low, many paints and coatings become brittle and crack). Tumosa also considers changes on stretched canvas in response to changing temperature and humidity, which might cause paint to crack and fall off.
Other materials -- wood, photographic emulsions, paper -- are subjected to high humidity, or they undergo accelerated aging through exposure to many potentially damaging environmental factors, including heat, humidity, light and various pollutants.
For example, McCormick-Goodhart has tested the effects of temperature and relative humidity on photographic prints and film, especially motion picture film. Results show that temperatures below freezing provide the best storage for maintaining the film (particularly color film) and that commercially available freezers are adequate, despite fluctuations in temperature that might occur with such off-the-shelf equipment. Precautions must be taken to guard film against high humidity, he says. For motion picture film, McCormick-Goodhart places each reel inside a zip-lock freezer bag, which is encased in a cardboard box.
In general, the CAL researchers say, for most materials the low end of the temperature / relative humidity range prevents biological damage from microbial growth and minimizes chemical reactions that occur naturally within objects over time. At slightly higher values for temperature and relative humidity, they say, physical damage is minimized.
"This work is capable of defining the tolerance limits for temperature and relative humidity of large classes of materials represented in museum collections," McCormick-Goodhart says. "It means we don't have to study every single object. That's the breakthrough."
Timestamp: Thursday, 11-Dec-2008 13:02:34 PST
Retrieved: Wednesday, 18-Jul-2018 22:27:41 GMT | <urn:uuid:468b0760-92fb-47cd-893b-915aaf692777> | 2.859375 | 1,103 | Knowledge Article | Science & Tech. | 21.220341 | 95,575,188 |
I need to compile a list of renewable and three non-renewable energy sources. These energy sources should be located in the region of Southeast Asia. For each source of energy I need at least four advantages and four disadvantages of each renewable and non-renewable energy source.
I need to write a paper but I need help gathering some information can someone help me?
(If there isn't enough energy sources located in southeast Asia then choose another region of the world but all must be in the same region.)© BrainMass Inc. brainmass.com July 19, 2018, 7:06 pm ad1c9bdddf
You have a lot of options for this assignment. Southeast Asia is a large area, and I can't think of any form of energy that wouldn't be available in some part of it. Of course if the whole region were entirely dependent on a certain source, there probably wouldn't be enough, and imports would be necessary. Here are some renewable options to consider:
Pros - Cost effective in terms of real estate with increasing population and land cultivation; can even be built off-shore
- Farming and grazing can still take place on land occupied by wind turbines.
- Useful in remote locations where electricity is not available
- No chemical byproducts
Cons -Difficult to create enough energy to sustain civilization
- Can negatively affect bird migration patterns and pose a danger directly to the birds
- Useful only in locations with regular wind
- Existing infrastructure does not support wind power
-Does not ...
Renewable energy sources in Southeast Asia | <urn:uuid:44031822-526a-439a-b1eb-9deb3f36e3c5> | 2.609375 | 322 | Q&A Forum | Science & Tech. | 41.883488 | 95,575,210 |
Truck of milk
How many hectoliters of ,,the box" milk fit in the truck, ake cargo size area is 2.8 m x 3 m x 17 m? A liter of milk in a box measuring 12 cm x 7 cm x 20 cm.
Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...):
Showing 0 comments:
Be the first to comment!
To solve this example are needed these knowledge from mathematics:
Next similar examples:
How many hectoliters of water fits into cuboid tank with dimensions of a = 3.5 m b = 2.5 m c = 1.4 m?
- Water reservoir
The water tank has a cuboid with edges a= 1 m, b=2 m , c = 1 m. Calculate how many centimeters of water level falls, if we fill fifteen 12 liters cans.
- Concrete box
The concrete box with walls thick 5 cm has the following external dimensions: length 1.4 m, width 38 cm and height 42 cm. How many liters of soil can fit if I fill it to the brim?
- Water lake
The length of the lake water is 8 meters width 7 meters and depth 120 centimeters. How many liters of water can fit into the water lake?
- Circular pool
The 3.6-meter pool has a depth of 90 cm. How many liters of water is in the pool?
- The wall
We have to build a cuboid wall with dimensions base 30 cm and 45 cm and height 3.25 meters. Calculate how many we need bricks if we spend 400 pieces of bricks to 1 m3 of wall?
To cuboid tank whose bottom has dimensions of 9 m and 15 m were flow 1080 hectoliters of water. This was filled 40% of the tank volume. Calculate the depth of the tank.
Aquarium is rectangular box with square base containing 85 liters of water. Length of base edge is 54 cm. To what height the water level goes?
- Pool in litres
Pool has a width of 3.5 m length of 6 m and a height 1.60 meters. Calculate pool volume in liters.
- Pool 3
How long will fill pool cuboid shape (8m 6m 1.5m) when flows 15 liters/s?
- Two cuboids
Find the volume of cuboidal box whose one edge is: a) 1.4m and b) 2.1dm
The tank bottom has dimensions of 1.5 m and 3 2/6 m. The tank is 459.1 hl of water. How high is the water surface?
- Glass door
What is the weight of glass door panel 5 mm thick height 2.1 meters and a width of 65 cm and 1 cubic dm of glass weighs 2.5 kg?
- Oak cuboid
Oak timber is rectangular shaped with dimensions of 2m, 30 cm and 15 cm. It weight is 70 kg. Calculate the weight 1 dm³ of timber.
Excavation for the base of the cottage 4.5 m x 3.24 m x 60 cm. The excavated soil will increase its volume by one-quarter. Calculate the volume of excavated soil.
- Water pool
What water level is in the pool shaped cuboid with bottom dimensions of 25 m and 10 meters, when is 3750hl water in the pool.
- Fire tank
How deep is the fire tank with the dimensions of the bottom 7m and 12m, when filled with 420 m3 of water? | <urn:uuid:d341075a-c252-4cd2-ba67-f832f92c976e> | 2.828125 | 759 | Tutorial | Science & Tech. | 100.222933 | 95,575,213 |
The Sierra Nevada is such a high and rocky mountain range that one might wonder how trees like Jeffrey pines and giant sequoias are able to grow. Dust collected in Yosemite National Park contains nutrients such as phosphorus, calcium, magnesium, and potassium, which are not typically found in areas where there is a lot of granite rock. In work published last year, researchers reported that phosphorous and other nutrients travel to the Sierra Nevada via dust carried in the jet stream.
A team from UC Riverside and UC Merced conducted a study in Yosemite Valley to establish where the dust and minerals originated. After analyzing the dust they concluded that the (more…) | <urn:uuid:e889eb03-5f5f-4a0d-8430-a10deef0a079> | 3.53125 | 129 | Truncated | Science & Tech. | 32.631071 | 95,575,214 |
anybody know how to see this???
(23,12,7) binary code???
afraid not -- have no clue what you are asking.
Representing integers as a binary number..? if not, would you be a bit more specific?
Here's one way, using a bitset
//bitset with 8 bits
typedef std::bitset<8> bits_8;
std::cout << bits_8(27) << '\n'
<< bits_8(12) << '\n'
<< bits_8(7) << '\n';
sorry about that...
I just found out that the binary code (23,12, 7) is (n,k,d* = minimum distance).
I need to find the word error with the probability of bit error p = 0.01.
Is there a formula to calculate the word error?
Anybody knows any website that teaches me how to calculate the word error
or maybe someone knows the formula.
lot of help
do you mean this ? Nope, I haven't the slightest idea.
Apparently you have posted the same questions on other boards.
Error probabilities? Codes? Why are you asking this in a C++ forum.
Horrible, untagged, outdated code posted seemingly haphazard for no apparent reason.
vb.Net - Regular Expression Tester
Every now and then I find another use for a regular expression. For those not familiar with regular expressions, they can be as cryptic to ...
I'm trying to build a client and a server in the same program. For example, user 1 sends a packet of data to user 2, user 2 after receiving the ...
Could anyone please review my code https://github.com/LeoUpperThrower4/GeneticAlgorithm | <urn:uuid:b815759a-adfd-4a31-ba77-85d736724d84> | 2.953125 | 382 | Comment Section | Software Dev. | 73.54532 | 95,575,258 |
Figure: The animal cell, showing the components of a typical animal cell.
In cell biology, the cytoplasm is the material within a living cell, excluding the cell nucleus. It comprises cytosol (the gel-like substance enclosed within the cell membrane) and the organelles – the cell's internal sub-structures. All of the contents of the cells of prokaryotic organisms (such as bacteria, which lack a cell nucleus) are contained within the cytoplasm. Within the cells of eukaryotic organisms the contents of the cell nucleus are separated from the cytoplasm, and are then called the nucleoplasm. The cytoplasm is about 80% water and usually colorless.
The submicroscopic ground cell substance, or cytoplasmic matrix, that remains after the cell organelles and particles are excluded is called the groundplasm. It is the hyaloplasm of light microscopy: a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, plant plastids, lipid droplets, and vacuoles.
It is within the cytoplasm that most cellular activities occur, such as many metabolic pathways including glycolysis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex or the ectoplasm.
The physical properties of the cytoplasm have been contested in recent years. It remains uncertain how the varied components of the cytoplasm interact to allow movement of particles and organelles while maintaining the cell's structure. The flow of cytoplasmic components plays an important role in many cellular functions which depend on the permeability of the cytoplasm. An example of such a function is cell signaling, a process which depends on the manner in which signaling molecules are allowed to diffuse across the cell. While small signaling molecules like calcium ions are able to diffuse with ease, larger molecules and subcellular structures often require aid in moving through the cytoplasm. The irregular dynamics of such particles have given rise to various theories on the nature of the cytoplasm.
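The contrast between freely diffusing ions and slow-moving large complexes can be put in rough numbers using the Stokes–Einstein relation, D = k_BT/(6πηr), and the characteristic diffusion time t ≈ x²/2D. The sketch below is illustrative only: the effective cytoplasmic viscosity (here assumed to be about three times that of water) and the particle radii are assumed values, not measurements from the studies discussed in this article.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # body temperature, K
ETA = 0.003         # assumed effective cytoplasmic viscosity, Pa*s (illustrative)

def diffusion_coefficient(radius_m):
    """Stokes-Einstein diffusion coefficient for a sphere of given radius (m)."""
    return K_B * T / (6 * math.pi * ETA * radius_m)

def diffusion_time(distance_m, radius_m):
    """Characteristic 1-D diffusion time t ~ x^2 / (2 D)."""
    return distance_m ** 2 / (2 * diffusion_coefficient(radius_m))

# Compare a small ion (~0.1 nm radius) with a large subcellular complex
# (~10 nm radius) crossing a 10-micrometre cell: D scales as 1/r, so the
# hundredfold difference in radius gives a hundredfold difference in time.
for name, r in [("small ion", 1e-10), ("large complex", 1e-8)]:
    print(f"{name}: D = {diffusion_coefficient(r):.2e} m^2/s, "
          f"t = {diffusion_time(10e-6, r):.3f} s")
```

Under these assumptions the ion crosses the cell in well under a second while the large complex takes seconds, which is why larger structures often rely on active transport instead.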
As a sol-gel
There has long been evidence that the cytoplasm behaves like a sol-gel. It is thought that the component molecules and structures of the cytoplasm behave at times like a disordered colloidal solution (sol) and at other times like an integrated network, forming a solid mass (gel). This theory thus proposes that the cytoplasm exists in distinct fluid and solid phases depending on the level of interaction between cytoplasmic components, which may explain the differential dynamics of different particles observed moving through the cytoplasm.
As a glass
Recently it has been proposed that the cytoplasm behaves like a glass-forming liquid approaching the glass transition. In this theory, the greater the concentration of cytoplasmic components, the less the cytoplasm behaves like a liquid and the more it behaves as a solid glass, freezing larger cytoplasmic components in place (it is thought that the cell's metabolic activity is able to fluidize the cytoplasm to allow the movement of such larger cytoplasmic components). A cell's ability to vitrify in the absence of metabolic activity, as in dormant periods, may be beneficial as a defence strategy. A solid glass cytoplasm would freeze subcellular structures in place, preventing damage, while allowing the transmission of very small proteins and metabolites, helping to kickstart growth upon the cell's revival from dormancy.
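The qualitative claim that crowding makes the cytoplasm less liquid-like can be illustrated with a standard result from colloid rheology, the Krieger–Dougherty equation, in which suspension viscosity diverges as the particle volume fraction φ approaches a maximum packing fraction φ_max. This is a generic toy model rather than one drawn from the cytoplasm studies above, and the parameter values are illustrative.

```python
def relative_viscosity(phi, phi_max=0.64, intrinsic=2.5):
    """Krieger-Dougherty: viscosity relative to the solvent for a hard-sphere
    suspension at volume fraction phi; diverges as phi approaches phi_max."""
    if not 0 <= phi < phi_max:
        raise ValueError("phi must lie in [0, phi_max)")
    return (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

# Viscosity rises slowly at first, then steeply near the packing limit,
# mirroring the liquid-to-glass crossover described in the text.
for phi in (0.1, 0.3, 0.5, 0.6):
    print(f"phi = {phi:.1f}: relative viscosity = {relative_viscosity(phi):8.1f}")
```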
There has been research examining the motion of cytoplasmic particles independent of the nature of the cytoplasm. In such an alternative approach, the aggregate random forces within the cell caused by motor proteins explain the non-Brownian motion of cytoplasmic constituents.
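One common way to distinguish passive Brownian motion from motor-driven motion is the mean squared displacement (MSD): purely thermal motion gives an MSD that grows linearly in time, while a persistent active push adds a faster-than-linear (ballistic) component. The one-dimensional random walk below is a hypothetical toy model of this signature, not a reconstruction of any cited experiment; the "bias" term stands in for an aggregate motor-protein force.

```python
import random

random.seed(42)  # deterministic run for reproducibility

def simulate(n_steps, bias=0.0):
    """1-D walk: unit thermal steps, plus an optional persistent drift
    representing a hypothetical motor-driven push."""
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += random.choice((-1.0, 1.0)) + bias
        path.append(x)
    return path

def msd(trajectories, lag):
    """Mean squared displacement at a given time lag, averaged over walks."""
    return sum((tr[lag] - tr[0]) ** 2 for tr in trajectories) / len(trajectories)

passive = [simulate(1000) for _ in range(500)]            # thermal motion only
active = [simulate(1000, bias=0.5) for _ in range(500)]   # with persistent drift

# Passive: MSD grows roughly linearly (a 4x lag gives a ratio near 4).
# Active: the drift contributes a term growing as lag^2, so the ratio is larger.
print("passive MSD ratio:", msd(passive, 400) / msd(passive, 100))
print("active MSD ratio: ", msd(active, 400) / msd(active, 100))
```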
The cytosol is the portion of the cytoplasm not contained within membrane-bound organelles. Cytosol makes up about 70% of the cell volume and is a complex mixture of cytoskeleton filaments, dissolved molecules, and water. The cytosol's filaments include the protein filaments such as actin filaments and microtubules that make up the cytoskeleton, as well as soluble proteins and small structures such as ribosomes, proteasomes, and the mysterious vault complexes. The inner, granular and more fluid portion of the cytoplasm is referred to as endoplasm.
Due to this network of fibres and high concentrations of dissolved macromolecules, such as proteins, an effect called macromolecular crowding occurs and the cytosol does not act as an ideal solution. This crowding effect alters how the components of the cytosol interact with each other.
Organelles (literally "little organs") are usually membrane-bound structures inside the cell that have specific functions. Some major organelles that are suspended in the cytosol are the mitochondria, the endoplasmic reticulum, the Golgi apparatus, vacuoles, lysosomes, and, in plant cells, chloroplasts.
The inclusions are small particles of insoluble substances suspended in the cytosol. A huge range of inclusions exists in different cell types, from crystals of calcium oxalate or silicon dioxide in plants to granules of energy-storage materials such as starch, glycogen, or polyhydroxybutyrate. A particularly widespread example is the lipid droplet: a spherical droplet composed of lipids and proteins that is used in both prokaryotes and eukaryotes as a way of storing lipids such as fatty acids and sterols. Lipid droplets make up much of the volume of adipocytes, which are specialized lipid-storage cells, but they are also found in a range of other cell types.
Controversy and research
The cytoplasm, mitochondria, and most organelles are contributions to the cell from the maternal gamete. Contrary to older views that treated the cytoplasm as passive, new research has shown that it controls the movement and flow of nutrients into and out of the cell through viscoplastic behavior and the reciprocal rate of bond breakage within the cytoplasmic network.
The material properties of the cytoplasm remain an ongoing investigation. Recent measurements using force spectrum microscopy reveal that the cytoplasm can be likened to an elastic solid, rather than a viscoelastic fluid.
- Shepherd, V. A. (2006). "The cytomatrix as a cooperative system of macromolecular and water networks". Current Topics in Developmental Biology. Current Topics in Developmental Biology. 75: 171–223. doi:10.1016/S0070-2153(06)75006-2. ISBN 9780121531751. PMID 16984813.
- Hogan, C. Michael (2010). "Calcium" Archived 12 June 2012 at the Wayback Machine. in Encyclopedia of Earth. A. Jorgensen, C. Cleveland (eds.). National Council for Science and the Environment.
- Kölliker, R. A. v. (1863). Handbuch der Gewebelehre des Menschen. 4. Auflage. Leipzig: Wilhelm Engelmann.
- Bynum, W. F., Browne, E. J. and Porter, Ray (1981). Dictionary of the history of science. Princeton University Press.
- Parker, J. (1972). "Protoplasmic resistance to water deficits", pp. 125–176 in Kozlowski, T. T. (ed.), Water deficits and plant growth. Vol. III. Plant responses and control of water balance. Academic Press, New York, p. 144, .
- Strasburger, E. (1882). "Ueber den Theilungsvorgang der Zellkerne und das Verhältnis der Kernteilung zur Zellteilung". Arch Mikr Anat. 21: 476–590. Archived from the original on 27 August 2017.
- Cowan AE, Moraru II, Schaff JC, Slepchenko BM, Loew LM (2012). "Spatial Modeling of Cell Signaling Networks". Methods in Cell Biology. Methods in Cell Biology. 110: 195–221. doi:10.1016/B978-0-12-388403-9.00008-4. ISBN 9780123884039. PMC . PMID 22482950.
- Holcman, David; Korenbrot, Juan I. (2004). "Longitudinal Diffusion in Retinal Rod and Cone Outer Segment Cytoplasm: The Consequence of Cell Structure". Biophysical Journal. 86 (4): 2566–2582. Bibcode:2004BpJ....86.2566H. doi:10.1016/S0006-3495(04)74312-X. PMC . PMID 15041693.
- Parry, Bradley R.; Surovtsev, Ivan V.; Cabeen, Matthew T.; o'Hern, Corey S.; Dufresne, Eric R.; Jacobs-Wagner, Christine (2014). "The Bacterial Cytoplasm Has Glass-like Properties and is Fluidized by Metabolic Activity". Cell. 156 (1–2): 183–94. doi:10.1016/j.cell.2013.11.028. PMC . PMID 24361104.
- Taylor, C. V. (1923). "The contractile vacuole in Euplotes: An example of the sol-gel reversibility of cytoplasm". Journal of Experimental Zoology. 37 (3): 259–289. doi:10.1002/jez.1400370302.
- Guo, Ming; Ehrlicher, Allen J.; Jensen, Mikkel H.; Renz, Malte; Moore, Jeffrey R.; Goldman, Robert D.; Lippincott-Schwartz, Jennifer; MacKintosh, Frederick C.; Weitz, David A. (2014). "Probing the Stochastic, Motor-Driven Properties of the Cytoplasm Using Force Spectrum Microscopy". Cell. 158 (4): 822–32. doi:10.1016/j.cell.2014.06.051. PMC . PMID 25126787.
- van Zon A, Mossink MH, Scheper RJ, Sonneveld P, Wiemer EA (September 2003). "The vault complex". Cell. Mol. Life Sci. 60 (9): 1828–37. doi:10.1007/s00018-003-3030-y. PMID 14523546.
- Prychid, Christina J.; Rudall, Paula J. (1999). "Calcium Oxalate Crystals in Monocotyledons: A Review of their Structure and Systematics" (PDF). Annals of Botany. 84 (6): 725–739. doi:10.1006/anbo.1999.0975.
- Prychid, C. J.; Rudall, P. J.; Gregory, M. (2004). "Systematics and Biology of Silica Bodies in Monocotyledons". The Botanical Review. 69 (4): 377–440. doi:10.1663/0006-8101(2004)069[0377:SABOSB]2.0.CO;2. JSTOR 4354467.
- Ball SG, Morell MK (2003). "From bacterial glycogen to starch: understanding the biogenesis of the plant starch granule". Annu Rev Plant Biol. 54: 207–33. doi:10.1146/annurev.arplant.54.031902.134927. PMID 14502990.
- Shearer J, Graham TE (April 2002). "New perspectives on the storage and organization of muscle glycogen". Can J Appl Physiol. 27 (2): 179–203. doi:10.1139/h02-012. PMID 12179957.
- Anderson AJ, Dawes EA (1 December 1990). "Occurrence, metabolism, metabolic role, and industrial uses of bacterial polyhydroxyalkanoates". Microbiol. Rev. 54 (4): 450–72. PMC . PMID 2087222.
- Murphy DJ (September 2001). "The biogenesis and functions of lipid bodies in animals, growth and microorganisms". Prog. Lipid Res. 40 (5): 325–438. doi:10.1016/S0163-7827(01)00013-3. PMID 11470496.
- Feneberg, Wolfgang; Sackmann, Erich; Westphal, Monika (2001). "Dictyostelium cells' cytoplasm as an active viscoplastic body". European Biophysics Journal. 30 (4): 284–94. doi:10.1007/s002490100135. PMID 11548131. | <urn:uuid:3623b834-3b45-4c33-84b7-daa9392c46d5> | 3.6875 | 2,826 | Knowledge Article | Science & Tech. | 59.310088 | 95,575,276 |
Critical exponents describe the behavior of physical quantities near continuous phase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system, but only on
- the dimension of the system,
- the range of the interaction,
- the spin dimension.
These properties of critical exponents are supported by experimental data. Analytical results can be theoretically achieved in mean field theory for higher-dimensional systems (4 or more dimensions). The theoretical treatment of lower-dimensional systems (1 or 2 dimensions) is more difficult and requires the renormalization group approach. Phase transitions and critical exponents appear also in percolation systems. However, here the critical dimension above which mean field exponents are valid is 6 and higher dimensions. Mean field critical exponents are also valid for random graphs, such as Erdős–Rényi graphs, which can be regarded as infinite dimensional systems.
- 1 Definition
- 2 The most important critical exponents
- 3 Mean field critical exponents of Ising-like systems
- 4 Experimental values
- 5 Scaling functions
- 6 Scaling relations
- 7 Anisotropy
- 8 Multicritical points
- 9 Static versus dynamic properties
- 10 Transport properties
- 11 Self-organized criticality
- 12 Percolation Theory
- 13 See also
- 14 External links and literature
- 15 References
Phase transitions occur at a certain temperature, called the critical temperature Tc. We want to describe the behavior of a physical quantity f in terms of a power law around the critical temperature. So we introduce the reduced temperature
which is zero at the phase transition, and define the critical exponent :
This results in the power law we were looking for:
It is important to remember that this represents the asymptotic behavior of the function f(τ) as τ → 0.
More generally one might expect
The most important critical exponents
Below Tc the system has two different phases characterized by an order parameter Ψ, which vanishes at and above Tc.
Let us consider the disordered phase (τ > 0), ordered phase (τ < 0) and critical temperature (τ = 0) phases separately. Following the standard convention, the critical exponents related to the ordered phase are primed. It is also another standard convention to use superscript/subscript + (−) for the disordered (ordered) state. We have spontaneous symmetry breaking in the ordered phase. So, we will arbitrarily take any solution in the phase.
|Ψ||order parameter (e.g. ρ − ρc/ for the liquid–gas critical point, magnetization for the Curie point, etc.)|
|τ||T − Tc/|
|f||specific free energy|
|C||specific heat; −T∂2f/|
|J||source field (e.g. P − Pc/ where P is the pressure and Pc the critical pressure for the liquid-gas critical point, reduced chemical potential, the magnetic field H for the Curie point)|
|χ||the susceptibility, compressibility, etc.; ∂ψ/|
|d||the number of spatial dimensions|
|⟨ψ(x→) ψ(y→)⟩||the correlation function|
The following entries are evaluated at J = 0 (except for the δ entry)
The critical exponents can be derived from the specific free energy f(J,T) as a function of the source and temperature. The correlation length can be derived from the functional F[J;T].
These relations are accurate close to the critical point in two- and three-dimensional systems. In four dimensions, however, the power laws are modified by logarithmic factors. This problem does not appear in 3.99 dimensions, though.
Mean field critical exponents of Ising-like systems
If we add derivative terms turning it into a mean field Ginzburg–Landau theory, we get
One of the major discoveries in the study of critical phenomena is that mean field theory of critical points is only correct when the space dimension of the system is four or higher (which unfortunately excludes many of the experimentally relevant cases). This dimension is called the upper critical dimension. The problem with mean field theory is that the critical exponents do not depend on the space dimension. This leads to a quantitative discrepancy in space dimensions 2 and 3, where the true critical exponents differ from the mean field values. It leads to a qualitative discrepancy in space dimension 1, where a critical point in fact no longer exists, even though mean field theory still predicts there is one. The space dimension where mean field theory becomes qualitatively incorrect is called the lower critical dimension.
The most accurately measured value of α is −0.0127(3) for the phase transition of superfluid helium (the so-called lambda transition). The value was measured on a space shuttle to minimize pressure differences in the sample. Interestingly, this value is in a significant disagreement with the most precise theoretical determination by a combination of Monte Carlo and high temperature expansion techniques. Other techniques give results in agreement in the experiment but are less precise.
In light of the critical scalings, we can reexpress all thermodynamic quantities in terms of dimensionless quantities. Close enough to the critical point, everything can be reexpressed in terms of certain ratios of the powers of the reduced quantities. These are the scaling functions.
The origin of scaling functions can be seen from the renormalization group. The critical point is an infrared fixed point. In a sufficiently small neighborhood of the critical point, we may linearize the action of the renormalization group. This basically means that rescaling the system by a factor of a will be equivalent to rescaling operators and source fields by a factor of aΔ for some Δ. So, we may reparameterize all quantities in terms of rescaled scale independent quantities.
It was believed for a long time that the critical exponents were the same above and below the critical temperature, e.g. α ≡ α′ or γ ≡ γ′. It has now been shown that this is not necessarily true: When a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then the exponents γ and γ′ are not identical.
These equations imply that there are only two independent exponents, e.g., ν and η. All this follows from the theory of the renormalization group.
Directed percolation can be also regarded as anisotropic percolation. In this case the critical exponents are different and the upper critical dimension is 5.
Static versus dynamic properties
The above examples exclusively refer to the static properties of a critical system. However dynamic properties of the system may become critical, too. Especially, the characteristic time, τchar, of a system diverges as τchar ∝ ξz, with a dynamical exponent z. Moreover, the large static universality classes of equivalent models with identical static critical exponents decompose into smaller dynamical universality classes, if one demands that also the dynamical exponents are identical. For critical exponents for dynamics in percolation systems see reference.
The critical exponents can be computed from conformal field theory.
See also anomalous scaling dimension.
Critical exponents also exist for self organized criticality for dissipative systems.
Phase transitions and critical exponents appear also in percolation processes where the concentration of occupied sites or links play the role of temperature. See percolation critical exponents. For percolation the critical exponents are different from Ising. For example, in the mean field for percolation^ compared to for Ising.
- Complex networks
- Random graphs
- Rushbrooke inequality
- Widom scaling
- Ising critical exponents
- Percolation critical exponents
- Network science
- Percolation theory
- Graph theory
- Hagen Kleinert and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback ISBN 981-02-4658-7
- Toda, M., Kubo, R., N. Saito, Statistical Physics I, Springer-Verlag (Berlin, 1983); Hardcover ISBN 3-540-11460-2
- J.M.Yeomans, Statistical Mechanics of Phase Transitions, Oxford Clarendon Press
- H. E. Stanley Introduction to Phase Transitions and Critical Phenomena, Oxford University Press, 1971
- A. Bunde and S. Havlin (editors), Fractals in Science, Springer, 1995
- A. Bunde and S. Havlin (editors), Fractals and Disordered Systems, Springer, 1996
- Universality classes from Sklogwiki
- Zinn-Justin, Jean (2002). Quantum field theory and critical phenomena, Oxford, Clarendon Press (2002), ISBN 0-19-850923-5
- Zinn-Justin, J. (2010). "Critical phenomena: field theoretical approach" Scholarpedia article Scholarpedia, 5(5):8346.
- F. Leonard and B. Delamotte Critical exponents can be different on the two sides of a transition: A generic mechanism https://arxiv.org/abs/1508.07852
- Bunde, Armin; Havlin, Shlomo (1996). "Percolation I". Fractals and Disordered Systems. Springer, Berlin, Heidelberg. pp. 59–114. doi:10.1007/978-3-642-84868-1_2. ISBN 9783642848704.
- Cohen, Reuven; Havlin, Shlomo (2010). "Introduction". Complex Networks: Structure, Robustness and Function. Cambridge University Press. pp. 1–6. doi:10.1017/cbo9780511780356.001. ISBN 9780521841566.
- Lipa, J. A.; Nissen, J.; Stricker, D.; Swanson, D.; Chui, T. (2003). "Specific heat of liquid helium in zero gravity very near the lambda point". Physical Review B. 68 (17): 174518. arXiv: . Bibcode:2003PhRvB..68q4518L. doi:10.1103/PhysRevB.68.174518.
- Vicari, Ettore (2007). Critical phenomena and renormalization-group flow of multi-parameter Φ4 field theories (PDF). The XXV International Symposium on Lattice Field Theory, July 30 - August 4, 2007, Regensburg, Germany. p. 7 (Table 2). arXiv: .
- Leonard, F.; Delamotte, B. (2015). "Critical exponents can be different on the two sides of a transition". Phys. Rev. Lett. 115 (20): 200601. arXiv: . Bibcode:2015PhRvL.115t0601L. doi:10.1103/PhysRevLett.115.200601.
- Dayan, I.; Gouyet, J.F.; Havlin, S. (1991). "Percolation in multi-layered structures". J. Phys. A. 24 (6): L287. Bibcode:1991JPhA...24L.287D. doi:10.1088/0305-4470/24/6/007.
- Kinzel, W. (1982). Deutscher, G., ed. "Directed Percolation". Percolation and Processes. Bristol: Adam Hilger Pub. Co.
- Majdandzic, A.; Podobnik, B.; Buldyrev, S.V.; Kenett, D.Y.; Havlin, S.; Stanley, H.E. (2014). "Spontaneous recovery in dynamical networks". Nature Physics. 10: 34. Bibcode:2014NatPh..10...34M. doi:10.1038/nphys2819.
- Zeng, Guanwen; Li, Daqing; Gao, Liang; Gao, Ziyou; Havlin, Shlomo (2017-09-10). "Switch of critical percolation modes in dynamical city traffic". arXiv: . Bibcode:2017arXiv170903134Z. | <urn:uuid:89bb51a6-435f-4080-9204-e02e82d84d5c> | 3.25 | 2,666 | Knowledge Article | Science & Tech. | 51.093298 | 95,575,305 |
8 April 2015
Ocean commotion: Protecting sea life from our noise
Human noise pollution stresses marine animals and has been blamed for the death of whales – what can we do to live in harmony?
DARLENE KETTEN hoped the morbid express delivery would finally answer some questions. The two beaked whale heads, packed in ice, were flown in from the Bahamas for autopsy in a lab in Boston. The pair had died in an unusual mass stranding of 17 whales on the Abacos Islands. Beached animals tend to be dead when they are discovered, but this time the whales were caught in the act. Local marine biologists returned more than half to the water and preserved the heads of some of those they couldn’t save. It was the first time a post-mortem had followed so swiftly after a beaching.
The incident coincided with a US navy sonar exercise nearby. Naval sonar tests have been blamed for driving thousands of whales and dolphins to their deaths. Ketten, who specialises in underwater hearing at Woods Hole Oceanographic Institution in Massachusetts, examined the whales’ ears with a CT scanner. She was looking for signs of nerve damage and hair-cell loss, which tend to happen after acute exposure to loud sounds. What she saw was more grisly – and not obviously linked to noise. In both cases, there was blood in the inner ear, but it was draining from a cavity on the surface of the brain. Had sonar really killed the Abacos whales?
Human noise pollution stresses marine animals (Image: Tim Mcdonagh)
The Bahamas case is one of many. Marine mammals have been found stranded on numerous beaches from Hawaii to the Canary Islands, Greece to Madagascar – all when sonar tests or other noisy industrial activity are taking place nearby. For many, the evidence is …
*Free book How to Be Human is only available with annual App + Web and Print + App + Web subscription purchases where subscription delivery is in the United Kingdom, USA, Canada, Australia, New Zealand or Euro area. | <urn:uuid:bd0920b2-e0bd-49bc-a5a7-92838eded184> | 2.875 | 421 | Truncated | Science & Tech. | 44.986956 | 95,575,309 |
- Zugriffe: 20744
HOW TO GET BASIC SHAPES (SPHERE, CUBE, ...) EASILY
You can simply use the NFF (Neutral File Format) for this task. NFF is quite a simple, text-based format that allows you to create spheres, cones and cylinders with just a single line. To get a sphere with position (x,y,z) and radius r, use a file with this contents:
--- begin of file
s x y z r
--- end of file
The full specification of the NFF format can be found here. However, ASSIMP extends this specification and supports more basic shapes, including all platonic solids ('#' starts a comment line):
--- begin of file
# A tetrahedron at -10 0 0 with a 'radius' of 2
tet -10 0 0 2
# A cube at -7 0 0 with a 'radius' (a/2) of 2
hex -7 0 0 2
# An octahedron at -4 0 0 with a 'radius' of 2
hex -4 0 0 2
# A dodecahedron at -1 0 0 with a 'radius' (a/2) of 2
hex -1 0 0 2
# An icosahedron at 2 0 0 with a 'radius' (a/2) of 2
# This is a non-tesselated sphere. 'tess' sets the number of subdivisions.
# The default value for spheres is 4.
tess 0 s 2 0 0 2
--- end of file
(the 'radius' is the radius of the respective circumscribed sphere)
- Zugriffe: 20218
ASSIMP - Frequently Asked Questions (FAQ)
Am I allowed to use the library in a commercial product?
Yes. ASSIMP is licensed under a modified BSD license. Its contents in just one sentence: you may use the library for free, in commercial or non-commercial applications, but you must include our license with your product and you may not advertise with us.
Does ASSIMP depend on D3D(X)?
No. AssimpView does (it uses D3D9 for rendering).
Is ASSIMP able to export models?
Yes, Due to popular demand, assimp now supports an export API that is similar to the import side. Supported file formats include Obj, X, Collada, STL and more.
For which languages is the API provided?
The ASSIMP API is provided both as a plain-C interface and as an object-oriented C++ interface, which is the main API. All ports (jAssimp, Assimp.net, ...) and even the C-style API are just wrappers around this interface. Therefore, if your project allows the use of C++, you should use the C++-API.
Is this library thread-safe?
Yes. ASSIMP is completely thread-safe, as long as you create an extra 'Importer' instance for each thread. The C-API and all ports are doing this automatically for you. There are some restrictions regarding thread-safety with the -noboost workaround, don't forget to read the corresponding doc sections!
Why is it so slow?
Probably because of a excessively-validating STL implementation. There's a section in the documentation which describes how to fine-tune the performance of Microsoft's STL implementation and also another to promote STLport as a good and fast alternative. Oh yes, and try a release build with full optimizations turned on first.
Can I write a new loader for the library?
Sure, as long as you're able to write stable code and the format is not too exotic ... We'd highly appreciate any help. Why are you still reading this? Come on, start coding! :-) AssimpView complains about a missing d3dx9_3n.dll
The latest DirectX Runtimes should solve the problem.
How do I compile Assimp with a subset of importers?
You can do this via cmake, just specify the importer by typing the command:cmake CMakeLists.txt -DASSIMP_BUILD_FBX_IMPORTER
to get the FBX importer in your lib
Are you guys crazy or how did you come up with that name?
We think it's a good name, although the pun was merely unintentional. Really.
Who are you, the guys behind this library?
We are mostly professional software engineers, but not in the games industry. It's just a hobby. So far we have not much spare time. Any assistance (e.g. new loaders, bugfixes) is highly appreciated.
- Zugriffe: 3937
FEEDBACK / DISCUSS
Please use our Github-Project-Space, which you can find Here
CHAT VIA GITTER
Join us in the web
On twitter: @AssetImpLib
On facebook: The Assimp group on facebook
Telephone: +49 451 / 5859083
- Zugriffe: 5162
Export of point-clouds (2018-05-02)
We are working on point-cloud-support. The first step ready to use is a STL-export. Check my blog-post: http://kimkulling.de/2018/05/01/assimp-starting-…ort-point-clouds/
New feature poll on Patreon (2018-04-20)
There is a new poll on our Patreon-page. You can use it to priotize your most wanted feature, Just check Vote for the next features to get involved.
News from the FBX-front (2018-03-31)
In the last month we got a lot of pull-requests to get the FBX-importer more stable:
- The animtion support got a lot of bug-fixes
- There is an experimental exporter for binary-FBX-files on the current master
- There is an ASCII-exporter for the FBX-format in the pipeline
The 3MF-support got some updates as well_
- The meta-data of 3MF-files will now imported and exported
- I made some bugfixes to get this more stable
A new website ( 2018-02-03 )
To make life easier for me I satrted to use Joomble for the Asset-Importer-Lib website. To get things running I just copied all the old content to this. In the next months I will add some more content. Feel free to give any feedback.
I am not a good web-designer :-) ...
The Assimp-Blog ( 2018-01-20 )
Here I will give you some updates about the latest changes in Asset-Importer-Lib.
- Zugriffe: 3677
Assimp is released as Open Source under the terms of a 3-clause BSD license.
Copyright (c) 2006-2015 assimp team
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the assimp team nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
An exception applies to all test models in Assimp's repository. They're generally free for testing and private use, but not necessarily for commercial use. There's usually an accompanying info file containing detailed license information. | <urn:uuid:e11bf608-1075-4b8e-845e-40312237c044> | 2.90625 | 1,894 | Content Listing | Software Dev. | 59.479597 | 95,575,324 |
Plasma focus based repetitive source of fusion neutrons and hard x-rays
A plasma focus device capable of operating at 0.2 pulses per second for several minutes is used as a source of hard x-rays and fast neutrons. An experimental demonstration of the use of the neutron emission for radiation probing of hydrogenated substances is presented, with a particular application in detecting differences in water concentration near the device by elastic scattering. Moreover, the device produces ultrashort hard x-ray pulses useful for introspective imaging of small objects, static or in fast motion, suitable for identifying internal submillimetric defects. Clear images of metallic objects shielded by several-millimeter-thick iron walls are shown.
PACS Codes: 29.25.Dz,52.59.Px
Keywords: plasma focus, neutron yield, plasma focus device, Rogowski coil, fusion neutrons
Pulsed sources of fusion neutrons and x-rays, as well as energetic electron and ion beams can be produced by means of plasma focus devices. After its invention in the 60's [1, 2] they were intensively studied as nuclear fusion devices and were described in detail, among other references, in [3, 4, 5]. During the last decade, they began to be investigated as convenient sources for several technological applications, to be cited below, of the mentioned radiations.
Essentially, these devices are plasma accelerator guns, in which the plasma is generated in a low-pressure (1–10 mbar) atmosphere by a powerful pulsed capacitive discharge between a pair of coaxial cylindrical electrodes. The Lorentz force drives a plasma sheath along the electrodes, which collapses radially at the symmetry axis, forming a hot dense plasma pinch lasting about 100 ns. Intense pulses of electromagnetic radiation, as well as ion and electron beams, are emitted as a result of the focusing process. The electromagnetic emission from the collapsing plasma has a very high brightness and exhibits a broadband spectrum ranging from visible light to soft x-rays. Fusion reactions can be obtained if deuterium or deuterium-tritium mixtures are used, with the consequent emission of fast neutrons. The kinetic energy of the emitted neutrons is 2.45 MeV in the first case, and either 2.45 MeV or 14.1 MeV in the second. Both emissions are almost monochromatic [3, 4, 5].
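Because both fusion branches are nearly monochromatic, the neutron speed and time-of-flight (useful, for instance, to separate neutron from x-ray signals on a scintillator) follow directly from the kinetic energy. A minimal non-relativistic sketch (adequate to about 1% below ~20 MeV, covering both D-D and D-T neutrons):

```python
import math

M_N_KG = 1.674927e-27      # neutron rest mass [kg]
MEV_TO_J = 1.602177e-13    # 1 MeV in joules

def neutron_speed(energy_mev):
    """Non-relativistic speed of a neutron with the given kinetic energy [m/s]."""
    return math.sqrt(2.0 * energy_mev * MEV_TO_J / M_N_KG)

def time_of_flight_ns(energy_mev, distance_m):
    """Flight time in nanoseconds over distance_m."""
    return distance_m / neutron_speed(energy_mev) * 1e9

v_dd = neutron_speed(2.45)    # D-D neutrons: ~2.2e7 m/s
v_dt = neutron_speed(14.1)    # D-T neutrons: ~5.2e7 m/s
print(f"D-D: {v_dd:.3e} m/s, {time_of_flight_ns(2.45, 1.0):.1f} ns per meter")
print(f"D-T: {v_dt:.3e} m/s, {time_of_flight_ns(14.1, 1.0):.1f} ns per meter")
```

At around 46 ns of flight time per meter, a 2.45 MeV neutron arriving at a detector a few meters away is cleanly delayed with respect to the ~50 ns hard x-ray pulse, which travels at the speed of light.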
Since the neutron and hard x-ray bursts of this source last only 10–100 ns, and since the emission can be switched off at will, the plasma focus is an interesting alternative to commercially available radioisotope sources of both neutrons and hard x-rays.
Due to the high effective cross section of hydrogen for neutron scattering, plasma focus neutron pulses can be used as radiation probes to detect hydrogenated substances by means of neutron scattering. Examples of potential applications of this radiation are soil humidity studies and detection of hidden dangerous or illegal substances, including drugs, weapons and plastic explosives. The plasma focus neutron emission is also suitable for dark matter research, radioisotope production and neutron therapy.
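Why hydrogen dominates neutron slowing-down can be made quantitative with the standard mean logarithmic energy decrement per elastic collision, ξ. This is a textbook moderation estimate, not taken from the paper: it shows that a 2.45 MeV D-D neutron needs only ~18 collisions with hydrogen to thermalize, versus over a hundred with carbon, which is why hydrogenated substances (water, explosives, drugs) stand out so strongly to fast-neutron probing.

```python
import math

def log_energy_decrement(A):
    """Mean logarithmic energy loss per elastic collision on a nucleus of mass number A."""
    if A == 1:
        return 1.0  # limiting value for hydrogen
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha / (1.0 - alpha) * math.log(alpha)

def collisions_to_thermalize(e0_ev, A, e_th_ev=0.025):
    """Average number of elastic collisions to slow from e0_ev to thermal energy."""
    return math.log(e0_ev / e_th_ev) / log_energy_decrement(A)

# 2.45 MeV D-D neutrons slowing down to thermal energy (0.025 eV):
n_h = collisions_to_thermalize(2.45e6, 1)    # hydrogen: ~18 collisions
n_c = collisions_to_thermalize(2.45e6, 12)   # carbon:  ~117 collisions
print(f"H: {n_h:.0f} collisions, C: {n_c:.0f} collisions")
```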
Other reported applications are radiographies of biological samples [11, 12, 13], small pieces made of different materials, and a rotating car wheel. Introspective images of small metallic pieces [16, 17], even through several-millimeter-thick metallic walls, were also produced. Additionally, a radiograph of an aluminum turbine rotating at 6120 rpm, taken with a single plasma focus pulse, has been reported. The tomographic reconstruction of both the surface and the volume of small metallic objects has been investigated as another non-conventional imaging application of the hard x-ray emission of plasma focus devices [16, 20]. The spatial resolution of the digitized images has proved sufficient for 3-dimensional tomographic reconstruction of an object with just 8 projections.
Other capabilities of the plasma focus emissions include lithography with ten-nanometer resolution [21, 22], medical radiation therapy applications, industrial non-destructive testing [24, 25], as well as thin film deposition.
Compact and portable plasma focus devices are being developed to work efficiently in the field at relatively low cost when compared to nuclear reactors or linear-accelerator-based sources. Moreover, the feasibility of combining neutron and x-ray scanning simultaneously in a single device is a unique advantage of plasma foci.
However, plasma foci still present several challenges that must be overcome in order to extend their use as x-ray and neutron sources for commercial and industrial applications. To be useful over the widest range of applications, a plasma focus device should be able to operate at a discrete repetition rate, producing average neutron emission rates of about 10⁷–10¹⁰ neutrons/s together with intense, penetrating x-ray beams. Lee et al presented a 16 Hz plasma focus operating in neon. More recently, Rapezzi et al developed a mobile repetitive device for industrial applications.
In the current paper, a repetitive plasma focus device operating with deuterium, intended as a pulsed source of neutrons and hard x-rays, is presented. The repetition rate is controlled by means of a variable clock synchronized with a triggering system and a power supply, which ensures regular operation up to 0.2 Hz. Moreover, the feasibility of technological applications is analyzed.
Device design and output characteristics
A 4.7 kJ small-chamber Mather-type plasma focus was used as the repetitive radiation source. The gas chamber was filled with 4.0 mbar of an admixture of 2.5% (in volume) of argon in deuterium. The cylindrical chamber, of 1 dm³ volume, is made of a 3-mm thick stainless steel tube. The 2-mm thick front of the chamber is a stainless steel disk, 100 mm OD, used as the hard x-ray emission window. The electrodes are concentric cylinders, 85 mm in length and 38 and 73 mm OD, made of hollow electrolytic copper and of twelve 3-mm diameter brass rods, respectively. The anode base is made of brass. A 50-mm OD cylindrical Pyrex insulator, 4 mm thick and 30 mm long, covers the anode at the base. The chamber design is optimized for hard x-ray and fast neutron production. The footprint of the whole device is 0.60 m², its height being 1.3 m.
The capacitor bank is composed by 3 modules of six 0.7 μF Maxwell capacitors. The bank was charged using a 10 kW Maxwell CCDS power supply. The modules were connected in parallel to the discharge chamber through 3 Maxwell spark gaps (model 40264), which were triggered simultaneously by means of a car ignition coil.
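The stored energy of the bank follows from E = ½CV², assuming (as the text suggests) that all 18 capacitors are effectively in parallel. This quick consistency check, not part of the paper, shows that the 30 kV maximum corresponds to about 5.7 kJ, so the quoted 4.7 kJ rating would correspond to a charging voltage near 27 kV:

```python
import math

# 3 modules x 6 capacitors x 0.7 uF, all in parallel (assumption)
C_TOTAL = 3 * 6 * 0.7e-6   # 12.6 uF total

def stored_energy(v):
    """Capacitively stored energy E = 0.5 * C * V^2 [J]."""
    return 0.5 * C_TOTAL * v**2

e_30kv = stored_energy(30e3)                    # ~5.67 kJ at the 30 kV maximum
v_for_4p7kj = math.sqrt(2 * 4.7e3 / C_TOTAL)    # ~27.3 kV for the quoted 4.7 kJ
print(f"E(30 kV) = {e_30kv/1e3:.2f} kJ; V for 4.7 kJ = {v_for_4p7kj/1e3:.1f} kV")
```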
The device was able to operate regularly at 30 kV at a repetition rate of up to 0.2 Hz during runs of 2 minutes, maintaining the temperature within reasonable limits. The external temperature of the anode reached 30°C above room temperature after each run. The continuous compressed-air flow needed to set the spark-gap operating voltage and to keep the gaps clean was sufficient to cool this element. The external surface of the spark gaps' plastic casing heated up only a few degrees above room temperature, whereas the chamber was cooled by natural air convection with the environment.
The discharge current was monitored by a non-integrating Rogowski coil. The x-ray emission was detected with an NE102A scintillator optically coupled to a photomultiplier (PMT) biased at 800 V and placed 3.9 m away from the chamber. The Rogowski coil and PMT signals were acquired with a Tektronix TDS540A oscilloscope. Both the PMT and the oscilloscope are placed inside a Faraday cage.
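A non-integrating Rogowski coil outputs a voltage proportional to dI/dt, so the discharge current must be recovered by numerical integration of the recorded trace: I(t) = (1/M) ∫ v dt. The sketch below demonstrates this with a synthetic damped-sine current, typical of an underdamped RLC discharge; the mutual inductance M and the waveform parameters are illustrative assumptions, not the paper's values.

```python
import numpy as np

M = 1e-9                               # coil mutual inductance [H] (assumed)
t = np.linspace(0.0, 10e-6, 2001)      # 10 us record, 5 ns sampling
omega, tau, I0 = 2 * np.pi / 4e-6, 6e-6, 300e3
I_true = I0 * np.exp(-t / tau) * np.sin(omega * t)   # synthetic discharge current

v = M * np.gradient(I_true, t)         # what the non-integrating coil measures

# cumulative trapezoidal integration of v, then divide by M to recover I(t)
segments = 0.5 * (v[1:] + v[:-1]) * np.diff(t)
I_rec = np.cumsum(np.concatenate(([0.0], segments))) / M

err = np.max(np.abs(I_rec - I_true)) / I0
print(f"peak reconstruction error: {err:.2%}")
```

In practice the baseline offset of the digitizer must also be subtracted before integrating, otherwise it accumulates into a linear drift of the reconstructed current.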
As usually happens in plasma focus discharges, focusing does not occur in every shot, especially when working in rep-rate mode without replacing the working gas. In the case shown in figure 2, 4 shots out of 24 failed to focus, whereas very intense current dips were attained in 16 of the others. The remaining 4 focusing events were not particularly intense. The PMT signals show that x-ray pulses are produced around 35 ns after the pinch, each lasting about 50 ns FWHM.
Applications: Methods and Results
Both the neutron and x-ray emissions of the plasma focus device were tested for different technological applications with potential industrial or therapeutic use. The following sections describe the results obtained.
Prospection by fast neutron scattering
Hard x-rays introspective imaging of metallic pieces
A plasma focus operated in single-shot mode has been demonstrated to be a hard x-ray source suitable for obtaining introspective images of small metallic objects.
A repetitive plasma focus device capable of emitting fusion neutrons and hard x-rays was presented, showing that it can be operated at a moderate repetition rate for several minutes. The performance and thermal conditions of the device are stable during 2 to 3 minute runs. Higher shot frequencies could in principle be achieved if thermal management were provided to remove the heat dissipated in every shot.
It was demonstrated that the emitted neutrons can be used to detect the presence of water near the discharge chamber by neutron scattering. In principle, this procedure can be extended to detect other hydrogenated substances such as explosives or drugs.
Radiographic images could be obtained from the hard x-ray emissions, showing submillimetric spatial resolution with exposure times of around 50 ns. The energy and intensity of the x-rays are sufficiently high for the inspection of metallic objects located behind or inside several millimeters of iron or steel. These characteristics are suitable for developing non-intrusive systems for the detection of internal defects and for the imaging of fast rotating components.
This research was supported by PLADEMA – CNEA, and Universidad de Buenos Aires. VR, FDL, PK and AT are doctoral fellows of CONICET. CM and AC are members of CONICET.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Biodiversity is the foundation of ecosystem services to which human well-being is intimately linked. No feature of Earth is more complex, dynamic, and varied than the layer of living organisms that occupy its surfaces and its seas, and no feature is experiencing more dramatic change at the hands of humans than this extraordinary, singularly unique feature of Earth. This layer of living organisms—the biosphere—through the collective metabolic activities of its innumerable plants, animals, and microbes physically and chemically unites the atmosphere, geosphere, and hydrosphere into one environmental system within which millions of species, including humans, have thrived. Breathable air, potable water, fertile soils, productive lands, bountiful seas, the equitable climate of Earth's recent history, and other ecosystem services (see Box 1.1 and Key Question 2) are manifestations of the workings of life. It follows that large-scale human influences over this biota have tremendous impacts on human well-being. It also follows that the nature of these impacts, good or bad, is within the power of humans to influence.

Defining Biodiversity

Biodiversity is defined as "the variability among living organisms from all sources including, inter alia, terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part; this includes diversity within species, between species and of ecosystems." The importance of this definition is that it draws attention to the many dimensions of biodiversity. It explicitly recognizes that every biota can be characterized by its taxonomic, ecological, and genetic diversity and that the way these dimensions of diversity vary over space and time is a key feature of biodiversity.
Thus only a multidimensional assessment of biodiversity can provide insights into the relationship between changes in biodiversity and changes in ecosystem functioning and ecosystem services. Biodiversity includes all ecosystems—managed or unmanaged. Sometimes biodiversity is presumed to be a relevant feature of only unmanaged ecosystems, such as wildlands, nature preserves, or national parks. This is incorrect. Managed systems—be they plantations, farms, croplands, aquaculture sites, rangelands, or even urban parks and urban ecosystems—have their own biodiversity. Given that cultivated systems alone now account for more than 24% of Earth's terrestrial surface, it is critical that any decision concerning biodiversity or ecosystem services address the maintenance of biodiversity in these largely anthropogenic systems.
When plants come under attack internal alarm bells ring and their defence mechanisms swing into action - and it happens in the space of just a few minutes. Now, for the first time, plant scientists - including experts from The University of Nottingham - have imaged, in real time, what happens when plants beat off the bugs and respond to disease and damage.
The research, "A fluorescent hormone biosensor reveals the dynamics of jasmonate signalling in plants", was carried out by an interdisciplinary team from the UK, France and Switzerland and has been published in the leading academic journal Nature Communications.
Malcolm Bennett, Professor in Plant Science at The University of Nottingham and Director of the Centre for Plant Integrative Biology, said: "Understanding how plants respond to mechanical damage, such as insect attack, is important for developing crops which cope better under stress."
Their research focussed on the plant hormone jasmonic acid which is part of the plant's alarm system and defence mechanism. Jasmonic acid is released during insect attack and controls the response to damage. Disease can also trigger jasmonic acid - so it's a general defence compound.
Professor Bennett said: "We have created a special fluorescent protein - Jas9-VENUS - that is rapidly degraded after jasmonic acid is produced. This allowed us to monitor where jasmonic levels are increased when the fluorescent signal is lost."
Using a blade to damage a leaf, the research team mimicked insect feeding. With the fluorescent protein they were able to image how damage to a leaf quickly results in a pulse of jasmonic acid that reaches all the way down to the tip of the root, at a speed of more than a centimetre per minute. Once this hormone pulse reaches the root, it triggers more jasmonic acid to be produced locally, amplifying the wounding signal and ensuring other parts of the plant are prepared for attack.
Professor Bennett said: "Jasmonic acid triggers the production of defence compounds like protease inhibitors to stop the insect being able to digest the plant proteins - the plant becomes indigestible and the insect stops eating it."
Laurent Laplaze, a group leader at IRD (Institut de recherche pour le développement) in Montpellier, described the new biosensor used to pinpoint what happens when plants are damaged. He said: "The Jas9-VENUS biosensor responds to changes in jasmonic acid levels in plant cells within a few minutes. Our new biosensor now allows us to see exactly where jasmonic acid is being perceived by the plant, but in a quantifiable way."
The new biosensor can be used to understand how the plant can coordinate a defence response. Teva Vernoux, a CNRS group leader at the Ecole Normale Supérieure in Lyon, said: "The amazing sensitivity of our new biosensor allows us to follow in real time how jasmonic acid levels are modified in a tissue when a mechanical damage occurs in another tissue some distance away. This really opens the possibility to understand changes in the physiology at the whole plant level upon stress or damage."
This research was partly funded by the Biotechnology and Biological Sciences Research Council (BBSRC), the Agence Nationale de la Recherche (ANR), the Agropolis Fondation, and the Région Languedoc-Roussillon.
Lindsay Brooke | EurekAlert!
Ocean Wave Model - ECWAM
The ECMWF Ocean Wave Model (ECWAM) describes the development and evolution of wind generated surface waves and their height, direction and period. It does not dynamically model the ocean itself (see section on NEMO). It is coupled to the atmospheric forecast model (all configurations) and to the ocean model.
The ECMWF Ocean Wave Model (ECWAM) evaluates the 2-dimensional surface wave spectrum, in both oceanic and coastal waters. This describes how much wave energy is present for given sea wave frequencies and associated propagation directions. That part of the spectrum under the direct influence of the local wind is called “wind-wave" or "windsea”; the remaining part is usually referred to as “swell”. Changes in the wave spectrum are derived from the processes of:
- wave advection,
- wave refraction,
- wind-wave generation,
- wave dissipation due to white capping and bottom friction,
- non-linear wave interactions.
ECWAM has two-way interaction with the Atmospheric models:
- ECWAM supplies surface roughness (according to the forecast sea state) which is used to specify boundary layer winds.
- Atmospheric models supply surface wind conditions, and these dominate sea-surface wave development.
ECWAM has two-way interaction with NEMO and the LIM2 sub-program:
- ECWAM supplies surface stress, Stokes drift and turbulent energy flux to the ocean surface.
- NEMO and the LIM2 sub-program supply ice concentration and thickness information.
ECWAM does not dynamically model the ocean itself but is solely concerned with ocean wave forecasting (NEMO describes dynamical modelling of the ocean). It is run in association with all atmospheric models - HRES and ENS, Extended-range ENS and Seasonal - but can also be run as a Stand-alone Wave model (SAW) not coupled to HRES nor ENS. The domain extends across the full globe.
Wave Data Assimilation
Space-borne altimeter wave height data are assimilated. Buoy wave data are not assimilated; instead, they serve as an independent check on the quality of modelled wave parameters.
Output from ECWAM
ECWAM is run:
- twice daily giving forecasts to Day10 for the HRES and Day15 for ENS based on 00 and 12UTC data times.
- on Mondays and Thursdays, Day15 to Day46 based on 00UTC data time.
- monthly for forecasts to 7 months ahead, and run quarterly for forecasts to 1 year ahead (in the seasonal forecast model, System 5).
Output is in the form of wave and swell height, direction and period; wave energy flux and direction are also available. Users should note that by convention the direction of waves (and hence also of wave energy flux) is described as the direction the waves are moving towards. This is the opposite of the convention for wind direction, which is defined as where the wind is coming from. Thus a southwesterly wind blows from the southwest; the corresponding wind-sea moves towards the northeast and is thus described as a northeasterly wind-sea.
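The conversion between the two conventions is just a 180° rotation; a small helper (illustrative Python, not part of any ECMWF tool) makes the example above concrete:

```python
def flip_direction_convention(direction_deg):
    """Convert a 'coming from' direction to a 'going towards' direction
    (or vice versa) by rotating 180 degrees."""
    return (direction_deg + 180.0) % 360.0

# A southwesterly wind comes from 225 deg; the wind-sea it drives moves
# towards 45 deg, i.e. a northeasterly wind-sea in the wave convention.
wave_towards = flip_direction_convention(225.0)  # 45.0
```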
Current coverage and resolution (see Operational configurations of IFS, ocean wave component):
- HRES-WAM global coverage on a 0.125° x 0.125° latitude-longitude grid.
- HRES-SAW 90°N to 78°S on a 0.1° x 0.1° latitude-longitude grid.
- ENS-WAM days 0-15, global coverage, 0.25° x 0.25° latitude-longitude grid.
- ENS-WAM days 15-45, global coverage, 0.5° x 0.5° latitude-longitude grid.
- SEAS-WAM seasonal, global coverage, 1.0° x 1.0° latitude-longitude grid.
Wave Height Definitions
The wave height is the distance between trough and crest. However, many waves co-exist at the surface of the ocean and their distribution is given by the 2D wave spectrum. From this distribution, the significant wave height is defined as 4 times the square root of the integral over frequency and direction of the wave spectrum. It can be shown to correspond to the average wave height of the one-third highest waves, commonly known as H1/3. The mean wave direction is the spectrally averaged propagation direction of the waves (weighted by amplitude).
Fig2.2.1: An example of wave heights at a platform in the North Sea. Wave height is the distance between trough and crest. The significant wave height (Hs) is defined as 4 times the square root of the integral over frequency and direction of the wave spectrum. It can be shown to correspond to the average wave height of the one-third highest waves, commonly known as H1/3. Occasionally waves of different periods reinforce and interact non-linearly, giving a wave considerably larger than Hs, with a maximum trough-to-crest height Hmax.
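The Hs definition can be sketched numerically. The snippet below (illustrative Python) assumes a toy 2D spectrum discretized on uniform frequency and direction bins; the values and bin sizes are invented for illustration and are not ECWAM's actual discretization:

```python
import math

def significant_wave_height(spectrum_2d, df, dtheta):
    """Hs = 4 * sqrt(m0), where m0 is the integral of E(f, theta) over
    frequency and direction (approximated here by a simple Riemann sum)."""
    m0 = sum(e * df * dtheta for row in spectrum_2d for e in row)
    return 4.0 * math.sqrt(m0)

# Toy spectrum: 3 frequency bins x 4 direction bins (units m^2 s rad^-1)
E = [[0.0, 0.1, 0.2, 0.1],
     [0.1, 0.5, 0.8, 0.3],
     [0.0, 0.2, 0.3, 0.1]]
hs = significant_wave_height(E, df=0.02, dtheta=math.pi / 2)  # ~1.16 m
```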
The irregular surface of the sea can be decomposed into a number of components with different frequencies (f) and also directions (θ). The distribution of wave energy among these components is the Wave Spectrum E(f,θ). These can be plotted in two dimensions (Fig2.2.2A). For simplicity and ease of use, the complete frequency-energy description of the sea state in 2-dimensional form is simplified to 1-dimensional form by integrating over all directions and/or over a frequency range (Fig2.2.2B).
Fig2.2.2A: The irregular surface of the sea can be decomposed into a number of components with different frequencies (f) and also directions (θ), and the distribution of wave energy among these components is the Wave Spectrum E(f,θ), here plotted in two dimensions.
Fig2.2.2B: For simplicity and ease of use, the complete frequency-energy description of the sea state in 2-dimensional form can be simplified to 1-dimensional form by integrating over all directions and/or over a frequency range.
Other parameters are defined to characterise the sea state as prescribed by the wave spectrum. In particular, the reciprocal of the frequency corresponding to the peak of the spectrum is the wave peak period. Different mean periods are calculated by spectrally averaging the spectrum and similarly for mean wave direction (see IFS documentation part VII, chapter10).
Very often, the sea state is composed of different wave systems. If there is appreciable wind, there will always be a wave system associated with it, referred to as "wind-wave" or "wind-sea". The part of the spectrum that is not associated with the local wind is normally called "swell".
Swell propagates at different speeds for different frequencies, and if approaching from a remote source each frequency will arrive at a given location at a different time, but with a well defined peak in frequency and direction. Wind-sea is more variable in frequency and direction, with a broad distribution of the waves around a peak. These can be plotted in 2-dimensional form or simplified to 1-dimensional form (Fig2.2.3).
Fig2.2.3: A schematic example of the Wave Spectrum at a location off the Dutch coast associated with a long wave swell propagating from the northern North Sea and wind-sea propagating across the southern North Sea. At a given time there will be a swell of relatively uniform frequency and direction, and a wind-sea of rather broader frequency and direction, giving a 2D plot of wave energy against frequency and direction as in the top right diagram. For simplicity this is reduced to a 1D plot of wave energy against frequency. These peak values of swell and wind-sea can be plotted in chart form.
Based on the theory of wave-wave interaction, an estimate of the maximum wave height (Hmax) is calculated from the wave spectrum.
Fig2.2.4: Wave Energy associated with a given frequency E(f) plotted against wave frequency (f). The Equivalent Wave Height (EWH) associated with a given wave frequency is derived from the area under the curve for that frequency bin. The significant wave height Hs is derived from the total area beneath the curve.
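In code, the per-bin Equivalent Wave Height and the total Hs of Fig2.2.4 relate through the areas under E(f). A sketch for a 1D spectrum on uniform frequency bins (toy values; the helper names are hypothetical, not ECWAM's):

```python
import math

def equivalent_wave_heights(e_f, df):
    """Per-bin EWH: 4 * sqrt(E(f) * df), from the area of each bin."""
    return [4.0 * math.sqrt(e * df) for e in e_f]

def hs_from_1d_spectrum(e_f, df):
    """Significant wave height from the total area under E(f)."""
    return 4.0 * math.sqrt(sum(e_f) * df)

e_f = [0.2, 1.5, 0.8, 0.3]  # m^2 s, toy spectral densities
df = 0.01                   # Hz, bin width
ewh = equivalent_wave_heights(e_f, df)
hs = hs_from_1d_spectrum(e_f, df)
# Consistency check: Hs^2 equals the sum of the squared per-bin EWHs.
```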
A wide range of wave model output parameters are currently available. Parameters available as Web and ecCharts products are:
- Significant wave height
- Significant wave height of all waves with period shorter than 10 seconds
- Significant wave height of all waves with period between 10 and 12 seconds
- Significant wave height of all waves with period between 12 and 14 seconds
- Significant wave height of all waves with period between 14 and 17 seconds
- Significant wave height of all waves with period between 17 and 21 seconds
- Significant wave height of all waves with period between 21 and 25 seconds
- Significant wave height of all waves with period between 25 and 30 seconds
- Significant height of wind waves
- Significant height of total swell
- Mean wave period
- Mean period of wind waves
- Mean period of total swell
- Mean wave direction and height
- Mean direction and height of wind waves
- Mean direction and height of total swell
(note the convention for description of wave direction is the direction towards which the waves are moving)
- Probability of combined events of 10m wind speed and significant wave height
- Significant wave height probability
- Mean wave period probability
- Significant wave height percentile
- Mean wave period percentile
- Wave energy flux magnitude
- Wave energy flux mean direction and magnitude - important for assessment of the impact of the waves on coastlines and offshore structures.
(note the convention for description of wave energy flux direction is the direction towards which the waves are moving)
- 24-hour maximum significant wave height from M-climate at various percentiles. M-climate data are produced twice a week, on Mondays and Thursdays.
- (the above products are available at the following post-processing steps: 3-hourly from T+0h to T+144h; 6-hourly from T+150h to T+240h).
- Significant wave height extreme forecast index
- (product is available at post-processing steps: 12-hourly from T+0h to T+168h).
- ECMWF Medium-range Forecast Graphical Products:
- Significant wave height / Mean wave direction and height
- Significant height of total swell / Mean direction and height of total swell
- Significant height of wind waves / Mean direction and height of wind waves
- Mean wave period / Mean wave direction and height
- Mean period of total swell / Mean direction and height of total swell
- Mean period of wind waves / Mean direction and height of wind waves
- (the above products are available at the following post-processing steps: 12-hourly from T+0h to T+168h)
- Significant wave height probability >= a height (height under user control)
- Mean wave period probability >= a period (period under user control)
- (the above products are available at post-processing steps: 12-hourly from T+0h to T+360h)
- Significant wave height extreme forecast index (M-climate quantile under user control)
- (product is available at post-processing steps: 24-hourly from T+0h to T+168h).
Wavegrams are also available to show a time series of significant wave height, mean wave direction, and mean wave period for any sea location.
Fig2.2.5: Sequence of ocean wave forecasts. Significant wave height forecast (colours) and 10m wind (arrows) from data time 00UTC 4 January 2014, step 12 hours. Wave heights at 1.25m intervals as scale.
Swell propagates outwards well away from the source. Increased swell (e.g. reaching a coast) can give forewarning of a storm system well before any indication in the atmosphere. Fishermen have long used the arrival of long-period swell as an indication of an approaching storm even if the sky is clear, and surfers often benefit from significantly large swell in calm conditions well away from the swell source region.
Fig2.2.6: 180h forecast for significant wave height (contours) for all waves with periods between 21 and 25 seconds (shading), initialised at 00UTC 2 December 2016. The highest significant wave heights (contours) are still confined to the storm location in the Atlantic south of Iceland, while long waves from that storm are already affecting coastlines from Iberia to South Greenland (coloured).
Waves with different periods propagate with different speeds - longer periods travel fastest. These can be tracked through the forecast period and areas where different wave trains potentially interact can be identified.
Fig2.2.7: Chart showing forecast significant wave heights for several ranges of wave periods (Blue, 10-12s; Green, 12-14s; Yellow, 14-17s; Red, 17-21s). Forecast data based on data time 00Z 25 October 2017. The faster southward propagation of the long period waves over the shorter period waves from their source off NW Africa is clear.
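The period separation in Fig2.2.7 follows from deep-water dispersion: wave energy travels at the group speed c_g = gT/(4π), so longer-period swell outruns shorter-period swell. A back-of-envelope sketch (standard deep-water approximation, illustrative Python, not ECWAM code):

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def group_speed(period_s):
    """Deep-water group speed of wave energy: c_g = g * T / (4 * pi)."""
    return G * period_s / (4.0 * math.pi)

def travel_time_hours(distance_km, period_s):
    """Time for swell of a given period to cross a given distance."""
    return distance_km * 1000.0 / group_speed(period_s) / 3600.0

# 2000 km from the source, 20 s swell arrives roughly a day and a half
# before 10 s swell: the dispersion fan seen in period-banded charts.
t20 = travel_time_hours(2000.0, 20.0)  # ~36 h
t10 = travel_time_hours(2000.0, 10.0)  # ~71 h
```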
The Extreme Forecast Index (EFI) can be used to indicate the significance of forecast significant wave heights when compared with the range of wave heights that might usually be expected as defined by the M-climate.
Fig2.2.8: In this example the colours west of Ireland denote a low-point in wave heights, or potentially a form of 'weather window' for certain types of marine/shipping operations. Equally this EFI can signify periods with anomalously big waves (yellow to red shading).
Additional information using wind-sea and swell charts
Use of the mean wave height and direction is the simplest method of describing the forecast wave regime in a given area and it is easy to be beguiled into just using this output for forecasts to customers. However, the mean wave direction and height is made up of contributions from wind-sea and swell with different wave periods and they interact in a complex manner. It is important to investigate the forecast wind-sea and swell separately to give an understanding of likely sea conditions in an area (e.g. for a ship requiring a particularly smooth passage) or at a location (e.g. an oil rig).
When wind-sea and swell move in similar directions the wave heights can give information on the likely sea state as one is superimposed on the other, particularly where both have a significant and comparable wave height. On occasion the swell and wind-sea may be moving in opposite directions (an opposing sea) and wave heights give information on the likely rougher sea state to be expected. Often the wind-sea and swell are at right-angles (a cross sea) and where the wind-sea and swell heights are similar the sea can be very disturbed and difficult for shipping. An illustration is given in Figs2.2.9, 2.2.10 & 2.1.11.
Fig2.2.9: ecChart of mean wave direction (height is indicated by the length of the arrow). This gives an overview of wave conditions, with northwesterly waves (i.e. moving towards the northwest) indicated near point A and easterly waves (i.e. moving towards the east) near point C (by convention, wave directions are described as the direction they are travelling towards). However, it is important to investigate the contributions to the mean wave directions and heights from inspection of the wind-sea and swell at this time.
Fig2.2.10 (left): The forecast wind-sea has developed in response to the forecast winds around a depression in mid-Atlantic with waves moving northwestwards near point A and southeastwards near point B with arrow length suggesting wave heights of around 3m (wave heights are also available as charts, not shown here). Near point C wind-sea waves are relatively small and move towards the northeast.
Fig2.2.10 (right): The forecast total swell has developed in response to earlier weather systems elsewhere and has propagated across the Atlantic. Swell is moving northwards near point A and southwestwards near point C, with arrow length suggesting wave heights of around 2m (wave heights are also available as charts, not shown here). Near point B swell waves are relatively small and move towards the southeast.
Fig2.2.11(left): The forecast wind-sea (blue) and swell (black) shown on a single chart. To the north of point B the wind-sea and swell waves have a similar direction of travel; to the east of point B wind-sea dominates with only weak swell contribution but almost at a right-angle. Near points A and C the wind-sea and swell waves differ widely in direction but with similar heights (a cross sea).
Fig2.2.11(right): The forecast mean wave directions derived from the wind-sea and mean swell (as shown in Fig2.2.9) superimposed on the previous chart (Fig2.2.11 left), illustrating the important additional information gained from consideration of the wind-sea and mean swell forecasts. The mean wave directions give no indication that a sea passage to the west of Portugal is likely to be through confused, rough seas.
Considerations when using output from ECWAM
Not yet modelled is the interaction of waves with sea-surface currents. In particular areas (e.g. the Gulf Stream or Agulhas current), the current effect may give rise to localised changes of up to a metre in the wave height.
Sea ice is not static: it forms or extends with low air and/or sea-surface temperatures, and can move with winds and sea currents. NEMO passes information to ECWAM regarding the extent and movement of the sea ice field forecast by LIM2, allowing a more realistic definition of what is open sea throughout the forecast period. However, wave products near coasts, ice edges and, to a lesser extent, within small and enclosed basins (e.g. the Baltic Sea) may be of lower quality than for the open ocean, due to uncertain resolution of the land-sea mask or of the detail of the ice edge, and hence also of the boundary of the water area. Spurious areas or incorrect extent of ice will act like a coastline or island and stop waves from propagating correctly, possibly decaying the waves completely and incorrectly sheltering an otherwise exposed location. Equally, small islands may not be identified by the land-sea mask and hence allow waves to propagate unhindered; note, however, that the wave model has a scheme that attempts to represent the impact of unresolved islands on the global propagation of waves.
Additional Sources of Information
(Note: In older material there may be references to issues that have subsequently been addressed)
- Definitive information on the ECMWF WAVE MODEL is available in the IFS Documentation PART VII: ECMWF WAVE MODEL.
- Read an in depth view of the structure of ECWAM which gives a description of the theory behind the ECWAM model.
- Read about techniques regarding satellite measurement of wave height.
- Read more on ocean waves at ECMWF.
- Read more on ocean wave modelling.
- Model output parameters and full list of available ECWAM output.
- Watch a comprehensive lecture on ocean wave forecasting at ECMWF or air-sea Interaction and earth system modelling. | <urn:uuid:eb86bfbf-7f35-43dc-9d83-65da1a679685> | 3.0625 | 4,078 | Knowledge Article | Science & Tech. | 48.598895 | 95,575,364 |
Knowing how much DNA you have is fundamental to successful experiments. Without a firm number you can trust, the DNA input for subsequent experiments can lead you astray. Below are six reasons why DNA samples should be quantitated.
6. Saving time by knowing what you have rather than repeating experiments. Without quantitating your DNA, how certain can you be that the same amount of DNA is consistently added? Always using the same volume for every experiment does not guarantee the same DNA amount goes into the assay. Each time there is a new purified DNA sample, the chances that you have the same quantity as before are lessened. Consequently, without knowing the DNA concentration of the sample you are using, the amount of input DNA cannot be guaranteed and experiments may have to be repeated.
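The point can be made concrete with a little arithmetic: for a fixed target mass of DNA, the volume to pipette depends entirely on the measured concentration. A minimal sketch (the 100 ng target and the concentrations here are illustrative values, not from the text):

```python
def volume_for_target(target_ng, conc_ng_per_ul):
    """Volume (in µl) to pipette to deliver target_ng of DNA."""
    return target_ng / conc_ng_per_ul

# Two preps of "the same" sample, quantitated at different concentrations:
for conc in (25.0, 50.0):
    vol = volume_for_target(100.0, conc)  # aim for 100 ng of input DNA
    print(f"{conc} ng/µl -> pipette {vol:.1f} µl")
```

Pipetting a fixed 4 µl regardless of concentration would deliver 100 ng in the first case but 200 ng in the second, which is exactly the inconsistency that quantitating each prep avoids.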
5. Consistently using the same amount of DNA in your downstream assay. There are many variables that need to be taken into account during research: high-quality reagents that will help you test your hypothesis, correct timing of measurements and cells of the right density, among others. Make sure that the concentration of your DNA is not one of those variables. UV absorbance (A260nm) gives more of an estimate than a known quantity. A more sensitive quantitation method (e.g., using DNA-binding fluorescent dyes) offers a more solid measurement, ensuring you know how much DNA can be used for your experiments.
4. Knowing that the amount of DNA will be enough for several experiments. Repeating experiments because something did not work is frustrating. Needing to continuously isolate DNA each time you want to run an experiment is time consuming. By quantitating your purified DNA, you will know if it is sufficient for one or more experiments. Being able to skip the DNA extraction step for a few experiments saves you some time and effort so you can focus on getting your assay done and analyzing your results.
3. Learning that your sample is too dilute before the next experiment is run. Sometimes, extracting DNA from some sample types can be problematic. This means your yield may be lower than desired. Some quantitation methods may not be sensitive enough to tell you how much you have in your purified DNA sample. The commonly used UV absorbance method can detect 2 ng/µl in a perfectly pure sample. While some assays can use lower DNA input, others need larger amounts. Know that your DNA concentration is sufficient for the experiments you need to perform before you start. A sensitive quantitation method can determine if you have samples that are 0.5 ng/µl or even less.
2.5 Spending more time at the beach. You know how much DNA you have, you consistently add the same amount as input for your assays and your experiments work. You have time to relax and enjoy the outdoors rather than redoing experiments or repurifying DNA.
2. Certainty that you are using the right DNA species. For some applications, knowing if you have double- or single-stranded DNA can be important. Even if you use an extraction protocol that preferentially isolates one or the other, UV absorbance measurements cannot distinguish between ssDNA and dsDNA. Using a species-specific fluorescent dye like the QuantiFluor® ssDNA System and the QuantiFluor® dsDNA System would ensure you are only quantitating the desired DNA molecules of interest.
1. Ensuring that the results refer only to DNA and not to contaminating RNA or proteins. Using UV absorbance at 260nm to quantify DNA does not ensure that the results reflect only DNA. Many other organic compounds, including proteins, contaminants left over from purification methods such as phenol, and other nucleic acids (RNA, ssDNA and primers), also absorb at 260nm. Because these compounds contribute to the absorbance reading, you may grossly overestimate the sample concentration. Using sensitive fluorescent dyes with a two-point standard curve, such as the QuantiFluor® ONE dsDNA System with the Quantus™ Fluorometer, can accurately determine how much DNA you have with minimal interference from other nucleic acids and contaminants.
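As a sketch of how a two-point standard curve works in principle (the readings below are invented for illustration and are not the QuantiFluor® protocol): a blank and a single known standard define a straight line from fluorescence to concentration, and sample readings are interpolated along it.

```python
def two_point_curve(blank_rfu, std_rfu, std_conc):
    """Build a linear map from fluorescence (RFU) to concentration,
    assuming the response passes through the blank and one standard."""
    slope = std_conc / (std_rfu - blank_rfu)
    return lambda rfu: (rfu - blank_rfu) * slope

# Hypothetical readings: blank = 50 RFU, a 100 ng/µl standard = 10050 RFU.
conc_of = two_point_curve(blank_rfu=50.0, std_rfu=10050.0, std_conc=100.0)
print(conc_of(5050.0))  # a sample reading halfway between -> 50.0 ng/µl
```

Because the dye binds dsDNA specifically, readings from RNA or protein barely move off the blank, which is why the interpolated value tracks DNA alone.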
Learn more about nucleic acid quantitation by reviewing the slides from this webinar.
What Is Machine Learning and Why Is It Important?
Lately, it seems that every time you open your browser or casually scroll through a news feed, someone is writing about machine learning and its impact on both humans and the advancement of artificial intelligence. Machine learning has been highlighted in articles covering everything from Virtual Assistant solutions to self-driving cars and robots that can perform the same tasks as humans. A number of large companies are defining machine learning as ‘the future’, but what does that really mean?
What Is Machine Learning?
Machine learning is nothing new. The history, in fact, dates back over sixty years to when Alan Turing created the ‘Turing test’ to determine whether a computer had real intelligence. It can be argued, however, that the past 25-30 years have seen the biggest leaps and bounds in terms of advances in speech technology. But I’m getting ahead of myself here.
Think of machine learning like this. As a human, and as a user of technology, you complete certain tasks that require you to make a decision or classify something. For instance, when you read your inbox in the morning, you decide to mark that ‘Win a Free Cruise if You Click Here’ email as spam. How would a computer know to do the same thing? Machine learning is comprised of algorithms that teach computers to perform tasks that human beings do naturally on a daily basis.
The first attempts at artificial intelligence involved teaching a computer by writing a rule. If we wanted to teach a computer to make recommendations based on the weather, then we might write a rule that said: IF the weather is cloudy AND the chance of rainfall is greater than 50%, THEN suggest taking an umbrella. The problem with this approach, used in traditional expert systems, is that we don't know how much confidence to place on the rule. Is it right 50% of the time? More? Less?
For this reason, machine learning has evolved to mimic the pattern-matching that human brains perform. Today, machine learning algorithms teach computers to recognize features of an object. In these models, for example, a computer is shown an apple and told that it is an apple. The computer then uses that information to classify the various characteristics of an apple, building upon new information each time. At first, a computer might classify an apple as round, and build a model that states that if something is round, it’s an apple. Then later, when an orange is introduced, the computer learns that if something is round AND red, it’s an apple. Then a tomato is introduced, and so on and so forth. The computer must continually modify its model based on new information and assign a predictive value to each model, indicating the degree of confidence that an object is one thing over another. For example, yellow is a more predictive value for a banana than red is for an apple.
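The apple/orange/tomato progression can be caricatured in a few lines of code: keep per-label counts of observed features and score a label by how often its past examples showed those features. This is a toy sketch whose feature names and scoring rule are invented for illustration, not any production algorithm:

```python
from collections import defaultdict

class TinyClassifier:
    """Toy incremental classifier: counts how often each feature
    co-occurs with each label, then scores by relative frequency."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)

    def learn(self, features, label):
        self.totals[label] += 1
        for f in features:
            self.counts[label][f] += 1

    def confidence(self, features, label):
        # Product of per-feature frequencies among this label's examples.
        if self.totals[label] == 0:
            return 0.0
        score = 1.0
        for f in features:
            score *= self.counts[label][f] / self.totals[label]
        return score

clf = TinyClassifier()
clf.learn({"round", "red"}, "apple")
clf.learn({"round", "orange"}, "orange")

# "round" alone no longer separates the classes...
print(clf.confidence({"round"}, "apple"))          # 1.0
print(clf.confidence({"round"}, "orange"))         # 1.0
# ...so the model must weigh colour as well.
print(clf.confidence({"round", "red"}, "orange"))  # 0.0
```

Each new example (the tomato, and so on) simply updates the counts, so the model's confidences shift as more information arrives, which is the behaviour the paragraph describes.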
So Why Is Everyone Talking About Machine Learning?
These basic algorithms for teaching a machine to complete tasks and classify like a human date back several decades. The difference between now and when the models were first invented is that the more information is fed into the algorithms, the more accurate they become. The past few decades have seen massive scalability of data and information, allowing for much more accurate predictions than were ever possible in the long history of machine learning.
New techniques in the field of machine learning — that mostly involve combining pieces that already existed in the past — have enabled an extraordinary research effort in Deep Neural Networks (DNN). This has not been the result of a major breakthrough, but rather of much faster computers and thousands of researchers contributing incremental improvements. This has enabled researchers to expand what’s possible in machine learning, to the point that machines are outperforming humans for difficult but narrowly defined tasks such as recognizing faces or playing the game of Go.
Why Is This Important?
Machine learning has several very practical applications that drive the kind of real business results — such as time and money savings — that have the potential to dramatically impact the future of your organization. At Interactions, in particular, we see tremendous impact occurring within the customer care industry, whereby machine learning is allowing people to get things done more quickly and efficiently. Through Virtual Assistant solutions, machine learning automates tasks that would otherwise need to be performed by a live agent — such as changing a password or checking an account balance. This frees up valuable agent time that can be used to focus on the kind of customer care that humans perform best: high touch, complicated decision-making that is not as easily handled by a machine.
Machine learning has made dramatic improvements in the past few years, but we are still very far from reaching human performance. Many times, the machine needs the assistance of a human to complete its task. At Interactions, we have deployed Virtual Assistant solutions that seamlessly blend artificial with true human intelligence to deliver the highest level of accuracy and understanding.
Published at DZone with permission of Patrick Haffner . See the original article here.
Opinions expressed by DZone contributors are their own. | <urn:uuid:ac79cc21-28f3-4a99-8479-4edbeede6d89> | 2.875 | 1,168 | Truncated | Science & Tech. | 39.990162 | 95,575,375 |
Published on July 11th, 2012 | by pressroom0
Grassroots Approach to Conservation Developed
A new strategy to manage invasive species and achieve broader conservation goals is being tested in the Grand River Grasslands, an area within the North American tallgrass prairie ecoregion. A University of Illinois researcher along with his colleagues at Iowa State and Oklahoma State Universities enlisted private landowners in a grassroots community-building effort to establish a more diverse landscape for native wildlife.
The Grand River Grasslands has three main problems that pose challenges to conservation efforts: invasive juniper trees, tall fescue, and heavy grazing of cattle. U of I ecologist Jim Miller and his team developed a new model for conservation that begins by raising landowners’ awareness of these problems and providing strategies, such as moderate livestock grazing and regularly scheduled controlled burns. Miller and his team identified landowners who are interested in trying something different — who will, in turn, transfer their newfound knowledge and understanding to larger groups of people in the region.
“We conducted a survey and learned that people recognize burning as a legitimate management tool but don’t have experience with it,” Miller said. “Most of the landowners have never participated in a controlled burn, so we’ve essentially lost a fire culture in much of that part of the country.”
Miller’s team invited landowners to hands-on educational field days at nearby nature reserves to show them how grazing and burning techniques work. They got experience with drip torches and learned how to work with the wind and moisture levels.
“We followed that up with a burn at one of the landowner’s savannahs that he was trying to restore,” Miller said. “It went really well and was a key step for us in our process because now we’re getting landowners to try these new strategies on their own properties.”
Miller said the next step in the model is to encourage the landowners to champion these new practices to the larger community. “They go down to the coffee shop and meet their neighbors and friends and tell them about the success they’re having with the new practices to control the juniper trees and tall fescue and how well their cattle are doing on these pastures. The neighbors start to pick up on this, and then we have the whole process repeat itself with a larger group of landowners.
“If we’re successful with this, we’ll start to see changes, not just on individual properties here and there for key landowners but over the whole landscape or the whole region,” he said.
According to Miller, the fastest-growing group of landowners in the area is non-traditional. They don’t live in the region or come from a farming background, but they instead buy land to hunt deer, turkey, quail, or maybe just to birdwatch. He said that on land with intensive cattle grazing, the cedars can be kept at bay.
“Without burning or grazing, the cedars will take over,” Miller said. “Trees seem like a good thing to wildlife enthusiasts, but they don’t see that their land will go from being an open grassland to a closed-canopy cedar stand in 20 to 25 years. Under those conditions, there are no deer, no turkey, no quail – it’s a biological desert, and it’s too late to do much with it. We think we can make the most inroads with the non-traditional owners.”
Juniper trees are invasive, largely due to fire suppression. Junipers are a fire-intolerant, woody plant. This particular species of juniper is also called eastern red cedar.
Although cedar may sound appealing for patio furniture, decking or biofuels, it's not: Miller said there is no market for this type of tree. The trees produce a prodigious seed rain that facilitates rapid colonization of an area when left unchecked. Using a survey from aerial photography dating back to 1983, Miller estimated a 3 percent increase in cedar coverage per year.
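That 3 percent annual figure compounds. A quick back-of-the-envelope check (illustrative arithmetic, not data from the article) shows coverage doubling roughly every 23 years at that rate, consistent with the 20-to-25-year window Miller describes for an unmanaged grassland closing over:

```python
import math

def doubling_time(annual_rate):
    """Years for coverage to double under compound annual growth."""
    return math.log(2.0) / math.log(1.0 + annual_rate)

print(round(doubling_time(0.03), 1))  # ~23.4 years at 3% per year
```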
Tall fescue, an exotic invasive plant that forms a monoculture, greens up early in the spring, making it difficult to burn.
"Heavy stocking of cattle is an issue," Miller said. "Cattle quickly reduce available forage to the point that some ranchers feed hay by July and August. That's not quality habitat for grassland birds, which have seen the steepest declines in North America since we've been monitoring bird populations."
“There are at least two things necessary for this model to work: ecological potential in the landscape and some level of social readiness,” Miller said. “In the Grand River Grasslands, there is ecological potential, but landowners don’t all recognize that eastern red cedar trees are invasive. We’re working on that.”
Miller says that with conservation, you need a plurality, a variety of approaches, because one size doesn’t fit all.
“We’re providing a model or a road map for a different way of doing things in conversation,” Miller said. “We need to go beyond the traditional jewels-in-the-crown or fortress conservation models, characterized by national parks and other set-asides. Paying people to take their land out of production and creating state and national parks or reserves just aren’t enough. This model may not work everywhere, but in some landscapes we think this can work, and we’re trying to provide an initial example to demonstrate how it could work.
“It’s meant to be a dialogue between, our team, landowners, and other resource management professionals, such as biologists who work for the Department of Natural Resources — not us telling them what they need to do,” he said.
Source: AAAS EurekAlert
Photo: Ryan Harr. © Ecological Society of America
« Considerations in Financing and Building a New Green Home How to be an Eco Warrior without Leaving Your Home » | <urn:uuid:4734fdfa-a39f-4690-a5f6-ef3f8c9138cb> | 2.65625 | 1,286 | News Article | Science & Tech. | 44.524702 | 95,575,377 |
Broadcaster: BBC One
Duration: 4 mins
Presenter Anita Rani joins citizen scientist in Leicestershire collecting samples for the Leicester Herbarium. As well as the more traditional pressing of specimens, the University now runs Genebank 55, an archive of seeds for potential future use.
Broadcaster: BBC One
Duration: 60 mins
Review by Tamara Ozog and Chris Willmott
When you think of predator–prey relationships, the first image that springs to mind is often that of a big cat chasing some kind of antelope before dragging it to the ground. It is perhaps no surprise therefore, that this first episode of the BBC series The Hunt starts and finishes with a chase of that kind, specifically a leopard tracking an impala, and a cheetah hunting a gazelle.
The real value of this programme, however, is that it doesn’t simply settle for those stories and includes a variety of different predator-prey encounters. In total the programme features nine sections, eight describing different sorts of interactions and, as is increasingly common with wildlife programmes, a final section on how they went about capturing some of the footage featured earlier.
The hunts covered include:
Genetic modification of aubergine in Bangladesh has dramatically reduced the need to use pesticides
Broadcaster: BBC 1
Review by Prof John Bryant (University of Exeter)
Fierce opposition to the growth of GM crops, especially in the EU (including the UK), goes back to the late 1990s, shortly after the first successful commercialisation of crops bred by these techniques. One of the most unfortunate casualties of this opposition is Golden Rice™, bred by GM techniques to provide extra vitamin-A. Its use in SE Asia would save the eyesight of tens of thousands of children and the lives of several thousand each year. However, its uptake into agriculture has been opposed by anti-GM activists at every step such that in 2015, 16 years after this development was announced to the world, the variety is still not available to Asian farmers. Nevertheless, GM-bred crops are now grown in 28 countries (the programme says 27, which was the total in 2013) on a total area of 182 million hectares and, as pointed out in the programme, the countries in which these crops are grown have not suffered environmental disasters nor have there been any detrimental effects on human or animal health.
This brief background leads us to the theme of the programme which asks whether two newer GM-bred crops may be ‘game-changers’ in respect of public attitudes. The first is insect-resistant aubergines which are now being grown in Bangladesh (where the local name for aubergine is brinjal). These plants carry the Bt-toxin gene, already widely used across the world in insect-resistant maize and cotton. Farmers growing Bt-brinjal are enthusiastic about it: the development reduces their costs, reduces crop losses and above all reduces the use of insecticides which, because of poor safety measures, cause harm to farmers’ health.
A live pond-dip failed to locate a Great Crested Newt
Broadcaster: Channel 4
The conflict between housing and infrastructure development and protected species such as the Great Crested Newt are often mentioned in the news. On this occasion, C4 News correspondent Tom Clarke visits Dorset prompted by concerns that changes to the European Habitats Directive might be watering down protection of endangered species.
Migration of cuckoos has been tracked for four years
Broadcaster: Radio 4
Genre: Radio, interview
A four minute interview with Chris Hewson from the British Trust for Ornithology about satellite tracking of a cuckoo, also called “Chris”, over the past four years. Chris (the cuckoo) has visited 28 countries in that time and has made four journeys over the Sahara. Prior to this project, very little was known about the migration of cuckoos which are a species in serious decline. Cuckoos seem to all congregate in the Congo, but get there via two different routes – via Italy or via Spain. It appears that going via the Spanish route is much more perilous to the birds.
More details of the BTO project can be found on their website (this link).
British zoos have paid for GPS collars to learn more about leopards and cheetahs in Namibia
Broadcaster: BBC 1
This 3 minute package from BBC Breakfast (30th April 2015) looks at efforts to protect leopards and cheetahs in Namibia from farmers who consider their livestock to be under threat. Chester and Colchester Zoos have paid for GPS collars to track the big cats and learn more about their lifestyles. The research is uncovering the fact that some cattle DO get attacked, but in relatively small numbers. Farmers are (hopefully) being persuaded to stop shooting and trapping the big cats, and adopt other protective strategies instead – including “alarm donkeys”.
A version of this story is also currently on the BBC website.
Broadcaster: BBC 1
Genre: Magazine, factual
This 7-minute clip from Countryfile looks at research into honeycomb worms (Sabellaria alveolata) being conducted by scientists from Bangor University. Would be of interest to students of marine ecology.
See this link from the Marine Reserves Coalition for more detail.
Britain’s most famous TV naturalist turns his attention to plants
Year: 2013 & 2014 (originally shown 1995)
The classic six-part series featuring David Attenborough – no notes yet, please feel free to offer recommendations for teaching using these programmes.
Episodes (50 mins each)
- Travelling http://bobnational.net/record/253535
- Growing http://bobnational.net/record/253536
- Flowering http://bobnational.net/record/289579
- The Social Struggle http://bobnational.net/record/289580
- Living Together http://bobnational.net/record/289581
- Surviving http://bobnational.net/record/289582
This programme focused on the emerging potential for 3D printing of organic material
Broadcaster: BBC Radio 4
Genre: Radio, Documentary
Howard Stableford anchors this 30 minute documentary on the growing applications of 3D printing to bioscience.
As long ago as 2005, a bald eagle had a damaged beak repaired using 3D printing. Stableford talks to a team working on 3D-printed seawall and reef structures with nooks and crannies suitable for various organisms to live in, in a way that is not possible with more typical engineered materials. It is like producing a city for the marine life expected to live in that area; the reefs have a natural appearance and replace habitat lost in previous developments. It may be possible to adapt existing processes to work with living tissue.
Bio-printing involves materials that incorporate “viable living cells”. This is not about printing tissue directly, but is more an extension of existing tissue-engineering approaches, in which cells are persuaded to develop into tissues. A temporary scaffold is used to direct the required shape. Currently a “soup” of cell suspension is introduced into a scaffold. 3D bio-printing would instead incorporate cells into a scaffold in new orientations, rather than actually printing a tissue. The question is posed whether parallel advances in 3D printing and DNA-manipulation techniques might allow us to reach a point where we could print an organism.
Of course this would be far from trivial. If you knew the entire internal 3D layout of an organism you might be able to print it, but this is unlikely. Making an egg instead, provided with the relevant genetic information and nutrients, would be more feasible, but even this is a long way off.
The strength of this programme is the enthusiasm of those willing to push boundaries to see what is possible with these emerging technologies. However, although the programme overall was thought-provoking, floating possibilities, there was little solid content. In that sense it was rather reminiscent of an old episode of Tomorrow’s World, which, of course, Stableford also used to present.
Mystery of how sperm whales hold their breath for 90 minutes solved as scientists discover the key is unique protein in their blood
- Huge sea mammals often dive underwater for over an hour for food
- Team of biologists say they have adapted protein myoglobin in their blood
- It becomes charged and is able to store more oxygen
The sperm whale is able to hold its breath underwater for up to 90 minutes because of a unique protein in its blood, an international team of scientists has discovered.
The huge mammal will often dive up to 3km deep under water for more than an hour in search of food.
The world record for the longest time a human has held their breath underwater is 19 minutes, which was set by Swiss freediver Peter Colat in 2010.
Gulp: The sperm whale is renowned for its ability to dive under water for long periods at a time in search of food
Unique: Biologists believe the sperm whale has an adapted protein in its blood which enables it to dive for so long
Biologists at the University of Manitoba and the University of Liverpool revealed today that they have identified a distinctive molecular signature in the oxygen-binding protein myoglobin.
Deep-diving mammals have a much higher concentration of myoglobin than land-based mammals such as humans.
But until now, little was known about how it is adapted in champion divers such as the sperm whale.
Biologist Kevin Campbell says that, essentially, the proteins become more electronically charged which forces them to repel each other.
This helps them carry more oxygen.
'The trick, it appears, is to evolve a protein with a strong positive surface charge,' he said.
'The resulting molecular repulsion allows the oxygen-storing myoglobin of divers to accrue in much higher concentrations.
'By mapping this molecular signature onto the family tree of mammals, we were able to reconstruct the muscle oxygen stores in extinct ancestors of today’s diving mammals,' added team leader Michael Berenbrink from the University of Liverpool.
Theory: The biologists believe the whale's protein myoglobin becomes charged enabling it to store more oxygen
Scott Mirceta, who worked in both labs, added: 'We are really excited by this new find, because it allows us to align the anatomical changes that occurred during the land-to-water transitions of mammals with their actual physiological diving capacity.'
Mr Campbell added that the myoglobin characteristic has even been found in the DNA of land-dwelling mammals which once lived in the sea.
He said: 'What’s more remarkable is that telltale signs of this novel attribute remain in the DNA of terrestrial mammals with an aquatic ancestry, such as spiny echidnas, subterranean moles, and even elephants, for which an amphibious past has long been suggested.'
Mr Berenbrink suggested that the evidence only strengthens long-held theories of evolution.
'This finding not only illustrates the strength of evolutionary theory, but, for the first time, allows us to put ‘flesh onto the bones’ of these long-extinct divers,' he concluded.
It is hoped the research could help improve understanding of a number of human diseases where protein aggregation is a problem, such as Alzheimer’s and diabetes.
A new, wide-ranging survey that compares the past and present condition of oyster reefs around the globe finds that more than 90 percent of former reefs have been lost in most of the "bays" and ecoregions where the prized molluscs were formerly abundant.
In many places, such as the Wadden Sea in Europe and Narragansett Bay, oysters are rated "functionally extinct," with fewer than 1 percent of former reefs persisting. The declines are in most cases a result of over-harvesting of wild populations and disease, often exacerbated by the introduction of non-native species.
Oysters have fueled coastal economies for centuries, and were once astoundingly abundant in favored areas. The new survey is published in the February issue of BioScience, the journal of the American Institute of Biological Sciences. It was conducted by an international team led by Michael W. Beck of The Nature Conservancy and the University of California, Santa Cruz. Beck's team examined oyster reefs across 144 bays and 44 ecoregions. It also studied historical records as well as national catch statistics. The survey suggests that about 85 percent of reefs worldwide have now been lost. The BioScience authors rate the condition of oysters as "poor" overall.
Most of the world's harvest of native oysters comes from just five ecoregions in North America, but even there, the condition of reefs is "poor" or worse, except in the Gulf of Mexico. Oyster fisheries there are "probably the last opportunity to achieve large-scale oyster reef conservation and sustainable fisheries," Beck and his coauthors write. Oysters provide important ecosystem services, such as water filtration, as well as food for people. The survey team argues for improved mapping efforts and the removal of incentives to over-exploitation. It also recommends that harvesting and further reef destruction should not be allowed wherever oysters are at less than 10 percent of their former abundance, unless it can be shown that these activities do not substantially affect reef recovery.
After noon EST on 3 February and for the remainder of the month, the full text of the article will be available for free download through the copy of this Press Release available at http://www.aibs.org/bioscience-press-releases/
BioScience, published monthly, is the journal of the American Institute of Biological Sciences (AIBS). BioScience publishes commentary and peer-reviewed articles covering a wide range of biological fields, with a focus on "Organisms from Molecules to the Environment." The journal has been published since 1964. AIBS is an umbrella organization for professional scientific societies and organizations that are involved with biology. It represents some 200 member societies and organizations with a combined membership of about 250,000.
The complete list of peer-reviewed articles in the February 2011 issue of BioScience is as follows:
Oyster Reefs at Risk and Recommendations for Conservation, Restoration, and Management. Michael W. Beck and colleagues.
Sustainability Challenges of Phosphorus and Food: Solutions From Closing the Human Phosphorus Cycle. Daniel L. Childers, Jessica Corman, Mark Edwards, and James L. Elser.
Is Wildlife Going to the Dogs? Impacts of Feral and Free-roaming Dogs on Wildlife Populations. Julie K. Young, Kirk A. Olson, Richard P. Reading, Sukh Amgalanbaatar, and Joel Berger.
Perceptions of Strengths and Deficiencies: Disconnects between Graduate Students and Prospective Employers. Marshall D. Sundberg and colleagues.
The Short- and Long-term Effects of Fire on Carbon in Dry, Temperate Forest Systems of the United States. Matthew D. Hurteau and Matthew L. Brooks.
Climate Change and Biosphere Response: Unlocking the Collections Vault. Kenneth G. Johnson and colleagues.
Tim Beardsley | EurekAlert!
A purely functional function has no side effects. A purely functional programming language would not allow functions with side effects to be defined. But wait, there's more! Because functions and algorithms cannot have side effects, variables are immutable and persistent. This persistence is not the same as disk storage or serialization; it means that previous versions of a given value can be retained by the language.
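The persistence described above can be sketched in TypeScript (an illustrative example, not taken from the original text): prepending to a persistent list builds a new version that shares the old cells instead of mutating them, so earlier versions of the value remain usable.

```typescript
// Sketch of persistence: a persistent singly linked list. "Updating"
// it creates a new version; the old version is never mutated.
type List<T> = { readonly head: T; readonly tail: List<T> } | null;

const cons = <T>(head: T, tail: List<T>): List<T> => ({ head, tail });

// Sum the elements without mutating anything.
const sum = (xs: List<number>): number =>
  xs === null ? 0 : xs.head + sum(xs.tail);

const v1 = cons(2, cons(3, null)); // version 1: [2, 3]
const v2 = cons(1, v1);            // version 2: [1, 2, 3]; v1 is still intact
```

Because `v2` shares `v1`'s cells rather than copying or overwriting them, both versions can be read at any time.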
Keywords: Object Oriented Programming; Functional Programming; Functional Language; Imperative Language; Functional Programming Language
Washington: Researchers have found evidence of another black hole in a globular cluster known as M62.
Laura Chomiuk, a team member and MSU assistant professor of physics and astronomy, said that black holes really may be common in globular clusters.
Black holes are stars that have died, collapsed into themselves and now have such a strong gravitational field that not even light can escape from them.
The globular cluster M62 is located in the constellation Ophiuchus, about 22,000 light years from Earth.
Until recently, astronomers had assumed that the black holes did not occur in globular clusters, which are some of the oldest and densest collections of stars in the universe.
Stars are packed together a million times more closely than in the neighborhood of our sun.
There are so many stars in such a condensed area that they often interact with one another. Massive black holes would have the most violent encounters, "sling-shotting" each other out of the cluster.
Last year's discovery of a pair of black holes in a cluster was especially surprising, Chomiuk said. It had been thought that if two black holes dwelled at the center, they would regularly encounter one another until one shoved the other out.
The study has been published in the recent issue of Astrophysical Journal. | <urn:uuid:5135af37-44b8-4bdd-a6ce-5711abbddfa4> | 3.609375 | 269 | News Article | Science & Tech. | 45.904091 | 95,575,448 |
A newly developed laser technology has enabled physicists in the Laboratory for Attosecond Physics (jointly run by LMU Munich and the Max Planck Institute of Quantum Optics) to generate attosecond bursts of high-energy photons of unprecedented intensity. This has made it possible to observe the interaction of multiple photons in a single such pulse with electrons in the inner orbital shell of an atom.
In order to observe the ultrafast electron motion in the inner shells of atoms with short light pulses, the pulses must not only be ultrashort, but very bright, and the photons delivered must have sufficiently high energy. This combination of properties has been sought in laboratories around the world for the past 15 years.
Physicists at the Laboratory for Attosecond Physics (LAP), a joint venture between the Ludwig-Maximilians-Universität Munich (LMU) and the Max Planck Institute of Quantum Optics (MPQ), have now succeeded in meeting the conditions necessary to achieve this goal. In their latest experiments, they have been able to observe the non-linear interaction of an attosecond pulse with electrons in one of the inner orbital shells around the atomic nucleus.
In this context, the term ‘non-linear’ indicates that the interaction involves more than one photon (in this particular case two are involved). This breakthrough was made possible by the development of a novel source of attosecond pulses. One attosecond lasts for exactly one billionth of a billionth of a second.
The door for observing the ultrafast motion of electrons deep inside atoms has been opened. Physicists in the Laboratory for Attosecond Physics (LAP) at the LMU Munich have developed a technology that allows them to generate intense attosecond pulses. These pulses can be used to follow the motion of electrons within the inner shells of atoms in real time by freezing this motion at attosecond shutter speeds.
The experimental procedure used to film electrons in motion makes use of the ‘pump-probe’ approach. Electrons within a target atom are first excited by a photon contained within the pump pulse, which is then followed after a short delay by a second photon in a probe pulse. The latter essentially reveals the effect of the pump photon.
In order to implement this procedure, the photons must be so tightly packed that a single atom within the target can be hit by two photons in succession. Moreover, if these photons are to have a chance of reaching the inner electron shells, they must have energies in the upper end of the extreme ultraviolet (XUV) spectrum. No research group has previously succeeded in generating attosecond pulses with the required photon density in this spectral region.
The technology that has now made this feat possible is based on the upscaling of conventional sources of attosecond pulses. A team led by Prof. Laszlo Veisz has developed a novel high-power laser capable of emitting bursts of infrared light – each consisting of only a few oscillation cycles – which contain 100 times as many photons per pulse as in conventional systems. These pulses, in turn, allow the generation of isolated attosecond pulses of XUV light containing 100 times more photons than in conventional attosecond sources.
In a first series of experiments, the high-energy attosecond pulses were focused on a stream of xenon gas. Photons that happen to interact with an inner shell of a xenon atom eject electrons from that shell and ionize the atom. By using what is known as an ion microscope to detect these ions, the scientists were able, for the first time, to observe the interaction of two photons confined in an attosecond pulse with electrons in the inner orbital shells of an atom. In previous attosecond experiments, it has only been possible to observe the interaction of inner shell electrons with a single XUV photon.
“Experiments in which it is possible to have inner shell electrons interacting with two XUV attosecond pulses are often referred to as the Holy Grail of attosecond physics. With two XUV pulses, we would be able to ‘film’ the electron motion in the inner atomic shells without perturbing their dynamics,” says Dr. Boris Bergues, the leader of the new study. This represents a significant advance on attosecond experiments involving excitation with a single attosecond XUV photon. In those experiments, the resulting state was ‘photographed’ with a longer infrared pulse, which itself had a significant influence on the ensuing electron motion.
“The electron dynamics in the inner shells of atoms are of particular interest, because they result from a complex interplay between many electrons that interact with each other,” as Bergues explains. “The detailed dynamics resulting from these interactions raise many questions, which we can now address experimentally using our new attosecond source.”
In the next step, the physicists plan an experiment in which they will time resolve the interaction by splitting the high-intensity attosecond pulse into separate pump and probe pulses.
The successful application of non-linear optics in the attosecond domain to probe the behaviour of electrons in the inner orbital shells of atoms opens the door to a new understanding of the complex multibody dynamics of subatomic particles. The ability to film the motion of electrons deep in the interior of atoms promises to reveal much about a mysterious realm that has remained hidden from our gaze.
Thorsten Naeser
After the interaction of a xenon atom with two photons from an attosecond pulse (purple), the atom is ionized and multiple electrons (green balls) are ejected. This two-photon interaction is made possible by the latest achievements in attosecond technology.
B. Bergues, D. E. Rivas, M.Weidmann, A. A. Muschet, W. Helml, A. Guggenmoos, V. Pervak, U. Kleineberg, G. Marcus, R. Kienberger, D. Charalambidis, P. Tzallas, H. Schröder, F. Krausz, and L. Veisz
Table-Top Nonlinear Optics in the 100-eV Spectral Region
Optica, Vol. 5, Issue 3, pp. 237-242 (2018); doi.org/10.1364/OPTICA.5.000237
Dr. Boris Bergues
Laboratory for Attosecond Physics
Department of Physics, LMU Munich and
Max Planck Institute of Quantum Optics
85748 Garching, Germany
Phone: +49 (0)89 32 905 -330
Prof. Dr. Laszlo Veisz
Relativistic Attosecond Physics Laboratory
Department of Physics
Linnaeus vag 24
SE-90187 Umea, Sweden
Phone: +46 (0)90 786 66 62
Dr. Olivia Meyer-Streng
Press & Public Relations
Max Planck Institute of Quantum Optics
85748 Garching, Germany
Phone: +49 (0)89 / 32 905 - 213
Dr. Olivia Meyer-Streng | Max-Planck-Institut für Quantenoptik
Notions on Hyperbolic Partial Differential Equations
In this chapter we study some elementary properties of a class of hyperbolic Partial Differential Equations (PDEs). The selected aspects of the equations are those thought to be essential for the analysis of the equations of fluid flow and the implementation of numerical methods. For general background on PDEs we recommend the book by John and particularly the one by Zachmanoglou and Thoe. The discretisation techniques studied in this book are strongly based on the underlying physics and mathematical properties of PDEs. It is therefore justified to devote some effort to fundamentals of PDEs. Here we deal almost exclusively with hyperbolic PDEs, and hyperbolic conservation laws in particular. There are three main reasons for this: (i) the equations of compressible fluid flow reduce to hyperbolic systems, the Euler equations, when the effects of viscosity and heat conduction are neglected; (ii) numerically, it is generally accepted that the hyperbolic terms of the PDEs of fluid flow are the terms that pose the most stringent requirements on the discretisation techniques; (iii) the theory of hyperbolic systems is much more advanced than that for more complete mathematical models, such as the Navier-Stokes equations. In addition, there has in recent years been a noticeable increase in research and development activities centred on the theme of hyperbolic problems, as these cover a wide range of areas of scientific and technological interest. A good source of up-to-date work in this field is found in the proceedings of the series of meetings on Hyperbolic Problems. Other relevant publications are those of Godlewski and Raviart, Hörmander, and Tveito and Winther.
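For concreteness, a standard statement of a one-dimensional system of conservation laws and the usual hyperbolicity condition (a textbook definition, not quoted from this chapter) reads:

```latex
\begin{equation*}
  \partial_t U + \partial_x F(U) = 0, \qquad U(x,t) \in \mathbb{R}^m,
\end{equation*}
% where F is the flux function. The system is hyperbolic if the
% Jacobian A(U) = \partial F / \partial U has m real eigenvalues
% \lambda_1 \le \dots \le \lambda_m (the characteristic speeds)
% together with a complete set of linearly independent eigenvectors.
```

The Euler equations of compressible flow mentioned above are a particular case of this form.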
Keywords: Shock Wave; Hyperbolic System; Rarefaction Wave; Riemann Problem; Characteristic Speed
Constructing an Object-Oriented System
This chapter takes you through the design of a simple object-oriented system without considering implementation issues or the details of any particular language. Instead, this chapter illustrates how to use object orientation concepts to construct a software system. We first describe the application and then consider where to start looking for objects, what the objects should do and how they should do it. We conclude by discussing issues such as class inheritance, and answer questions such as “where is the structure of the program?”.
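As a rough sketch of these concepts (the names below are invented for illustration and are not the chapter's actual design), an instance variable holds per-object state, a method defines what the object does, and a subclass inherits and specialises behaviour:

```typescript
// Hypothetical sketch, not the chapter's design: an object's state
// lives in instance variables; its behaviour lives in methods.
class WaterBottle {
  // instance variable: each WaterBottle object keeps its own level
  private level: number = 0;

  constructor(private readonly capacity: number) {}

  fill(amount: number): void {
    // the object decides how to respond to the "fill" message
    this.level = Math.min(this.capacity, this.level + amount);
  }

  isFull(): boolean {
    return this.level >= this.capacity;
  }
}

// Inheritance: a subclass reuses the parent's behaviour and can
// specialise it (here, fixing the capacity).
class InsulatedBottle extends WaterBottle {
  constructor() {
    super(500);
  }
}
```

The "structure of the program" then emerges from how such objects send messages to one another rather than from a top-down procedure.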
Keywords: Data Item; Water Bottle; Relay Status; Instance Variable; Switch Setting
The new paper, by a team of scientists from the U.S. Geological Survey (USGS), Woods Hole Oceanographic Institution (WHOI), University of Alaska, University of Maryland, Canadian Wildlife Service and the US Forest Service, refutes point-by-point a widely publicized critique of polar bear population predictions.
The new rebuttal reinforces the reports written by the scientists and accepted by the Department of Interior in its May 2008 decision to list polar bears as a threatened species on the U.S. Endangered Species Act.
“The decision to list the polar bear as threatened was politically charged, and the scientific research on which it was based attracted some criticisms. Our new study shows that the critique is incorrect and based on misconceptions about climate models, the Arctic environment, polar bear biology, and statistical and mathematical methods,” said WHOI biologist Hal Caswell, an author on two of the USGS reports and of the rebuttal.
The rebuttal was published in the journal Interfaces online on April 22, 2009, and will be published in the July-August print edition. The journal recently made the article available for free to the public.
In 2007 when the Department of the Interior was considering listing the polar bear under the Endangered Species Act, it asked the USGS to assemble an international team to analyze information on polar bear populations. The team included Hal Caswell, a mathematical ecologist who specializes in developing population models.
Caswell, along with former WHOI postdoctoral investigator Christine Hunter, and researchers from the USGS and other universities and agencies, developed new models that incorporated USGS-collected information about polar bears' mortality rates, birth rates, life cycles, and habitats. They coupled these models to projections of Arctic climate changes, especially forecasts of sea ice conditions. They calculated the interplay of all these factors – some 10,000 simulations – to estimate the probabilities of future polar bear population growth or decline. Through their study, Caswell, Hunter, and their colleagues were able to link Arctic sea ice directly to population growth.
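The general shape of such a Monte Carlo projection can be sketched as follows. This is illustrative only, NOT the USGS/WHOI model: the growth rate, its variability, and the link to sea ice are hypothetical placeholders standing in for the real demographic and climate inputs.

```typescript
// Toy Monte Carlo projection: each run draws a yearly growth rate
// (whose mean would, in the real study, depend on sea-ice forecasts)
// and we estimate the probability of population decline.
function projectDecline(
  runs: number,
  years: number,
  meanGrowth: number, // hypothetical mean log growth rate per year
  sd: number,         // hypothetical year-to-year variability
  rand: () => number = Math.random
): number {
  let declines = 0;
  for (let i = 0; i < runs; i++) {
    let pop = 1.0; // population relative to today
    for (let t = 0; t < years; t++) {
      // crude standard-normal draw via sum of 12 uniforms (Irwin-Hall)
      let z = -6;
      for (let k = 0; k < 12; k++) z += rand();
      pop *= Math.exp(meanGrowth + sd * z);
    }
    if (pop < 1.0) declines++;
  }
  return declines / runs; // fraction of simulations ending in decline
}
```

Running thousands of such simulations under different sea-ice scenarios is what lets a study attach probabilities to growth or decline rather than a single forecast.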
The USGS-led group presented its reports in fall 2007, and in May 2008, the Department of Interior listed the polar bear as a threatened species under the US Endangered Species Act.
Following that listing, a critique of the USGS reports was published in the Sept.-Oct. 2008 issue of Interfaces, a journal that specializes in management and operations research. Its lead author, Scott Armstrong, a professor of marketing at the University of Pennsylvania, is a key architect of a set of principles on the science of forecasting, which are intended to provide guidance on which methods to use under different circumstances. The principles were derived from such fields as economics, finance, management, politics, medicine, and weather.
In performing its “audit” of the USGS reports, Armstrong’s group applied its set of forecasting principles and claimed that nearly 70 percent of them had been contravened by the USGS reports. The authors of the forecasting audit include a physicist and two economists but do not include biologists, oceanographers or climate scientists.
“We debated writing something short outlining why we don’t think their criticism are valid,” said Caswell. “After going through their report, however, we decided we needed to do a rebuttal of this, and in the end, we went point by point to refute their criticism.”
Caswell continued: “We began by explaining why the sea ice habitat of polar bears is declining and showing how climate models, outputs from which we used as inputs to our analyses, are reliable for forecasting the future climate. Then we showed how each specific criticism of the Armstrong team was either wrong or misleading. Finally, we took a look at their principles of forecasting, and found they are too ambiguous and subjective to be used as a reliable basis for auditing scientific investigations.”
The rebuttal concludes that the audit offers no valid criticism of the USGS conclusion that global warming poses a serious threat to the future welfare of polar bears and that it only serves to distract from reasoned public policy debate.
In the meantime, the USGS continues to collect data in Alaska and Caswell says he will be involved in further analyses of the polar bear populations based on the new data.
The Woods Hole Oceanographic Institution is a private, independent organization in Falmouth, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the oceans and their interaction with the Earth as a whole, and to communicate a basic understanding of the oceans’ role in the changing global environment.
WHOI Media Relations | EurekAlert!
Working With Types in TypeScript
If you're looking at getting started with TypeScript, read on to learn about its variable declarations and type annotations.
Declare Variables in TypeScript:
Before using type annotations, let's take a look at declaring variables using var, let, and const:
var is function-scoped: it is available throughout the enclosing function block and is hoisted to the top of the function.
In the above example, we declared the variable name inside the if condition, but we are able to access the same variable outside of the if condition, because var is available throughout the function block.
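The original code listing is not present in this copy of the article; a reconstruction consistent with the description above might look like this:

```typescript
// Reconstructed sketch (the original listing is missing): a variable
// declared with `var` inside an if-block is still visible outside it,
// because `var` is function-scoped, not block-scoped.
function greet(): string {
  if (true) {
    var name = "Anoop"; // declared inside the block...
  }
  return name; // ...but accessible here
}
```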
let is accessible in the nearest enclosing block (or globally if declared outside of a function). If we try to access the variable outside of the block, we will get a reference error.
Let's move the console.log(name) statement inside of the if condition and run the application. It will print the name on the console.
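A reconstruction of the corresponding let example (the original listing is also missing here) shows the block scoping:

```typescript
// Sketch: `let` is block-scoped. Accessing the variable inside the
// block works; accessing it outside would be a compile error.
function greetLet(): string {
  let result = "";
  if (true) {
    let name = "Anoop";
    result = name; // fine: still inside the block
  }
  // return name; // error: Cannot find name 'name' (out of scope)
  return result;
}
```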
We can redeclare var in the same function scope, but we cannot redeclare let. In the below example, we get an error stating 'Cannot redeclare block-scoped variable.'
In the case of const, we cannot even assign to it again, as it is a constant (read-only) binding.
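The redeclaration and reassignment rules just described can be sketched in one place (the erroneous lines are commented out, with the compiler message each would produce):

```typescript
// Sketch of the rules above. Uncommenting the marked lines makes the
// TypeScript compiler reject the file.
function rules(): number {
  var a = 1;
  var a = 2;     // legal: var may be redeclared in the same scope
  let b = 10;
  // let b = 20; // error: Cannot redeclare block-scoped variable 'b'
  const c = 100;
  // c = 200;    // error: Cannot assign to 'c' because it is a constant
  return a + b + c;
}
```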
Type Annotation in TypeScript:
There are three declaration syntaxes: the first is with a type annotation as well as a value; the second is with a type annotation but no value (the variable holds undefined); and the third is without a type annotation but with a value.
TypeScript allows us to declare variables with a specific data type. Types can be classified into built-in types (e.g. string, number, boolean) and user-defined types (e.g. array, class, interface, enum). The any type is the supertype of all data types in TypeScript; it is the default type for variables and functions when no annotation is given.
In the above example, we declared variables of three data types (string, number, and boolean) and printed their values on the console. If we wanted to store text in the age field, we could not, because the data type of the age variable is number. In the case of the any data type, however, we can store any type of value.
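The screenshots of these declarations are not in this copy of the article; a sketch matching the description (variable names are my own) might be:

```typescript
// Sketch of the three declaration forms and the built-in types.
let city: string = "Delhi"; // annotation + value
let age: number;            // annotation only (value is undefined)
let active = true;          // value only: type boolean is inferred

let anything: any = 42;     // `any` accepts every kind of value
anything = "now a string";  // legal, because any is the catch-all type

age = 30;
// age = "thirty";          // error: string is not assignable to number
```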
TypeScript also provides three types (void, undefined, and null) that mark the absence of a value. The void type is used for functions which do not return a value. The undefined type is the type of a variable that has not been assigned a value, whereas the null type represents the intentional absence of an object value.
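A short sketch of those three types, consistent with the description above:

```typescript
// void: for functions that return nothing
function logMessage(message: string): void {
  console.log(message); // no return value
}

// undefined: a declared-but-unassigned value
let notAssigned: undefined = undefined;

// null: the intentional absence of an object value
let absent: null = null;

logMessage("hello"); // prints "hello", returns nothing
```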
Array: An array type is defined by the name of the element type followed by a pair of square brackets (the shorthand method), or by using the generic type, specifying the element type in angle brackets (the longhand method, e.g. Array<string>). We can get the items of an array by iterating over its length.
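Both array syntaxes, plus the length-based iteration just mentioned, can be sketched as:

```typescript
// Sketch of the two array forms described above.
const shorthand: number[] = [1, 2, 3];      // element type followed by []
const longhand: Array<string> = ["a", "b"]; // generic (longhand) form

// Iterate the items by index, up to the array's length.
let total = 0;
for (let i = 0; i < shorthand.length; i++) {
  total += shorthand[i];
}
```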
Enum: An enum allows us to define a set of named numeric constants. By default, enums are zero-based, but we can change the values according to our requirements.
An example of Enum:
In the above example, we have created an enum. As enums are zero-based, the value of the first enumerator is 0, and each subsequent value increases by 1. After that, I created a variable of the enum type (shirtSize) and printed its value on the console.
Enums are zero-based, but we can start it from a specific value. In the below example, we set 5 as a starting value for our first enumerator.
We can even override the value of each enumerator of an enum.
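The enum listings themselves are missing from this copy of the article; a reconstruction covering all three behaviours described above (zero-based defaults, a custom starting value, and fully overridden values) might be:

```typescript
// Sketch of the enum variants described above (names are my own).
enum Size { Small, Medium, Large }             // 0, 1, 2 (zero-based)
enum SizeFromFive { Small = 5, Medium, Large } // 5, 6, 7 (custom start)
enum SizeCustom { Small = 2, Medium = 8, Large = 32 } // fully overridden

const shirtSize: Size = Size.Medium;
console.log(shirtSize); // prints the numeric value of Medium
```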
Tuples: Tuples represent a heterogeneous collection of values; the individual values are called items. In the below example, we have created a tuple which can hold values of both string and number types.
If we try to store a value of another type which is not specified, then we will get an error.
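A reconstruction of the tuple example (the original listing is missing; the field values are my own) might look like this:

```typescript
// Sketch: a tuple holding a string and a number, in that order.
let employee: [string, number] = ["Anoop", 7];

// Items can be accessed individually (here via destructuring).
const [empName, empId] = employee;

// employee = [7, "Anoop"]; // error: the item types are in the wrong order
```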
Hope this will help you. Thanks.
Published at DZone with permission of Anoop Kumar Sharma , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | <urn:uuid:df993507-41a8-40a8-8374-3447900b955d> | 3.015625 | 986 | Tutorial | Software Dev. | 52.46998 | 95,575,484 |
Rutgers professor helps show how eruptions cool tropical Africa, spawning El Niños
Explosive volcanic eruptions in the tropics can lead to El Niño events, those notorious warming periods in the Pacific Ocean with dramatic global impacts on the climate, according to a new study.
Enormous eruptions trigger El Niño events by pumping millions of tons of sulfur dioxide into the stratosphere, where it forms a sulfuric acid cloud that reflects solar radiation and reduces the average global surface temperature, according to the study co-authored by Alan Robock, a distinguished professor in the Department of Environmental Sciences at Rutgers University-New Brunswick.
The study, published online today in Nature Communications, used sophisticated climate model simulations to show that El Niño tends to peak during the year after large volcanic eruptions like the one at Mount Pinatubo in the Philippines in 1991.
"We can't predict volcanic eruptions, but when the next one happens, we'll be able to do a much better job predicting the next several seasons, and before Pinatubo we really had no idea," said Robock, who has a doctorate in meteorology. "All we need is one number - how much sulfur dioxide goes into the stratosphere - and you can measure it with satellites the day after an eruption."
The El Niño Southern Oscillation (ENSO) is nature's leading mode of periodic climate variability. It features sea surface temperature anomalies in the central and eastern Pacific. ENSO events (consisting of El Niño or La Niña, a cooling period) unfold every three to seven years and usually peak at the end of the calendar year, causing worldwide impacts on the climate by altering atmospheric circulation, the study notes.
Strong El Niño events and wind shear typically suppress the development of hurricanes in the Atlantic Ocean, the National Oceanic and Atmospheric Administration says. But they can also lead to elevated sea levels and potentially damaging cold season nor'easters along the East Coast, among many other impacts.
Sea surface temperature data since 1882 document large El Niño-like patterns following four out of five big eruptions: Santa María (Guatemala) in October 1902, Mount Agung (Indonesia) in March 1963, El Chichón (Mexico) in April 1982 and Pinatubo in June 1991.
The study focused on the Mount Pinatubo eruption because it's the largest and best-documented tropical one in the modern technology period. It ejected about 20 million tons of sulfur dioxide, Robock said.
Cooling in tropical Africa after volcanic eruptions weakens the West African monsoon, and drives westerly wind anomalies near the equator over the western Pacific, the study says. The anomalies are amplified by air-sea interactions in the Pacific, favoring an El Niño-like response.
Climate model simulations show that Pinatubo-like eruptions tend to shorten La Niñas, lengthen El Niños and lead to unusual warming during neutral periods, the study says.
If there's a big volcanic eruption tomorrow, Robock said he could make predictions for seasonal temperatures, precipitation and the appearance of El Niño next winter.
"If you're a farmer and you're in a part of the world where El Niño or the lack of one determines how much rainfall you will get, you could make plans ahead of time for what crops to grow, based on the prediction for precipitation," he said.
Todd B. Bates | EurekAlert!
It was Einstein who, with his theory of relativity, predicted that a clock would run more slowly the closer it was to a heavy object. As the theory surmises (via the principle of equivalence), gravity can bend the fabric of space-time, and, as it turns out, the ESA has two orbiters in operation that, while aligned incorrectly, are perfectly placed to help verify Einstein's concept.
Intended for use as part of a global navigation system called Galileo, the pair of satellites were to have circular orbits following launch from a Russian Soyuz rocket. However, the Soyuz accidentally put the ESA devices into elliptical orbits, making the duo unusable for the original purposes.
But aside from the incorrect trajectories, the Galileo satellites are in working order, and among the pair’s onboard tech are atomic clocks. The ESA has realized that due to the offbeat orbits, the two Galileo crafts will at times be closer to the Earth in passing, at others farther away — exposing both to a changing gravitational interaction with Earth — and if Einstein’s theory holds true, a study of the Galileo clocks should demonstrate a slowing down during the closer points of their vectors.
Germany’s Center of Applied Space Technology and Microgravity and the Department of Time–Space Reference Systems at the Paris Observatory will have the honor of handling the tracking.
Such an experiment was briefly undertaken before: In 1976, NASA launched a probe that held a clock with a counterpart on Earth (“Gravity Probe A”). The craft flew for 115 minutes and slight changes in the launched clock were indeed recorded. The ESA plans to have its Galileo satellites tracked for a full year, possibly providing a much more accurate test of Einstein’s vision.
It also plans a 2017 experiment to test the Principle of Equivalence by eventually placing an atomic clock aboard the International Space Station. | <urn:uuid:7719b44a-eb95-4afd-9cd1-c0bf456e9ed4> | 4.25 | 394 | News Article | Science & Tech. | 30.162261 | 95,575,492 |
Type: Number
Object: DisplayObject
Library: display.*
Revision: 2018.3332
Keywords: x, position
See also: object.y
Specifies the x position (in local coordinates) of the object relative to its parent — specifically, the x position of the object's anchorX point relative to its parent. Changing this value will move the object in the x direction.
This cannot be used on a physical body during a collision event. However, your collision handler may set a flag or include a time delay via timer.performWithDelay() so that the action can occur in the next application cycle or later. See the Collision Detection guide for a complete list of which APIs and methods are subject to this rule.
local rect = display.newRect( 0, 0, 50, 50 )
rect:setFillColor( 1, 1, 1 )
rect.x = 100
2012 marked the centenary of one of the most significant discoveries of the early twentieth century: the discovery of X-ray diffraction (March 1912, by Laue, Friedrich, and Knipping) and of Bragg's law (November 1912). The discovery of X-ray diffraction confirmed the wave nature of X-rays and the space-lattice hypothesis. It had two major consequences: the analysis of the structure of atoms, and the determination of the atomic structure of materials. The momentous impact of the discovery in the fields of chemistry, physics, mineralogy, materials science, biochemistry and biotechnology has been recognized by the General Assembly of the United Nations, which established 2014 as the International Year of Crystallography. This book relates the discovery itself, the early days of X-ray crystallography, and the way the news of the discovery spread round the world. It explains how the first crystal structures were determined and recounts the early applications of X-ray crystallography. It also tells how the concept of the space lattice has developed since ancient times, and how our understanding of the nature of light has changed over time. The contributions of the main actors of the story, prior to the discovery, at the time of the discovery and immediately afterwards, are described through their writings and are put into the context of the time, accompanied by brief biographical details.
Early Days of X-ray Crystallography
All about fruitarianism with a long-term fruitarian, Lena
A plant is a multicellular eukaryote of the kingdom Plantae - the flowering plants, conifers and other gymnosperms, ferns, clubmosses, hornworts, liverworts, mosses and the green algae. The green plants exclude the red and brown algae, the fungi, archaea, bacteria and animals. Green plants have cell walls containing cellulose and obtain most of their energy from sunlight via photosynthesis, carried out by primary chloroplasts derived from endosymbiosis with cyanobacteria. Some plants are parasitic and have lost the ability to produce normal amounts of chlorophyll or to photosynthesize.
There are ~300–315 thousand species of plants, of which the great majority are seed plants. Green plants provide most of the world's molecular oxygen and are the basis of most of the earth's ecologies, especially on land. Plants that produce grains, fruits and vegetables provide basic food to humankind. The scientific study of plants is known as botany, a branch of biology.
I will not kill or hurt any living creature needlessly, nor destroy any beautiful thing, but will strive to save and comfort all gentle life, and guard and perfect all natural beauty upon the earth.
Protists - members of an informal grouping of diverse eukaryotic organisms that are not animals, plants or fungi, and are grouped together for convenience, like algae or invertebrates. Besides their relatively simple levels of organization, protists do not necessarily have much in common.
Subdivisions of Protists
Protozoa, the unicellular "animal-like" protists - Flagellata, Ciliophora, Amoeba, Sporozoans.
Protophyta, the "plant-like" protists - mostly unicellular algae.
Molds, the "fungus-like" protists - slime molds and water molds.
Plot of the error function
In mathematics, the error function (also called the Gauss error function) is a special function (non-elementary) of sigmoid shape that occurs in probability, statistics, and partial differential equations describing diffusion. It is defined as:

erf(z) = (2/√π) ∫₀^z e^(−t²) dt.
In statistics, for nonnegative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 1/2, erf(x) describes the probability of Y falling in the range [−x, x].
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."
The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose density is given by (the normal distribution), Glaisher calculates the chance of an error lying between and as:
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability. Given random variable and constant :
where A and B are certain numeric constants. If L is sufficiently far from the mean, i.e. , then:
so the probability goes to 0 as .
Plots in the complex plane
The property erf(−z) = −erf(z) means that the error function is an odd function. This results directly from the fact that the integrand e^(−t²) is an even function.
For any complex number z: erf(z*) = [erf(z)]*, where z* is the complex conjugate of z.
The integrand f = exp(−z²) and f = erf(z) are shown in the complex z-plane in figures 2 and 3. Level of Im(f) = 0 is shown with a thick green line. Negative integer values of Im(f) are shown with thick red lines. Positive integer values of Im(f) are shown with thick blue lines. Intermediate levels of Im(f) = constant are shown with thin green lines. Intermediate levels of Re(f) = constant are shown with thin red lines for negative values and with thin blue lines for positive values.
The error function at +∞ is exactly 1 (see Gaussian integral). At the real axis, erf(z) approaches unity at z → +∞ and −1 at z → −∞. At the imaginary axis, it tends to ±i∞.
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges.
The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand e^(−z²) into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:

erf(z) = (2/√π) Σ_{n=0}^∞ (−1)ⁿ z^(2n+1) / (n! (2n+1)) = (2/√π) (z − z³/3 + z⁵/10 − z⁷/42 + z⁹/216 − ⋯),

which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.
For iterative calculation of the above series, the following alternative formulation may be useful:

erf(z) = (2/√π) Σ_{n=0}^∞ [ z ∏_{k=1}^n (−(2k − 1) z²) / (k (2k + 1)) ],

because −(2k − 1) z² / (k (2k + 1)) expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term).
The imaginary error function has a very similar Maclaurin series, which is:

erfi(z) = (2/√π) Σ_{n=0}^∞ z^(2n+1) / (n! (2n+1)),

which holds for every complex number z.
Derivative and integral
The derivative of the error function follows immediately from its definition:

d/dz erf(z) = (2/√π) e^(−z²).
From this, the derivative of the imaginary error function is also immediate:

d/dz erfi(z) = (2/√π) e^(z²).
An antiderivative of the error function, obtainable by integration by parts, is

z erf(z) + e^(−z²)/√π.
An antiderivative of the imaginary error function, also obtainable by integration by parts, is

z erfi(z) − e^(z²)/√π.
Higher-order derivatives are given by

erf⁽ⁿ⁾(z) = (−1)^(n−1) (2/√π) H_(n−1)(z) e^(−z²), n = 1, 2, …,

where H_n(z) are the physicists' Hermite polynomials.
An expansion, which converges more rapidly for all real values of than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:
By keeping only the first two coefficients and choosing and , the resulting approximation shows its largest relative error at , where it is less than :
Given a complex number z, there is not a unique complex number w satisfying erf(w) = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number, denoted erf⁻¹(x), satisfying erf(erf⁻¹(x)) = x.
The inverse error function is usually defined with domain (−1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series

erf⁻¹(z) = Σ_{k=0}^∞ [ c_k / (2k + 1) ] (√π z / 2)^(2k+1),

where c₀ = 1 and

c_k = Σ_{m=0}^{k−1} c_m c_{k−1−m} / ((m + 1)(2m + 1)).
So we have the series expansion (note that common factors have been canceled from numerators and denominators):

erf⁻¹(z) = (√π/2) (z + (π/12) z³ + (7π²/480) z⁵ + (127π³/40320) z⁷ + ⋯).
(After cancellation the numerator/denominator fractions are entries A092676/ A092677 in the OEIS; without cancellation the numerator terms are given in entry A002067.) Note that the error function's value at ±∞ is equal to ±1.
For |z| < 1, we have erf(erf⁻¹(z)) = z.
The inverse complementary error function is defined as

erfc⁻¹(1 − z) = erf⁻¹(z).
For real x, there is a unique real number erfi⁻¹(x) satisfying erfi(erfi⁻¹(x)) = x; the inverse imaginary error function is defined accordingly.
For any real x, Newton's method can be used to compute erfi⁻¹(x), and for |x| ≤ 1, the following Maclaurin series converges:

erfi⁻¹(z) = Σ_{k=0}^∞ [ (−1)ᵏ c_k / (2k + 1) ] (√π z / 2)^(2k+1),

where c_k is defined as above.
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is

erfc(x) = (e^(−x²) / (x√π)) [ 1 + Σ_{n=1}^∞ (−1)ⁿ (2n − 1)!! / (2x²)ⁿ ],

where (2n − 1)!! is the double factorial: the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as an asymptotic expansion is that, for any N ∈ ℕ, one has
where the remainder, in Landau notation, is
Indeed, the exact value of the remainder is
which follows easily by induction, writing and integrating by parts.
For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc(x) (while for not too large values of x note that the above Taylor expansion at 0 provides a very fast convergence).
Continued fraction expansion
A continued fraction expansion of the complementary error function is:
Integral of error function with Gaussian density function
The inverse factorial series
converges for Here
denotes the rising factorial, and denotes a signed Stirling number of the first kind.
Approximation with elementary functions
Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
- erf(x) ≈ 1 − 1/(1 + a₁x + a₂x² + a₃x³ + a₄x⁴)⁴ (maximum error: 5×10⁻⁴)
where a₁ = 0.278393, a₂ = 0.230389, a₃ = 0.000972, a₄ = 0.078108
- erf(x) ≈ 1 − (a₁t + a₂t² + a₃t³) e^(−x²), with t = 1/(1 + px) (maximum error: 2.5×10⁻⁵)
where p = 0.47047, a₁ = 0.3480242, a₂ = −0.0958798, a₃ = 0.7478556
- erf(x) ≈ 1 − 1/(1 + a₁x + a₂x² + ⋯ + a₆x⁶)¹⁶ (maximum error: 3×10⁻⁷)
where a₁ = 0.0705230784, a₂ = 0.0422820123, a₃ = 0.0092705272, a₄ = 0.0001520143, a₅ = 0.0002765672, a₆ = 0.0000430638
- erf(x) ≈ 1 − (a₁t + a₂t² + a₃t³ + a₄t⁴ + a₅t⁵) e^(−x²), with t = 1/(1 + px) (maximum error: 1.5×10⁻⁷)
where p = 0.3275911, a₁ = 0.254829592, a₂ = −0.284496736, a₃ = 1.421413741, a₄ = −1.453152027, a₅ = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf(x) is an odd function, so erf(x) = −erf(−x).
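As an illustration, the last approximation above (maximum error 1.5×10⁻⁷) can be implemented directly; this is a sketch, with the odd symmetry used to handle negative arguments:

```typescript
// Rational approximation of erf(x) using the coefficients quoted above
// (p, a1..a5); maximum error about 1.5e-7. Extended to negative x via
// the odd symmetry erf(-x) = -erf(x).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x); // the approximation itself is valid for x >= 0
  const p = 0.3275911;
  const a1 = 0.254829592;
  const a2 = -0.284496736;
  const a3 = 1.421413741;
  const a4 = -1.453152027;
  const a5 = 1.061405429;
  const t = 1 / (1 + p * ax);
  // Horner evaluation of a1*t + a2*t^2 + a3*t^3 + a4*t^4 + a5*t^5
  const poly = ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

console.log(erf(1)); // ≈ 0.84270
```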
Another approximation is given by

erf(x) ≈ sgn(x) √(1 − exp(−x² (4/π + a x²) / (1 + a x²))), with a = 8(π − 3)/(3π(4 − π)) ≈ 0.140.
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the error is less than 0.00035 for all x. Using the alternate value a ≈ 0.147 reduces the maximum error to about 0.00012.
This approximation can also be inverted to calculate the inverse error function:

erf⁻¹(x) ≈ sgn(x) √( √((2/(πa) + ln(1 − x²)/2)² − ln(1 − x²)/a) − (2/(πa) + ln(1 − x²)/2) ).
Exponential bounds and a pure exponential approximation for the complementary error function are given by
A single-term lower bound is
where the parameter β can be picked to minimize error on the desired interval of approximation.
An approximation with a maximal error of for any real argument is:
Table of values
Complementary error function
The complementary error function, denoted erfc, is defined as

erfc(x) = 1 − erf(x) = (2/√π) ∫ₓ^∞ e^(−t²) dt,

which also defines erfcx(x) = e^(x²) erfc(x), the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc(x) for non-negative x is known as Craig's formula, after its discoverer:

erfc(x) = (2/π) ∫₀^(π/2) exp(−x² / sin²θ) dθ.
This expression is valid only for positive values of x, but it can be used in conjunction with erfc(x) = 2 − erfc(−x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
Imaginary error function
The imaginary error function, denoted erfi, is defined as

erfi(z) = −i erf(iz) = (2/√π) ∫₀^z e^(t²) dt = (2/√π) e^(z²) D(z),

where D(z) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow). Despite the name "imaginary error function", erfi(x) is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:

w(z) = e^(−z²) erfc(−iz) = erfcx(−iz).
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ (also named norm(x) by some software languages), as they differ only by scaling and translation. Indeed,

Φ(x) = (1/2) [1 + erf(x/√2)] = (1/2) erfc(−x/√2),

or rearranged for erf and erfc:

erf(x) = 2Φ(x√2) − 1, erfc(x) = 2Φ(−x√2) = 2(1 − Φ(x√2)).
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as

Q(x) = (1/2) − (1/2) erf(x/√2) = (1/2) erfc(x/√2).
The inverse of Φ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as

probit(p) = Φ⁻¹(p) = √2 erf⁻¹(2p − 1) = −√2 erfc⁻¹(2p).
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):

erf(x) = (2x/√π) ₁F₁(1/2; 3/2; −x²).
It has a simple expression in terms of the Fresnel integral.
In terms of the regularized gamma function P and the incomplete gamma function,

erf(x) = sgn(x) P(1/2, x²) = sgn(x) γ(1/2, x²)/√π,

where sgn(x) is the sign function.
Generalized error functions
Graph of generalised error functions Eₙ(x):
grey curve: E₁(x) = (1 − e^(−x))/√π
red curve: E₂(x) = erf(x)
green curve: E₃(x)
blue curve: E₄(x)
gold curve: E₅(x)
Some authors discuss the more general functions:

Eₙ(x) = (n!/√π) ∫₀^x e^(−tⁿ) dt.
Notable cases are:
- E₀(x) is a straight line through the origin: E₀(x) = x/(e√π).
- E2(x) is the error function, erf(x).
After division by n!, all the En for odd n look similar (but not identical) to each other. Similarly, the En for even n look similar (but not identical) to each other after a simple division by n!. All generalised error functions for n > 0 look similar on the positive x side of the graph.
These generalised functions can equivalently be expressed for x > 0 using the gamma function and incomplete gamma function:

Eₙ(x) = Γ(n) γ(1/n, xⁿ)/√π, for x > 0.
Therefore, we can define the error function in terms of the incomplete gamma function:

erf(x) = 1 − Γ(1/2, x²)/√π.
Iterated integrals of the complementary error function
The iterated integrals of the complementary error function are defined by

iⁿerfc(z) = ∫_z^∞ iⁿ⁻¹erfc(ζ) dζ, with i⁰erfc(z) = erfc(z).
The general recurrence formula is

2n · iⁿerfc(z) = iⁿ⁻²erfc(z) − 2z · iⁿ⁻¹erfc(z).
They have the power series
from which follow the symmetry properties
- ^ Andrews, Larry C.; Special functions of mathematics for engineers
- ^ Greene, William H.; Econometric Analysis (fifth edition), Prentice-Hall, 1993, p. 926, fn. 11
- ^ Glaisher, James Whitbread Lee (July 1871). "On a class of definite integrals". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4. Taylor & Francis. 42 (277): 294–302. Retrieved 6 December 2017.
- ^ Glaisher, James Whitbread Lee (September 1871). "On a class of definite integrals. Part II". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4. Taylor & Francis. 42 (279): 421–436. Retrieved 6 December 2017.
- ^ Wolfram MathWorld
- ^ H. M. Schöpf and P. H. Supancic, "On Bürmann's Theorem and Its Application to Problems of Linear and Nonlinear Heat Transfer and Diffusion," The Mathematica Journal, 2014. doi:10.3888/tmj.16–11.Schöpf, Supancic
Weisstein, E. W. "Bürmann's Theorem". Wolfram MathWorld—A Wolfram Web Resource.
- ^ Bergsma, Wicher. "On a new correlation coefficient, its orthogonal decomposition and associated tests of independence" (PDF).
- ^ Cuyt, Annie A. M.; Petersen, Vigdis B.; Verdonk, Brigitte; Waadeland, Haakon; Jones, William B. (2008). Handbook of Continued Fractions for Special Functions. Springer-Verlag. ISBN 978-1-4020-6948-2.
- ^ Schlömilch, Oskar Xavier (1859). "Ueber facultätenreihen". Zeitschrift für Mathematik und Physik (in German). 4: 390–415. Retrieved 2017-12-04.
- ^ Eq (3) on page 283 of Nielson, Niels (1906). Handbuch der theorie der gammafunktion (in German). Leipzig: B. G. Teubner. Retrieved 2017-12-04.
- ^ Winitzki, Sergei (6 February 2008). "A handy approximation for the error function and its inverse" (PDF). Retrieved 2011-10-03.
- ^ Chiani, M., Dardari, D., Simon, M.K. (2003). New Exponential Bounds and Approximations for the Computation of Error Probability in Fading Channels. IEEE Transactions on Wireless Communications, 4(2), 840–845, doi=10.1109/TWC.2003.814350.
- ^ Chang, Seok-Ho; Cosman, Pamela C.; Milstein, Laurence B. (November 2011). "Chernoff-Type Bounds for the Gaussian Error Function". IEEE Transactions on Communications. 59 (11): 2939–2944. doi:10.1109/TCOMM.2011.072011.100049.
- ^ Numerical Recipes in Fortran 77: The Art of Scientific Computing (ISBN 0-521-43064-X), 1992, page 214, Cambridge University Press.
- ^ a b c Cody, W. J. (March 1993), "Algorithm 715: SPECFUN—A portable FORTRAN package of special function routines and test drivers" (PDF), ACM Trans. Math. Softw., 19 (1): 22–32, doi:10.1145/151271.151273
- ^ Zaghloul, M. R. (March 1, 2007), "On the calculation of the Voigt line profile: a single proper integral with a damped sine integrand", Monthly Notices of the Royal Astronomical Society, 375 (3): 1043–1048, doi:10.1111/j.1365-2966.2006.11377.x
- ^ John W. Craig, A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations, Proceedings of the 1991 IEEE Military Communication Conference, vol. 2, pp. 571–575.
- ^ Carslaw, H. S.; Jaeger, J. C. (1959), Conduction of Heat in Solids (2nd ed.), Oxford University Press, ISBN 978-0-19-853368-9, p 484
- Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 7". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 297. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
- Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 6.2. Incomplete Gamma Function and Error Function", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
- Temme, Nico M. (2010), "Error Functions, Dawson's and Fresnel Integrals", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W., NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0521192255, MR 2723248 | <urn:uuid:a9f87853-f597-4012-b817-21417abc42c2> | 3.703125 | 3,896 | Knowledge Article | Science & Tech. | 60.249067 | 95,575,536 |
Structure and Function of Biological Membranes
All living cells contain a multiplicity of membrane systems. One type of membrane forms the boundaries of cells (plasma membranes in mammalian systems); a derivative of a plasma membrane forms the multilayered insulating myelin sheath which surrounds nerves. Many subcellular organelles are both surrounded by membranes, and contain membranes within them which are made up of enzymes that carry out complicated concerted reactions, or function in energy transduction and conservation. For example, the mitochondrion is a system of two membranes, an outer membrane, and an inner membrane which contains the proteins that couple electron transfer reactions to the synthesis of adenosine triphosphate. The light-sensitive rhodopsin molecules found in the retinal rods of the eye are the principal protein components of the membranes which are concerned in the photoreception. The list could be extended, but the point to be made is that membrane systems, while having the same general structure and characteristics, have exceedingly diverse functions.
Keywords: Biological Membrane, Myelin Sheath, Lipid Phase, Charge Asymmetry, Hydrophobic Force
A compound of two elements crystallizes in a crystal structure such that the atoms of type A are located at relative positions (0,0,0); (0,1/2,1/2); (1/2,0,1/2) and (1/2,1/2,0) in the conventional cubic unit cell, and the atoms of type B are located at
(1/4,3/4,3/4); (3/4,1/4,3/4) and (3/4,3/4,1/4).
What type of crystal lattice does this material have?
What is the basis that need to be used to generate the crystal structure?
How is the primitive unit cell different from the conventional cubic unit cell above?
One of the elements is hydrogen and the other comes from much lower down in the periodic table.
Explain why this might allow you to simplify the calculations needed to predict the X-ray diffraction pattern from this compound.
Use this simplified approach to calculate the Miller indices of the three Bragg scattering peaks which you expect to be strongest.
A second compound with the same lattice constant is composed of indium and antimony as the type A and type B atoms. Indium and antimony are from the same row in the periodic table. One of the first three peaks is found to be much weaker than the other two. Explain why this occurs and calculate which peak is the weakest.
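A sketch of the standard reasoning (not the attached solution), assuming a zincblende-type arrangement in which each type-B atom is displaced by (1/4,1/4,1/4) from a type-A atom:

```latex
% Structure factor for the conventional cell: fcc A sublattice with an
% assumed B offset of (1/4,1/4,1/4) (zincblende-type arrangement).
F(hkl) = \Big[ f_A + f_B\, e^{i\pi (h+k+l)/2} \Big]
         \Big[ 1 + e^{i\pi (h+k)} + e^{i\pi (k+l)} + e^{i\pi (h+l)} \Big]
% The fcc factor equals 4 when h, k, l are all even or all odd, and 0
% otherwise, so the first allowed reflections are (111), (200), (220).
% If one element is hydrogen, f_B \ll f_A and F \approx 4 f_A for every
% allowed reflection (the heavy-atom sublattice dominates the pattern).
% For InSb, f_{In} \approx f_{Sb}; when h+k+l \equiv 2 \pmod{4}
% (e.g. (200)), e^{i\pi(h+k+l)/2} = -1 and F \propto f_A - f_B \approx 0,
% so the (200) peak is the weakest of the three.
```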
The solution is attached below in two files. The files are ...
The 7-page solution contains full derivations and formulas that explain the results.
Sea ice is said to be “an essential habitat for polar bears” but that’s an overly simplistic advocacy meme as ridiculous as the “no sea ice, no polar bears” message with which the public is constantly bombarded. Polar bears require sea ice from late fall to late spring only: from early summer to mid-fall, sea ice is optional. Historical evidence of polar bears that spent 5 months on land during the summer of 1874 proves an extended stay ashore is a natural response of polar bears to natural summer ice retreat, not a consequence of recent human-caused global warming. Sea ice is a seasonal requirement for polar bears: it’s not necessary year round.
[This PBI newsletter from 2011 repeats this meme and Andrew Derocher’s recent tweet conveys a similar message (“Sea ice loss = habitat loss for polar bears”)]
As long as sea ice is available from late fall through late spring (December to early June) and accompanied by abundant seal prey (sometimes it isn’t, see Derocher and Stirling 1995; Stirling 2002; Stirling et al. 1981, 1982, 1984), polar bears can survive a complete or nearly complete fast from June to late November (and pregnant females from June to early April the following year). That’s the beauty of their Arctic adaptation: fat deposited in early spring allows polar bears to survive an extraordinary fast whether they spend the time on land or sea ice.
Young and very old bears, as well as sick and injured ones, are the exception: these bears often come ashore in poor condition and end up dying of starvation, as a much-publicized bear on Baffin Island that likely had a form of cancer did last summer (Crockford 2018). Competition with bigger, stronger bears means these bears can't keep what they are able to kill, and they are most often the bears that cause problems. Starvation is the leading natural cause of death for polar bears: if they cannot put on the fat they need in spring, they will not survive the low-food months of summer and winter, whether they are on land or out on the sea ice (Amstrup 2003).
Posted in Advocacy, Life History, Sea ice habitat
Tagged Arctic, climate change, essential, facts, global warming, habitat, onshore, polar bear, sea ice, sea ice day, summer, terrestrial dens
Polar bears in virtually all regions will now have finished their intensive spring feeding, which means sea ice levels are no longer an issue. A few additional seals won’t make much difference to a bear’s condition at this point, except perhaps for young bears that haven’t had a chance to feed as heavily as necessary over the spring due to inexperience or competition.
The only seals available on the ice for polar bears to hunt from early July through October are predator-savvy adults and subadults. But since the condition of the sea ice makes escape so much easier for the seals, most bears that continue to hunt are unsuccessful – and that's been true since the 1970s. So much for the public hand-wringing over the loss of summer sea ice on behalf of polar bear survival!
Polar bears in most areas of the Arctic are at their fattest by late June. They are well prepared to go without food for a few months if necessary – a summer fast is normal for polar bears, even for those that spend their time on the sea ice.
Putting on hundreds of pounds of fat in the spring to last through periods of food scarcity later in the year (at the height of summer and over the winter) is the evolutionary adaptation that has allowed polar bears to live successfully in the Arctic.
Posted in Life History, Sea ice habitat
Tagged Arctic, bearded seal, ecosystem, facts, feeding, harp seal, ice loss, polar bear, research, ringed seal, save our sea ice, sea ice, sea ice day, spring, summer, video
Sea ice habitat for polar bears has not become progressively worse each year during their season of critical feeding and mating, as some scaremongers often imply. It’s true that absolute extent of Arctic ice is lower this spring than it was in 1979. However, according to NSIDC Masie figures, polar bear habitat at mid-May registers about 12 million km2, just as it did in 2006 (although it is distributed a little differently); other data show spring extent has changed little since a major decline occurred in 1989, despite ever-rising CO2 levels.
In other words, there has been virtually no change in sea ice cover over the last 12 years, despite the fact that atmospheric CO2 has now surpassed 410 parts per million, a considerable and steady increase over levels in 2006 which were about 380 ppm (see below, from the Scripps Oceanographic Laboratory, included in the Washington Post story 3 May 2018):
Not only that, but if rising CO2 levels were responsible for the decline of sea ice and implied effects on polar bears since 1979 (when CO2 levels were around 340 ppm), why has spring ice extent been so variable since 1989 (when the first big decline occurred) but so little changed overall since then? See the NSIDC graph below for April:
This year on day 134 (14 May), global ice cover registered 12.3 mkm2:
In 2016 on the same day, the overall extent was much the same but there was more ice in the Chukchi and Bering Seas and less in the eastern Beaufort:
More close-up charts of different regions below for 2018 vs. 2016, showing more detail.
Posted in Conservation Status, Life History, Sea ice habitat
Tagged carbon dioxide, climate change, CO2, facts, feeding, global warming, mating, polar bear, polynya, sea ice, shore leads, spring
Large margins of error in polar bear population estimates means the conservation status threshold of a 30% decline (real or predicted) used by the US Endangered Species Act and the IUCN Red List is probably not valid for this species.
Several recent subpopulation estimates have shown an increase between one estimate and another of greater than 30% yet deemed not to be statistically significant due to large margins of error. How can such estimates be used to assess whether population numbers have declined enough to warrant IUCN Red List or ESA protection?
What do polar bear population numbers mean for conservation status, if anything?
Posted in Conservation Status, Population
Tagged baffin bay, declining, Derocher, errors, ESA, estimate, facts, IUCN Red List, numbers, polar bear, population, science, significance, statistics, Svalbard, western hudson bay
Sea ice in the Bering Sea this winter was said to be the lowest since the 1850s, largely driven by persistent winds from the south rather than the usual north winds although warm Pacific water was a factor early in the season (AIRC 2018). But what, if any, impact is this surprisingly low winter and spring ice cover likely to have on Chukchi Sea polar bear health and survival?
In fact, research on Chukchi Sea polar bears has included so few examples of individuals utilizing the Bering Sea in winter (Jan-March) and early spring (April-May) that any conclusions regarding an impact from this year's sea ice conditions are likely to be invalid. In short, we don't know what will happen since it has not happened before within living memory; the opinions of polar bear specialists must be taken with a grain of salt because so many of their previous assumptions have turned out to be wrong (Crockford 2017a,b, 2018), see here, here, and here. Seals, walrus and polar bears are much more flexible and resilient to changes in habitat conditions than most modern biologists give them credit for; consequently, it will be fascinating to see how the ice changes over the coming months and how the animals respond.
Posted in Conservation Status, Life History, Sea ice habitat
Tagged Bering Sea, Chukchi Sea, facts, fast, feeding, habitat, hunting, ice-free period, mating, polar bear, science, sea ice, seals, starving
Spring in the Arctic is April-June (Pilfold et al. 2015). As late April is the peak of this critical spring feeding period for most polar bear populations, this is when sea ice conditions are also critical. This year, as has been true since 1979, sea ice coverage is abundant across the Arctic for seals that are giving birth and mating at this time, as well as for polar bears busy feeding on young seals and mating.
Below is a chart of sea ice at 25 April 2018, showing sea ice in all PBSG polar bear subpopulation regions:
Some Arctic subregions below, in detail.
Posted in Life History, Sea ice habitat
Tagged Barents Sea, birth, facts, feeding, mating, polar bear, population size, science, sea ice, seals, spring, Svalbard, thickness
It’s been more than a year since I first published my scientific manuscript at PeerJ Preprints (a legitimate scientific forum) on the failure of Amstrup’s 2007 USGS polar bear survival model (Crockford 2017), a year waiting in vain for the polar bear community to comment. They either couldn’t be bothered or knew they couldn’t refute it – I haven’t known for sure which. But I do now.
Polar bear specialists didn’t comment because they couldn’t refute it in the scholarly manner required by PeerJ: all they could do is tear it down with derision, misdirection and strawman arguments.
I know this because the damage control team for the polar-bears-are-all-going-to-die-unless-we-stop-using-fossil-fuels message wasn’t activated over my fully-referenced State of the Polar Bear Report for 2017 (Crockford 2018) released on International Polar Bear Day last month, but for a widely-read opinion piece I’d written for the Financial Post published the same day (based on the Report) that generated three follow-up radio interviews.
By choosing to respond to my op-ed rather than the Report or my 2017 paper, biologists Andrew Derocher and Steven Amstrup, on behalf of their polar bear specialist colleagues1, display a perverse desire to control the public narrative rather than ensure sound science prevails. Their scientifically weak “analysis” of my op-ed (2 March 2018), published by Climate Feedback (self-proclaimed “fact checkers”), attempts damage control for their message and makes attacks on my integrity. However, a scientific refutation of the premise of my 2017 paper, or The State of the Polar Bear Report 2017, it is not (Crockford 2017, 2018).
Derocher further embarrasses himself by repeating the ridiculous claim that global polar bear population estimates were never meant for scientific use, then reiterates the message with added emphasis on twitter:
Just as the badly written Harvey et al. (2017) Bioscience paper said more about the naked desperation of the authors than it did about me or my fellow bloggers, this attempt by the polar bear community’s loudest bulldogs to discredit me and my work reveals their frustration at being unable to refute my scientifically supported conclusion that Amstrup’s 2007 polar bear survival model has failed miserably (Crockford 2017).
Part 1 of my detailed, fully referenced responses to their “analysis” of my op-ed are below. Part 2 to follow [here].
Posted in Conservation Status, Scientists hit back, Sea ice habitat, Summary
Tagged Amstrup, critique, damage control, Derocher, endangered, ESA, fact checker, facts, feedback, polar bear, predictions, sea ice, spin, State of the Polar Bear, threatened, USGS
Date of publication: 2017-08-23 17:38
* The average global sea level has been generally rising since 1860 or earlier, which is about 45 years before surface temperatures began to rise and 75 years before man-made emissions of CO2 reached 1% of natural emissions.
* There is evidence that the Arctic sea ice completely melted at least four times before man first walked the earth. Tropical turtle fossils have been found in the Arctic, proving this area was once much warmer than today.
[…] warming” is a myth — so say 80 graphs from 58 peer-reviewed scientific papers published in 2017. In other words, the so-called “Consensus” on global warming is a massive […]
The Sun is not a constant star; it has cycles that average about eleven years in length. Solar activity is constantly changing. Sunspots have been used to measure solar activity since the early 1600s; they have been observed and counted for hundreds of years. Cooling periods on earth have occurred when there were fewer sunspots. Similarly, the earth has warmed (such as during the second half of the 20th century) when there were more sunspots.
It is very distracting. This has nothing to do with the mechanism. You don’t need to agree with the rest of the science world in order to understand the mechanism.
Do you believe scientists from China are somehow inferior, Semi? If you had written “it seems that a third of the scientists represented are Arabs” do you think that would somehow undermine their results?
* The first humans to visit the surface of the North Pole region during summer were the crew of the USS Skate, a nuclear submarine that surfaced 40 miles from the North Pole in August of 1958. In the January 1959 issue of Life magazine, the commander of this mission described the ice cover by stating:
Earth’s climate has swung repeatedly between warm periods and ice ages during its history. In the current climate, the gain of the positive feedback effect from increased atmospheric water vapor is well below that which would be required to boil away the oceans; Earth is also too far away from the Sun at its current luminosity for such a runaway to occur.
Page 8: “As attested by a number of studies, near-surface temperature records are often affected by time varying biases. Among the causes of such biases are station moves or relocations, changes in instrumentation, changes in observation practices, and evolution of the environment surrounding the station such as land use/cover change.”
Paper: “Difficulties in Obtaining Reliable Temperature Trends: Reconciling the Surface and Satellite Microwave Sounding Unit Records.” By James W. Hurrell and Kevin E. Trenberth. Journal of Climate, May 1998. Pages 945-967.
The “Nature trick” didn’t “hide the decline” the way you think. It’s just an augmentation of proxy data with real instrumental data. Why? Because the tree ring proxies diverged from the temperature record and other proxies beginning in the 1960s.
THE SEARCH FOR EXOMOON RADIO EMISSIONS
Noyola, Joaquin P
The field of exoplanet detection has seen many new developments since the discovery of the first exoplanet. Observational surveys by the NASA Kepler Mission and several other instruments have led to the confirmation of over 1900 exoplanets and several thousand exoplanet candidates. All this progress, however, has yet to provide the first confirmed exomoon. Since all previous attempts to discover exomoons have failed, a novel method to detect them is proposed in this dissertation, which describes the development of the method and its application to selecting the best exomoon candidates for observational searches. The main goal of these searches is to verify the validity and effectiveness of the method and to discover the first exomoon by using the world's largest and most suitable radio telescopes. The discovery of the first exomoon would begin a new era of exploratory research in exoplanetary systems. The idea that exomoons can be discovered with radio telescopes was proposed by Noyola, Satyal and Musielak (2014), who suggested that the interaction between Io and the Jovian magnetosphere could also occur in exoplanet-exomoon pairs, and that the resulting radio emissions could be used to directly detect these systems. The main results of the original study, obtained for single prograde exomoons, are also described in this dissertation, which in addition extends the previous study to multiple-exomoon systems as well as retrograde orbits. The main objective of these studies is to identify the best exomoon candidates for detection by chosen radio telescopes. One such candidate, Epsilon Eridani b, was selected and observed by the Giant Metrewave Radio Telescope (GMRT) in India. The preliminary results of these observations do not show the expected radio emission from the chosen system. Thus, implementation of several important improvements to the method is discussed in detail in this dissertation.
This summer, one of the world's leading ocean science bodies, UNESCO's Intergovernmental Oceanographic Commission (IOC), adopted the new international thermodynamic equation of state for seawater, called TEOS-10. A complex, dynamic mixture of dissolved minerals, salts, and organic material, seawater has historically presented difficulties in determining its physical and chemical properties.
For 30 years, climate models have relied on a series of equations called the International Equation of State of Seawater, or EOS-80, which uses the Practical Salinity Scale of seawater. This equation was used to determine the pressure-volume-temperature properties of seawater. Other thermodynamic properties, including heat capacity, enthalpy, and sound speed, were obtained using separate equations.
The Scientific Committee on Oceanic Research (SCOR) established a working group to look at the thermodynamic properties of seawater in 2005. The team included scientists from the Leibniz-Institut für Ostseeforschung in Warnemünde (Germany), University of Miami (USA), Desert Research Institute - Nevada System of Higher Education (USA), Bedford Institute of Oceanography (Canada), the Commonwealth Scientific and Research Organization (Australia), the National Oceanographic Centre (United Kingdom) and the Institute of Marine Geology and Chemistry (China).
This committee has established a fresh approach to seawater thermodynamics. The new equation of state is in the form of a comprehensive free energy function that includes all of the thermodynamic properties of seawater. The new thermodynamic equation of state will replace the widely used EOS-80 with a new set of highly accurate and comprehensive formulas that provide necessary adjustments and clarifications to the original equation. Dr. Rainer Feistel, from the Leibniz-Institut, is widely recognized as the pioneer in developing the new free energy function.
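A single free-energy (Gibbs) function is convenient because every other thermodynamic property follows from its partial derivatives: specific volume is ∂g/∂p and heat capacity is −T ∂²g/∂T². The sketch below illustrates that idea with a deliberately simplified, made-up Gibbs function; `gibbs()`, `v0`, and `cp0` are stand-in names and values, not the actual TEOS-10 formulation.

```python
import math

# Toy illustration of the "one free-energy function" idea: pack thermodynamics
# into a single Gibbs function g(S, T, p), then recover properties from its
# partial derivatives. gibbs() below is a made-up stand-in, NOT TEOS-10.

def gibbs(S, T, p):
    """Hypothetical Gibbs energy (J/kg); S in g/kg, T in K, p in Pa."""
    v0 = 9.73e-4    # assumed constant specific volume, m^3/kg
    cp0 = 3990.0    # assumed constant heat capacity, J/(kg K)
    return v0 * p - cp0 * T * (math.log(T / 273.15) - 1.0) + 75.0 * S

def density(S, T, p, dp=1.0):
    """rho = 1 / (dg/dp), evaluated with a central difference."""
    v = (gibbs(S, T, p + dp) - gibbs(S, T, p - dp)) / (2.0 * dp)
    return 1.0 / v

def heat_capacity(S, T, p, dT=1e-2):
    """cp = -T * d2g/dT2, evaluated with a central second difference."""
    d2g = (gibbs(S, T + dT, p) - 2.0 * gibbs(S, T, p) + gibbs(S, T - dT, p)) / dT**2
    return -T * d2g

rho = density(35.0, 288.15, 101325.0)
cp = heat_capacity(35.0, 288.15, 101325.0)
print(rho, cp)   # recovers the assumed ~1027.7 kg/m^3 and ~3990 J/(kg K)
```

The real TEOS-10 Gibbs function is a large empirical polynomial with analytic derivatives (implemented, for example, in the GSW toolbox), but the derivative structure is exactly the one sketched here.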
“The previous International Equation of State of Seawater, which expresses the density of seawater as a function of Practical Salinity, temperature and pressure, served the oceanographic community well for three decades,” said Dr. Frank Millero, professor of Marine and Atmospheric Chemistry at the University of Miami’s Rosenstiel School of Marine and Atmospheric Science, who is the only member of the EOS-80 work group on the current team. “However, the new equation uses Absolute Salinity and is a compact ‘one-stop-shop’ of sorts that takes many variables that we hadn’t previously included into account, and allows us to get at more precise numbers, which in turn, will make climate projections even more accurate.”
Since the 1970’s more than 14 graduate students, three undergrads, a high school student and four research technicians have worked on the pressure, volume, temperature and thermodynamic properties of seawater in Millero’s lab at the University of Miami. These include Drs. Rana Fine and Arthur Chen, who have become important contributors to oceanographic science and education.
For more information, visit the IOC's site on ocean salinity at: http://www.ioc-unesco.org/index.php?option=com_content&task=view&id=144&Itemid=112
About the University of Miami's Rosenstiel School
Barbra Gonzalez | RSMAS Miami
We know that DNA is a self-replicating structure and that it replicates semi-conservatively. DNA replication, however, is catalyzed by a set of enzymes. Let's learn about the machinery and enzymes involved in DNA replication.
In the process of DNA replication, the DNA makes multiple copies of itself. It is a biological polymerization which proceeds in the sequence of initiation, elongation, and termination. The whole process takes place with the help of enzymes where DNA-dependent DNA polymerase being the chief enzyme.
Initiation: DNA replication demands a high degree of accuracy because even a minute mistake would result in mutations. Thus, replication cannot initiate randomly at any point in the DNA. For replication to begin, there is a particular region called the origin of replication. This is the point where replication originates. Replication begins with the recognition of this origin, followed by the unwinding of the two DNA strands.
Unzipping the DNA strands along their entire length at once is unfeasible because of the energy input required. Hence a replication fork, an opening in the DNA strand, is first created by the unwinding enzyme helicase.
Elongation: As the strands are separated, polymerase enzymes start synthesizing the complementary sequence on each strand. The parental strands act as templates for the newly synthesized daughter strands. Note that elongation is unidirectional, i.e. DNA is always polymerized only in the 5′ to 3′ direction. Therefore, on one strand (the 3′→5′ template) synthesis is continuous, hence called continuous replication, while on the other strand (the 5′→3′ template) replication is discontinuous. The discontinuous pieces occur as fragments called Okazaki fragments, which the enzyme DNA ligase joins later.
Termination: Termination of replication occurs in different ways in different organisms. In organisms like E. coli, the chromosome is circular, and replication terminates when the two replication forks meet each other between the two termination sites.
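The leading/lagging asymmetry described above can be caricatured in a few lines of code. This is a toy sketch only: base pairing is reduced to a lookup table, the helper names are hypothetical, and no real enzymology is modelled. The "leading" copy runs continuously along a 3′→5′ template, while the "lagging" copy is made in short Okazaki-like fragments that would later be joined by ligase.

```python
# Toy sketch of semiconservative copying; not real biochemistry.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template_3_to_5):
    """New strand grown 5'->3' while the 'polymerase' reads the template 3'->5'."""
    return "".join(COMPLEMENT[base] for base in template_3_to_5)

def okazaki_fragments(template_5_to_3, size=4):
    """Lagging-strand caricature: short fragments, to be joined by 'ligase'."""
    return [synthesize(template_5_to_3[i:i + size][::-1])
            for i in range(0, len(template_5_to_3), size)]

leading = synthesize("TACGGCAT")            # continuous synthesis
lagging = okazaki_fragments("ATGCCGTA")     # discontinuous synthesis
print(leading, lagging)
```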
DNA replication is a highly enzyme-dependent process. Several enzymes are involved in replication, including DNA-dependent DNA polymerase, helicase, and ligase. Among them, DNA-dependent DNA polymerase is the chief enzyme.
DNA-dependent DNA polymerase: It catalyzes the polymerization and regulates the whole process of DNA replication with the support of other enzymes. Deoxyribonucleoside triphosphates serve both as the substrate and as the energy source for the replication process.
Helicase: Helicase is the enzyme which unzips the DNA strands by breaking the hydrogen bonds between them. Thus, it helps in the formation of the replication fork.
Ligase: Ligase is the enzyme which glues the discontinuous DNA strands.
An eclipse is an astronomical event that occurs when an astronomical object is temporarily obscured, either by passing into the shadow of another body or by having another body pass between it and the viewer. This alignment of three celestial objects is known as a syzygy. Apart from syzygy, the term eclipse is also used when a spacecraft reaches a position where it can observe two celestial bodies so aligned. An eclipse is the result of either an occultation (completely hidden) or a transit (partially hidden).
The term eclipse is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the Earth's shadow. However, it can also refer to such events beyond the Earth–Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its host planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of the orbit of its constituent stars intersects the observer's position.
For the special cases of solar and lunar eclipses, these only happen during an "eclipse season", the two times of each year when the plane of the Earth's orbit around the Sun crosses with the plane of the Moon's orbit around the Earth. The type of solar eclipse that happens during each season (whether total, annular, hybrid, or partial) depends on the apparent sizes of the Sun and Moon. If the orbit of the Earth around the Sun and the Moon's orbit around the Earth were both in the same plane, then eclipses would happen every month: there would be a lunar eclipse at every full moon and a solar eclipse at every new moon. And if both orbits were perfectly circular, each solar eclipse would be the same type every month. It is because of these non-planar and non-circular differences that eclipses are not a common event. Lunar eclipses can be viewed from the entire nightside half of the Earth, but a solar eclipse, particularly a total eclipse, is a rare event at any one particular point on the Earth's surface, and many decades can pass between one such eclipse and the next.
The term is derived from the ancient Greek noun ἔκλειψις (ékleipsis), which means "the abandonment", "the downfall", or "the darkening of a heavenly body", which is derived from the verb ἐκλείπω (ekleípō) which means "to abandon", "to darken", or "to cease to exist," a combination of prefix ἐκ- (ek-), from preposition ἐκ (ek), "out," and of verb λείπω (leípō), "to be absent".
Umbra, penumbra and antumbra
For any two objects in space, a line can be extended from the first through the second. The latter object will block some amount of light being emitted by the former, creating a region of shadow around the axis of the line. Typically these objects are moving with respect to each other and their surroundings, so the resulting shadow will sweep through a region of space, only passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse. The region of shadow is typically divided into three parts:
- The umbra, within which the object completely covers the light source. For the Sun, this light source is the photosphere.
- The antumbra, extending beyond the tip of the umbra, within which the object is completely in front of the light source but too small to completely cover it.
- The penumbra, within which the object is only partially in front of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable. This is because Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra.
The first contact occurs when the eclipsing object's disc first starts to impinge on the light source; second contact is when the disc moves completely within the light source; third contact when it starts to move out of the light; and fourth or last contact when it finally leaves the light source's disc entirely.
For spherical bodies, when the occulting object is smaller than the star, the length (L) of the umbra's cone-shaped shadow is given by:
L = (r · Ro) / (Rs - Ro)
where Rs is the radius of the star, Ro is the occulting object's radius, and r is the distance from the star to the occulting object. For Earth, on average L is equal to 1.384×106 km, which is much larger than the Moon's semimajor axis of 3.844×105 km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth—producing a faint, ruddy illumination of the Moon even at totality.
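The figures quoted above can be checked numerically with the umbra-length formula, using rounded textbook values for the Sun and Earth (the function name and constants here are illustrative):

```python
# Numerical check of the umbra-length formula L = r * Ro / (Rs - Ro),
# with rounded values for the Sun-Earth pair.

def umbra_length(r_source, r_occulter, separation):
    """Length of the cone-shaped umbra behind an occulting sphere (same units in and out)."""
    return separation * r_occulter / (r_source - r_occulter)

R_SUN = 6.96e5            # km
R_EARTH = 6.371e3         # km
SUN_EARTH = 1.496e8       # km, mean separation
MOON_SEMIMAJOR = 3.844e5  # km

L = umbra_length(R_SUN, R_EARTH, SUN_EARTH)
print(L)                   # about 1.38e6 km
print(L > MOON_SEMIMAJOR)  # True: Earth's umbra can envelop the Moon
```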
On Earth, the shadow cast during an eclipse moves at very roughly 1 km per second. The exact speed depends on where the shadow falls on the Earth and the angle at which it sweeps across the surface.
An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world.
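The saros interval quoted above arises because 223 synodic (phase) months and 242 draconic (node-to-node) months have almost exactly the same total length, which is the repeating harmonic pattern in question. A short check with standard mean month lengths:

```python
# The saros emerges because 223 synodic months and 242 draconic months
# are almost exactly equal in total length; mean month lengths in days.
SYNODIC = 29.530589    # new moon to new moon
DRACONIC = 27.212221   # node crossing to node crossing

saros_synodic = 223 * SYNODIC     # ~6585.32 days
saros_draconic = 242 * DRACONIC   # ~6585.36 days
print(saros_synodic, saros_draconic, saros_synodic / 365.25)
```

Both products land within a few hundredths of a day of each other, at a little over 18 years, so after one saros the Moon returns to nearly the same phase and the same node, and the eclipse geometry repeats.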
An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros.
Between 1901 and 2100 there is a maximum of seven eclipses in:
- four (penumbral) lunar and three solar eclipses: 1908, 2038.
- four solar and three lunar eclipses: 1918, 1973, 2094.
- five solar and two lunar eclipses: 1934.
Excluding penumbral lunar eclipses, there is a maximum of seven eclipses in:
- 1591, 1656, 1787, 1805, 1918, 1935, 1982, and 2094.
As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface.
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation while an annular solar eclipse is a transit.
When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include when the crew of Apollo 12 observed the Earth to eclipse the Sun in 1969 and when the Cassini probe observed Saturn to eclipse the Sun in 2006.
Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue, thus the phrase 'Blood Moon' is often found in descriptions of such lunar events as far back as eclipses are recorded.
Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223 B.C., while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 B.C. Positing classical-era astronomers' use of Babylonian eclipse records mostly from the 13th century BC provides a feasible and mathematically consistent explanation for the Greek finding all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 4,000 years and have been used to measure changes in the Earth's rate of spin.
By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology.
Other planets and dwarf planets
The gas giant planets (Jupiter, Saturn, Uranus, and Neptune) have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
On the other three gas giants, eclipses only occur at certain periods during the planet's orbit, due to their higher inclination between the orbits of the moon and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane. But Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years.
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer because every hour of difference corresponded to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Mercury and Venus
Eclipses are impossible on Mercury and Venus, which have no moons. However, both have been observed to transit across the face of the Sun. There are on average 13 transits of Mercury each century. Transits of Venus occur in pairs separated by an interval of eight years, but each pair of events happen less than once a century. According to NASA, the next pair of transits will occur on December 10, 2117 and December 8, 2125. Transits on Mercury are much more common.
A binary star system consists of two stars that orbit around their common centre of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the magnitude decreases to 3.4 for more than nine hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
- Staff (March 31, 1981). "Science Watch: A Really Big Syzygy" (Press release). The New York Times. Archived from the original on December 10, 2008. Retrieved 2008-02-29.
- http://www.in.gr/dictionary/lookup.asp?Word=%E5%EA%EB%E5%DF%F0%F9+++&x=0&y=0. Retrieved 2009-09-24. Missing or empty
- "Free online English Greek dictionary. LingvoSoft free online English dictionary". www.lingvozone.com. Archived from the original on 2013-01-28.
- "Google Translate". translate.google.com.
- Westfall, John; Sheehan, William (2014), Celestial Shadows: Eclipses, Transits, and Occultations, Astrophysics and Space Science Library, 410, Springer, pp. 1−5, ISBN 1493915355.
- Espenak, Fred (September 21, 2007). "Glossary of Solar Eclipse Terms". NASA. Archived from the original on February 24, 2008. Retrieved 2008-02-28.
- Green, Robin M. (1985). Spherical Astronomy. Oxford University Press. ISBN 0-521-31779-7.
- "Speed of eclipse shadow? - Sciforums". www.sciforums.com. Archived from the original on 2015-04-02.
- Espenak, Fred (July 12, 2007). "Eclipses and the Saros". NASA. Archived from the original on 2007-10-30. Retrieved 2007-12-13.
- "Eclipse Statistics". moonblink.info. Archived from the original on 2014-05-27.
- Gent, R.H. van. "A Catalogue of Eclipse Cycles". www.staff.science.uu.nl. Archived from the original on 2011-09-05.
- Hipschman, R. "Solar Eclipse: Why Eclipses Happen". Archived from the original on 2008-12-05. Retrieved 2008-12-01.
- Zombeck, Martin V. (2006). Handbook of Space Astronomy and Astrophysics (Third ed.). Cambridge University Press. p. 48. ISBN 0-521-78242-2.
- Staff (January 6, 2006). "Solar and Lunar Eclipses". NOAA. Archived from the original on May 12, 2007. Retrieved 2007-05-02.
- Phillips, Tony (February 13, 2008). "Total Lunar Eclipse". NASA. Archived from the original on March 1, 2008. Retrieved 2008-03-03.
- Ancient Timekeepers, "Archived copy". Archived from the original on 2011-10-26. Retrieved 2011-10-25.
- de Jong, T.; van Soldt, W. H. (1989). "The earliest known solar eclipse record redated". Nature. 338 (6212): 238–240. Bibcode:1989Natur.338..238D. doi:10.1038/338238a0. Archived from the original on 2007-10-15. Retrieved 2007-05-02.
- Griffin, Paul (2002). "Confirmation of World's Oldest Solar Eclipse Recorded in Stone". The Digital Universe. Archived from the original on 2007-04-09. Retrieved 2007-05-02.
- See DIO 16 Archived 2011-07-26 at the Wayback Machine. p.2 (2009). Though those Greek and perhaps Babylonian astronomers who determined the three previously unsolved lunar motions were spread over more than four centuries (263 BC to 160 AD), the math-indicated early eclipse records are all from a much smaller span Archived 2015-04-02 at the Wayback Machine.: the 13th century BC. The anciently attested Greek technique: use of eclipse cycles, automatically providing integral ratios, which is how all ancient astronomers' lunar motions were expressed. Long-eclipse-cycle-based reconstructions precisely produce all of the 24 digits appearing in the three attested ancient motions just cited: 6247 synod = 6695 anom (System A), 5458 synod = 5923 drac (Hipparchos), 3277 synod = 3512 anom (Planetary Hypotheses). By contrast, the System B motion, 251 synod = 269 anom (Aristarchos?), could have been determined without recourse to remote eclipse data, simply by using a few eclipse-pairs 4267 months apart.
- "Solar Eclipses in History and Mythology". Bibliotheca Alexandrina. Archived from the original on 2007-04-08. Retrieved 2007-05-02.
- Girault, Simon (1592). Globe dv monde contenant un bref traite du ciel & de la terra. Langres, France. p. Fol. 8V.
- Hevelius, Johannes (1652). Observatio Eclipseos Solaris Gedani. Danzig, Poland.
- Stephanson, Bruce; Bolt, Marvin; Friedman, Anna Felicity (2000). The Universe Unveiled: Instruments and Images through History. Cambridge, UK: Cambridge University Press. pp. 32–33. ISBN 052179143X.
- "Start eclipse of the Sun by Callisto from the center of Jupiter" (Observed at 00:28 UT). JPL Solar System Simulator. 3 June 2009. Retrieved 2008-06-05. External link in
- "Eclipse of the Sun by Titan from the center of Saturn" (Observed at 02:46 UT). JPL Solar System Simulator. 3 August 2009. Retrieved 2008-06-05. External link in
- "Brief Eclipse of the Sun by Miranda from the center of Uranus" (Observed at 19:58 UT (JPL Horizons S-O-T=0.0565)). JPL Solar System Simulator. 22 January 2007. Retrieved 2008-06-05. External link in
- "Transit of the Sun by Nereid from the center of Neptune" (Observed at 20:19 UT (JPL Horizons S-O-T=0.0079)). JPL Solar System Simulator. 28 March 2006. Retrieved 2008-06-05. External link in
- "Roemer's Hypothesis". MathPages. Archived from the original on 2011-02-24. Retrieved 2007-01-12.
- Cassini, Giovanni D. (1694). "Monsieur Cassini His New and Exact Tables for the Eclipses of the First Satellite of Jupiter, Reduced to the Julian Stile, and Meridian of London". Philosophical Transactions of the Royal Society. 18 (207-214): 237–256. doi:10.1098/rstl.1694.0048. JSTOR 102468. Archived from the original on 2013-09-08. Retrieved 2007-04-30.
- Davidson, Norman (1985). Astronomy and the Imagination: A New Approach to Man's Experience of the Stars. Routledge. ISBN 0-7102-0371-3.
- Buie, M. W.; Polk, K. S. (1988). "Polarization of the Pluto-Charon System During a Satellite Eclipse". Bulletin of the American Astronomical Society. 20: 806. Bibcode:1988BAAS...20..806B.
- Tholen, D. J.; Buie, M. W.; Binzel, R. P.; Frueh, M. L. (1987). "Improved Orbital and Physical Parameters for the Pluto-Charon System". Science. 237 (4814): 512–514. Bibcode:1987Sci...237..512T. doi:10.1126/science.237.4814.512. PMID 17730324. Archived from the original on 2008-07-06. Retrieved 2008-03-11.
- Espenak, Fred (May 29, 2007). "Planetary Transits Across the Sun". NASA. Archived from the original on March 11, 2008. Retrieved 2008-03-11.
- "When will the next transits of Mercury and Venus occur during a total solar eclipse? | Total Solar Eclipse 2017". eclipse2017.nasa.gov. Archived from the original on 2017-09-18. Retrieved 2017-09-25.
- Bruton, Dan. "Eclipsing binary stars". Midnightkite Solutions. Archived from the original on 2007-04-14. Retrieved 2007-05-01.
- Price, Aaron (January 1999). "Variable Star Of The Month: Beta Persei (Algol)". AAVSO. Archived from the original on 2007-04-05. Retrieved 2007-05-01.
- Goodricke, John; Englefield, H. C. (1785). "Observations of a New Variable Star". Philosophical Transactions of the Royal Society of London. 75: 153–164. Bibcode:1785RSPT...75..153G. doi:10.1098/rstl.1785.0009.
|Wikimedia Commons has media related to Eclipse.|
|Wikiquote has quotations related to: Eclipse|
|Look up eclipse in Wiktionary, the free dictionary.|
- on YouTube
- A Catalogue of Eclipse Cycles
- Search 5,000 years of eclipses
- NASA eclipse home page
- International Astronomical Union's Working Group on Solar Eclipses
- Mark's eclipse chasing website
- Interactive eclipse maps site
- Image galleries | <urn:uuid:6a0d0eb6-5618-410f-b9c0-fd6e2e76b4d9> | 3.703125 | 5,422 | Knowledge Article | Science & Tech. | 64.068133 | 95,575,595 |
Researchers tracking 125 female pronghorn in Wyoming¡¯s vast Jonah and PAPA gas fields using GPS collars discovered an 82 percent decline of habitat classified as ¡°highest quality¡± ¨C meaning highest probability of use for wintering animals. Widespread natural gas development in these areas, which are part of the Greater Yellowstone Ecosystem, has led to a sharp increase in well pads, roads, and other associated infrastructure. This in turn is driving pronghorn to the periphery of areas historically classified as crucial winter ranges, the five-year study says.
The study appears in the March, 2012 print edition of the journal Biological Conservation. Authors include Jon Beckmann and Rene Seidler of WCS; Kim Murray of Institute for Systems Biology; and Joel Berger of the University of Montana and WCS.
¡°In our study we have detected behavioral shifts for pronghorn in response to natural gas field development and infrastructure on federal BLM lands,¡± said Jon Beckmann of WCS¡¯s North America Program and lead author. ¡°By detecting behavioral changes, it is possible to identify threshold levels of gas field infrastructure development before any significant population declines. Maintaining the integrity of crucial wintering areas is particularly important in harsh winters to avoid diminishing pronghorn numbers.¡±
WCS has developed recommendations to protect pronghorn on BLM lands. Some of the recommendations include: baseline data being collected on population sizes and distribution prior to any development occurring. Data would then be used to define crucial winter range and keep development levels lower in key areas. Habitat and population levels should be monitored over time in both the gas fields and in similar control sites where no gas is being developed using scientifically rigorous methods to examine impacts of gas fields. Directional drilling should be used to reduce surface disturbance and limit habitat loss and fragmentation.
Fifty percent of North America¡¯s pronghorn live in Wyoming, which are declining in other parts of the U.S. Herds from throughout the western half of the state winter in the region where the gas fields are located including the herd from Grand Teton National Park that conducts the longest overland migration in the continental U.S. Herds that were attracted to the mesa above the natural gas deposits with windswept flat terrain and subsequent lack of deep snow are now being forced into less desirable areas.
The authors warn that pronghorn can only lose so much winter range before they will begin to decline in population. Mule deer have already declined by more than 50 percent from this region.Joel Berger, a WCS co-author on the study, said: ¡°Ultimately this is a policy issue for petroleum extraction on U.S. public lands. In several cases science indicates that petroleum developments have had negative impacts on wildlife. We are hopeful that studies like these will inform future energy development on public lands in the West.¡±
Stephen Sautner | Newswise Science News
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
17.07.2018 | Power and Electrical Engineering
17.07.2018 | Life Sciences
16.07.2018 | Physics and Astronomy | <urn:uuid:649ca185-6123-4e88-a6fd-f72c4a74aa53> | 3.21875 | 1,269 | Content Listing | Science & Tech. | 37.628958 | 95,575,598 |
The Crazy Plan to Capture and Store CO2 Under the Ocean. There’s technically a way to take pollution out of the air. But if we did that, where would we put it all?
This Is What Pollution Does to Your Body ►►►►http://bit.ly/1JvKtfT
Sign Up For The TestTube Newsletter Here ►►►► http://bit.ly/1myXbFG
Trapping Carbon Dioxide Underground: Can We Do It?
"A newly released geological report points to a promising way to cut down on the amount of harmful carbon dioxide pumped into the atmosphere: inject and store it inside rocks deep underground."
Ocean Fertilization: Dead in the Water?
"The theory that adding iron to the oceans can help suck up atmospheric carbon dioxide cheaply and efficiently has received a further blow. A study published in this week's issue of Nature finds that the potential of iron-induced carbon sequestration is far lower than previously estimated."
Plan to Trap CO2 Under North Sea
"Some of the UK's largest energy and industrial companies, including Corus, Scottish and Southern Energy, Powerfuel Power Ltd , BP, ConocoPhillips, E.ON , Drax Power and AMEC, helped produce the study. It proposes setting up a carbon capture and storage network which would connect major producers of carbon emissions in the region and remove their CO2 emissions via a pipeline leading to the seabed."
DNews is dedicated to satisfying your curiosity and to bringing you mind-bending stories & perspectives you won't find anywhere else! New videos twice daily.
Watch More DNews on TestTube http://testtube.com/dnews
Subscribe now! http://www.youtube.com/subscription_center?add_user=dnewschannel
DNews on Twitter http://twitter.com/dnews
Trace Dominguez on Twitter https://twitter.com/tracedominguez
Lissette Padilla on Twitter https://twitter.com/lizzette
DNews on Facebook https://facebook.com/DiscoveryNews
DNews on Google+ http://gplus.to/dnews
Discovery News http://discoverynews.com
Download the TestTube App: http://testu.be/1ndmmMq
Sign Up For The TestTube Mailing List: http://dne.ws/1McUJdm
Tags: The Crazy Plan to Capture and Store CO2 Under the Ocean, carbon dioxide, co2, scrubbing, global warming, climate change, climate, global, warming, earth, Pollution, can we take CO2 out of the air, decontaminate, fix, how to fix global warming, amine, copper, geologic carbon sequestration, oceanic carbon sequestration, carbon, sequestration, ocean acidity, sea levels, d news, dnews, education, educational, Science, discovery news, age=14 15 16 17, c-earth science, trace dominguez | <urn:uuid:ac6ca7c9-975f-49b4-8a3c-d32fedaa1fcd> | 3.015625 | 629 | Content Listing | Science & Tech. | 49.777852 | 95,575,617 |
In the rainforests of Borneo, there lives a reddish brown ant by the name of Colobopsis explodens that really knows how to go out with a bang. When locked in battle with ants from another colony, C. explodens workers bring the fight to a swift end by ripping themselves open and spewing noxious fluid on the enemy. The workers die while pulling off this power move, but their sacrifice protects the rest of the colony from marauding predators like the weaver ant.
This is not typical animal behavior. Most creatures behave in ways that give themselves the best shot of surviving and passing their genes on to their own offspring. The workers of C. explodens achieve neither of these things by self-destructing—yet they aren't the only insect to do it. They belong to a group called exploding ants that are found across Southeast Asia. And self-sacrifice shows up among a number of insects that live together in colonies, including other ants, termites, bees, and certain aphids.
“One small worker does probably not cost very much to lose, but at the same time the benefits of deterring an intruder such as a big vertebrate predator that might potentially destroy the whole colony, or even other insects that might engage in a raid of the colony, are potentially huge,” says Olav Rueppell, a biologist at University of North Carolina at Greensboro.
Self-sacrifice can get pretty elaborate, and in some cases happens even when there isn’t a battle raging. Here are some of the most extreme moves that insects pull to defend their nestmates, from eviscerating themselves to leaving the safety of the nest to die alone.
You might have been on the receiving end of insect self-annihilation if you’ve ever been stung by a honeybee, which tears apart its own body to leave a stinger embedded in your flesh. But often, insects that disembowel themselves are aiming at other bugs.
Such is the case for C. explodens, which also goes by the nickname “yellow goo” in reference to the color of its signature chemical weapon. It’s thought that these ants “explode” by contracting their muscles until their abdomen and internal glands rupture and ooze sticky fluid, says Alice Laciny, an entomologist at the Natural History Museum Vienna.
For the predatory ants they face off against, getting slimed is downright deadly. If an enemy bites into the poisonous goo, it loses control of its limbs and dies in seconds. Otherwise, the goo will gum up its joints and immobilize the hapless insect. And even after it dies, C. explodens’s mandibles do not release their grip on the enemy’s body. “Then it’s sticky and it has a dead ant hanging from its antenna or leg,” Laciny says. “It probably won’t survive very long in that state.”
Self-destruction doesn’t really give the colony a numerical advantage during a skirmish. “It’s not so much about one-on-one confrontation, so an eye for an eye,” Laciny says. She suspects that behavior has more to do with keeping contaminants at bay. Exploding ants seem to be particularly dependent on the bacteria and fungi in their rainforest habitat and die if they are taken away from this microbiome. Any insect that threatens to get into their nests or too close to their foraging territory will carry foreign spores and bacteria.
The bright yellow gunk, which can sometimes be seen shimmering between the plates of the ants’ exoskeletons, could serve as a warning signal to enemies. “It seems to be well-known among other insects in the rainforest that they should just stay away,” Laciny says. Few insects are willing to venture onto trees where C. explodens colonies reside during the day, when the volatile ants are awake.
Although researchers have been spotting the ants and their yellow goo for decades, it was only recently that Laciny and her colleagues examined the species closely enough to give it a scientific name, which they reported in the journal ZooKeys on April 19. In fact, C. explodens is the first new exploding ant described since 1935.
That’s partly because different species of exploding ants can be nearly impossible to tell apart, while members of the same colony can look completely different, Laciny says. The tiny exploding members of C. explodens colonies were long thought to be a separate species from the other, larger workers. This ability to create different castes is one reason that self-sacrifice works so well for insects that live together in colonies. “Insects are quite plastic in their body plans,” Rueppell says. That means a colony can invest in a few members that self-destruct in very specialized ways.
And it’s not just ants that do this. Workers belonging to a termite species called Neocapritermes taracua found in French Guiana grow “explosive backpacks” as they age and become less valuable members of their society. When these termites rupture their bodies, blue crystals from the pouch on their back come into contact with secretions from the salivary glands. By mixing two chemicals to create an especially toxic brew, the termites’ final act is really “a step up in terms of sophistication,” Rueppell says.
Scientists recently discovered an entirely different kind of battlefield altruism in the Matabele ant of sub-Saharan Africa. These ants are known for hunting termites, which put up a fierce fight. It's not uncommon for workers to lose limbs during a raid. These injuries are not always fatal, though, says Erik Frank, an evolutionary biologist at the University of Lausanne in Switzerland. Wounded ants are carried home by their nestmates and have their injuries licked clean so they can heal. Before long, they can run as fast as a healthy ant and go raiding again, even short a few legs.
However, Frank realized there was more to the story one day after he accidentally drove over a column of raiding ants in Côte d’Ivoire’s Comoé National Park. When he stopped the car and got out to see how the ants were doing, Frank noticed something odd. “On ‘ground zero’ the nestmates were investigating all the injured ants but only helping the ones that still had a chance to survive,” Frank said in an email. He later presented ants with five legs amputated to their nestmates, and saw that these severely injured workers were rarely picked up.
But it wasn't for lack of trying. The healthy ants attempted to help, but their fallen comrades refused to cooperate. Instead of tucking their legs in and lying motionless when touched, the injured ants flailed around violently until their rescuers gave up. In other words, these ants perform a kind of self-triage that allows helpers to focus on other ants with less debilitating injuries.
It’s pretty unlikely that the ants are actually trying to sacrifice themselves, though. “This is not a conscious decision by the ants,” says Frank, who reported the behavior in February in the journal Proceedings of the Royal Society B. Normally when an ant gets hurt, it will stand up again, then call for help by releasing pheromones and allow its nestmates to scoop it up. When a seriously injured ant flails around, it is probably trying to stand up and failing, over and over again. “If you are able to stand up you are likely not too heavily injured so that you are still useful for the colony,” Frank says.
Sometimes, an insect will give up its own life to help its fellows even when there is no imminent danger. This happens every night in colonies of the Brazilian ant Forelius pusillus. When sunset falls, the ants seal up the entrance to their nest, leaving one to eight workers outside to finish the job.
“The ants trapped outside were not accidental victims, but rather were part of a deliberate strategy of entrance closing,” scientists wrote in The American Naturalist after watching the doomed ants covering the nest with sand until it was completely concealed. As the night wore on, the ants were blown away by gusts of wind, attacked by other kinds of ants, or simply wandered away. When morning arrived, none of these ants were ever found near the entrance of the nest.
Self-sacrifice might have become a routine part of life in F. pusillus colonies because these ants are particularly vulnerable to attack, Rueppell says. F. pusillus makes its nest on bare soil with little plant cover. What’s more, it lives in tropical areas, which tend to be crawling with all kinds of ants. An ant’s worst enemy is usually other ants, which means F. pusillus might have to take extreme steps to keep the colony hidden.
This isn’t the only time insects have been spied sacrificing themselves to ward off future threats. Scientists in Japan have found that an aphid by the name of Nipponaphis monzeni springs into action when its home is threatened. The bugs live together in swellings found on the outside of trees called galls; when the researchers drilled holes in these galls, the aphids within immediately began secreting bodily fluids to repair the damage. The aphids that were in charge of plastering over the hole shriveled up and in at least some cases died.
This might actually be a good thing for the remaining aphids in the gall. “Several nymphs were buried in the plaster, like ‘aphid sacrifices,’” the scientists reported in Proceedings of the Royal Society B. The entombed carcasses likely made the repair job sturdier, Rueppell says.
Most aphids—including the kind feasted upon by ladybugs—wouldn’t be able to use this trick. They are solitary insects that do not share a home base, Rueppell says. Self-sacrifice works best for insects that live together in nests or galls walled off from the outside world. In less isolated societies, it would be all too easy for freeloaders unrelated to the martyred insect to move in and reap the benefit of their sacrifice. “Any altruistic system is prone to exploitation,” Rueppell says.
Parasites and diseases can have a field day with big, enclosed nests of insects. But scientists suspect that ants and bees may have a way of halting germs from spreading too far. When these insects become sick, they leave the colony and go into exile to await their deaths.
In one experiment, Rueppell and his colleagues dosed honeybees with carbon dioxide and hydroxyurea, a drug used to treat sickle cell anemia and some cancers. “We wanted to really make them feel very sick,” he says. The bees that survived this treatment abandoned the hive, even though their fellows did not try to kick them out. Other scientists have seen rock ants stop socializing with their nestmates and head into seclusion when sickened with carbon dioxide, infected with fungal spores, or struck by unknown ailments.
Scientists don’t really know what prompts these insects to start behaving in ways that will inevitably cause their own demise. Whether and how the brain might be overriding an insect’s self-preservation instincts when it goes into quarantine or rips itself to pieces is a mystery. “Is there something special about this, or is it just a continuation of normal defensive behaviors?” Rueppell wonders.
Still, it’s probably a safe bet that self-sacrificing insects are not consciously laying down their lives for the greater good. “I don’t think we can call an ant that sacrifices itself a ‘good’ ant,” Rueppell says.
In some ways, insect colonies resemble a single living organism instead of thousands of separate individuals. “Immune cells in our body are to some degree self-sacrificial as well,” Rueppell says. If we lose a few cells, the rest of the body doesn’t care. The workers that die to ensure the colony’s survival are also easily replaced—but their sacrifice isn’t entirely selfless.
The members of an insect colony are very closely related, and workers often don’t reproduce. If an insect can keep the queen and the rest of the colony from being wiped out by enemies or illness by laying down its own life, there’s a better chance that some of its genes will be passed on by relatives.
“It’s not the individual that counts—otherwise we wouldn’t see these self-sacrificial behaviors,” Rueppell says. | <urn:uuid:0336bc84-7599-412f-8a9b-3b8e2cdc1071> | 3.515625 | 2,719 | Nonfiction Writing | Science & Tech. | 49.267 | 95,575,631 |
A new study from the IEC (International Electrotechnical Commission) and the Fraunhofer Institute for Systems and Innovation Research (ISI) has found that nanotechnology will bring significant benefits to the energy sector, especially to energy storage and solar energy. Improved materials efficiency and reduced manufacturing costs are just two of the real economic benefits that nanotechnology already brings to these fields, and that's only the beginning. Battery storage capacity could be extended, solar cells could be produced more cheaply, and the lifetime of solar cells or batteries for electric cars could be increased, all thanks to continued development of nanotechnology.
In the study, "Nanotechnology in the sectors of solar energy and energy storage," commissioned by the IEC, the Fraunhofer ISI found that there is a whole range of nanomaterials which will grow in importance as technology continues to advance.
The rise of nanomaterials
A key finding of the study is that technologies where "nano" already plays an important role will be of special interest for industry and research. The following nanomaterial technologies will be of particular importance: "organic and printed electronics", "nano-coatings," "nano-composites", "nano-fluids", "nano-catalysts", "nanocarbons" and "nano-electrodes". These seven technology profiles form the basis for two comprehensive roadmaps in the technical report.
For example, through the use of nanotechnology the light and energy generation of crystalline silicon solar cells or organic solar cells can already be enabled or significantly increased. Their manufacturing also requires less material and is more cost-efficient. Energy storage capacity will significantly improve with the use of nanomaterials for lithium-ion batteries. This is by far the most important battery technology for energy storage since the early 1990s. It is especially important in view of the constantly increasing demand for electric vehicles, whose success is also directly linked to battery performance and resulting range extension.
Large-scale application in solar power generation and energy storage
Dr. Björn P. Moller, project leader of this study at Fraunhofer ISI, is convinced that everything points to large-scale application of nanotechnology in solar power generation and energy storage, unlike many other fields where nanotechnology has been unable to make a breakthrough.
Moller said, "It can be assumed that in 2035 the share of fossil fuels in global energy production will have decreased to 75 percent. This implies that renewable energy will need to contribute significantly more to the overall energy generation. It is therefore crucially important that key technologies such as solar cells are further developed with the help of nanotechnology and that energy storage is improved. In some areas nanotechnology may even be a key to success. There is great potential for nanotechnology to help to mitigate the intermittency of renewable energy."
Role of nanotechnology in addressing the energy challenge
"Commissioning this study to evaluate the potential of nanotechnologies and the future role of nanomaterials in addressing the energy challenge helps the IEC to understand the kind of work that it needs to undertake to enable the broad roll out of these technologies," said IEC General Secretary and CEO Frans Vreeswijk. "Against the backdrop of an anticipated 30% increase of global energy demand by 2035 and the significant expansion of renewable energy coming into the grid, the study has found that nanotechnologies including new nanomaterials, could be a key to successful renewable energy and energy storage integration."
The Technical Report of the study Nanotechnology in the sectors of solar energy and energy storage will be of great use for those planning the use of solar energy and storage, whether they make products, use those products to generate and store electricity, or organize and regulate the use of the electric energy produced. | <urn:uuid:695a2c14-8aae-42d3-bfd4-dbf8049d2517> | 3.421875 | 782 | News Article | Science & Tech. | 8.478469 | 95,575,638 |
First description of a Lophelia pertusa reef complex in Atlantic Canada
Peer reviewed, Journal article
Original version: Deep-Sea Research Part II: Topical Studies in Oceanography, 2017, 126, 21–30. DOI: 10.1016/j.dsr.2017.05.009
For the first time, we describe a cold-water coral reef complex in Atlantic Canada, discovered at the shelf break, in the mouth of the Laurentian Channel. The study is based on underwater video and sidescan sonar. The reef complex covered an area of approximately 490×1300 m, at 280–400 m depth. It consisted of several small mounds (< 3 m high) where the scleractinian Lophelia pertusa occurred as live colonies, dead blocks and skeletal rubble. On the mounds, a total of 67 live colonies occurred within 14 patches at 300–320 m depth. Most of these (67%) were small (< 20 cm high). Dead coral (rubble and blocks), dominated (88% of all coral observations). Extensive signs of damage by bottom-fishing gear were observed: broken and tilted coral colonies, over-turned boulders and lost fishing gear. Fisheries observer data indicated that the reef complex was subjected to heavy otter trawling annually between 1980 and 2000. In June 2004, a 15 km2 conservation area excluding all bottom-fishing was established. Current bottom fisheries outside the closure include otter trawling for redfish and anchored longlines for halibut. Vessel monitoring system data indicate that the closure is generally respected by the fishing industry. | <urn:uuid:6d5d19bf-d0a5-47dd-91f3-2b0f8a927c85> | 2.765625 | 335 | Academic Writing | Science & Tech. | 53.3025 | 95,575,645 |
By the end of the deglaciation, however, the oceans had adjusted to their new warmer state and the nitrogen cycle had stabilized – though it took several millennia. Recent increases in global warming, thought to be caused by human activities, are raising concerns that denitrification may adversely affect marine environments over the next few hundred years, with potentially significant effects on ocean food webs.
Results of the study have been published this week in the journal Nature Geoscience. It was supported by the National Science Foundation.
"The warming that occurred during deglaciation some 20,000 to 10,000 years ago led to a reduction of oxygen gas dissolved in sea water and more denitrification, or removal of nitrogen nutrients from the ocean," explained Andreas Schmittner, an Oregon State University oceanographer and author on the Nature Geoscience paper. "Since nitrogen nutrients are needed by algae to grow, this affects phytoplankton growth and productivity, and may also affect atmospheric carbon dioxide concentrations."
"This study shows just what happened in the past, and suggests that decreases in oceanic oxygen that will likely take place under future global warming scenarios could mean more denitrification and fewer nutrients available for phytoplankton," Schmittner added.
In their study, the scientists analyzed more than 2,300 seafloor core samples, and created 76 time series of nitrogen isotopes in those sediments spanning the past 30,000 years. They discovered that during the last glacial maximum, the Earth's nitrogen cycle was at a near steady state. In other words, the amount of nitrogen nutrients added to the oceans – known as nitrogen fixation – was sufficient to compensate for the amount lost by denitrification.
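The steady state described here can be sketched as a toy box model in which the nitrogen inventory changes at the rate of fixation (input) minus denitrification (loss). This is only an illustration of the balance concept, not the isotope model used in the study, and every parameter value below is invented:

```python
# Toy nitrogen budget: dN/dt = fixation - loss_rate * N.
# The inventory is in steady state when input balances loss.

def simulate_nitrogen(inventory, fixation, loss_rate, years, dt=1.0):
    """Euler integration of dN/dt = fixation - loss_rate * inventory."""
    for _ in range(int(years / dt)):
        inventory += (fixation - loss_rate * inventory) * dt
    return inventory

# Steady state: fixation = loss_rate * N  ->  N* = fixation / loss_rate
fixation = 2.0      # arbitrary input flux (units of nitrogen per year)
loss_rate = 0.001   # fraction of the inventory removed per year
n_star = fixation / loss_rate

# Starting away from equilibrium, the inventory relaxes toward N*
final = simulate_nitrogen(1000.0, fixation, loss_rate, years=5000)
print(round(n_star), round(final))  # the two values converge
```

The relaxation takes many multiples of 1/loss_rate, which echoes the article's point that rebalancing the nitrogen cycle can span millennia.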
A lack of nitrogen can essentially starve a marine ecosystem by not providing enough nutrients. Conversely, too much nitrogen can create an excess of plant growth that eventually decays and uses up the oxygen dissolved in sea water, suffocating fish and other marine organisms.
Following the period of enhanced denitrification and nitrogen loss during deglaciation, the world's oceans slowly moved back toward a state of near stabilization. But there are signs that recent rates of global warming may be pushing the nitrogen cycle out of balance.
"Measurements show that oxygen is already decreasing in the ocean," Schmittner said. "The changes we saw during deglaciation of the last ice age happened over thousands of years. But current warming trends are happening at a much faster rate than in the past, which almost certainly will cause oceanic changes to occur more rapidly.
"It still may take decades, even centuries to unfold," he added.
Schmittner and Christopher Somes, a former graduate student in the OSU College of Earth, Ocean, and Atmospheric Sciences, developed a model of nitrogen isotope cycling in the ocean, and compared that with the nitrogen measurements from the seafloor sediments. Their sensitivity experiments with the model helped to interpret the complex patterns seen in the observations.
Andreas Schmittner | EurekAlert!
Asteroids are usually referred to as solid rock from outer space, whereas comets are usually a mixture of ice and rock and typically burn up before hitting the Earth. In 1908, a large object from outer space destroyed thousands of acres of forest at Tunguska. Dinosaur bones, on the other hand, are millions of years old -- some fossils are billions of years old. To determine the ages of these specimens, scientists need an isotope with a very long half-life.
This rules out carbon dating for most aquatic organisms, because they often obtain at least some of their carbon from dissolved carbonate rock.
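The age arithmetic behind radiometric dating is just the half-life law solved for time. The half-lives below are standard values (about 5,730 years for carbon-14 and about 1.25 billion years for potassium-40); the remaining fractions are illustrative inputs:

```python
import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Solve N/N0 = (1/2) ** (t / half_life) for t."""
    return half_life_years * math.log(1.0 / remaining_fraction, 2)

# Carbon-14 (half-life ~5,730 years) only reaches back tens of
# thousands of years: after ~10 half-lives almost nothing is left.
print(round(age_from_fraction(0.5, 5730)))   # one half-life -> 5730

# Dating far older material needs a long-lived isotope such as
# potassium-40 (half-life ~1.25 billion years): a quarter remaining
# means two half-lives have elapsed.
print(round(age_from_fraction(0.25, 1.25e9) / 1e9, 2))  # 2.5 (Gyr)
```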
Scientists have spent the last hundred years searching for a crater or remnants of the space rock.
Finally, in 2008, scientists announced that they think they have found an impact site at the bottom of nearby Lake Cheko.
What's at the bottom might be shattered rock, impact melt, or maybe a piece of the meteorite.
Barely Sufficient Documentation
Documenting what code does and explaining why it does it are not always necessary in the code itself, but doing them well can go a long way toward maintainable software.
In Agile, when we say "barely sufficient documentation," we're not referring to user documentation. User documentation has been notoriously bad throughout the entire history of the software industry. As an industry, we must do much better with user documentation than we have in the past, but this post is not about user documentation — it's about internal documentation. It's about documenting the design and the code for other developers so that it will be more understandable and easier to work with later.
There are two important aspects to internal documentation, one of which is often neglected: documentation to express what the code is doing, and documentation to express why the code is doing what it's doing.
In traditional software development, we spend an enormous amount of effort documenting what the code does. We do this with design diagrams, specifications documents, and comments in the code. But this kind of documentation can actually become an impediment to maintainability rather than an aide to it.
The ultimate document that explains exactly what the code is doing is the code itself. Anything else is a distant second, and as we write more documents that express what the code does, or pepper our code with comments that express what the code is doing, there is a tendency to make the code itself less expressive. This is a mistake.
The reason we use programming languages rather than hexadecimal machine instructions to make a computer do something is so that the instructions are understandable to us, the people who write and maintain the code. We have a very different set of concerns than the computer does. The computer isn't trying to understand the code we write, it simply executes it. But in order for us to change that code, we must understand it. And so we write our software in an intermediate form that's comprehensible to us and can get compiled down into something the machine can execute.
If you look at the code that's generated from our compilers it's highly efficient. And if highly skilled programmers were willing to sit down for many hours they might be able to write code that's even more efficient. But efficiency is not our only concern. We're also concerned with understandability. And if we have to trade some efficiency for greater understandability then it's often worthwhile to do that.
As we take these ideas to their logical conclusions, we realize that the bulk of programming is all about communication. Our jobs as programmers is to make our code expressive so that it's understandable, not just to us but to others as well. And we make code understandable by using metaphors and analogies as well as good intention revealing names to model the domain we're working in. We should name methods for what they do so the name of the method becomes the most important form of documentation.
In the past, I have been accused by managers of encouraging developers to write uncommented code, and while this is true, I have a good reason for it. Software developers don't like redundancy, and if they know they have to write a comment that says what the code is doing, they'll tend to rely on that comment to communicate the intention of the code rather than the code itself. When this happens, the code becomes less expressive and it can be a drudge to read.
Instead of writing a lot of comments in code, we should find ways of communicating that information with the code itself so our code is more expressive and there's less need to comment on what the code is doing. However, using block comments to express why the code is doing what it's doing can often be very helpful and appreciated by the people who maintain the code.
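The contrast the article draws can be shown with a small hypothetical example (the names, the rental-pricing domain, and the 21% VAT rate are all invented for illustration). The first version leans on a comment to say *what* the code does; the second lets names carry the *what* and keeps a comment only for the *why*:

```python
# Comment-driven version: the comment restates *what* the code does,
# and the code itself stays cryptic.
def calc(d, r):
    # multiply days by rate and add tax
    return d * r * 1.21

# Expressive version: intention-revealing names carry the *what*; the
# one remaining comment records *why* -- a business rule the code
# cannot express on its own. (21% VAT is a made-up example rate.)
VAT_MULTIPLIER = 1.21

def rental_price_including_vat(rental_days, daily_rate):
    # VAT is included because consumer-facing prices are quoted gross.
    return rental_days * daily_rate * VAT_MULTIPLIER

print(rental_price_including_vat(3, 100.0))
```

Both functions compute the same number, but only the second can be read without the comment.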
Published at DZone with permission of David Bernstein , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | <urn:uuid:e1b6d7c6-9f33-40f8-bbf7-e33dd3c96a13> | 2.78125 | 850 | Truncated | Software Dev. | 40.663751 | 95,575,712 |
Plant Protein Reveals Surprising Immune Response to Bacterial Attack
This cartoon depicts a leaf with areas of damage (brown spots) caused by the plant’s innate immune response. The superimposed schematic shows SOBER1’s three-dimensional structure. Credit: Salk Institute.
When you see brown spots on otherwise healthy green leaves, you may be witnessing a plant’s immune response as it tries to keep a bacterial infection from spreading. Some plants are more resistant to such infections than others, and plant biologists want to understand why. Salk Institute scientists studying a plant protein called SOBER1 recently discovered one mechanism by which, counterintuitively, plants seem to render themselves less resistant to infection.
The work, which appeared in Nature Communications on December 19, 2017, sheds light on plant resistance generally and could lead to strategies to boost plants’ natural immunity or to better contain infections that threaten to destroy an entire agricultural crop.
“There are a lot of losses in crop yields due to bacteria that kill plants,” says the paper’s senior author Joanne Chory, a Howard Hughes Medical Institute Investigator, director of Salk’s Plant Molecular and Cellular Biology Laboratory and a 2018 recipient of the Breakthrough Prize in Life Sciences. “With this work, we set out to understand the underlying mechanism of how resistance works, and to see how general it is.”
One of the ways plants fight bacterial infection is by killing off their own cells in which bacterial proteins are detected. But some bacteria have evolved a counter strategy—injecting special proteins that suppress the plant’s immune response by adding small, disabling chemical tags called acetyl groups to immune molecules. This process is called acetylation. What makes certain plants able to resist these bacterial counter measures while others succumb to infection remains unclear.
As a means to better understand such pathogen-plant interactions, Chory’s team turned to the well-studied weed Arabidopsis thaliana and, in particular, an enzyme called SOBER1—which had previously been reported to suppress the weed’s immune response to a bacterial protein known as AvrBsT. While it may seem counterintuitive to use immune suppression to study infection resistance, the Salk biologists thought doing so could yield useful information.
The researchers started by determining SOBER1’s amino acid sequence—the particular order of building blocks that gives a protein its basic identity. Intriguingly, they found it was very similar to a cancer-pathway-related human enzyme. This enzyme contains a characteristic tunnel into which proteins with certain types of modifications can fit and be cut as part of the enzymatic reaction. It turns out SOBER1 can be classified as part of a vast protein superfamily known as alpha/beta hydrolases. These enzymes share a common core structure but are very flexible in the chemical reactions they catalyze, which range from the breakdown of fat to the detoxification of chemicals called peroxides.
Next, they used a more than 100-year-old technique called X-ray crystallography to determine SOBER1’s three-dimensional structure. While similar to the human enzyme, the plant enzyme’s tunnel had two extra amino acids sticking down from the top: one at the entrance and one in the middle.
“When we saw those, we realized they had to have a dramatic effect on function because they basically block the tunnel,” says Salk research associate and co–first author Marco Bürger.
To discover what the purpose might be, Bürger and co–first author Björn Willige, also a research associate, used substrates (molecules that enzymes act on) with different lengths and biochemically tested how well they fit in the enzyme and whether they could be cut. Only certain types fit and were cut—very short acetyl groups. This suggested that SOBER1 is a deacetylase—a class of enzyme that removes acetyl groups. Furthermore, the team mutated SOBER1 and thus opened the blocked tunnel. With this change, Bürger and Willige engineered an enzyme that lost its strong specificity for short acetyl groups and instead preferred longer substrates.
“For the initial biochemistry experiments, we used established, artificial substrates,” says Willige. “But next we wanted to see what would happen in plants.”
For this, they used tobacco plants—which have large leaves that are easy to work with—and a bacterium with a protein called AvrBsT, known to trigger acetylation. They produced AvrBsT in different regions of tobacco leaves along with SOBER1 and several mutated (and thus nonfunctional) versions of the enzyme.
Leaves producing AvrBsT had brown patches of dead tissue, indicating that AvrBsT had initiated a cell death program to curtail the systemic spreading of the pathogen. Leaves that produced AvrBsT together with SOBER1 looked healthy, indicating that SOBER1 reversed the action of AvrBsT. Strikingly, mutated SOBER1 versions with an opened tunnel were not able to prevent the tissue from dying. From this, the researchers concluded that deacetylation must be the underlying chemical reaction leading to suppression of the plant’s immune response.
The tobacco tests supported the idea of SOBER1 being a deacetylase that would remove acetyl groups added by bacterial proteins. Without the acetyl groups tagging proteins, the plant didn’t recognize them as foreign and thus didn’t mount a cell-killing immune response. The leaves looked healthier because cells weren’t dying.
“SOBER’s function is surprising because it keeps infected tissue alive, which puts the plant at risk,” says Chory, who also holds the Howard H. and Maryam R. Newman Chair in Plant Biology at Salk. “But we are just beginning to understand these types of mechanisms, and there could very well be conditions in which the actions of SOBER1 is beneficial.”
Further tests showed that the activity and function of SOBER1 is not restricted to the weed Arabidopsis thaliana but also exists in a plant called oilseed rape, demonstrating that the findings of Chory’s lab could be applied to agricultural crops and biofuel resources.
Bürger and Willige would next like to begin screening for chemical inhibitors that could block SOBER1, thereby allowing plants to have a full immune response to pathogenic bacteria.
This article has been republished from materials provided by Salk Institute. Note: material may have been edited for length and content. For further information, please contact the cited source.
Marco Bürger, Björn C. Willige, Joanne Chory. A hydrophobic anchor mechanism defines a deacetylase family that suppresses host response against YopJ effectors. Nature Communications, 2017, 8(1). DOI: 10.1038/s41467-017-02347-w
As Natasha Hurley-Walker makes clear in her astronomy speech, it's difficult to observe the universe. Though any person can look up into the sky on a clear night, our eyes aren't designed to gaze into deep space and see anything outside of our own galaxy, the Milky Way. Even using the most powerful telescopes that human ingenuity has delivered, which can pick up galaxies far distant from our own, astronomers can still only get a picture of the visible spectrum of light. That's where radio telescopes come in.
Anyone who's gone through a high school physics class knows about the Doppler effect: waves are stretched when their source moves away from the observer and compressed when it moves toward them. Since the universe is expanding, everything is moving away from everything else, so the light of distant galaxies is shifted toward the red. But the electromagnetic spectrum extends far beyond visible light (i.e., colors): light from sufficiently distant objects is stretched out of the visible range altogether, first into the infrared and ultimately into radio wavelengths that our eyes cannot see.
Hurley-Walker and her team of researchers created a radio telescope in the desert of Australia called GLEAM. It can sense these invisible, stretched-out light waves, and it thus gives us a more complete picture of the universe.
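The wavelength stretching described above is simple arithmetic: an observed wavelength equals the emitted wavelength times (1 + z), where z is the redshift. The z values below are illustrative, not tied to any particular galaxy:

```python
# Cosmological redshift: observed wavelength = emitted wavelength * (1 + z).

VISIBLE_MAX_NM = 700  # rough red edge of human vision, in nanometers

def observed_wavelength_nm(emitted_nm, z):
    return emitted_nm * (1 + z)

# Green light (500 nm) emitted at increasing (illustrative) redshifts:
for z in (0.1, 1.5, 8.0):
    shifted = observed_wavelength_nm(500, z)
    band = "visible" if shifted <= VISIBLE_MAX_NM else "beyond visible"
    print(f"z={z}: {shifted:.0f} nm -> {band}")
```

Even modest redshifts push visible light past the red edge of human vision, which is why instruments sensitive to longer wavelengths are needed to see the distant universe.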
The strangest product catalog on earth belongs to the Isotope Business Office, which manages the sale of atomic isotopes produced at Department of Energy labs around the country. It's got your calcium, platinum, and titanium. Your ytterbium, your strontium-95, and of course uranium-235 and plutonium-239 (responsible for Hiroshima and Nagasaki). The precursor to plutonium-238 is neptunium-237, a radioactive by-product of nuclear power plants. Oak Ridge gets its neptunium trucked in from Idaho National Laboratory in a powdery form called oxide. When it arrives, it's deposited via a dumbwaiter-like system in a shielded room called a hot cell. Some of the neptunium oxide will have already decayed into a more dangerous radioactive material called protactinium, so small quantities are moved into a separate hot cell plumbed for radioactive liquids, where scientists can do the chemistry needed to remove it. Then the liquid is poured through a column of silica glass beads, whose surface attracts protactinium.
The remaining liquid is moved to a glove box. In the glove box, the neptunium is processed with a technique invented at Oak Ridge called modified direct denitration. The liquid solution is rotated in a heated kiln until it sifts out, again in a powdered oxide form. This powder is mixed with powdered aluminum and pressed into pellets the size of a 5/8-inch socket, which are loaded into aluminum rods—targets for Oak Ridge's experimental high flux isotope reactor (HFIR). The HFIR offers much higher flux—the rate at which targets are bombarded with neutrons—than the reactor of a nuclear power plant.
Once target rods are loaded into the reactor, they're bombarded for a period of three to twelve months. As neutrons collide with the targets, some of them are absorbed by neptunium atoms. That creates a new neptunium isotope, neptunium-238, which radioactively decays into plutonium. When irradiation is complete, the targets go back into a hot cell. The rods are dissolved with a caustic solution and the radioactive material inside, now 12 to 14 percent plutonium-238, is again dissolved in nitric acid.
A process called solvent extraction isolates the plutonium and neptunium: Solvents are added to the solution that dissolve only those elements. Then scientists induce the solution to separate—like oil and water—so that they can remove the solvent that's bound to them. At this point, neptunium is separated and can be passed through the cycle again. The plutonium is purified through a process called ion exchange, which Oak Ridge is still refining—a key step to reaching the 1.5-kilogram-per-year delivery goal. Fully refined, the plutonium powder is packed into stainless-steel canisters designed for transporting radioactive materials. Isotopes are different forms of an element that share the same chemical properties, but that differ in mass and the number of neutrons they contain. Common elements that possess isotopes include carbon, oxygen, hydrogen, and nitrogen. Each element has a specific identifier, like 'C' for carbon, while a number placed before it identifies the isotope (e.g. 13C and 12C). Some elements have many isotopes, but there are two basic types: stable and unstable (radioactive). Stable isotopes do not change over time, while radioactive isotopes decrease, or decay, over predictable periods.
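The "predictable periods" mentioned here are half-lives. A minimal sketch of the decay law N(t) = N0 · (1/2)^(t / t_half), using carbon-14's textbook half-life of about 5,730 years (the function and constant names are ours, not from the article):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioactive isotope remaining after time t."""
    return 0.5 ** (t / half_life)

# Carbon-14 has a half-life of about 5,730 years.
C14_HALF_LIFE = 5730.0

print(remaining_fraction(5730, C14_HALF_LIFE))   # one half-life -> 0.5
print(remaining_fraction(11460, C14_HALF_LIFE))  # two half-lives -> 0.25
```

The same relation holds for any radioactive isotope; only the half-life value changes.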
To distinguish different isotopes from each other, scientists use special instruments called mass spectrometers. Isotopes are everywhere in the environment.
In crystallography, a vacancy is a type of point defect in a crystal. Crystals inherently possess imperfections, sometimes referred to as crystalline defects. A defect in which an atom is missing from one of the lattice sites is known as a "vacancy" defect. It is also known as a Schottky defect, although in ionic crystals the concepts are not identical.
Vacancies occur naturally in all crystalline materials. At any given temperature, up to the melting point of the material, there is an equilibrium concentration (ratio of vacant lattice sites to those containing atoms). At the melting point of some metals the ratio can be approximately 1:1000. This temperature dependence can be modeled by

Nv = N exp(−Qv / (kB T))

where Nv is the vacancy concentration, Qv is the energy required for vacancy formation, kB is the Boltzmann constant, T is the absolute temperature, and N is the concentration of atomic sites, i.e.
N = NA ρ / A, where ρ is the density, NA is the Avogadro constant, and A is the atomic mass.
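Putting illustrative numbers into these two relations shows the scale involved. The values below (copper near its melting point, with an assumed vacancy-formation energy of about 0.9 eV) are our own assumptions, not from the text:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
N_A = 6.022e23   # Avogadro constant, 1/mol

# Illustrative values for copper (assumed, not from the article):
rho = 8.96       # density, g/cm^3
A = 63.55        # atomic mass, g/mol
Q_v = 0.9        # vacancy-formation energy, eV (assumed)
T = 1357.0       # absolute temperature, K (near copper's melting point)

# Concentration of atomic sites, then equilibrium vacancy concentration
N = N_A * rho / A
N_v = N * math.exp(-Q_v / (K_B * T))

print(f"N   = {N:.2e} sites/cm^3")
print(f"N_v = {N_v:.2e} vacancies/cm^3 (ratio ~ {N_v / N:.1e})")
```

For these inputs the vacancy ratio comes out near 5 × 10^-4, the same order of magnitude as the roughly 1:1000 figure quoted above for metals at the melting point.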
It is the simplest point defect. In this system, an atom is missing from its regular atomic site. Vacancies are formed during solidification due to vibration of atoms, local rearrangement of atoms, plastic deformation, and ion bombardment.
The creation of a vacancy can be simply modeled by considering the energy required to break the bonds between an atom inside the crystal and its nearest neighbor atoms. Once that atom is removed from the lattice site, it is put back on the surface of the crystal and some energy is retrieved because new bonds are established with other atoms on the surface. However, there is a net input of energy because there are fewer bonds between surface atoms than between atoms in the interior of the crystal.
- Hong, J.; Hu, Z.; Probert, M.; Li, K.; Lv, D.; Yang, X.; Gu, L.; Mao, N.; Feng, Q.; Xie, L.; Zhang, J.; Wu, D.; Zhang, Z.; Jin, C.; Ji, W.; Zhang, X.; Yuan, J.; Zhang, Z. (2015). "Exploring atomic defects in molybdenum disulphide monolayers". Nature Communications. 6: 6293. Bibcode:2015NatCo...6E6293H. doi:10.1038/ncomms7293. PMID 25695374.
- Ehrhart, P. (1991) "Properties and interactions of atomic defects in metals and alloys", chapter 2, p. 88 in Landolt-Börnstein, New Series III, Vol. 25, Springer, Berlin
- Siegel, R. W. (1978). "Vacancy concentrations in metals". Journal of Nuclear Materials. 69-70: 117–146. Bibcode:1978JNuM...69..117S. doi:10.1016/0022-3115(78)90240-4.
Now, a team of researchers from the University of Texas at Dallas and Washington State University in Pullman, Wash., has made the counterintuitive discovery that aluminum, with a minor modification, is able to both break down and capture individual hydrogen atoms, potentially leading to a robust and affordable fuel storage system.
In nature, when two atoms of hydrogen meet they combine to form a very stable molecule (H2). Molecular hydrogen, however, has to be stored under great pressure and at very low temperatures, which is impractical if you want to power a vehicle or provide electricity for a home. A better solution would be to find a material that, at easily maintained temperatures and pressures, could efficiently store individual hydrogen atoms and release them on demand.
The first step in this process – hydrogen activation, breaking the chemical bonds that hold two hydrogen atoms together – is typically done by exposing molecular hydrogen to a catalyst. The best catalytic materials currently available are made of so-called "noble metals" (e.g. palladium and platinum). These elements efficiently enable hydrogen activation, but their scarcity makes them prohibitively expensive for widespread use.
In the quest to find an equally efficient yet less-expensive alternative, lead researcher Yves J. Chabal of the University of Texas at Dallas and Santanu Chaudhuri at Washington State University have identified a potential new hydrogen activation method that has the additional advantage of being an effective hydrogen-storage medium. Their proposed system relies on aluminum, a plentiful but inert metal that under normal conditions doesn't react with molecular hydrogen.
The key to unlocking aluminum's potential, the researchers surmised, is to impregnate its surface with some other metal that would facilitate the catalytic reaction. In this case, the researchers tested titanium, which is much more plentiful than noble metals and is used only sparingly in creating the titanium-doped aluminum surface.
Under very controlled temperatures and pressures, the researchers studied the aluminum surface, particularly in the vicinity of the titanium atoms, for telltale signs that catalytic reactions were taking place. The "smoking gun" was found in the spectroscopic signature of carbon monoxide (CO), which was added to the system to help identify areas of hydrogen activity. If atomic hydrogen were present, then the wavelength of light absorbed by the carbon monoxide bound to the catalytic metal center would become shorter, signaling that the catalyst was working.
"We've combined a novel infrared reflection absorption-based surface analysis method and first principles-based predictive modeling of catalytic efficiencies and spectral response, in which a carbon monoxide molecule is used as a probe to identify hydrogen activation on single-crystal aluminum surfaces containing catalytic dopants," says Chaudhuri.
Their studies revealed that in areas doped with titanium, the infrared signature of the CO shifted to shorter wavelengths even at very low temperatures. This "blue shift" was an indication that atomic hydrogen was being produced around some of the catalytic centers on an aluminum surface.
As part of a hydrogen storage system, an aluminum-supported catalyst has other advantages over more expensive metals. If technical advances like this can provide a pathway for aluminum to combine with hydrogen to form aluminum hydride (a stable solid with a composition ratio of a single aluminum atom to three hydrogen atoms) and store hydrogen as a high-density solid-state material, a critical step in developing a practical fuel system can be achieved.
The titanium further advances the process by helping the hydrogen bind to the aluminum to form aluminum hydride. If used as a fuel-storage device, the aluminum hydride could be made to release its store of hydrogen by simply raising its temperature.
"Although titanium may not be the best catalytic center for fully reversible aluminum hydride formation, the results prove for the first time that titanium-doped aluminum can activate hydrogen in ways that are comparable to expensive and less-abundant catalyst metals such as palladium and other near-surface alloys consisting of similar noble metals and their bimetallic analogs," Chaudhuri explains.
Irinder Chopra, the lead student in this project, will present this research at AVS' 58th International Symposium & Exhibition, held Oct. 30 – Nov. 4, 2011, in Nashville, Tenn. A paper based on this research – "Turning Aluminum into a noble-metal like catalyst for low-temperature molecular hydrogen activation" – was published online in the journal Nature Materials on September 25. Support for this research came from the Department of Energy – Office of Basic Energy Sciences.
The AVS 58th International Symposium & Exhibition will be held Oct. 30 – Nov. 4 at the Nashville Convention Center.
Presentation SS1-TuM-4, "Turning Aluminum into a Noble-metal like Catalyst for Low Temperature Molecular Hydrogen Activation," is at 9 a.m. on Tuesday, Nov. 1.
Main meeting website: http://www2.avs.org/symposium/AVS58/pages/greetings.html
Technical Program: http://www2.avs.org/symposium
Catherine Meyers | EurekAlert!
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Rhaebo lynchi is only known from the type locality, Vereda “El Chuscal”, boundaries with Caicedo, Urrao, Antioquia, Colombia (Mueses-Cisneros, 2007).
Habitat and Ecology
The holotype was captured in primary forest, under logs during a rainy season. The female has a mass of pale eggs and convoluted oviducts (Mueses-Cisneros, 2007).
The species is only known from a single specimen, the holotype.
No major threats are known for this species.
No conservation measures are known for this species. Surveys are needed in the type locality to determine whether it is still present there (the holotype was collected in 1972) (Mueses-Cisneros, pers. comm. 2008).
Red List Status
Data Deficient (DD)
Listed as Data Deficient since it has only recently been described, and there is still very little known about its extent of occurrence, area of occupancy, status and ecological requirements.
Rhaebo lynchi differs from other congeners by a combination of morphological features (Mueses-Cisneros, 2007).
Jonh Jairo Mueses-Cisneros 2008. Rhaebo lynchi. The IUCN Red List of Threatened Species 2008: e.T136106A4237642. http://dx.doi.org/10.2305/IUCN.UK.2008.RLTS.T136106A4237642.en | <urn:uuid:716f0226-8f27-442a-a89d-7f6c4836a072> | 2.875 | 329 | Knowledge Article | Science & Tech. | 51.584565 | 95,575,799 |
The fruit fly Drosophila melanogaster is pickier than previously thought when it comes to choosing the best site for egg-laying. Using behavioral assays, researchers at the Max Planck Institute for Chemical Ecology in Jena, Germany, and their colleagues in Nigeria discovered that the insects prefer the smell of citrus.
An orange is an ideal oviposition substrate for fruit flies because the parasitoid wasp Leptopilina boulardii, which lays its eggs inside Drosophila larvae, is repelled by the odor of citrus.
M. C. Stensmyr / Lund University
This preference is controlled by one single odorant receptor. In nature, laying eggs on oranges is advantageous, because parasitoid wasps feeding on the larvae of Drosophila avoid citrus fruits. The same smell that is attractive to the flies also repels the wasps.
The scientists used imaging techniques to visualize the activity in certain areas of the flies’ brains while these were stimulated with different odors, and they were able to localize and identify the receptor for citrus. Flies in which this receptor was silenced were no longer able to distinguish oranges from other fruits. (Current Biology, December 5, 2013, DOI 10.1016/j.cub.2013.10.047)
For egg-laying insects, selecting the best site to lay eggs is crucial for the survival of the eggs and larvae. Once the eggs have been deposited, the maternal care of the female flies ends: eggs and larvae are henceforth at the mercy of their environment; their range is usually limited. Suitable and sufficient food sources for the hungry larvae and protection against predators and parasites are important selection criteria for the best oviposition substrates.
Multiple choice experiment shows fruit flies like citrus
First, Marcus Stensmyr, Bill Hansson and their colleagues in the Department of Evolutionary Neuroethology tested the preferred egg-laying substrates of fruit flies by letting insects select among different ripe fruits. They excluded damaged fruits to make sure that the smell of yeast would not influence the flies’ choices (yeast is the flies’ main food source). An analysis of the behavioral assays showed that female flies preferred to lay their eggs on oranges. Further selection experiments helped to identify the odor that was the crucial factor for the flies’ choice: the terpene limonene. Flies were not attracted to limonene-deficient oranges. On the other hand, they were immediately drawn to fruits that had been spiked with synthetic limonene. Although citrus is not an attractant for the flies, it elicits egg laying. Interestingly, evolution has split the perception of odors into two channels: those that guide flies to their food source and those that elicit the oviposition behavior.
A single odorant receptor controls preference for oranges
By performing electrophysiological measurements, the scientists were able to quantify the flies‘ responses to limonene and to localize and identify the olfactory sensory neurons responsible for detecting citrus. Subsequently, they tested the flies’ responses to 450 different odors. Apart from limonene, valencene, another component of citrus fruit, also triggered a strong response. Valencene distinguishes the scent of oranges from that of lemons; lemons are less favored by flies because of their acidity. Compounds that activated these particular sensory neurons induced oviposition. In vivo calcium imaging of the flies’ brains stimulated with citrus enabled the researchers to identify the corresponding odorant receptor.
"It is fascinating that a complex behavior, such as choosing an egg-laying site, can be broken down into multiple sub-routines that have such a simple genetic basis," says Marcus Stensmyr. "We were quite surprised that by silencing just this single odorant receptor, flies could no longer localize their preferred egg-laying substrate."
Citrus protects Drosophila larvae against parasites
In nature, a considerable proportion of Drosophila larvae are killed by enemies, mainly parasitoid wasps that lay their eggs inside the larvae. It is astonishing that these wasps are repelled by citrus odors, although citrus should guide them to their food source: Drosophila larvae. The parasitoid wasp Leptopilina boulardii, which specializes in Drosophila melanogaster, is repelled by valencene. In a further choice experiment, the wasps had to choose larvae from two substrates − one with valencene and one without − and they clearly preferred the larvae on the valencene-free substrate. It is still unknown why the wasps avoid citrus. However, it is certain that female fruit flies have learned to let their offspring grow on citrus fruits, because there the larvae are better protected against parasites.
These research results provide important information about the criteria that insects use to select an oviposition site that guarantees the improved development of their offspring. Marcus Stensmyr is convinced that "there are similar processes in other insects and ways to manipulate them." [AO]

Original Publication:
Current Biology, December 5, 2013, DOI 10.1016/j.cub.2013.10.047
Dr. Marcus C. Stensmyr, Lund University, email@example.com

Contact and picture requests:
Download of high resolution pictures on http://www.ice.mpg.de/ext/735.html
Angela Overmeyer | Max-Planck-Institut
World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes
17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt
Plant mothers talk to their embryos via the hormone auxin
17.07.2018 | Institute of Science and Technology Austria
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
The role of photosynthesis in plant defense is a fundamental question awaiting further molecular and physiological elucidation. To this end we investigated host responses to infection with the bacterial pathogen Xanthomonas axonopodis pv. citri, the pathogen responsible for citrus canker. This pathogen encodes a plant-like natriuretic peptide (XacPNP) that is expressed specifically during the infection process and prevents deterioration of the physiological condition of the infected tissue. Proteomic assays of citrus leaves infected with a XacPNP deletion mutant (DeltaXacPNP) resulted in a major reduction in photosynthetic proteins such as Rubisco, Rubisco activase and ATP synthase as compared with infection with wild type bacteria. In contrast, infiltration of citrus leaves with recombinant XacPNP caused an increase in these host proteins and a concomitant increase in photosynthetic efficiency as measured by chlorophyll fluorescence assays. Reversion of the reduction in photosynthetic efficiency in citrus leaves infected with DeltaXacPNP was achieved by the application of XacPNP or Citrus sinensis PNP lending support to a case of molecular mimicry. Finally, given that DeltaXacPNP infection is less successful than infection with the wild type, it appears that reducing photosynthesis is an effective plant defense mechanism against biotrophic pathogens.
Light is electromagnetic radiation: an oscillating electric field orthogonal to an oscillating magnetic field. The polarization of an electromagnetic wave refers to the orientation of these field oscillations. If the direction of the electric field of light is well defined, the light is called polarized; polarization affects, for example, the focus of laser beams. Even in isotropic media, so-called inhomogeneous waves can be launched into a medium whose refractive index has a significant imaginary part (or "extinction coefficient"); such fields are also not strictly transverse. In circular or elliptical polarization, the fields rotate at a constant rate in a plane as the wave travels. Most common sources of visible light, including thermal black-body radiation and fluorescence (but not lasers), produce light described as "incoherent". The relationship of the Stokes parameters to the intensity and polarization-ellipse parameters can be expressed through a set of standard equations. However, in practice there are cases in which all of the light cannot be viewed in such a simple manner due to spatial inhomogeneities or the presence of mutually incoherent waves.
In many other optical techniques polarization is crucial or at least must be taken into account and controlled; such examples are too numerous to mention. In comparison with lower frequencies such as microwaves, the amount of angular momentum in light, even of pure circular polarization, compared to the same wave's linear momentum or radiation pressure, is very small and difficult to even measure.
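As a concrete illustration of the Stokes description mentioned above, the four parameters can be computed from the complex field amplitudes Ex and Ey. This is a sketch using one common sign convention (conventions for the sign of S3 differ between optics texts):

```python
def stokes(Ex, Ey):
    """Stokes parameters (S0..S3) from complex field amplitudes.

    Sign conventions for S3 vary between texts; here a positive S3
    denotes one handedness of circular polarization.
    """
    S0 = abs(Ex) ** 2 + abs(Ey) ** 2       # total intensity
    S1 = abs(Ex) ** 2 - abs(Ey) ** 2       # horizontal vs. vertical
    cross = Ex * Ey.conjugate()
    S2 = 2 * cross.real                    # +45 deg vs. -45 deg linear
    S3 = -2 * cross.imag                   # circular component
    return S0, S1, S2, S3

# Horizontal linear polarization: all power shows up in S1.
print(stokes(1 + 0j, 0j))

# Circular polarization: Ex and Ey equal in magnitude, 90 deg out of phase,
# so S1 and S2 vanish and |S3| equals the intensity S0.
a = 1 / 2 ** 0.5
print(stokes(a + 0j, a * 1j))
```

Fully polarized light satisfies S1² + S2² + S3² = S0²; partially polarized light falls short of that bound.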
London: Climate change may negatively impact the sea turtle population, as warmer temperatures could lead to higher numbers of females and increased nest failure, scientists have warned.
The temperature at which sea turtle embryos incubate determines the sex of an individual, which is known as Temperature-Dependent Sex Determination (TSD).
The pivotal temperature for TSD is 29 degrees Celsius: at this temperature males and females are produced in equal proportions; above 29 degrees Celsius mainly females are produced, while below 29 degrees Celsius more males are born.
"Up to a certain point, warmer incubation temperatures benefit sea turtles because they increase the natural growth rate of the population: more females are produced because of TSD, which leads to more eggs being laid on the beaches," said Jacques-Olivier Laloe from Swansea University in the UK.
However, beyond a critical temperature, the natural growth rate of the population decreases because of an increase of temperature-linked in-nest mortality, researchers said.
"Temperatures are too high and the developing embryos do not survive. This threatens the long-term survival of this sea turtle population," Laloe said.
Within the context of climate change and warming temperatures, all else being equal, sea turtle populations are expected to be more female-biased in the future.
While it is known that males can mate with more than one female during the breeding season, if there are too few males in the population this could threaten population viability, researchers said.
Sea turtle eggs only develop successfully in a relatively narrow thermal range of about 25-35 degrees Celsius, so if incubation temperatures are too low the embryo does not develop but if they are too high then development fails, they said.
This means that if incubation temperatures increase in the future as part of climate warming, then more sea turtle nests will fail.
Researchers recorded sand temperatures at a globally important loggerhead sea turtle nesting site in Cape Verde off the northwest coast of Africa over six years.
They also recorded the survival rates of over 3,000 nests to study the relationship between incubation temperature and hatchling survival.
Using local climate projections, the team then modelled how turtle numbers are likely to change throughout the century at this nesting site.
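The two temperature effects described in this article, feminization around the 29-degree pivot and nest failure outside the roughly 25–35 degree window, can be caricatured in a few lines. The logistic steepness and the hard viability cut-offs below are illustrative assumptions, not the parameters of the published model:

```python
import math

PIVOT_C = 29.0                        # pivotal temperature: 1:1 sex ratio
VIABLE_MIN, VIABLE_MAX = 25.0, 35.0   # approximate viable incubation range

def female_fraction(temp_c, steepness=1.5):
    """Toy logistic TSD curve: more females above the pivot temperature.

    The steepness value is an illustrative assumption.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (temp_c - PIVOT_C)))

def nest_survives(temp_c):
    """Crude all-or-nothing viability window (real survival declines gradually)."""
    return VIABLE_MIN <= temp_c <= VIABLE_MAX

for t in (27.0, 29.0, 31.0, 36.0):
    # below pivot: male-biased; at pivot: 50/50; above pivot: female-biased;
    # beyond the viable window the nest fails regardless of sex ratio
    print(t, round(female_fraction(t), 2), nest_survives(t))
```

Even this toy version reproduces the article's qualitative point: modest warming first skews hatchlings female, and further warming pushes nests past the viability limit entirely.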
"In recent years, in places like Florida - another important sea turtle nesting site - more and more turtle nests are reported to have lower survival rates than in the past," Laloe said.
"This shows that we should really keep a close eye on incubation temperatures and the in-nest survival rates of sea turtles if we want to successfully protect them," he added.
The study was published in the journal Global Change Biology.
Updated Date: Jun 23, 2017 14:21
A collaboration between a team of scientists in Cambridge and Oxford, UK and Sydney, Australia has identified an increase in the chemical serotonin in specific parts of the insects' nervous system as initiating the key changes in behaviour that cause them to swarm.
Desert Locusts are one of the most devastating insect pests, affecting 20% of the world's land surface. Vast swarms containing billions of locusts stretching over many square kilometres periodically devastated parts of the USA at the time of the settlement of the West, and continue to inflict severe economic hardship on parts of Africa and China. In November 2008 swarms six kilometres (3.7 miles) long plagued Australia.
Locusts belong to the grasshopper family but unlike their harmless relatives they have the unusual ability to live in either a solitary or a gregarious state, with the genetic instructions for both packaged within a single genome.
Locusts originate from barren regions that see only occasional transient rainfalls. While unforgiving conditions prevail, locusts eke out a living as solitary individuals with a strong aversion to mingling with other locusts. When the rains come, the amount and quality of vegetation expands and the locusts can breed in large numbers.
In deserts, however, the rains are not sustained and food soon becomes more and more sparse. Thus large numbers of locusts are funnelled into dwindling patches of remaining vegetation where they are forced into close contact with each other. This crowding triggers a dramatic and rapid change in the locusts' behaviour: they become very mobile and they actively seek the company of other locusts. This new behaviour keeps the crowd together while the insects acquire distinctly different colours and large muscles that equip them for prolonged flights in swarms.
As Steve Rogers from Cambridge University emphasises: "The gregarious phase is a strategy born of desperation and driven by hunger, and swarming is a response to find pastures new".
Solitary and gregarious locusts are so different in looks and behaviour that they were thought to be separate species until 1921. But the realisation that crowding triggers swarming posed a new problem: how can the mere presence of other locusts have such a dramatic effect? The new research, which was funded by the Biotechnology and Biological Sciences Research Council, the Natural Sciences and Engineering Research Council of Canada and the Royal Society, solved this 90 year old question by identifying an increase in the chemical serotonin in specific parts of the locust's nervous system as launching the fundamental changes in behaviour that lead to the gregarious phase.
In the laboratory, solitary locusts can be turned into gregarious ones in just two hours simply by tickling their hind legs to simulate the jostling that locusts experience in a crowd. This period coincides with a threefold but transient (less than 24 hours) increase in the amount of serotonin in the thoracic region of the nervous system. Experiments were then designed to show that serotonin is indeed the causal link between the experience of being in a crowd and the change in behaviour.
First, locusts were injected with specific chemicals that block the action of serotonin on its receptors: when these locusts were exposed to the same gregarizing stimuli, they did not become gregarious. Second, chemicals that block the production of serotonin had the same effect. Third, when injected with serotonin or chemicals that mimic serotonin, locusts turned gregarious even in the absence of other locusts. Finally, chemicals that increased the natural synthesis of serotonin enhanced gregarization when locusts were exposed to the tickling stimuli. This indicates that it is the synthesis of serotonin that is driven by these specific stimuli and in turn changes the behaviour.
Dr Michael Anstey, an author of the paper from the University of Oxford, said: "Up until now, whilst we knew the stimuli that cause locusts' amazing 'Jekyll and Hyde'-style transformation, nobody had been able to identify the changes in the nervous system that turn antisocial locusts into monstrous swarms. The question of how locusts transform their behaviour in this way has puzzled scientists for almost 90 years, now we finally have the evidence to provide an answer."
Dr Swidbert Ott, from Cambridge University, one of the co-authors of the article, said: "Serotonin profoundly influences how we humans behave and interact, so to find that the same chemical in the brain is what causes a normally shy antisocial insect to gang up in huge groups is amazing."
Professor Malcolm Burrows, also from Cambridge University, said: "We hope that this greater understanding of the mechanisms causing such a big change in behaviour will help in the control of this pest, and more broadly help in understanding the widespread changes in behavioural traits of animals."
Professor Steve Simpson of Oxford and Sydney Universities said: "No other biological system is understood from nerve cells to populations in such detail or to such effect: locusts offer an exemplar of how to span molecules to ecosystems – one of the greatest challenges in modern science."
Edited By: Norman J Rosenberg and Roberto C Izaurralde
Soil carbon sequestration can play a strategic role in controlling the increase of CO2 in the atmosphere and thereby help mitigate climatic change. There are scientific opportunities to increase the capacity of soils to store carbon and remove it from circulation for longer periods of time. The vast areas of degraded and desertified lands throughout the world offer great potential for the sequestration of very large quantities of carbon. If credits are to be bought and sold for carbon storage, quick and inexpensive instruments and methods will be needed to monitor and verify that carbon is actually being added and maintained in soils. Large-scale soil carbon sequestration projects pose economic and social problems that need to be explored.
This book focuses on scientific and implementation issues that need to be addressed in order to advance the discipline of carbon sequestration from theory to reality. The main issues discussed in the book are broad and cover aspects of basic science, monitoring, and implementation. The opportunity to restore productivity of degraded lands through carbon sequestration is examined in detail.
Reprinted from CLIMATIC CHANGE, 51:1, 2001
Storing Carbon in Agricultural Soils to Help Head-Off a Global Warming, N.J. Rosenberg, R.C. Izaurralde; Science Needs and New Technology for Increasing Soil Carbon Sequestration, F.B. Metting, J.L. Smith, J.S. Amthor, R.C. Izaurralde; Potential of Desertification Control to Sequester Carbon and Mitigate the Greenhouse Effect, R. Lal; Monitoring and Verifying Changes of Organic Carbon in Soil, W.M. Post, R.C. Izaurralde, L.K. Mann, N. Bliss; Soil Carbon - Policy and Economics, G. Marland, B.A. McCarl, U. Schneider.
By the end of this article, you will be able to describe what the alkali metals of the periodic table are, give a definition and examples, and explain their characteristic properties: behaviour in water, reactivity, uses, and their chemical and physical properties. Let's discuss them one by one.
What are Alkali Metals? – Definition
The word “alkali” has been derived from the Arabic word ‘alquili’, meaning the ashes of plants, from which certain compounds of the elements sodium and potassium were initially isolated.
Group 1 elements readily dissolve in water to form soluble hydroxides which are strongly alkaline in nature.
The alkali metals are also called s-block elements, since each of the elements listed below has one electron in the valence s-subshell of its atoms, i.e., they have the ns1 configuration (n represents the valence shell).
Must Read – Alkaline earth metals properties.
Alkali Metals Examples
All alkali metals are silvery-white, soft, light metals. Group IA elements are metals because they have low ionization energies and only a few valence electrons compared with the available vacant orbitals. They are highly malleable (can be pressed out into sheets) and ductile (can be drawn out into wires). When freshly cut, they have a bright luster which quickly tarnishes on exposure to air.
The similarity in the electronic configuration results in similar alkali metal properties. Group 1 of the periodic table comprises six elements:
- Lithium (Li)
- Sodium (Na)
- Potassium (K)
- Rubidium (Rb)
- Cesium (Cs) and
- Francium (Fr).
|Lithium||Sodium||Potassium||Rubidium||Cesium||Francium|
Alkali Metals Periodic Table
The orbital electronic configuration of elements is listed in the Table:
- Hydrogen is not a member of the alkali metal family; it has been included only because of the similarity of its electronic configuration with these elements, which are called the alkali metals.
- The last element, francium, is radioactive and rather unstable, with a half-life of only 21 minutes. Therefore, very little information is available about this element.
Alkali Metals Uses
- Alkali metals are highly malleable (can be pressed out into sheets) and ductile (can be drawn out into wires).
- Lithium, owing to its very low density, floats on the surface of kerosene; it is therefore used in applications where low density is required.
- They are diamagnetic and colorless.
- Due to their strong electropositive nature, they emit electrons when exposed to light (the photoelectric effect). This property is responsible for their use in photoelectric cells; cesium and potassium, in particular, are used for this purpose.
- Due to the presence of loosely held valence electrons which are free to move throughout the metal structure, the group 1 elements are good conductors of heat and electricity.
Characteristics of Alkali Metals – Properties
Alkali Metals Reactivity
Alkali metal reactivity increases with increasing atomic radius.
Cs > Rb > K > Na > Li (Atomic radius)
Atoms of the group 1 elements are the largest in their corresponding periods. Atomic, as well as ionic size, increases from Li to Fr due to the presence of an extra shell of electrons. Atomic volume (At. wt/Density) also increases in moving down from Li to Cs.
Behaviour with water – the smaller the size of a cation, the greater its charge density and hence the greater its tendency to draw electrons from molecules, which are thus polarized. Lithium-ion, being the smallest of the alkali metal ions, is the most extensively hydrated, while the Cs+ ion, the largest alkali metal ion, is the least hydrated.
Cs+ > Rb+ > K+ > Na+ > Li+ (Relative ionic radii)
Li+ > Na+ > K+ > Rb+ > Cs+ (Relative ionic radii in water / relative degree of hydration)
Lithium-ion, being heavily hydrated, moves very slowly under the effect of electric current and is thus the poorest conductor of electricity as compared to other alkali metal ions. Thus it is the degree of hydration of the ions rather than their size that determines the electrical conductivity of the alkali salt solutions. According to electrical conductivity measurements, the alkali metal ions conduct electric current in the following order.
Cs+ > Rb+ > K+ > Na+ > Li+ (Relative electrical conductivity)
Alkali Metals in Water
All alkali metal salts are ionic (except Li) and soluble in water, a solvent of high dielectric constant. The solubility in water is due to the fact that the cations get hydrated by water molecules. The degree of hydration depends on the size of the cation.
Reaction with water (formation of hydroxides). Alkali metals, their oxides, peroxides and even superoxides dissolve in water to form hydroxides, which are soluble and are called alkalies (water-soluble hydroxides are known as alkalies).
- 2Na + H2O →2NaOH + H2
- Li2O + H2O→ 2LiOH
- Na2O2 + 2H2O→ 2NaOH+ H2O2
- 2KO2+ 2H2O →2KOH + H2O2 + O2
The reactions of metals with water are so highly exothermic that the hydrogen gas evolved catches fire accompanied by an explosion. Therefore, alkali metals are not kept in contact with water.
Physical Properties of Alkali Metals
Their densities are quite low and increase from lithium to cesium. However, potassium is lighter than sodium (an anomaly), probably due to an unusual increase in the atomic size of potassium. Lithium, sodium, and potassium are lighter than water; lithium is the lightest known metal (density 0.534). Owing to its very low density, it floats to the surface of kerosene, hence it can’t be stored in kerosene. It is kept wrapped in paraffin wax.
Since these metals are highly electropositive, their electronegativity (i.e. tendency to attract electrons) values are very low. Further, since electropositive character increases on moving down the group, the electronegativity decreases in the same order, i.e. from Li to Cs.
Due to their large size, the outermost solitary s-electron is at a large distance from the nucleus and, therefore, can be easily removed. Thus their ionization energies are low.
Further, as the atomic radius increases on moving down the group the outer electron gets farther and farther away from the nucleus and, therefore, ionization energy decreases on moving down from Li to Cs.
Must Read: First Ionization Energy
The alkali metal atoms show only the +1 oxidation state. Because of their low ionization energies, they easily lose the outermost s-electron to form unipositive ions. Since the unipositive ions have the stable noble gas configuration (s2 or s2p6) in the valence shell, the energy required to pull another electron out of the valence shell is very high. Hence the second ionization energies of alkali metals are very high. Therefore, group 1 elements are univalent and form ionic compounds.
Further, since the alkali metal ions have a noble gas configuration with no unpaired electron, they are diamagnetic and colorless. In fact, all the compounds of alkali metal ions are colorless except those where the anion is colored, e.g., permanganates and dichromates.
Electropositive character of Alkali Metals
It is the tendency of the element to lose an electron. On account of their low ionization energies, these metals have a great tendency to lose the ns1 electron and form positive ions.
M → M+ + e–
Thus group 1 elements are strongly electropositive (or metallic in nature). Further, since ionization energy decreases from Li to Cs, the electropositive character increases in going down from Li to Cs.
Melting and boiling points.
The melting and boiling points are very low because of the weak bonding in the crystal lattice of the metals (weak bonding in the crystal lattice also explains the softness of alkali metals). The weak interatomic bonds (low binding energies) are due to their large atomic radii and especially the presence of only one valence electron per metal atom as compared to a large number of available vacant orbitals.
With the increase in the size of the metal atoms, the repulsion of the non-bonding electrons increases and therefore melting and boiling points decrease on moving down the group from Li (m.p. 186°C) to Cs (m.p. 28.5°C).
Flame Test Showing Alkali Metals Properties
The alkali metals and their salts, when introduced into the flame, give characteristic color to the flame.
|Li||Na||K||Rb||Cs|
|Crimson red||Golden yellow||Pale violet||Violet||Violet|
This property of the group 1 elements offers a very sensitive and reliable test (flame test) for alkali metals which are difficult to be identified by chemical methods as they do not form many insoluble compounds.
The reason for the flame colouration is that when an alkali metal or any of its salts is introduced into the flame, the outermost electrons of the alkali atoms absorb energy and are excited to higher energy levels. When the excited electrons return to their original (ground) level, they release the absorbed energy as visible light. For the same excitation energy, the level to which the electron in Li will rise is lower than that to which the electron in Na will rise, and this, in turn, is lower than the level to which the electron in K will rise, and so on.
These differences are due to differences in their ionization energies. Consequently, when the electron returns to the ground state, the energy released will be lowest in Li+ and will increase in the order: Li+ , Na+ , K+ , Rb+ , and Cs+ . As a result of this, the frequency of the light emitted in the Bunsen flame is minimum in lithium and corresponds to the red region of spectra. In potassium, the frequency of the light emitted corresponds to the violet region of spectra.
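To put rough numbers on this red-to-violet ordering, the photon energy of each flame colour can be computed from E = hc/λ. The sketch below is my own illustration, not part of the original article; the wavelengths are typical literature values for the main Li, Na and K emission lines and should be treated as assumptions.

```python
# Illustrative sketch: photon energy E = h*c/lambda for typical alkali-metal
# flame-emission lines. The wavelengths are common literature values used
# here only to show the red-to-violet energy trend described in the text.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

lines_nm = {
    "Li (crimson red)": 670.8,
    "Na (golden yellow)": 589.0,
    "K (violet)": 404.4,
}

def photon_energy_ev(wavelength_nm):
    """Energy of one photon of the given wavelength, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for label, nm in lines_nm.items():
    print(f"{label:18s} {nm:6.1f} nm -> {photon_energy_ev(nm):.2f} eV")
```

The violet potassium line carries roughly 1.7 times the energy per photon of the red lithium line, consistent with the ordering of excitation levels described above.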
Hydration of ions is an exothermic process. The energy released in the hydration of ions is known as hydration energy. Since the degree of hydration of M+ ions decreases as we go down the group, the hydration energy of alkali metal ions decreases from Li+ to Cs+ .
Chemical Properties of Alkali Metals
Group 1 elements are highly reactive chemically because of their low ionization enthalpies and enthalpy of atomization. Some of the important chemical properties of the members of the family are discussed.
Reaction with air
When freshly cut, alkali metals have luster but their surfaces get tarnished when exposed to air due to the formation of a layer of oxide, hydroxide, and carbonate.
- 4M + O2 → 2M2O (M=Metal)
- M2O + H2O → 2MOH
- 2MOH + CO2 → M2CO3 + H2O
The alkali metals cannot be stored in air. Similarly, they are not kept in water, owing to their strong affinity for it. They are normally kept in chemically inert solvents such as kerosene.
Reaction with oxygen
Alkali metals combine with oxygen upon heating to form different oxides depending upon their nature. Lithium forms the normal oxide (Li2O), sodium forms the peroxide (Na2O2), while potassium and the rest of the metals form superoxides (MO2, where M = K, Rb or Cs) upon heating in oxygen. On heating:
- 4Li + O2 →2Li2O (Lithium monoxide) Oxide ion: O2-
- 2Na + O2 → Na2O2 (Sodium peroxide) Peroxide ion: O22-
- K + O2 → KO2 (Potassium superoxide) Superoxide ion: O2-
Reactivity with oxygen increases from Li to Cs
Reaction with hydrogen
All alkali metals combine with hydrogen upon heating to form colorless crystalline hydrides which are of ionic nature.
- 2M + H2 → 2M+H-
(M=Li, Na, K. Rb, Cs)
The ionic character of the hydrides increases from Li to Cs.
As the alkali metals have low ionization enthalpies, their atoms can easily lose the valence electron to a hydrogen atom and form ionic hydrides (M+H-). Since the ionization enthalpy decreases down the group, the tendency to form the positive ion increases accordingly. Therefore, the ionic character of the hydrides also increases.
Reaction with halogens
Group 1 elements(M) combine with halogens(X) directly to form metal halides.
- 2M + X2 → 2MX
With the exception of certain lithium halides, the halides of the rest of the metals are ionic (M+X-). They have high melting and boiling points. The fused halides are good conductors of electricity, and in the fused state they are used for the preparation of the alkali metals.
Order of reactivity of M : Li < Na < K < Rb < Cs
Order of reactivity of X2 : F2>Cl2 > Br2 > I2
Reaction with Sulphur and Phosphorus
Alkali metals react with sulphur and phosphorus upon heating to form the corresponding sulphides and phosphides as follows:
- 16 Na + S8 → 8Na2S (Sod. sulphide)
- 12Na + P4 → 4Na3P (Sod. phosphide)
Both sulphides and phosphides are hydrolysed by water as follows:
- Na2S + H2O → NaOH + NaHS
- Na3P + 3H2O → 3NaOH + PH3
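The stoichiometry of equations like these can be verified mechanically by counting the atoms of each element on both sides. The short sketch below is my own illustration, not from the article; it handles only simple formulas without parentheses.

```python
# A minimal sketch that checks the atom balance of the two hydrolysis
# reactions above by counting each element on both sides of the equation.
import re
from collections import Counter

def atom_count(formula, coefficient=1):
    """Count atoms in a simple formula such as 'Na3P' or 'H2O' (no parentheses)."""
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += coefficient * (int(num) if num else 1)
    return counts

def side_count(side):
    """Total atom counts for one side, given as [(coefficient, formula), ...]."""
    total = Counter()
    for coeff, formula in side:
        total += atom_count(formula, coeff)
    return total

# Na2S + H2O -> NaOH + NaHS
assert side_count([(1, "Na2S"), (1, "H2O")]) == side_count([(1, "NaOH"), (1, "NaHS")])
# Na3P + 3 H2O -> 3 NaOH + PH3
assert side_count([(1, "Na3P"), (3, "H2O")]) == side_count([(3, "NaOH"), (1, "PH3")])
print("both hydrolysis reactions balance")
```

The same helper can be pointed at any of the other equations in this article to confirm that each element is conserved.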
Alkali Metals With Ammonia
Solubility in liquid ammonia. Group 1 elements dissolve in liquid ammonia to give a highly conducting solution with a blue color.
Alkali metals due to their low ionization energies, ionize in the ammonia solution to form ammoniated cations and ammoniated electrons.
- M + (x + y)NH3 → [M(NH3)x]+ + [e(NH3)y]-
The blue color of the solution is attributed to the fact that when light falls on the ammoniated electrons, they absorb energy corresponding to the red color and the transmitted light has a blue color. The electrical conductivity of the solution is because of ammoniated cations as well as ammoniated electrons.
(i) In concentrated solution, the color changes from blue to bronze. The blue solutions are paramagnetic while the concentrated solutions are diamagnetic in nature.
(ii) The presence of free ammoniated electrons makes the blue solution reducing in nature.
(iii) When dry ammonia gas is passed over heated metals, amides are formed and hydrogen gas is evolved.
- 2M + 2NH3 → 2MNH2 (metal amide) + H2
Alkali metal amides are powerful reducing agents due to the presence of amide (NH2-) ions.
This covers the basics of the alkali metals of the periodic table: definition, examples, characteristic properties, behaviour in water, reactivity, uses, and their chemical and physical properties.
If you found this useful, feel free to share it with others.
From: Howard Johnson ([email protected])
Date: Fri Mar 16 2001 - 12:15:30 PST
To Sainath Nimmagadda,
Wow! An actual Maxwell's equation question... I may
not be able to give you a complete answer, but hopefully
I can start you down the right path. Those of you
not interested in philosophical questions about
Maxwell's equations may want to skip this message.
The principle in question is the "minimum energy"
principle. My recollection of Maxwell's equations
(specifically I *think* it's the ones that say
the Laplacian of both electric and magnetic
fields are zero within source-free regions)
is that the distributions of charge and current
in a statics problem fall into a pattern
that satisfies all the boundary conditions around
the edges of the region of interest,
satisfies the Laplacian conditions in the middle,
AND ALSO just happens to store the *minimum*
amount of energy in the interior fields.
In other words, you aren't going to get huge,
unexplained, spurious magnetic fields in
the middle of an otherwise quiet region (unless
you believe in vacuum fluctuations, which is
a different subject entirely...).
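The connection between "Laplacian equals zero" and "minimum stored energy" can be seen numerically. The sketch below is my own illustration of the principle, not something from the original message: it relaxes a small grid toward the solution of Laplace's equation and checks that an interior perturbation (with the boundary held fixed) raises the discrete field energy.

```python
# Solve Laplace's equation on a small grid by Jacobi relaxation, then check
# that the relaxed potential stores less "field energy" (sum of squared
# gradients) than the same boundary problem with a bump added in the interior.

N = 20

def relax(v, sweeps=2000):
    """Jacobi relaxation: repeatedly replace interior points by the
    average of their four neighbours (drives the discrete Laplacian to 0)."""
    for _ in range(sweeps):
        new = [row[:] for row in v]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                new[i][j] = 0.25 * (v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1])
        v = new
    return v

def energy(v):
    """Discrete analogue of the stored field energy: sum of |grad V|^2."""
    e = 0.0
    for i in range(N - 1):
        for j in range(N - 1):
            e += (v[i+1][j] - v[i][j]) ** 2 + (v[i][j+1] - v[i][j]) ** 2
    return e

# Boundary condition: top edge held at 1 V, the other edges at 0 V.
grid = [[0.0] * N for _ in range(N)]
grid[0] = [1.0] * N
solution = relax(grid)

# Perturb the interior of the solution while keeping the boundary fixed.
perturbed = [row[:] for row in solution]
perturbed[N // 2][N // 2] += 0.5

print(energy(solution) < energy(perturbed))  # the Laplace solution stores less energy
```

Any such interior bump raises the energy, which is the discrete version of the minimum-energy behaviour Johnson describes.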
For a parallel-plate capacitor, the minimum-stored-
energy principle means that charge
pumped from one plate to the other will distribute
itself fairly evenly across the two plates, producing an
even distribution of electric field intensity everywhere.
That's the minimum-stored-energy configuration.
The energy stored in a capacitor is E=(1/2)*C*(V^^2),
where C is the capacitance and V^^2 is the voltage squared.
If you divide the plate in two, making two capacitors each
with half the capacitance and half the total charge, the
total energy stored remains the same:
E = (2 capacitors)*(1/2)*(half the capacitance)*((same voltage)^^2)
If you bunch the same charge all together on just
one of these half-capacitors, you'd have twice the electric field
intensity (twice the voltage) on that capacitor,
but no voltage on the other.
The energy on the charged half would be
E' = (1/2)*(half the capacitance)*((double voltage)^^2),
which works out to twice as big as E. If you try all combinations
of charge distribution, you'll find that the
way to minimize the total stored energy in this system
is to distribute the charge equally on both capacitors.
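Johnson's capacitor argument is easy to check numerically. This sketch (my illustration of the reasoning above, with arbitrary example values) sweeps how a fixed total charge Q is split between the two half-capacitors and confirms that the even split minimizes the stored energy, while piling all the charge on one half doubles it.

```python
# Put total charge Q on two half-capacitors of capacitance C/2 each and
# sweep the split. Stored energy per half is E = q^2 / (2 * C_half); the
# minimum falls at the even split q = Q/2, as the text argues.
C = 1.0e-6      # total capacitance, farads (illustrative value)
Q = 1.0e-6      # total charge, coulombs (illustrative value)
C_HALF = C / 2

def total_energy(q):
    """Energy stored when one half-capacitor carries q and the other Q - q."""
    return q**2 / (2 * C_HALF) + (Q - q)**2 / (2 * C_HALF)

splits = [i / 100 * Q for i in range(101)]
best = min(splits, key=total_energy)
print(f"minimum-energy split: q = {best / Q:.2f} * Q")
print(f"even-split energy  : {total_energy(Q / 2):.3e} J")
print(f"all-on-one energy  : {total_energy(Q):.3e} J")
```

The all-on-one configuration comes out with exactly twice the energy of the even split, matching the E' = 2E result in the text.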
I use the capacitor analogy because I find most electrical
engineers have an easier time visualizing electric-field
problems than magnetic-field problems.
Let's shift gears now to look at currents. Imagine I
divide a conducting path (like a printed-circuit trace)
into a multitude of skinny, parallel pathways.
Now look to see in what pattern the current distributes
itself. At frequencies high enough that the inductance of
the traces is more significant than the resistance, but not
so high that we have to worry about excessive radiative
losses or non-TEM propagation modes, the answer is this:
the current distributes itself in that pattern
that minimizes the total energy stored in the magnetic field.
The stored energy for inductive problems is: E = (1/2)*L*(I^^2),
where L is the system inductance and I^^2 is the
total current squared. As you can see, stored magnetic
energy E and inductance L are directly proportional to each
other. Therefore, the minimum-stored-energy distribution of
current and the minimum-inductance distribution of current are one and the same.
Notice that I have assumed in this treatment that the
resistance is insignificant and there are no significant
capacitances to worry about. Both assumptions apply
pretty well to the problem of figuring the inductance
of a bypass-capacitor via, the issue that started this thread.
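The same minimization can be demonstrated for current. The sketch below (my illustration, with arbitrary example inductances) splits a total current between two parallel inductive paths and shows that the minimum-energy split matches the classic inductive current divider, i1 = I*L2/(L1 + L2).

```python
# Two parallel inductive paths L1 and L2 share a total current I. Sweep the
# split and minimize the stored magnetic energy E = 0.5*L*i^2 per path; the
# numeric minimum reproduces the inductive current divider.
L1 = 2.0e-9   # henries (illustrative values for two via/trace paths)
L2 = 6.0e-9
I = 1.0       # amperes

def stored_energy(i1):
    """Magnetic energy when path 1 carries i1 and path 2 carries I - i1."""
    return 0.5 * L1 * i1**2 + 0.5 * L2 * (I - i1)**2

splits = [k / 10000 * I for k in range(10001)]
best = min(splits, key=stored_energy)
divider = I * L2 / (L1 + L2)   # analytic minimum
print(f"numeric minimum : i1 = {best:.4f} A")
print(f"current divider : i1 = {divider:.4f} A")
```

Most of the current takes the lower-inductance path, which is the behaviour Johnson invokes for returning signal current.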
In answer to what might logically be your next
question, "Why do electromagnetic fields tend towards
the minimum-stored-energy distribution?", I can only say
that I'm not sure anyone really knows -- we just observe
that this is the way nature seems to operate. Perhaps
someone more well-versed in electromagnetic theory
can provide an answer.
It's possible that by assuming the current is *NOT* in
the minimum-energy distribution you could prove some
impossibility, like a perpetual-motion machine or
something, that would convince you of the absurdity of
the situation, but that won't really increase your
understanding unless you also intuitively believe that
nature is not absurd. Further discussion of *that* issue
is probably best left to physicist-philosophers.
I hope this brief answer is helpful to you, and doesn't
just stir up a lot of other doubts.
Dr. Howard Johnson
At 08:31 AM 3/15/01 -0800, you wrote:
>Please see below:
>Howard Johnson wrote:
>> Dear Itzhak Hirshtal and Brian Young,
>> The difficulties with approximating the inductance
>> of a via are even worse than you
>> may have suspected. Both approximations are flawed whether
>> you use +1 or -3/4, (or, as I have also seen, -1).
>> The issue of the exact constant (1, -3/4, or something
>> else) depends critically on your assumption about
>> the path of returning signal current. (Current always
>> makes a loop; when signal current traverses the via,
>> a returning signal current flows SOMEWHERE in
>> the opposite direction.). It is a principle
>> of Maxwell's equations that high-speed returning signal
>> current will flow in whatever path produces the
>> least overall inductance.
>My question is on this last statement. I like to understand which
>Maxwell's equation suggests this and how? Thanks.
**** To unsubscribe from si-list or si-list-digest: send e-mail to
[email protected] In the BODY of message put: UNSUBSCRIBE
si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
si-list archives are accessible at http://www.qsl.net/wb6tpu
This archive was generated by hypermail 2b29 : Thu Jun 21 2001 - 10:11:14 PDT | <urn:uuid:ef2023ac-5767-4f73-94d7-1f56ae23ebe5> | 2.6875 | 1,400 | Comment Section | Science & Tech. | 42.932866 | 95,575,930 |
The brain is faced with a similar problem. The images captured by light-sensitive cells in the retina are on the order of a megapixel. The brain does not have the transmission or memory capacity to deal with a lifetime of megapixel images. Instead, the brain must select out only the most vital information for understanding the visual world.
In today's online issue of Current Biology, a Johns Hopkins team led by neuroscientists Ed Connor and Kechen Zhang describes what appears to be the next step in understanding how the brain compresses visual information down to the essentials.
They found that cells in area "V4," a midlevel stage in the primate brain's object vision pathway, are highly selective for image regions containing acute curvature. Experiments by doctoral student Eric Carlson showed that V4 cells are very responsive to sharply curved or angled edges, and much less responsive to flat edges or shallow curves.
To understand how selectivity for acute curvature might help with compression of visual information, co-author Russell Rasquinha (now at University of Toronto) created a computer model of hundreds of V4-like cells, training them on thousands of natural object images. After training, each image evoked responses from a large proportion of the virtual V4 cells -- the opposite of a compressed format. And, somewhat surprisingly, these virtual V4 cells responded mostly to flat edges and shallow curvatures, just the opposite of what was observed for real V4 cells.
The results were quite different when the model was trained to limit the number of virtual V4 cells responding to each image. As this limit on responsive cells was tightened, the selectivity of the cells shifted from shallow to acute curvature. The tightest limit produced an eight-fold decrease in the number of cells responding to each image, comparable to the file size reduction achieved by compressing photographs into the .jpeg format. At this level, the computer model produced the same strong bias toward high curvature observed in the real V4 cells.
Why would focusing on acute curvature regions produce such savings? Because, as the group's analyses showed, high-curvature regions are relatively rare in natural objects, compared to flat and shallow curvature. Responding to rare features rather than common features is automatically economical.
Despite the fact that they are relatively rare, high-curvature regions are very useful for distinguishing and recognizing objects, said Connor, a professor in the Solomon H. Snyder Department of Neuroscience in the School of Medicine, and director of the Zanvyl Krieger Mind/Brain Institute.
"Psychological experiments have shown that subjects can still recognize line drawings of objects when flat edges are erased. But erasing angles and other regions of high curvature makes recognition difficult," he explained.
Brain mechanisms such as the V4 coding scheme described by Connor and colleagues help explain why we are all visual geniuses.
"Computers can beat us at math and chess," said Connor, "but they can't match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world." This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around.
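A back-of-the-envelope calculation shows why limiting the number of responsive cells compresses the code. The population and activity numbers below are my own hypothetical illustration (the article does not give them): if an image is represented by listing which cells fire, the cost in bits scales with the number of active cells, so the eight-fold reduction in responsive cells yields a roughly eight-fold smaller code.

```python
# Illustrative sketch: encode an image by listing the indices of the active
# cells. Each index costs log2(N) bits, so the code size is proportional to
# the number of active cells per image.
import math

N_CELLS = 1000          # hypothetical population of V4-like cells
DENSE_ACTIVE = 400      # cells responding per image before the sparsity limit
SPARSE_ACTIVE = 50      # after the eight-fold reduction described in the text

def code_bits(n_cells, n_active):
    """Bits to list the indices of the active cells (a simple upper-bound code)."""
    return n_active * math.log2(n_cells)

dense = code_bits(N_CELLS, DENSE_ACTIVE)
sparse = code_bits(N_CELLS, SPARSE_ACTIVE)
print(f"dense code : {dense:7.1f} bits/image")
print(f"sparse code: {sparse:7.1f} bits/image")
print(f"compression: {dense / sparse:.1f}x")
```

With this simple index-listing code the compression ratio equals the ratio of active-cell counts, which is the sense in which responding only to rare features is automatically economical.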
To learn more about the Mind/Brain Institute, go here: http://krieger.jhu.edu/mbi/.
Lisa DeNike | EurekAlert!
World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes
17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt
Plant mothers talk to their embryos via the hormone auxin
17.07.2018 | Institute of Science and Technology Austria
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Posted: Dec 11, 2013
How do silver nanoparticles impact the composting of municipal solid waste?
(Nanowerk News) Like many other nanoparticles, silver nanoparticles (AgNPs) have the potential for release into the environment throughout their life cycle. At the end of their useful life, products containing nanoparticles are often disposed of with the municipal solid waste stream. In a disposal scenario, nanoparticles (e.g., AgNPs) may leach from products into the solid waste.
A new study ("The Impact of Silver Nanoparticles on the Composting of Municipal Solid Waste"), conducted by scientists at the University of Cincinnati and the U.S. Environmental Protection Agency, investigated the impacts of polyvinylpyrrolidone (PVP)-coated silver nanoparticles (PVP-AgNPs) on the composting of the biodegradable organic fraction of municipal solid waste. This research represents one of the few studies that evaluate end-of-life management concerns arising from the increasing use of nanomaterials in everyday life.
The study also examines relatively low concentrations that may be encountered in a real-world scenario, and how such exposure may impact the function and composition of microbial communities associated with compost samples.
The team found that the AgNPs evaluated in their study did not significantly influence aerobic composting processes at the concentrations that could be expected to be present in the solid waste stream.
The researchers conclude that, extrapolating from their results, similar toxicological behavior of AgNPs would be expected in organically rich municipal solid waste landfills if the concentration of AgNPs is relatively low.
They also point out, however, that additional research is needed to identify the concentrations at which AgNPs begin to have a toxicological impact on waste management systems, where microbial groups and microbial processes may be more strongly affected.
Source: American Chemical Society
Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves (minute distortions of spacetime predicted by Einstein's theory of general relativity) to collect observational data about objects such as neutron stars and black holes, events such as supernovae, and processes including those of the early universe shortly after the Big Bang.
Gravitational waves have a solid theoretical basis, founded upon the theory of relativity. They were first predicted by Einstein in 1916; although a specific consequence of general relativity, they are a common feature of all theories of gravity that obey special relativity. However, after 1916 there was a long debate about whether the waves were actually physical or artefacts of coordinate freedom in general relativity; this was not fully resolved until the 1950s. Indirect observational evidence for their existence first came in the late 1980s, from monitoring of the Hulse–Taylor binary pulsar (discovered 1974); the pulsar orbit was found to evolve exactly as would be expected for gravitational wave emission. Hulse and Taylor were awarded the 1993 Nobel Prize in Physics for this discovery.
On 11 February 2016 it was announced that the LIGO collaboration had directly observed gravitational waves for the first time in September 2015. The second observation of gravitational waves was made on 26 December 2015 and announced on 15 June 2016. Barry Barish, Kip Thorne and Rainer Weiss were awarded the 2017 Nobel Prize in Physics for leading this work.
Most gravitational waves have very low frequencies and are much harder to detect; higher-frequency waves come from more dramatic events, and so were the first to be observed.
In addition to mergers of black holes, a binary neutron star merger has been directly detected: a gamma-ray burst (GRB) was detected by the orbiting Fermi gamma-ray burst monitor on 17 August 2017 at 12:41:06 UTC, triggering an automated notice worldwide. Six minutes later, a single detector at Hanford LIGO, a gravitational-wave observatory, registered a gravitational-wave candidate occurring 2 seconds before the gamma-ray burst. This set of observations is consistent with a binary neutron star merger, as evidenced by a multi-messenger transient event seen in both gravitational waves and across the electromagnetic spectrum (gamma-ray burst, optical, and infrared).
In 2015, the LIGO project was the first to directly observe gravitational waves using laser interferometers. The LIGO detectors observed gravitational waves from the merger of two stellar-mass black holes, matching predictions of general relativity. These observations demonstrated the existence of binary stellar-mass black hole systems, and were the first direct detection of gravitational waves and the first observation of a binary black hole merger. This finding has been characterized as revolutionary for science, because it verified our ability to use gravitational-wave astronomy to probe questions such as the nature of dark matter and the Big Bang.
There are several current scientific collaborations for observing gravitational waves. There is a worldwide network of ground-based detectors, all kilometre-scale laser interferometers, including: the Laser Interferometer Gravitational-Wave Observatory (LIGO), a joint project between MIT, Caltech and the scientists of the LIGO Scientific Collaboration with detectors in Livingston, Louisiana and Hanford, Washington; Virgo, at the European Gravitational Observatory, Cascina, Italy; GEO600 in Sarstedt, Germany; and the Kamioka Gravitational Wave Detector (KAGRA), operated by the University of Tokyo in the Kamioka Observatory, Japan. LIGO and Virgo are currently being upgraded to their advanced configurations. Advanced LIGO began observations in 2015, detecting gravitational waves despite not yet having reached its design sensitivity; Advanced Virgo is expected to start observing in 2016. The more advanced KAGRA is scheduled for 2018. GEO600 is currently operational, but its sensitivity makes it unlikely to make an observation; its primary purpose is to trial technology.
An alternative means of observation is using pulsar timing arrays (PTAs). There are three consortia, the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), and the Parkes Pulsar Timing Array (PPTA), which co-operate as the International Pulsar Timing Array. These use existing radio telescopes, but since they are sensitive to frequencies in the nanohertz range, many years of observation are needed to detect a signal and detector sensitivity improves gradually. Current bounds are approaching those expected for astrophysical sources.
Further in the future, there is the possibility of space-borne detectors. The European Space Agency has selected a gravitational-wave mission for its L3 slot, due to launch in 2034; the current concept is the evolved Laser Interferometer Space Antenna (eLISA). Also in development is the Japanese Deci-hertz Interferometer Gravitational wave Observatory (DECIGO).
Astronomy has traditionally relied on electromagnetic radiation. Originating with the visible band, as technology advanced, it became possible to observe other parts of the electromagnetic spectrum, from radio to gamma rays. Each new frequency band gave a new perspective on the Universe and heralded new discoveries. During the 20th century, indirect and later direct measurements of high-energy, massive, particles provided an additional window into the cosmos. Late in the 20th century, the detection of solar neutrinos founded the field of neutrino astronomy, giving an insight into previously inaccessible phenomena, such as the inner workings of the Sun. The observation of gravitational waves provides a further means of making astrophysical observations.
Russell Hulse and Joseph Taylor were awarded the 1993 Nobel Prize in Physics for showing that the orbital decay of a pair of neutron stars, one of them a pulsar, fits general relativity's predictions of gravitational radiation. Subsequently, many other binary pulsars (including one double pulsar system) have been observed, all fitting gravitational-wave predictions. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the first detection of gravitational waves.
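To make the Hulse–Taylor test concrete, the leading-order prediction for orbital decay comes from the Peters (1964) formula cited in the references below. The sketch that follows is illustrative: the masses, orbital period, and eccentricity are commonly quoted values for PSR B1913+16 and are assumptions of this example, not figures from the text.

```python
import math

# Peters (1964) leading-order orbital period decay from gravitational-wave
# emission, applied to the Hulse-Taylor binary pulsar PSR B1913+16.
# Parameter values below are commonly quoted ones, assumed for illustration.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

def peters_pbdot(m1, m2, P_b, e):
    """Orbital period derivative dP_b/dt (dimensionless, s/s) for an
    eccentric binary radiating gravitational waves (Peters 1964)."""
    # Eccentricity enhancement factor: eccentric orbits radiate more strongly.
    enhancement = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2)**3.5
    return (-192 * math.pi / (5 * c**5)
            * (2 * math.pi * G / P_b)**(5 / 3)
            * m1 * m2 * (m1 + m2)**(-1 / 3)
            * enhancement)

# PSR B1913+16: pulsar and companion masses, 7.75-hour orbit, e = 0.617
pbdot = peters_pbdot(1.4398 * M_SUN, 1.3886 * M_SUN, 27906.98, 0.6171)
print(f"predicted dP_b/dt = {pbdot:.2e}")   # about -2.4e-12 s per s
```

The measured decay of the pulsar's orbit agrees with this prediction to high precision, which is the "fit to general relativity" that the Nobel citation refers to.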
Gravitational waves provide complementary information to that provided by other means. By combining observations of a single event made using different means, it is possible to gain a more complete understanding of the source's properties. This is known as multi-messenger astronomy. Gravitational waves can also be used to observe systems that are invisible (or almost impossible to detect) to measure by any other means. For example, they provide a unique method of measuring the properties of black holes.
Gravitational waves can be emitted by many systems, but, to produce detectable signals, the source must consist of extremely massive objects moving at a significant fraction of the speed of light. The main source is a binary of two compact objects. Example systems include:
- Compact binaries made up of two closely orbiting stellar-mass objects, such as white dwarfs, neutron stars or black holes. Wider binaries, which have lower orbital frequencies, are a source for detectors like LISA. Closer binaries produce a signal for ground-based detectors like LIGO. Ground-based detectors could potentially detect binaries containing an intermediate mass black hole of several hundred solar masses.
- Supermassive black hole binaries, consisting of two black holes with masses of 10⁵–10⁹ solar masses. Supermassive black holes are found at the centre of galaxies. When galaxies merge, it is expected that their central supermassive black holes merge too. These are potentially the loudest gravitational-wave signals. The most massive binaries are a source for PTAs. Less massive binaries (about a million solar masses) are a source for space-borne detectors like LISA.
- Extreme-mass-ratio systems of a stellar-mass compact object orbiting a supermassive black hole. These are sources for detectors like LISA. Systems with highly eccentric orbits produce a burst of gravitational radiation as they pass through the point of closest approach; systems with near-circular orbits, which are expected towards the end of the inspiral, emit continuously within LISA's frequency band. Extreme-mass-ratio inspirals can be observed over many orbits. This makes them excellent probes of the background spacetime geometry, allowing for precision tests of general relativity.
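The pairing of sources with detectors above follows directly from Kepler's third law: the dominant gravitational-wave frequency of a circular binary is twice its orbital frequency. A minimal sketch, in which the specific masses and separations are illustrative assumptions:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
PARSEC = 3.086e16   # metres per parsec

def gw_frequency(total_mass, separation):
    """Dominant gravitational-wave frequency (Hz) of a circular binary:
    twice the orbital frequency given by Kepler's third law."""
    f_orb = math.sqrt(G * total_mass / separation**3) / (2 * math.pi)
    return 2 * f_orb

# Two 1.4-solar-mass neutron stars about 100 km apart, near merger:
f_ns = gw_frequency(2.8 * M_SUN, 100e3)            # ~200 Hz: ground-based band

# Two 1e8-solar-mass black holes about 0.01 pc apart:
f_smbh = gw_frequency(2e8 * M_SUN, 0.01 * PARSEC)  # ~1e-8 Hz: PTA band
```

Close stellar-mass binaries land in the audio band where LIGO and Virgo operate, while supermassive binaries emit at nanohertz frequencies, which is why pulsar timing arrays must accumulate years of data to see them.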
In addition to binaries, there are other potential sources:
- Supernovae generate high-frequency bursts of gravitational waves that could be detected with LIGO or Virgo.
- Rotating neutron stars are a source of continuous high-frequency waves if they possess axial asymmetry.
- Early universe processes, such as inflation or a phase transition.
- Cosmic strings could also emit gravitational radiation if they do exist. Discovery of these gravitational waves would confirm the existence of cosmic strings.
Gravitational waves interact only weakly with matter. This is what makes them difficult to detect. It also means that they can travel freely through the Universe, and are not absorbed or scattered like electromagnetic radiation. It is therefore possible to see to the center of dense systems, like the cores of supernovae or the Galactic Centre. It is also possible to see further back in time than with electromagnetic radiation, as the early universe was opaque to light prior to recombination, but transparent to gravitational waves.
The ability of gravitational waves to move freely through matter also means that gravitational-wave detectors, unlike telescopes, are not pointed to observe a single field of view but observe the entire sky. Detectors are more sensitive in some directions than others, which is one reason why it is beneficial to have a network of detectors.
In cosmic inflation
Cosmic inflation, a hypothesized period when the universe rapidly expanded during the first 10⁻³⁶ seconds after the Big Bang, would have given rise to gravitational waves; that would have left a characteristic imprint in the polarization of the CMB radiation.
It is possible to calculate the properties of the primordial gravitational waves from measurements of the patterns in the microwave radiation, and use those calculations to learn about the early universe.
As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st century multi-messenger astronomy.
Gravitational-wave observations complement observations in the electromagnetic spectrum. These waves also promise to yield information in ways not possible via detection and analysis of electromagnetic waves. Electromagnetic waves can be absorbed and re-radiated in ways that make extracting information about the source difficult. Gravitational waves, however, only interact weakly with matter, meaning that they are not scattered or absorbed. This should allow astronomers to view the center of a supernova, stellar nebulae, and even colliding galactic cores in new ways.
Ground-based detectors have yielded new information about the inspiral phase and mergers of binary systems of two stellar mass black holes, and merger of two neutron stars. They could also detect signals from core-collapse supernovae, and from periodic sources such as pulsars with small deformations. If there is truth to speculation about certain kinds of phase transitions or kink bursts from long cosmic strings in the very early universe (at cosmic times around 10⁻²⁵ seconds), these could also be detectable. Space-based detectors like LISA should detect objects such as binaries consisting of two white dwarfs, and AM CVn stars (a white dwarf accreting matter from its binary partner, a low-mass helium star), and also observe the mergers of supermassive black holes and the inspiral of smaller objects (between one and a thousand solar masses) into such black holes. LISA should also be able to listen to the same kind of sources from the early universe as ground-based detectors, but at even lower frequencies and with greatly increased sensitivity.
Detecting emitted gravitational waves is a difficult endeavor. It involves ultra-stable, high-quality lasers and detectors calibrated with a sensitivity of at least 2·10⁻²² Hz⁻¹/², as demonstrated at the ground-based detector GEO600. It has also been proposed that even from large astronomical events, such as supernova explosions, these waves are likely to degrade to vibrations as small as an atomic diameter.
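To connect a quoted amplitude spectral density (ASD) to a physical displacement: a strain h stretches an interferometer arm of length L by ΔL = h·L, and a rough noise floor over a bandwidth Δf is ASD·√Δf. In the sketch below, the 4 km arm length (LIGO-like) and 100 Hz bandwidth are illustrative assumptions; only the ASD figure comes from the text.

```python
import math

# Rough detectability estimate from an amplitude spectral density (ASD).
asd = 2e-22          # strain noise, 1/sqrt(Hz), the GEO600 figure quoted above
bandwidth = 100.0    # Hz, assumed signal bandwidth (illustrative)
arm_length = 4e3     # m, LIGO-like interferometer arm (illustrative)

h_noise = asd * math.sqrt(bandwidth)   # RMS strain noise over the band
delta_L = h_noise * arm_length         # corresponding arm-length change, m

# The detector must therefore resolve arm motions of order 1e-18 m.
print(f"strain noise ~ {h_noise:.1e}, arm motion ~ {delta_L:.1e} m")
```

Even under these generous assumptions, the arm-length change to be resolved is vastly smaller than an atom, which is what makes the laser stability and calibration requirements so severe.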
- Peters, P.; Mathews, J. (1963). "Gravitational Radiation from Point Masses in a Keplerian Orbit". Physical Review. 131 (1): 435–440. Bibcode:1963PhRv..131..435P. doi:10.1103/PhysRev.131.435.
- Peters, P. (1964). "Gravitational Radiation and the Motion of Two Point Masses". Physical Review. 136 (4B): B1224–B1232. Bibcode:1964PhRv..136.1224P. doi:10.1103/PhysRev.136.B1224.
- Schutz, Bernard F. (1984). "Gravitational waves on the back of an envelope". American Journal of Physics. 52 (5): 412. Bibcode:1984AmJPh..52..412S. doi:10.1119/1.13627.
- Hulse, R. A.; Taylor, J. H. (1975). "Discovery of a pulsar in a binary system". The Astrophysical Journal. 195: L51. Bibcode:1975ApJ...195L..51H. doi:10.1086/181708.
- LIGO Scientific Collaboration and Virgo Collaboration; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T. (2016-06-15). "GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence". Physical Review Letters. 116 (24): 241103. arXiv: . Bibcode:2016PhRvL.116x1103A. doi:10.1103/PhysRevLett.116.241103. PMID 27367379.
- Moore, Christopher; Cole, Robert; Berry, Christopher (19 July 2013). "Gravitational Wave Detectors and Sources". Retrieved 17 April 2014.
- "Multi-messenger Observations of a Binary Neutron Star Merger" (16 October 2017). The Astrophysical Journal Letters.
- Overbye, Dennis (11 February 2016). "Physicists Detect Gravitational Waves, Proving Einstein Right". New York Times. Retrieved 11 February 2016.
- Krauss, Lawrence (11 February 2016). "Finding Beauty in the Darkness". New York Times. Retrieved 11 February 2016.
- Pretorius, Frans (2005). "Evolution of Binary Black-Hole Spacetimes". Physical Review Letters. 95 (12): 121101. arXiv: . Bibcode:2005PhRvL..95l1101P. doi:10.1103/PhysRevLett.95.121101. ISSN 0031-9007. PMID 16197061.
- Campanelli, M.; Lousto, C. O.; Marronetti, P.; Zlochower, Y. (2006). "Accurate Evolutions of Orbiting Black-Hole Binaries without Excision". Physical Review Letters. 96 (11): 111101. arXiv: . Bibcode:2006PhRvL..96k1101C. doi:10.1103/PhysRevLett.96.111101. ISSN 0031-9007. PMID 16605808.
- Baker, John G.; Centrella, Joan; Choi, Dae-Il; Koppitz, Michael; van Meter, James (2006). "Gravitational-Wave Extraction from an Inspiraling Configuration of Merging Black Holes". Physical Review Letters. 96 (11): 111102. arXiv: . Bibcode:2006PhRvL..96k1102B. doi:10.1103/PhysRevLett.96.111102. ISSN 0031-9007. PMID 16605809.
- Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P. (2016-02-11). "Observation of Gravitational Waves from a Binary Black Hole Merger". Physical Review Letters. 116 (6): 061102. arXiv: . Bibcode:2016PhRvL.116f1102A. doi:10.1103/PhysRevLett.116.061102. ISSN 0031-9007. PMID 26918975.
- Sesana, A. (22 May 2013). "Systematic investigation of the expected gravitational wave signal from supermassive black hole binaries in the pulsar timing band". Monthly Notices of the Royal Astronomical Society: Letters. 433 (1): L1–L5. arXiv: . Bibcode:2013MNRAS.433L...1S. doi:10.1093/mnrasl/slt034.
- "ESA's new vision to study the invisible universe". ESA. Retrieved 29 November 2013.
- Longair, Malcolm (2012). Cosmic century: a history of astrophysics and cosmology. Cambridge University Press. ISBN 1107669367.
- Bahcall, John N. (1989). Neutrino Astrophysics (Reprinted. ed.). Cambridge: Cambridge University Press. ISBN 052137975X.
- Bahcall, John (9 June 2000). "How the Sun Shines". Nobel Prize. Retrieved 10 May 2014.
- "The Nobel Prize in Physics 1993". Nobel Foundation. Retrieved 2014-05-03.
- Stairs, Ingrid H. (2003). "Testing General Relativity with Pulsar Timing". Living Reviews in Relativity. 6: 5. arXiv: . Bibcode:2003LRR.....6....5S. doi:10.12942/lrr-2003-5.
- Rincon, Paul; Amos, Jonathan (3 October 2017). "Einstein's waves win Nobel Prize". BBC News. Retrieved 3 October 2017.
- Overbye, Dennis (3 October 2017). "2017 Nobel Prize in Physics Awarded to LIGO Black Hole Researchers". The New York Times. Retrieved 3 October 2017.
- Kaiser, David (3 October 2017). "Learning from Gravitational Waves". The New York Times. Retrieved 3 October 2017.
- Nelemans, Gijs (7 May 2009). "The Galactic gravitational wave foreground". Classical and Quantum Gravity. 26 (9): 094030. arXiv: . Bibcode:2009CQGra..26i4030N. doi:10.1088/0264-9381/26/9/094030.
- Stroeer, A; Vecchio, A (7 October 2006). "The LISA verification binaries". Classical and Quantum Gravity. 23 (19): S809–S817. arXiv: . Bibcode:2006CQGra..23S.809S. doi:10.1088/0264-9381/23/19/S19.
- Abadie, J; Abbott, R.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Aronsson, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Atkinson, D. E.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; et al. (7 September 2010). "Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors". Classical and Quantum Gravity. 27 (17): 173001. arXiv: . Bibcode:2010CQGra..27q3001A. doi:10.1088/0264-9381/27/17/173001.
- "Measuring Intermediate-Mass Black-Hole Binaries with Advanced Gravitational Wave Detectors". Gravitational Physics Group. University of Birmingham. Retrieved 28 November 2015.
- "Observing the invisible collisions of intermediate mass black holes". LIGO Scientific Collaboration. Retrieved 28 November 2015.
- Volonteri, Marta; Haardt, Francesco; Madau, Piero (10 January 2003). "The Assembly and Merging History of Supermassive Black Holes in Hierarchical Models of Galaxy Formation". The Astrophysical Journal. 582 (2): 559–573. arXiv: . Bibcode:2003ApJ...582..559V. doi:10.1086/344675.
- Sesana, A.; Vecchio, A.; Colacino, C. N. (11 October 2008). "The stochastic gravitational-wave background from massive black hole binary systems: implications for observations with Pulsar Timing Arrays". Monthly Notices of the Royal Astronomical Society. 390 (1): 192–209. arXiv: . Bibcode:2008MNRAS.390..192S. doi:10.1111/j.1365-2966.2008.13682.x.
- Amaro-Seoane, Pau; Aoudia, Sofiane; Babak, Stanislav; Binétruy, Pierre; Berti, Emanuele; Bohé, Alejandro; Caprini, Chiara; Colpi, Monica; Cornish, Neil J; Danzmann, Karsten; Dufaux, Jean-François; Gair, Jonathan; Jennrich, Oliver; Jetzer, Philippe; Klein, Antoine; Lang, Ryan N; Lobo, Alberto; Littenberg, Tyson; McWilliams, Sean T; Nelemans, Gijs; Petiteau, Antoine; Porter, Edward K; Schutz, Bernard F; Sesana, Alberto; Stebbins, Robin; Sumner, Tim; Vallisneri, Michele; Vitale, Stefano; Volonteri, Marta; Ward, Henry (21 June 2012). "Low-frequency gravitational-wave science with eLISA/NGO". Classical and Quantum Gravity. 29 (12): 124016. arXiv: . Bibcode:2012CQGra..29l4016A. doi:10.1088/0264-9381/29/12/124016.
- Amaro-Seoane, P. (May 2012). "Stellar dynamics and extreme-mass ratio inspirals". arXiv: . Bibcode:2012arXiv1205.5240A.
- Berry, C. P. L.; Gair, J. R. (12 December 2012). "Observing the Galaxy's massive black hole with gravitational wave bursts". Monthly Notices of the Royal Astronomical Society. 429 (1): 589–612. arXiv: . Bibcode:2013MNRAS.429..589B. doi:10.1093/mnras/sts360.
- Amaro-Seoane, Pau; Gair, Jonathan R; Freitag, Marc; Miller, M Coleman; Mandel, Ilya; Cutler, Curt J; Babak, Stanislav (7 September 2007). "Intermediate and extreme mass-ratio inspirals—astrophysics, science applications and detection using LISA". Classical and Quantum Gravity. 24 (17): R113–R169. arXiv: . Bibcode:2007CQGra..24R.113A. doi:10.1088/0264-9381/24/17/R01.
- Gair, Jonathan; Vallisneri, Michele; Larson, Shane L.; Baker, John G. (2013). "Testing General Relativity with Low-Frequency, Space-Based Gravitational-Wave Detectors". Living Reviews in Relativity. 16: 7. arXiv: . Bibcode:2013LRR....16....7G. doi:10.12942/lrr-2013-7.
- Kotake, Kei; Sato, Katsuhiko; Takahashi, Keitaro (1 April 2006). "Explosion mechanism, neutrino burst and gravitational wave in core-collapse supernovae". Reports on Progress in Physics. 69 (4): 971–1143. arXiv: . Bibcode:2006RPPh...69..971K. doi:10.1088/0034-4885/69/4/R03.
- Abbott, B.; Adhikari, R.; Agresti, J.; Ajith, P.; Allen, B.; Amin, R.; Anderson, S.; Anderson, W.; Arain, M.; Araya, M.; Armandula, H.; Ashley, M.; Aston, S; Aufmuth, P.; Aulbert, C.; Babak, S.; Ballmer, S.; Bantilan, H.; Barish, B.; Barker, C.; Barker, D.; Barr, B.; Barriga, P.; Barton, M.; Bayer, K.; Belczynski, K.; Berukoff, S.; Betzwieser, J.; et al. (2007). "Searches for periodic gravitational waves from unknown isolated sources and Scorpius X-1: Results from the second LIGO science run". Physical Review D. 76 (8): 082001. arXiv: . Bibcode:2007PhRvD..76h2001A. doi:10.1103/PhysRevD.76.082001.
- "Searching for the youngest neutron stars in the galaxy". LIGO Scientific Collaboration. Retrieved 28 November 2015.
- Binétruy, Pierre; Bohé, Alejandro; Caprini, Chiara; Dufaux, Jean-François (13 June 2012). "Cosmological backgrounds of gravitational waves and eLISA/NGO: phase transitions, cosmic strings and other sources". Journal of Cosmology and Astroparticle Physics. 2012 (6): 027–027. arXiv: . Bibcode:2012JCAP...06..027B. doi:10.1088/1475-7516/2012/06/027.
- Damour, Thibault; Vilenkin, Alexander (2005). "Gravitational radiation from cosmic (super)strings: Bursts, stochastic background, and observational windows". Physical Review D. 71 (6): 063510. arXiv: . Bibcode:2005PhRvD..71f3510D. doi:10.1103/PhysRevD.71.063510.
- Schutz, Bernard F (21 June 2011). "Networks of gravitational wave detectors and three figures of merit". Classical and Quantum Gravity. 28 (12): 125023. arXiv: . Bibcode:2011CQGra..28l5023S. doi:10.1088/0264-9381/28/12/125023.
- Hu, Wayne; White, Martin (1997). "A CMB polarization primer". New Astronomy. 2 (4): 323–344. arXiv: . Bibcode:1997NewA....2..323H. doi:10.1016/S1384-1076(97)00022-5.
- Kamionkowski, Marc; Stebbins, Albert (1997). "Statistics of cosmic microwave background polarization". Physical Review D. 55 (12): 7368–7388. arXiv: . Bibcode:1997PhRvD..55.7368K. doi:10.1103/PhysRevD.55.7368.
- "Planning for a Bright Tomorrow: Prospects for Gravitational-Wave Astronomy with Advanced LIGO and Advanced Virgo". LIGO Scientific Collaboration. Retrieved 31 December 2015.
- Price, Larry (September 2015). "Looking for the Afterglow: The LIGO Perspective" (PDF). LIGO Magazine (7): 10. Retrieved 28 November 2015.
- See Cutler & Thorne 2002, sec. 2.
- See Cutler & Thorne 2002, sec. 3.
- See Seifert F., et al. 2006, sec. 5.
- See Golm & Potsdam 2013, sec. 4.
- Cutler, Curt; Thorne, Kip S. (2002), "An overview of gravitational-wave sources", in Bishop, Nigel; Maharaj, Sunil D., Proceedings of 16th International Conference on General Relativity and Gravitation (GR16), World Scientific, p. 4090, arXiv: , Bibcode:2002gr.qc.....4090C, ISBN 981-238-171-6
- Thorne, Kip S. (1995), "Gravitational radiation", Particle and Nuclear Astrophysics and Cosmology in the Next Millennium: 160, arXiv: , Bibcode:1995pnac.conf..160T
- Gravitational Wave Astronomy, Max Planck Institute for Gravitational Physics, retrieved 24 January 2013
- Schutz, B. F. (1999), "Gravitational wave astronomy", Classical and Quantum Gravity, 16 (12A): A131–A156, arXiv: , Bibcode:1999CQGra..16A.131S, doi:10.1088/0264-9381/16/12A/307
- LIGO Magazine, LIGO Scientific Collaboration
- LIGO Scientific Collaboration
- AstroGravS: Astrophysical Gravitational-Wave Sources Archive
- Video (04:36) – Detecting a gravitational wave, Dennis Overbye, NYT (11 February 2016).
- Video (71:29) – Press Conference announcing discovery: "LIGO detects gravitational waves", National Science Foundation (11 February 2016). | <urn:uuid:6728ede0-72a5-49b3-bea1-848640a1b256> | 4.0625 | 6,590 | Knowledge Article | Science & Tech. | 52.626151 | 95,575,977 |
Is the night sky darkest in the direction opposite the Sun? No. In fact, a rarely discernible faint glow known as the gegenschein (German for "counter glow") can be seen 180 degrees around from the Sun in an extremely dark sky. The gegenschein is sunlight back-scattered off small interplanetary dust particles. These dust particles are millimeter-sized splinters from asteroids and orbit in the ecliptic plane of the planets. Pictured above from last year is one of the more spectacular pictures of the gegenschein yet taken. Here a deep exposure of an extremely dark sky over Las Campanas Observatory in Chile shows the gegenschein so clearly that even a surrounding glow is visible. Notable background objects include the Andromeda galaxy, the Pleiades star cluster, the California Nebula, the belt of Orion just below the Orion Nebula and inside Barnard's Loop, and the bright stars Sirius and Betelgeuse. The gegenschein is distinguished from zodiacal light near the Sun by its high angle of reflection. During the day, a phenomenon similar to the gegenschein, called the glory, can be seen in reflecting air or clouds opposite the Sun from an airplane.
This picture originally appeared at NASA.
In ongoing observations of one of the universe’s earliest, most distant cluster of galaxies using NASA’s Spitzer Space Telescope, an international team of researchers led by Texas A&M’s Dr. Kim-Vy Tran has discovered that a significant fraction of those ancient galaxies are still actively forming stars.
Tran, an assistant professor in the Texas A&M Department of Physics and Astronomy and member of the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and her team have spent the past four months analyzing images taken from the Multiband Imaging Photometer for Spitzer (MIPS), essentially looking back in time nearly 10 billion years at a high red-shift cluster known as CLG J02182-05102. Mere months after first discovering the cluster and the fact that it is shockingly “modern” in its appearance and size despite being observed just 4 billion years after the Big Bang, the Texas A&M-led team was able to determine that the galaxy cluster produces hundreds to thousands of new stars every year — a far higher birthrate than what is present in nearby galaxies.
What is particularly striking, according to Tran, is the fact that the stellar birthrate is higher in the cluster’s center than at the cluster’s edges — the exact opposite of what happens in our local portion of the universe, where the cores of galaxy clusters are known to be galactic graveyards full of massive elliptical galaxies composed of old stars.
“A well-established hallmark of galaxy evolution in action is how the fraction of star-forming galaxies decreases with increasing galaxy density,” explains Tran, lead author of the team’s study which appears in The Astrophysical Journal Letters. “In other words, there are more star-forming galaxies in the field than in the crowded cores of galaxy clusters. However, in our cluster, we find many galaxies with star-formation rates comparable to their cousins in the lower-density field environment.”
Exactly why this star power increases as galaxies become more crowded remains a mystery. Tran thinks the densely-populated surroundings could lead to galaxies triggering activity in one another, or that all galaxies were extremely active when the universe was young.
The group’s discovery holds potentially compelling implications that could ultimately reveal more about how such massive galaxies form. Observations of nearby galaxy clusters confirm that they are made of stars that are at least 8 to 10 billion years old, which means that CLG J02182-05102 is nearing the end of its hyperactive star-building period.
Now that they have pinpointed the epoch when galaxy clusters are making the last of their stars, astronomers can focus on understanding why massive assemblies of galaxies transition from very active to passive. Identifying how long it takes for galaxies in clusters to build up their stellar mass as well as the time at which they stop provides strong constraints for how these massive galaxies form.
“Our study shows that by looking farther into the distant universe, we have revealed the missing link between the active galaxies and the quiescent behemoths that live in the local universe,” Tran adds. “Our discovery indicates that future studies of galaxy clusters in this red-shift range should be particularly fruitful for understanding how these massive galaxies form as a function of their environment.”
Tran’s team includes fellow Texas A&M astronomer Dr. Casey Papovich, who first identified the galaxy cluster CLG J02182-05102 in May. The collection of roughly 60 galaxies is observed just 4 billion years after the Big Bang, making it the earliest cluster of galaxies ever detected. However, the team was struck not by its age, but by its astoundingly modern appearance — a huge, red collection of galaxies typical in only local clusters.
The fact that Tran’s team was able to see these active galaxies so far back in time (Tran likens their find to discovering that her mild-mannered grandparent had lived a fast and furious youth) is only the preface to what they expect eventually to learn about these clusters. Tran will continue to lead an international collaboration with Papovich and their postdoctoral researchers to examine these clusters more thoroughly and hopefully to understand why they are still so energetic.
“We will analyze new observations scheduled to be taken with the Hubble Space Telescope and Herschel Space Telescope to study these galaxies more carefully to understand why they are so active,” Tran adds. “We will also start looking at several more distant galaxy clusters to see if we find similar behavior.”
The team’s findings are detailed in their paper, “Reversal of Fortune: Confirmation of an Increasing Star Formation-Density Relation in a Cluster at z=1.62,” available online at http://iopscience.iop.org/2041-8205/719/2/L126/.
For additional information on Texas A&M Astronomy, visit http://astronomy.tamu.edu.
NASA/JPL-Caltech Feature: http://www.spitzer.caltech.edu/news/1172-feature10-14
Contact: Chris Jarvis, (979) 845-7246 or Dr. Kim-Vy Tran, (979) 862-2747
Dr. Kim-Vy Tran | EurekAlert!
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:ebdd06b4-37ca-406e-8cca-1ba37e8ff7f2> | 3.84375 | 1,701 | Content Listing | Science & Tech. | 42.637356 | 95,575,990 |
A study of π and K production in proton-uranium and oxygen-uranium interactions at 22 GeV/A using decay muons
The like-sign dimuons copiously recorded in the NA38 experiment, both in p−U and O−U reactions at 200 GeV/nucleon, are interpreted as resulting from decays of π and K mesons in comparable proportions. The ++/−− ratio is large (≃1.7) and is ascribed to the K⁺ being more copiously produced than the K⁻. Both the average transverse momentum and the ++/−− ratio are comparable for p−U and O−U reactions, and both increase only slightly with the transverse energy E_T.
Keywords: Transverse Energy, Quark Matter, Decay Muon, Hadronic Cross Section, Muon Pair
Astronomers have discovered a 'hot molecular core', a cocoon of molecules surrounding a newborn massive star, for the first time outside our Galaxy. The discovery, which marks the first important step for observational studies of extragalactic hot molecular cores and challenges the hidden chemical diversity of our universe, appears in a paper in The Astrophysical Journal Volume 827.
The scientists from Tohoku University, the University of Tokyo, the National Astronomical Observatory of Japan, and the University of Tsukuba, used the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile to observe a newborn star located in the Large Magellanic Cloud, one of the closest neighbors of our Galaxy. As a result, a number of radio emission lines from various molecular gas are detected, which indicates the presence of a hot molecular core associated with the observed newborn star (Fig. 1 and 2).
Artist's concept image of the hot molecular core discovered in the Large Magellanic Cloud. Credit: FRIS/Tohoku University. The figure is a derivative work of the following sources (ESO/M. Kornmesser; NASA, ESA, and S. Beckwith (STScI) and the HUDF Team; NASA/ESA and the Hubble Heritage Team (AURA/STScI)/HEI).
Left: Distributions of molecular line emission from a hot molecular core in the Large Magellanic Cloud observed with ALMA. Emissions from dust, sulfur dioxide (SO2), nitric oxide (NO), and formaldehyde (H2CO) are shown as examples. Right: An infrared image of the surrounding star-forming region (based on the 8 micron data provided by the NASA/Spitzer Space Telescope). Credit: T. Shimonishi/Tohoku University, ALMA (ESO/NAOJ/NRAO)
The observations have revealed that the hot molecular core in the Large Magellanic Cloud shows significantly different chemical compositions as compared to similar objects in our Galaxy. In particular, the results suggest that simple organic molecules such as methanol are deficient in this galaxy, suggesting a potential difficulty in producing large organic species indispensable for the birth of life. The research team suggests that the unique galactic environment of the Large Magellanic Cloud affects the formation processes of molecules around a newborn star, and this results in the observed unique chemical compositions.
"This is the first detection of an extragalactic hot molecular core, and it demonstrates the great capability of new generation telescopes to study astrochemical phenomena beyond our Galaxy," said Dr. Takashi Shimonishi, an astronomer at Tohoku University, Japan, and the paper's lead author. "The observations have suggested that the chemical compositions of materials that form stars and planets are much more diverse than we expected."
It is known that various complex organic molecules, which have a connection to prebiotic molecules formed in space, are detected from hot molecular cores in our Galaxy. It is, however, not yet clear if such large and complex molecules exist in hot molecular cores in other galaxies. The newly discovered hot molecular core is an excellent target for such a study, and further observations of extragalactic hot molecular cores will shed light on the chemical complexities of our universe.
This work is supported by a Grant-in-Aid from the Japan Society for the Promotion of Science (15K17612).
The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of the European Organisation for Astronomical Research in the Southern Hemisphere (ESO), the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI). ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.
Full bibliographic information:
Authors: Takashi Shimonishi, Takashi Onaka, Akiko Kawamura, Yuri Aikawa
Title: The Detection of a Hot Molecular Core in the Large Magellanic Cloud with ALMA
Journal: The Astrophysical Journal, 827, 72
For further information, please contact:
Masaaki Hiramatsu | AlphaGalileo
The structure of junctions between carbon nanotubes and graphene shells
© The Royal Society of Chemistry. Junctions between carbon nanotubes and flat or curved graphene structures are fascinating for a number of reasons. It has been suggested that such junctions could be used in nanoelectronic devices, or as the basis of three-dimensional carbon materials, with many potential applications. However, there have been few detailed experimental analyses of nanotube-graphene connections. Here we describe junctions between nanotubes and graphene shells in a material produced by passing a current through graphite. Transmission electron micrographs show that the junction angles are not random but fall close to multiples of 30°. We show that connections with these angles are the only ones which are consistent with the symmetry of the hexagonal lattice, and molecular models show that a continuous lattice requires the presence of large carbon rings at the junction. Some of the configurations we propose have not been previously considered, and could be used to construct new kinds of three-dimensional carbon architecture. We also discuss the possible formation mechanism of the junctions.
Showing items related by title, author, creator and subject.
Recent trends in the synthesis of graphene and graphene oxide based nanomaterials for removal of heavy metals — A review. Lim, J.; Mujawar, Mubarak; Abdullah, E.; Nizamuddin, S.; Khalid, M.; Inamuddin (2018). © 2018 The Korean Society of Industrial and Engineering Chemistry. The advanced synthesis and development of raw graphene based on various significant functionalization has been outstanding in the wastewater treatment ...
Optimization of SnO2 Nanoparticles Confined in a Carbon Matrix towards Applications as High-Capacity Anodes in Sodium-Ion Batteries. Wei, S.; Chu, S.; Lu, Q.; Zhou, W.; Cai, R.; Shao, Zongping (2018). © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. SnO2/carbon composites including amorphous carbon and graphene or carbon nanotubes are synthesized by a gas-liquid interfacial approach and subsequent annealing ...
Sun, Hongqi; Liu, Shi Zhen; Zhou, Guanliang; Ang, Ming; Tade, Moses; Wang, Shaobin (2012). We discovered that chemically reduced graphene oxide, with an ID/IG > 1.4 (defective to graphite), can effectively activate peroxymonosulfate (PMS) to produce active sulfate radicals. The produced sulfate radicals (SO4•—) ...
Even just four months after the eclipse, scientists have learned a ton from the day the sun went out.
The event in the Earth’s upper atmosphere had been theorized before, but never detected.
"It’s like the eclipse shape was burned into her retina,” a doctor said about one patient.
But solar eclipses have been known to affect temperature and wind.
ABC News anchor Frank Reynolds shared his hopes for 2017.
The first daughter is among those looking forward to Monday's event.
“Has Everyone His Smoked Glass Ready?” one local newspaper asked before the 1918 celestial event.
From how to protect your eyes on Monday to why ancient civilizations were spooked by the moon blacking out the sun in the middle of the day. | <urn:uuid:48ae8509-ccb0-4c7c-98d4-b0dcc0aef4ff> | 2.65625 | 161 | Content Listing | Science & Tech. | 59.682143 | 95,576,109 |
NASA has narrowed the list of candidates to three space rocks for an ambitious mission to capture an asteroid and tow it to the moon, where it can be explored by astronauts.
The space agency's plan aims to bring a 23-foot-wide space rock into lunar orbit using a robotic space lasso. Once the asteroid is in a stable orbit around the moon, astronauts can visit as soon as 2021 using NASA's Orion space capsule and the giant Space Launch System mega-rocket.
NASA scientists have identified three of the best candidates from a list of 14 asteroids that could be prime contenders for this kind of mission, Paul Chodas, a senior scientist in NASA's Near-Earth Object Program Office told reporters in a teleconference Wednesday. [NASA's Asteroid Capture Mission in Pictures: How It Works]
"It's mostly orbital constraints that those 14 satisfy," Chodas said. "We did not have the opportunity to characterize the size. We have two to three which we'll characterize in the next year and if all goes well, those will be valid candidates that could be certified targets and we'll pass by another in the year 2016. So we have three from the list of 14."
Chodas also thinks the future of viable asteroid discovery is bright. NASA scientists could find about five more asteroid candidates each year as the agency prepares for the mission, he said during the teleconference at the Space 2013 conference held by the American Institute of Aeronautics and Astronautics in San Diego, Calif.
NASA's asteroid-capture mission aims to send astronauts to explore an asteroid by 2025, a goal set by President Barack Obama in 2010. NASA's 2014 budget plan sets aside $100 million to jump-start the work on the asteroid mission, though the entire project could cost up to $2.6 billion, according to a Keck Institute study.
Scientists are searching for a specific kind of space rock to tow into orbit around the moon. NASA officials want an asteroid that is between 20 and 30 feet in size, which is fairly small for an asteroid. The target size of the near-Earth object is somewhat constrained by the size of the bag and the ability of the robotic probe to wrangle the asteroid back to Earth, Chodas said.
Officials are also hoping to capture an asteroid that will be made out of useful material.
"It would be preferred to have a so-called 'C-type' asteroid which is one that has hydrated minerals," Chodas said. "These are the kinds of asteroids from which you can extract water and oxygen in theory. So this would be a very interesting object to have in distant retrograde orbit around the moon because we could get an idea of what would be required to use asteroids as way-stations to extract consumables should we need that on the way to Mars, for example."
Enhancing the discovery methods for possible asteroid-capture mission candidates could also increase the discovery rate for Earth-threatening asteroids, Chodas said. "We can't just look for one kind of asteroid and not find the other. With these enhancements, we're going to find more of both kinds."
NASA put out a call for proposals from outside groups about the best ways to go about tackling the asteroid mission — and it paid off. The space agency received more than 400 proposals for the asteroid-capture mission that officials narrowed down to 96 they plan on discussing in a public workshop taking place from Sept. 30 to Oct. 2.
Image: YouTube, Space.com
This article originally published at Space.com here | <urn:uuid:80d5f4d3-e912-48f5-a6d4-aa1e92a58d63> | 2.9375 | 786 | Truncated | Science & Tech. | 49.874146 | 95,576,117 |
In Vivo Testing
Zebrafish Tol2 Gene Expression Vector
Our Tol2 vector system is highly effective for inserting foreign DNA into the genome of host cells. This system is technically simple, utilizing plasmid transfection or electroporation to permanently integrate your gene(s) of interest into the host genome.
The system is derived from the Tol2 transposon, which was originally isolated from the teleost fish, medaka (Oryzias latipes). Based on sequence homology, the Tol2 transposon was found to be closely related to the hAT family of non-autonomous elements found throughout vertebrate genomes.
The Tol2 system contains two vectors, both engineered as E. coli plasmids. One vector, referred to as the helper plasmid, encodes the transposase. The other vector, referred to as the transposon plasmid, contains two inverted terminal repeats (ITRs) bracketing the region to be transposed. The gene to be delivered into host cells is cloned into this region of the transposon plasmid.
When the transposon and helper plasmids are co-transfected or co-electroporated into target cells, the transposase produced from the helper plasmid recognizes the two ITRs on the transposon, and inserts the flanked region including the two ITRs into the host genome. Insertion occurs without any significant bias with respect to insertion site sequence. This is unlike transposon systems which have specific target consensus sites. For example, piggyBac transposons typically insert at sites containing the sequence TTAA.
Tol2 is a class II transposon, meaning that it moves in a cut-and-paste manner, hopping from place to place without leaving copies behind. (In contrast, class I transposons move in a copy-and-paste manner.) Tol2 integrates as a single copy through a cut-and-paste mechanism. At each insertion site, the Tol2 transposase creates an 8 bp duplication, resulting in identical 8 bp direct repeats flanking each transposon integration site in the genome.
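The cut-and-paste integration with an 8 bp target-site duplication can be sketched in a toy simulation (illustrative only; the `integrate` function and the sequences are invented for this example, not real Tol2 data):

```python
# Toy model of Tol2-style integration: the ITR-flanked cargo is inserted
# at a target site, and 8 bp of that site are duplicated so that
# identical direct repeats end up flanking the transposon.

def integrate(genome: str, transposon: str, site: int, dup_len: int = 8) -> str:
    """Insert `transposon` into `genome` at `site`, duplicating
    `dup_len` bases of the target site on either side."""
    target_dup = genome[site:site + dup_len]
    return (genome[:site + dup_len] + transposon
            + target_dup + genome[site + dup_len:])

genome = "AAAACCCCGGGGTTTTACGTACGT"
cargo = "itr5-GENE-itr3"          # hypothetical ITR-flanked cargo
result = integrate(genome, cargo, site=4)

# The 8 bp starting at the insertion site now appear on both sides:
left = result[:result.index(cargo)][-8:]
right = result[result.index(cargo) + len(cargo):][:8]
assert left == right == "CCCCGGGG"
```

The duplication is why each genomic integration site ends up bracketed by the same 8 bp direct repeats, as described above.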
There are two alternative methods for introducing the transposase into target cells. The helper plasmid can be transiently transfected or electroporated into cells, where it will temporarily drive expression of the transposase. Alternatively, target cells can be injected with Tol2 mRNA generated by in vitro transcription from the helper plasmid. In either case, the transposase will only be expressed for a short time. With the loss of the helper plasmid or degradation of transposase mRNA, the integration of the transposon in the host genome becomes permanent. If Tol2 transposase is reintroduced into the cells, the transposon could get excised from the genome of some cells.
For further information about this vector system, please refer to the papers below.
- Genome Biol. 8(Suppl 1): S7 (2007). Review of Tol2 vectors.
- Genetics 174: 639 (2006). Identification of minimal sequences for Tol2 transposable elements.
- PLoS Genetics 2: e169 (2006). Large cargo-capacity transposition with a minimal Tol2 transposon.
Our Tol2 transposon vector system enables efficient insertion of sequences up to 11 kb into the genome of target cells. The Tol2 transposon and helper plasmids are optimized for high copy number replication in E. coli, efficient transfection or electroporation into a wide range of target cells, and high-level expression of the transgene carried on the vector.
Permanent integration of vector DNA: Conventional transfection or electroporation results in almost entirely transient delivery of DNA into host cells due to the loss of DNA over time. This problem is especially prominent in rapidly dividing cells. In contrast, transfection or electroporation of cells with the Tol2 transposon plasmid along with the helper plasmid (or introduction of Tol2 mRNA) can deliver genes carried on the transposon permanently into host cells due to the integration of the transposon into the host genome.
Technical simplicity: Delivering plasmid vectors into cells is technically straightforward, and far easier than virus-based vectors which require the packaging of live virus.
Very large cargo space: Our Tol2 transposon vector can accommodate ~11 kb of total DNA. The plasmid backbone and transposon-related sequences occupy only about 3 kb, leaving plenty of room to accommodate the user's sequence of interest.
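As rough arithmetic, assuming the approximate figures quoted above (~11 kb total, of which ~3 kb is backbone and transposon-related sequence), a quick helper can check whether a planned insert fits in the remaining room:

```python
# Approximate figures taken from the text above (assumptions, not
# guaranteed limits): ~11 kb total, ~3 kb backbone + transposon sequence.
TOTAL_CAPACITY_BP = 11_000
BACKBONE_BP = 3_000

def insert_fits(insert_bp: int) -> bool:
    """Return True if an insert of `insert_bp` bases fits in the
    ~8 kb of room left after the backbone is accounted for."""
    return insert_bp <= TOTAL_CAPACITY_BP - BACKBONE_BP

assert insert_fits(7_500)        # a 7.5 kb expression cassette fits
assert not insert_fits(9_000)    # a 9 kb cassette exceeds the ~8 kb room
```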
Limited cell type range: The delivery of Tol2 vectors into cells generally relies on transfection or electroporation. The efficiency of these methods can vary greatly from cell type to cell type. Non-dividing cells are often more difficult to transfect than dividing cells, and primary cells are often harder to transfect than immortalized cell lines. Some important cell types, such as neurons and pancreatic β cells, are notoriously difficult to transfect. These issues limit the use of the Tol2 system.
5' ITR: Tol2 5' terminal repeat. When a DNA sequence is flanked by two ITRs, the Tol2 transposase can recognize them, and insert the flanked region including the two ITRs into the host genome.
Promoter: The promoter driving your gene of interest is placed here.
Kozak: Kozak consensus sequence. This is placed in front of the start codon of the ORF of interest to facilitate translation initiation in eukaryotes.
ORF: The open reading frame of your gene of interest is placed here.
SV40 late pA: Simian virus 40 late polyadenylation signal. It facilitates transcriptional termination of the upstream ORF.
3' ITR: Tol2 3' terminal repeat. When a DNA sequence is flanked by two ITRs, the Tol2 transposase can recognize them, and insert the flanked region including the two ITRs into the host genome.
Ampicillin: Ampicillin resistance gene. It allows the plasmid to be maintained by ampicillin selection in E. coli.
pUC ori: pUC origin of replication. Plasmids carrying this origin exist in high copy numbers in E. coli. | <urn:uuid:816401ee-5e9b-43ed-b495-4f47611bf787> | 2.953125 | 1,335 | Knowledge Article | Science & Tech. | 34.630118 | 95,576,142 |
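The feature map above can be captured as a small machine-readable sketch (feature names and their order come from the text; no real coordinates are implied, and the check below only encodes the fact that the two ITRs bracket the transposed region while the selection marker and origin stay on the backbone):

```python
# Ordered feature list of the transposon plasmid, as described in the text.
FEATURES = [
    ("5' ITR",       "Tol2 5' terminal repeat"),
    ("Promoter",     "drives the gene of interest"),
    ("Kozak",        "translation-initiation consensus"),
    ("ORF",          "user's gene of interest"),
    ("SV40 late pA", "transcription termination"),
    ("3' ITR",       "Tol2 3' terminal repeat"),
    ("Ampicillin",   "selection marker in E. coli"),
    ("pUC ori",      "high-copy E. coli origin"),
]

names = [n for n, _ in FEATURES]
# Only the region between the two ITRs (ITRs included) is transposed:
transposed = names[names.index("5' ITR"): names.index("3' ITR") + 1]
assert transposed[0] == "5' ITR" and transposed[-1] == "3' ITR"
assert "Ampicillin" not in transposed and "pUC ori" not in transposed
```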
If you have: <a href="t" title="Henrik Gemal <firstname.lastname@example.org> Henrik Gemal <email@example.com> Henrik Gemal <firstname.lastname@example.org>">Henrik Gemal <email@example.com></a> that's a title attribute that includes a newline; the newline char is shown as || in the tooltip when the cursor is over the link. Expected: Newlines in title tags should just be ignored. Build Gecko/20001226
Correction: Newlines shouldn't be ignored. The title should just be shown correct with the newlines. Will attach screenshot of Mozilla and IE handling the attached testcase.
Summary: Newlines in title tooltip is show a two vertical lines (||) → Newlines in title tooltip is shown as two vertical lines (||)
Happens with the "MIT" link at the bottom of http://www.w3.org/DOM/ as well. cc'ing Hixie to make sure we're right about what the the correct behavior is.
Correct behaviour is to treat newlines as whitespace (and trim leading and trailing whitespace) per HTML4 and SGML. I don't know what we should do in XML. This is a parser-level requirement per the spec. Check Bugzilla's INVALID, WONTFIX and FIXED bugs with the words "newline" and "attribute" in the description fields for bugs similar to this one where this has been discussed to death.
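A minimal sketch of the normalization described in the comment above (newlines treated as whitespace, leading/trailing whitespace trimmed). This is an illustration in Python, not Mozilla's actual parser code:

```python
# Normalize an attribute value the way the comment above describes:
# CR/LF and tab characters become spaces, then the ends are trimmed.

def normalize_attr(value: str) -> str:
    collapsed = (value.replace("\r\n", " ").replace("\r", " ")
                      .replace("\n", " ").replace("\t", " "))
    return collapsed.strip()

assert normalize_attr("Henrik Gemal\nHenrik Gemal") == "Henrik Gemal Henrik Gemal"
assert normalize_attr("  spaced\r\nout  ") == "spaced out"
```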
But what should we do for
? (What do we do now?) In a DOM where we want to preserve all whitespace for editing, etc. (perhaps this is contrary to SGML, though, but...), how should we behave? (How should we handle 'content: attr(X); white-space: pre'?)
> But what should we do for
? Exactly the same thing we do for ordinary CRs/LFs. There is *no* difference in: <CR>
They're *exactly* the same characters (they're canonically equivalent). See also <URL: http://www.w3.org/TR/charmod/ >
*** Bug 72670 has been marked as a duplicate of this bug. ***
*** Bug 72984 has been marked as a duplicate of this bug. ***
Reassigning to harishd.
Assignee: clayton → harishd
This bug has been marked "future" because the original netscape engineer working on this is over-burdened. If you feel this is an error, that you or another known resource will be working on this bug,or if it blocks your work in some way -- please attach your concern to the bug for reconsideration -----
Status: NEW → ASSIGNED
Target Milestone: --- → Future
I think this is a duplicate of bug 59743.
*** This bug has been marked as a duplicate of 47078 ***
Status: ASSIGNED → RESOLVED
Last Resolved: 17 years ago
Resolution: --- → DUPLICATE
Status: RESOLVED → VERIFIED
This resource, from the Royal Society of Chemistry, is about protecting astronauts from the effects of harmful UV light.
Students can experiment with different materials to discover which blocks UV light the best, and can inform the Royal Society of Chemistry about their results to receive a mission completion certificate and patches, and to unlock a secret video featuring Tim Peake.
The global experiment booklet contains an overview of the global experiment, a guide to the experiments and the experiments themselves.
Health and safety information
Please be aware that resources have been published on the website in the form that they were originally supplied. This means that procedures reflect general practice and standards applicable at the time resources were produced and cannot be assumed to be acceptable today. Website users are fully responsible for ensuring that any activity, including practical work, which they carry out is in accordance with current regulations related to health and safety and that an appropriate risk assessment has been carried out.
Subject(s): Science, Practical work, Enquiries and investigations, Chemistry
Age: 7-11, 11-14, 14-16
Published: 2010 to date
This resource is part of these collections
- Airbus Foundation Discovery Space - Key Stage 2
- Mission: Starlight | <urn:uuid:fa30bba8-b394-408f-865c-c2015b440ab4> | 3.078125 | 262 | Content Listing | Science & Tech. | 15.637061 | 95,576,204 |
This article discusses how to validate information you get from users — that is, how to make sure that users enter valid information in HTML forms in an ASP.NET page. If you ask users to enter information in a page — for example, into a form — it's important to make sure that the values they enter are valid.
For example, you don't want to process a form that's missing critical information.
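The required-field check described here can be sketched generically. The article itself concerns ASP pages; the standalone Python below and its field names are only an illustration of the idea:

```python
def validate_form(fields, required=("name", "email")):
    """Return a dict of error messages for missing required fields.
    `fields` is the submitted form data as a dict; the required
    field names are illustrative, not taken from the article."""
    errors = {}
    for name in required:
        value = fields.get(name, "").strip()
        if not value:
            # Record one message per missing field.
            errors[name] = "%s is required." % name
    return errors
```

A form would only be processed when `validate_form(...)` returns an empty dict.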
One of the most important factors affecting inotropic state is the level of calcium in the cytoplasm of the muscle cell.
Positive inotropes usually increase this level, while negative inotropes decrease it.
RDF Parser / DAML API / Persistence / RDF Query: The DAML API is a collection of Java interfaces and utility classes that implements an interface for manipulating DAML ontologies.
Since our last blog post, we've carried out an x-ray diffraction experiment with one of our protein crystals. We were lucky that the protein crystal yielded high quality diffraction data, and from this data we were able to solve the first-ever crystal structure of a protein designed by Foldit players—a near-exact match to the designed structure! Below we explain a bit more about x-ray diffraction. In a later post, we'll examine the final structure in more detail.
First, the protein crystal is harvested from the drop using a small loop of nylon, about 0.3 mm across. Protein crystals are often very fragile, so looping the crystal requires a steady hand (i.e. optimal coffee dosage). Even in the loop, the crystal is still immersed in an aqueous solution, with the surface tension of the water helping to keep the crystal in the loop. The loop is rapidly submerged in liquid nitrogen, at a temperature of about -200ºC, which quenches most of the thermal motion of molecules in the crystal.
Once frozen, our looped crystal is mounted on a robotic arm that positions the loop in the path of an x-ray beam. During x-ray exposure, the crystal is kept under a steady stream of cold nitrogen gas to limit temperature increases in the crystal. X-rays have a high energy, and a protein crystal can only endure so much exposure to x-rays before it starts to degrade. The protein lattice could disintegrate from the increased thermal motion of individual protein molecules, or else the x-rays could trigger chemical reactions within the protein, distorting its structure.
X-rays are simply a type of electromagnetic radiation with a very short wavelength—in this case about one angstrom. In an x-ray diffraction experiment, it's important that all radiation has exactly the same wavelength and is focused into a very narrow beam. With our crystal mounted in the path of the x-ray beam, an x-ray detector is positioned behind the crystal, and measures incident x-rays after they strike the crystal and are diffracted by electrons of the protein molecules within. Because of the regular arrangement of atoms in the protein crystal, diffracted x-rays undergo constructive interference in particular directions. This occurs when equivalent "slices" of the crystal are spaced and oriented so that x-rays scattered from successive slices differ in path length by a whole number of wavelengths. Wherever constructive interference occurs, the detector registers an especially intense signal, shown as a dark spot on the image below. Taken together, these spots comprise a diffraction pattern.
Above is an x-ray diffraction pattern from a protein crystal. In the inset at the right, we can see that some spots seem to have duplicates which are slightly offset. This indicates that there are actually two identical crystals in the path of the x-ray, lying in slightly different orientations. Most likely, the crystal cracked in two during freezing. Fortunately, the image-processing software we use is sophisticated enough to correct for this issue.
The spacing and position of spots is governed by the size and shape of the crystal’s unit cell, the repeating unit that makes up the crystal. The intensity of each spot is determined by the distribution of electrons within the unit cell (i.e. the positions of atoms in the protein). Every atom of the unit cell contributes to each spot in the diffraction pattern. If you could change the electron density around just one atom of your crystallized protein, this would alter the intensity of every spot in the diffraction pattern!
Notice that spots farther from the center of the detector tend to be less intense. More distant spots contain higher resolution data about the electron density of the protein. If we adjust the contrast of this image, we can discern spots close to the edge of the detector. This protein diffracts x-rays to a resolution limit of 1.20 Å! In an electron density map derived from these diffraction patterns, we should be able to distinguish the positions of individual atoms.
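As a rough illustration of the geometry behind these spots, Bragg's law (n·λ = 2·d·sinθ, not spelled out in the post) relates the 1.20 Å resolution limit and the roughly one-angstrom beam to a diffraction angle. The exact wavelength below is an assumed round number:

```python
import math

def bragg_angle_deg(wavelength_A, d_spacing_A, n=1):
    """Return the Bragg angle theta (degrees) satisfying n*lambda = 2*d*sin(theta).
    A ~1 A wavelength matches the post's description; the exact value is assumed."""
    s = n * wavelength_A / (2.0 * d_spacing_A)
    if s > 1.0:
        raise ValueError("No diffraction possible: required sin(theta) > 1")
    return math.degrees(math.asin(s))

# Spots at the 1.20 A resolution limit lie at theta of roughly 24.6 degrees.
theta = bragg_angle_deg(1.0, 1.20)
```

At 0.5° of rotation per image, the 180° sweep described below corresponds to 360 diffraction patterns in a complete dataset.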
If the crystal is rotated relative to the x-ray beam, then we would observe another diffraction pattern, as the new orientation produces constructive interference in different directions. We typically measure a new diffraction pattern at rotation intervals of 0.5 degrees, eventually rotating the crystal a total of 180 degrees (sometimes less for highly-symmetric crystals) to collect a complete dataset. This dataset was collected with a state-of-the-art detector that can measure individual photons; collecting a full dataset takes no more than a few minutes. In the early days of protein crystallography, it could take a whole day to collect a complete dataset!
The processing and interpretation of these x-ray diffraction patterns is a complex, technical procedure, and we won't go into it here. But suffice it to say, this x-ray diffraction data revealed a full, high-resolution crystal structure of this Foldit player-designed protein!
Congratulations to Waya, Galaxie, and Susume who contributed to this solution in Puzzle 1297! All players should check out Puzzle 1384 to explore the refined electron density map from this data, and see if you can fold up the protein sequence into its crystal structure! We'll follow up later with a more detailed comparison of the designed model and the final crystal structure. (Posted by bkoep | Tue, 05/30/2017 - 04:59)
MLA Citation: Bloomfield, Louis A. "Question 834." How Everything Works. 16 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=834>.
To understand how this charge separation occurs, we must look at how crystals respond to stress. Many crystalline materials are microscopically asymmetric, meaning that their molecules form orderly arrangements that aren't entirely symmetric. To visualize such an arrangement, consider a collection of shoes: an orderly arrangement of left shoes can't be symmetric because a left shoe isn't its own mirror image—you can't build a fully symmetric system out of asymmetric pieces. Like left shoes, sucrose molecules (the molecules in table sugar) are asymmetric so that a crystal of sucrose is also asymmetric.
Whenever you squeeze a crystal, exposing it to stress, its electric charges rearrange somewhat. In a symmetric crystal, this microscopic rearrangement doesn't have any overall consequences. But in an asymmetric crystal such as sucrose, the microscopic rearrangement can produce a large overall rearrangement of electric charges and huge voltages can appear between different parts of the crystal. The most familiar such case is in the spark lighters for gas grills, where a stressed asymmetric crystal creates large sparks. In a Wint-O-Green Lifesaver, the large build-ups of charge cause small sparks that produce the light you see. | <urn:uuid:bebe4d5b-a620-4260-8e60-bed9f7f95302> | 3.46875 | 303 | Knowledge Article | Science & Tech. | 41.975606 | 95,576,255 |
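As a back-of-the-envelope illustration of the charge-to-voltage step described above, one can estimate the voltage that builds up when a piezoelectric charge Q = d·F appears across a small capacitance. The coefficient (a quartz-like value) and the capacitance below are illustrative assumptions, not measured values for sucrose:

```python
def piezo_voltage(force_N, d_pC_per_N=2.3, capacitance_pF=10.0):
    """Rough piezoelectric estimate: charge Q = d * F, then voltage V = Q / C.
    The coefficient (~2.3 pC/N, quartz-like) and the 10 pF capacitance are
    illustrative assumptions; sucrose's actual values differ."""
    charge_pC = d_pC_per_N * force_N
    return charge_pC / capacitance_pF  # pC / pF = volts

# Even a modest ~100 N crunch yields tens of volts in this toy model,
# enough to drive the tiny sparks described above.
v = piezo_voltage(100.0)
```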
SCIENTISTS have discovered a massive “pool” of underwater gas that is warming up the planet and could spark a climate catastrophe.
Experts from Queen Mary, University of London, found that microbes are generating a vast lake of methane in the tropical Pacific Ocean.
Sediment collected from the ocean floor, where there is very little oxygen, revealed how bugs are creating the largest region of marine methane on Earth.
Methane is a potent greenhouse gas with 30 times more heat-trapping power than carbon dioxide.
It follows the discovery of the terrifying “Jacuzzi of Despair” off the Gulf of Mexico which will kill anything that swims in it.
Atmospheric levels of methane have increased in the last few decades, partly because of human activity.
Scientists are keen to understand natural processes of methane production and consumption in order to assess the role played by humans.
Queen Mary scientists aboard the Royal Research Ship James Cook spent six weeks mapping the methane pool between Panama and Hawaii.
Their findings are published in the International Society for Microbial Ecology journal.
Dr Felicity Shelley, from the university’s School of Biological and Chemical Sciences, said: “The research is novel because it’s the first time anyone has successfully retrieved sediment from this part of the ocean and directly measured methane production using specialised equipment on board the research ship.
“It is important that we understand how microbes produce and consume this powerful greenhouse gas, especially in the oceans, where we currently understand very little.”
Comparative Protein Biochemistry - Enzymology & Molecular Parasitology

The two topics enzymology and molecular parasitology are combined in the current research projects outlined below. The main focus of the lab is to compare enzymes and protein machineries from different organisms, with an emphasis on unicellular parasites.

Why do we compare yeast and parasites? Apart from satisfying our curiosity, we analyze protein functions in order to figure out what exactly distinguishes parasitic protists from yeast or man at the molecular level. The more we know about parasites, the more treatment options will be discovered (even though biochemical research is just one part of such a process). Furthermore, the comparison of similar proteins from phylogenetically distant organisms (Fig. 1) is highly suited to decipher protein structure-function relationships and the molecular evolution of protein machineries. In brief, we try to answer the questions "what do certain proteins do" and "how do they work" for parasites, yeast and man.

Fig. 1: We work with proteins from baker's yeast (blue) and the parasites Plasmodium falciparum (red) and Leishmania tarentolae (green). These organisms are members of three different groups of eukaryotes. In contrast, many commonly used model organisms such as yeast, worms, flies, mice, etc. are all members of a single group (the opisthokonta). The figure was modified from Deponte.

Current research projects:
- The mitochondrial protein import machinery of parasitic protists: GFP-tagging sheds light on protein translocation
- ... and thiol-disulfide metabolism - new enzymes, new lessons
- The glyoxalase system of the malaria parasite Plasmodium falciparum
A new understanding of the microbes and viruses in the thawing permafrost in Sweden may help scientists better predict the pace of climate change.
While the United States is deeply divided on many issues, climate change stands out as one where there is remarkable consensus, according to Stanford research.
New research from the University of Guelph is dispelling a commonly held assumption about climate change and its impact on forests in Canada and abroad.
In a hurricane-proof lab miles down the Florida Keys, scientists coddle tiny pieces of coral from the moment they are spawned until they are just hearty enough to be separated into specimens equipped to survive in the wild.
The plethora of salamanders living in the southern Appalachian Mountains might be in less danger from the effects of global warming than previously believed, according to new research published Wednesday in Science Advances.
Researchers using satellite imaging have found much greater than expected deforestation since 2000 in the highlands of Southeast Asia, a critically important world ecosystem. The findings are important because they raise ...
Climate change predictions are not taking account of the full range of possible effects of rising carbon dioxide levels, researchers say.
A U.S. judge who held a hearing about climate change that received widespread attention ruled Monday that Congress and the president were best suited to address the contribution of fossil fuels to global warming, throwing ...
A new framework to understand how uneven the effects of a 1.5°C world are for different countries around the world has been published today in Geophysical Research Letters, led by researchers from the Environmental Change ...
Vegetation plays an important role in shaping local climate—just think of the cool shade provided by a forest or the grinding heat of the open desert. | <urn:uuid:641603eb-077f-477f-96d9-f9c6d7998233> | 2.921875 | 348 | Content Listing | Science & Tech. | 39.261164 | 95,576,285 |
Plastic solar cells are light, easy to install, and readily produced using a printer. Nevertheless, the processes that take place on the molecular scale during the production of organic solar cells are not yet entirely clear. Researchers from the Technical University of Munich (TUM) have now managed to observe these processes in real time. Their findings, which are published in the specialist journal Advanced
The solar modules that can be seen on the roofs of many houses mainly consist of the semiconductor silicon. They are heavy and consequently costly to secure on roofs. Moreover, they do not blend in very well with their surroundings.
Organic solar cells, which consist of organic molecules like those in plastic bags or cling film, are an alternative to these conventional solar cells. Organic solar cells are soluble and can therefore be produced using a printer. Since they are very thin and lightweight, these light-converting devices can be installed in a variety of different locations; furthermore, the color and shape of the solar cells can also be adjusted. One current disadvantage, however, is that the efficiency of organic photovoltaics has not yet reached that of silicon solar cells.
Processes at the nano level
One of the key parameters for harvesting more energy from the flexible solar cells is the arrangement of the molecular components of the material. This is important for the energy conversion because, as in the case of the “classic” solar cell, free electrons must be produced. To do this, organic solar cells need two types of material, one that donates electrons and another one that accepts them. The interface between these materials must be as large as possible to convert light into electricity. Up to now, it was not known exactly how the molecules align with each other during the printing process and how the crystals they form grow during the drying process. Like the pigments in printer ink, the molecules are initially contained in a solution.
“In order to be able to control the arrangement of the components, we need to understand what happens at the molecular level during the drying process,” explains Dr. Eva M. Herzig from the Munich School of Engineering (MSE) at TUM. To resolve such small structures inside a drying film with adequate time resolution presents an experimental challenge.
X-rays give insights into the process
Working in cooperation with the Lawrence Berkeley National Laboratory in the USA, Stephan Pröller, doctoral candidate at MSE, used X-rays to make the molecules and their processes visible during the printing of a plastic film. He identified different phases that unfold during the drying of the film.
Initially the solvent evaporates while the other materials stay in solution. This leads to an increase in the concentration of the plastic molecules in the wet film until the electron donor starts crystallizing. At the same time the electron acceptor starts to form aggregates. A fast crystallization process follows, pushing the aggregates of the electron acceptor closer together. At this stage the distance between the interfaces of the two materials is defined, which is closely related to efficiency. To systematically improve the solar cells, this step in the printing process needs to be controlled.
In the last stage optimizing processes within the individual materials are taking place, like the optimization of the packing of the crystals.
“The production speed also plays an important role,” explains Pröller. Although this pattern is preserved with faster drying processes, the aggregates and crystals formed by the materials influence the remainder of the structure formation so that slower structure formation has a more positive impact on the final efficiency.
The researchers would now like to use their insights into the processes to gain specific control over the arrangement of the materials using other parameters. These results could then be transferred to industrial production and help to optimize it. | <urn:uuid:3be53452-8597-4b81-9d75-6ef8280ffc1e> | 3.671875 | 761 | Truncated | Science & Tech. | 32.277432 | 95,576,316 |
Mercury in the Antarctic troposphere has a distinct chemistry and challenging long-term measurements are needed for a better understanding of the atmospheric Hg reactions with oxidants and the exchanges of the various mercury forms among air-snow-sea and biota. Antarctic mosses and lichens are reliable biomonitors of airborne metals and within a short time they can give useful information about Hg deposition patterns. Data summarized in this review show that although atmospheric Hg concentrations in the Southern Hemisphere are lower than those in the Northern Hemisphere, Antarctic cryptogams accumulate Hg at levels in the same range or higher than those observed for related cryptogam species in the Arctic, suggesting an enhanced deposition of bioavailable Hg in Antarctic coastal ice-free areas. In agreement with the newest findings in the literature, the Hg bioaccumulation in mosses and lichens from a nunatak particularly exposed to strong katabatic winds can be taken as evidence for a Hg contribution to coastal ecosystems by air masses from the Antarctic plateau. Human activities on the continent are mostly concentrated in coastal ice-free areas, and the deposition in these areas of Hg from the marine environment, the plateau and anthropogenic sources raises concern. The use of Antarctic cryptogams as biomonitors will be very useful to map Hg deposition patterns in coastal ice-free areas and will contribute to a better understanding of Hg cycling in Antarctica and its environmental fate in terrestrial ecosystems.
Title: Atmospheric chemistry of mercury in Antarctica and the role of cryptogams to assess deposition patterns in coastal ice-free areas
Type: 1.1 Journal article