|Scope: Global & Europe|
|Scientific Name:||Symphodus rostratus (Bloch, 1791)|
|Synonym:||Lutjanus rostratus Bloch, 1791|
|Red List Category & Criteria:||Least Concern ver 3.1|
|Reviewer(s):||Craig, M.T., Nieto, A., García, M. & Allen, D.J.|
This species is present throughout the Mediterranean Sea and the western part of the Black Sea, and there are no known major threats to its populations. Although there is no specific population information available, its populations are thought to be stable. This species is listed as Least Concern.
|Previously published Red List assessments:|
|Range Description:||This species is present throughout the whole Mediterranean basin and the western part of the Black Sea (Golani et al. 2006). It occurs at depths from one metre down to 30 metres (Louisy 2005).|
Native:Albania; Algeria; Bulgaria; Croatia; Cyprus; Egypt (Egypt (African part), Sinai); France (Corsica, France (mainland)); Gibraltar; Greece (East Aegean Is., Greece (mainland), Kriti); Israel; Italy (Italy (mainland), Sardegna, Sicilia); Lebanon; Libya; Malta; Monaco; Montenegro; Morocco; Romania; Slovenia; Spain (Baleares, Spain (mainland), Spanish North African Territories); Syrian Arab Republic; Tunisia; Turkey (Turkey-in-Asia, Turkey-in-Europe)
|FAO Marine Fishing Areas:|
Mediterranean and Black Sea
|Population:||This species is common and widespread in the Mediterranean Sea and it has also been recorded from the western part of the Black Sea (Louisy 2005). The current population trend of this species is stable.|
|Current Population Trend:||Stable|
|Habitat and Ecology:||This species lives mainly over Posidonia seagrass beds, though it is also present on algal-covered rocky reefs (Golani et al. 2006). It frequently occurs in large aggregations and feeds on small benthic organisms such as molluscs, crustaceans and echinoderms. Spawning takes place in spring, when the male builds and guards a nest of algae, in which one or more females lay adhesive eggs (Golani et al. 2006); the breeding season seems to be from March to June (Louisy 2005). The age at maturity of this species is one year, its longevity is about 3-4 years, and its maximum size is 13 cm (SL) or 14 cm (TL) (Louisy 2005, Golani et al. 2006).|
This is not a migratory species, and it tends to form aggregations year-round (Golani et al. 2006).
|Movement patterns:||Not a Migrant|
|Use and Trade:||This species may be sold for food when caught in local artisanal fisheries, where it is probably mainly used in fish soup.|
|Major Threat(s):||There are no known major threats to this species, although it may be utilized as food when caught in local artisanal fisheries. However, its shallow water seagrass and rocky reef habitats may be threatened with habitat degradation, as a result of urban and agricultural development and waste disposal along the adjacent coastlines, pollution (D. Pollard pers. comm. 2008) and the invasive introduced tropical algae Caulerpa taxifolia (Verlaque and Fritayre 1994, Villele and Verlaque 1995).|
There are no specific conservation measures in place for this species, although its distribution overlaps with several marine protected areas. Conservation actions are nevertheless needed to protect its seagrass and rocky algal reef habitats, including control of the invasive alga Caulerpa taxifolia to enable habitat restoration. Awareness-raising is needed to help prevent water pollution and habitat degradation and to support control of the invasive alga, and the relevant policies and regulations need to be strengthened.
More research is also needed regarding the species' population size, distribution and trends, life history and threats. Monitoring is also needed regarding population, harvest and habitat trends.
|Citation:||Pollard, D. 2014. Symphodus rostratus. The IUCN Red List of Threatened Species 2014: e.T187573A49025124. Downloaded on 16 July 2018.|
In transverse waves the motion of the disturbance is perpendicular to the direction of motion of the wave. Longitudinal waves propagate in the same direction as the motion of the disturbance of the medium.
Reflection off a sea wall: http://www.youtube.com/watch?v=PevRZAxDxZw
Big wave at beach: http://www.weather.com/blog/weather/8_21326.html
Tacoma narrows bridge: http://video.search.yahoo.com/search/video;_ylt=A2KLqIGXaSdPEn0AIxT7w8QF?p=tacoma+narrows+bridge&b=21&tnr=20
Standing Wave Characteristics (cont.) (source: www.cord.edu)
T = 2L/v (period of the fundamental standing wave)
fn = n(v/2L), n = 1, 2, 3, … (string fixed at both ends, or pipe open at both ends)
fn = n(v/4L), n = 1, 3, 5, … (pipe closed at one end)
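These formulas can be checked with a short Python sketch. The wave speed and length below are arbitrary example values, not numbers from the notes:

```python
# Harmonic frequencies for standing waves, using the formulas above.

def harmonics_fixed_both_ends(v, L, n_max=5):
    """f_n = n*v/(2L) for n = 1, 2, 3, ... (string fixed, or pipe open, at both ends)."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

def harmonics_closed_one_end(v, L, n_max=5):
    """f_n = n*v/(4L) for odd n = 1, 3, 5, ... (pipe closed at one end)."""
    return [n * v / (4 * L) for n in range(1, 2 * n_max, 2)]

v = 340.0   # m/s, typical textbook speed of sound in air
L = 0.5     # m, length of the medium
print(harmonics_fixed_both_ends(v, L))  # [340.0, 680.0, 1020.0, 1360.0, 1700.0]
print(harmonics_closed_one_end(v, L))   # [170.0, 510.0, 850.0, 1190.0, 1530.0]
```

Note that the closed pipe's fundamental is half that of the open pipe of the same length, and only odd harmonics appear.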
How do I measure each of these? For the fundamental standing wave, L = λ/2.
All of these terms are required for the Regents
Amplitude: The height of the wave from node to antinode (transverse waves), or the maximum pressure variation in a compressive wave. It is measured in the units that describe the wave (e.g., metres for displacement, pascals for pressure).
Wavelength: The distance traversed by a full cycle of the wave
Node: The “zero point” of the wave
Antinode: The extreme point of the wave (max or min amplitude)
Period: The time between successive waves
Frequency: The rate of occurrence of the wave (in Hertz or cycles / second)
f = 1/T, where T is the period.
On a wave graph, one full cycle spans the period (if the axis is time) or the wavelength (if the axis is distance).
They pass through each other without changing and keep on going.
(Have you ever crossed the beams of two flashlights to see what would happen?)
Two waves of similar frequency produce beats.
The beat frequency is the difference between them: f_beat = |f1 - f2|
Fundamental Frequency and Harmonics
The amount of diffraction increases as the wavelength becomes larger relative to the size of the opening or obstacle.
When a wave diffracts, its wavelength and frequency, and hence velocity, do not change.
posted by Suzie
I am in Calculus and am currently learning how to find the Area of a Surface of Revolution. I cannot understand what the surface of revolution (whether it's the x-axis, y-axis, or y=6) is. For example, I had a problem saying to use the washer method to find the volume of the solid generated by revolving the region bounded by the graphs of the equations y = x, y = 3, and x = 0 about the line y = 4. I don't understand what I am supposed to do. Sometimes I can get the problem right with the way my teacher explained it and sometimes I can't. Please help!
First of all, you have to visualize what the surface of revolution is. You get it by rotating a line (straight or curved) 360 degrees about an axis, like the outside of a vase or pot being made on a lathe or potter's wheel.
Draw yourself a figure with the three lines plotted. The region bounded by y = x, y = 3 and x = 0 is the triangle with corners at (0,0), (0,3) and (3,3); the lines y = x and y = 3 meet at (3,3).
When you rotate that region about the line y = 4, each vertical slice at position x sweeps out a washer. The outer radius is the distance from the axis down to the line y = x, so R = 4 - x, and the inner radius is the distance from the axis down to the line y = 3, so r = 4 - 3 = 1. The washer method then gives
V = pi * Integral[0 to 3] ((4-x)^2 - 1^2) dx = pi * [-(4-x)^3/3 - x] (evaluated from 0 to 3) = pi * [(-1/3 - 3) - (-64/3)]
I get the complete volume to be
V = 18 pi.
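If you want to double-check that result numerically, here is a small Python sketch using a midpoint-rule approximation (the function names and step count are my own choices, just for illustration):

```python
# Numerical check of the washer-method volume V = pi * Integral[0,3] ((4-x)^2 - 1^2) dx.
# Exact value worked out above: 18*pi. Uses only the standard library.
from math import pi

def washer_volume(outer_r, inner_r, a, b, n=100_000):
    """Midpoint-rule approximation of pi * Integral[a,b] (R(x)^2 - r(x)^2) dx."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        total += (outer_r(x) ** 2 - inner_r(x) ** 2) * dx
    return pi * total

V = washer_volume(lambda x: 4 - x, lambda x: 1.0, 0.0, 3.0)
print(V, 18 * pi)  # both approximately 56.5487
```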
Wind shear is a major factor that can keep a tropical cyclone "down," or unable to consolidate and intensify, because it batters the storm's circulation head-on. Strong wind shear has been battering Tropical Cyclone 18S for a couple of days and is expected to continue for the next couple of days.
An infrared look at Tropical Storm 18S by the AIRS instrument on NASA's Aqua satellite on Feb. 26, revealed wind shear continues to take its toll on the storm and keeps pushing its strongest (purple) precipitation away from the center of the storm. Credit: NASA JPL, Ed Olsen
On Feb. 26 at 1500 UTC (10 a.m. EST) Tropical Storm 18S was located about 1,000 nautical miles (1,151 miles/1,852 km) west-northwest of Learmonth, Australia, near 15.5 south and 98.0 east. TS18S still had maximum sustained winds near 35 knots (40.2 mph/64.8 kph) and was now moving to the south-southeast near 4 knots (4.6 mph/7.4 kph).
Satellite imagery shows that the main convection and thunderstorms are still being pushed away from the center of circulation from wind shear. Vertical wind shear has also caused the storm to stretch out making it difficult to find the center on satellite imagery. According to the Joint Typhoon Warning Center (JTWC), an analysis of the upper level winds showed that the tropical storm is in an area of strong (30-40 knot/34.5 to 46.0 mph/55.5 to 74.0 kph) easterly vertical wind shear.
Tropical Storm 18S is expected to drift under weak steering conditions for the next two days, after which time another weather system will push the storm eastward toward Western Australia.
Text Credit: Rob Gutro
Rob Gutro | EurekAlert!
Oxidization may refer to:
- Oxidation, a chemical reaction in which electrons are lost
- Beta oxidation, the process by which fatty acids are broken down in mitochondria and/or peroxisomes
This disambiguation page lists articles associated with the title Oxidization.
If an internal link led you here, you may wish to change the link to point directly to the intended article.
Laser sources operating near a wavelength of four microns are important for a broad range of applications that require power scaling beyond the state-of-the-art. The highest power demonstrated in this spectral region from a solid-state laser source is based upon nonlinear optical (NLO) conversion using the NLO crystal ZnGeP2 (ZGP). High-power operation in ZGP is known to be limited by thermal lensing. By comparing the figure of merit for thermal lensing in ZGP with other NLO crystal candidates, CdSiP2 (CSP) offers particularly significant advantages. However, as was the case with ZGP during its early development, the physics of observed crystal defects, and their relevance to power scaling, was not at first sufficiently understood to improve the crystal's characteristics as a NLO wavelength conversion element. During the past decade, significant progress has been made (1) with the first reported growth of large CSP crystals, (2) in understanding the crystal's characteristics and its native defects, (3) in improving growth and processing techniques for producing large, low-loss crystals, and (4) in demonstrating CSP's potential for generating high-power mid-infrared laser light. The paper will summarize this progress.
Nucleobases, also known as nitrogenous bases or often simply bases, are nitrogen-containing biological compounds that form nucleosides, which in turn are components of nucleotides, with all of these monomers constituting the basic building blocks of nucleic acids. The ability of nucleobases to form base pairs and to stack one upon another leads directly to long-chain helical structures such as ribonucleic acid (RNA) and deoxyribonucleic acid (DNA).
Five nucleobases—adenine (A), cytosine (C), guanine (G), thymine (T), and uracil (U)—are called primary or canonical. They function as the fundamental units of the genetic code, with the bases A, G, C, and T being found in DNA while A, G, C, and U are found in RNA. Thymine and uracil are identical excepting that T includes a methyl group that U lacks.
Adenine and guanine have a fused-ring skeletal structure derived of purine, hence they are called purine bases. Similarly, the simple-ring structure of cytosine, uracil, and thymine is derived of pyrimidine, so those three bases are called the pyrimidine bases. Each of the base pairs in a typical double-helix DNA comprises a purine and a pyrimidine: either an A paired with a T or a C paired with a G. These purine-pyrimidine pairs, which are called base complements, connect the two strands of the helix and are often compared to the rungs of a ladder. The pairing of purines and pyrimidines may result, in part, from dimensional constraints, as this combination enables a geometry of constant width for the DNA spiral helix. The A-T and C-G pairings function to form double or triple hydrogen bonds between the amine and carbonyl groups on the complementary bases.
In August 2011, a report based on NASA studies of meteorites suggested that nucleobases such as adenine, guanine, xanthine, hypoxanthine, purine, 2,6-diaminopurine, and 6,8-diaminopurine may have formed in outer space as well as on earth.
The origin of the term base reflects these compounds' chemical properties in acid-base reactions, but those properties are not especially important for understanding most of the biological functions of nucleobases.
At the sides of nucleic acid structure, phosphate molecules successively connect the two sugar-rings of two adjacent nucleotide monomers, thereby creating a long chain biomolecule. These chain-joins of phosphates with sugars (ribose or deoxyribose) create the "backbone" strands for a single- or double helix biomolecule. In the double helix of DNA, the two strands are oriented chemically in opposite directions, which permits base pairing by providing complementarity between the two bases, and which is essential for replication of or transcription of the encoded information found in DNA.
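The complementarity and antiparallel orientation just described can be made concrete with a few lines of code. A minimal Python sketch (the function and dictionary names are my own; for RNA, A pairs with U instead of T):

```python
# Watson-Crick base pairing: each base in one strand pairs with its
# complement in the antiparallel strand (A-T and G-C in DNA).

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read in the opposite (antiparallel) direction."""
    return "".join(DNA_COMPLEMENT[base] for base in reversed(strand.upper()))

print(reverse_complement("ATGCGT"))  # ACGCAT
```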
DNA and RNA also contain other (non-primary) bases that have been modified after the nucleic acid chain has been formed. In DNA, the most common modified base is 5-methylcytosine (m5C). In RNA, there are many modified bases, including those contained in the nucleosides pseudouridine (Ψ), dihydrouridine (D), inosine (I), and 7-methylguanosine (m7G).
Hypoxanthine and xanthine are two of the many bases created through mutagen presence, both of them through deamination (replacement of the amine-group with a carbonyl-group). Hypoxanthine is produced from adenine, xanthine from guanine, and uracil results from deamination of cytosine.
Modified purine nucleobases
These are examples of modified adenosine or guanosine.
Modified pyrimidine nucleobases
These are examples of modified cytosine, thymine or uridine.
A vast number of nucleobase analogues exist. The most common applications are used as fluorescent probes, either directly or indirectly, such as aminoallyl nucleotide, which are used to label cRNA or cDNA in microarrays. Several groups are working on alternative "extra" base pairs to extend the genetic code, such as isoguanine and isocytosine or the fluorescent 2-amino-6-(2-thienyl)purine and pyrrole-2-carbaldehyde.
In medicine, several nucleoside analogues are used as anticancer and antiviral agents. The viral polymerase incorporates these compounds with non-canonical bases. These compounds are activated in the cells by being converted into nucleotides; they are administered as nucleosides as charged nucleotides cannot easily cross cell membranes. At least one set of new base pairs has been announced as of May 2014.
- Callahan, M.P., Smith, K.E., Cleaves, H.J., Ruzicka, J., Stern, J.C., Glavin, D.P., House, C.H., and Dworkin, J.P. (11 August 2011). "Carbonaceous meteorites contain a wide range of extraterrestrial nucleobases". PNAS. doi:10.1073/pnas.1106493108. Retrieved 2011-08-15.
- Steigerwald, John (8 August 2011). "NASA Researchers: DNA Building Blocks Can Be Made in Space". NASA. Retrieved 2011-08-10.
- ScienceDaily Staff (9 August 2011). "DNA Building Blocks Can Be Made in Space, NASA Evidence Suggests". ScienceDaily. Retrieved 2011-08-09.
- BIOL2060: Translation
- "Role of 5' mRNA and 5' U snRNA cap structures in regulation of gene expression". Research. Retrieved 13 December 2010.
- Nguyen, T., Brunson, D., Crespi, C.L., Penman, B.W., Wishnok, J.S., and Tannenbaum, S.R. (1992). "DNA damage and mutation in human cells exposed to nitric oxide in vitro". Proc Natl Acad Sci U S A 89(7): 3030-3034.
- Malyshev, Denis A., Dhami, Kirandeep, Lavergne, Thomas, Chen, Tingjian, Dai, Nan, Foster, Jeremy M., Corrêa, Ivan R., and Romesberg, Floyd E. (2014). "A semi-synthetic organism with an expanded genetic alphabet". Nature 509: 385-388. doi:10.1038/nature13314. PMID 24805238.
ephemeris time (ET), astronomical time defined by the orbital motions of the earth, moon, and planets. The earth does not rotate with uniform speed, so the solar day is an imprecise unit of time. Ephemeris time is calculated from the positions of the sun and moon relative to the earth, assuming that Newton's laws are perfectly obeyed. It is used to calculate the future positions of the sun and the planets. By convention, the standard seasonal year is taken to be AD 1900 and to contain 31,556,925.9747 sec of ephemeris time. In 1984 ephemeris time was renamed terrestrial dynamical time (TDT or TT); also created was barycentric dynamical time (TDB), which is based on the orbital motion of the sun, moon, and planets. For most purposes they can be considered identical, since they differ by only milliseconds, and often therefore are referred to simply as dynamical time.
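As a quick arithmetic check of the figure quoted above (these lines are my own illustration; 86,400 seconds per day is the standard conversion):

```python
# The ephemeris second was defined so that the tropical year 1900 contains
# 31,556,925.9747 s; dividing by 86,400 s/day recovers the familiar
# ~365.2422-day year length.
SECONDS_PER_TROPICAL_YEAR_1900 = 31_556_925.9747
SECONDS_PER_DAY = 86_400

print(SECONDS_PER_TROPICAL_YEAR_1900 / SECONDS_PER_DAY)  # ~365.24219879
```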
"ephemeris time." The Columbia Encyclopedia, 6th ed.. . Encyclopedia.com. (July 22, 2018). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/ephemeris-time
"ephemeris time." The Columbia Encyclopedia, 6th ed.. . Retrieved July 22, 2018 from Encyclopedia.com: http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/ephemeris-time | <urn:uuid:4da29009-c1b5-446a-ae19-ffea6e2a04f7> | 3.96875 | 317 | Knowledge Article | Science & Tech. | 44.707509 | 95,532,302 |
In oblique collision settings, parallel and perpendicular components of the relative plate motion can be partitioned into different structures of deformation and may be localized close to the plate boundary, or distributed over a wider region. In the Southern Alps of New Zealand, it has been proposed that one-third of the regional convergence is distributed in a broad area along the Southern Alps orogenic wedge. To better document and understand the regional dynamics of such systems, reliable markers of the horizontal tectonic motion over geological time scales are needed. River networks are able to record a large amount of distributed strain and they can thus be used to reconstruct the mode and rate of distribution away from major active structures. To explore the controls on river resilience to deformation, we develop an experimental model to investigate river pattern evolution over a doubly-vergent orogenic wedge growing in a context of oblique convergence. We use a rainfall system to activate erosion, sediment transport and river development on the model surface. At the end of the experiment, the drainage network is statistically rotated clockwise, confirming that rivers can record the distribution of motion along the wedge. Image analysis of channel time-space evolution shows how the fault-parallel and fault-perpendicular components of motion decrease toward the fault and impose rotation on the main trunk valleys. However, rivers do not record the whole imposed rotation rate, which suggests that natural lateral channel dynamics can alter the capacity of rivers to act as passive markers of deformation.
Porosity Measurement in Composites Using Ultrasonic Attenuation Methods
The measurement of porosity content in composites has been an area of interest to the NDE community. Theoretical and experimental work have related ultrasonic scattering to the amount of porosity in composites and metals. By monitoring the frequency dependence of the ultrasonic scattering, information concerning the amount of porosity in the material can be determined. The scattering of ultrasonic waves can be measured by monitoring the attenuation of the waves as they travel through a material. To accurately measure the attenuation associated with material properties such as porosity scattering, corrections must be made to the ultrasonic amplitude data. These corrections concern other ultrasonic loss mechanisms that are attributed to the measurement process, such as surface or boundary effects and transducer focus effects.
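As context for the amplitude-based measurement described here, a minimal sketch of the textbook attenuation calculation (the two-echo setup, function name and numbers are my illustrative assumptions; it deliberately omits the surface and transducer-focus corrections the authors discuss):

```python
# Apparent ultrasonic attenuation coefficient from two successive back-wall
# echoes through a plate: alpha = 20*log10(A1/A2) / (2*d), in dB per unit length.
from math import log10

def attenuation_db_per_mm(a1: float, a2: float, thickness_mm: float) -> float:
    """Attenuation from two successive back-wall echo amplitudes a1, a2."""
    return 20.0 * log10(a1 / a2) / (2.0 * thickness_mm)

print(attenuation_db_per_mm(1.0, 0.5, 10.0))  # ~0.301 dB/mm
```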
Keywords: Ultrasonic Wave; Composite Plate; Acoustic Impedance; Ultrasonic Attenuation; Ultrasonic Beam
- 3. R. F. Murphy, R. W. Reed, and R. Williams, Proceedings Nondestructive Testing and Evaluation of Advanced Materials and Composites Conference, Aug. 19-21, 1986, U.S. Air Force Academy, 135, NTIAC, NMCIAC, MCIC
- 4. R. F. Murphy and R. W. Reed, Proceedings 20th International SAMPE Technical Conference, Sept. 27-29, 1988, Minneapolis, MN, 481
- 5. H. J. Roth, Jl. of Appl. Phys., 19, 901 (1950)
- 6. J. Krautkramer and H. Krautkramer, "Ultrasonic Testing of Materials", Springer-Verlag, Berlin Heidelberg New York (1977)
- 7. R. F. Murphy, R. W. Reed, and T. J. Batzinger, "Multiparameter Ultrasonic Evaluation of Thick Composite Materials", Review of Progress in Quantitative Nondestructive Evaluation, 9B, 1473 (1990)
The study, which was led by Dr. Chen, is reported in Volume 52 of Science in China (G) because of its significant research value.
Many explosive phenomena on the Sun, such as solar flares, involve the energy conversion from the magnetic energy to thermal and kinetic energies in the corona, which is the outer atmosphere of the Sun. Therefore, the coronal magnetic field is extremely crucial in the understanding of these eruptive phenomena.
However, at present, only the magnetic field along the solar surface can be measured directly, whereas the magnetic field in the solar corona can hardly be measured. Despite some efforts of measuring through infrared spectral lines and of the inversion through radio emissions, the coronal magnetic field is generally approximated by extrapolating the magnetic field from the solar surface, which is however an ill-posed problem. Therefore, it would be great to have an alternative approach to diagnose the coronal magnetic field.
In 1997, the EUV Imaging Telescope (EIT in short) on board the European–US satellite, Solar and Heliospheric Observatory (SOHO), discovered an unexpected wavelike phenomenon propagating in the solar corona, which was later named "EIT waves" after the telescope. "EIT waves" were explained successfully to be apparently propagating density enhancements compressed by the successive stretching of magnetic field lines during coronal mass ejections (CMEs), the largest-scale eruptive phenomenon on the Sun.
According to this model, the "EIT waves" propagation velocity is intimately determined by the 3-dimensional distribution of the coronal magnetic field. Based on such an interesting property, Dr. Chen proposed recently that the profile of the "EIT wave" propagation velocity can be utilized to probe the coronal magnetic field.
Dr. Chen told the reporter: "You know, we can already diagnose the deep structure of the Earth by analyzing seismic waves. Similarly, we now can diagnose the magnetic field in the solar corona by analyzing EIT waves, which in some sense can be analogized as helioseismic waves." He commented that, in this sense, "EIT wave" observations open a new window for solar physicists to look into the mysterious magnetic field in the solar corona, and would help uncover the explosive nature of many explosive phenomena, including solar flares. As also commented by a reviewer, "This is an interesting paper describing the observations and modeling of EIT waves, and illustrating how they can be applied to probe the global magnetic field in the corona".
"EIT waves" were originally explained as the magnetoacoustic waves, i.e., sound waves coupled with the magnetic field. Such a model was also used to estimate the magnetic field in the low corona. However, the magnetoacoustic wave model cannot account for various characteristics of "EIT waves". To reconcile the discrepancies, Dr. Chen and his collaborators from China, USA, and Japan put forward the magnetic field-line stretching model since 2002, which has been widely recognized in the solar physics community. In this newly published paper, Dr. Chen demonstrated that it is feasible to diagnose the magnetic field in the solar corona using the observations of "EIT wave" velocity profiles.
With the application of the "EIT wave" diagnostics, the 3-dimensional distribution of the solar coronal magnetic field is expected to be revealed, which would finally help unveil the nature of solar flares and CMEs, the two major driving sources of hazardous space disturbances to human high-tech activities, including navigations, telecommunications, manned missions, etc.
Dr. P. F. Chen is working in Department of Astronomy, Nanjing University. The department is one of the lead groups of astronomy research in China. The research was sponsored by National Natural Science Foundation of China (Nos. 10403003 and 10673004) and the Key Project of Chinese National Programs for Fundamental Research and Development (2006CB806302).
References:
1. Chen P F. EIT waves and coronal magnetic field diagnosis. Sci China G-Phys Mech Astron, 2009, 52(11): 1785-1789
2. Chen P F, Wu S T, Shibata K and Fang C. Evidence of EIT and Moreton waves in numerical simulations. Astrophys J, 2002, 572: L99-L102. http://www.iop.org/EJ/abstract/1538-4357/572/1/L99/
3. Chen P F, Fang C and Shibata K. A full view of EIT waves. Astrophys J, 2005, 622: 1202-1210
P. F. Chen | EurekAlert!
News Release 06-003
Global Warming Can Trigger Extreme Ocean, Climate Changes
Scientists use deep ocean historical records to find an abrupt ocean circulation reversal
January 4, 2006
This material is available primarily for archival purposes. Telephone numbers or other contact information may be out of date; please see current contact information at media contacts.
Newly published research results provide evidence that global climate change may have quickly disrupted ocean processes and lead to drastic shifts in environments around the world.
Although the events described unfolded millions of years ago and spanned thousands of years, the researchers, affiliated with the Scripps Institution of Oceanography, say they provide one of the few historical analogs for warming-induced changes in the large-scale sea circulation, and thus may help to illuminate the potential long-term impacts of today's climate warming.
Writing in this week's issue of the journal Nature, scientists Flávia Nunes and Richard Norris explain that they probed a four- to seven-degree warming period that occurred some 55 million years ago during the closing stages of the Paleocene and the beginning of the Eocene eras. The unique data set they constructed, based on the chemical makeup of tiny ancient sea creatures, uncovered for the first time evidence of a monumental reversal in the circulation of deep-ocean patterns around the world. The researchers concluded that it was triggered by the global warming the world experienced at the time.
"The earth is a system that can change very rapidly," said Nunes. "Fifty-five million years ago, when the earth was in a period of global warmth, ocean currents rapidly changed direction and this change did not reverse to original conditions for about 20,000 years."
The global warming of 55 million years ago, known as the Paleocene/Eocene Thermal Maximum (PETM), emerged in less than 5,000 years, an instantaneous blip on geological time scales. Fossil records indicate that the PETM set in motion a host of important changes around the globe, ranging from a mass extinction of deep-sea bottom-dwelling marine life to key migrations of terrestrial mammal species, likely allowed by warm conditions that opened travel routes not possible under previously colder climates. For example, it is in this period that scientists find the earliest evidence for horses and primates in North America and Europe.
To obtain their data, Nunes and Norris analyzed carbon isotopes--chemical signatures that reveal a host of information--from the shells of single-celled animals called foraminifera, or "forams." Such organisms exist in a variety of marine environments, and their vast numbers per research sample allow scientists to uncover a range of details about the state of the seas.
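The carbon-isotope "chemical signatures" mentioned here are conventionally reported in delta notation, delta13C = (R_sample / R_standard - 1) * 1000 per mil, with R = 13C/12C. A minimal sketch of that standard background formula (the VPDB ratio and the sample value are standard or illustrative numbers, not figures from this release):

```python
# Delta notation for carbon isotope ratios, in per mil (parts per thousand).

R_VPDB = 0.0112372  # commonly cited 13C/12C ratio of the VPDB reference standard

def delta13c_permil(r_sample: float, r_standard: float = R_VPDB) -> float:
    """Per-mil deviation of a sample's 13C/12C ratio from the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

print(delta13c_permil(0.0112))  # about -3.3 per mil
```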
"A tiny shell from a sea creature living millions of years ago can tell us so much about past ocean conditions," said Nunes. "We know approximately what the temperature was at the bottom of the ocean. We also have a measure of the nutrient content of the water the creature lived in. And, when we have information from several locations, we can infer the direction of ocean currents."
In the study, the scientists looked at a foram named Nuttalides truempyi from 14 sites around the world in deep-sea sediment cores from the Deep Sea Drilling Project (DSDP) and the Ocean Drilling Program (ODP). The isotopes were used as nutrient "tracers" to reconstruct changes in deep-ocean circulation through the PETM period. Nutrient levels tell the researchers how long a sample has been near or isolated from the sea surface, thus giving them a way to track the age and path of deep-sea water.
The results indicate that deep-ocean circulation in the Southern Hemisphere abruptly stopped the conveyor belt-like process known as "overturning," in which cold and salty water in the depths exchanges with warm water on the surface. Even as it was virtually shutting down in the south, however, overturning apparently became active in the Northern Hemisphere. The researchers believe this shift drove unusually warm water to the deep sea, likely releasing stores of methane gas that led to further global warming and a massive die-off of deep-sea marine life.
Overturning is a fundamental component of the global climate conditions we know today, said Bil Haq, program director in the National Science Foundation (NSF)'s division of ocean sciences, which funded the research. For example, overturning in the modern North Atlantic Ocean is a primary means of drawing heat into the far north Atlantic and keeping temperatures in Europe relatively warmer than conditions in Canada, he said.
Today, "new" deep-water generation does not occur in the Pacific Ocean because of the large amount of freshwater input from the polar regions, which prevents North Pacific waters from becoming dense enough to sink to more than intermediate depths.
In the case of the Paleocene/Eocene, however, deep-water formation was possible in the Pacific Ocean because of global warming-induced changes. The Atlantic Ocean also could have been a significant generator of deep waters during this period.
Modern carbon dioxide input from fossil fuel sources to the earth's surface is approaching the same levels estimated for the PETM period, which raises concerns about future climate and changes in ocean circulation, say the scientists. Thus, they say, the Paleocene/Eocene example suggests that human-produced changes may have lasting effects not only on global climate, but on deep ocean circulation.
"Overturning is very sensitive to surface ocean temperatures and surface ocean salinity," said Norris. "The case described here may be one of the best examples of global warming triggered by the massive release of greenhouse gases. It gives us a perspective on what the long-term impact is likely to be of today's human-caused warming."
Cheryl Dybas, NSF, (703) 292-7734
Mario Aguilera, Scripps Institution of Oceanography, (858) 534-3624
Jon Corsiglia, Joint Oceanographic Institutions, Inc., (202) 787-1644
Nancy Light, IODP-Management International, (202) 465-7511
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2018, its budget is $7.8 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives more than 50,000 competitive proposals for funding and makes about 12,000 new funding awards.
Useful NSF Web Sites:
NSF Home Page: https://www.nsf.gov
NSF News: https://www.nsf.gov/news/
For the News Media: https://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: https://www.nsf.gov/statistics/
Awards Searches: https://www.nsf.gov/awardsearch/
By Rachel C. Evans, Peter Douglas, Hugh D. Burrow
Applied Photochemistry covers the major applications of the chemical effects resulting from light absorption by atoms and molecules in chemistry, physics, medicine and engineering, and includes contributions from specialists in these key areas. Particular emphasis is placed both on how photochemistry contributes to these disciplines and on what the current developments are.
The book starts with a general description of the interaction between light and matter, which provides the general background to photochemistry for non-specialists. The following chapters develop the general synthetic and mechanistic aspects of photochemistry as applied to both organic and inorganic materials, together with the types of materials that are useful as light absorbers, emitters, sensitisers, etc. for a wide variety of applications. A detailed discussion is presented on the photochemical processes occurring in the Earth's atmosphere, including important current issues such as ozone depletion. Important distinct, but interconnected, applications of photochemistry are the photocatalytic treatment of wastes and solar energy conversion. Semiconductor photochemistry plays an important role in both, and is discussed with reference to these two areas. Free radicals and reactive oxygen species are of great importance in many chemical, biological and medical applications of photochemistry, and are discussed in depth. The following chapters discuss the relevance of using light in medicine, both in various types of phototherapy and in medical diagnostics. The development of optical sensors and probes is closely related to diagnostics, but is also relevant to many other applications, and is discussed separately. Important aspects of applied photochemistry in electronics and imaging, through processes such as photolithography, are discussed, and it is shown how these are enabling the increasing miniaturisation of semiconductor devices for a wide variety of electronics applications and the development of nanometre-scale devices. The final chapters provide the basic ideas necessary to set up a photochemical laboratory and to characterise excited states.
This book is aimed at those in science, engineering and medicine who are interested in applying photochemistry in a broad spectrum of areas. Each chapter presents the basic theories and methods for its particular applications and directs the reader to the current, important literature in the field, making Applied Photochemistry suitable for both the novice and the experienced photochemist.
Similar optical engineering books
Optics clearly explains the principles of optics using excellent pedagogy to support student learning. Beginning with introductory ideas and equations, K. K. Sharma takes the reader through the world of optics, detailing problems encountered, advanced topics, and real applications. Elegantly written, this book rigorously examines optics with over 300 illustrations and several problems in each chapter.
This book attempts to provide a discussion of the physics and the current and potential applications of the self-focusing of an intense femtosecond laser pulse in a transparent medium. Although self-focusing is an old subject of nonlinear optics, the consequences of the self-focusing of intense femtosecond laser pulses are entirely new and unexpected.
In my career I've found that "thinking outside the box" works better if I know what's "inside the box." Dave Grusin, composer and jazz musician. Different people think in different time frames: scientists think in decades, engineers think in years, and investors think in quarters. Stan Williams, Director of Quantum Science Research, Hewlett Packard Laboratories. Everything can be made smaller, never mind physics; everything can be made more ef...
With his Ph.D. thesis, presented here in the format of a "Springer Theses", Paul Fulda won the 2012 GWIC thesis prize awarded by the Gravitational Wave International Committee. The impact of thermal noise on future gravitational wave detectors depends on the size and shape of the interrogating laser beam.
- Phase-Locked Loops for Wireless Communications: Digital, Analog and Optical Implementations
- Optical Metrology
- Nonlinear Optics
- Optical Fiber Sensor Technology: Advanced Applications - Bragg Gratings and Distributed Sensors
Additional resources for Applied Photochemistry
Applied Photochemistry by Rachel C. Evans, Peter Douglas, Hugh D. Burrow
The Moon and the Stars
|Time Required||Long (2-4 weeks)|
|Material Availability||Readily available|
|Cost||Very Low (under $20)|
|Safety||Use safety measures when in dark areas and use a flashlight when walking in the dark.|
Abstract
Everyone loves looking at the full moon, but are these nights the best time to go stargazing? Can the moon interfere with certain astronomical observations?
Objective
In this experiment you will investigate how the phase of the moon affects the number of visible stars in the night sky.
Last edit date: 2018-03-03
When you are in the city, only a few of the brightest stars are visible. But when you are in the country, you can see many more stars than you can count. Sometimes you can even see the bright belt of our galaxy, the Milky Way. Why is this so?
The lights of the city give off background lighting that block the light from all but the brightest of stars. This urban background lighting is called "light pollution", and can be a problem for urban observatories. But there is a form of natural light pollution from the moon, which is the second brightest object in the sky after the sun. The moon can sometimes be so bright, that it too can block out the light from dimmer stars.
The moon is so bright because it acts like a giant mirror in the night sky that reflects the light of the sun. You may have noticed that, depending upon the time of month, the moon looks different. Sometimes it is only a sliver, other times it is full, and other times it is only half-full. These are called "lunar phases" and are caused by the changing positions of the moon, Earth, and sun: as the moon orbits the Earth, we see a different fraction of its sunlit half from night to night.
The lunar phases have a very predictable cycle. In fact, for thousands of years humans have used the lunar phases to keep track of time. The lunar calendar is still used often in Chinese, Hebrew and Muslim cultures. The lunar calendar is also used as a tool to keep track of the phases of the moon by farmers, sailors, fishermen, oceanographers and astronomers.
Does the number of visible stars in the night sky change, depending upon the lunar phase? In this experiment you will count the number of stars in the sky during different phases of the moon. Will there be more stars during a full moon, quarter moon or new moon?
Terms and ConceptsTo do this type of experiment you should know what the following terms mean. Have an adult help you search the Internet, or take you to your local library to find out more!
- new moon
- full moon
- quarter moon
- lunar phase
- astronomical observation
- How many stars can I see from my area?
- Which phase of the moon is best for making astronomical observations?
- How is the lunar calendar useful to astronomers for scheduling experiments?
- Miller, K. and Miller, S. Phases of the Moon. College Park, MD: University of Maryland Astronomy Department. Retrieved April 21, 2014, from http://www.astro.umd.edu/resources/introastro/phases.html
- Weinrich, D. Counting Stars. Mangilao, Guam: UOG Planetarium. Retrieved April 21, 2014, from http://www.guam.net/planet/Docs/Star-Count.html
- Astronomical Applications Department. (n.d.). Phases of the Moon and Percent of the Moon Illuminated. U.S. Naval Observatory Astronomical Applications Dept., Washington, D.C. Retrieved April 21, 2014, from http://aa.usno.navy.mil/faq/docs/moon_phases.php
- Gardner, R. and Webster, D. 1987. Science in Your Backyard. New York, NY: Simon & Schuster, Inc.
- Bonnet, R.L. and Keen, G.D. 1992. Space and Astronomy: 49 Science Fair Projects New York, NY: McGraw-Hill Inc.
- Asimov, Isaac. 1990. Library of the Universe: Projects in Astronomy. Milwaukee, WI: Gareth Steven's Inc.
Materials and Equipment
- cardboard star counter (a toilet paper tube)
- a clear night without clouds or fog
- First you will need to consult a lunar calendar to find out when the moon will be in each phase so you can plan and schedule your data collections. A lunar calendar is available online from the U.S. Naval Observatory.
- On the lunar calendar, find the correct year and month you will be conducting your experiment. You will need to find the next date for each of the four primary lunar phases: New Moon, First Quarter, Full Moon and Last Quarter.
- Write down the dates of the next four primary lunar phases on your regular wall calendar so you will remember when to do your star counting. You may want to circle the four days on your calendar with a bright colored marker to help you remember, if you miss one you will need to wait an entire month to do it again!
- Prepare your notebook with a data table for your observations. You will need a data table for each lunar phase, including space to write a description of the moon and to perform any calculations:
|Lunar Phase:||Description of the Moon:|
|Count 1|Count 2|Count 3|Count 4|Count 5|Count 6|Count 7|Count 8|Count 9|Count 10|
|Sum of Counts:||Average Count:||Total Visible Stars:|
- You will need to pack your bag of supplies since you will be conducting this experiment in the field. Bring your cardboard star counter (toilet paper tube), a notebook, a pencil and a flashlight.
- On the night marked on your calendar, if the sky is clear, go out into your backyard to count the visible stars. Bring your cardboard star counter, a notebook, a pencil and a flashlight. If there are too many clouds, try again tomorrow night.
- Be sure to turn off all lights that can interfere with your counts, including porch lights and interior house lights. Use a flashlight to find your way to a safe place to count stars.
- When you find a safe comfortable location with good visibility, turn off your flashlight and allow your eyes to adjust to the light for a few minutes.
- Hold your counting tube up to your eyes and count all of the stars you see through the tube, being careful not to count any star twice. Write the number in your notebook.
- Repeat nine more times, moving your counting tube slightly to a new view of the sky each time. Write each number in your notebook. You should have ten different counts in your notebook.
- Add together the ten numbers, and then divide the sum by ten. This number will be your average number of visible stars in that area. Write this number in the data table.
- Next, calculate the total number of stars in the sky. The patch of sky you see through the tube is a known fraction of the whole sky, so multiply your average count by 104 to estimate the total (a worked sketch of this calculation appears after this list). Write this number in your data table. For an explanation of how the multiplier was derived, see Counting Stars by Dave Weinrich.
- Wait until the next phase of the moon, which will be about one week, and repeat steps 6–12. When you are done with your data collection, you should have 10 individual counts, one average count and one total count for each of the four quarterly lunar phases.
- Make a graph and analyze your data. Which nights had the most visible stars? The least? Compare the number of stars and the lunar phases, do you see a correlation? Which phase of the moon yielded the best counts? The worst? How do you think astronomers should use the lunar calendar to make their best observations?
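Here is the averaging-and-multiplier calculation from the steps above as a short Python sketch. The ten counts and the tube dimensions are made-up examples; the guide's factor of 104 comes from Weinrich's derivation for a particular tube size:

```python
# Average star count, total-sky estimate, and a solid-angle sanity check.
from math import pi

def sky_multiplier(tube_length_cm: float, tube_radius_cm: float) -> float:
    """Ratio of the full sky's solid angle (4*pi sr) to the tube's view cone."""
    tube_solid_angle = pi * tube_radius_cm**2 / tube_length_cm**2  # small-angle approx.
    return 4 * pi / tube_solid_angle

counts = [3, 5, 2, 4, 6, 3, 2, 5, 4, 6]   # ten counts from one night (invented)
average = sum(counts) / len(counts)        # the averaging step above
total_stars = average * 104                # the guide's multiplier
print(average, total_stars)                # 4.0, 416.0
print(sky_multiplier(11.0, 2.0))           # ~121 for an 11 cm tube of 2 cm radius
```

The multiplier you compute this way depends on the actual tube dimensions, which is why measuring your own tube is a worthwhile extension.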
If you like this project, you might enjoy exploring these related careers:
Astronomer
Astronomers think big! They want to understand the entire universe—the nature of the Sun, Moon, planets, stars, galaxies, and everything in between. An astronomer's work can be pure science—gathering and analyzing data from instruments and creating theories about the nature of cosmic objects—or the work can be applied to practical problems in space flight and navigation, or satellite communications.
- Another way to measure light is to use a light meter. A light meter will give a reading of the amount of light present. You can buy a light meter, or you may have one as an option in your camera. Take light readings at night during different phases of the moon. Do the changes in light values correlate with the numbers of visible stars during each lunar phase?
- Another source of light that can interfere with star gazing is light from the urban environment, called light pollution. Try a similar experiment using your star counter to see if the number of visible stars changes at different locations with different amounts of light pollution. Is there a difference between a busy street corner and a large open space? How can urban areas diminish the effects of light pollution?
- Other types of atmospheric pollution can interfere with astronomic observations as well. Do air pollution and smog effect the visibility of stars? Use your local newspaper to find the daily smog or air quality report for your area. Count the stars on a high, medium and low smog day to compare.
Space Weather Update: 02/11/2017
By Spaceweather.com, 02/11/2017
DISAPPOINTING COMET FLYBY: This weekend, Comet 45P/Honda-Mrkos-Pajdusakova is flying past Earth only 7.4 million miles away–the 8th-closest comet flyby of the Space Age. Unfortunately, the comet is invisible to the naked eye and even observers with telescopes are having trouble seeing it. After losing many of its volatile gases when it flew past the sun in December, the depleted comet is much dimmer than forecasters expected: photo gallery. Sky maps: Feb. 11, 12.
SNOW MOON PASSES THROUGH EARTH’S SHADOW: According to folklore, this weekend’s full Moon is called the “Snow Moon.” For northerners, it often feels like the brightest Moon of the year as moonlight glistens off the white February landscape. For a while on Friday night, the Snow Moon lost some of its luster when it passed, off center, through the shadow of our planet. Tom Bailey photographed the event from Urbandale, Iowa:
Note the darkening in the upper left quadrant of the lunar disk. That’s our planet’s shadow. When the Moon skims the edge of Earth’s shadow as it did on Friday night, astronomers call it a “penumbral lunar eclipse.” In this case, it was a double eclipse: “As I was shooting the eclipse, a jet passed between my location and the Moon,” says Bailey.
Observers on every continent except Australia witnessed the shadow. Browse the gallery for more sightings.
FUNNEL CLOUD: Note to photographers: When you see a funnel cloud reaching down out of a stormy sky, the correct response is usually Run! Brazilian photographer Helio C. Vital made a different choice. Click! He snapped this picture on Feb. 7th from Rio de Janeiro:
“The cloud appeared about a half hour before sunset,” says Vital. “It was part of a thunderstorm cell that was approaching, announcing the arrival of a new weather system that would bring rain to the city several hours later.”
Meteorologists call this type of cloud a “tuba” — a swirling mass of moist air that can hang down from an active thunderstorm. A tuba that touches the ground gets a new name: tornado. “Fortunately, in spite of its threatening appearance, this tuba did not reach the ground and no damage was reported,” says Vital.
Click! was the correct choice after all.
All Sky Fireball Network
Every night, a network of NASA all-sky cameras scans the skies above the United States for meteoritic fireballs. Automated software maintained by NASA’s Meteoroid Environment Office calculates their orbits, velocity, penetration depth in Earth’s atmosphere and many other characteristics. Daily results are presented here on Spaceweather.com.
On Feb. 11, 2017, the network reported 12 fireballs.
In this diagram of the inner solar system, all of the fireball orbits intersect at a single point–Earth. The orbits are color-coded by velocity, from slow (red) to fast (blue).
Near Earth Asteroids
Potentially Hazardous Asteroids (PHAs) are space rocks larger than approximately 100m that can come closer to Earth than 0.05 AU. None of the known PHAs is on a collision course with our planet, although astronomers are finding new ones all the time.
On February 11, 2017 there were 1773 potentially hazardous asteroids.
Recent & Upcoming Earth-asteroid encounters:
Notes: LD means “Lunar Distance.” 1 LD = 384,401 km, the distance between Earth and the Moon. 1 LD also equals 0.00256 AU. MAG is the visual magnitude of the asteroid on the date of closest approach.
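Those conversion factors are easy to check in a few lines. A quick sketch (plain Python; the 0.05 AU figure is the PHA threshold quoted above, everything else comes from this note):

```python
LD_KM = 384_401          # 1 lunar distance (LD) in kilometers, as defined above
AU_KM = 149_597_871      # 1 astronomical unit (AU) in kilometers

def ld_to_au(ld):
    """Convert lunar distances to astronomical units."""
    return ld * LD_KM / AU_KM

print(f"1 LD = {ld_to_au(1):.5f} AU")  # ~0.00257 AU (the note rounds to 0.00256)

# How many lunar distances is the 0.05 AU "potentially hazardous" threshold?
pha_limit_ld = 0.05 * AU_KM / LD_KM
print(f"0.05 AU = {pha_limit_ld:.1f} LD")  # ~19.5 LD
```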
Cosmic Rays in the Atmosphere
Readers, thank you for your patience while we continue to develop this new section of Spaceweather.com. We’ve been working to streamline our data reduction, allowing us to post results from balloon flights much more rapidly, and we have developed a new data product, shown here:
This plot displays radiation measurements not only in the stratosphere, but also at aviation altitudes. Dose rates are expressed as multiples of sea level. For instance, we see that boarding a plane that flies at 25,000 feet exposes passengers to dose rates ~10x higher than sea level. At 40,000 feet, the multiplier is closer to 50x. These measurements are made by our usual cosmic ray payload as it passes through aviation altitudes en route to the stratosphere over California.
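To estimate the multiplier at other cruise altitudes, one rough approach is to interpolate between the two quoted values on a log scale. A sketch (plain Python; only the 10x/25,000 ft and 50x/40,000 ft anchor points come from the paragraph above, and the interpolation scheme itself is an assumption):

```python
# Anchor points quoted above: (altitude in feet, dose rate as a multiple of sea level).
ALT_LO, MULT_LO = 25_000, 10.0
ALT_HI, MULT_HI = 40_000, 50.0

def dose_multiplier(alt_ft):
    """Log-linear interpolation between the two quoted altitudes (an assumption)."""
    frac = (alt_ft - ALT_LO) / (ALT_HI - ALT_LO)
    return MULT_LO * (MULT_HI / MULT_LO) ** frac

print(f"~{dose_multiplier(35_000):.0f}x sea-level dose rate at 35,000 ft")  # ~29x
```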
What is this all about? Approximately once a week, Spaceweather.com and the students of Earth to Sky Calculus fly space weather balloons to the stratosphere over California. These balloons are equipped with radiation sensors that detect cosmic rays, a surprisingly “down to Earth” form of space weather. Cosmic rays can seed clouds, trigger lightning, and penetrate commercial airplanes. Furthermore, there are studies ( #1, #2, #3, #4) linking cosmic rays with cardiac arrhythmias and sudden cardiac death in the general population. Our latest measurements show that cosmic rays are intensifying, with an increase of more than 12% since 2015:
Why are cosmic rays intensifying? The main reason is the sun. Solar storm clouds such as coronal mass ejections (CMEs) sweep aside cosmic rays when they pass by Earth. During Solar Maximum, CMEs are abundant and cosmic rays are held at bay. Now, however, the solar cycle is swinging toward Solar Minimum, allowing cosmic rays to return. Another reason could be the weakening of Earth’s magnetic field, which helps protect us from deep-space radiation.
The radiation sensors onboard our helium balloons detect X-rays and gamma-rays in the energy range 10 keV to 20 MeV. These energies span the range of medical X-ray machines and airport security scanners.
The data points in the graph above correspond to the peak of the Regener-Pfotzer maximum, which lies about 67,000 feet above central California. When cosmic rays crash into Earth's atmosphere, they produce a spray of secondary particles that is most intense at the entrance to the stratosphere. Physicists Erich Regener and Georg Pfotzer discovered the maximum using balloons in the 1930s and it is what we are measuring today.
Daily Sun: 11 Feb 17
AR2635 poses no threat for strong solar flares. Credit: SDO/HMI
Sunspot number: 18
What is the sunspot number?
Updated 11 Feb 2017
Spotless Days
Current Stretch: 0 days
2017 total: 11 days (25%)
2016 total: 32 days (9%)
2015 total: 0 days (0%)
2014 total: 1 day (<1%)
2013 total: 0 days (0%)
2012 total: 0 days (0%)
2011 total: 2 days (<1%)
2010 total: 51 days (14%)
2009 total: 260 days (71%)
Updated 11 Feb 2017
Current Auroral Oval:
Coronal Holes: 11 Feb 17
A stream of solar wind flowing from the indicated coronal hole will probably sail north of Earth this weekend, having little effect on our planet’s magnetic field. Credit: NASA/SDO.
Noctilucent Clouds: The southern season for noctilucent clouds began on Nov. 17, 2016. Come back to this spot every day to see the “daily daisy” from NASA’s AIM spacecraft, which is monitoring the dance of electric-blue noctilucent clouds around the Antarctic Circle.
| <urn:uuid:2fb0ab67-6671-4bc3-8fbf-8322635e0a7a> | 2.921875 | 1,689 | News (Org.) | Science & Tech. | 59.031071 | 95,532,375 |
the scattering of X-rays by the atoms of a crystal; the diffraction pattern shows the structure of the crystal
The scattering of X-rays by the regular lattice of atoms or molecules in a crystal.
The diffraction pattern so obtained.
U.S. National Library of Medicine
The scattering of x-rays by matter, especially crystals, with accompanying variation in intensity due to interference effects. Analysis of the crystal structure of materials is performed by passing x-rays through them and registering the diffraction image of the rays (CRYSTALLOGRAPHY, X-RAY). (From McGraw-Hill Dictionary of Scientific and Technical Terms, 4th ed)
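The quantitative rule behind those interference effects is Bragg's law, nλ = 2d sin θ. A small sketch (plain Python; the wavelength and angle are illustrative values, not taken from the definitions above) recovering the lattice plane spacing from a measured diffraction angle:

```python
import math

def lattice_spacing(wavelength_nm, theta_deg, order=1):
    """Solve Bragg's law n*lambda = 2*d*sin(theta) for the plane spacing d."""
    return order * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))

# Example: Cu K-alpha X-rays (~0.154 nm) diffracted at theta = 19.1 degrees.
d = lattice_spacing(0.154, 19.1)
print(f"lattice spacing d = {d:.3f} nm")  # ~0.235 nm
```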
"x-ray diffraction." Definitions.net. STANDS4 LLC, 2018. Web. 22 Jul 2018. <https://www.definitions.net/definition/x-ray diffraction>. | <urn:uuid:a9b0b4b3-4c4d-4c3e-80c4-617c69ee042c> | 3.5 | 331 | Structured Data | Science & Tech. | 46.297563 | 95,532,386 |
posted by Anonymous
The thrust of an airplane's engine produces a speed of 400 mph in still air. The wind velocity is given by
<-20,30>. In what direction should the plane travel to fly due south? Give your answer as an angle from due south.
The airplane's velocity through the air should be (20, -a), where a = sqrt(400^2 - 20^2) ≈ 399.5: the 20 mph eastward component exactly cancels the wind's -20, the magnitude stays at 400 mph, and the resulting ground velocity (0, 30 - 399.5) points due south. The heading, measured from due south, is arcsin(20/400) ≈ 2.9° toward the east.
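A quick numeric check of that answer (plain Python; it uses nothing beyond the numbers given in the problem):

```python
import math

airspeed = 400.0      # mph, speed produced in still air
wind = (-20.0, 30.0)  # mph, (east, north) components

# Cancel the wind's east-west component: plane_x + wind_x = 0.
plane_x = -wind[0]                              # 20 mph eastward
plane_y = -math.sqrt(airspeed**2 - plane_x**2)  # southward, keeps |v| = 400

ground = (plane_x + wind[0], plane_y + wind[1])      # resulting ground velocity
angle = math.degrees(math.asin(plane_x / airspeed))  # measured from due south

print(f"heading: {angle:.2f} degrees east of due south")       # ~2.87
print(f"ground velocity: ({ground[0]:.1f}, {ground[1]:.1f})")  # (0.0, -369.5)
```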
| <urn:uuid:2c313da1-7d87-46cd-ad32-87b1941d29fb> | 3.5 | 82 | Q&A Forum | Science & Tech. | 84.566086 | 95,532,388 |
STUDY: Antarctic Sea Ice Loss Driven By ‘Natural Variability,’ Not Global Warming
A series of strong storms late last year brought warm winds down to Antarctica that melted a South Carolina-sized chunk of sea ice every day, leading to the lowest sea ice coverage on record for the South Pole.
And it likely had nothing to do with man-made global warming, according to a new study published in the journal Geophysical Research Letters.
“There’s no indication this is anything but just natural variability,” John Turner, a climate scientist with the British Antarctic Survey, told the American Geophysical Union’s (AGU) blog Friday.
“It highlights the fact that the climate of the Antarctic is incredibly variable,” Turner said.
Turner and his colleagues found that while sea ice decline is an indicator of global warming, Antarctic sea ice decline in 2016 was likely caused by a series of Southern Ocean storms in the fall. Antarctic sea ice had actually been increasing up until this point, hitting record levels in late 2014.
Those storms brought warm air and strong winds to the South Pole that rapidly melted the surrounding sea ice. A strong El Nino also ramped up temperatures throughout 2016.
Scientists have struggled for years to find the signal of man-made warming in Antarctica, but the region still seems to be dominated by natural variability. South Pole sea ice, for example, is only about 3 feet thick and sensitive to strong winds.
Most climate models predicted South Pole sea ice would shrink as the planet warmed, much like Arctic sea ice. But the models were wrong, and sea ice grew since the satellite record began in the late 1970s.
“It is tempting to think that the 2016 low ice conditions may mark this turn toward decreasing ice, but that temptation is not warranted,” Walt Meier, a sea ice scientist at NASA who did not take part in the study, told AGU.
“It’s too soon to tell whether the low ice conditions are an ephemeral downturn or the start of something more long-term,” Meier said.
Turner said global warming could become more apparent as more greenhouse gases are emitted into the atmosphere, but right now, they can’t blame the fall storms in 2016 on human activity.
“This doesn’t mean that climate change isn’t happening, just that, at least through 2015 for Antarctic sea ice, the climate change signal could not be distinguished from natural variability,” Meier said.
Scientists worry global warming will cause the rapid melting of Antarctica’s glaciers, which would increase the rate of sea level rise. Sea level rise has been pretty consistent for the last hundred years or so.
Even so, The New York Times recently likened potential sea level rise from Antarctic glaciers to Biblical-level floods.
“I don’t think the biblical deluge is just a fairy tale,” Terence Hughes, a retired glaciologist told NYT. “I think some kind of major flood happened all over the world, and it left an indelible imprint on the collective memory of mankind that got preserved in these stories.”
A 2015 NASA study found Antarctica’s ice sheet increased in mass from 1992 to 2008. The study found ice gains in Eastern Antarctica more than offset ice loss from melting glaciers in the west.
Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact email@example.com. | <urn:uuid:9464216d-8784-449b-8836-a5fc5bab9cc1> | 3.1875 | 744 | Truncated | Science & Tech. | 45.364605 | 95,532,412 |
Self-modulated wakefield accelerators
Plasma waves operate as energy transformers that convert the energy from the driver to a trailing bunch of accelerated particles. Proton bunches at the Large Hadron Collider (LHC), CERN, are the most energetic charged particle bunches ever produced in a particle accelerator. As a result, they could be used to accelerate electrons and positrons to the energy frontier in plasmas, in distances orders of magnitude smaller than conventional accelerators (A. Caldwell et al, Nature Physics, 5, 363 – 367 (2009)).
Unlike all previous plasma-based acceleration experiments performed to date, which employed drivers with lengths comparable to or shorter than the plasma wavelength, LHC proton bunches at CERN can be hundreds of plasma wavelengths long. The physics of a plasma accelerator driven by such long proton bunches differs dramatically from any experiment performed to date.
Long proton (or electron or positron) bunches are subject to the self-modulation instability. This instability results from an unstable coupling between the long driver and the plasma and leads to the formation of a train of smaller particle bunches separated (nearly) by the plasma wavelength. These bunches can excite large amplitude plasma waves that are suitable to accelerate electrons and/or positrons to high energies. Follow the links below to find out more about my involvement in self-modulation experiments performed in Europe and in the United States.
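The beamlet spacing is set by the plasma wavelength λp = 2πc/ωp, which depends only on the plasma density. A short sketch (Python, assuming SciPy is installed; the density is an illustrative value in the range discussed for such experiments, not a quoted machine parameter):

```python
import math
from scipy.constants import c, e, epsilon_0, m_e

def plasma_wavelength(n_e):
    """Plasma wavelength lambda_p = 2*pi*c / omega_p for electron density n_e (m^-3)."""
    omega_p = math.sqrt(n_e * e**2 / (epsilon_0 * m_e))  # plasma frequency (rad/s)
    return 2 * math.pi * c / omega_p

n_e = 7e20  # m^-3, an illustrative density for a proton-driven wakefield stage
print(f"plasma wavelength: {plasma_wavelength(n_e) * 1e3:.2f} mm")  # ~1.26 mm
# A ~10 cm proton bunch would then span dozens of plasma wavelengths,
# which is exactly the regime where self-modulation matters.
```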
A large international collaboration has been setup (the AWAKE collaboration) in order to design, plan, and execute a proof-of-principle experiment, planned to occur during the next 4 years.
This experiment is a first step towards the energy frontier with plasma acceleration. It is challenging from a technical, experimental and physics point of view. It requires an integrated effort combining experimental and theoretical advances, complemented by high-fidelity plasma simulations.
I have been collaborating with scientists from Max Planck Institute in Munich (Germany) (Prof. Allen Caldwell and Dr. Patric Muggli) and the Budker Institute of Nuclear Physics (Prof. Konstantin Lotov) in order to maximize chances for a successful endeavor.
Simplified layout of the AWAKE experiment
E-209 collaboration (J. Vieira et al PoP 9, 063105 (2012)).
The AWAKE experiment at CERN is a medium-long term experiment. The physics can be tested right away, taking advantage from currently available un-compressed electron and positron bunches at SLAC FACET.
The study of the self-modulation instability is the main goal of the E-209 experiment. Self-modulation occurs when the length of a particle bunch (or laser pulse) is much larger than the plasma wavelength. The self-consistent interaction with the plasma leads to the formation of a sequence of shorter beamlets (see Figure below).
Each beamlet is separated by the plasma wavelength, leading to resonant amplification of plasma waves from the front towards the tail of the beam. Resonant amplification of the plasma wave amplitude works like a harmonic oscillator (the plasma wave) being driven externally at its natural frequency.
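That analogy is easy to see numerically: an undamped oscillator driven exactly at its natural frequency grows in amplitude roughly linearly, one kick per cycle, much as each beamlet reinforces the wake. A toy sketch (plain Python; all parameters are arbitrary unit values):

```python
import math

# Undamped oscillator x'' + w0^2 x = F cos(w t), driven exactly on resonance (w = w0).
w0 = w = 1.0
F, dt = 0.1, 0.001
x, v, t = 0.0, 0.0, 0.0

for cycle in range(1, 6):
    peak = 0.0
    # Integrate one drive period with semi-implicit Euler steps.
    for _ in range(int(2 * math.pi / (w * dt))):
        v += (-w0**2 * x + F * math.cos(w * t)) * dt
        x += v * dt
        t += dt
        peak = max(peak, abs(x))
    print(f"cycle {cycle}: peak amplitude ~ {peak:.2f}")
# The peaks grow roughly linearly with cycle count: each period of the drive
# adds energy in phase, just as each beamlet adds coherently to the plasma wave.
```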
There is much physics to explore, including the role of the background plasma ion motion and the role of competing instabilities.
Additional info can also be found here.
A self-modulated particle beam consists of a train of smaller beamlets. These beamlets resonantly excite a plasma wave that grows from the head to the tail of the beam. | <urn:uuid:78383e56-0cf1-4374-80b3-74bc080f13e3> | 2.890625 | 735 | About (Pers.) | Science & Tech. | 30.548026 | 95,532,415 |
Synthetic Yeast 2.0
The global Sc2.0 team has built five new synthetic yeast chromosomes, meaning that 30 percent of S.cerevisiae’s genetic material has now been swapped out for engineered replacements. This is one of several findings of a package of seven papers published March 10 as the cover story for Science.
An international team of more than 200 authors produced the latest work from the Synthetic Yeast Project (Sc2.0). By the end of this year, this international consortium hopes to have designed and built synthetic versions all 16 chromosomes – the structures that contain DNA – for S. cerevisiae.
Like computer programmers, scientists add swaths of synthetic DNA to – or remove stretches from – human, plant, bacterial or yeast chromosomes in hopes of averting disease, manufacturing medicines, or making food more nutritious. Yeast has long served as an important research model because their cells share many features with human cells, but are simpler and easier to study.
“This work sets the stage for completion of designer, synthetic genomes to address unmet needs in medicine and industry,” says Jef Boeke, the Sc2.0 project director. “Beyond any one application, the papers confirm that newly created systems and software can answer basic questions about the nature of genetic machinery by reprogramming chromosomes in living cells.”
In March 2014, Sc2.0 successfully assembled the first synthetic yeast chromosome (synthetic chromosome 3 or synIII) comprising 272,871 base pairs, the chemical units that make up the DNA code. The new round of papers consists of an overview and five papers describing the first assembly of synthetic yeast chromosomes synII, synV, synVI, synX, and synXII. A seventh paper provides a first look at the 3D structures of synthetic chromosomes in the cell nucleus which mimic their native counterparts with remarkable fidelity.
Many technologies developed in Sc2.0 serve as the foundation for GP–write, a related initiative aiming to synthesize complete sets of human and plant chromosomes (genomes) in the next ten years. GP-write will hold its next meeting in New York City on May 9-10, 2017; please visit this site for more information.
To begin synthesizing a yeast chromosome, researchers must first plan thousands of changes, some of which empower them to move around pieces of chromosomes in a kind of fast, high-powered evolution. Other changes remove stretches of DNA code that past efforts found unlikely to have a functional role. These edits are planned in the BioStudio software, developed by a team at Johns Hopkins led by Joel Bader.
With the edits made, the team starts to assemble edited, synthetic DNA sequences into ever larger chunks, which are finally introduced into yeast cells, where cellular machinery finishes building the chromosome. A major innovation captured in the current round of papers involves this last step.
Previously, researchers were required to finish building one piece of a chromosome before they could start work on the next. Sequential requirements are bottlenecks, says Boeke, which slow processes and increase cost. The current round of papers features several efforts to “parallelize” the assembly of synthetic chromosomes.
Labs around the globe each synthesized different pieces in strains of yeast that were then mated (crossed) to quickly yield thriving yeast, not just with an entire synthetic chromosome, but in some instances with more than one. Specifically, a paper led by author Leslie Mitchell, PhD, a post-doctoral fellow from Boeke’s lab at NYU Langone, described the construction of a strain containing three synthetic chromosomes.
“Steps can be accomplished at the same time in many locales and then assembled at the end, like networking laptops to create a global super computer,” says Mitchell.
Along the way, the global team honed a number of innovations and came to understand yeast biology better. A team at Tsinghua University, for instance, led an effort in which six teams built synthetic chromosome XII (synXII) in pieces, which were then assembled into a final molecule more than a million base pairs (a megabase) in length. This largest synthetic chromosome to date is still about 1/3,000 of what would be needed to build a human genome molecule, so new techniques will be needed.
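That 1/3,000 figure is straightforward arithmetic on genome sizes. A sketch (plain Python; the ~3.1 gigabase human genome size is a standard round figure, not a number from the papers):

```python
syn_xii_bp = 1_000_000           # synXII: roughly one megabase
human_genome_bp = 3_100_000_000  # haploid human genome, ~3.1 gigabases (assumed)

print(f"human genome / synXII ~ {human_genome_bp / syn_xii_bp:,.0f}")  # ~3,100
```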
In addition, experiments demonstrated that drastic changes can be made to the genomes of yeast without killing them, says Boeke. Yeast strains, for instance, survived experiments where sections of DNA code were moved from one chromosome to another, or even swapped between yeast species, with little effect. Genetically pliable (plastic) organisms make good platforms for the dramatic engineering that may be needed for future applications. The search for differences between the wild type and synthetic chromosomes was taken to new heights by the BGI/Edinburgh effort on synII, which used a “Transomic” approach to deeply profile the DNA, RNA protein and even metabolomics and phenomics, and confirm the “yeastiness” of the altered strain and its chromosomes.
The package of seven newly published had authors from ten universities in several countries, including the US (NYU Langone, Johns Hopkins), China (Tsinghua, Tianjin), France (Institut Pasteur, Sorbonne Universités), and Scotland (Edinburgh); along with authors from key industry partners: BGI, the leading Chinese genomics company, and US/China-based Genscript.
Led by the School of Chemical Engineering and Technology at Tianjin University in China, the paper describing the synthesis of synV is noteworthy in that it was done by undergraduate students as part of "Build-a-Genome China", a class first taught in the United States at Johns Hopkins, where Boeke worked before coming to NYU Langone. This is part of an emerging global network of "chromosome foundries," says Boeke, "which is building the next generation of synthetic biologists along with chromosomes." The Tianjin group also notably completed two chromosomes, and developed powerful methods for "debugging" errors found in synthetic chromosomes.
In addition to Boeke and Mitchell, lead organizers for the current studies included Ying-Jin Yuan of Tianjin and Junbiao Dai of Tsinghua University, Joel Bader from Johns Hopkins, Romain Koszul at the Institut Pasteur, Yizhi Cai at the University of Edinburgh, and Huanming Yang at BGI. The US studies were supported principally by the National Science Foundation. Other key funding sources were the China National High Technology Research and Development Program, the National Science Foundation of China, the Chinese Ministry of Science and Technology, the UK Biotechnology and Biological Sciences Research Council, and ERASynBio.
This article has been republished from materials provided by SAVI. Note: material may have been edited for length and content. For further information, please contact the cited source.
| <urn:uuid:444967d3-1acb-463f-8f93-acedd55728a2> | 2.65625 | 1,607 | News Article | Science & Tech. | 31.149977 | 95,532,423 |
Human contribution to hottest year
Five separate studies have concluded that human induced climate change was a contributing factor in Australia recording its hottest year on record in 2013.
EMMA ALBERICI, PRESENTER: 2013 was Australia's hottest year on record and now scientists have put a finger on a common cause.
Five separate studies have concluded that human-induced climate change was a contributing factor to last year's extreme heat.
Jason Om reports.
JASON OM, REPORTER: 2013 was a year for smashing Australia's records.
Soaring temperatures created the hottest summer day and warmest winter day, as well as the hottest summer and spring, according to the Bureau of Meteorology.
SOPHIE LEWIS, EARTH SCIENCES, ANU: After an extreme event occurs, everyone naturally questions whether there was a human influence on that event or whether that arose purely from natural variations.
JASON OM: Scientists from Australia and overseas have tested that very question in five studies published by the American Meteorological Society. While each research team looked at different aspects and took various approaches to 2013, there was a common theme.
DAVID KAROLY, ATMOSPHERIC SCIENCES, UNI. OF MELBOURNE: They've all come to very similar conclusions: that climate change due to increases in greenhouse gases is already affecting the heatwaves and extreme temperatures across Australia in 2013.
JASON OM: Joint research by David Karoly and Sophie Lewis used climate modelling to compare the factors of natural variability and human activity. They found human-induced climate change was a major factor for the hottest summer and spring on record.
SOPHIE LEWIS: We found it was virtually impossible for those temperatures that we experienced in 2013 to occur without greenhouse gases.
JASON OM: 2013 was also a year of drought and one study led by the University of NSW found the lack of rainfall was another factor.
As the climate warms, the research says, droughts will become more severe.
DAVID KAROLY: In the future we're going to see increasing global temperatures, increasing temperatures across Australia and even more hot extremes.
JASON OM: It's a warning that's becoming all too frequent.
Jason Om, Lateline.
Author Jason Om | <urn:uuid:312076e2-ba99-4007-a7ff-562823e5fbdd> | 3.34375 | 516 | Truncated | Science & Tech. | 36.43397 | 95,532,447 |
Smalltalk/V: Tutorial and Programming Handbook
Publisher: Digitalk, Inc 1988
Number of pages: 571
Smalltalk is both a powerful language -- you can get a lot of activity out of a few lines of code -- and a powerful program development environment -- software utilities help you to reuse as many lines of pre-written code as possible. This book is intended both for people who have never used Smalltalk and for experienced Smalltalk programmers.
Download or read it online for free here:
by W. R. Lalonde, J. R. Pugh - Prentice-Hall
An intro to the object-oriented programming language Smalltalk-80, with an emphasis on classes, subclassing, inheritance and message passing. The Smalltalk language is fully explained, as well as the class library and programming environment.
by Adele Goldberg, David Robson - Addison-Wesley
The book is an overview of the concepts and syntax of the programming language, and an annotated and illustrated specification of the system's functionality. The book gives an example of the design and implementation of a moderate-size application.
by Canol Goekel - Lulu.com
This book tries a different approach for teaching introductory computer programming than most other books by choosing Smalltalk as the programming language. A language which is mature and powerful yet not as widely used as some popular alternatives.
by Stephane Ducasse, at al.
Pharo is a development environment for the classic Smalltalk-80 programming language. This book, intended for both students and developers, will guide you through the Pharo language and environment by means of a series of examples and exercises. | <urn:uuid:b92835f3-ebd0-4263-bb10-f2ec6b76b186> | 2.71875 | 342 | Product Page | Software Dev. | 37.663474 | 95,532,448 |
A team led by the Department of Energy's Oak Ridge National Laboratory has uncovered how certain soil microbes cope in a phosphorus-poor environment to survive in a tropical ecosystem. Their novel approach could be applied in other ecosystems to study various nutrient limitations and inform agriculture and terrestrial biosphere modeling.
Phosphorus is a critical nutrient for global biological processes, such as collecting the sun's energy during photosynthesis and degrading plant debris and soil organic matter. Most tropical ecosystems endure long-term weathering that leaches phosphorus from soil.
An Oak Ridge National Laboratory-led research team found genes for production of phytase enzymes that would be released by tropical soil microbes. Phytase enzymes will attack phytate molecules, releasing much needed phosphate molecules for the microbes' survival. The soil samples were collected in phosphorus-rich and -poor experimental sites at the Smithsonian Tropical Research Institute in the Republic of Panama.
Credit: Melanie Mayes and Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy
The ORNL-led team set out to discover how soil microbial communities respond to the lack of phosphorus and other nutrient deficiencies at the molecular level.
They collected soil samples at the Smithsonian Tropical Research Institute in the Republic of Panama, an experimental field site with phosphorus-rich plots and unfertilized control plots.
"This was the perfect place to test the optimal foraging theory, which is a model that helps predict an organism's behavior when searching for resources," said Chongle Pan, ORNL senior staff scientist and joint associate professor at the University of Tennessee. "We learned how this theory plays out when applied to microbial communities as they compete for nutrients."
The team analyzed the behaviors of many genes and proteins, and in the phosphorus-deficient, untreated soil, they found an increased number of genes responsible for producing phosphorus-acquiring enzymes. They also discovered more than 100 genes that work to pull phosphorus from phytate, which is a complex organic compound found in plant tissue.
"Finding so many genes to break apart and transport such a complex molecule tells us that microbes are hungry for phosphorus in untreated soil," said Melanie Mayes, an ORNL senior staff scientist who studies multi-scale environmental processes.
Conversely, she noted that when phosphorus was plentiful, more genes needed to acquire complex carbon compounds were present. "The microbial community prioritizes the breakdown of the most needed nutrients, focusing efforts on the most limiting element to balance their overall nutritional needs," she said.
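One way to picture that prioritization is a toy optimal-foraging allocation in which effort scales with scarcity. This is only an illustration (plain Python; the nutrient availabilities are invented, and the inverse-availability rule is an assumption, not the study's actual analysis):

```python
# Toy optimal-foraging allocation: invest most effort in the scarcest resource.
# Availabilities are relative to the microbes' needs (1.0 = fully sufficient).
availability = {"phosphorus": 0.1, "nitrogen": 0.6, "carbon": 0.9}

# Effort proportional to scarcity (inverse availability), normalized to 100%.
scarcity = {n: 1.0 / a for n, a in availability.items()}
total = sum(scarcity.values())

for nutrient, s in sorted(scarcity.items(), key=lambda kv: -kv[1]):
    print(f"{nutrient:10s}: {100 * s / total:5.1f}% of enzyme production")
# Phosphorus-poor soil gets mostly phosphorus-acquiring enzymes, mirroring the
# extra phosphatase/phytase genes observed in the unfertilized plots.
```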
The team ran each soil sample through a series of rigorous and comprehensive analyses. The DOE Joint Genome Institute conducted deep sequencing of the soils' metagenomes, or genetic material recovered directly from the soil. ORNL then used mass spectrometry and metaproteomics to identify more than 7,000 proteins in each sample.
ORNL's Titan supercomputer quickly analyzed the large amounts of metagenomics and metaproteomics data, comparing microbial activities in phosphorus-rich and -poor soils. Environmental Molecular Sciences Laboratory scientists further characterized the soils' organic matter at Pacific Northwest National Laboratory.
These unique tools working together enabled one of the deepest proteogenomics studies done on soil microbial communities, according to Pan.
The ORNL-led team plans to continue their research to characterize the ecology and evolution of soil microbial communities in nutrient-poor environments, which has applications in agriculture and terrestrial biosphere modeling worldwide. Additionally, Mayes and her team are incorporating metagenomics information into nutrient cycling models under a DOE Early Career Research Program Award.
Results from their three-year study titled, "Community Proteogenomics Reveals the Systemic Impact of Phosphorus Availability on Microbial Functions in Tropical Soil," were published in Nature Ecology & Evolution.
The paper's coauthors included Qiuming Yao, Zhou Li, Yang Song, Melanie A. Mayes and Chongle Pan of ORNL; S. Joseph Wright and Benjamin L. Turner of the Smithsonian Tropical Research Institute; Terry C. Hazen, University of Tennessee-ORNL Governor's Chair for Environmental Biotechnology; Xuan Guo of UT; Susannah G. Tringe of the DOE Joint Genome Institute; and Malak M. Tfaily and Ljiljana Paša-Tolic of Pacific Northwest National Laboratory.
The research was supported by the Laboratory Directed Research and Development program at ORNL. Metagenomic sequencing was conducted by the DOE Joint Genome Institute and soil organic matter analyses were performed using Fourier-transform ion cyclotron resonance mass spectrometry by PNNL's Environmental Molecular Sciences Laboratory, both DOE Office of Science User Facilities. This work also leveraged the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility.
ORNL is managed by UT-Battelle for DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.
Sara Shoemaker | EurekAlert!
| <urn:uuid:5b133022-f9ed-4e75-a73b-bdfe285adfdc> | 4.0625 | 1,666 | Content Listing | Science & Tech. | 29.424627 | 95,532,457 |
In Riemannian geometry, the sectional curvature is one of the ways to describe the curvature of Riemannian manifolds. The sectional curvature K(σp) depends on a two-dimensional plane σp in the tangent space at a point p of the manifold. It is the Gaussian curvature of the surface which has the plane σp as a tangent plane at p, obtained from geodesics which start at p in the directions of σp (in other words, the image of σp under the exponential map at p). The sectional curvature is a smooth real-valued function on the 2-Grassmannian bundle over the manifold.
The sectional curvature determines the curvature tensor completely.
- 1 Definition
- 2 Manifolds with constant sectional curvature
- 3 Toponogov's theorem
- 4 Manifolds with non-positive sectional curvature
- 5 Manifolds with positive sectional curvature
- 6 Manifolds with non-negative sectional curvature
- 7 Manifolds with almost flat curvature
- 8 Manifolds with almost non-negative curvature
- 9 References
- 10 See also
Given two linearly independent tangent vectors $u$ and $v$ at a point $p$ of a Riemannian manifold, the sectional curvature is defined as

$$K(u,v) = \frac{\langle R(u,v)v, u \rangle}{\langle u,u \rangle \langle v,v \rangle - \langle u,v \rangle^2}.$$

Here R is the Riemann curvature tensor.

In particular, if u and v are orthonormal, then

$$K(u,v) = \langle R(u,v)v, u \rangle.$$
The sectional curvature in fact depends only on the 2-plane σp in the tangent space at p spanned by u and v. It is called the sectional curvature of the 2-plane σp, and is denoted K(σp).
Manifolds with constant sectional curvature
- negative curvature −1, hyperbolic geometry
- zero curvature, Euclidean geometry
- positive curvature +1, elliptic geometry
The model manifolds for the three geometries are hyperbolic space, Euclidean space and a unit sphere. They are the only connected, complete, simply connected Riemannian manifolds of given sectional curvature. All other connected complete constant curvature manifolds are quotients of those by some group of isometries.
If for each point in a connected Riemannian manifold (of dimension three or greater) the sectional curvature is independent of the tangent 2-plane, then the sectional curvature is in fact constant on the whole manifold.
Toponogov's theorem affords a characterization of sectional curvature in terms of how "fat" geodesic triangles appear when compared to their Euclidean counterparts. The basic intuition is that, if a space is positively curved, then the edge of a triangle opposite some given vertex will tend to bend away from that vertex, whereas if a space is negatively curved, then the opposite edge of the triangle will tend to bend towards the vertex.
More precisely, let M be a complete Riemannian manifold, and let xyz be a geodesic triangle in M (a triangle each of whose sides is a length-minimizing geodesic). Finally, let m be the midpoint of the geodesic xy. If M has non-negative curvature, then for all sufficiently small triangles

$$d(z,m)^2 \ge \tfrac{1}{2} d(z,x)^2 + \tfrac{1}{2} d(z,y)^2 - \tfrac{1}{4} d(x,y)^2,$$

where d is the distance function on M. The case of equality holds precisely when the curvature of M vanishes, and the right-hand side represents the squared distance from a vertex to the midpoint of the opposite side of a triangle in Euclidean space having the same side-lengths as the triangle xyz. This makes precise the sense in which triangles are "fatter" in positively curved spaces. In non-positively curved spaces, the inequality goes the other way:

$$d(z,m)^2 \le \tfrac{1}{2} d(z,x)^2 + \tfrac{1}{2} d(z,y)^2 - \tfrac{1}{4} d(x,y)^2.$$
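As a worked sanity check of the equality case (not from the source): in Euclidean space the right-hand side is exactly the squared median length given by Apollonius' theorem. Taking x = (0,0), y = (2,0), z = (0,2), so m = (1,0):

```latex
% d(z,m)^2 = 1^2 + 2^2 = 5
% RHS = (1/2) d(z,x)^2 + (1/2) d(z,y)^2 - (1/4) d(x,y)^2
%     = (1/2)(4) + (1/2)(8) - (1/4)(4) = 2 + 4 - 1 = 5
\[
d(z,m)^2 \;=\; \tfrac{1}{2} d(z,x)^2 + \tfrac{1}{2} d(z,y)^2 - \tfrac{1}{4} d(x,y)^2 ,
\]
```

so equality holds in the flat case, as stated.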
If tighter bounds on the sectional curvature are known, then this property generalizes to give a comparison theorem between geodesic triangles in M and those in a suitable simply connected space form; see Toponogov's theorem. Simple consequences of the version stated here are:
- A complete Riemannian manifold has non-negative sectional curvature if and only if the function $f_p = \operatorname{dist}^2(p, \cdot)$ is 1-concave for all points p.
- A complete simply connected Riemannian manifold has non-positive sectional curvature if and only if the function $f_p = \operatorname{dist}^2(p, \cdot)$ is 1-convex.
Manifolds with non-positive sectional curvature
In 1928, Élie Cartan proved the Cartan–Hadamard theorem: if M is a complete manifold with non-positive sectional curvature, then its universal cover is diffeomorphic to a Euclidean space. In particular, it is aspherical: the homotopy groups $\pi_i(M)$ for i ≥ 2 are trivial. Therefore, the topological structure of a complete non-positively curved manifold is determined by its fundamental group. Preissman's theorem restricts the fundamental group of negatively curved compact manifolds.
Manifolds with positive sectional curvature
Little is known about the structure of positively curved manifolds. The soul theorem (Cheeger & Gromoll 1972; Gromoll & Meyer 1969) implies that a complete non-compact non-negatively curved manifold is diffeomorphic to a normal bundle over a compact non-negatively curved manifold. As for compact positively curved manifolds, there are two classical results:
- It follows from the Myers theorem that the fundamental group of such manifold is finite.
- It follows from the Synge theorem that the fundamental group of such a manifold in even dimensions is 0, if orientable, and $\mathbb{Z}_2$ otherwise. In odd dimensions a positively curved manifold is always orientable.
Moreover, there are relatively few examples of compact positively curved manifolds, leaving a lot of conjectures (e.g., the Hopf conjecture on whether there is a metric of positive sectional curvature on $S^2 \times S^2$). The most typical way of constructing new examples is the following corollary from the O'Neill curvature formulas: if $M$ is a Riemannian manifold admitting a free isometric action of a Lie group $G$, and $M$ has positive sectional curvature on all 2-planes orthogonal to the orbits of $G$, then the manifold $M/G$ with the quotient metric has positive sectional curvature. This fact allows one to construct the classical positively curved spaces, being spheres and projective spaces, as well as these examples (Ziller 2007):
- The Berger spaces $B^7 = SO(5)/SO(3)$ and $B^{13} = SU(5)/(Sp(2) \cdot S^1)$.
- The Wallach spaces (or the homogeneous flag manifolds): $SU(3)/T^2$, $Sp(3)/(Sp(1))^3$, and $F_4/Spin(8)$.
- The Aloff–Wallach spaces $W^7_{p,q} = SU(3)/\operatorname{diag}(z^p, z^q, \bar{z}^{p+q})$.
- The Eschenburg spaces, biquotients of $SU(3)$.
- The Bazaikin spaces, biquotients of $SU(5)$.
Manifolds with non-negative sectional curvature
Cheeger and Gromoll proved the soul theorem which states that any non-negatively curved complete non-compact manifold M has a totally convex compact submanifold S such that M is diffeomorphic to the normal bundle of S. Such an S is called the soul of M. This theorem implies that M is homotopic to its soul S which has the dimension less than M.
Manifolds with almost flat curvature
Manifolds with almost non-negative curvature
- Cheeger, Jeff; Gromoll, Detlef (1972), "On the structure of complete manifolds of nonnegative curvature", Annals of Mathematics, Second Series, Annals of Mathematics, 96 (3): 413–443, doi:10.2307/1970819, JSTOR 1970819, MR 0309010.
- Gromoll, Detlef; Meyer, Wolfgang (1969), "On complete open manifolds of positive curvature", Annals of Mathematics, Second Series, Annals of Mathematics, 90 (1): 75–90, doi:10.2307/1970682, JSTOR 1970682, MR 0247590.
- Milnor, John Willard (1963), Morse theory, Based on lecture notes by M. Spivak and R. Wells. Annals of Mathematics Studies, No. 51, Princeton University Press, MR 0163331.
- Petersen, Peter (2006), Riemannian geometry, Graduate Texts in Mathematics, 171 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-29246-5, MR 2243772.
- Ziller, Wolfgang (2007). "Examples of manifolds with non-negative sectional curvature". arXiv preprint.
Species Detail - Common Moorhen (Gallinula chloropus) - Species information displayed is based on all datasets.
Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84).
1 January (recorded in 2003)
31 December (recorded in 2010)
National Biodiversity Data Centre, Ireland, Common Moorhen (Gallinula chloropus), accessed 21 July 2018, <https://maps.biodiversityireland.ie/Species/10031> | <urn:uuid:3e0aa7c3-f73f-4886-8b2c-43a7da07bd7e> | 2.6875 | 135 | Structured Data | Science & Tech. | 34.405041 | 95,532,482 |
TRAPPIST-1 twice as old as our solar system, claims study
TRAPPIST-1 is a system of seven Earth-size planets orbiting an ultra-cool dwarf star about 40 light-years away.
The ultra-cool dwarf star of the intriguing TRAPPIST-1 planetary system is up to twice as old as our solar system, a study has found.
TRAPPIST-1 is a system of seven Earth-size planets
The Trappist-1 system has been featured in the news quite a bit lately. In May of 2016, it appeared in the headlines after researchers announced the discovery of three...Universe Today (NewsDB Live) 2017-02-25
In February of 2017, a team of European astronomers announced the discovery of a seven-planet system orbiting the nearby star TRAPPIST-1. Aside from the fact that all seven...Universe Today 2017-08-14
An artist's impression of the TRAPPIST-1 planetary system. Seven rocky, Earth-size planets have been found circling TRAPPIST-1, a red dwarf star about the size of Jupiter....Business Insider 2017-02-22
Scientists only announced less than a year ago that they had spotted seven small planets huddled around a star called TRAPPIST-1. Now new research finds that, thanks to...Newsweek 2018-01-24
In February, NASA announced the discovery of seven potentially habitable planets within the TRAPPIST-1 solar system, just 40 light years from Earth. Three of the seven,...Newsweek 2017-04-27
Popular News Stories
> The central star of the TRAPPIST-1 planetary system (TRAPPIST-1A) is more than twice as cold as the Sun, and a red star at a distance of 39 light years (370 trillion km) from us. TRAPPIST-1A is relatively close by and is among the 300 closest stars to the Sun. The star TRAPPIST-1A is about 10 times smaller in size and 12 times less massive than our Sun. Read: A beginner’s...Hindustan Times 2017-03-31
This artist’s depiction shows the planets of the TRAPPIST-1 system and what state of matter water would maintain based on how far they are from their host star. Photo: NASA/JPL “It's not that the system is doomed, it’s that stable configurations are very exact. We can’t measure all the orbital parameters well enough at the moment so the simulated systems kept resulting in...Yahoo Daily News 2017-05-12
Using NASA's Kepler space telescope, scientists have confirmed that the outermost and least understood planet in the TRAPPIST-1 system orbits its star every 19 days. Scientists announced that the TRAPPIST-1 system has seven Earth-sized planets at a NASA press conference on February 22. ...Financial Express 2017-05-23
This illustration shows the possible surface of TRAPPIST-1f, one of the newly discovered planets in the TRAPPIST-1 system. (NASA/JPL-Caltech) NASA has revealed that the seven planets discovered last year in the TRAPPIST-1 exoplanet system are likely to be habitable. The exoplanets are...International Business Times 2018-02-07
This illustration shows what the TRAPPIST-1 system might look like from a vantage point near planet TRAPPIST-1f (at right). Credit: NASA/JPL-Caltech If we want to know more about whether life could survive on a planet outside our Solar System, it’s important to know the age of its star. Young stars have frequent releases of high-energy radiation called flares that can zap their...Astronomy/Spaceflight Now 2017-08-16 | <urn:uuid:330f3391-3b0d-44a1-856a-5f7895e06506> | 3.03125 | 879 | Content Listing | Science & Tech. | 70.934264 | 95,532,493 |
The scientists discovered this new kind of electrical signal transmission system by applying a novel method: Filamentary electrodes were inserted through open stomata directly into the inner leaf tissue and then placed onto the cell walls (see picture). Stomata are microscopically small openings in the leaf surface through which plants regulate evaporation and gas exchange.
The scientists found out that the new electrical signal they called "system potential" was induced and even modulated by wounding. If a plant leaf is wounded, the signal strength can be different and can be measured over long distances in unwounded leaves, depending on the kind and concentration of added cations (e.g. calcium, potassium, or magnesium).
It is not the transport of ions across cell membranes that causes the observed changes in voltage transmitted from leaf to shoot and then to the next leaf, but the activation of so-called proton pumps. "This is the reason why the "system potential" we measured cannot at all be compared to the classic action potential as present in nerves of animals and also in plants", says Hubert Felle from Gießen University.
Action potentials follow all-or-none characteristics: they are activated if a certain stimulus threshold is reached and then spread constantly. The "system potential", however, can carry different information at the same time: The strength of the inducing stimulus (wound signal) can influence the amplitude of the systemic signal as well as the effect of different ions. "We may be on the trail of an important signal transmission system that is induced by insect herbivory. Within minutes the whole plant is alerted and the plant's defense against its enemy is activated", says Axel Mithöfer from the Max Planck Institute for Chemical Ecology in Jena.
The novel "system potential" was detected in five different plant species, among them agricultural crops like tobacco (Nicotiana tabacum), maize (Zea mays), barley (Hordeum vulgare), and field bean (Vicia faba).
Plant Physiology 149, 1593-1600 (2009)
Hubert H. Felle | EurekAlert!
| <urn:uuid:8fea8f8b-84de-49a8-98ec-4f5a3ec01c1c> | 4.0625 | 1,111 | Content Listing | Science & Tech. | 34.466521 | 95,532,497 |
Environmental contamination by agricultural chemicals and industrial waste disposal results in adverse effects on reproduction of exposed birds. The diversity of pollutants results in physiological effects at several levels, including direct effects on breeding adults as well as developmental effects on embryos. The effects on embryos include mortality or reduced hatchability, failure of chicks to thrive (wasting syndrome), and teratological effects producing skeletal abnormalities and impaired differentiation of the reproductive and nervous systems through mechanisms of hormonal mimicking of estrogens. The range of chemical effects on adult birds covers acute mortality, sublethal stress, reduced fertility, suppression of egg formation, eggshell thinning, and impaired incubation and chick rearing behaviors. The types of pollutants shown to cause reproductive effects include organochlorine pesticides and industrial pollutants, organophosphate pesticides, petroleum hydrocarbons, heavy metals, and in a fewer number of reports, herbicides, and fungicides. o,p'-DDT, polychlorinated biphenyls (PCBs), and mixtures of organochlorines have been identified as environmental estrogens affecting populations of gulls breeding in polluted ''hot spots'' in southern California, the Great lakes, and Puget Sound. Estrogenic organochlorines represent an important class of toxicants to birds because differentiation of the avian reproductive system is estrogen dependent.
| <urn:uuid:c624a1af-36c2-4006-b3c5-42c96e1cc0dc> | 3.265625 | 286 | Academic Writing | Science & Tech. | -15.337943 | 95,532,498 |
Isolation of Total and Poly A+ RNA from Animal Cells
Most RNA in a mammalian cell consists of 28S, 18S, and 5S ribosomal RNAs together with tRNAs and other small ubiquitous RNAs. The remainder (<5%) consists of messenger RNA encoding most of the polypeptides of interest to the vast majority of present-day biologists. This mRNA is heterogeneous in size and generally carries long tracts of polyadenylic acid (polyA) at its 3′ end. The mRNA can be purified away from other nucleic acids by hybridization of the polyA tract to oligo(dT) as described below (1). Such RNA is referred to as polyA+ RNA and is a superior substrate for many techniques. However, purifying such molecules presents special problems as a result of the inherent instability of some RNAs and the presence of potent RNase activities in many cell types. Suitable purification protocols should include RNase inhibition or inactivation measures. The tried and tested methods described below include the latter (2,3). Denaturation of all cellular proteins, including RNases, at a rate faster than that of RNA hydrolysis eliminates RNA degradation. This can be achieved using guanidinium thiocyanate and β-mercaptoethanol, which denature cellular proteins and disrupt disulfide bonds, respectively.
Keywords: Cesium Chloride; Guanidinium Thiocyanate; Polyadenylic Acid; Sweet Smell; Resuspend Cell Pellet | <urn:uuid:556c9b39-41a8-44b0-859c-fd9a8cefc8d8> | 2.734375 | 311 | Academic Writing | Science & Tech. | 23.699075 | 95,532,508 |
struct_tree1 man page
struct::tree_v1 — Create and manipulate tree objects
package require Tcl 8.2
package require struct::tree ?1.2.2?
treeName option ?arg arg ...?
treeName append node ?-key key? value
treeName children node
treeName cut node
treeName delete node ?node ...?
treeName depth node
treeName exists node
treeName get node ?-key key?
treeName getall node
treeName keys node
treeName keyexists node ?-key key?
treeName index node
treeName insert parent index ?child ?child ...??
treeName isleaf node
treeName lappend node ?-key key? value
treeName move parent index node ?node ...?
treeName next node
treeName numchildren node
treeName parent node
treeName previous node
treeName set node ?-key key? ?value?
treeName size ?node?
treeName splice parent from ?to? ?child?
treeName swap node1 node2
treeName unset node ?-key key?
treeName walk node ?-order order? ?-type type? -command cmd
The ::struct::tree command creates a new tree object with an associated global Tcl command whose name is treeName. This command may be used to invoke various operations on the tree. It has the following general form:
- treeName option ?arg arg ...?
The option and its args determine the exact behavior of the command.
A tree is a collection of named elements, called nodes, one of which is distinguished as a root, along with a relation ("parenthood") that places a hierarchical structure on the nodes. (Data Structures and Algorithms; Aho, Hopcroft and Ullman; Addison-Wesley, 1987). In addition to maintaining the node relationships, this tree implementation allows any number of keyed values to be associated with each node.
The element names can be arbitrary strings.
A tree is thus similar to an array, but with three important differences:
Trees are accessed through an object command, whereas arrays are accessed as variables. (This means trees cannot be local to a procedure.)
Trees have a hierarchical structure, whereas an array is just an unordered collection.
Each node of a tree has a separate collection of attributes and values. This is like an array where every value is a dictionary.
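The following short session illustrates these points. It is a hypothetical sketch: the tree, node, and key names (mytree, nodeFoo, nodeBar, color) are invented for the example, while the commands themselves are the ones documented below.

    package require struct::tree
    ::struct::tree mytree                    ;# new tree object; it starts with the single node "root"
    mytree insert root end nodeFoo nodeBar   ;# add two named children under the root
    mytree set nodeFoo -key color blue       ;# attach a keyed value to nodeFoo
    puts [mytree get nodeFoo -key color]     ;# prints "blue"
    puts [mytree children root]              ;# prints "nodeFoo nodeBar"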
The following commands are possible for tree objects:
- treeName append node ?-key key? value
Appends a value to one of the keyed values associated with a node. If no key is specified, the key data is assumed.
- treeName children node
Return a list of the children of node.
- treeName cut node
Removes the node specified by node from the tree, but not its children. The children of node are made children of the parent of the node, at the index at which node was located.
- treeName delete node ?node ...?
Removes the specified nodes from the tree. All of the nodes' children will be removed as well to prevent orphaned nodes.
- treeName depth node
Return the number of steps from node node to the root node.
- treeName destroy
Destroy the tree, including its storage space and associated command.
- treeName exists node
Returns true if the specified node exists in the tree.
- treeName get node ?-key key?
Return the value associated with the key key for the node node. If no key is specified, the key data is assumed.
- treeName getall node
Returns a serialized list of key/value pairs (suitable for use with [array set]) for the node.
- treeName keys node
Returns a list of keys for the node.
- treeName keyexists node ?-key key?
Return true if the specified key exists for the node. If no key is specified, the key data is assumed.
- treeName index node
Returns the index of node in its parent's list of children. For example, if a node has nodeFoo, nodeBar, and nodeBaz as children, in that order, the index of nodeBar is 1.
- treeName insert parent index ?child ?child ...??
Insert one or more nodes into the tree as children of the node parent. The nodes will be added in the order they are given. If parent is root, it refers to the root of the tree. The new nodes will be added to the parent node's child list at the index given by index. The index can be end in which case the new nodes will be added after the current last child.
If any of the specified children already exist in treeName, those nodes will be moved from their original location to the new location indicated by this command.
If no child is specified, a single node will be added, and a name will be generated for the new node. The generated name is of the form nodex, where x is a number. If names are specified they must neither contain whitespace nor colons (":").
The return result from this command is a list of nodes added.
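A hedged sketch of these behaviours, continuing the hypothetical mytree example from above (the generated name shown is only illustrative; the manual guarantees only the form nodex):

    mytree insert root end nodeBaz      ;# returns the list {nodeBaz}
    set new [mytree insert nodeBaz 0]   ;# no child named: returns a generated name such as node4
    mytree insert nodeFoo end nodeBaz   ;# nodeBaz already exists, so it is moved under nodeFoo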
- treeName isleaf node
Returns true if node is a leaf of the tree (if node has no children), false otherwise.
- treeName lappend node ?-key key? value
Appends a value (as a list) to one of the keyed values associated with a node. If no key is specified, the key data is assumed.
- treeName move parent index node ?node ...?
Make the specified nodes children of parent, inserting them into the parent's child list at the index given by index. Note that the command will take all nodes out of the tree before inserting them under the new parent, and that it determines the position to place them into after the removal, before the re-insertion. This behaviour is important when it comes to moving one or more nodes to a different index without changing their parent node.
- treeName next node
Return the right sibling of node, or the empty string if node was the last child of its parent.
- treeName numchildren node
Return the number of immediate children of node.
- treeName parent node
Return the parent of node.
- treeName previous node
Return the left sibling of node, or the empty string if node was the first child of its parent.
- treeName set node ?-key key? ?value?
Set or get one of the keyed values associated with a node. If no key is specified, the key data is assumed. Each node that is added to a tree has the value "" assigned to the key data automatically. A node may have any number of keyed values associated with it. If value is not specified, this command returns the current value assigned to the key; if value is specified, this command assigns that value to the key.
- treeName size ?node?
Return a count of the number of descendants of the node node; if no node is specified, root is assumed.
- treeName splice parent from ?to? ?child?
Insert a node named child into the tree as a child of the node parent. If parent is root, it refers to the root of the tree. The new node will be added to the parent node's child list at the index given by from. The children of parent which are in the range of the indices from and to are made children of child. If the value of to is not specified it defaults to end. If no name is given for child, a name will be generated for the new node. The generated name is of the form nodex, where x is a number. The return result from this command is the name of the new node.
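For instance, the following hypothetical one-liner gathers every current child of the root under a single, newly generated node (the exact generated name may differ):

    set n [mytree splice root 0 end]   ;# child name omitted: returns a generated name, e.g. node5
    # All former children of root are now children of $n,
    # and $n sits at index 0 in root's child list.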
- treeName swap node1 node2
Swap the position of node1 and node2 in the tree.
- treeName unset node ?-key key?
Removes a keyed value from the node node. If no key is specified, the key data is assumed.
- treeName walk node ?-order order? ?-type type? -command cmd
Perform a breadth-first or depth-first walk of the tree starting at the node node. The type of walk, breadth-first or depth-first, is determined by the value of type; bfs indicates breadth-first, dfs indicates depth-first. Depth-first is the default. The order of the walk, pre-, post-, both- or in-order is determined by the value of order; pre indicates pre-order, post indicates post-order, both indicates both-order and in indicates in-order. Pre-order is the default.
Pre-order walking means that a parent node is visited before any of its children. For example, a breadth-first search starting from the root will visit the root, followed by all of the root's children, followed by all of the root's grandchildren. Post-order walking means that a parent node is visited after any of its children. Both-order walking means that a parent node is visited before and after any of its children. In-order walking means that a parent node is visited after its first child and before the second. This is a generalization of in-order walking for binary trees and will do the right thing if a binary tree is walked. The combination of a breadth-first walk with in-order is illegal.
As the walk progresses, the command cmd will be evaluated at each node. Percent substitution will be performed on cmd before evaluation, just as in a bind script. The following substitutions are recognized:
- %% Insert the literal % character.
- %t Name of the tree object.
- %n Name of the current node.
- %a Name of the action occurring; one of enter, leave, or visit. enter actions occur during pre-order walks; leave actions occur during post-order walks; visit actions occur during in-order walks. In a both-order walk, the command will be evaluated twice for each node; the action is enter for the first evaluation, and leave for the second.
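Putting the options and substitutions together, a minimal hypothetical walk script might look like this (it assumes node names contain no Tcl-special characters, since the substitutions are purely textual):

    # Depth-first, pre-order walk over the whole tree:
    mytree walk root -order pre -type dfs -command {
        puts "tree %t, node %n, action %a"
    }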
Bugs, Ideas, Feedback
This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category struct::tree of the Tcllib Trackers [http://core.tcl.tk/tcllib/reportlist]. Please also report any ideas for enhancements you may have for either package and/or documentation.
Copyright (c) 2002 Andreas Kupries <firstname.lastname@example.org> | <urn:uuid:8c93dbca-6fd5-43c6-9ee0-4e938925eb49> | 2.609375 | 2,223 | Documentation | Software Dev. | 60.491359 | 95,532,510 |
Examines the dominant influence of climate on the distribution of the major vegetation types of the world.
F.I. Woodward has written a useful short book... Its strengths derive from Woodward's understanding of scale, his focus on key physical and physiological relationships and mechanisms, and his willingness to simplify and to test models that link plant physiological responses to global vegetation patterns... Woodward's ideas should be key to research in the emerging International Geosphere Biosphere Program. The potential application of Woodward's ideas is therefore high, and this potential makes his book a timely publication. Ecology
| <urn:uuid:de3638d4-195c-4b03-a064-2d49c7e53828> | 2.59375 | 187 | Product Page | Science & Tech. | 45.529059 | 95,532,513 |
The research, federally funded by the National Science Foundation, appears Feb. 5 in the online edition of the journal Science and will be published later in the journal's print edition.
The scientists used molecular genetic techniques to analyze DNA sequences from 150 wolves, about half of them black, in Yellowstone National Park, which covers parts of Wyoming, Montana and Idaho. They found that a novel mutated variant of a gene in dogs, known as the K locus, is responsible for black coat color and was transferred to wolves through mating.
The biologists are unsure of when the black coat color was transferred from dogs to wolves, but they believe it was not a recent occurrence; the black coat could not have spread as widely as it has throughout North America in just a few hundred years, they say. They suspect the transfer took place sometime before the arrival of Europeans to North America and involved dogs that were here with Native Americans.
"This is the first example where a gene mutation originated in a domesticated species, was transferred to and became very common in a closely related wild species," said Robert Wayne, a UCLA professor of ecology and evolutionary biology and co-author of the Science paper.
"Although genes that evolve under domestication may be transferred to wild species, they generally do not proliferate in the wild because the natural context is so different from that under domestication," Wayne said. "No one would have guessed that the common black coat color in North American wolves came from dogs — there is no precedent for it. Moreover, for whatever reason, the transfer of the black coat-color gene from dogs to wolves and its success in the wild occurred uniquely in North America.
"Most mutations we see in dogs have been selected by humans, and we intuitively think they are unique to dogs," he said. "We don't think of short-legged wolves like dachshunds or wild wolves that look like Dalmatians. The surprise of this study is that black wolves have their black coat coloration as a gift from dogs. The products of artificial selection had added substantially to the genetic legacy of a wild species."
Scientists have thought that coat color is related to camouflage, perhaps to hide wolves from their prey or from one another.
"Apparently, natural selection has increased the frequency of black coat color dramatically in wolf populations across North America," Wayne said. "It must have adaptive value that we don't yet understand. It could be camouflage, or strengthening the immune system to combat pathogens, or it could reflect a preference to mate with individuals of a different coat color."
Does this research have implications beyond dogs and wolves?
"The underlying assumption is that genes from one species will be contained and not enter another species on a massive scale; this may not be true," Wayne said. "There may be implications for genetically modified organisms."
"This work shows how domestication can preserve and ultimately enrich the genetic legacy of the original natural populations," said Gregory Barsh, a professor of genetics at Stanford University's School of Medicine and co-author of the Science paper. "Our work is on wolves, but there are many other examples of domestic plants — wheat, rice, maize, soybean — and animals — bison, cattle, cats — where gene flow from domesticated to natural populations has been documented."
The lead authors of the paper are Tovi Anderson, a graduate student in Barsh's Stanford laboratory, and Bridgett vonHoldt, a UCLA graduate student of ecology and evolutionary biology who works in Wayne's laboratory.
As part of the research on the Yellowstone wolves, VonHoldt conducted a genome scan and studied more than 50,000 genetic markers in order to assess genetic variation across wolf populations in relation to dogs. She and her colleagues examined whether there was any evidence elsewhere in the genome indicating that black wolves recently hybridized with dogs but could not find any.
Black coyotes also have the same coat-color gene as domestic dogs, Anderson, vonHoldt and the co-authors report.
The research was conducted by laboratory and field scientists with diverse backgrounds in conservation biology, ecology and molecular genetics.
The collaboration will help to refine concepts relevant to both genetics and conservation biology with respect to understanding how different traits arise during evolution and how biological diversity can be nurtured and maintained, the scientists said.
"My main interest is to describe the genetics of dog domestication — the geographic location of domestication and the genetic changes that led to the distinctive body forms evident in so many breeds," vonHoldt said. "I'm able to use a genome approach and look at many points along the dog genome to find interesting regions and whether these regions contain genes with known functions, and to extrapolate what that means for the domestication process of dogs.
"We're trying to figure out whether the black coat color provides a fitness or behavioral advantage," she added, noting that Yellowstone National Park has a wealth of observational data that "we can integrate with our genetic data."
"We can scan the dog's genome and find associations between a particular marker and a trait like foreshortened limbs or a specific coat color, or even behavioral traits," Wayne said. "We then examine the genes near those markers and identify candidates that may be responsible for the specific trait. Our hope is that we will find the genetic basis for traits having to do with behavior, speed, longevity or fecundity — all these traits that we measure in wild populations, but we do not yet understand their genetic basis."
Yellowstone is home to the wolf population about which the most is known, Wayne said. Their behavior and reproduction have been well studied, including by one of Wayne's graduate students, Daniel Stahler, a co-author of the Science paper who works as a biologist for the Yellowstone National Park Gray Wolf Restoration Project.
"The wolves of Yellowstone represent an unparalleled population for studying the inheritance of traits," Wayne said. "In Yellowstone, we have followed very precisely the inheritance of coat color throughout the entire wolf population and document that coat color is a trait inherited with just one gene involved, with two forms — one causing white and one causing black. This is the most comprehensive genealogical analysis of a North American carnivore population ever undertaken."
Other co-authors on the paper are Sophie Candille, from Barsh's Stanford laboratory; Marco Musiani, a professor at the University of Calgary and former postdoctoral scholar in Wayne's laboratory; Claudia Greco, a graduate student from Italy who previously worked in Wayne's laboratory, and her adviser, Ettore Randi; Douglas W. Smith, project leader for the Yellowstone National Park Gray Wolf Restoration Project; Badri Padhukasahasram, a graduate student in the department of biological statistics and computational biology at Cornell University; Jennifer Leonard of Sweden's Uppsala University, a former Ph.D. student in Wayne's laboratory; Carlos Bustamante, a statistical population geneticist in the department of biological statistics and computational biology at Cornell; Elaine Ostrander of the National Human Genome Research Institute; and Hua Tang, from Barsh's laboratory.
In previous research, Wayne and colleagues used molecular genetic techniques to determine that dogs have ancient origins, and that the first Americans to arrive in the New World more than 12,000 years ago brought domesticated dogs with them. They have also found that dogs have been living in close association with humans much longer than any other domestic animal, have confirmed that dogs evolved from wolves, and have confirmed that today's domestic horse resulted from the interbreeding of many lines of wild horses in multiple locations and was not confined to a small area or a single culture.
UCLA is California's largest university, with an enrollment of nearly 38,000 undergraduate and graduate students. The UCLA College of Letters and Science and the university's 11 professional schools feature renowned faculty and offer more than 323 degree programs and majors. UCLA is a national and international leader in the breadth and quality of its academic, research, health care, cultural, continuing education and athletic programs. Four alumni and five faculty have been awarded the Nobel Prize.
Stuart Wolpert | EurekAlert!
| <urn:uuid:62dada8a-2a45-4193-a419-752bbd84c0e6> | 3.875 | 2,370 | Content Listing | Science & Tech. | 31.061214 | 95,532,526 |
Doughnuts, electric current and quantum physics - this will sound like a weird list of words to most people, but for Sebastian Huber it is a job description.
ETH-professor Huber is a theoretical physicist who, for several years now, has focused his attention on so-called topological insulators, i.e., materials whose ability to conduct electric current originates in their topology.
The easiest way to understand what "topological" means in this context is to imagine how a doughnut can be turned into a coffee cup by pulling, stretching and moulding - but without cutting it. Topologically speaking, therefore, doughnuts and coffee cups are identical, and by applying the same principle to the quantum mechanical wave function of electrons in a solid one obtains the phenomenon of the topological insulator.
This is advanced quantum physics, highly complex and far removed from everyday experience. Nevertheless, professor Huber and his collaborators have now managed to make these abstract ideas very concrete and even to come up with a possible application in engineering by cutting red tape, as it were, and involving colleagues from different disciplines all the way through the ETH.
From quanta to mechanics
In the beginning, Sebastian Huber asked a simple question: is it possible to apply the principle of a topological insulator to mechanical systems? Normally, quantum physics and mechanics are two separate worlds.
In the quantum world particles can "tunnel" through barriers and reinforce or cancel each other as waves, whereas everyday mechanics deals with falling bodies or the structural analysis of bridges. Huber and his colleagues realized, however, that the mathematical formulas describing the quantum properties of a topological insulator can be rearranged to look exactly like those of a well-known mechanical system - an array of swinging pendulums.
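A generic sketch can make that mapping concrete (illustrative only; the specific model in the paper is more elaborate). Newton's equation for a chain of identical pendulums with natural frequency ω0, coupled to their nearest neighbours by springs of stiffness k, is

    $$ \ddot{x}_n = -\omega_0^2\, x_n + k\,(x_{n+1} - 2x_n + x_{n-1}), $$

and the harmonic ansatz x_n(t) = u_n e^{iωt} turns this into the eigenvalue problem

    $$ \omega^2 u_n = \omega_0^2 u_n - k\,(u_{n+1} - 2u_n + u_{n-1}), $$

which has the same algebraic form as a tight-binding Schrödinger equation, with ω² playing the role of the energy. Topological features of the band structure can therefore carry over to the mechanical system.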
In particular, just like their quantum mechanical counterparts the mechanical formulas predicted so-called edge states. In such states an electric current (or, in the case of pendulums, a mechanical vibration) flows along the edges of the material, while inside the system nothing happens. "From a theoretical point of view that was a beautiful result", says Huber, "but, of course, it is easier to convince people if you also show it in practice."
No sooner said than done: together with technicians at ETH, Huber and his student built a mechanical model consisting of 270 pendulums that are arranged in a rectangular lattice and connected by small springs. Two of those pendulums can be mechanically excited, meaning that they can be shaken back and forth with a particular frequency and strength.
Little by little, the spring couplings cause the other pendulums to start swinging as well. Eventually, for a particular excitation frequency the physicists saw what they had been hoping for: the pendulums inside the rectangle stood still, whereas those along the edge vibrated rhythmically, causing a "wave" to flow around the rectangle. In other words, the coupled pendulums did, indeed, behave just like a topological insulator.
Robotic arms and lenses for sound
What started out as a pipe dream and a nice gimmick for professor Huber could soon become a useful tool. The mechanical edge states of the coupled pendulums, it turns out, are so robust - "topologically protected", in technical language - that they persist if the array of pendulums is disordered and even if a part of the rectangle is removed.
Such properties would be interesting, for instance, in sound and vibration insulation, which is important in various areas such as industrial production, where robot arms have to place objects precisely and without jittering. Moreover, one can imagine materials that convey sound in one direction only, or others that focus sound like a lens.
"Such applications are very challenging, but still realistic", says Chiara Daraio, ETH-professor for mechanics and materials. Of course, the mechanical systems would first have to shrink considerably - Huber's pendulums are, after all, half a metre long and weigh half a kilo. The engineers are already building a new device that works without pendulums and that will only be a few centimetres in size.
Süsstrunk R, Huber SD: Observation of phononic helical edge states in a mechanical topological insulator. Science 2015, 349: 47-50, doi: 10.1126/science.aab0239
Dr. Sebastian Huber | EurekAlert!
| <urn:uuid:fcf8651c-38f7-405b-854f-b70f2c09ae6c> | 3.203125 | 1,567 | Content Listing | Science & Tech. | 34.694767 | 95,532,527 |
Environment Protection and Biodiversity Conservation Act 1999 (PDF)
At least 40 per cent of the world's economy and 80 per cent of the needs of the poor are derived from biological resources. In addition, the greater the diversity of life, the greater the opportunity for medical discoveries, economic development, and adaptive responses to such new challenges as climate change. (Source: The Convention about Life on Earth, Convention on Biodiversity web site.)
The variety of life on Earth, its biological diversity is commonly referred to as biodiversity. The number of species of plants, animals, and microorganisms, the enormous diversity of genes in these species, the different ecosystems on the planet, such as deserts, rainforests and coral reefs are all part of a biologically diverse Earth. Appropriate conservation and sustainable development strategies attempt to recognize this as being integral to any approach to preserving biodiversity. Almost all cultures have their roots in our biological diversity in some way or form. Declining biodiversity is therefore a concern for many reasons. Biodiversity boosts ecosystem productivity where each species, no matter how small, all have an important role to play. Healthy ecosystems can better withstand and recover from a variety of disasters.
And so, while we dominate this planet, we still need to preserve the diversity in wildlife. Deforestation threatens many species such as the giant leaf frog, shown here. That is quite a lot of services we get for free! It therefore makes economic and development sense to move towards sustainability.
To prevent the well-known and well-documented problems of genetic defects caused by in-breeding, species need a variety of genes to ensure successful survival. Without this, the chances of extinction increase. And as we start destroying, reducing and isolating habitats, the chances for interaction between species with a large gene pool decrease. It is a type of cooperation based on mutual survival and is often what a balanced ecosystem refers to. As an example, consider all the species of animals and organisms involved in a simple field used in agriculture. | <urn:uuid:ff4e26e2-3829-467a-b392-eed04b173679> | 3.234375 | 413 | Knowledge Article | Science & Tech. | 27.149712 | 95,532,548 |
In a new paper published online today in Progress in Oceanography, Diane Thompson and collaborators (including Malin) show how ocean currents transport coral larvae throughout the western Tropical Pacific, and how the barriers posed by these currents have helped shape where species are found.
Becca Selden teamed up with DataSpire's Kristin Hunter-Thomson to develop an educational resource with Science Friday's educational director Ariel Zych. The resource teaches students in grades 7-12 to interpret the impacts of warming oceans on marine ecosystems. Lab members Katrina Catalano and Lisa McManus provided valuable scientific review of the resource prior to its publication.
The ocean is changing. As it changes, the ecosystem and the species within the ocean are impacted, sometimes in surprising ways. This is a story about how some of those changes—in temperature, where fish populations live, and the fishing communities that rely upon them—could play out along the Atlantic Coast in the next century. It’s also a story about making predictions and using evidence from data. Here’s how it’s going to work:
Read a story from the docks of New England: What’s changing?
Meet a scientist and think like one: How do we collect data on the oceans?
Think like a fish: Use data to model changes in fish populations.
Make predictions: Use your model to make predictions and inform the community
Just out last week, Malin has a Commentary in PNAS, “Throwing back the big ones saves a fishery from hot water.” In it, he explains why a recent paper by Arnault Le Bris on the Maine lobster fishery provides important insight into efforts to create climate-ready fisheries management. Practices like conserving the female lobsters and not catching the large lobsters have allowed the fishery to flourish as temperatures have warmed, and will likely continue helping the fishery into the future. Despite the overall good news for lobster and the way it has been managed in Maine, many of the stakeholders in Maine have not been as happy with the news (see Portland Press Herald articles here and here).
Jennifer presented 25 years of changes in population genetic patterns of summer flounder at the Ecological Society of America (ESA) meeting in Portland, OR
Sarah presented on genomic evidence for evolutionary rescue in little brown bats hit by white nose syndrome, also at ESA
Malin gave three talks: how ecology can help meet the UN sustainable development goals, how to teach about climate change (with Rebecca Jordan), and how climate change impacts in the ocean are different than those on land (all at ESA)
Becca talked about changing predator-prey interactions as a result of warming in the Northeast US at the American Fisheries Society (AFS) meeting in Tampa, FL
Jim presented a detailed projection of marine animal distributions in North America over the coming century (AFS)
Allison presented some of her Ph.D. work on eco-evolutionary dynamics in salmon (AFS)
New paper just out online in Global Change Biology, led by postdoc Becca Selden: functional diversity among predatory fish helps protect ecosystems from the impacts of warming. Becca showed that warming has helped make Atlantic cod a much less important predator in the Northeast U.S., but other predators (spiny dogfish, hakes) have expanded to fill its role.
On a geeky note, what’s especially interesting is that these changes in predator-prey interactions with warming are occurring even though both predators and prey are shifting their distributions as the environment changes. | <urn:uuid:3268845b-486d-4ee8-a406-0007b8a690df> | 3.34375 | 738 | Personal Blog | Science & Tech. | 27.854458 | 95,532,604 |
A University of Exeter research team studied zebra finches, which had been selectively bred to produce three distinct types – ‘laid-back’, ‘normal’ and ‘stressed’ – based on their levels of stress hormone. The group was surprised to find that the ‘stressed’ birds were bolder and took more risks in a new environment than the group that was usually more laid-back. Their findings are published today (26 October) in the journal Hormones and Behavior.
Like other animals including humans, birds respond to stress, created by the appearance of a predator or a change in their environment for example, by producing a hormone. In birds, this hormone is called ‘corticosterone’ and some individuals have higher levels of the hormone than others. The zebra finches in this experiment were bred to have three different corticosterone levels, with the ‘laid-back’ birds having lower levels than the ‘stressed’ birds. The researchers put the birds into a new environment, which housed several unfamiliar objects, including new feeders. The ‘stressed’ birds were the first to visit the new feeders, which they also returned to more quickly than the other birds after being startled. Overall, they approached more objects than their normally more relaxed peers, showing greater risk-taking behaviour and arguably handling the situation better.
Dr Thais Martins of the University of Exeter said: “It initially seems counter-intuitive that birds with higher levels of the stress hormone showed bolder behaviour, normally associated with confidence. However, corticosterone is released to help tackle stress by encouraging the animal to adopt key survival behaviours like seeking food. So on reflection, perhaps it is not surprising that these birds are more likely to explore the environment and look for food.”
Previous research has indicated that animals show consistent individual differences in their behaviour when faced with certain challenges. Traditionally in this field of research, birds have been separated into two groups: ‘bold’ and ‘shy’, or ‘active’ and ‘passive’. These definitions are based on observations of their behavioural strategies, and the birds are then studied for physiological differences. This research took the opposite approach and separated the birds into groups by physiology, based on corticosterone production levels, and then looked for behavioural differences.
Sarah Hoyle | EurekAlert!
| <urn:uuid:fdc325e6-80dc-41e3-916b-7288f6308c23> | 3.6875 | 1,152 | Content Listing | Science & Tech. | 36.175798 | 95,532,640 |
Caltech and JPL researchers identify a process involving UV light from the sun that helps explain how a moderately dense Martian atmosphere 3.8 billion years ago could have evolved into the current thin one without invoking a missing carbon reservoir.
Charles Elachi has announced his intention to retire as director of the Jet Propulsion Laboratory on June 30, 2016 and move to campus as professor emeritus. A national search is underway to identify his successor.
Using data from the W. M. Keck Observatory's OSIRIS spectrometer and maps from NASA's Galileo probe, researchers have mapped what may be salt deposits from the ocean below the ice onto the Jovian moon's surface.
In September, the NASA/JPL Cassini mission began the last two years of the Solstice Mission. We recently spoke with JPL director Charles Elachi to gain his unique perspective on Cassini's achievements—and what will come next.
Ken Farley, the project scientist for NASA's next Mars rover, a mission called Mars 2020, and the W.M. Keck Foundation Professor of Geochemistry at Caltech, talks about how the Mars 2020 landing site selection process is shaping up.
NASA participated for the first time in Norway's annual oil spill cleanup exercise in the North Sea on June 8 through 11. Scientists flew a specialized NASA airborne instrument to monitor a controlled release of oil into the sea.
Feynman Teaching Award winner Mike Brown ventures into new fields of instruction: the Massive Open Online Course, or MOOC, and the "flipped" classroom, which inverts the traditional arrangement of listening to lectures in class and doing assignments at home. | <urn:uuid:74bf1297-30ab-4c13-a763-c6d7ea350efd> | 2.546875 | 335 | Content Listing | Science & Tech. | 45.520212 | 95,532,665 |
Construction of the real numbers
In mathematics, there are several ways of defining the real number system as an ordered field. The synthetic approach gives a list of axioms for the real numbers as a complete ordered field. Under the usual axioms of set theory, one can show that these axioms are categorical, in the sense that there is a model for the axioms, and any two such models are isomorphic. Any one of these models must be explicitly constructed, and most of these models are built using the basic properties of the rational number system as an ordered field.
The synthetic approach axiomatically defines the real number system as a complete ordered field. Precisely, this means the following. A model for the real number system consists of a set R, two distinct elements 0 and 1 of R, two binary operations + and × on R (called addition and multiplication, respectively), and a binary relation ≤ on R, satisfying the following properties.
- (R, +, ×) forms a field. In other words,
- For all x, y, and z in R, x + (y + z) = (x + y) + z and x × (y × z) = (x × y) × z. (associativity of addition and multiplication)
- For all x and y in R, x + y = y + x and x × y = y × x. (commutativity of addition and multiplication)
- For all x, y, and z in R, x × (y + z) = (x × y) + (x × z). (distributivity of multiplication over addition)
- For all x in R, x + 0 = x. (existence of additive identity)
- 0 is not equal to 1, and for all x in R, x × 1 = x. (existence of multiplicative identity)
- For every x in R, there exists an element −x in R, such that x + (−x) = 0. (existence of additive inverses)
- For every x ≠ 0 in R, there exists an element x−1 in R, such that x × x−1 = 1. (existence of multiplicative inverses)
- (R, ≤) forms a totally ordered set. In other words,
- The field operations + and × on R are compatible with the order ≤. In other words,
- For all x, y and z in R, if x ≤ y, then x + z ≤ y + z. (preservation of order under addition)
- For all x and y in R, if 0 ≤ x and 0 ≤ y, then 0 ≤ x × y (preservation of order under multiplication)
- The order ≤ is complete in the following sense: every non-empty subset of R bounded above has a least upper bound. In other words,
- If A is a non-empty subset of R, and if A has an upper bound, then A has a least upper bound u, such that for every upper bound v of A, u ≤ v.
The rational numbers Q satisfy the first three axioms (i.e. Q is totally ordered field) but Q does not satisfy axiom 4. So axiom 4, which requires the order to be Dedekind-complete, is crucial. Axiom 4 implies the Archimedean property. Several models for axioms 1-4 are given below. Any two models for axioms 1-4 are isomorphic, and so up to isomorphism, there is only one complete ordered Archimedean field.
When we say that any two models of the above axioms are isomorphic, we mean that for any two models (R, 0R, 1R, +R, ×R, ≤R) and (S, 0S, 1S, +S, ×S, ≤S), there is a bijection f : R → S preserving both the field operations and the order. Explicitly,
- f is both injective and surjective.
- f(0R) = 0S and f(1R) = 1S.
- For all x and y in R, f(x +R y) = f(x) +S f(y) and f(x ×R y) = f(x) ×S f(y).
- For all x and y in R, x ≤R y if and only if f(x) ≤S f(y).
Tarski's axiomatization of the reals
An alternative synthetic axiomatization of the real numbers and their arithmetic was given by Alfred Tarski, consisting of only the 8 axioms shown below and a mere four primitive notions: a set called the real numbers, denoted R, a binary relation over R called order, denoted by infix <, a binary operation over R called addition, denoted by infix +, and the constant 1.
Axioms of order (primitives: R, <):
Axiom 1. If x < y, then not y < x. That is, "<" is an asymmetric relation.
Axiom 2. If x < z, there exists a y such that x < y and y < z. In other words, "<" is dense in R.
Axiom 3. "<" is Dedekind-complete. More formally, for all X, Y ⊆ R, if for all x ∈ X and y ∈ Y, x < y, then there exists a z such that for all x ∈ X and y ∈ Y, if z ≠ x and z ≠ y, then x < z and z < y.
To clarify the above statement somewhat, let X ⊆ R and Y ⊆ R. We now define two common English verbs in a particular way that suits our purpose:
- X precedes Y if and only if for every x ∈ X and every y ∈ Y, x < y.
- The real number z separates X and Y if and only if for every x ∈ X with x ≠ z and every y ∈ Y with y ≠ z, x < z and z < y.
Axiom 3 can then be stated as:
- "If a set of reals precedes another set of reals, then there exists at least one real number separating the two sets."
Axioms of addition (primitives: R, <, +):
Axiom 4. x + (y + z) = (x + z) + y.
Axiom 5. For all x, y, there exists a z such that x + z = y.
Axiom 6. If x + y < z + w, then x < z or y < w.
Axioms for one (primitives: R, <, +, 1):
Axiom 7. 1 ∈ R.
Axiom 8. 1 < 1 + 1.
Explicit constructions of models
We shall not prove that any two models of the axioms are isomorphic. Such a proof can be found in any number of modern analysis or set theory textbooks. We will sketch the basic definitions and properties of a number of constructions, however, because each of these is important for both mathematical and historical reasons. The first three, due to Georg Cantor/Charles Méray, Richard Dedekind and Karl Weierstrass/Otto Stolz, all occurred within a few years of each other. Each has advantages and disadvantages. A major motivation in all three cases was the instruction of mathematics students.
Construction from Cauchy sequences
R is defined as the completion of Q with respect to the metric |x-y|, as will be detailed below (for completions of Q with respect to other metrics, see p-adic numbers.)
Let R be the set of Cauchy sequences of rational numbers. That is, sequences
- x1, x2, x3,...
of rational numbers such that for every rational ε > 0, there exists an integer N such that for all natural numbers m,n > N, |xm − xn| < ε. Here the vertical bars denote the absolute value.
Cauchy sequences (xn) and (yn) can be added and multiplied as follows:
- (xn) + (yn) = (xn + yn)
- (xn) × (yn) = (xn × yn).
Two Cauchy sequences are called equivalent if and only if the difference between them tends to zero. This defines an equivalence relation that is compatible with the operations defined above, and the set R of all equivalence classes can be shown to satisfy all axioms of the real numbers. We can embed Q into R by identifying the rational number r with the equivalence class of the sequence (r,r,r, …).
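Stated symbolically (this restates the definition just given, with R as the quotient set):

    $$ (x_n) \sim (y_n) \iff \lim_{n\to\infty} (x_n - y_n) = 0, \qquad \mathbf{R} := \{\text{Cauchy sequences in } \mathbf{Q}\} / \sim $$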
Comparison between real numbers is obtained by defining the following comparison between Cauchy sequences: (xn) ≥ (yn) if and only if (xn) is equivalent to (yn) or there exists an integer N such that xn ≥ yn for all n > N.
By construction, every real number x is represented by a Cauchy sequence of rational numbers. This representation is far from unique; every rational sequence that converges to x is a representation of x. This reflects the observation that one can often use different sequences to approximate the same real number.
The only real number axiom that does not follow easily from the definitions is the completeness of ≤, i.e. the least upper bound property. It can be proved as follows: Let S be a non-empty subset of R and U be an upper bound for S. Substituting a larger value if necessary, we may assume U is rational. Since S is non-empty, we can choose a rational number L such that L < s for some s in S. Now define sequences of rationals (un) and (ln) as follows:
- Set u0 = U and l0 = L.
For each n consider the number:
- mn = (un + ln)/2
If mn is an upper bound for S set:
- un+1 = mn and ln+1 = ln
Otherwise set:
- ln+1 = mn and un+1 = un
This defines two Cauchy sequences of rationals, and so we have real numbers l = (ln) and u = (un). It is easy to prove, by induction on n that:
- un is an upper bound for S for all n
- ln is never an upper bound for S for any n
Thus u is an upper bound for S. To see that it is a least upper bound, notice that the limit of (un − ln) is 0, and so l = u. Now suppose b < u = l is a smaller upper bound for S. Since (ln) is monotonic increasing it is easy to see that b < ln for some n. But ln is not an upper bound for S and so neither is b. Hence u is a least upper bound for S and ≤ is complete.
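The limit claim used above can be made quantitative: each step halves the interval between the two sequences, so

    $$ u_n - l_n = \frac{U - L}{2^n} \longrightarrow 0, $$

which shows that both sequences are Cauchy and equivalent, and hence that l = u.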
The usual decimal notation can be translated to Cauchy sequences in a natural way. For example, the notation π = 3.1415... means that π is the equivalence class of the Cauchy sequence (3, 3.1, 3.14, 3.141, 3.1415, ...). The equation 0.999... = 1 states that the sequences (0, 0.9, 0.99, 0.999,...) and (1, 1, 1, 1,...) are equivalent, i.e., their difference converges to 0.
An advantage of constructing R as the completion of Q is that this construction is not specific to one example; it is used for other metric spaces as well.
Construction by Dedekind cuts
A Dedekind cut in an ordered field is a partition of it, (A, B), such that A is nonempty and closed downwards, B is nonempty and closed upwards, and A contains no greatest element. Real numbers can be constructed as Dedekind cuts of rational numbers.
For convenience we may take the lower set A as the representative of any given Dedekind cut (A, B), since A completely determines B. By doing this we may think intuitively of a real number as being represented by the set of all smaller rational numbers. In more detail, a real number r is any subset of the set Q of rational numbers that fulfills the following conditions:
- r is not empty, and r ≠ Q
- r is closed downwards. In other words, for all x, y ∈ Q such that x < y, if y ∈ r then x ∈ r
- r contains no greatest element. In other words, there is no x ∈ r such that for all y ∈ r, y ≤ x
- We form the set R of real numbers as the set of all Dedekind cuts A of Q, and define a total ordering on the real numbers as follows: x ≤ y if and only if x ⊆ y
- We embed the rational numbers into the reals by identifying the rational number q with the set of all smaller rational numbers {x ∈ Q : x < q}. Since the rational numbers are dense, such a set can have no greatest element and thus fulfills the conditions for being a real number laid out above.
- Addition. A + B := {a + b : a ∈ A, b ∈ B}
- Subtraction. A − B := {a − b : a ∈ A, b ∈ Q∖B}, where Q∖B denotes the relative complement of B in Q
- Negation is a special case of subtraction: −B := {a − b : a < 0, b ∈ Q∖B}
- Defining multiplication is less straightforward.
- if A, B ≥ 0 then A × B := {a × b : a ∈ A, a ≥ 0, b ∈ B, b ≥ 0} ∪ {x ∈ Q : x < 0}
- if either A or B is negative, we use the identities A × B = −(A × −B) = −(−A × B) = −A × −B to convert A and/or B to positive numbers and then apply the definition above.
- We define division in a similar manner:
- if A ≥ 0 and B > 0 then A ÷ B := {a ÷ b : a ∈ A, b ∈ Q∖B}
- if either A or B is negative, we use the identities A ÷ B = −(A ÷ −B) = −(−A ÷ B) = −A ÷ −B to convert A to a non-negative number and/or B to a positive number and then apply the definition above.
- Supremum. If a nonempty set S of real numbers has any upper bound in R, then it has a least upper bound in R that is equal to ⋃S.
As an example of a Dedekind cut representing an irrational number, we may take the positive square root of 2. This can be defined by the set √2 = {x ∈ Q : x < 0 or x × x < 2}. It can be seen from the definitions above that √2 is a real number, and that √2 × √2 = 2. However, neither claim is immediate. Showing that √2 is real requires showing that it has no greatest element, i.e. that for any positive rational x with x × x < 2, there is a rational y with x < y and y × y < 2. The choice y = (2x + 2)/(x + 2) works. Then √2 × √2 ≤ 2, but to show equality requires showing that if r is any rational number less than 2, then there is a positive x in √2 with r < x × x.
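The stated choice can be verified by direct computation: for rational x > 0 with x × x < 2 and y = (2x + 2)/(x + 2),

    $$ y - x = \frac{2 - x^2}{x + 2} > 0, \qquad 2 - y^2 = \frac{2\,(2 - x^2)}{(x + 2)^2} > 0, $$

so y is a larger rational that still lies in the cut.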
An advantage of this construction is that each real number corresponds to a unique cut.
Construction using hyperreal numbers
As in the hyperreal numbers, one constructs the hyperrationals *Q from the rational numbers by means of an ultrafilter. Here a hyperrational is by definition a ratio of two hyperintegers. Consider the ring B of all limited (i.e. finite) elements in *Q. Then B has a unique maximal ideal I, the infinitesimal numbers. The quotient ring B/I gives the field R of real numbers. Note that B is not an internal set in *Q. Note that this construction uses a non-principal ultrafilter over the set of natural numbers, the existence of which is guaranteed by the axiom of choice.
It turns out that the maximal ideal respects the order on *Q. Hence the resulting field is an ordered field. Completeness can be proved in a similar way to the construction from the Cauchy sequences.
Construction from surreal numbers
Every ordered field can be embedded in the surreal numbers. The real numbers form a maximal subfield that is Archimedean (meaning that no real number is infinitely large). This embedding is not unique, though it can be chosen in a canonical way.
Construction from Z (Eudoxus reals)
A relatively little-known construction defines the real numbers using only the additive group of integers; several versions of it exist. The construction has been formally verified by the IsarMathLib project. Shenitzer and Arthan refer to this construction as the Eudoxus reals.
Let an almost homomorphism be a map f : Z → Z such that the set {f(n + m) − f(m) − f(n) : n, m ∈ Z} is finite. (Note that f(n) = ⌊αn⌋ is an almost homomorphism for every real number α.) Almost homomorphisms form an abelian group under pointwise addition. We say that two almost homomorphisms f, g are almost equal if the set {f(n) − g(n) : n ∈ Z} is finite. This defines an equivalence relation on the set of almost homomorphisms. Real numbers are defined as the equivalence classes of this relation. Alternatively, the almost homomorphisms taking only finitely many values form a subgroup, and the underlying additive group of the real numbers is the quotient group. To add real numbers defined this way we add the almost homomorphisms that represent them. Multiplication of real numbers corresponds to functional composition of almost homomorphisms. If [f] denotes the real number represented by an almost homomorphism f, we say that 0 ≤ [f] if f is bounded or f takes an infinite number of positive values on the positive integers. This defines the linear order relation on the set of real numbers constructed this way.
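As an informal sanity check of these definitions (a sketch, not part of the formal construction): the almost homomorphism f(n) = ⌊αn⌋ encodes α as an asymptotic slope, and the two operations track addition and multiplication of slopes, where g(n) = ⌊βn⌋ represents β:

    $$ \alpha = \lim_{n\to\infty} \frac{f(n)}{n}, \qquad \frac{f(n) + g(n)}{n} \to \alpha + \beta, \qquad \frac{f(g(n))}{n} = \frac{\lfloor \alpha \lfloor \beta n \rfloor \rfloor}{n} \to \alpha\beta. $$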
|“||Few mathematical structures have undergone as many revisions or have been presented in as many guises as the real numbers. Every generation reexamines the reals in the light of its values and mathematical objectives.||”|
As a reviewer of one such construction noted: "The details are all included, but as usual they are tedious and not too instructive."
- Pugh, Charles Chapman (2002). Real Mathematical Analysis. New York: Springer. pp. 11–15. ISBN 0-387-95297-7.
- Hersh, Reuben (1997). What is Mathematics, Really?. New York: Oxford University Press US. p. 274. ISBN 0-19-513087-1.
- R.D. Arthan. "The Eudoxus Real Numbers". arXiv: .
- Norbert A'Campo. "A natural construction for the real numbers". arXiv: .
- Ross Street (September 2003). "Update on the efficient reals" (PDF). Retrieved 2010-10-23.
- Shenitzer, A (1987). "A topics course in mathematics". The Mathematical Intelligencer. 9 (3): 44–52. doi:10.1007/bf03023955.
- F. Faltin, N. Metropolis, B. Ross and G.-C. Rota, "The real numbers as a wreath product", Advances in Math. 16 (1975), 278–304.
- N. G. de Bruijn, "Construction of the system of real numbers" (Dutch), Nederl. Akad. Wetensch. Verslag Afd. Natuurk. 86 (1977), no. 9, 121–125.
- N. G. de Bruijn, "Defining reals without the use of rationals", Nederl. Akad. Wetensch. Proc. Ser. A 79 = Indag. Math. 38 (1976), no. 2, 100–108.
- G. J. Rieger, "A new approach to the real numbers (motivated by continued fractions)", Abh. Braunschweig. Wiss. Ges. 33 (1982), 205–217.
- Arnold Knopfmacher and John Knopfmacher, "Two concrete new constructions of the real numbers", Rocky Mountain J. Math. 18 (1988), no. 4, 813–824.
- Arnold Knopfmacher and John Knopfmacher, "A new construction of the real numbers (via infinite products)", Nieuw Arch. Wisk. (4) 5 (1987), no. 1, 19–31.
- MR693180 (84j:26002), review of "A new approach to the real numbers (motivated by continued fractions)" by G. J. Rieger.
Materials Modelling and Design: An Introduction
Modelling the various phenomena observed in materials, predicting their behaviour under different conditions, and developing or designing cost-effective materials with improved or desired properties are some of the prime objectives in materials research. Over the past few decades much progress has been made in our understanding of the various physical phenomena in materials, but the prediction of their properties, as well as the development of new materials, has often relied upon empirical models. In recent years, however, important advances have taken place in the quantum mechanical description of interatomic interactions in materials using density functional theory. This, together with tremendous improvements in computational power, has made it possible to predict materials properties starting from nothing more than atomic numbers, and to simulate the behaviour of materials under different conditions. At the same time, experimental progress in the preparation of thin films and multilayers, the atomic force microscope, and the availability of cluster sources are providing exciting opportunities to develop novel materials as well as to explore new directions in materials modelling. Understanding many of the properties of such materials requires a quantum mechanical description, which is now possible. Beyond the conventional routes to changing materials properties, clusters of different sizes, and multilayers with different combinations of materials and thicknesses, exhibit significantly different properties, opening up new possibilities for designing materials with desired properties. At the macroscopic scale, on the other hand, finite element methods are being used to understand materials properties as a function of size, shape and microstructure. For these proceedings we have chosen articles which focus on some of these recent developments and in particular deal with problems related to alloys, surfaces, small clusters, nanoparticles, phase transitions and the mechanical behaviour of materials.
Keywords: Generalized Gradient Approximation, Electronic Structure Calculation, Quantum Mechanical Description, Coherent Potential Approximation, Kondo Insulator
Technique and Significance, Looking at Cell Migration
What is Time-Lapse Microscopy (TLM)?
Time-lapse microscopy may be described as a type of time-lapse photography applied in microscopy. Here, film frames are captured at a lower frequency than the frequency used to play the sequence back, which makes time appear to move faster when the sequence is played at normal speed. The technique is therefore a manipulation of time: real-life events that may have taken minutes or hours to unfold can be observed to completion within a matter of seconds.
Time-lapse microscopy was first reported in 1909, when Jean Comandon, a French student, successfully captured image sequences by using an enormous cinema film camera coupled to a much smaller dark-field microscope. Using his technique, Comandon was able to create a time-lapse video of the spirochaetes that cause syphilis.

In the decades that followed, compact 16-millimeter cameras were used for capturing image sequences. These cameras were used on microscopes equipped with phase contrast illumination, while the time interval between successive image captures was controlled by bulky auxiliary intervalometers, given that electronic shutters were not yet available. While these techniques represented a breakthrough in microscopy, they relied heavily on film cameras, which were subject to a number of challenges. For instance, films had to be sent out for commercial processing, which meant it took days or even weeks to get results. In addition to being costly, the technique was also subject to a number of processing variations, which meant results could be inconsistent.
In the 1970s, video tube cameras and frame grabber computer cards were integrated with microscopes, which significantly reduced the uncertainties of exposure settings associated with film cameras. In addition to reducing costs, this new technique also allowed image sequences to be viewed during acquisition, with the results of extended time-lapse experiments available immediately.

Today, digital still cameras are used to record individual image frames rather than a video recorder. These cameras offer a number of advantages, including lower overall cost, recording of individual frames, and precise software control over exposure and the timing of each capture.
Time-Lapse Imaging Technique
Essentially, time-lapse microscopy can be conducted using any microscope system that can accommodate a digital imaging camera with time-lapse capabilities. The time intervals between image captures can simply be preset on the camera being used or in the integrated camera/microscope software. The time interval between captures refers to the regular interval between each individual capture; for instance, one may set an image scene to be captured once each second.
The duration of these intervals is very important in that it ultimately determines the temporal resolution of the resulting video sequence showing the cells or organism in motion. Imaging very rapid events often requires cameras with high temporal resolution, which allows detail to be captured, and high sensitivity, in order to capture enough signal within a short period of time.
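As a rough sketch of what such interval control looks like in software, the following hypothetical capture loop grabs one frame per second from an attached camera. OpenCV and camera index 0 are assumptions made for this illustration; an actual microscope camera would normally be driven through its vendor's acquisition software or SDK:

```python
import time
import cv2  # OpenCV; a real microscope camera would normally use its vendor SDK

INTERVAL_S = 1.0  # time between captures (the "time interval" described above)
N_FRAMES = 60     # 60 frames at 1 s intervals spans one minute of real time

cap = cv2.VideoCapture(0)  # 0 = first attached camera (an assumption for this sketch)
try:
    start = time.monotonic()
    for i in range(N_FRAMES):
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("frame grab failed")
        cv2.imwrite(f"frame_{i:04d}.png", frame)
        # sleep until the next scheduled capture, compensating for processing time
        next_tick = start + (i + 1) * INTERVAL_S
        time.sleep(max(0.0, next_tick - time.monotonic()))
finally:
    cap.release()
```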
Time-Lapse Microscopy & Cell Migration
Cell migration is a dynamic process that is central to the development and maintenance of multicellular organisms. It is particularly important in events such as embryonic development, tissue repair and the functioning of the immune system, as well as tumor invasion, among others. Cell migration generally refers to the movement of cells from one location to another, and because it can only be observed in living cells, it is essential that the specimen is kept alive during time-lapse microscopy.
Depending on the specimen (cells) under investigation, it is important that a suitable environment is created to keep the cells viable during image acquisition. This involves controlling the temperature, humidity and light, as well as providing the appropriate media, among other factors.
It is now possible for scientists to track the movement of cells, study cell motility or conduct chemotaxis experiments among other applications. Here, labels and stains are not used given that they are invasive and can either change the behaviour of the cell or kill it.
Breast Cancer Cell Time-Lapse
To observe the migratory behaviour of cells, the living cells of interest have to be placed in an appropriate culture medium (different cells require different media) and then placed under the microscope. Images of given regions of interest are then taken at the set regular intervals over a given period of time (minutes, hours or even a day). The positions of individual cells are marked in consecutive images, which allows their positional changes to be tracked over time.
The tracking procedure traditionally involves a "point and click" system: the user points the cursor at a cell and clicks on it to follow its movements. However, this method has been shown to produce errors that may affect the integrity of the results obtained. For this reason, new methods are in development to help avoid such errors. A good example is multi-target tracking technology, which is also used in military radar tracking.
Using this method, it becomes easier to develop a fully automated cell identification and tracking system for screening video sequences of unstained living cells. Whereas manual tracking of cells has been shown to be time-consuming and at times ineffective, recent advances in automated cell tracking in time-lapse microscopy have made it easier to track specific cells even in large populations, enabling quantitative, systematic and high-throughput measures of cell behaviour.
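The following is a deliberately minimal sketch of the idea behind automated tracking—greedy nearest-neighbour linking of cell centroids between consecutive frames. It is illustrative only; production systems, such as the multi-target trackers mentioned above, also handle cell division, occlusion, and the appearance and disappearance of cells:

```python
import math

def link_frames(prev, curr, max_dist=20.0):
    """Greedily link each cell centroid in one frame to the nearest unclaimed
    centroid in the next frame, within max_dist pixels.

    prev, curr: lists of (x, y) centroids from consecutive frames.
    Returns a list of (prev_index, curr_index) links.
    """
    links, taken = [], set()
    for i, (px, py) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr):
            if j in taken:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            taken.add(best)
    return links

# toy example: two cells drifting slightly between frames
print(link_frames([(10, 10), (50, 40)], [(12, 11), (49, 43), (90, 90)]))
# expected: [(0, 0), (1, 1)] -- the third detection would start a new track
```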
Significance of Time-Lapse Microscopy
Time-lapse microscopy presents significant advantages for observing and studying cell migration. One of the biggest is that it is a high-throughput and noninvasive tool for studying cells. As such, it has proved particularly beneficial for studying or identifying stem cells and embryos and their development. Because stains are not required, the cells are observed essentially in their natural state.

Time-lapse microscopy can be said to be one of the methods that extends live cell imaging from a single observation in time to the observation of cellular dynamics over a long period. This makes the technique a cornerstone technology for the assessment of cells, given that it allows users to observe dynamic events in a large number of cells and at the single-cell level.
Huth, J., Buchholz, M., Kraus, J.M., Schmucker, M., von Wichert, G., Krndija, D., Seufferlein, T., Gress, T.M. and Kestler, H.A. (2010). Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system.

Loewke, K.E. and Reijo Pera, R.A. (2010). The role of time-lapse microscopy in stem cell research and therapy.

Konda, R. (2014). Automated cell tracking in time-lapse microscopy images. PhD thesis, Department of Electrical and Electronic Engineering, The University of Melbourne.
Alternative Lithography: Unleashing the Potentials of Nanotechnology (Nanostructure Science and Technology), by Clivia M. Sotomayor Torres
Similar nanotechnology & mems books
This book investigates the behavior of light (light pulses) within micro- and nano-scale devices (ring resonators), which can be integrated to form devices, circuits and systems usable for atom/molecule trapping and transportation, optical transistors, fast computation devices (optical gates), nanoscale communication and networks, energy storage, and so on.
This book describes improvements in the heat resistance of silicon nitride (Si3N4) ceramics using grain boundary control, and in their plasticity at high temperatures using grain size control, with the aim of reducing the cost of shaping Si3N4. The heat resistance of Si3N4 is improved by mixing a small amount of sintering additive as an impurity into the original material powder.
Serving as a evaluation on non-local mechanics, this publication presents an creation to non-local elasticity conception for static, dynamic and balance research in a variety of nanostructures. The authors draw on their lonesome learn adventure to offer basic and intricate theories which are suitable throughout quite a lot of nanomechanical platforms, from the basics of non-local mechanics to the newest examine functions.
This book offers an authoritative source of information on the use of nanomaterials to enhance the performance of existing electrochemical energy storage systems, and on the ways in which new such systems are being made possible. It covers the state of the art in the design, preparation and engineering of nanoscale functional materials as effective catalysts and as electrodes for electrochemical energy storage, along with mechanistic analysis of electrode reactions.
- High-Intensity X-rays - Interaction with Matter: Processes in Plasmas, Clusters, Molecules and Solids
- Adsorption and Transport at the Nanoscale
- Nanotechnology in Drug Delivery: Fundamentals, Design, and Applications
- Nanotechnology and the Environment
- In Pursuit of Nanoethics: Transatlantic Reflections on Nanotechnology: 10 (The International Library of Ethics, Law and Technology)
- Nano and Giga Challenges in Microelectronics
Additional info for Alternative Lithography: Unleashing the Potentials of Nanotechnology (Nanostructure Science and Technology)
Alternative Lithography: Unleashing the Potentials of Nanotechnology (Nanostructure Science and Technology) by Clivia M. Sotomayor Torres | <urn:uuid:423fdd07-68c5-4269-8141-cde69f2628b1> | 2.625 | 549 | Product Page | Science & Tech. | -15.741365 | 95,532,742 |
Symmetry is a fundamental concept in physics. Our ‘standard model’ of particle physics, for example, predicts that matter and anti-matter should have been created in equal amounts at the big bang, yet our existing universe is mostly matter. Such a discrepancy between the symmetry of known physical laws, and what we actually observe, are often the inspiration for realizing that new interactions are important or that new phases of matter can exist.
Shigeki Onoda, a theorist at the RIKEN Advanced Science Institute in Wako, recognized that experimentalists at The University of Tokyo had possibly discovered a new state of matter, called a ‘chiral spin liquid’, when they reported evidence of time-reversal symmetry breaking [1]—a difference between the trajectory of a particle moving along one path and along its inverse—in the oxide Pr2Ir2O7. If a material is magnetic, or in a magnetic field, its electrons will not obey time-reversal symmetry; but in Pr2Ir2O7, neither contribution was present to explain what the experimentalists had observed.
Now, Onoda and colleague Yoichi Tanaka have explained how a chiral spin liquid could emerge from so-called ‘quantum spin fluctuations’—the motion of spins that persists even at absolute zero [2]. “The possibility of a chiral spin liquid was first proposed twenty years ago and many physicists had lost hope of finding it,” explains Onoda. “This is a revival of a phase that was found in a totally different system than where it was first expected.”
The interesting properties of Pr2Ir2O7 are rooted in its crystal structure, called a pyrochlore lattice: four praseodymium (Pr) ions, each of which carries a magnetic ‘spin’, form a tetrahedral cage around an oxygen (O) ion. At low temperatures, the spins of materials with this structure often ‘freeze’ into what is called a ‘spin ice’ because of its similarity to the way hydrogen ions form around oxygen in water ice.
Onoda and Tanaka predict, however, that the quantum fluctuations in the spins melt the spin ice structure of Pr2Ir2O7. They proposed a realistic model of Pr spins on a pyrochlore lattice and suggested that both the geometry of the crystal and the small size of the spin on the Pr ion allowed the quantum fluctuations to grow so large that they melted the spin ice into a chiral spin liquid.
If their prediction is correct, Pr2Ir2O7 will be the first material in which one can study this new state of matter.
The corresponding author for this highlight is based at the Condensed Matter Theory Laboratory, RIKEN Advanced Science Institute
Journal information:
1. Machida, Y., Nakatsuji, S., Onoda, S., Tayama, T. & Sakakibara, T. Time-reversal symmetry breaking and spontaneous Hall effect without magnetic dipole order. Nature 463, 210–213 (2010).
2. Onoda, S. & Tanaka, Y. Quantum melting of spin ice: Emergent cooperative quadrupole and chirality. Physical Review Letters 105, 047201 (2010).
The role of space agencies in remotely sensed EBVs.
5 October 2016
A new paper published in the policy forum of the open-access journal Remote Sensing in Ecology and Conservation (RSEC) summarises the conditions for a collective engagement of space agencies in the co-development of Essential Biodiversity Variables (EBVs).
GEO BON is developing the EBVs as the key variables needed, on a regular and global basis, to understand and monitor changes in the Earth’s biodiversity. A subset of these EBVs can be derived from spaceborne remote sensing.
Since their conceptual definition, EBVs have been based on field observations from sampling schemes integrated into large-scale generalizations, and on remotely sensed observations measured continuously and globally by satellites. The increasing observation capabilities of Earth Observation satellites together with open and free data policies have enhanced the ability of the remote sensing community to conduct biodiversity research in terrestrial, freshwater and marine environments and to address essential biodiversity questions such as the distribution and abundance of species or the integrity of the ecosystems they inhabit. The need for global coverage and periodic measures makes remote sensing an important tool to assess how biodiversity is changing in space and time, and consequently to track progress towards the 2020 Aichi Biodiversity Targets and the post-2015 UN SDGs. However, the use of satellite remote sensing in biodiversity monitoring presents a number of challenges that need to be adequately addressed.
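As a simple illustration of how a raw satellite measurement becomes a candidate input for an EBV, the sketch below computes the classic Normalized Difference Vegetation Index (NDVI) from red and near-infrared reflectance. The example values are invented, NumPy is assumed, and NDVI is used here only as a familiar stand-in for the many remotely sensed variables involved:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, a basic satellite-derived
    quantity of the kind that can feed into remotely sensed EBVs.

    nir, red: arrays of near-infrared and red surface reflectance (0-1).
    Returns values in [-1, 1]; dense green vegetation is typically > 0.5.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

# toy 2x2 scene: top row vegetated pixels, bottom row bare soil
print(ndvi([[0.5, 0.6], [0.2, 0.25]], [[0.08, 0.07], [0.15, 0.2]]))
```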
The biodiversity community at large can gain value from remotely sensed EBVs, but this requires close cooperation with space agencies. GEO BON has started to broadly engage the biodiversity community in order to collectively prioritize the EBVs and define their observational requirements. For the remotely sensed EBVs, this requires building a close relationship with space agencies, through their Committee on Earth Observation Satellites (CEOS). GEO BON, in its leadership role of facilitating the development of EBVs, is the key organization that can channel the satellite observation requirements for remotely sensed EBVs from the biodiversity community to the space agencies. A strong engagement of the space agencies in the co-development of the EBVs requires a community buy-in of the remotely-sensed EBVs and an endorsement by authoritative institutions in the field of science-policy interfaces such as the CBD SBSTTA and the IPBES.
In my last article I illustrated how to install Enlightenment by checking out the most recent code from the Enlightenment Subversion Server (see "Installing Enlightenment E17 using subversion"). After that article I thought it would be a good follow up to illustrate how to create your own subversion repository.
Why? What can you use a subversion repository for? If you are collaborating on an application project in which multiple users need to be able to check your code in and out, you will definitely want to use a solution such as subversion. There are other interesting possibilities for using subversion as well, such as a repository for documentation that is revised frequently and needs a version history.
In this process we are going to create a repository called myrepository and a project within that repository called "myproject". For simplicity's sake we are going to house that repository in our ~/ directory; this is done purely to keep things simple (and to avoid permissions issues). Once you have gained an understanding of how to work with subversion, we'll discuss creating repositories that can be accessed from outside.
Here are the steps for creating your subversion repository.
Step 1: Install Subversion
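If Subversion is not already on your system, install it from your distribution's package manager. On a Debian- or Ubuntu-based system, for example, the package is typically called subversion:
sudo apt-get install subversion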
Step 2: Create your repository
svnadmin create ~/myrepository
Step 3: Create the project folder structure in a staging directory outside the repository (the repository at ~/myrepository is a database managed by Subversion, so the files to be imported should be prepared elsewhere, for example in your home directory). Once you have created the directory structure, you can move any pre-existing project files into the trunk folder. If this is a new project (on which no work has yet been done), you can start saving your project files to the trunk folder. The folder structure needs to look like this (the conventional Subversion layout):
myproject/
    branches/
    tags/
    trunk/
Step 4: Create an svn user. This will be the user (or users) allowed access to the project. The first step is to edit the ~/myrepository/conf/svnserve.conf file and make sure the following lines appear, uncommented, in its [general] section:
anon-access = none
auth-access = write
password-db = passwd
The next step is to edit ~/myrepository/conf/passwd and add a line for each user under the [users] section:
USER = PASSWORD
Where USER is the username and PASSWORD is that user's password (the format is one "username = password" pair per line).
Step 5: Now it's time to import your project. From the directory containing your staging folder, issue the command:
svn import myproject/ svn+ssh://USER@ADDRESS/home/USER/myrepository/myproject -m "Original Commit"
Where ADDRESS is the address of the machine housing the repository and USER is the actual user name. NOTE: The above command is all one line. When the command runs successfully you will be prompted for the user's password. Once you enter it correctly, a number of lines will scroll by, each beginning with "Adding". That tells you all of your project files and folders have been added.
Step 6: Start the daemon. In order for other users to be able to access the repository, you have to run the Subversion daemon, svnserve. A typical invocation (the -d flag runs svnserve as a daemon, and -r sets the root directory it serves) is:
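svnserve -d -r ~/myrepository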
You can now check out and check in your project files on your svn repository.
This has been a very basic introduction to setting up a subversion server. Next time we will take this to the next level and set up a subversion server that others can check files in and out of.
For all of human history we have been exploring the unexplored, from discovering new lands to crossing the seas and taking to the skies. Having met those challenges, we are presented with our next goal: space. It is filled with billions upon billions of galaxies and stars, and even other solar systems. Planets that orbit a star outside of our own system are called extrasolar planets, or exoplanets for short. Here are 5 of my favorite exoplanets discovered to date.
1. First sighting – Pulsar PSR B1257+12A
Located 1,000 light years away from our sun, PSR B1257+12 was the first confirmed detection of a pulsar with several large masses orbiting it. It was later confirmed in 2007 that 3 exoplanets (with masses of 4.3x, 3.9x, and 0.02x the mass of our Earth) orbit the pulsar.
Fun Fact #1: Pulsars are rapidly rotating neutron stars.
2. Two incredibly close orbits – star Kepler-36a
Located 1,200 light years away, the star system Kepler-36 and 2 of its orbiting planets have been causing a stir recently. Why, you may ask? Because these 2 planets have dangerously close orbits—so close that the gravity of the 2 planets, Kepler-36b and Kepler-36c, would exert significant tidal forces on each other (if either of them had any large bodies of water, like oceans or seas). Kepler-36b is a large terrestrial planet with 4.5x the mass of the Earth, also known as a super-Earth, and Kepler-36c is a large, Neptune-like gas planet 8 times the mass of our planet. Every 97 days or so the 2 planets line up with each other and get as close as 1.9 million kilometers, or about 5 times the distance from the ground to the moon. Although 1.9 million kilometers may not sound anything short of a long way, it is only about one-twentieth of the roughly 39 million kilometers separating our planet from Venus at their closest. But if all these numbers are too difficult to imagine (and I bet they will be, especially because they are in the metric system), consider how the sky would look from the surface of the rocky super-Earth Kepler-36b: the other planet (Kepler-36c) would take up 3 or 4 times as much of the sky as a full moon does on Earth.
Fun Fact #2: A super-Earth is any rocky planet with a significantly higher mass than that of our Earth. The planet Kepler-36b, with 4.5x the mass of our Earth, can be classified as a super-Earth; Kepler-36c, however, cannot—although it is 8 times the mass of our Earth, it is a gas planet.
3. Largest exoplanet yet – star TrES-4
The planet TrES-4b was discovered in 2006 and is currently the largest known planet outside of our solar system. TrES-4b is 70% larger than Jupiter; strangely enough, however, it contains only about 70% of Jupiter's mass, making it a very low-density gas planet. In fact, TrES-4b is about as dense as the cork in your champagne bottle.
Fun fact #3: Exoplanets are named after their parent stars in order of their discovery, not their distance from the central star, which can often lead to confusion. The central star itself keeps its own name, with the lowercase letter 'a' reserved for it. So, for example, Kepler-36a (or simply Kepler-36) is the star, while Kepler-36b, c, d and so on are left for the orbiting planets.
4. Triple Star System – HD 188753
Located approximately 150 light years away (about 1.5 quadrillion kilometers) in the constellation Cygnus, HD 188753 is a triple star system with a central star of 1.06 solar masses (1 solar mass = the mass of our sun) and a pair of smaller stars with a combined mass of just 1.63 solar masses. A Jupiter-like gas planet has been detected in the triple system; it orbits its main star in a mere 3.5 Earth days. So from the surface of this planet you'd see not one, not two, but 3 stars rising and setting: 2 small suns and one large one.
Fun Fact #4: In the constellation Cygnus there is an Earth-like planet within the habitable zone, or, as some like to call it, the Goldilocks zone—a region where a planet is neither too far from nor too close to its host star, allowing temperatures fit for life. Earth resides in the middle of the Goldilocks zone of our solar system. Kepler-22b was the first Goldilocks planet spotted by the Kepler space telescope, launched in 2009, and it was confirmed to exist in 2011.
5. Diamond planet – PSR J1716-1438b
This is another planet that orbits a pulsar, one spinning at 10,000 revolutions per minute, or about 167 times per second. But what's incredibly interesting about this planet is that, because of its high density and gravity, it is possible that the carbon within has been crushed into crystallized diamond—that's right, diamond. It lies about 4,000 light years (roughly 40 quadrillion kilometers) from the Earth in the constellation Serpens. The system consists of the pulsar at the center with the exoplanet orbiting around it.
Last Fun Fact #5: 1 light year = about 10 trillion kilometers
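If you want to check the conversions quoted above yourself, here is a quick script (the light-year constant is the standard ~9.46 trillion km figure that Fun Fact #5 rounds to 10 trillion):

```python
KM_PER_LIGHT_YEAR = 9.4607e12  # one light year in kilometers

def ly_to_km(ly):
    return ly * KM_PER_LIGHT_YEAR

print(f"{ly_to_km(150):.2e} km")   # HD 188753: ~1.4e15 km, roughly 1.5 quadrillion
print(f"{ly_to_km(4000):.2e} km")  # PSR J1716-1438b: ~3.8e16 km, roughly 40 quadrillion

rpm = 10_000
print(rpm / 60, "revolutions per second")  # ~167, matching the pulsar figure above
```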
If you liked this "top five exoplanets" then you may also like "Top Five Stellar Phenomena" found here: http://expertscolumn.com/content/final-frontier-space-top-five-stellar-phenomena | <urn:uuid:7e39aa3a-1a42-42de-853e-f9d982f91dd6> | 3.390625 | 1,245 | Listicle | Science & Tech. | 66.889261 | 95,532,838 |
Once upon a time, before India knew Asia, when alligators sunned themselves on shores north of the Arctic Circle, a small, timid, dog-like creature tentatively waded into a river. Fifty million years passed. The continents wandered and crashed, and the ocean reconfigured itself.
Now, where there were once Arctic alligators, there was ice. As for the creature who once dipped its toes into the tepid river, it now swam the frigid seas. The intervening age had transformed it into the largest animal in the history of life on Earth.
“There’s a famous paleontologist who’s dead by now, George Gaylord Simpson, and he once described whales as, ‘On the whole, the most peculiar and aberrant of mammals,’” says Felix Marx, a whale paleontologist and Marie Skłodowska-Curie Fellow at Monash University in Melbourne, Australia. “And I think that’s really true, because, I mean, they’re mammals, so they have to face all of the challenges that a normal mammal does. They’re adapted to living on land: they’re [warm-blooded], they have fur, they breathe air, they give birth to live young and they have to suckle those live young. And then you try and do all of that in the sea, and of course, almost everything is stacked against you. Like, the milk is floating away, heat is draining from your body, your fur isn’t really that useful, there’s no air to breathe—like, everything is against you. And yet, within a relatively short period of time they’ve managed to tackle all of that, and they managed to achieve feats like diving down several kilometers and staying down for—I don’t know—an hour at a time, and doing some of the weirdest, biggest feeding events in all of the animal kingdom.”
How they got there, transforming from four-legged, landlubbing also-rans, patrolling Pakistani riverbanks, to the globe-spanning marine colossus of earth’s history is the sort of question that gets people to pursue Ph.D.’s in paleontology in the first place.
“Among mammals, whales really stand out to me for having to have met the most obstacles in their evolution,” says Marx. “They’re really a poster child of evolution.”
The evolution of whales spans whole ages and unfamiliar worlds. It draws from an oeuvre that includes, not only paleontology, but paleoclimatology, oceanography, geology and paleoecology as well. To get a foothold on this dizzying sweep, UC Berkeley Ph.D. candidate Larry Taylor has decided to probe something smaller. Not the whales themselves, but the barnacles that cling to the animals—hitching rides around the planet. As Taylor realized, oxygen isotopes in barnacle shells act as a chemical passport of a whale’s travels, filled with stamps from the world’s various oceans. And humpback-whale barnacles go back millions of years in the fossil record. Taylor hopes to find ancient whale journeys coded in these fossil shells—journeys that could illuminate the evolution of whales and, perhaps even, why some got so preposterously large.
Starting about three million years ago, after a long decline from the high-CO2 greenhouse of the dinosaurs, the earth descended into a waxing and waning low-CO2 ice age—one that continues to this day (albeit precariously). In this ongoing ice age, the planet has swung back and forth between more wintry climes when there was a half-mile of ice crushing Boston and sea levels were 400 feet lower—to warm, but brief, interglacials like today, when the ice sheets temporarily retreat to the poles. And back and forth and back and forth and back again, as the northern hemisphere wobbles in and out of the sunshine. If Taylor’s barnacle data showed ancient whales changing their behavior in response to these climate changes, it might go a long way in explaining why baleen whales in particular (those bristle-mouthed whales that gulp plankton by the ton) have become globe-traveling giants, capable of going months without food—and dwarfing every other animal in the planet’s history.
“Essentially no one knows anything about whale migration in the prehistoric past,” says Taylor. “But the idea would be that as climate got more unstable in the last several million years—and we went through glacial maximums and minimums—the productive zones of the oceans would have been shifting around a lot, and these huge animals could quickly adapt their behavior to find these productive zones of the ocean. Evolution might have favored these really large animals that could migrate huge distances and survive off an enormous fat store.”
It’s an intuitive idea. But it’s long been just that—an idea. This is where Taylor’s humpback barnacles come in. The unassuming shells effectively act as a black box for whale journeys of the distant past.
Here’s the trick. Whale barnacles build their shells from seawater. Seawater, as you might imagine, is made of atoms. Some of those atoms are oxygen. And oxygen in the ocean comes in lighter and heavier isotopes. Water closer to the poles tends to be lighter because most of the heavier stuff—being heavier—has been literally rained out of the clouds in the long trip to the Arctic. This is because, in very general terms, the most evaporation happens where there’s the most sunshine (near the equator) and, as the water evaporates and moves pole-ward, it’s successively rained out, re-evaporated, rained out, re-evaporated, rained out and so on, along the journey north. In the process—with each step—the water is essentially distilled for lighter isotopes. As a result, animals swimming in the Arctic find themselves in lighter water.
Over the seasons, barnacle shells grow in bands from this surrounding seawater. It stands to reason, then, that if the barnacle is moving from ocean to ocean aboard a migrating whale, the shifting isotopes in each growth band laid down will reflect this travel. And so, Taylor sampled his barnacle growth bands and analyzed this changing composition—the wavering chemistry reflecting an animal crossing whole oceans with the changing of the seasons. But this isn’t as easy as it sounds. Confusingly, humpback whale barnacles take in more heavy oxygen isotopes the colder it gets. As a result, the straightforward signal from the ocean is scrambled. But luckily geochemists, in their infinite wisdom, have devised a Greek-alphabet-soup of an equation to unscramble this mess and work back to the original ocean chemistry.
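The specific calibration Taylor uses isn't spelled out here, but a classic example of this kind of equation is the carbonate paleotemperature relation, which links the oxygen-isotope composition (δ18O, the per-mil deviation of the 18O/16O ratio from a standard) of a shell and of the water it grew in to the water's temperature. The sketch below uses one commonly quoted set of coefficients purely for illustration; it is not presented as the exact equation behind this research:

```python
def carbonate_temperature_c(delta_shell, delta_water):
    """Estimate water temperature (deg C) from the oxygen-isotope composition
    of a carbonate shell and of the seawater it grew in.

    Uses one classic form of the carbonate paleotemperature equation,
    T = 16.9 - 4.38*(dc - dw) + 0.10*(dc - dw)**2,
    where dc and dw are delta-18-O values (per mil). Coefficients differ
    between calibrations and organisms; this is illustrative only.
    """
    d = delta_shell - delta_water
    return 16.9 - 4.38 * d + 0.10 * d ** 2

# warmer, isotopically heavier tropical water vs. colder high-latitude water:
print(carbonate_temperature_c(delta_shell=0.0, delta_water=0.5))   # ~19.1 C
print(carbonate_temperature_c(delta_shell=3.0, delta_water=-0.5))  # ~2.8 C
```

Note the direction of the effect: the shell ends up isotopically heavier relative to the water as temperature drops, which is the "confusing" cold-water enrichment described above.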
After figuring out how to unscramble the signal, Taylor mapped this shifting isotope profile onto the modern oceans (an idea that originated with paleontologist J.S. Killingley), giving him an accurate reconstruction of where the barnacle (and the whale) journeyed over its lifetime, moving from higher to lower latitudes—and back again. Simple stuff. The work falls under the umbrella of “stable-isotope geochemistry,” a field whose name suggests a dreadful slog, but one that’s allowed scientists to reconstruct everything from the diets of grizzly bears to the heights of ancient mountain ranges, long eroded-away.
Taylor showed me a graph on his laptop of data from a modern Alaskan humpback barnacle.
“This is consistent with the whales feeding in Alaska,” he said, pointing to the data. “Then, most Alaska humpbacks winter in Hawaii, and—look—when you move down the slope to the fall, these low values are consistent with the animal migrating to Hawaii.”
The atomic inscriptions in other humpback barnacles accurately captured an animal moving from California to Baja. But these measurements (undeniably clever as they are) merely confirm what we already know about whale migration. For Taylor, though, this is just a proof-of-concept. He wants to know where whales were traveling hundreds of thousands, even millions of years ago—if they were even traveling at all. Using the isotopes of fossil barnacles stretching back millions of years, and mapping them onto the ancient ocean, Taylor hopes to find out just what the whales of a bygone Earth were up to.
Of the many “peculiar and aberrant” features of modern baleen whales, perhaps the most peculiar, even to non-scientists, is their tremendous size. It’s difficult not to wonder when considering an animal like the blue whale, whose heart alone weighs 400 pounds: Why is it so outrageously huge? Even Melville puzzled over this trend in whale evolution toward gigantism, noting, in one of those interminable naturalist interludes of Moby-Dick, that “the whales of the present day” were “superior in magnitude” to those of the fossil record. Naturally, evolutionary biologists—ever eager to explain how the tiger got its stripes—have speculated about why whales got so big as well. Perhaps, they say, baleen whales got bigger to go farther.
“The idea is, you need this enormous size and these really powerful tail flukes to get you through these vast ocean basins easily,” Taylor says. It’s a skill that, as mentioned, might have become especially valuable in the past few million years during the chaotic plunge into the ice ages, as food sources became less predictable and animals had to commute across the oceans to seasonal feeding grounds.
These adaptations for epic journeys explain why entanglement with fishing gear is one of the top killers for modern whales. The whales can’t afford any extra drag. Humpback mothers can lose upwards of a third of their body weight during the winter, while nursing North Atlantic right whales can lose 30,000 pounds in the transit between the tropics and their feeding grounds in the northeast. Any additional yoke, like lobster trap groundlines wrapped around your fins—can throw off this monumental caloric calculus, starving migrating whales (70 percent of critically endangered North Atlantic right whales bear the scars of entanglement).
If migration was, in fact, selected for by the ice ages, perhaps the ancient barnacles will say so. But at this point, Taylor says, all of this—the connection between gigantism, migration, and climate history—is little more than a hunch.
“We don’t even know if they were migrating.” he says.
Not all whales faced the turning of the geological seasons with good humor. While giants, like blues and humpbacks, might have escaped the harrowing gauntlet of the past few million years—perhaps by journeying further in search of food—their extended family was sacrificed at the gates of the ice ages. In fact, it was less that baleen whales got big—after all, there existed giant whales before—than that a wild variety of smaller whales was selectively decimated by the planetary chill.
“Gigantism isn’t necessarily something that only occurred in the last 3 million years or so,” says Monash University paleontologist Felix Marx. “But what did change, as far as we can tell, is that all of the little ones suddenly start to disappear. You’ve got a whole range of whales that don’t even exist today.”
Marx had just returned from a fossil-collecting trip to Peru’s Ica desert, where he searched for these ancient whales from the group’s glory days, before the icy scythe of the Pleistocene. While sheer hugeness might seem like an intrinsic feature of baleen whales, the animals once filled out an entire spectrum of shapes and sizes.
“You’ve got all sorts of stuff that’s just a lot smaller—like, three, four, five meters. And about 3 million years ago or so, as far as we can tell, they all disappear.”
When the frost came and the smaller whales vanished, the largest whales stayed on their ballooning trajectory, to the point where, today in the ocean, swims the largest animal in the planet’s history: the blue whale.
“There’s not much evidence that [animals that big] occurred much before 3 million years ago,” he said. “So you’re just left with big stuff.”
If the ice ages selected for gigantic migratory baleen whales, something about the ancient ocean also seems to have also selected against what, to us, would appear as a kaleidoscopic world of runts.
“If you’re a little whale you’re going to be a little bit less wide-ranging,” says Marx. “You’re probably going to be a bit more coastal as well.”
That’s a dangerous lifestyle in a world that’s freezing. Sea levels had soared before the ice ages, and the flooded coastline welcomed smaller whales to spend their lives in the friendly shallows. But when ice suddenly swelled at the poles, and sea level plummeted by hundreds of feet, the coastline receded to the edge of the continental shelf. Where there was once a vast shallow coastal province to feed and find love, there was now dry land, and—offshore—a precipitous plummet into the deep. There was no place for a small, coastal baleen whale to make a living.
“So you make life for these little guys a lot more difficult,” says Marx. “On the other hand, the big guys, they can be wide-ranging, and they’re capable of undertaking migrations from the poles and back to get the maximum amount of feeding and breeding opportunities.”
If the ice ages have been driving the past few million years of whale evolution then it’s been the brute, unthinking forces of geology guiding the course of life.
Though the ice ages began in earnest in only the past three million years, the plunge into the cold might have been set in motion long ago by the wander of the continents. Around 40 million years ago, the island continent of India—which had been barreling across the ocean since the Cretaceous—collided with Asia. When it did so, subduction zones that had been pulling the two continents together—and spewing CO2 out of volcanoes above—went quiet. Volcanic rock thrust up into the Himalayas and elsewhere was attacked by wind and rain, and weathered away, a process that drew down atmospheric CO2 even further. Today we find ourselves rightly concerned about soaring CO2 levels launching us back into the greenhouse of the whales’ early days, but falling CO2 over the past 40 million years seems to have dragged the planet into our modern ice age in the first place. Antarctica suddenly gained an ice sheet and the long, faltering descent into the modern ice ages had begun.
If this story is right, it might not have been the only time that the peculiar influence of plate tectonics has guided whale evolution. More than 30 million years ago, when South America divorced Antarctica and the sea spilled over between the two continents, the profound changes to ocean circulation supercharged the ocean with nutrients and plankton, and might have prodded the split between baleen whales and toothed whales—which curiously occurs around the same time. Even further back, 50 million years ago the extremely warm climate of the Eocene might have helped ease the whales’ wolf-like ancestor into the tepid water in the first place (according to Michigan paleontologist Philip Gingerich). And, before that, 56 million years ago, it might have been deep-sea volcanoes burning through fossil fuels under the North Atlantic seafloor that released enough methane and carbon dioxide to the atmosphere to set off an extreme spike in global temperatures—a heatwave that spawned a new group of animals that today includes deer, camels and giraffes, but that also included the ancestor of all whales. Understanding biology without geology is impossible, and vice versa.
Today human society is a geological force in its own right, and it’s an open question what its ultimate influence will be on the long evolutionary story of whales. The ocean is warming faster than it did even 56 million years ago, while ice sheets are poised for a collapse on time scales only seen at the end of the ice ages. But even before this global chemistry experiment gets completely out of hand, whales have already—rather acutely—felt the influence of human civilization. Not that long ago, humans drew their oil, not from petroleum-soaked rocks, but from whales’ heads—and Nantucket played the role of Abu Dhabi in this cetacean oil economy. Whale extinction was on Melville’s mind as he watched this global slaughter unfold, firsthand. He wondered whether “Leviathan can long endure so wide a chase, and so remorseless a havoc.”
Though the hunt has relented, and Leviathan’s numbers have recovered somewhat, genetic studies indicate that populations of North Atlantic humpbacks were once 20 times more abundant than present. And given that climate and oceanography have played such an important role in whales’ evolutionary past, through ice ages and super-greenhouses both, what will their future hold in an ocean that’s not only rapidly warming but quickly acidifying as well?
Perhaps, counterintuitively, in the far future and in a far warmer ocean, whales might once again take on their dizzying diversity of forms of 10 million years ago, before the cull of the ice ages—when being a baleen whale didn’t mean having to migrate to survive. But, perhaps not.
“Even if, in theory, a warmer ocean could maybe support a higher diversity of whales, there’s no guarantee whales would ever get there at the current rate of change, Marx says. “On top of that, of course, you’ve got other factors, like: habitat degradation, fishing, noise pollution and goodness-knows-what, that all play their role as well. We’re sort of assuming ideal conditions and even then it’s speculative. But, if you just ask based on the climate data, there’s nothing for me to say that, you know, in a warmer ocean whales wouldn’t be more diverse.”
On a fossil-hunting trip to the South I took with Alabama state paleontologist Dana Ehret, ancient whales were on our minds, strewn, as they were, across the Alabama farmland—fossils that Melville claimed, “slaves in the vicinity took … for the bones of one of the fallen angels.” But Ehret specializes in the boom-and-bust evolution of another creature—the 60-foot, death-mawed megalodon shark. It’s an animal whose fate was inevitably tangled with that of whales. Curiously, as the earth descended into the icehouse around 3 million years ago, this other goliath of the ocean mysteriously disappeared as well, along with all the smaller baleen whales. Ehret doubts that it was the cold of the ensuing ice ages that killed “meg,” as has been sometimes proposed. Megalodon, after all, was a monster whose preposterous size alone likely generated enough heat to keep it warm. Rather, Ehret thinks, it was a knock-on effect of the mega-shark’s dwindling menu of whales to dine on.
“So what I think happened was that ‘meg’ just got so big because it was literally just a buffet of all of these medium-sized whales,” he said. “And then all of a sudden the buffet closes and you’re left with this giant shark that can’t really support itself.”
The closing of the baleen buffet might have killed off more than the megalodon. In 2010 in the Peruvian desert, a new whale was discovered, one that shared the ocean with megalodon, though its teeth, at more than a foot long, dwarfed those of the destroyer shark. The terrifying whale earned a Linnaean name apposite to its grandeur: Livyatan Melvillei—literally, “Melville’s Leviathan.” That it swam in the same oceans with the 60-foot apex predator shark makes one grateful to swim in our considerably gentler seas. And like megalodon, Melville’s Leviathan might have fed on smaller whales as well, for it too seems to have failed to survive the transition to the modern, cooler world.
Like any subject in geology, pull on one thread—in this case, humpback whale barnacles—and all of Earth’s history begins to unspool. To understand an animal you have to understand its history, and to understand its history you have to first understand the history of the earth—and beyond. Indeed, whales even benefited from the influence of outer space as well, as the asteroid that executed T. Rex also cleared the ocean of its sea monsters, inviting that dog-like ancestor of all whales to colonize the seas ages ago.
Melville himself struggled with this intellectual vertigo that whales so reliably inspire: “In the mere act of penning my thoughts of this Leviathan, they weary me, and make me faint with their outreaching comprehensiveness of sweep, as if to include the whole circle of the sciences, and all the generations of whales, and men, and mastodons, past, present, and to come, with all the revolving panoramas of empire on earth, and throughout the whole universe,” he wrote, “not excluding its suburbs.”
Professor Karlene Roberts has never donned a spacesuit nor orbited around the planet, but the spirited organizational behavior expert at UC Berkeley’s Haas School of Business was tapped to help a committee of astronauts, diplomats, and legal experts find ways to mitigate the impact of an asteroid hitting Earth.
After two years of work, Roberts will join that committee -- the Association of Space Explorers (ASE) Committee on Near Earth Objects (NEO) -- in presenting its findings, “Asteroid Threats: A Call for Global Response,” at a press conference, September 25, 2008, 10 a.m., at the Google Foundation, 345 Spear St., 2nd Floor, San Francisco. A full report will be presented to the United Nations in early 2009. The press conference follows the committee’s weeklong workshop in San Francisco. Over the past two years, the group conducted similar workshops in France, Romania and Costa Rica.
The NEO Committee, chaired by Apollo 9 astronaut Rusty Schweickart, was formed to work with world leaders and organizations on preparations to protect the planet from near-earth-object impacts. The committee invited Prof. Roberts to share her expertise in risk management and organizational behavior. Roberts studies and advises organizations and systems in which errors can have catastrophic consequences, such as wildfire response, air traffic control towers, nuclear submarines, and the medical industry.
“This is not an astronomy problem. It is a financial problem, an accounting problem, an international problem, an organizational problem, a political problem, and a problem that needs to be solved by public and private enterprise coming together to solve it,” says Roberts. Asteroids are often referred to as space rocks but consider their potentially enormous danger. In an Atlantic Monthly article, June 2008, journalist Gregg Easterbrook wrote, “astronomers are nervously tracking 99942 Apophis, an asteroid with a slight chance of striking Earth in April 2036 … it could hit with about 60,000 times the force of the Hiroshima bomb – enough to destroy an area the size of France.”
The committee includes chair and Apollo 9 astronaut Rusty Schweickart; NASA astronauts Thomas Jones, Edward Lu and Franklin Chang-Diaz; and four international space explorers.
The Association of Space Explorers (ASE) is an international nonprofit professional and educational organization of over 320 individuals from around the world who have flown in space.
Pamela Tom | Newswise Science News
In this highly accessible book, leading scientists from around the world give a general overview of research advances in their subject areas within the field of Astronomy. They describe some of their own cutting-edge research and give their visions of the future. Re-written in a popular and well-illustrated style, the articles are mainly derived from scholarly and authoritative papers published in special issues of the Royal Society's Philosophical Transactions, the world's longest running scientific journal. Carefully selected by the journal's editor, topics include the Big Bang creation of the universe, the formation and evolution of the stars and galaxies, cold dark matter, explosive sun-spot events, and humankind's exploration of the solar system. The book conveys the excitement and enthusiasm of the authors for their work at the frontiers of astronomy. All are definitive reviews for people with a general interest in the future directions of science.
| <urn:uuid:c681d680-42c0-4135-bfb9-9acea846dd67> | 2.640625 | 248 | Product Page | Science & Tech. | 35.363627 | 95,532,867 |
Eclogite, Excess silica, Omphacite, Pyroxene, Ultrahigh-pressure
Silica lamellae in eclogitic clinopyroxene are widely interpreted as evidence of exsolution during decompression of eclogite. However, mechanisms other than exsolution might produce free silica, and the possible mechanisms depend in part on the nature and definition of excess silica. ‘Excess’ silica may occur in both stoichiometric and non-stoichiometric pyroxene. Although the issue has been debated, we show that all common definitions of excess silica in non-stoichiometric clinopyroxene are internally consistent, interchangeable, and therefore equivalent. The excess silica content of pyroxene is easily illustrated in a three-component, condensed composition space and may be plotted directly from a structural formula unit or recalculated end-members. In order to evaluate possible mechanisms for the formation of free silica in eclogite, we examined the net-transfer reactions in model eclogites using a Thompson reaction space. We show that there are at least three broad classes of reactions that release free silica in eclogite: (i) vacancy consumption in non-stoichiometric pyroxene; (ii) dissolution of Ti-phases in pyroxene or garnet; (iii) reactions between accessory phases and either pyroxene or garnet. We suggest that reliable interpretation of the significance of silica lamellae in natural clinopyroxene will require the evaluation not only of silica solubility, but also of titanium solubility, and the possible roles of accessory phases and inclusions on the balance of free silica.
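The idea of reading excess silica straight off a structural formula unit can be made concrete. The sketch below is my own illustration, not the authors' procedure: it recalculates a clinopyroxene microprobe analysis to a 6-oxygen basis, where Si above 2 atoms per formula unit (apfu), together with a cation sum below 4, signals the vacancy-bearing, silica-oversaturated composition discussed above. All oxide values are invented.

```python
# Minimal sketch: recalculate a clinopyroxene analysis (oxide wt%) to a
# 6-oxygen structural formula and report 'excess' Si (Si > 2 apfu).
# The analysis below is hypothetical, loosely omphacite-like.
OXIDES = {  # oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
    "SiO2": (60.08, 1, 2), "Al2O3": (101.96, 2, 3), "FeO": (71.84, 1, 1),
    "MgO": (40.30, 1, 1), "CaO": (56.08, 1, 1), "Na2O": (61.98, 2, 1),
}

def cations_per_6_oxygens(wt):
    mol = {ox: w / OXIDES[ox][0] for ox, w in wt.items()}      # moles of each oxide
    total_o = sum(m * OXIDES[ox][2] for ox, m in mol.items())  # moles of oxygen
    scale = 6.0 / total_o                                      # normalize O to 6
    return {ox: m * OXIDES[ox][1] * scale for ox, m in mol.items()}

analysis = {"SiO2": 56.5, "Al2O3": 11.0, "FeO": 3.0,
            "MgO": 8.0, "CaO": 13.0, "Na2O": 6.5}
apfu = cations_per_6_oxygens(analysis)
print({ox: round(v, 3) for ox, v in apfu.items()})
print("cation sum:", round(sum(apfu.values()), 3))        # < 4 implies vacancies
print("excess Si (apfu):", round(apfu["SiO2"] - 2.0, 3))  # > 0 implies free silica potential
```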
Journal of Metamorphic Geology
Required Publisher's Statement
Copyright 2007 Blackwell Publishing Ltd.
This article was published paid open access.
Day, H. W. and Mulcahy, Sean R., "Excess Silica in Omphacite and the Formation of Free Silica in Eclogite" (2007). Geology Faculty and Staff Publications. 76. | <urn:uuid:772534da-4e15-409d-8740-ba707ddeb5fe> | 2.609375 | 438 | Academic Writing | Science & Tech. | 8.807142 | 95,532,875 |
Physicists at the University of Washington and Stony Brook University in New York believe the phenomenon might be intrinsically linked with wormholes, hypothetical features of space-time that in popular science fiction can provide a much-faster-than-light shortcut from one part of the universe to another.
Alan Stonebraker/American Physical Society
This illustration demonstrates a wormhole connecting two black holes.
But here’s the catch: One couldn’t actually travel, or even communicate, through these wormholes, said Andreas Karch, a UW physics professor.
Quantum entanglement occurs when a pair or a group of particles interact in ways that dictate that each particle’s behavior is relative to the behavior of the others. In a pair of entangled particles, if one particle is observed to have a specific spin, for example, the other particle observed at the same time will have the opposite spin.
The “spooky” part is that, as past research has confirmed, the relationship holds true no matter how far apart the particles are – across the room or across several galaxies. If the behavior of one particle changes, the behavior of both entangled particles changes simultaneously, no matter how far away they are.
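To make that anti-correlation concrete, here is a deliberately simple Python toy of my own (not the researchers' model). It reproduces only the same-axis behaviour described above: each observer alone sees a fair coin, yet the pair is always opposite. A local recipe like this cannot reproduce the full quantum statistics measured at other angles (the Bell-inequality violations), which is what makes entanglement genuinely strange.

```python
import random

def measure_singlet_pair():
    """Toy model of two entangled spin-1/2 particles measured along the same axis:
    each outcome alone is a fair coin flip, but the pair is always opposite."""
    alice = random.choice(["up", "down"])
    bob = "down" if alice == "up" else "up"
    return alice, bob

for _ in range(5):
    a, b = measure_singlet_pair()
    print(f"Alice: {a:<4}  Bob: {b:<4}  (opposite: {a != b})")
```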
Recent research indicated that the characteristics of a wormhole are the same as if two black holes were entangled, then pulled apart. Even if the black holes were on opposite sides of the universe, the wormhole would connect them.
Black holes, which can be as small as a single atom or many times larger than the sun, exist throughout the universe, but their gravitational pull is so strong that not even light can escape from them.
If two black holes were entangled, Karch said, a person outside the opening of one would not be able to see or communicate with someone just outside the opening of the other.
“The way you can communicate with each other is if you jump into your black hole, then the other person must jump into his black hole, and the interior world would be the same,” he said.
The work demonstrates an equivalence between quantum mechanics, which deals with physical phenomena at very tiny scales, and classical geometry – “two different mathematical machineries to go after the same physical process,” Karch said. The result is a tool scientists can use to develop broader understanding of entangled quantum systems.
“We’ve just followed well-established rules people have known for 15 years and asked ourselves, ‘What is the consequence of quantum entanglement?’”
Karch is a co-author of a paper describing the research, published in November in Physical Review Letters. Kristan Jensen of Stony Brook, a coauthor, did the work while at the University of Victoria, Canada. Funding came from the U.S. Department of Energy and the Natural Sciences and Engineering Research Council of Canada.
Vince Stricherz | EurekAlert!
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
| <urn:uuid:f1943d69-5c3d-4323-b9df-8415a555fdac> | 3.84375 | 1,271 | Content Listing | Science & Tech. | 38.806911 | 95,532,893 |
By: Debabrata Mukherjee(Author), Kumudranjan Naskar(Author), Goutam Kumar Sen(Author)
145 pages, 16 b/w photos, b/w illustrations, tables
The mangrove forests on the active delta of the Indian Sundarban are a unique biosphere and a declared World Heritage Site, combining the dynamic features of a tidal estuary with the distinctive floral diversity of mangroves. The estuaries, the adjoining mangroves and the brackish-water ecosystem sustain a very rich aquatic, terrestrial, aerial and arboreal fauna. Mangrove areas are an important medium for the transport of offshore nutrients, as they export large amounts of plant and animal detritus. Ever-increasing destruction of mangroves, the simultaneous conversion of land to other uses, and indiscriminate contamination of water and soil with xenobiotics are together degrading the ecological condition of this special ecosystem. Ecological Evaluation of Estuarine Indian Sundarban is the result of an extensive study of the Indian Sundarban in light of its unique estuarine mangrove ecology.
| <urn:uuid:639af04d-0d97-4bf8-8bb2-75968bd418a6> | 3.21875 | 342 | Product Page | Science & Tech. | 23.262566 | 95,532,895 |
2. Which gas is the most abundant in the Earth's atmosphere?
N2 O2 He H2 CO2
3. Body temperature is around 308 K. What volume of air at 298 K must a person with a lung capacity of 2.5 L breathe in to fill the lungs?
2.58 L 0.00811 L 2.42 L 4.26 L None of the above
4. A mixture of nitrogen gas and oxygen gas is kept at constant temperature. Which is true?
Average kinetic energies will be the same Average molecular speed will be the same Partial pressures will be the same Total masses will be the same Densities will be the same
5. An ideal gas fills a balloon at 1 atm and 67 °C. By what factor will the volume of the balloon change if the balloon is heated to 339 °C?
9/5 5.9 1/4 4/1 3/4
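For anyone checking their answers, here is a minimal Python sketch of questions 3 and 5, assuming ideal-gas behaviour at constant pressure (Charles's law) with temperatures in kelvin:

```python
# Charles's law at constant pressure and moles: V1 / T1 = V2 / T2 (T in kelvin).

def volume_at(v_ref, t_ref, t_new):
    """Volume a fixed amount of gas occupies at t_new, given v_ref at t_ref."""
    return v_ref * t_new / t_ref

# Q3: air inhaled at 298 K warms to 308 K in the lungs, expanding to 2.5 L,
# so the volume drawn in at 298 K is smaller:
print(round(volume_at(2.5, 308.0, 298.0), 2), "L")   # 2.42 L

# Q5: balloon heated from 67 °C (340 K) to 339 °C (612 K):
print(round((339 + 273) / (67 + 273), 2))            # 1.8, i.e. a factor of 9/5
```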
| <urn:uuid:d9178fea-84a4-485b-8478-959c441c69ab> | 3.125 | 236 | Content Listing | Science & Tech. | 86.771242 | 95,532,900 |
Following previous findings regarding the influence of vascular plants (mainly trees) on weathering, soil production and hillslope stability, in this study we tested the hypothesis that tree root systems have significant impacts on soil and regolith properties. The different types of impact from tree root systems (direct and indirect) are commonly gathered under the key term "biomechanical effects". To add to the discussion of the biomechanical effects of trees, we used a non-invasive geophysical method, electrical resistivity tomography (ERT), to investigate profiles of four different configurations at three study sites within the Polish section of the Outer Western Carpathians. At each site, one long profile (up to 189 m) of a large section of a hillslope and three short profiles (up to 19.5 m), that is, microsites occupied by trees or their remnants, were made. The short profiles included the root zone of a healthy large tree, the stump of a decaying tree and the pit-and-mound topography formed after a tree uprooting. The resistivity of regolith and bedrock is presented on the long profiles, and comparison with the short profiles through the microsites shows how tree roots affect soil and regolith properties and add to the complexity of the whole soil/regolith profile. Trees change soil and regolith properties directly, through root channels and moisture migration, and indirectly, through the uprooting of trees and the formation of pit-and-mound topography. Within tree stump microsites, the impact of the root system, evaluated by a resistivity model, was smaller than at microsites with living trees or with pit-and-mound topography, but was still visible several decades after the trees were wind-broken or cut down. The ERT method is highly useful for quick evaluation of the impact of tree root systems on soils and regolith. In contrast to traditional soil analyses, it offers a continuous dataset for the entire microsite and at depths not normally reached by standard soil excavations. The non-invasive nature of ERT surveys is especially important for protected areas, as shown in the present study.
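For context on where ERT numbers come from (the abstract does not specify the electrode array used, so the geometry and values below are purely illustrative): each measurement injects a current between two electrodes and reads the voltage between two others; for a Wenner array with spacing a, the apparent resistivity is rho_a = 2*pi*a*(dV/I), and inverting many such readings at many spacings and positions yields resistivity sections like those described above.

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v_volts, current_amps):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode spacing a:
    rho_a = 2 * pi * a * (dV / I)."""
    return 2.0 * math.pi * spacing_m * (delta_v_volts / current_amps)

# Hypothetical reading: 1.5 m spacing, 120 mV measured while injecting 5 mA.
print(round(wenner_apparent_resistivity(1.5, 0.120, 0.005), 1), "ohm-m")  # 226.2
```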
Geomorphology – Elsevier
Published: Jan 1, 2018
| <urn:uuid:2af53802-6f4c-4cff-97f1-d454855f2327> | 2.984375 | 592 | Truncated | Science & Tech. | 36.466223 | 95,532,912 |
Unknown to [Benjamin] Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt. ... Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to [lightning researcher Joseph] Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.

By "not get hurt", I imagine the reporter means that a person struck by dark lightning would not be crispy crittered. Yet getting a lifetime's dose (or more) of ionizing radiation does not seem like a good thing. Maybe better than the alternative.
| <urn:uuid:8bb00c83-6328-4d10-bfae-a1e557a8c95b> | 2.984375 | 232 | Personal Blog | Science & Tech. | 45.798885 | 95,532,945 |
Guess the Largest Number
Professor Alan Taylor
January 17, 2005
Pastries and drinks will be served
Consider the following game. I give you a stack of one million blank index cards. On each one, you write any real number that you want (e.g., -17, pi, 22/7, 10-to-the-84th, etc.). I don't know anything about your choice of numbers, except that there are a million of them, and they're all distinct. The deck is then placed face down on a table, and I begin to turn the cards over, one by one. At some point, I stop turning cards and declare that the last card that I turned over contained the largest of all the numbers you wrote. If I'm wrong, I give you a dollar. If I'm right, you give me two dollars. Remarkably, there is a strategy whereby, in the long run, I come out ahead.
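The announcement does not reveal the strategy, but the game is a payoff-weighted version of the classic secretary problem, and the standard approach is to let roughly n/e of the cards go by and then stop at the first card that beats everything seen so far. That wins with probability about 1/e ≈ 0.368, comfortably above the 1/3 needed to profit at these stakes. Below is a minimal Monte Carlo sketch of that rule (with a 10,000-card deck rather than a million, purely for speed; the win rate is essentially the same):

```python
import math
import random

def play_once(n=10_000):
    """One game with the 1/e rule: watch the first n/e cards, then stop at the
    first card exceeding everything seen so far (or the last card if none does).
    Returns True if the card we stopped on is the overall maximum."""
    r = round(n / math.e)
    cards = [random.random() for _ in range(n)]
    best_early = max(cards[:r])
    stopped_on = cards[-1]            # if nothing ever beats the early max,
    for x in cards[r:]:               # we are forced to declare the last card
        if x > best_early:
            stopped_on = x
            break
    return stopped_on == max(cards)

trials = 2_000
p = sum(play_once() for _ in range(trials)) / trials
# Win pays +$2, loss costs -$1, so expected profit per game is 3p - 1 dollars.
print(f"win rate ~ {p:.3f}; expected profit per game ~ ${3 * p - 1:.2f}")
```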
| <urn:uuid:4cc0f874-a316-476e-ace6-69e9d764cc52> | 2.90625 | 232 | Knowledge Article | Science & Tech. | 76.535618 | 95,532,958 |
Tropical forests are reducing carbon emissions from tropical deforestation by a third and SLOWING the rate of global warming, study finds
- Protected areas account for a total of 20 per cent of the world's tropical forest
- They prevent the release of three times as much carbon as the UK emits per year
- They also provide crucial habitats for endangered species such as orangutans
- But rainforests are under logging and clearing pressure to produce cash crops
National parks and nature reserves in South America, Africa and Asia are reducing carbon emissions from tropical deforestation by a third, helping to slow the rate of global warming, a new study shows.
The study found that tropical forests are preventing the release of three times as much carbon into the atmosphere as the UK emits each year.
Protected areas, which account for 20 per cent of the world's tropical forest, also play a crucial role in providing habitats for species including orangutans, forest elephants and Asiatic lions, and they also conserve world heritage sites such as the Incan ruins of Machu Picchu in Peru.
In Asia, protected areas of forest, including nature reserves protecting animals such as tigers and orangutans, stopped 25 million tonnes of carbon per year from being released. Pictured is an orangutan hanging on a tree in the Malaysian part of Borneo's rainforest
The study, published in the journal Scientific Reports, was an audit of the role that protected areas of tropical forest play in preventing global warming.
The research, by the University of Exeter and University of Queensland in Australia, involved analyzing the likely level of tree loss in protected area - and the resulting carbon emission - had they not been protected from deforestation.
It shows that protected forests are preventing millions of tonnes of carbon from being released into the atmosphere through logging and deforestation.
According to the researchers, it's the first study to analyze the impact of all protected areas of tropical forest on reducing carbon emissions.
Tropical forests account for about 68 per cent of global forest carbon stock - including trees, canopy and root systems.
PROTECTED TROPICAL FORESTS PREVENT CARBON EMISSIONS
According to a new study, from 2000 to 2012, tropical protected areas reduced carbon emissions by 407 million tonnes per year, equivalent to 1492 million tonnes of CO2 per year.
Annually, these tropical forests are preventing the release of three times as much carbon into the atmosphere as the UK emits each year.
The areas accounted for different amounts of reductions in carbon dioxide emissions:
- Protected forest area in South America - including Brazil - prevented 368.8 million tonnes of carbon per year from being released between 2000 and 2012.
- In Asia, protected areas of forest, including nature reserves protecting animals such as tigers and orangutans, stopped 25 million tonnes of carbon per year from being released.
- Protected areas of forest in Africa, including reserves to protect lowland mountain gorillas, saved 12.7 million tonnes of carbon per year, which would have been released had the areas not been protected from being cleared.
Annual estimated carbon saving in tropical Protected Areas (PA's) from 2000–2012 in (a) Americas, (b) Africa, (c) Asia. Red hues indicate carbon loss greater than expected for non-PA areas, blue hues indicate carbon retention greater than expected. This does not include changes in forest carbon in unprotected areas
But rainforests are under logging and clearing pressure to produce cash crops such as pasture land for cattle in South America, and palm oil in South East Asia, while in Africa, tropical forests are being cleared for agriculture and charcoal production for local cooking.
However, these activities come at a cost: deforestation releases nearly twice as much carbon as intact forests absorb.
For the study, ecologists analyzed the carbon stocks and losses of millions of hectares of protected areas such as national parks, world heritage sites, reserves for indigenous people, tourist sites and areas to protect endangered species.
Rainforests are under logging and clearing pressure to produce cash crops such as pasture land for cattle in South America, and palm oil in South East Asia, while in Africa, tropical forests are being cleared for agriculture and charcoal production for local cooking
They found that from 2000-2012, these protected areas cut predicted carbon emissions by about one third.
'Tropical protected areas are often valued for their role in safeguarding biodiversity,' says Dr Dan Bebber, an ecologist at the University of Exeter and the co-author of the research.
'Our study highlights the added benefit of maintaining forest cover for reducing carbon dioxide emissions to the atmosphere, so helping slow the rate of climate change.'
This is an image of a deforested area of Brazilian forest. Rainforests are under logging and clearing pressure to produce cash crops such as pasture land for cattle in South America. This eliminates a source of carbon sequestration
From 2000-2012, tropical protected areas reduced carbon emissions by 407 million tonnes per year, which according to the researchers is equivalent to 1492 million tonnes of carbon dioxide per year.
The UK's annual carbon dioxide emissions are around 404 million tonnes per year, so the saving is more than three times the UK's annual production of carbon dioxide emissions.
Total annual carbon emissions from the tropics are thought to be between 1 and 1.5 billion tonnes of carbon per year - equivalent to 3.67 to 5.05 billion tonnes of carbon dioxide.
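The carbon-to-CO2 conversion behind these figures is simply the ratio of molar masses: a tonne of carbon corresponds to 44/12 ≈ 3.67 tonnes of CO2. A quick Python check against the article's numbers:

```python
# Mass of CO2 containing a given mass of carbon: scale by molar masses,
# 44.01 g/mol (CO2) over 12.01 g/mol (C) ~ 3.67.
C_TO_CO2 = 44.01 / 12.01

saved_carbon_mt = 407      # Mt C/yr avoided by tropical protected areas (article)
uk_emissions_mt = 404      # Mt CO2/yr, UK annual emissions (article)

saved_co2_mt = saved_carbon_mt * C_TO_CO2
print(f"{saved_co2_mt:.0f} Mt CO2/yr")              # ~1491, matching the quoted 1492
print(f"{saved_co2_mt / uk_emissions_mt:.1f}x UK")  # ~3.7x UK annual CO2 emissions
```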
While there was a concern that forest clearing would increase outside the boundaries of protected areas, the authors of the study did not find a measurable increase in logging in areas of rainforest just outside the protected areas.
TROPICAL FORESTS EMIT MORE CARBON DIOXIDE THAN THEY STORE DUE TO DEFORESTATION
Tropical forests have been so damaged by humans that they now pollute the planet more than they protect it.
The planet's forests and oceans are vital 'carbon sinks' which prevent polluting gases reaching our atmosphere.
The massive Amazon forest alone sucks up 600 million tons of carbon emissions a year.
But a new study found that so many trees have been lost from tropical forests that they now produce more carbon than they absorb.
This is due to the massive logging industry, farmers felling trees for fuel, forest fires and disease.
The researchers calculated the net amount of carbon sent into the atmosphere by these trees after they are burned or die.
They found, subtracting the carbon they store, that tropical forest trees produce 425 teragrams of carbon a year.
| <urn:uuid:ee1e3512-2911-42a9-84da-e9a4ece1f01> | 3.53125 | 1,536 | Truncated | Science & Tech. | 28.233822 | 95,532,987 |
It’s the first time astronomers have achieved this feat.
It’s the hungriest thing we’ve ever seen.
An incredibly fruitful mission sheds new secrets about the Milky Way.
For the first time, astronomers have managed to confirm the existence of a black hole population surrounding the core of our galaxy.
Looks like we can expect more gravitational waves in the near future.
Another discovery that cements black holes’ fundamental role in the evolution of the universe.
It’s the farthest, oldest, and perhaps most mysterious object we’ve ever discovered.
An ingenious technique could enable us to witness some of the strongest gravitational waves.
If confirmed, this could indicate a remarkable progress in modern physics.
Sometimes, even a black hole can choke on its meal.
This is one slow dance.
Your childhood fantasies weren’t all that off.
Ready to sink your teeth into some black holes?
Why blow up when you can mass up?
This was thought to be impossible to undertake until not too long ago.
In the aftermath of a titanic galactic battle, a merged black hole caused some waves.
A new study offers an unorthodox explanation for how supermassive black holes formed in the early Universe.
Many more might be lurking in the galaxy.
And things are about to get even more exciting. | <urn:uuid:c791bbbc-a4eb-4769-ba02-3de1d175d272> | 3.109375 | 282 | Content Listing | Science & Tech. | 48.72757 | 95,533,000 |
"This planet-like companion is the coldest object ever directly photographed outside our solar system," said Luhman, who led the discovery team. "Its mass is about the same as many of the known extra-solar planets -- about six to nine times the mass of Jupiter -- but in other ways it is more like a star. Essentially, what we have found is a very small star with an atmospheric temperature about cool as the Earth's."
Luhman classifies this object as a "brown dwarf," an object that formed just like a star out of a massive cloud of dust and gas. But the mass that a brown dwarf accumulates is not enough to ignite thermonuclear reactions in its core, resulting in a failed star that is very cool. In the case of the new brown dwarf, the scientists have gauged the temperature of its surface to be between 80 and 160 degrees Fahrenheit -- possibly as cool as a human.
Ever since brown dwarfs first were discovered in 1995, astronomers have been trying to find new record holders for the coldest brown dwarfs because these objects are valuable as laboratories for studying the atmospheres of planets with Earth-like temperatures outside our solar system.
Astronomers have named the brown dwarf "WD 0806-661 B" because it is the orbiting companion of an object named "WD 0806-661" -- the "white dwarf" core of a star that was like the Sun until its outer layers were expelled into space during the final phase of its evolution. "The distance of this white dwarf from the Sun is 63 light years, which is very near our solar system compared with most stars in our galaxy," Luhman said.
"The distance of this white dwarf from its brown-dwarf companion is 2500 astronomical units (AU) -- about 2500 times the distance between the Earth and the Sun, so its orbit is very large as compared with the orbits of planets, which form within a disk of dust swirling close around a newborn star," said Adam Burgasser at the University of California, San Diego, a member of the discovery team. Because it has such a large orbit, the astronomers say this companion most probably was born in the same manner as binary stars, which are known to be separated as far apart as this pair, while remaining gravitationally bound to each other.
Luhman and his colleagues presented this new candidate for the coldest known brown dwarf in a paper published in spring 2011, and they now have confirmed its record-setting cool temperature in a new paper that will be published in the Astrophysical Journal.
To make their discovery, Luhman and his colleagues searched through infrared images of over six hundred stars near our solar system. They compared images of nearby stars taken a few years apart, searching for any faint points of light that showed the same motion across the sky as the targeted star. "Objects with cool temperatures like the Earth are brightest at infrared wavelengths," Luhman said. "We used NASA's Spitzer Space Telescope because it is the most sensitive infrared telescope available."
Luhman and his team discovered the brown dwarf WD 0806-661 B moving in tandem with the white dwarf WD 0806-661 in two Spitzer images taken in 2004 and 2009. The images, which together show the movement of the objects, are available here. "This animation is a fun illustration of our technique because it resembles the method used to discover Pluto in our own solar system," Luhman said.
In a related new discovery involving a different cool brown dwarf, Penn State Postdoctoral Scholar John Bochanski and his colleagues have made the most detailed measurement yet of ammonia in the atmosphere of an object outside our solar system. "These new data are much higher quality than previously achieved, making it possible to study, in much more detail than ever before, the atmospheres of the coldest brown dwarfs, which most closely resemble the atmospheres that are possible around planets," Bochanski said.
"Brown dwarfs that are far from their companion stars are much easier to study than are planets, which typically are difficult to observe because they get lost in the glare of the stars they orbit," Burgasser said. "Brown dwarfs with Earth-like temperatures allow us to refine theories about the atmospheres of objects outside our solar system that have comparatively cool atmospheres like that of our own planet."
This research was sponsored by grants from the National Science Foundation and the NASA Astrophysics Theory Program.
Barbara Kennedy | EurekAlert!
A View from Emerging Technology from the arXiv
Human Brain Is Limiting Global Data Growth, Say Computer Scientists
Evidence has emerged that the brain’s capacity to absorb information is limiting the amount of data humanity can produce
In the early 19th century, the German physiologist Ernst Weber gave a blindfolded man a mass to hold and gradually increased its weight, asking the subject to indicate when he first became aware of the change. Weber discovered that the smallest increase in weight a human can perceive is proportional to the initial mass.
This is now known as the Weber-Fechner law and shows that the relationship between the stimulus and perception is logarithmic.
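One way to see why a proportional just-noticeable difference (JND) implies logarithmic perception: if each noticeable step multiplies the stimulus by a fixed factor, the number of perceptible steps between two stimuli depends only on the log of their ratio. A tiny Python sketch (the 5% Weber fraction is an illustrative assumption, not Weber's measured value):

```python
import math

def jnd_steps(s_from, s_to, weber_fraction=0.05):
    """Number of just-noticeable increments between two stimulus magnitudes,
    assuming each step multiplies the stimulus by (1 + weber_fraction)."""
    return math.log(s_to / s_from) / math.log(1.0 + weber_fraction)

# Doubling a light load and doubling a heavy one feel like the same number of steps:
print(round(jnd_steps(1.0, 2.0), 1))     # 14.2 steps from 1 kg to 2 kg
print(round(jnd_steps(10.0, 20.0), 1))   # 14.2 steps from 10 kg to 20 kg
```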
It’s straightforward to apply this rule to modern media. Take images, for example. An increase in the resolution of a low-resolution picture is more easily perceived than the same increase applied to a higher-resolution picture.
When two parameters are involved, the relationship between the stimuli and perception is the square of the logarithm. An example would be video in which images change with time.
This way of thinking about stimulus and perception clearly indicates that the Weber-Fechner law ought to have a profound effect on the rate at which we absorb information.
Today, Claudius Gros and a couple of pals at Goethe University Frankfurt in Germany look for signs of the Weber-Fechner law in the size distribution of files on the internet. And they say they’ve found them.
These guys measured the type and size of files pointed to by every outward link from Wikipedia and the open directory project, dmoz.org. That’s a total of more than 600 million files. Some 58 per cent of these pointed to image files, 32 per cent to application files, 5 per cent to text files, 3 per cent to audio and 1 per cent to video files.
Gros and co then plotted the size of each of these files types against the number of files to get the file size distribution.
Sure enough, they discovered that the audio and video file distribution followed a log-normal curve, which is compatible with a logarithmic squared-type relationship. By contrast, image files follow a power law distribution, which is compatible with a logarithmic relationship. That’s exactly as the Weber-Fechner law predicts.
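The two reported shapes are easy to tell apart numerically: on a log-log plot of the size distribution, a power law is a straight line while a log-normal is a downward parabola, so the curvature of a quadratic fit to the log-density separates them. A minimal NumPy sketch on synthetic data (all parameters invented; histogram tails are noisy, so the fitted curvature is indicative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic file sizes: log-normal (shape reported for audio/video files)
# and Pareto power law (shape reported for image files).
samples = {
    "log-normal": rng.lognormal(mean=15.0, sigma=2.0, size=200_000),
    "power-law": (rng.pareto(a=1.5, size=200_000) + 1.0) * 1e3,
}

for name, sizes in samples.items():
    density, edges = np.histogram(np.log10(sizes), bins=50, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    keep = density > 0
    # Quadratic fit to log-density vs log-size: curvature near 0 for a power law,
    # clearly negative for a log-normal.
    curvature = np.polyfit(mids[keep], np.log10(density[keep]), deg=2)[0]
    print(f"{name}: fitted curvature {curvature:+.3f}")
```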
“[This] strongly indicates that [the distributions] are determined by the underlying neurophysiological limitations of the producing agents,” say Gros and co. In other words ‘us’.
Further evidence comes from a closer look at the tails of these curves. If the size of files were determined by some kind of economic factor, like the cost of producing a file, then the distribution ought to have an exponential tail. But that’s not the case. The absence of this feature suggests some other origin for the file size distribution.
Gros and co put it like this: “The neuropsychological capacity of the human brain to process and record information may constitute the dominant limiting factor for the overall growth of globally stored information, with real-world economic constraints having only a negligible influence.”
Quite! In other words, global information cannot grow any faster than our ability to absorb or monitor it.
That makes sense and raises some interesting avenues for future research too. For example, it’ll be interesting to see how machine intelligence might change this equation. It may be that machines can be designed to distort our relationship with information.
If so, then a careful measure of file size distribution could reveal the first signs that intelligent machines are among us!
Ref: arxiv.org/abs/1111.6849: Neuropsychological Constraints To Human Data Production On A Global Scale
12 December 2011: updated to correct the spelling of the the Weber-Fechner law. Thanks Gavin Owens!
| <urn:uuid:891cd63e-92e5-4ab2-812a-7129980168cf> | 2.78125 | 832 | News Article | Science & Tech. | 41.254051 | 95,533,014 |
There is considerable evidence for the Medieval Warm Period and other periods of warmer temperatures. An argument can be made that our current warming is a continuation of the warming that began at the end of the Little Ice Age. Check the resources in this section for more... Inconvenient: new treeline paper suggests temperatures were warmer 9000 years ago. It’s Here: A 1900-2010 Instrumental Global Temperature Record That Closely Aligns With Paleo-Proxy Data. A global-scale instrumental temperature record that has not been contaminated by (a) artificial urban heat (asphalt, machines, industrial waste heat, etc.), (b) ocean-air affected biases (detailed herein), or (c) artificial adjustments to past data that uniformly serve to cool the past and warm the present . . . is now available.
Composed of 450 instrumental records from temperature stations sheltered from ocean-air/urbanization/adjustment biases throughout the world, a new 20th/21st century global temperature record introduced previously here very closely aligns with paleoclimate evidence from tree rings, ice cores, fossil pollen and other temperature proxies. The Alignment Of Paleoclimate Proxy Data & Instrumental Records. [Image source: Xing et al., 2016 (MDVM reconstructed NH temperature).] Christiansen and Ljungqvist (2012) utilize proxies from 91 locations across the extra-tropical Northern Hemisphere to reveal no net warming since the 1940s. [Image source: Stoffel et al., 2015.]
81 Graphs From 62 New (2018) Papers Invalidate Claims Of Unprecedented Global-Scale Modern Warming. By Kenneth Richard on 10 May 2018. During 2017, there were 150 graphs from 122 scientific papers published in peer-reviewed journals indicating modern temperatures are not unprecedented, unusual, or hockey-stick-shaped, nor do they fall outside the range of natural variability. We are a little over 4 months into the new publication year, and already 81 graphs from 62 scientific papers undermine claims that modern era warming is climatically unusual. Zheng et al., 2018: “In this study we present a detailed GDGT data set covering the last 13,000 years from a peat sequence in the Changbai Mountain in NE China.
The brGDGT-based temperature reconstruction from Gushantun peat indicates that mean annual air temperatures in NE China during the early Holocene were 5–7°C higher than today. Furthermore, MAAT records from the Chinese Loess Plateau also suggested temperature maxima 7–9°C higher than modern during the early Holocene (Peterse et al., 2014; Gao et al., 2012; Jia et al., 2013). Mikis, 2018.
Little Ice Age (LIA) and Other Mini-Ice-Ages. The Roman Warm Period. Holocene Thermal Optimum. The Hockey Sticks. 200 Non-Hockey Stick Graphs Published Since 2017 Invalidate Claims Of Unprecedented, Global-Scale Warming. By Kenneth Richard on 22 March 2018. 46 New (2018) Non-Warming Graphs Affirm Nothing Climatically Unusual Is Happening. [Image source: Lansner and Pepke Pederson, 2018.] During 2017, there were 150 graphs from 122 scientific papers published in peer-reviewed journals indicating modern temperatures are not unprecedented, unusual, or hockey-stick-shaped, nor do they fall outside the range of natural variability. We are less than 3 months into the new publication year. Already 46 new graphs from 40+ scientific papers undermine claims that modern era warming (or, in some regions, modern cooling) is climatically unusual. 2018 and 2017 Non-Hockey Stick Graphs (~200): Maley et al., 2018; Polovodova Asteman et al., 2018; Wündsch et al., 2018; McGowan et al., 2018: “Our reconstructed Tmax [temperature maximum] for these warmer conditions peaks around 1390 CE at +0.8 °C above the 1961–90 mean, similar to the peak Tmax during the RWP [Roman Warm Period].”
Wu et al., 2018; Hanna et al., 2018; Li et al., 2018; Eck, 2018. Climate and Human Civilization over the last 18,000 years. Paleoclimate reconstruction graphs (image links in the original page):
- 472 Years – CET Extended Graph – Tony Brown – Climate Etc.
- 600 Years Arctic Temperature – Overpeck et al. 1997 – NOAA NCDC
- 1,100 Years – Ljungqvist et al. – CO2Science.Org
- 1,100 Years – Ljungqvist et al. – JoNova.com
- 1,100 Years – Kirkby 2007 – Harvard
- 1,100 Years – Lamb – IPCC Assessment Report 1 – JoNova
- 1,205 Years – M.L. – BioCab.org
- 2,000 Years – Loehle and McCulloch 2008 – Craig Loehle, Ph.D. and J.
- 2,000 Years – Christiansen
- 2,100 Years – Law Dome O18 – Climate Audit
- 2,500 Years – GISP2 – Alley, 2000 – Photobucket.com
- 3,000 Years – GISP2 – Alley, 2000, Moberg, Keigwin & HadCRUT3
- 10,000 Years – Vostok E.
Lindzen, Soon and Spencer debunked? By Andy May. On Bret Stephens' Facebook page, I complimented Mr. Stephens on what I thought was a very good column. I also noted that the eminent climate scientist Dr. Richard Lindzen had said similar things. “Few “skeptics” have been debunked as much as Lindzen and Spencer.” Link to comment here. If you follow the link you will see it is followed with a Google search for “Lindzen debunked.” The first reference in my search led to desmogblog, here. Their arguments appear to be as vacuous as their resumes. [Figures 1 and 2: sources linked in the original post.] All of this “hottest year on record” nonsense is absurd; we are talking about very small changes in the average temperature.
[Figure 3 (data sources linked in the original post).] There is a secular warming trend that has persisted since the end of the Little Ice Age in the 19th century. [Figure 4, “Anthropogenic Climate Change”.] Then, as now, the public chose to blame people for climate change without proof. [Figure 6.]
A Holocene Temperature Reconstruction Part 1: the Antarctic.
The problem is that 300 years is a very long time. This is a new look at Marcott’s proxies. Proxy selection The proxies were examined considering the criticism of Marcott’s analysis by Javier, McIntyre and Foster. Figure 2 Table 1. A Holocene Temperature Reconstruction Part 2: More reconstructions. By Andy May In the last post (see here) we introduced a new Holocene temperature reconstruction for Antarctica using some of the Marcott, et al. (2013) proxies. In this post, we will present two more reconstructions, one for the Southern Hemisphere mid-latitudes (60°S to 30°S) and another for the tropics (30°S to 30°N).
The next post will present the Northern Hemisphere mid-latitudes (30°N to 60°N) and the Arctic (60°N to the North Pole). As we did for the Antarctic, we will examine each proxy and reject any that have an average time step greater than 130 years or if it does not cover at least part of the Little Ice Age (LIA) and the Holocene Climatic Optimum (HCO). We are looking for coverage from 9000 BP to 500 BP or very close to these values. Southern Hemisphere mid-latitudes Our reconstruction for this region is shown in figure 1. Figure 1 This reconstruction has a more defined HCO than we saw in the Antarctic and it is placed between 8000 BP and 5000 BP. Figure 2 Figure 3 Figure 4. A Holocene Temperature Reconstruction Part 3: The NH and Arctic. By Andy May In the last post (see here) we reexamined the Marcott, et al. (2013) proxies for the Southern Hemisphere mid-latitudes and the tropics.
In this post, we will present two more reconstructions using their proxies, these are for the Northern Hemisphere mid-latitudes (30°N to 60°N) and for the Arctic region (60°N to 90°N). These two regions contain over half of the proxies used in this study. The next post will present a global area-weighted composite temperature reconstruction. As we did in the previous two posts, we will examine each proxy and reject any that have an average time step greater than 130 years or if it does not cover at least part of the Little Ice Age (LIA) and the Holocene Climatic Optimum (HCO). We are looking for coverage from 9000 BP to 500 BP or very close to these values. Northern hemisphere mid-latitudes There are 10 proxies that meet our basic criteria for the Northern Hemisphere reconstruction, although two of them are combined into one record. Figure 2. A Holocene Temperature Reconstruction Part 4: The global reconstruction.
By Andy May In previous posts (here, here and here), we have shown reconstructions for the Antarctic, Southern Hemisphere mid-latitudes, the tropics, the Northern Hemisphere mid-latitudes, and the Arctic. Here we combine them into a simple global temperature reconstruction. The five regional reconstructions are shown in figure 1. The R code to map the proxy locations, the references and metadata for the proxies, and the global reconstruction spreadsheet can be downloaded here.
For a description of the proxies and methods used, see part 1, here. Figure 1A, all proxies except TN057-17 on the Antarctic Polar Front Figure 1B, the proxies used for the reconstructions It is interesting that the Northern Hemisphere is the odd reconstruction. Figure 2 (Source: Javier, see his post for a detailed explanation of the figure.) The Southern Hemisphere is also a bit anomalous, with a dip in the period of the HCO, corresponding with a dip in winter insolation in the Southern Hemisphere.
Table 1 Conclusions. Global versus Greenland Holocene Temperatures. By Andy May Last week, I posted a global temperature reconstruction based mostly on Marcott, et al. 2013 proxies. The post can be found here. In the comments on the Wattsupwiththat post there was considerable discussion about the difference between my Northern Hemisphere mid-latitude (30°N to 60°N) and the GISP2 Richard Alley central Greenland temperature reconstruction (see here for the reference and data).
See the comments by Dr. Don Easterbrook and Joachim Seifert (weltklima) here and here, as well as their earlier comments. Richard Alley’s (Richard Alley, 2000) central Greenland reconstruction has become the de facto standard reconstruction and is displayed often in papers and posts. Figure 1 Alley’s reconstruction is based upon trapped air in ice cores taken from central Greenland and his proxies are calibrated to air temperatures on land.
Figure 2 Figure 3 Vinther’s record shows a more prominent HCO than ours, more detail and a deeper LIA. Figure 4 Figure 5 Figure 7 (Source CDIAC) A never before western published paleoclimate study from China suggests warmer temperatures in the past. People send me stuff. Today in my inbox, WUWT regular Michael Palmer sends this note: My wife Shenhui Lang found and translated an interesting article from 1973 that attempts the reconstruction of a climate record for China through several millennia (see attached). The author is long dead (he died in 1974), and “China Daily” is now the name of an English language newspaper established only in 1981. I think it would be very difficult to even locate anyone holding the rights to the original, and very unlikely for anyone to take [copyright] issue with the publication of the English translation.
The paper is interesting in that it shows a correlation between height of the Norwegian snow line and temperature in China for the last 5000 years. A Preliminary Study on the Climatic Fluctuation during the last 5000 years in China Zhu Kezhen Published in China Daily, June 19th, 1973 / translated by Shenhui Lang, PhD In the monsoon area of East Asia, the annual rainfall often varies greatly. 1. 2. 3. 4. The Proliferation Of Non-Global Warming Graphs In Science Journals Continues Unabated In 2018.
During 2017, there were 150 graphs from 122 scientific papers published in peer-reviewed journals that indicated modern temperatures are not unprecedented, unusual, or hockey-stick-shaped — nor do they fall outside the range of natural variability. Less than 3 weeks into the new publication year, the explosion of non-alarming depictions of modern climate change continues. Blarquez et al., 2018 Magyari et al., 2018 …its climatic tolerance limits were used to infer July mean temperatures exceeding modern values by 2.8°C at this time [8200-6700 cal yr BP] (Magyari et al., 2012).
White et al., 2018 Our data, together with published work, indicate both a long-term trend in ENSO strength due to June insolation [solar] forcing and high-amplitude decadalcentennial fluctuations; both behaviors are shown in models. Song et al., 2018 Huang et al., 2018 Perner et al., 2018 Maley et al., 2018 Polovodova Asteman et al., 2018 Papadomanolaki et al., 2018 (Baltic Sea) Yi, 2018 Bereiter et al., 2018 (press release) 150 NON-Global Warming Graphs From 2017 Pummel Claims Of Unusual Modern Warmth.
By Kenneth Richard on 1. January 2018 …in 122 (2017) scientific papers Image Source: Loisel et al., 201 2017: 150 Graphs, 122 Scientific Papers In the last 12 months, 150 graphs from 122 peer-reviewed scientific papers have been published that undermine the popularized conception of a slowly cooling Earth temperature history followed by a dramatic hockey-stick-shaped uptick, or an especially unusual global-scale warming during modern times. Yes, some regions of the Earth have been warming in recent decades or at some point in the last 100 years. Some regions have been cooling for decades at a time. And many regions have shown no significant net changes or trends in either direction relative to the last few hundred to thousands of years.
Succinctly, then, scientists publishing in peer-reviewed journals have increasingly affirmed that there is nothing historically unprecedented or remarkable about today’s climate when viewed in the context of long-term natural variability. 1. 2. 3. 4. 5. 6. 7. 80 Graphs From 58 New (2017) Papers Invalidate Claims Of Unprecedented Global-Scale Modern Warming.
By Kenneth Richard on 29. May 2017 “[W]hen it comes to disentangling natural variability from anthropogenically affected variability the vast majority of the instrumental record may be biased.” — Büntgen et al., 2017 Yes, some regions of the Earth have been warming in recent decades or at some point in the last 100 years. Some regions have been cooling for decades at a time. And many regions have shown no significant net changes or trends in either direction relative to the last few hundred to thousands of years. Succinctly, then, scientists publishing in peer-reviewed journals have increasingly affirmed that there is nothing historically unprecedented or remarkable about today’s climate when viewed in the context of long-term natural variability. Büntgen et al., 2017 “Spanning the period 1186-2014 CE, the new reconstruction reveals overall warmer conditions around 1200 and 1400, and again after ~1850.
Parker and Ollier, 2017 Gennaretti et al., 2017 Abrantes et al., 2017 Werner et al., 2017. Significant finding: Study shows why Europe’s climate varied over the past 3000 years. From CARDIFF UNIVERSITY and the “motion from the ocean makes or breaks English vineyards” department. Ocean floor mud reveals secrets of past European climateSamples of sediment taken from the ocean floor of the North Atlantic Ocean have given researchers an unprecedented insight into the reasons why Europe’s climate has changed over the past 3000 years.
In this 1677 painting by Abraham Hondius, ‘The Frozen Thames, looking Eastwards towards Old London Bridge’, people are shown enjoying themselves on the ice during a “frost fair”. From the warmer climates of Roman times when vineyards flourished in England and Wales to the colder conditions that led to crop failure, famine and pandemics in early medieval times, Europe’s climate has varied over the past three millennia. For the first time, researchers have been able to pinpoint why this occurs, and the answer lies far out at sea in the North Atlantic Ocean. Sediment core location and regional ocean circulation. Abstract Data availability.
485 Scientific Papers Published In 2017 Support A Skeptical Position On Climate Alarm. The Hockey Stick Collapses: 60 New (2016) Scientific Papers Affirm Today’s Warming Isn’t Global, Unprecedented, Or Remarkable. Deconstruction Of The Critical YouTube Response To Our 400+ ‘Skeptical’ Papers Compilation.
Climate and Human Civilization over the last 18,000 years, updated | Andy May Petrophysicist. Climate and Human Civilization for the Past 4,000 Years | Watts Up With That? A beneficial climate change hypothesis | Climate Etc. Effect of Atmospheric CO2 Concentrations on Early Human Societies | Watts Up With That? Nature Unbound III – Holocene climate variability (Part B) | Climate Etc. History Is Clear: Humans Prospered In Climates That Were Warmer Than Today’s…Died In Cooler Ones.
Climate Profoundly Impacted Development Of Civilization…Cool Periods Brought On Plagues/Death. Distinguishing Between ‘Safe’ or ‘Dangerous’ Warming Is Easy: ‘Dangerous’ Warming Is Red. New Paper Asserts ‘Biased’ Climate Models Underestimate Natural Variability And The Warmth Of The Past. 50 Inverted Hockey Sticks – Scientists Find Earth Cools As CO2 Rises.
‘Hide The Decline’ Unveiled: 50 Non-Hockey Stick Graphs Quash Modern ‘Global’ Warming Claims. Hockey Stick Was Refuted Before Its Fabrication – Study Ignored – IPCC And Mann Took World On 10-Year Joyride. 8 New Papers Reveal ‘Natural’ Global Warming Reaches Amplitudes Of 10°C In Just 50 Years With No CO2 Influence. Inconvenient: New paper finds the last interglacial was warmer than today – not simulated by climate models | Watts Up With That? Global Temperature Trends From 2500 B.C. To 2040 A.D. - Principia Scientific International.
2 More New Papers Affirm There Is More Arctic Ice Coverage Today Than During The 1400s. Inconvenient new study: Canadian Arctic had significantly warmer summers a few thousand years ago | Watts Up With That? Climate and People in the Prehistoric Arctic. New paleo reconstruction shows warmer temperatures in Alaska over the past 3000 years | Watts Up With That? Ice Patch Archaeology in Global Perspective: Archaeological Discoveries from Alpine Ice Patches Worldwide and Their Relationship with Paleoclimates. Archaeological Finds in Retreating Swiss Glacier « Climate Audit. Receding Swiss glaciers inconveniently reveal 4000 year old forests – and make it clear that glacier retreat is nothing new | Watts Up With That? Yet another paper demonstrates warmer temperatures 1000 years ago and even 2000 years ago. | Watts Up With That? “We Live In The Coldest Period Of The Last 10.000 Years" | NOT A LOT OF PEOPLE KNOW THAT.
CO2: Ice Cores vs. Plant Stomata | Watts Up With That? New Study: Two Thousand Years of Northern European Summer Temperatures Show a Downward Trend | Watts Up With That? The oldest ice core – Finding a 1.5 million-year record of Earth’s climate | Watts Up With That? Paleo-clamatology | Watts Up With That? A review of temperature reconstructions | Watts Up With That?
HH Lamb–“Climate: Present, Past & Future–Vol 2”–In Review–Part I | NOT A LOT OF PEOPLE KNOW THAT. Parts of Asia Were Warmer During the Holocene Than They Are Now. Climate Study: Scotland Warmer in 1300s, 1500s, and 1730s - Principia Scientific International. A 2000-Year SST History of the Northeastern Arabian Sea. Easterbrook on the magnitude of Greenland GISP2 ice core data | Watts Up With That? Four Centuries of Summer Temperatures in Coastal Northern Japan. Four Centuries of Spring Temperatures in Nepal. A Thousand-Year Drought History of China's Qilian Mountains. An 850-Year Hydroclimatic History of Northwestern China. 1200 Years of Historic Streamflow in the Eastern Great Basin of North America. Modeling 12 Centuries of Northern Hemispheric Hydroclimate. Comparing the Kobashi and Alley Central Greenland Temperature Reconstructions | Andy May Petrophysicist. A Review of Temperature Reconstructions | Andy May Petrophysicist.
Another New Paper Reveals No Discernible Human Influence On Global Ocean Temperatures, Climate. Warmest in a Millll-yun Years « Climate Audit. | <urn:uuid:bccd3200-b6bd-4c9f-bbd0-792859a9440d> | 3.21875 | 4,743 | Content Listing | Science & Tech. | 53.811803 | 95,533,024 |
Image: Courtesy of Paolo Gasparini
Mount Vesuvius, the volcano most famous for blanketing the towns of Pompeii and Herculaneum with volcanic ash and debris in 79 A.D., may be sitting atop a reservoir of magma that covers more than 400 square kilometers, a new study suggests. The finding, reported in the current issue of the journal Science by a group of Italian and French scientists, may lead to more accurate monitoring of the area surrounding the volcano.
Building on previous work that suggested the presence of a magma zone underneath Vesuvius, Emmanuel Auger of the Università di Napoli Federico II in Naples, Italy and colleagues employed seismic tomography to estimate its size. The scientists produced seismic waves and traced their paths through the zone beneath Vesuvius. Using the speed and direction of the waves, they compiled an image of the crust under the volcano. The picture that emerged, the researchers report, includes a magma reservoir buried eight kilometers deep in the earth’s crust that extends over at least 400 square kilometers. "This also tells us that there is a huge amount of available magma under Vesuvius," co-author Paolo Gasparini says. "It was really unexpected for the reservoir to be that size, so very wide and large."
A better understanding of the reservoir’s structure, location and volume, the authors write, "can be used to help prediction of the scenario of the next eruption and to interpret the pattern of the expected precursory seismic activity and ground deformation." Unfortunately for the region’s inhabitants, however, it can’t help predict when the next eruption will occur.
Sarah Graham | Scientific American
| <urn:uuid:d3d4b9fa-7a16-43b7-bb09-cb4ce5665641> | 3.6875 | 975 | Content Listing | Science & Tech. | 39.328983 | 95,533,029 |
Researchers at the University of Illinois at Urbana-Champaign have introduced a new class of light-emitting quantum dots (QDs) with tunable and equalized fluorescence brightness across a broad range of colors. This results in more accurate measurements of molecules in diseased tissue and improved quantitative imaging capabilities.
"In this work, we have made two major advances--the ability to precisely control the brightness of light-emitting particles called quantum dots, and the ability to make multiple colors equal in brightness," explained Andrew M. Smith, an assistant professor of bioengineering at Illinois.
Left: Conventional fluorescent materials like quantum dots and dyes have mismatched brightness between different colors. When these materials are administered to a tumor (shown below) to measure molecular concentrations, the signals are dominated by the brighter fluorophores. Right: New brightness-equalized quantum dots that have equal fluorescence brightness for different colors. When these are administered to tumors, the signals are evenly matched, allowing measurement of many molecules at the same time.
Credit: University of Illinois
"Previously light emission had an unknown correspondence with molecule number. Now it can be precisely tuned and calibrated to accurately count specific molecules. This will be particularly useful for understanding complex processes in neurons and cancer cells to help us unravel disease mechanisms, and for characterizing cells from diseased tissue of patients."
"Fluorescent dyes have been used to label molecules in cells and tissues for nearly a century, and have molded our understanding of cellular structures and protein function. But it has always been challenging to extract quantitative information because the amount of light emitted from a single dye is unstable and often unpredictable.
Also the brightness varies drastically between different colors, which complicates the use of multiple dye colors at the same time. These attributes obscure correlations between measured light intensity and concentrations of molecules," stated Sung Jun Lim, a postdoctoral fellow and first author of the paper, "Brightness-Equalized Quantum Dots," published this week in Nature Communications.
According to the researchers, these new materials will be especially important for imaging in complex tissues and living organisms where there is a major need for quantitative imaging tools, and can provide a consistent and tunable number of photons per tagged biomolecule.
They are also expected to be used for precise color matching in light-emitting devices and displays, and for photon-on-demand encryption applications. The same principles should be applicable across a wide range of semiconducting materials.
"The capacity to independently tune the QD fluorescence brightness and color has never before been possible, and these BE-QDs now provide this capability," said Lim. "We have developed new materials-engineering principles that we anticipate will provide a diverse range of new optical capabilities, allow quantitative multicolor imaging in biological tissue, and improve color tuning in light-emitting devices.
In addition, BE-QDs maintain their equal brightness over time, whereas conventional QDs with mismatched brightness become further mismatched over time. These attributes should lead to new LEDs and display devices not only with precisely matched colors--better color accuracy and brightness--but also with improved performance lifetime and improved ease of manufacturing." QDs are already in use in display devices (e.g. Amazon Kindle and a new Samsung TV).
In addition to Lim and Smith, co-authors include Mohammad U. Zahid, Phuong Le, Liang Ma, Bioengineering at Illinois; David Entenberg, Allison S. Harney, and John Condeelis, Albert Einstein College of Medicine, New York.
Andrew M. Smith | EurekAlert!
| <urn:uuid:b5f3b41a-7331-4137-ae23-ca0a0dec1e7f> | 3.171875 | 1,378 | Content Listing | Science & Tech. | 27.648002 | 95,533,030 |
Connected and mobile devices putting a strain on the environment
Wednesday, January 10, 2018
The always-on, internet-of-everything age of productivity in which we live actually has a down side (other than our constantly being connected to a device). These electronics cause a lot of emissions to be produced and might eventually stress the planet's power grid.
According to new research, the billions of internet-connected devices could produce up to 3.5 percent of global emissions within 10 years and a whopping 14 percent by 2040.
Additionally, these devices could use up to 20 percent of the world's electricity by 2025, hampering attempts to meet climate change targets and straining grids as demand by power-hungry server farms storing digital data from billions of smartphones, tablets and internet-connected devices grows exponentially.
Per a report by Climate Home News, global computing power demand from internet-connected devices, high-resolution video streaming, emails and even smart TVs is increasing 20 percent per year, consuming roughly 5 percent of the world's electricity in 2015.
Researchers say the industry's emissions could produce more pollution than any other country except the U.S., China and India, and industry power demand could increase from between 200 and 300 terawatt hours (TWh) of electricity a year now to 1,200 or even 3,000 TWh by 2025.
The digitalization of nearly everything — from watches and phones to TVs and home appliances — is pushing us to a point unseen in human history, and there is no end in sight as new technology generations continue to evolve (fifth-generation mobile technology is coming soon). The data avalanche is bearing down on us with increasing intensity.
U.S. researchers "expect power consumption to triple in the next five years as 1 billion more people come online in developing countries, and the internet-of-things, driverless cars, robots, video surveillance and artificial intelligence grows exponentially in rich countries."
Estimates were 8.4 billion connected things in 2017, setting the stage for 20.4 billion internet-of-things devices to be deployed by 2020, according to analysts at Gartner. Also, global internet traffic could increase up to threefold in the next five years, the Cisco Visual Networking Index reports.
A 2016 Berkeley laboratory report for the U.S. government says the country's data centers, which held about 350 million terabytes of data in 2015, may need more than 100 TWh of electricity a year by 2020 — the equivalent of 10 large nuclear power stations. Data center growth has been on the rise in Europe, too.
"More than 1 billion new internet users are expected, growing from 3 billion in 2015 to 4.1 billion by 2020. Over the next five years global IP networks will support up to 10 billion new devices and connections, increasing from 16.3 billion in 2015 to 26 billion by 2020," says Cisco.
Even Greenpeace is concerned: analysts there say only about 20 percent of the electricity used in the world's data centers is renewable; 80 percent of the power comes from fossil fuels. Massive power savings in future generations of the technology could help curb energy use, but there is no guarantee that consumers would use their devices any more carefully or responsibly.
But demand being what it is, use of resources to power these devices will likely only increase, putting even more pressure on the world's power grid — and the environment.
| <urn:uuid:c617545e-b305-454c-95a4-f0c56bf67433> | 2.734375 | 883 | Truncated | Science & Tech. | 42.454822 | 95,533,037 |
In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system. A semaphore is simply a variable that is used to solve critical-section problems and to achieve process synchronization in a multiprocessing environment.
A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.
A useful way to think of a semaphore as used in real-world systems is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e. to avoid race conditions) as units are acquired or become free, and, if necessary, to wait until a unit of the resource becomes available.
Semaphores are a useful tool in the prevention of race conditions; however, their use is by no means a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.
The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as THE multiprogramming system.
Suppose a library has 10 identical study rooms, to be used by one student at a time. Students must request a room from the front desk if they wish to use a study room. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free.
In the simplest implementation, the clerk at the front desk knows only the number of free rooms available, which they only know correctly if all of the students actually use their room while they've signed up for them and return them when they're done. When a student requests a room, the clerk decreases this number. When a student releases a room, the clerk increases this number. The room can be used for as long as desired, and so it is not possible to book rooms ahead of time.
In this scenario the front desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7 and so on. If someone requests a room and the resulting value of the semaphore would be negative, they are forced to wait until a room is freed (when the count is increased from 0). If one of the rooms was released, but there are several students waiting, then any method can be used to select the one who will occupy the room (like FIFO or flipping a coin). And of course, a student needs to inform the clerk about releasing their room only after really leaving it, otherwise, there can be an awkward situation when such student is in the process of leaving the room (they are packing their textbooks, etc.) and another student enters the room before they leave it.
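As a concrete illustration, the clerk's counter can be sketched with GHC's counting semaphore from Control.Concurrent.QSem; the room count, number of students and delays below are made up for the example:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.QSem (newQSem, waitQSem, signalQSem)
    import Control.Monad (forM_)

    main :: IO ()
    main = do
      rooms <- newQSem 10            -- the clerk's count: 10 free rooms
      forM_ [1 .. 15 :: Int] $ \student ->
        forkIO $ do
          waitQSem rooms             -- P: ask the clerk for a room (blocks if none is free)
          putStrLn ("student " ++ show student ++ " occupies a room")
          threadDelay 100000         -- study for 0.1 s
          signalQSem rooms           -- V: tell the clerk the room is free again
      threadDelay 1000000            -- crude: give the forked threads time to finish

With 15 students and 10 rooms, five of the forked threads block in waitQSem until an earlier student signals the semaphore.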
When used to control access to a pool of resources, a semaphore tracks only how many resources are free; it does not keep track of which of the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource.
The paradigm is especially powerful because the semaphore count may serve as a useful trigger for a number of different actions. The librarian above may turn the lights off in the study hall when there are no students remaining, or may place a sign that says the rooms are very busy when most of the rooms are occupied.
The success of the protocol requires applications to follow it correctly. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically, hang or crash) if even a single process acts incorrectly. This includes:
- requesting a resource and forgetting to release it;
- releasing a resource that was never requested;
- holding a resource for a long time without needing it;
- using a resource without requesting it first (or after releasing it).
Even if all processes follow these rules, multi-resource deadlock may still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by the dining philosophers problem.
Counting semaphores are equipped with two operations, historically denoted as P and V (see § Operation names for alternative names). Operation V increments the semaphore S, and operation P decrements it.
The value of the semaphore S is the number of units of the resource that are currently available. The P operation wastes time or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. One important property of semaphore S is that its value cannot be changed except by using the V and P operations.
A simple way to understand wait (P) and signal (V) operations is:
- wait: decrements the value of the semaphore variable by 1. If no unit of the resource is available, the process executing wait is blocked (added to the semaphore's queue); otherwise it continues, having claimed a unit of the resource.
- signal: increments the value of the semaphore variable by 1. If there are processes blocked on the semaphore's queue, one of them is unblocked.
Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily.
The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in Unix. The modified V and P operations are as follows, using square brackets to indicate atomic operations, i.e., operations which appear indivisible from the perspective of other processes:
    function V(semaphore S, integer I):
        [S ← S + I]

    function P(semaphore S, integer I):
        repeat:
            [if S >= I:
                 S ← S − I
                 break]
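GHC's quantity semaphore, Control.Concurrent.QSemN, exposes exactly this multi-unit interface, with waitQSemN playing P and signalQSemN playing V; the pool size and quantities below are arbitrary:

    import Control.Concurrent.QSemN (newQSemN, waitQSemN, signalQSemN)

    main :: IO ()
    main = do
      pool <- newQSemN 5      -- a semaphore holding five units
      waitQSemN pool 3        -- P(pool, 3): atomically claim three units
      putStrLn "claimed 3 of 5 units"
      signalQSemN pool 3      -- V(pool, 3): return the three units
      putStrLn "returned them"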
However, the remainder of this section refers to semaphores with unary V and P operations, unless otherwise specified.
To avoid starvation, a semaphore has an associated queue of processes (usually with FIFO semantics). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities the queue may be ordered by priority, so that the highest priority process is taken from the queue first.
If the implementation does not ensure atomicity of the increment, decrement and comparison operations, then there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that is able to read, modify and write the semaphore in a single operation. In the absence of such a hardware instruction, an atomic operation may be synthesized through the use of a software mutual exclusion algorithm. On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock command.
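As an illustration of building a semaphore on top of a primitive that already guarantees atomic access, here is a toy counting semaphore in Haskell built on an MVar, whose takeMVar/putMVar pair plays the role of the locking variable. Unlike a real OS primitive it retries in a loop (busy waiting), and it is a sketch rather than how GHC's own semaphores are implemented:

    import Control.Concurrent (yield)
    import Control.Concurrent.MVar (MVar, newMVar, takeMVar, putMVar)

    type Sem = MVar Int

    -- P: claim one unit, retrying until one is free
    p :: Sem -> IO ()
    p s = do
      n <- takeMVar s                       -- exclusive access to the counter
      if n > 0
        then putMVar s (n - 1)              -- a unit was free: take it
        else putMVar s n >> yield >> p s    -- none free: release the lock and retry

    -- V: return one unit
    v :: Sem -> IO ()
    v s = do
      n <- takeMVar s
      putMVar s (n + 1)

    main :: IO ()
    main = do
      s <- newMVar 1                        -- a binary semaphore: one unit
      p s >> putStrLn "acquired" >> v s >> putStrLn "released"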
Consider a variable A and a boolean variable S. A is only accessed when S is marked true. Thus, S is a semaphore for A.
One can imagine a stoplight signal (S) just before a train station (A). In this case, if the signal is green, then one can enter the train station. If it is yellow or red (or any other color), the train station cannot be accessed.
Consider a system that can only support ten users (S=10). Whenever a user logs in, P is called, decrementing the semaphore S by 1. Whenever a user logs out, V is called, incrementing S by 1, representing a login slot that has become available. When S is 0, any users wishing to log in must wait until S > 0; their login requests are enqueued onto a FIFO queue, and mutual exclusion is used to ensure that requests are enqueued in order. Whenever S becomes greater than 0 (login slots available), a login request is dequeued, and the user owning the request is allowed to log in.
In the producer-consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N and are subject to the following conditions:
- the consumer must wait for the producer to produce something if the queue is empty;
- the producer must wait for the consumer to consume something if the queue is full.
The semaphore solution to the producer-consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources.

The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively a mutex could be used in place of the binary semaphore.

Initially, emptyCount is N, fullCount is 0, and useQueue is 1.
The producer does the following repeatedly:
    produce:
        P(emptyCount)
        P(useQueue)
        putItemIntoQueue(item)
        V(useQueue)
        V(fullCount)
The consumer does the following repeatedly:

    consume:
        P(fullCount)
        P(useQueue)
        item ← getItemFromQueue()
        V(useQueue)
        V(emptyCount)
Below is a substantive example:
- A single consumer enters its critical section. Since fullCount is 0, the consumer blocks.
- Several producers enter the producer critical section. No more than N producers may enter their critical section due to emptyCount constraining their entry.
- The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue.
- Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section.

Note that emptyCount may be much lower than the actual number of empty places in the queue, for example in the case where many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections.
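The scheme above translates almost mechanically to Haskell: two counting semaphores play emptyCount and fullCount, and an MVar holding the queue doubles as useQueue. The capacity, item values and use of a plain list are all choices made up for this sketch:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newMVar, takeMVar, putMVar)
    import Control.Concurrent.QSem (newQSem, waitQSem, signalQSem)
    import Control.Monad (forM_)

    main :: IO ()
    main = do
      let n = 4                              -- queue capacity N
      emptyCount <- newQSem n                -- initially N empty places
      fullCount  <- newQSem 0                -- initially no items
      useQueue   <- newMVar []               -- the queue, guarded like a binary semaphore

      _ <- forkIO $ forM_ [1 .. 10 :: Int] $ \item -> do   -- producer
             waitQSem emptyCount                           -- P(emptyCount)
             q <- takeMVar useQueue                        -- P(useQueue)
             putMVar useQueue (q ++ [item])                -- enqueue; V(useQueue)
             signalQSem fullCount                          -- V(fullCount)

      forM_ [1 .. 10 :: Int] $ \_ -> do                    -- consumer (main thread)
        waitQSem fullCount                                 -- P(fullCount)
        q <- takeMVar useQueue                             -- P(useQueue)
        case q of
          item : rest -> do
            putMVar useQueue rest                          -- dequeue; V(useQueue)
            signalQSem emptyCount                          -- V(emptyCount)
            print item
          [] -> putMVar useQueue q                         -- unreachable: fullCount > 0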
The canonical names V and P come from the initials of Dutch words. V is generally explained as verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"), passeren ("pass"), and pakken ("grab"). Dijkstra's earliest paper on the subject gives passering ("passing") as the meaning for P, and vrijgave ("release") as the meaning for V. It also mentions that the terminology is taken from that used in railroad signals. Dijkstra subsequently wrote that he intended P to stand for the portmanteau prolaag, short for probeer te verlagen, literally "try to reduce", or to parallel the terms used in the other case, "try to decrease".

In ALGOL 68, the Linux kernel, and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait, release and acquire (which the standard Java library uses), or post and pend. Some texts call them vacate and procure to match the original Dutch initials.
Semaphores vs. mutexes
A mutex is essentially the same thing as a binary semaphore and sometimes uses the same basic implementation. The differences between them are in how they are used. While a binary semaphore may be used as a mutex, a mutex is a more specific use-case, in that only the thread that locked the mutex is supposed to unlock it. This constraint makes it possible to implement some additional features in mutexes:
- Since only the thread that locked the mutex is supposed to unlock it, a mutex may store the id of the thread that locked it and verify the same thread unlocks it.
- Mutexes may provide priority inversion safety. If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that thread whenever a higher-priority task starts waiting on the mutex.
- Mutexes may also provide deletion safety, where the thread holding the mutex cannot be accidentally deleted.
- Alternately, if the thread holding the mutex is deleted (perhaps due to an unrecoverable error), the mutex can be automatically released.
- A mutex may be recursive: a thread is allowed to lock it multiple times without causing a deadlock. | <urn:uuid:b6cac600-d785-467f-9267-83a026e15a07> | 3.4375 | 2,827 | Knowledge Article | Software Dev. | 35.90794 | 95,533,038 |
While the article examining bind starts from the bind operator and then converts it to do notation, this article starts from do and reverts it back to monadic code using a few operators.
I also add the Kleisli fish operator, which is very useful as a shortcut in a do notation.
This tutorial/guidance/article is one of several parts.
References: About Monad.
Examining Bind: the bind (>>=) operator. Hello World example.
The (<$>) operator. Personal notes. Example using Number.
Monadic Operator: Fish.
The first part is an overview, then some references. The last three parts are all about example code.
If you need to know more about an operator, you can just ask GHCi (for example with :info).
And if you are curious about other operators, you can read this article and have fun.
References for the then (>>) operator and the bind (>>=) operator can be found here.
Think Function in Haskell as Math Equation
Suppose we have this very short function.
Think Action in Haskell as Procedure, Sequence of Command
We can rewrite this as an action
Instead of using the do special notation, we can desugar the action to the then (>>) operator as below.
Writing it as a oneliner would make this action look exactly like a function. And in fact, it is just a function.
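A hello-world flavoured sketch of this, with the do version and its desugared oneliner side by side (the strings are arbitrary):

    main :: IO ()
    main = do
      putStr "Hello "
      putStrLn "World"

    -- the same action desugared, as a oneliner
    main' :: IO ()
    main' = putStr "Hello " >> putStrLn "World"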
The then (>>) operator is the bind (>>=) operator, except that it ignores its input. We need another example, one containing input.
The do notation does avoid coding horror, but sooner or later we need to know what is inside.
Consider this action, where each command has a Maybe String result type. We can desugar the action above into vanilla monadic code without do. Or you can make it a oneliner of vanilla monadic code.
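A sketch of such a Maybe String action; the helper halfText is invented for the example:

    -- succeeds only for even-length strings, returning the first half
    halfText :: String -> Maybe String
    halfText s =
      if even (length s)
        then Just (take (length s `div` 2) s)
        else Nothing

    -- do notation: each command has a Maybe String result type
    quarterText :: String -> Maybe String
    quarterText s = do
      h <- halfText s
      q <- halfText h
      return q

    -- the same action as a oneliner of vanilla monadic code
    quarterText' :: String -> Maybe String
    quarterText' s = halfText s >>= halfText >>= return

    main :: IO ()
    main = do
      print (quarterText "abcdefgh")   -- Just "ab"
      print (quarterText' "abc")       -- Nothing: odd length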
Here is another IO example, showing your home directory.
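A sketch of that example, assuming the directory package's getHomeDirectory:

    import System.Directory (getHomeDirectory)

    main :: IO ()
    main = do
      home <- getHomeDirectory
      putStrLn ("Home directory: " ++ home)

    -- desugared with bind
    main' :: IO ()
    main' = getHomeDirectory >>= \home -> putStrLn ("Home directory: " ++ home)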
There is also a good example here.
Operation inside Bind
A monad is a way to unwrap stuff, do something with it, and wrap the result. Or better: a monad gives a method access to work inside the wrapped stuff, without unwrapping it.
The following list example will show that Monad is overloaded for different types; every monad has its own implementation. This will display each combination, and it can be desugared into a oneliner of vanilla monadic code.
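For instance, bind on the list monad is concatMap, so the same do shape enumerates combinations; the numbers are arbitrary:

    pairs :: [(Int, Int)]
    pairs = do
      x <- [1, 2]
      y <- [10, 20]
      return (x, y)

    -- desugared oneliner
    pairs' :: [(Int, Int)]
    pairs' = [1, 2] >>= \x -> [10, 20] >>= \y -> return (x, y)

    main :: IO ()
    main = print pairs >> print pairs'
    -- both print [(1,10),(1,20),(2,10),(2,20)]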
The Monad Class
It is clear that a sequence of commands in do notation is just functions chained together with the bind (>>=) and then (>>) operators. Now the next question: what do these two operators have to do with monads? Here is the definition of Monad in the Prelude.
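Abridged to the two operators discussed here, the Prelude definition reads:

    class Monad m where
      (>>=)  :: m a -> (a -> m b) -> m b   -- bind
      (>>)   :: m a -> m b -> m b          -- then
      return :: a -> m a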
Those two are the monad operators.
What other practical uses are monads good for? There are many, I guess. One of them is the Kleisli arrow.

The Kleisli arrow does function composition, just like the (.) operator, except that it performs monadic effects. A reference about the Kleisli arrow (<=<) operator can be found here. It looks like a fish, so we can call it the fish operator.
Consider this function
We can avoid the closing bracket by using function application ($). Or we can join two functions with function composition (.).
Consider this action
Since the output type is IO String while the expected input type is a plain String, the composition has to be monadic. The Kleisli arrow performs this well. Or use Kleisli composition with the reversed arrow (>=>).
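A minimal sketch of both fish operators over Maybe; the function half is invented for the example:

    import Control.Monad ((<=<), (>=>))

    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    -- plain composition (.) would not typecheck here,
    -- because half returns Maybe Int rather than Int
    quarter :: Int -> Maybe Int
    quarter = half <=< half      -- right-to-left, like (.)

    quarter' :: Int -> Maybe Int
    quarter' = half >=> half     -- the reversed arrow

    main :: IO ()
    main = do
      print (quarter 12)   -- Just 3
      print (quarter' 6)   -- Nothing: half 3 fails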
Thank you for Reading.
| <urn:uuid:9780394b-8cb7-4172-a196-3c3bb1c54088> | 3.9375 | 888 | Personal Blog | Software Dev. | 44.327283 | 95,533,051 |
You probably have seen or read news stories about fascinating ancient artifacts.
At an archaeological dig, a piece of wooden tool is unearthed and the archaeologist finds it to be 5,000 years old.
Radiocarbon dating (usually referred to simply as carbon-14 dating) is a radiometric dating method.
It is not possible to predict when an individual nucleus in a radioactive material will decay, but it is possible to measure the time taken for half of the nuclei in a radioactive material to decay. This is called the half-life of the radioactive material or radioisotope. The absolute radiocarbon standard is 1890 wood: ninety-five percent of the activity of oxalic acid from the year 1950 is equal to the measured activity of this standard. This is the International Radiocarbon Dating Standard. | <urn:uuid:48df25e4-346d-427e-874c-5f7092643e71> | 3.703125 | 165 | Knowledge Article | Science & Tech. | 39.742619 | 95,533,068 |
Empirical observations of the spawning migration of European eels: The long and dangerous road to the Sargasso Sea
Journal article, Peer reviewed
Original version: Science Advances 2016, 2. doi:10.1126/sciadv.1501694
The spawning migration of the European eel (Anguilla anguilla L.) to the Sargasso Sea is one of the greatest animal migrations. However, the duration and route of the migration remain uncertain. Using fishery data from 20 rivers across Europe, we show that most eels begin their oceanic migration between August and December. We used electronic tagging techniques to map the oceanic migration from eels released from four regions in Europe. Of 707 eels tagged, we received 206 data sets. Many migrations ended soon after release because of predation events, but we were able to reconstruct in detail the migration routes of >80 eels. The route extended from western mainland Europe to the Azores region, more than 5000 km toward the Sargasso Sea. All eels exhibited diel vertical migrations, moving from deeper water during the day into shallower water at night. The range of migration speeds was 3 to 47 km day−1. Using data from larval surveys in the Sargasso Sea, we show that spawning likely begins in December and peaks in February. Synthesizing these results, we show that the timing of autumn escapement and the rate of migration are inconsistent with the century-long held assumption that eels spawn as a single reproductive cohort in the spring time following their escapement. Instead, we suggest that European eels adopt a mixed migratory strategy, with some individuals able to achieve a rapid migration, whereas others arrive only in time for the following spawning season. Our results have consequences for eel management. | <urn:uuid:f390bbb3-26ca-4184-bebd-ef46ca2334c2> | 2.84375 | 382 | Academic Writing | Science & Tech. | 36.128152 | 95,533,096 |
You might not think a lone E. coli bacterium has much in the way of memory. But now, researchers have hacked their DNA so that they can store memories of their environment — working much like an old tape recorder.
New Scientist reports that chunks of DNA code in E. coli called retrons carry the genetic code for enzymes that generate new strands of DNA that are inserted into the genome. Now, they have been engineered so that they produce DNA that corresponds to detection of certain ambient conditions in the surrounding environment — the presence of a certain chemical, say, or bright light. That new chunk of DNA is then effectively a memory of what's happened around them. The results are published in Science.
What's interesting, though, is that the sensing and recording is only partially efficient. New Scientist explains:
As time goes by, more of the cells will respond to the input and record the memory. By calculating at a certain point how many of the cells carry the memory, it's possible to work out either the input's strength, or the length of exposure... It's a signal that accumulates over time rather than an all-or-nothing switch... In other words, it's analogue rather than digital.
It may sound like a step back, but the researchers claim it could be used to great effect in the human body, sensing and recording events and exposures that cause damage to our cells, in the long-run providing the inside story of health inside our body. [Science via New Scientist]
Image by Sanofi Pasteur | <urn:uuid:4df29c16-c42a-47e9-9866-4718983aa455> | 3.921875 | 313 | News Article | Science & Tech. | 57.116245 | 95,533,100 |
In physics, a Dirac fermion is a fermion which is not its own antiparticle. The vast majority of particles fall under this category, as they are not their own antiparticles, and in particle physics all fermions in the standard model, except possibly neutrinos, are Dirac fermions. They are named for Paul Dirac, and can be modeled with the Dirac equation.
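In natural units (ħ = c = 1), the Dirac equation for a spin-1/2 field ψ of mass m reads:

    (i\gamma^{\mu}\partial_{\mu} - m)\,\psi = 0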
A Dirac fermion is equivalent to two Weyl fermions. The counterpart to a Dirac fermion is a Majorana fermion, a particle that is its own antiparticle.
In condensed matter physics, low-energy excitations in graphene and topological insulators, among others, are fermionic quasiparticles described by a pseudo-relativistic Dirac equation. | <urn:uuid:b3517260-6fee-47c8-8f37-99db1f5ed125> | 2.78125 | 171 | Knowledge Article | Science & Tech. | 25.948485 | 95,533,102 |
Isenthalpic process
In a steady-state, steady-flow process, significant changes in pressure and temperature can occur to the fluid, and yet the process will be isenthalpic if there is no transfer of heat to or from the surroundings, no work done on or by the surroundings, and no change in the kinetic energy of the fluid. (If a steady-state, steady-flow process is analysed using a control volume, everything outside the control volume is considered to be the surroundings.)
The throttling process is a good example of an isenthalpic process. Consider the lifting of a relief valve or safety valve on a pressure vessel. The specific enthalpy of the fluid inside the pressure vessel is the same as the specific enthalpy of the fluid as it escapes from the valve. With a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid.
In an isenthalpic process, the specific enthalpy of the fluid is constant: dh = 0, that is, h1 = h2 between the initial and final states.
- G. J. Van Wylen and R. E. Sonntag (1985), Fundamentals of Classical Thermodynamics, John Wiley & Sons, Inc., New York ISBN 0-471-82933-1
- Atkins, Peter; Julio de Paula (2006). Atkin's Physical Chemistry. Oxford: Oxford University Press. p. 64. ISBN 978-0-19-870072-2.
- G. J. Van Wylen and R. E. Sonntag, Fundamentals of Classical Thermodynamics, Section 5.13 (3rd edition).
- G. J. Van Wylen and R. E. Sonntag, Fundamentals of Classical Thermodynamics, Section 2.1 (3rd edition).
| <urn:uuid:bd6a0d27-1924-4cb3-9d37-03f5b34f2911> | 2.609375 | 418 | Knowledge Article | Science & Tech. | 59.351253 | 95,533,109 |
The Red River is a vital source of water for the Chickasaw and Choctaw Tribes. Learn how the South Central CASC is modeling how stream flow in the basin might change, as conditions become hotter and drier.
Lack of connected habitats in the face of warming temperatures and urbanization is one of the biggest threats facing wildlife. Learn how the Southeast CASC is mapping landscape connections in the region and examining how these connections might change.
The native westslope cutthroat trout has drawn generations of fly-fishers to the remote Flathead River system in western Montana. Learn about the Northwest CASC's research on what warming waters mean for the future of this iconic fish.
If you're attending The Wildlife Society's 2017 Annual Conference (September 23-27, 2017) in Albuquerque, NM, be sure and check out these presentations from staff and partners of the Climate Science Centers! | <urn:uuid:bb67d560-ede9-4ffd-bf11-a8ffb32e4e09> | 3.28125 | 185 | News (Org.) | Science & Tech. | 46.690403 | 95,533,128 |
X rays are invisible, highly penetrating electromagnetic radiation of much shorter wavelength (higher frequency) than visible light. The wavelength range for X rays is from about 10^−8 m to about 10^−11 m, or from less than a billionth of an inch to less than a trillionth of an inch; the corresponding frequency range is from about 3 × 10^16 Hz to about 3 × 10^19 Hz (1 Hz = 1 cps).
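These endpoints are consistent with the relation ν = c/λ between frequency and wavelength; for example:

    \nu = \frac{c}{\lambda} = \frac{3\times10^{8}\ \mathrm{m/s}}{10^{-8}\ \mathrm{m}} = 3\times10^{16}\ \mathrm{Hz},
    \qquad
    \frac{3\times10^{8}\ \mathrm{m/s}}{10^{-11}\ \mathrm{m}} = 3\times10^{19}\ \mathrm{Hz}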
An important source of X rays is synchrotron radiation. X rays are also produced in a highly evacuated glass bulb, called an X-ray tube, that contains essentially two electrodes—an anode made of platinum, tungsten, or another heavy metal of high melting point, and a cathode. When a high voltage is applied between the electrodes, streams of electrons (cathode rays) are accelerated from the cathode to the anode and produce X rays as they strike the anode.
Two different processes give rise to radiation of X-ray frequency. In one process radiation is emitted by the high-speed electrons themselves as they are slowed or even stopped in passing near the positively charged nuclei of the anode material. This radiation is often called bremsstrahlung [Ger.,=braking radiation]. In a second process radiation is emitted by the electrons of the anode atoms when incoming electrons from the cathode knock electrons near the nuclei out of orbit and they are replaced by other electrons from outer orbits. The spectrum of frequencies given off with any particular anode material thus consists of a continuous range of frequencies emitted in the first process, and superimposed on it a number of sharp peaks of intensity corresponding to discrete frequencies at which X rays are emitted in the second process. The sharp peaks constitute the X-ray line spectrum for the anode material and will differ for different materials.
Most applications of X rays are based on their ability to pass through matter. This ability varies with different substances; e.g., wood and flesh are easily penetrated, but denser substances such as lead and bone are more opaque. The penetrating power of X rays also depends on their energy. The more penetrating X rays, known as hard X rays, are of higher frequency and are thus more energetic, while the less penetrating X rays, called soft X rays, have lower energies. X rays that have passed through a body provide a visual image of its interior structure when they strike a photographic plate or a fluorescent screen; the darkness of the shadows produced on the plate or screen depends on the relative opacity of different parts of the body.
Photographs made with X rays are known as radiographs or skiagraphs. Radiography has applications in both medicine and industry, where it is valuable for diagnosis and nondestructive testing of products for defects. Fluoroscopy is based on the same techniques, with the photographic plate replaced by a fluorescent screen (see fluorescence; fluoroscope); its advantages over radiography in time and cost are balanced by some loss in sharpness of the image. X rays are also used with computers in CAT (computerized axial tomography) scans to produce cross-sectional images of the inside of the body.
Another use of radiography is in the examination and analysis of paintings, where studies can reveal such details as the age of a painting and underlying brushstroke techniques that help to identify or verify the artist. X rays are used in several techniques that can provide enlarged images of the structure of opaque objects. These techniques, collectively referred to as X-ray microscopy or microradiography, can also be used in the quantitative analysis of many materials. One of the dangers in the use of X rays is that they can destroy living tissue and can cause severe skin burns on human flesh exposed for too long a time. This destructive power is used in X-ray therapy to destroy diseased cells.
X rays were discovered in 1895 by W. C. Roentgen, who called them X rays because their nature was at first unknown; they are sometimes also called Roentgen, or Röntgen, rays. X-ray line spectra were used by H. G. J. Moseley in his important work on atomic numbers (1913) and also provided further confirmation of the quantum theory of atomic structure. Also important historically is the discovery of X-ray diffraction by Max von Laue (1912) and its subsequent application by W. H. and W. L. Bragg to the study of crystal structure.
| <urn:uuid:5bbc883c-fa94-46aa-9a70-fb238ad214dd> | 4.09375 | 1,058 | Knowledge Article | Science & Tech. | 46.381308 | 95,533,130 |
Miniaturization is invading the world of chemical syntheses. Since typical chemical syntheses take place in several reaction steps with various separation or purification steps in between, microchemistry has almost always been limited to one-step reactions or sequences of reactions requiring no purification between steps.
Researchers at the Massachusetts Institute of Technology (MIT) have now produced an integrated multiple-step microscale production line. As reported in the journal Angewandte Chemie, their process includes three reaction steps and two separation processes (one gas–liquid and one liquid–liquid separation). Because it is arranged in a microscale reaction network, it is even possible to configure this process so that related compounds can be simultaneously produced in parallel.
To fully exploit the potential of microscale reaction technology, it is crucial to integrate the necessary separation steps. A team headed by Klavs F. Jensen has recently developed an efficient microfluidic separation technique and has now integrated this concept into a continuously operating, three-step reaction system. Microscale separations are driven by different principles than separations at normal scale, because in microfluidic systems, surface tension forces dominate over gravity.
This is how the microfluidic separation works: A porous separation membrane made of a fluoropolymer is coated with the organic phase of the mixture, which can “sneak through” the fine pores in the membrane. The aqueous phase to be separated off cannot coat the pores that have already been coated by the organic phase, because the two liquids are not miscible; the water can thus not pass through the membrane. The second separation, a gas–liquid separation, is based on the same principle: In this case, the liquid, which contains the intermediate product, wets the membrane and passes through the pores. Meanwhile, the coated membrane blocks the nitrogen gas that is released during the reaction.
To demonstrate their system, the researchers chose the synthesis of carbamates, compounds that are used as pesticides, among other things, and are important building blocks and reagents in chemical syntheses. The three-step synthesis used to make carbamates (the Curtius Rearrangement) involves intermediate products (azides, isocyanates) that have the potential to be dangerous, since some of these types of compounds pose an explosive or health hazard. The advantage of the microscale reaction system is that these intermediates are formed in situ and are then immediately consumed, so they don’t need to be isolated or stored.
If, after the second separation step, the product stream is divided and fed into multiple microreactors, each with a different reagent, a series of different but related carbamates can be produced in parallel.
Author: Klavs F. Jensen, Massachusetts Institute of Technology, Cambridge (USA), http://web.mit.edu/CHEME/people/faculty/jensen.html
Title: Multistep Continuous-Flow Microchemical Synthesis involving Multiple Reactions and Separations
Angewandte Chemie International Edition 2007, 46, No. 30, 5704–5708, doi: 10.1002/anie.200701434
Klavs F. Jensen | Angewandte Chemie
| <urn:uuid:6cb4dbb2-983c-4141-a489-7cfac309ee87> | 3.546875 | 1,305 | Content Listing | Science & Tech. | 32.188511 | 95,533,139 |
In a paper lead authored by Stefanie Rog in the journal Diversity and Distributions, the importance of mangroves for global, terrestrial vertebrates is revealed.
Stefanie conducted a review of the scientific literature published on mangroves, combined with open-source databases (WWF, ARKive and IUCN Red List).
The review found that 464 terrestrial species (320 mammals, 118 reptiles and 26 amphibians) use mangroves; five times more than previously reported. Of the 391 species whose conservation status has been assessed by the IUCN, 35% were classified as threatened. Species were most often reported using mangroves for foraging habitat, followed by refuge, shelter, dispersal and breeding.
The highest alpha diversity of terrestrial vertebrates in mangroves occurs within Asia, northern Australia, West Africa and the Central American land bridge.
The terrestrial components of mangroves are often overlooked by society, and Stef's review extends our knowledge of mangrove forests and brings attention to these vital and undervalued ecosystems.
Read the full review in Diversity and Distributions here. | <urn:uuid:e71e779f-d862-4508-b4b4-e3417e85d3c0> | 3.734375 | 245 | Knowledge Article | Science & Tech. | 22.370954 | 95,533,162 |
Have you ever heard of El Niño? As you may be aware, Australia's weather is influenced by many climate drivers. El Niño and La Niña are two of the strongest influences on year-to-year climate variability in Australia. They are part of a natural cycle known as the El Niño–Southern Oscillation (ENSO) and are associated with a sustained period (many months) of warming (El Niño) or cooling (La Niña) in the central and eastern tropical Pacific. The ENSO cycle loosely operates over timescales from one to eight years.
What is El Niño and why does it have so much influence over our weather?
El Niño is an ocean and atmospheric phenomenon which has a significant impact on our planet’s weather.
While an El Niño event influences the whole world, the main effect is on the Pacific area, which includes Australia. El Niño in Australia means hot sunny weather and drought.
“During El Niño we have the droughts in western Pacific countries like Australia,” says Dr Wenju Cai, a senior principal research scientist at CSIRO Wealth from Oceans Flagship.
El Niño also results in a hotter average temperature for the whole planet by roughly 0.1 to 0.2 degrees, and this is because the associated change in winds lead to the release of heat from the ocean to the atmosphere.
The two strongest El Niños we know of were in 1982–83 and 1997–98. Dubbed ‘super El Niños’, both these events had significant global impacts.
“In 1982-83, Australia suffered one of the biggest droughts and we had the Ash Wednesday bushfires and Melbourne was covered by the dust storm.” says Cai.
“In 1997, over 23,000 people were killed due to extreme events, droughts, floods, cyclones.”
Potential effects of El Niño on Australia include:
- reduced rainfall, particularly across eastern Australia;
- warmer-than-average daytime temperatures;
- increased evaporative demand and bushfire danger.
What causes an El Niño?
An El Niño takes place when sea surface temperatures in the central and eastern tropical Pacific Ocean become substantially warmer than average, causing a shift in atmospheric circulation. Typically, the equatorial trade winds blow from east to west across the Pacific Ocean. El Niño events are associated with a weakening, or even reversal, of the prevailing trade winds.
Warming of ocean temperatures in the central and eastern Pacific makes the area more favourable for tropical rainfall and cloud development. As a result, the heavy rainfall that usually occurs to the north of Australia moves to the central and eastern parts of the Pacific basin.
El Niño years tend to see warmer-than-average temperatures across most of southern Australia, particularly during the second half of the year. In general, decreased cloud cover causes warmer-than-average daytime temperatures, particularly in the spring and summer months. Higher temperatures exacerbate the effect of lower rainfall by increasing evaporative demand. Prior to 2013 (a neutral ENSO year), Australia’s two warmest years for seasonal daytime temperatures for winter (2009 and 2002), spring (2006 and 2002) and summer (1982–83 and 1997–98) had all taken place during an El Niño. The warmth of recent El Niño events has been amplified by background warming trends, meaning that El Niño years have tended to get warmer since the 1950s.
Australian winter-spring mean max. temperature deciles averaged for 12 strong El Niño events
Shift in temperature extremes
For temperature extremes, there are three different measures of heat relevant to El Niño: wide-area heatwaves (as indicated by a very warm national area-average temperature); single-day extremes at specific point locations; and long-duration warm spells. The relationship of El Niño with each of these elements may be quite different, and location dependent.
During the warmer half of the year, weather systems tend to be more mobile in El Niño years, with fewer blocking (stationary) high-pressure systems. For southern coastal locations such as Adelaide and Melbourne, this means individual daily heat extremes tend to be more intense (hotter) during El Niño years, but prolonged warm spells are less frequent. Further north, El Niño is associated with an increase in both individual extreme hot days and multi-day warm spells. | <urn:uuid:99bd4737-7ec6-4c87-a9ee-61c39db4548d> | 3.9375 | 888 | Knowledge Article | Science & Tech. | 41.289249 | 95,533,213 |
The smalltooth sawfish is a species closely related to sharks and rays. The body is dark mouse-gray to blackish-brown above and white to grayish-white or pale yellow below, and it is flattened and shark-like in appearance. Like rays, the sawfish has its mouth on its flat underside. At birth, smalltooth sawfish are about 1.97 feet in length, and adults can reach up to 24.9 feet in length. Despite its fearsome appearance, the sawfish is a gentle creature and will not attack humans unless provoked or surprised.
The most striking feature of this fish is its long, toothy saw, or snout, called the rostrum. The rostrum is covered with motion-sensitive pores that allow the sawfish to detect the movement and even the heartbeats of prey that may bury themselves on the ocean floor. It is a quarter of the total length of the body and has between 25 and 32 pairs of small, sharp teeth. Like an aquatic metal detector, the sawfish hovers over the sea floor looking for hidden food such as crabs and shrimp. When prey is detected, the rostrum is used as a digging tool to unearth it. Also, when other suitable prey swims by (such as schools of mullet fish), the sawfish will spring from the bottom and slash furiously with its saw in an attempt to lacerate or stun as many individuals as possible. The saw is also used for defense against predators such as sharks.
This fish can exist in both saltwater and freshwater, and prefers fairly shallow water with muddy or sandy bottoms such as rivers, streams, lakes, creeks, bays, lagoons, and estuaries. Although it prefers depths of no more than 400 feet, it will cross deep oceans to reach new areas of coastline. This species is nocturnal, and spends most of the day sleeping on the sea floor. Hunting is done at night, and diet consists of small crustaceans and fish. Little is known about its life history and reproductive behavior, but females are known to give birth to live pups. The litter size is usually 15 to 20 pups.
All species of sawfish are considered vulnerable or endangered because of population decline. Sawfish are sometimes accidentally caught in fishing nets and they are also hunted for their rostrum, their fins (which are eaten as a delicacy), and their liver oil (used for medicine). Habitat disturbance is also a threat. This species is legally protected in the United States and Australia, and the state of Florida has established three wildlife refuges to protect the habitat of the species.
Smalltooth Sawfish Facts Last Updated: May 11, 2017
To Cite This Page:
Glenn, C. R. 2006. "Earth's Endangered Creatures - Smalltooth Sawfish Facts" (Online).
Accessed 7/22/2018 at http://earthsendangered.com/profile.asp?sp=820&ID=9.
| <urn:uuid:5ff0418a-b2f7-4381-a8c1-d55e3d51101f> | 3.6875 | 737 | Knowledge Article | Science & Tech. | 58.008988 | 95,533,246 |
The Importance of Organic Acidity in Finnish Lakes
A lake survey of 987 randomly selected lakes demonstrated that small lakes throughout Finland have high organic matter concentrations (median TOC concentration 12 mg l−1). This characteristic of Finnish lakes is related to catchments containing large proportions of peatlands and acidic organic soils in coniferous forests. Most of the lakes (88%) have TOC concentrations ≥5 mg l−1. Humic lakes are on average more acid than clear-water lakes. The median pH in the full data set is 6.3. Lakes with pH values lower than 5.5 make up 21% of the total sample, and 93% of these acidic lakes have TOC concentrations ≥5 mg l−1. The organic anion, estimated by ion balance calculations, is the main anion in the full data set. However, in southern parts of the country, where acidic deposition is highest, the anthropogenic contribution to acidity is more important than the catchment-derived organic acidity.
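For readers unfamiliar with the method, the organic anion is typically estimated as the charge-balance residual; a standard formulation (not quoted from this paper) is

$$[\mathrm{A^-}] \approx \left([\mathrm{Ca^{2+}}]+[\mathrm{Mg^{2+}}]+[\mathrm{Na^+}]+[\mathrm{K^+}]+[\mathrm{NH_4^+}]+[\mathrm{H^+}]\right)-\left([\mathrm{SO_4^{2-}}]+[\mathrm{NO_3^-}]+[\mathrm{Cl^-}]+[\mathrm{HCO_3^-}]\right),$$

with all concentrations expressed in µeq l−1.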
Keywords: Total Organic Carbon, Organic Anion, Total Organic Carbon Concentration, Acidic Lake, Humic Lake
| <urn:uuid:35ce0710-4da8-4b8a-8a8a-35ce0710bc63> | 3.125 | 550 | Truncated | Science & Tech. | 54.575008 | 95,533,249 |
A Collatz sequence is the sequence in which, for a given number n, the next number is n/2 if n is even, or 3n+1 if n is odd. The sequence is conjectured always to terminate at 1.
n = 13
c = [13 40 20 10 5 16 8 4 2 1]
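The problem is posed for MATLAB, but the rule is easy to sketch in any language. A minimal Python version (the function name is ours) reproduces the example above:

```python
def collatz(n):
    """Return the Collatz sequence from n down to 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(13))  # [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```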
| <urn:uuid:79d0bb55-414d-4084-84b1-378a8fa9bb53> | 2.84375 | 215 | Product Page | Science & Tech. | 60.402666 | 95,533,253 |
The Mediterranean plant Thapsia garganica (Apiaceae), also known as deadly carrot, produces the highly toxic compound thapsigargin. This compound is a potent inhibitor of the sarcoplasmic-endoplasmic reticulum Ca2+-ATPase calcium pump in mammals and is of industrial importance as the active moiety of the anticancer drug mipsagargin, currently in clinical trials. Thapsigargin is found in most parts of the plant. Ripe fruits contain the highest amount of thapsigargin, at 0.7% to 1.5% of dry weight, followed by roots (0.2%–1.2% of dry weight) and leaves (0.1% of dry weight). It is well established that many Apiaceae species store lipophilic compounds such as phenylpropanoids and terpenoids in secretory ducts, and this appears to be the case with T. garganica as well. Andersen et al. () show that transcripts for two key enzymes in thapsigargin biosynthesis are found only in the epithelial cells lining these secretory ducts. This emphasizes the involvement of these cells in the biosynthesis of thapsigargin, and the study paves the way for further work on the thapsigargin pathway.
| <urn:uuid:4b820cba-9181-4c8e-a677-7757a7e03f59> | 2.96875 | 387 | Knowledge Article | Science & Tech. | 33.020936 | 95,533,264 |
ECR discharges require a magnetic field such that the electrons’ cyclotron frequency is in resonance with the applied microwave frequency, usually 2.45 GHz. Both the large magnetic field of 875 G and the microwave waveguide plumbing make these reactors more complicated and expensive than RIE reactors. Unless one uses tricky methods that depend on nonuniform magnetic fields and densities, microwaves cannot penetrate into a plasma if ωp > ω. At 2.45 GHz, that means that the maximum density that can be produced, in principle, is 100 × (2.45/9)² × 10¹⁰ = 7.4 × 10¹⁰ cm⁻³ [Eq. (A1–9)]. However, this does not hold in the near-field of the launching device, usually a horn antenna or a loop or slot coupler. Densities of order 10¹² cm⁻³ have been produced in ECR reactors because the free-space wavelength of 2.45-GHz radiation is 12.2 cm, and the interior of a 10 cm diam plasma is still within the near-field. This is discussed in more detail later.
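The quoted cutoff follows from the standard cold-plasma critical density; as a cross-check (our rearrangement, not necessarily the book's Eq. (A1–9) itself),

$$n_c=\frac{\epsilon_0 m_e\,\omega^2}{e^2}=\frac{\epsilon_0 m_e\,(2\pi f)^2}{e^2}\approx 7.4\times10^{10}\ \mathrm{cm^{-3}}\quad\text{at}\ f=2.45\ \mathrm{GHz}.$$

Equivalently, $f_p[\mathrm{GHz}]\approx 9\sqrt{n/(10^{12}\ \mathrm{cm^{-3}})}$, which appears to be where the factor of 9 in the text comes from.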
Keywords: Dispersion Curve, Horn Antenna, Microwave Source, Resonance Zone, Nonuniform Magnetic Field
| <urn:uuid:70a9ba2d-c96b-4c0e-82d3-e51251cb3bc4> | 3 | 266 | Truncated | Science & Tech. | 64.679139 | 95,533,265 |
A nuclear holocaust or nuclear apocalypse is a theoretical scenario involving widespread destruction and radioactive fallout causing the collapse of civilization, through the use of nuclear weapons. Under such a scenario, some of the Earth is made uninhabitable by nuclear warfare in future world wars.
Besides the obvious direct destruction of cities by nuclear blasts, the potential aftermath of a nuclear war could involve firestorms, a nuclear winter, widespread radiation sickness from fallout, and/or the temporary loss of much modern technology due to electromagnetic pulses. Some scientists, such as Alan Robock, have speculated that a thermonuclear war could result in the end of modern civilization on Earth, in part due to a long-lasting nuclear winter. In one model, temperatures following a full thermonuclear war fall for several years by 7 to 8 degrees Celsius on average. The accuracy of such models is often the subject of partisan dispute.
Early Cold War-era studies suggested that billions of humans would nonetheless survive the immediate effects of nuclear blasts and radiation following a global thermonuclear war. Some scholars argue that nuclear war could indirectly contribute to human extinction via secondary effects, including environmental consequences, societal breakdown, and economic collapse. Additionally, it has been argued that even a relatively small-scale nuclear exchange between India and Pakistan involving 100 Hiroshima-yield (15 kiloton) weapons could cause a nuclear winter and kill more than a billion people.
The threat of a nuclear holocaust plays an important role in the popular perception of nuclear weapons. It features in the security concept of mutually assured destruction (MAD) and is a common scenario in survivalism. Nuclear holocaust is a common feature in literature and film, especially in speculative genres such as science fiction, dystopian and post-apocalyptic fiction.
Etymology and usage
One early use of the word "holocaust" to describe an imagined nuclear destruction appears in Reginald Glossop's 1926 novel The Orphan of Space: "Moscow ... beneath them ... a crash like a crack of Doom! The echoes of this Holocaust rumbled and rolled ... a distinct smell of sulphur ... atomic destruction." In the novel, an atomic weapon is planted in the office of the Soviet dictator, who, with German help and Chinese mercenaries, is preparing the takeover of Western Europe.
References to nuclear destruction often speak of "atomic holocaust" or "nuclear holocaust.” For instance, U.S. President Bush stated in August 2007: "Iran's active pursuit of technology that could lead to nuclear weapons threatens to put a region already known for instability and violence under the shadow of a nuclear holocaust".
Likelihood of nuclear war
As of 2016, humanity has about 15,000 nuclear weapons, thousands of which are on hair-trigger alert. While stockpiles have been on the decline following the end of the Cold War, every nuclear country is currently undergoing modernization of its nuclear arsenal. Some experts believe this modernization may increase the risk of nuclear proliferation, nuclear terrorism, and accidental nuclear war.
In a poll of experts at the Global Catastrophic Risk Conference in Oxford (17‐20 July 2008), the Future of Humanity Institute estimated the probability of complete human extinction by nuclear weapons at 1% within the century, the probability of 1 billion dead at 10% and the probability of 1 million dead at 30%. These results reflect the median opinions of a group of experts, rather than a probabilistic model; the actual values may be much lower or higher.
Scientists have argued that even a small-scale nuclear war between two countries could have devastating global consequences and such local conflicts are more likely than full-scale nuclear war.
Moral importance of human extinction risk
In Reasons and Persons, the philosopher Derek Parfit asked readers to compare three outcomes:
- (1) Peace.
- (2) A nuclear war that kills 99% of the world’s existing population.
- (3) A nuclear war that kills 100%.
(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences?
He continues that "Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater." Thus, he argues, even if it would be bad if massive numbers of humans died, human extinction would itself be much worse because it prevents the existence of all future generations. And given the magnitude of the calamity were the human race to become extinct, Nick Bostrom argues that there is an overwhelming moral imperative to reduce even small risks of human extinction.
Likelihood of complete human extinction
Many scholars have posited that a global thermonuclear war with Cold War-era stockpiles, or even with the current smaller stockpiles, may lead to human extinction. This position was bolstered when nuclear winter was first conceptualized and modelled in 1983. However, models from the past decade consider total extinction very unlikely, and suggest parts of the world would remain habitable. Technically the risk may not be zero, as the climatic effects of nuclear war are uncertain and could theoretically be larger than current models suggest, just as they could theoretically be smaller. There could also be indirect risks, such as a societal collapse following nuclear war that leaves humanity far more vulnerable to other existential threats.
A related area of inquiry is: if a future nuclear arms race someday leads to larger stockpiles or more dangerous nuclear weapons than existed at the height of the Cold War, at what point could a war with such weapons result in human extinction? Physicist Leo Szilard warned in the 1950s that a deliberate "doomsday device" could be constructed by surrounding powerful hydrogen bombs with a massive amount of cobalt. Radioactive cobalt-60 has a half-life of about five years, and its global fallout might, some physicists have posited, be able to clear out all human life via lethal radiation intensity. The main motivation for building a cobalt bomb in this scenario is its reduced expense compared with the arsenals possessed by superpowers; such a doomsday device does not need to be launched before detonation, and thus does not require expensive missile delivery systems, and the hydrogen bombs do not need to be miniaturized for delivery via missile. The system for triggering it might have to be completely automated in order for the deterrent to be effective. A modern twist might be to also lace the bombs with aerosols designed to exacerbate nuclear winter. A major caveat is that nuclear fallout transfer between the northern and southern hemispheres is expected to be small; unless a bomb detonates in each hemisphere, the effect of a bomb detonated in one hemisphere on the other is diminished.
Effects of nuclear war
Historically, it has been difficult to estimate the total number of deaths resulting from a global nuclear exchange because scientists are continually discovering new effects of nuclear weapons, and also revising existing models.
Early reports considered direct effects from nuclear blast and radiation and indirect effects from economic, social, and political disruption. In a 1979 report for the U.S. Senate, the Office of Technology Assessment estimated casualties under different scenarios. For a full-scale countervalue/counterforce nuclear exchange between the U.S. and the Soviet Union, they predicted U.S. deaths from 35 to 77 percent (70 million to 160 million dead at the time), and Soviet deaths from 20 to 40 percent of the population.
Although this report was made when nuclear stockpiles were at much higher levels than they are today, it also was made before the risk of nuclear winter was discovered in the early 1980s. Additionally, it did not consider other secondary effects, such as electromagnetic pulses (EMP) and the ramifications they would have on modern technology and industry.
In the early 1980s, scientists began to consider the effects of smoke and soot arising from burning wood, plastics, and petroleum fuels in nuclear-devastated cities. It was speculated that the intense heat would carry these particulates to extremely high altitudes where they could drift for weeks and block out all but a fraction of the sun's light. A landmark 1983 study by the so-called TTAPS team (Richard P. Turco, Owen Toon, Thomas P. Ackerman, James B. Pollack and Carl Sagan) was the first to model these effects and coined the term "nuclear winter."
More recent studies make use of modern global circulation models and far greater computer power than was available for the 1980s studies. A 2007 study examined consequences of a global nuclear war involving moderate to large portions of the current global arsenal. The study found cooling by about 12–20 °C in much of the core farming regions of the US, Europe, Russia and China and as much as 35 °C in parts of Russia for the first two summer growing seasons. The changes they found were also much longer lasting than previously thought, because their new model better represented entry of soot aerosols in the upper stratosphere, where precipitation does not occur, and therefore clearance was on the order of 10 years. In addition, they found that global cooling caused a weakening of the global hydrological cycle, reducing global precipitation by about 45%.
The authors did not discuss the implications for agriculture in depth, but noted that a 1986 study which assumed no food production for a year projected that "most of the people on the planet would run out of food and starve to death by then" and commented that their own results show that, "This period of no food production needs to be extended by many years, making the impacts of nuclear winter even worse than previously thought."
In contrast to the above investigations of global nuclear conflicts, studies have shown that even small-scale, regional nuclear conflicts could disrupt the global climate for a decade or more. In a regional nuclear conflict scenario where two opposing nations in the subtropics would each use 50 Hiroshima-sized nuclear weapons (about 15 kiloton each) on major populated centres, the researchers estimated as much as five million tons of soot would be released, which would produce a cooling of several degrees over large areas of North America and Eurasia, including most of the grain-growing regions. The cooling would last for years, and according to the research, could be "catastrophic". Additionally, the analysis showed a 10% drop in average global precipitation, with the largest losses in the low latitudes due to failure of the monsoons.
Regional nuclear conflicts could also inflict significant damage to the ozone layer. A 2008 study found that a regional nuclear weapons exchange could create a near-global ozone hole, triggering human health problems and impacting agriculture for at least a decade. This effect on the ozone would result from heat absorption by soot in the upper stratosphere, which would modify wind currents and draw in ozone-destroying nitrogen oxides. These high temperatures and nitrogen oxides would reduce ozone to the same dangerous levels we now experience below the ozone hole above Antarctica every spring.
It is difficult to estimate the number of casualties that would result from nuclear winter, but it is likely that the primary effect would be global famine (known as Nuclear Famine), wherein mass starvation occurs due to disrupted agricultural production and distribution. In a 2013 report, the International Physicians for the Prevention of Nuclear War (IPPNW) concluded that more than two billion people, about a third of the world's population, would be at risk of starvation in the event of a regional nuclear exchange between India and Pakistan, or by the use of even a small proportion of nuclear arms held by US and Russia. Several independent studies show corroborated conclusions that agricultural outputs will be significantly reduced for years by climatic changes driven by nuclear wars. Reduction of food supply will be further exacerbated by rising food prices, affecting hundreds of millions of vulnerable people, especially in the poorest nations of the world.
An electromagnetic pulse (EMP) is a burst of electromagnetic radiation. Nuclear explosions create a pulse of electromagnetic radiation called a nuclear EMP or NEMP. Such EMP interference is known to be generally disruptive or damaging to electronic equipment. If a single nuclear weapon "designed to emit EMP were detonated 250 to 300 miles up over the middle of the country it would disable the electronics in the entire United States."
Given that many of the comforts and necessities we enjoy in the 21st century are predicated on electronics and their functioning, an EMP would disable hospitals, water treatment facilities, food storage facilities, and all electronic forms of communication. An EMP blast threatens the foundation which supports the existence of the modern human condition. Certain EMP attacks could lead to large loss of power for months or years. Currently, failures of the power grid are dealt with using support from the outside. In the event of an EMP attack, such support would not exist and all damaged components, devices, and electronics would need to be completely replaced.
In 2013, the US House of Representatives considered the "Secure High-voltage Infrastructure for Electricity from Lethal Damage Act" that would provide surge protection for some 300 large transformers around the country. The problem of protecting civilian infrastructure from electromagnetic pulse has also been intensively studied throughout the European Union, and in particular by the United Kingdom. While precautions have been taken, James Woolsey and the EMP Commission suggested that an EMP is the most significant threat to the U.S. The greatest threat to human survival in the aftermath of an EMP blast would be the inability to access clean drinking water. For comparison, in the aftermath of the 2010 Haitian earthquake, the water infrastructure had been devastated and led to at least 3,333 deaths from cholera in the first few months after the earthquake. Other countries would similarly see the resurgence of previously non-existent diseases as clean water becomes increasingly scarce.
The risk of an EMP, whether from solar or atmospheric activity or enemy attack, while not dismissed, was argued in a Physics Today commentary to have been overblown by the news media. Instead, the weapons of rogue states were still too small and uncoordinated to cause a massive EMP, underground infrastructure is sufficiently protected, and continuous solar observatories such as SOHO would give enough warning time to protect surface transformers should a devastating solar storm be detected.
Origins and analysis of extinction hypotheses
As a result of the extensive nuclear fallout of the 1954 Castle Bravo nuclear detonation, author Nevil Shute wrote the popular novel On the Beach, released in 1957, in which so much fallout is generated in a nuclear war that all human life is extinguished. The premise that all of humanity would die following a nuclear war and only the "cockroaches would survive" is, however, critically dealt with in the 1988 book Would the Insects Inherit the Earth and Other Subjects of Concern to Those Who Worry About Nuclear War by nuclear weapons expert Philip J. Dolan.
In 1982 nuclear disarmament activist Jonathan Schell published The Fate of the Earth, which is regarded by many as the first carefully argued presentation concluding that extinction is a significant possibility from nuclear war. However, the assumptions made in this book have been thoroughly analyzed and determined to be "quite dubious". The impetus for Schell's work, according to physicist Brian Martin, was to argue that "if the thought of 500 million people dying in a nuclear war is not enough to stimulate action, then the thought of extinction will". Indeed, Schell explicitly advocates use of the fear of extinction as the basis for inspiring the "complete rearrangement of world politics".
The belief in "overkill" is also commonly encountered, an example being the following statement made by nuclear disarmament activist Philip Noel-Baker in 1971: "Both the US and the Soviet Union now possess nuclear stockpiles large enough to exterminate mankind three or four – some say ten – times over". Brian Martin suggested that the origin of this belief lay in "crude linear extrapolations", and that when analyzed it has no basis in reality. Similarly, it is commonly stated that the combined explosive energy released in the entirety of World War II was about 3 megatons, while a nuclear war with warhead stockpiles at Cold War highs would release the explosive energy of 6,000 WWIIs. An estimate of the amount of fallout needed to begin to have the potential of causing human extinction was put by physicist and disarmament activist Joseph Rotblat at 10 to 100 times the megatonnage in nuclear arsenals as they stood in 1976; with world megatonnage decreasing since the Cold War ended, this possibility remains hypothetical.
According to the 1980 United Nations report General and Complete Disarmament: Comprehensive Study on Nuclear Weapons: Report of the Secretary-General, it was estimated that there were a total of about 40,000 nuclear warheads in existence at that time, with a potential combined explosive yield of approximately 13,000 megatons.
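As a rough cross-check (our arithmetic, not a figure taken from the sources), these estimates are broadly consistent with one another:

$$\frac{13{,}000\ \mathrm{Mt}}{40{,}000\ \mathrm{warheads}}\approx 0.33\ \mathrm{Mt\ per\ warhead},\qquad 6{,}000\times 3\ \mathrm{Mt}\approx 18{,}000\ \mathrm{Mt},$$

the latter corresponding to the somewhat larger Cold War peak stockpile implied by the "6,000 WWIIs" figure quoted above.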
By comparison, when the volcano Mount Tambora erupted in 1815 – turning 1816 into the Year Without a Summer due to the levels of global-dimming sulfate aerosols and ash expelled – it exploded with a force of roughly 800 to 1,000 megatons and ejected 160 km3 (38 cu mi) of mostly rock/tephra, including an upper estimate of 120 million tonnes of sulfur dioxide. A larger eruption, approximately 74,000 years ago at Mount Toba, produced 2,800 km3 (670 cu mi) of tephra, forming Lake Toba, and an estimated 6,000 million tonnes (6.6×10⁹ short tons) of sulfur dioxide. The explosive energy of that eruption may have been as high as the equivalent of 20,000,000 megatons (Mt) of TNT, while the Chicxulub impact, connected with the extinction of the dinosaurs, corresponds to at least 70,000,000 Mt of energy, roughly 7,000 times the maximum arsenal of the US and Soviet Union.
Comparisons with supervolcanoes are more misleading than helpful, however, owing to the different aerosols released, the likely air-burst fuzing height of nuclear weapons, and the globally scattered locations of potential nuclear detonations, all in contrast to the singular and subterranean nature of a supervolcanic eruption. Moreover, even if the entire world stockpile of weapons were grouped together, the nuclear fratricide effect would make it difficult to ensure the individual weapons detonated all at once. Nonetheless, many people believe that a full-scale nuclear war would result, through the nuclear winter effect, in the extinction of the human species, though not all analysts agree on the assumptions that feed these nuclear winter models.
- Cold War II
- Environmental impact of war
- Global catastrophic risk
- List of nuclear holocaust fiction
- Nuclear terrorism
- World War III
- Robock, Alan; Toon, Owen B (2012). "Self-assured destruction: The climate impacts of nuclear war". Bulletin of the Atomic Scientists. 68 (5): 66–74. doi:10.1177/0096340212459127. Retrieved 13 February 2016.
- Martin, Brian (1982). "Critique of Nuclear Extinction". Journal of Peace Research. 19 (4): 287–300. doi:10.1177/002234338201900401.
- The Effects of a Global Thermonuclear War. Johnstonsarchive.net. Retrieved on 2013-07-21.
- Martin, Brian (December 1982). "The global health effects of nuclear war". Current Affairs Bulletin. 59 (7): 14–26.
- Long-term worldwide effects of multiple nuclear-weapons detonations. Assembly of Mathematical and Physical Sciences, National Research Council.
- Helfand, Ira. "Nuclear Famine: Two Billion People at Risk?" (PDF). International Physicians for the Prevention of Nuclear War. Retrieved 13 February 2016.
- American Heritage Dictionary definition of "holocaust"
- Oxford Dictionary definition of "holocaust"
- Reginald Glossop, The Orphan of Space (London: G. MacDonald, 1926), pp. 303–306.
- McElroy, Damien; Spillius, Alex (28 August 2007). "Bush warns of Iran 'nuclear holocaust'". The Telegraph. Retrieved 20 November 2015.
- "Status of World Nuclear Forces". Federation of American Scientists. Retrieved 22 March 2016.
- "Fact Sheet: Building Global Security by Taking Nuclear Weapons off Hair-Trigger Alert". National Threat Initiative. 15 October 2012. Retrieved 22 March 2016.
- Broad, William J. "U.S. Ramping Up Major Renewal in Nuclear Arms". New York Times. Retrieved 24 January 2016.
- Mecklin, John (4 March 2015). "Disarm and Modernize". Retrieved 22 March 2016.
- Kristensen, H. M.; Norris, R. S. (20 June 2014). "Slowing nuclear weapon reductions and endless nuclear weapon modernizations: A challenge to the NPT". Bulletin of the Atomic Scientists. 70 (4): 94–107. doi:10.1177/0096340214540062.
- Gray, Richard; Zolfagharifard, Ellie (26 January 2016). "World is closest it has been to catastrophe since the Cold War: Doomsday Clock remains at just three minutes to midnight". DailyMail. Retrieved 29 January 2016.
- Allison, Graham (2012). "The Cuban Missile Crisis at 50". Foreign Affairs. 91 (4). Retrieved 9 July 2012.
- "ВЗГЛЯД / «США и Россия: кризис 1962–го»". vzglyad.ru. 22 November 2013.
- Sandberg, Anders; Bostrom, Nick. "Global Catastrophic Risks Survey" (PDF). Future of Humanity Institute. Future of Humanity Institute, Oxford University. Retrieved 18 August 2016.
- Regional Nuclear War Could Devastate Global Climate, Science Daily, December 11, 2006
- Robock, A; Oman, L; Stenchikov, GL; Toon, OB; Bardeen, C; Turco, RP (2007). "Climatic consequences of regional nuclear conflicts". Atmos. Chem. Phys. 7 (8): 2003–2012. doi:10.5194/acp-7-2003-2007. Retrieved 13 February 2016.
- Robock, A; Toon, OB (2010). "Local nuclear war, global suffering" (PDF). Scientific American. 302: 74–81. Retrieved 13 February 2016.
- Parfit, Derek (1986). "154. How both human history, and the history of ethics, may be just beginning". Reasons and Persons. Oxford University Press.
- Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002.
- Tonn, Bruce & MacGregor, Donald (2009). "A singular chain of events". Futures. 41 (10): 706–714. doi:10.1016/j.futures.2009.07.009.
- Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology. 9 (1): 1–31, §4.2.
- Max Tegmark (2017). "Chapter 5: Aftermath: The Next 10,000 Years" (section "Doomsday Devices"). Life 3.0: Being Human in the Age of Artificial Intelligence (1st ed.). Knopf. ISBN 9780451485076.
- Johns, Lionel S; Sharfman, Peter; Medalia, Jonathan; Vining, Robert W; Lewis, Kevin; Proctor, Gloria (1979). The Effects of Nuclear War (PDF). Library of Congress. Retrieved 13 February 2016.
- "Nuclear winter". Encyclopædia Britannica. Retrieved 13 February 2016.
- "Nuclear Winter: Global Consequences of Multiple Nuclear Explosions". Science. 222 (4630): 1283–92. 23 December 1983. Bibcode:1983Sci...222.1283T. doi:10.1126/science.222.4630.1283. PMID 17773320.
- Robock, Alan; Oman, Luke; Stenchikov, Georgiy L. (2007). "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences" (PDF). Journal of Geophysical Research. 112 (D13107): 14. Bibcode:2007JGRD..11213107R. doi:10.1029/2006JD008235. Retrieved 13 February 2016.
- Mills, M. J.; Toon, O. B.; Turco, R. P.; Kinnison, D. E.; Garcia, R. R. (2008). "Massive global ozone loss predicted following regional nuclear conflict". Proc. Natl. Acad. Sci. U.S.A. 105 (14): 5307–12. Bibcode:2008PNAS..105.5307M. doi:10.1073/pnas.0710058105. PMID 18391218. Available as PDF, archived 2016-03-04 at the Wayback Machine.
- Harwell, M., and C. Harwell. (1986). "Nuclear Famine: The Indirect Effects of Nuclear War", pp. 117–135 in Solomon, F. and R. Marston (Eds.). The Medical Implications of Nuclear War. Washington, D.C.: National Academy Press. ISBN 0309036925.
- Loretz, John. "Nobel Laureate Warns Two Billion at Risk from Nuclear Famine" (PDF). IPPNW. Retrieved 13 February 2016.
- Kessler, Ronald. "EMP Attack Would Send America into a Dark Age". Newsmax. Retrieved 16 January 2016.
- "Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack". Retrieved 16 January 2016.
- McCormack, John (2013-06-17). "Lights out: House plan would protect nation's electricity from solar flare, nuclear bomb". Washington Examiner. Retrieved 2016-01-16.
- House of Commons Defence Committee, "Developing Threats: Electro-Magnetic Pulses (EMP)". Tenth Report of Session 2010–12.
- Woolsey, R. James; Pry, Peter Vincent (2014-08-12). "The Growing Threat From an EMP Attack". Wall Street Journal. Retrieved 2016-01-16. (Subscription required.)
- Corneliussen, Steven T. (2016-06-23). "Conservative media sustain alarm about a possible electromagnetic-pulse catastrophe". Physics Today. doi:10.1063/PT.5.8178. Retrieved 2016-12-16.
- Martin, Brian (March 1983). "The fate of extinction arguments". Department of Mathematics, Faculty of Science, Australian National University.
- Willens, Harold (1990). "The Trimtab factor, 1984". Alternatives. 16 (4).
- Stothers, Richard B. (1984). "The Great Tambora Eruption in 1815 and Its Aftermath". Science. 224 (4654): 1191–1198. Bibcode:1984Sci...224.1191S. doi:10.1126/science.224.4654.1191. PMID 17819476.
- Oppenheimer, Clive (2003). "Climatic, environmental and human consequences of the largest known historic eruption: Tambora volcano (Indonesia) 1815". Progress in Physical Geography. 27 (2): 230–259. doi:10.1191/0309133303pp379ra.
- "Supersized eruptions are all the rage!". USGS. April 28, 2005.
- Robock, A.; C.M. Ammann; L. Oman; D. Shindell; S. Levis; G. Stenchikov (2009). "Did the Toba volcanic eruption of ~74k BP produce widespread glaciation?". Journal of Geophysical Research. 114: D10107. Bibcode:2009JGRD..11410107R. doi:10.1029/2008JD011652.
- Huang, C.Y.; Zhao, M.X.; Wang, C.C.; Wei, G.J. (2001). "Cooling of the South China Sea by the Toba Eruption and correlation with other climate proxies ∼71,000 years ago". Geophysical Research Letters. 28 (20): 3915–3918. Bibcode:2001GeoRL..28.3915H. doi:10.1029/2000GL006113.
- Margulis, Lynn (1999). Symbiotic Planet: A New Look At Evolution. Houston: Basic Book.
- Nuclear Holocausts: Atomic War in Fiction, By Paul Brians, Professor of English, Washington State University, Pullman, Washington
- Brief Q&A with Luke Oman on the unlikeliness of human extinction from nuclear war | <urn:uuid:5be74947-10c8-49a0-abae-2c6fa12e5eaa> | 3.65625 | 6,096 | Knowledge Article | Science & Tech. | 54.157918 | 95,533,276 |
Here I will share how Appium works and how developers use it. Appium (cross-platform) involves many library files to automate execution scripts. Appium is a free and open-source mobile UI automation testing tool, and it allows you to test mobile applications on Android, iOS and related mobile operating systems.
APPIUM is a distributed, open-source framework for testing mobile application UIs. Appium also has the ability to automate desktop applications.
APPIUM Desktop App
Follow these steps to download and install Appium as a desktop application. First we need a server to run Appium scripts against the mobile application. Appium provides a graphical user interface, so you can start and stop the Appium server via the GUI. Appium is also available without a GUI; some developers run the server from the command prompt.
Download Appium Desktop
Just follow the steps below to download the Appium server for your desktop. If you have no idea about GitHub, read this article first.
Open the Appium GitHub page: open the Appium Desktop releases page here, then download the Appium Desktop package.
After downloading the package, click to download the Appium Desktop .exe installer from the release assets. The process is the same on all operating systems (Windows, Linux, macOS). The file looks like this:
Once the installation is complete, the Appium Desktop window is displayed, as in the image below.
Now click the Start Server button; after the server starts, it displays a welcome message such as “Welcome to Appium”. The running server looks like this:
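As an aside before moving on: once the server is listening (on http://127.0.0.1:4723 by default), automation scripts can connect to it. The snippet below is a minimal, hedged sketch using the Appium Python client against a hypothetical Android app; every capability value is a placeholder, not part of this tutorial's setup:

```python
from appium import webdriver  # pip install Appium-Python-Client

# Desired capabilities tell the Appium server what to automate.
# All values below are placeholders for illustration only.
caps = {
    "platformName": "Android",
    "deviceName": "emulator-5554",
    "app": "/path/to/your/app.apk",
}

# Appium 1.x desktop builds serve the WebDriver API under /wd/hub.
driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
driver.quit()  # end the session cleanly
```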
In the next tutorial we will see how to use the full feature set of Appium Desktop.
I am J. Vetrivel Pandian, an MCA graduate living in Udangudi, India. Blogging is my passion, and I love to write articles about programming. Apart from blogging, I run a grocery store, which is my profession.
Authors: Golden Gadzirayi Nyambuya
Exactly 100 years ago, the German scientist Alfred Lothar Wegener sailed against the prevailing wisdom of his day when he posited that not only have the Earth's continental plates receded from each other over the course of the Earth's history, but that they are currently in motion relative to one another. To explain this, Wegener set forth the hypothesis that the Earth must be expanding as a whole. Wegener's inability to provide an adequate explanation of the forces and energy source responsible for continental drift, together with the prevailing belief that the Earth was a rigid solid body, resulted in the acrimonious dismissal of his theories. Today, that the continents are receding from each other is no longer a point of debate but a sacrosanct pillar of modern geology and geophysics. What is debatable is the energy source driving this phenomenon. The expanding Earth hypothesis is currently not accepted on a general consensus level; its opponents dismiss it as a pseudo- or fringe science. Be that as it may, we show herein that the well-accepted law of conservation of spin angular momentum, combined with Stephenson and Morrison's (1995) result that over the last 2700 years or so the length of the Earth's day has changed by about +17.00 microseconds/yr, invariably leads to the conclusion that the Earth must be expanding radially at a paltry rate of about +0.60 mm/yr. This simple fact moves the expanding Earth hypothesis from the realm of pseudo- or fringe science to that of real and ponderable science.
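The quoted rate can be reproduced with a back-of-envelope sketch (our reconstruction, assuming a uniform-density sphere; the paper's own derivation may differ). Conservation of spin angular momentum for a sphere of mass $M$, radius $R$ and day length $T$ gives

$$L=I\omega=\frac{2}{5}MR^{2}\cdot\frac{2\pi}{T}=\mathrm{const}\quad\Rightarrow\quad \frac{2\dot{R}}{R}=\frac{\dot{T}}{T},$$

so that

$$\dot{R}=\frac{R}{2}\cdot\frac{\dot{T}}{T}=\frac{6.371\times10^{6}\ \mathrm{m}}{2}\times\frac{17\times10^{-6}\ \mathrm{s/yr}}{86400\ \mathrm{s}}\approx 0.6\ \mathrm{mm/yr},$$

in agreement with the abstract's figure.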
Comments: 5 Pages.
[v1] 2012-12-16 23:38:26
Unique-IP document downloads: 161 times
Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful. | <urn:uuid:a4f2d893-0697-420d-ba4d-9bdb73f3cd7b> | 2.75 | 514 | Knowledge Article | Science & Tech. | 43.354534 | 95,533,329 |
The response of beetles to climate change during the Quaternary Period is reviewed for the purpose of evaluating their future response to global warming. Beetles responded to Quaternary climatic changes mostly by dispersal, which ultimately led to large-scale changes in geographic distribution. Fragmentation and isolation of populations associated with climate change did not result in either higher rates of speciation or extinction, although local extinctions occurred when dispersal routes were blocked by barriers. Studies from archaeological and late Holocene sites indicate that the fragmentation of the natural landscape by human activities had as great an impact on the local diversity of beetle populations as did climate change. Habitat reduction and fragmentation continue today and are making species increasingly vulnerable to extinction. The major difference between the future and past responses of beetles to climate change is that extinction rates are expected to be much higher, independent of whether the causes of climate change are natural or anthropogenic. The question of determining whether global warming has natural or anthropogenic causes is important because of the ethical implications of extinction. | <urn:uuid:4be3bff1-4a1c-486c-b225-7d53f30693c9> | 3.40625 | 205 | Truncated | Science & Tech. | 1.75486 | 95,533,333 |
The Tropical Rainfall Measuring Mission satellite known as TRMM is managed by both NASA and the Japanese Space Agency. From its orbit in space, TRMM's instruments can estimate rainfall from tropical cyclones.
The TRMM satellite captured rainfall rates from Tropical Storm Hector on Aug. 14, 2012 1:28 a.m. EDT. TRMM data showed that Hector had a small area of moderate to heavy rainfall around the center of circulation. Heavy rainfall appears in red, falling at 2 inches/50 mm per hour. Light to moderate rainfall is depicted in blue and green (falling at a rate between .78 to 1.57 inches (20 to 40 mm) per hour.
Credit: SSAI/NASA, Hal Pierce
TRMM data from that overpass showed that Hector had a small area of moderate to heavy rainfall around the center of circulation, with the heaviest rain falling at 2 inches (50 mm) per hour. For the most part, rainfall was light to moderate in other areas of Hector.
Hector is being battered by moderate winds from the east, and that has been pushing the rainfall to the west of the storm's center. That wind shear is expected to be around for the next couple of days, which will prevent Hector from strengthening.
On Tuesday, August 14 at 11 a.m. EDT (1500 UTC) Hector's maximum sustained winds remained near 45 mph (75 km/h). The center of Tropical Storm Hector was about 230 miles (365 km) west-southwest of Socorro Island and about 440 miles (710 km) southwest of the southern tip of Baja California. That puts Hector's center near latitude 18.1 north and longitude 114.4 west. Hector is moving toward the west near 6 mph (9 km/h).
Hector is moving west and is expected to turn northwest before weakening into a remnant low pressure area by the end of the week.
Rob Gutro | EurekAlert!
| <urn:uuid:883c9a30-4a3f-4396-a677-7757a7e03f59> | 3.171875 | 997 | Content Listing | Science & Tech. | 55.443446 | 95,533,335 |
In this chapter we give a descriptive account of surfaces, of which we have already met the plane, the sphere and the torus. There are many other surfaces, shortly to be described. The essential idea is that near each of its points a surface is just like the plane.
Keywords: Topological Space, Good Surface, Open Disc, Opposite Point, Klein Bottle
This innocent-looking operation is very useful: it is an example of something called the attaching map and is a good way of building up more complicated spaces from simpler ones.
Important examples not discussed here are the orbit spaces: see .
© Springer-Verlag London 2001 | <urn:uuid:bfc3a2f5-5f77-4900-be73-8adc30f18eb2> | 3.140625 | 160 | Truncated | Science & Tech. | 37.145299 | 95,533,342 |
Temporal range: Middle Pennsylvanian
The Tully Monster
Tullimonstrum gregarium fossil
(part and counterpart)
Tullimonstrum, colloquially known as the Tully Monster, is an extinct genus of soft-bodied bilaterian that lived in shallow tropical coastal waters of muddy estuaries during the Pennsylvanian geological period, about 300 million years ago. A single species, T. gregarium, is known. Examples of Tullimonstrum have been found only in the Mazon Creek fossil beds of Illinois, United States. Its classification has been the subject of controversy, and interpretations of the fossil likened it to molluscs, arthropods, conodonts, worms, and vertebrates.
Tullimonstrum probably reached lengths of up to 35 centimetres (14 in); the smallest individuals are about 8 cm (3.1 in) long.
Tullimonstrum had a pair of vertical, ventral fins (though the fidelity of preservation of fossils of its soft body makes this difficult to determine) situated at the tail end of its body, and typically featured a long proboscis with up to eight small sharp teeth on each "jaw", with which it may have actively probed for small creatures and edible detritus in the muddy bottom. It was part of the ecological community represented in the unusually rich group of soft-bodied organisms found among the assemblage called the Mazon Creek fossils from their site in Grundy County, Illinois.
The absence of hard parts in the fossil implies that the animal did not possess organs composed of bone, chitin or calcium carbonate. There is evidence of serially repeated internal structures. Its head is poorly differentiated. A transverse bar-shaped structure, either dorsal or ventral, terminates in two round organs associated with dark material that has been identified as melanosomes (containing the pigment melanin). Their form and structure are suggestive of a camera-type eye. Tullimonstrum possessed structures that have been interpreted as gills, and a possible notochord or rudimentary spinal cord.
History of discovery
Amateur collector Francis Tully found the first of these fossils in 1955 in a fossil bed known as the Mazon Creek formation. He took the strange creature to the Field Museum of Natural History, but paleontologists were stumped as to which phylum Tullimonstrum belonged. The species Tullimonstrum gregarium ("Tully's common monster"), as these fossils later were named, takes its genus name from Tully, whereas the species name, gregarium, means "common", and reflects its abundance. The term monstrum ("monster") relates to the creature's outlandish appearance and strange body plan.
The fossil remained "a puzzle", and interpretations likened it to a worm, a mollusc, an arthropod, a conodont, or a vertebrate. Since it appeared to lack characteristics of the well-known modern phyla, it was speculated that it was representative of a stem group to one of the many phyla of worms that are poorly represented today. Similarities with Cambrian fossil organisms were noted. Chen et al. suggested similarities to Vetustovermis planus. Others pointed to a general resemblance between Tullimonstrum and Opabinia regalis, although Cave et al. noted that they were too morphologically dissimilar to be related.
Arguments in favour of vertebrate affinities
In 2016 a morphological study showed that Tullimonstrum may have been a basal vertebrate, and thus a member of the phylum Chordata, with one study suggesting Tullimonstrum may be closely related to modern lampreys. This affinity was based on pronounced cartilaginous arcualia, a dorsal fin and asymmetric caudal fin, keratinous teeth, a single nostril, and tectal cartilages, as in lampreys. McCoy et al. raised the possibility that Tullimonstrum belongs to the ancestral group of lampreys, though it also has many features not found in cyclostomes (lampreys and hagfishes). A second study found further evidence that Tullimonstrum was a stem vertebrate: a camera-like eye with preserved lenses and the presence of cylindrical and spheroid melanosomes in the eye arranged in distinct layers. These ocular pigments and their unique structure were interpreted as a retinal pigmented epithelium (RPE), indicating for the first time that the bar organs were indeed eyes. Furthermore, Clements et al. chemically confirmed the presence of fossil melanin as opposed to ommochromes or pterines (ocular pigments used by many invertebrate groups). Although the ocular pigments of many invertebrate groups have been poorly investigated, there is strong evidence that the dual melanosome morphology and the presence of an RPE are uniquely vertebrate traits.
Arguments in favour of non-vertebrate affinities
A 2017 study rejected the above conclusions. Firstly, it was noted that even the presence of the two melanosome types is variable among vertebrates; hagfish lack them altogether, and extant sharks as well as extinct forms found in the Mazon Creek area, such as Bandringa, only have spheroid melanosomes. Additionally, the supposed notochord extends in front of the level of the eyes, which is not the case in any other vertebrate; even if it was a notochord, the presence of notochords is not limited to vertebrates either. Further criticism was drawn towards the identification of the blocks of the body variously as gill pouches and muscle blocks (myomeres), despite the lack of differentiation in the structure of these blocks. In vertebrates, myomeres are also thinner, and extend along the whole length of the body rather than stopping short of the head. Meanwhile, the gill pouches of lampreys are paired extensions rather than segmented structures, and are usually embedded in a complex gill skeleton, neither of which is the case in Tullimonstrum.
Other identifications of soft-tissue structures were considered as being equally problematic. The supposed brain has no associated nervous tissue and is not connected to the eyes, and the purported liver was located under the gills as opposed to being further back as in other vertebrates. The "mouth" at the front of the proboscis was described as possessing gnathostome-like distinct tooth rows, despite lampreys having "tooth fields" on the interior of the mouth. This would necessitate the convergent re-evolution of grasping jaws. Additionally, the thin and jointed proboscis is inconsistent with a role in ram or suction feeding, which is the feeding method typically used for open-water vertebrates; the gill pouches would have further obstructed the flow of water.
The study noted that stalked eyes, tail fins, and brains are also present in anomalocaridids, and that Opabinia also has a similar proboscis. While arthropod affinities were rejected under the assumption that other Mazon Creek arthropods are preserved in three dimensions with carbonization of the exoskeleton, this is not actually the case. Although arthropods do not have the melanosomes of vertebrates, some do have convergently evolved spheroid eye cells that may be preserved similarly; however, these pigments (ommochromes and pterines) have distinct chemical signatures which were not found in the eyes of Tullimonstrum. Sallan et al. also suggested that molluscs convergently evolved complex camera-like eyes containing melanosomes, but failed to note that no known molluscs have dual melanosome morphologies. Further similarities (such as the lobed brain, muscle bands, tail fin, proboscis, and "teeth") could support possible molluscan affinities. Even if the eye of Tullimonstrum is homologous with that of vertebrates, it could be a tunicate (the larvae of which have pigmented eyes and tail fins), a lancelet or an acorn worm (both of which have gill openings and a notochord), or a vetulicolian.
Tullimonstrum was probably a free-swimming carnivore that dwelt in open marine water, and was occasionally washed to the near-shore setting in which it was preserved.
The formation of the Mazon Creek fossils is unusual. When the creatures died, they were rapidly buried in silty outwash. Bacteria decomposing the plant and animal remains produced carbon dioxide in the surrounding sediments; the resulting carbonate combined with iron from the groundwater to form encrusting nodules of siderite. Entombed in this way, an organism decayed slowly enough for an impression of it to be preserved. Even so, the mechanisms of preservation at Mazon Creek remain poorly understood.
The combination of rapid burial and rapid formation of siderite resulted in excellent preservation of the many animals and plants that were entombed in the mud. As a result, the Mazon Creek fossils are one of the world's major Lagerstätten, or concentrated fossil assemblages. The rapid burial and compression often caused Tullimonstrum carcasses to fold and bend like other Mazon Creek animals.
The proboscis is rarely preserved in its entirety; it is complete in around 3% of specimens. However, some part of the organ is preserved in about 50% of cases.
References
- Johnson, Ralph Gordon; Richardson, Eugene Stanley, Jr. (March 24, 1969). "Pennsylvanian Invertebrates of the Mazon Creek Area, Illinois: The Morphology and Affinities of Tullimonstrum". Fieldiana Geology. 12 (8): 119–149. OCLC 86328.
- Richardson, Eugene Stanley, Jr. (January 7, 1966). "Wormlike Fossil from the Pennsylvanian of Illinois". Science. 151 (3706): 75–76. Bibcode:1966Sci...151...75R. doi:10.1126/science.151.3706.75-a. PMID 17842092.
- Clements, Thomas; Dolocan, Andrei; Martin, Peter; et al. (April 28, 2016). "The eyes of Tullimonstrum reveal a vertebrate affinity". Nature. 532 (7600): 500–503. Bibcode:2016Natur.532..500C. doi:10.1038/nature17647. PMID 27074512.
- McCoy, Victoria E.; Saupe, Erin E.; Lamsdell, James C.; et al. (April 28, 2016). "The 'Tully monster' is a vertebrate". Nature. 532 (7600): 496–499. Bibcode:2016Natur.532..496M. doi:10.1038/nature16992. PMID 26982721.
- Dunham, Will (March 16, 2016). "Tully Monster Mystery Solved, Scientists Say". Scientific American. Reuters. Retrieved March 18, 2016.
- Greshko, Michael (March 16, 2016). "Scientists Finally Know What Kind of Monster a Tully Monster Was". National Geographic. Retrieved March 17, 2016.
- Mikulic, Donald G.; Kluessendorf, Joanne (1997). "Illinois' State Fossil—Tullimonstrum gregarium" (PDF). Geobit. 5. OCLC 38563956. Archived from the original (PDF) on February 22, 2014.
- Briggs, Helen (March 16, 2016). "Fishy origin of bizarre fossil 'monster'". BBC News.
- Chen, Jun-yuan; Huang, Di-ying; Bottjer, David J. (October 2005). "An Early Cambrian problematic fossil: Vetustovermis and its possible affinities". Proceedings of the Royal Society B. 272 (1576): 2003–2007. doi:10.1098/rspb.2005.3159. OCLC 112007302. PMID 16191609.
- Switek, Brian (January 27, 2011). "Tully's Mystery Monster". Wired. Laelaps. Retrieved February 5, 2014.
- Cave, Laura Delle; Insom, Emilio; Simonetta, Alberto Mario (1998). "Advances, diversions, possible relapses and additional problems in understanding the early evolution of the Articulata". Italian Journal of Zoology. 65 (1): 19–38. doi:10.1080/11250009809386724.
- St. Fleur, Nicholas (March 16, 2016). "Solving the Tully Monster's Cold Case". The New York Times. Retrieved March 16, 2016.
- Sallan, L.; Giles, S.; Sansom, R. S.; et al. (February 20, 2017). "The 'Tully Monster' is not a vertebrate: characters, convergence and taphonomy in Palaeozoic problematic animals". Palaeontology. 60: 149–157. doi:10.1111/pala.12282.
- Baillie, Katherine Unger (February 20, 2017). "'Tully Monster' Mystery Is Far From Solved, Penn-led Group Argues". The University of Pennsylvania. Retrieved February 20, 2017.
- Baird, Gordon (1986). "Taphonomy of Middle Pennsylvanian Mazon Creek area fossil localities, northeast Illinois: Significance of exceptional fossil preservation in syngenetic concretions". PALAIOS. 1 (3): 271–285. doi:10.2307/3514690.
- Kloss, Gerald (June 18, 1968). "The Great Dancing Worm Hoax". The Milwaukee Journal. Retrieved March 31, 2012.
- Rory, E. Scumas (1969). The Dancing Worm of Turkana. Vanishing Press. OCLC 191964063.
- "State Symbol: Illinois State Fossil — Tully Monster (Tullimonstrum gregarium)". Illinois State Museum. Retrieved March 31, 2012.
Splashing, as a result of a drop impacting onto a liquid film, occurs in various natural phenomena and technical processes. Examples include rain drops impacting the ground, injected fuel impacting combustion-chamber walls, spray cleaning, spray cooling, and spray coating. Inertial, viscous and capillary forces determine the impact outcome if both liquids are the same. In the case of two different liquids, the miscibility and the interfacial forces also influence the drop impact phenomenon. This latter case is much less understood. The main objective of the present experimental work is to elucidate the impact phenomena of a single Newtonian drop onto a liquid wall film of different viscosity. The experimental setup consists of a drop-on-demand drop generator, a wetted horizontal glass substrate, and an observation system comprising the illumination source and a high-speed video camera. A high frame rate of up to 40,000 fps is used to investigate the impact dynamics of the fluids and to observe the fluid distribution after the drop impact. The liquid of the drop is marked with a dye to distinguish the liquid interfaces. The viscosity and surface tension of the two liquids are varied. The drop and wall liquids are not miscible. The experiments revealed several interesting phenomena typical only of the collision of different liquids.
Until now, even modern technologies have failed to produce high-resolution fluorescence images from this depth because of the strong scattering of light.
In the Nature Photonics journal, the Munich researchers describe how they can reveal genetic expression within live fly larvae and fish by “listening to light”. In the future this technology may facilitate the examination of tumors or coronary vessels in humans.
Since the dawn of the microscope scientists have been using light to scrutinize thin sections of tissue to ascertain whether they are healthy or diseased or to investigate cell function. However, the penetration limits for this kind of examination lie between half a millimeter and one millimeter of tissue. In thicker layers light is diffused so strongly that all useful details are obscured.
Together with his research team, Professor Vasilis Ntziachristos, director of the Institute of Biological and Medical Imaging of the Helmholtz Zentrum München – German Research Center for Environmental Health and chair for biological imaging at the Technische Universität München, has now broken through this barrier and rendered three-dimensional images through at least six millimeters of tissue, allowing whole-body visualization of adult zebra fish.
To achieve this feat, Prof. Ntziachristos and his team made light audible. They illuminated the fish from multiple angles with flashes of laser light. Fluorescent pigments in the tissue of the genetically modified fish absorb the light, a process that causes slight local increases in temperature, which in turn result in tiny local volume expansions. This happens very quickly and creates small shock waves. In effect, the short laser pulse gives rise to an ultrasound wave that the researchers pick up with an ultrasound microphone.
The real power of the technique, however, lies in specially developed mathematical formulas used to analyze the resulting acoustic patterns. An attached computer uses these formulas to evaluate and interpret the specific distortions caused by scales, muscles, bones and internal organs to generate a three-dimensional image.
The result of this “multi-spectral opto-acoustic tomography”, or MSOT, is an image with a striking spatial resolution better than 40 micrometers (four hundredths of a millimeter). And best of all, the sedated fish wakes up and recovers without harm following the procedure.
Dr. Daniel Razansky, who played a pivotal role in developing the method, says, "This opens the door to a whole new universe of research. For the first time, biologists will be able to optically follow the development of organs, cellular function and genetic expression through several millimeters to centimeters of tissue.”
In the past, understanding the evolution of development or of disease required numerous animals to be sacrificed. With a plethora of fluorochrome pigments to choose from – including pigments using the fluorescence protein technology for which a Nobel Prize was awarded in 2008 and clinically approved fluorescent agents – observing metabolic and molecular processes in all kinds of living organisms, from fish to mice and humans, will be possible. The fruits of pharmaceutical research can also be harvested faster since the molecular effects of new treatments can be observed in the same animals over an extended period of time.
Bio-engineer Ntziachristos is convinced that, “MSOT can truly revolutionize biomedical research, drug discovery and healthcare. Since MSOT allows optical and fluorescence imaging of tissue to a depth of several centimeters, it could become the method of choice for imaging cellular and subcellular processes throughout entire living tissues.”

Further information
Nature Photonics, published online on 21 June 2009; doi:10.1038/nphoton.2009.98
Helmholtz Zentrum München is the German Research Center for Environmental Health. As a leading center for environmental health research, it focuses on chronic and complex diseases that develop from the interaction of environmental factors and individual genetic disposition. Helmholtz Zentrum München has around 1,680 staff members. The head office of the center is located in Neuherberg, north of Munich, on a 50-hectare research campus. Helmholtz Zentrum München belongs to the Helmholtz Association, Germany’s largest research organization, a community of 15 scientific-technical and medical-biological research centers with a total of 26,500 staff members.
The Institute for Biological and Medical Imaging (IBMI) focuses on the development and propagation of in-vivo imaging technology for the life sciences, with applications spanning from basic and drug-discovery interrogations to pre-clinical imaging and clinical translation.

Editor: Sven Winkler | EurekAlert!
Global brain initiatives generate tsunami of neuroscience data
Three years ago the White House launched the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to accelerate the development and application of novel technologies that will give us a better understanding about how brains work.
Since then, dozens of technology firms, academic institutions, scientists and others have been developing new tools to give researchers unprecedented opportunities to explore how the brain processes, utilizes, stores and retrieves information. But without a coherent strategy to analyze, manage and understand the data generated by these new technologies, advancements in the field will be limited.
This is precisely why Lawrence Berkeley National Laboratory (Berkeley Lab) Computational Neuroscientist Kristofer Bouchard assembled an international team of interdisciplinary researchers—including mathematicians, computer scientists, physicists and experimental and computational neuroscientists—to develop a plan for managing, analyzing and sharing neuroscience data. Their recommendations were published in a recent issue of Neuron.
"The U.S. BRAIN Initiative is just one of many national and private neuroscience initiatives globally that are working toward accelerating our understanding of brains," says Bouchard. "Many of these efforts have given a lot of attention to the technological challenges of measuring and manipulating neural activity, while significantly less attention has been paid to the computing challenges associated with the vast amounts of data that these technologies are generating."
To maximize the return on investments in global neuroscience initiatives, Bouchard and his colleagues argue that the international neuroscience community should have an integrated strategy for data management and analysis. This coordination would facilitate the reproducibility of workflows, which then allows researchers to build on each other's work.
For a first step, the authors recommend that researchers from all facets of neuroscience agree on standard descriptions and file formats for products derived from data analysis and simulations. After that, the researchers should work with computer scientists to develop hardware and software ecosystems for archiving and sharing data.
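The article does not prescribe a particular format, but the idea can be made concrete. As a rough sketch (the file layout, attribute names, and values below are invented for illustration, not any published standard), a self-describing recording file in HDF5, the container underlying community formats such as NWB, might look like this in Python:

    import h5py
    import numpy as np

    # Hypothetical layout, for illustration only: one recording session whose
    # data and metadata travel together, so any lab's tools can interpret it.
    with h5py.File("session_001.h5", "w") as f:
        f.attrs["species"] = "Mus musculus"      # self-describing metadata
        f.attrs["sampling_rate_hz"] = 30000
        voltages = f.create_dataset(
            "ephys/voltages",
            data=np.zeros((30000, 64), dtype="float32"),  # 1 s x 64 channels (placeholder)
            compression="gzip",
        )
        voltages.attrs["units"] = "microvolts"

Because the units, sampling rate, and provenance ride along inside the file, another group's analysis pipeline can consume it without a side channel of lab-specific documentation.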
The authors suggest an ecosystem similar to the one used by the physics community to share data collected by experiments like the Large Hadron Collider (LHC). In this case, each research group has their own local repository of physiological or simulation data that they've collected or generated. But eventually, all of this information should also be included in "meta-repositories" that are accessible to the greater neuroscience community. Files in the "meta-repositories" should be in a common format, and the repositories would ideally be hosted by an open-science supercomputing facility like the Department of Energy's (DOE's) National Energy Research Scientific Computing Center (NERSC), located at Berkeley Lab.
Because novel technologies are producing unprecedented amounts of data, Bouchard and his colleagues also propose that neuroscientists collaborate with mathematicians to develop new approaches for data analysis and modify existing analysis tools to run on supercomputers. To maximize these collaborations, the analysis tools should be open-source and should integrate with brain-scale simulations, they emphasize.
"These are the early days for neuroscience and big data, but we can see the challenges coming. This is not the first research community to face big data challenges; climate and high energy physics have been there and overcome many of the same issues," says Prabhat, who leads NERSC's Data & Analytics Services Group.
Berkeley Lab is well positioned to help neuroscientists address these challenges because of its long tradition of interdisciplinary science, Prabhat adds. DOE facilities like NERSC and the Energy Sciences Network (ESnet) have worked closely with Lab computer scientists to help a range of science communities—from astronomy to battery research—collaborate and manage and archive their data. Berkeley Lab mathematicians have also helped researchers in various scientific disciplines develop new tools and methods for data analysis on supercomputers.
"Harnessing the power of HPC resources will require neuroscientists to work closely with computer scientists and will take time, so we recommend rapid and sustained investment in this endeavor now," says Bouchard. "The insights generated from this effort will have high-payoff outcomes. They will support neuroscience efforts to reveal both the universal design features of a species' brain and help us understand what makes each individual unique."
Note: Material may have been edited for length and content. For further information, please contact the cited source.
Bouchard KE et al. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination. Neuron, Published November 2 2016. doi: 10.1016/j.neuron.2016.10.035
The Solar Cycle 23 – 24 Minimum. A Benchmark in Solar Variability and Effects in the Heliosphere
Given the numerous ground-based and space-based experiments producing the database for the Cycle 23 – 24 Minimum epoch from September 2008 to May 2009, we have an extraordinary opportunity to understand its effects throughout the heliosphere. We use solar radiative output in this period to obtain minimum values for three measures of the Sun’s radiative output: the total solar irradiance, the Mg ii index, and the 10.7 cm solar radio flux. The derived values are included in the research summaries as a means to exchange ideas and data for this long minimum in solar activity.
Keywords: Solar irradiance; Cycle 23–24 minimum
Air mass (solar energy)
The air mass coefficient defines the direct optical path length through the Earth's atmosphere, expressed as a ratio relative to the path length vertically upwards, i.e. at the zenith. The air mass coefficient can be used to help characterize the solar spectrum after solar radiation has traveled through the atmosphere. The air mass coefficient is commonly used to characterize the performance of solar cells under standardized conditions, and is often referred to using the syntax "AM" followed by a number. "AM1.5" is almost universal when characterizing terrestrial power-generating panels.
Solar radiation closely matches a black body radiator at about 5,800 K. As it passes through the atmosphere, sunlight is attenuated by scattering and absorption; the more atmosphere through which it passes, the greater the attenuation.
As sunlight travels through the atmosphere, chemicals interact with it and absorb certain wavelengths, changing the amount of short-wavelength light reaching the Earth's surface. The most active component of this process is water vapor, which produces a wide variety of absorption bands at many wavelengths, while molecular nitrogen, oxygen and carbon dioxide add to this process. By the time it reaches the Earth's surface, the spectrum is strongly confined between the far infrared and near ultraviolet.
Atmospheric scattering plays a role in removing higher frequencies from direct sunlight and scattering it about the sky. This is why the sky appears blue and the sun yellow — more of the higher-frequency blue light arrives at the observer via indirect scattered paths; and less blue light follows the direct path, giving the sun a yellow tinge. The greater the distance in the atmosphere through which the sunlight travels, the greater this effect, which is why the sun looks orange or red at dawn and sundown when the sunlight is travelling very obliquely through the atmosphere — progressively more of the blues and greens are removed from the direct rays, giving an orange or red appearance to the sun; and the sky appears pink — because the blues and greens are scattered over such long paths that they are highly attenuated before arriving at the observer, resulting in characteristic pink skies at dawn and sunset.
For a path length L through the atmosphere, and for solar radiation incident at angle z relative to the normal to the Earth's surface, the air mass coefficient is:

$$AM = \frac{L}{L_0} \approx \frac{1}{\cos z} \qquad \text{(A.1)}$$

where L_0 is the zenith path length (i.e. normal to the Earth's surface) at sea level and z is the zenith angle in degrees.
The air mass number is thus dependent on the Sun's elevation path through the sky and therefore varies with time of day and with the passing seasons of the year, and with the latitude of the observer.
Accuracy near the horizon
The above approximation overlooks the curvature of the Earth, and is reasonably accurate for values of z up to around 75°. A number of refinements have been proposed to more accurately model the path thickness towards the horizon, such as that proposed by Kasten and Young (1989):

$$AM = \frac{1}{\cos z + 0.50572\,(96.07995 - z)^{-1.6364}} \qquad \text{(A.2)}$$
A more comprehensive list of such models is provided in the main article Airmass, for various atmospheric models and experimental data sets. At sea level the air mass towards the horizon (z = 90°) is approximately 38.
Modelling the atmosphere as a simple spherical shell provides a reasonable approximation:

$$AM = \sqrt{(r \cos z)^2 + 2r + 1} \; - \; r \cos z \qquad \text{(A.3)}$$

where the ratio r = R_E / y_atm ≈ 708 is formed from the radius of the Earth, R_E = 6371 km, and the effective height of the atmosphere, y_atm ≈ 9 km.
These models are compared in the table below:
|z||Flat Earth||Kasten & Young||Spherical shell|
|0°||1.00||1.00||1.00|
|60°||2.00||1.99||2.00|
|75°||3.86||3.81||3.83|
|85°||11.5||10.3||10.6|
|90°||∞||37.9||37.6|
This implies that for these purposes the atmosphere can be treated as if it were concentrated into roughly the bottom 9 km, i.e. essentially all the atmospheric effects are due to the atmospheric mass in the lower half of the troposphere. This is a useful and simple model when considering the atmospheric effects on solar intensity.
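Since all three formulas are explicit, the comparison is easy to reproduce numerically. A minimal Python sketch (the function names are ours, chosen for illustration):

    import math

    def am_flat(z_deg):
        """Flat-Earth secant approximation, eq. (A.1)."""
        return 1.0 / math.cos(math.radians(z_deg))

    def am_kasten_young(z_deg):
        """Kasten & Young (1989) empirical fit, eq. (A.2); z in degrees."""
        return 1.0 / (math.cos(math.radians(z_deg))
                      + 0.50572 * (96.07995 - z_deg) ** -1.6364)

    def am_spherical(z_deg, r=708.0):
        """Homogeneous spherical-shell atmosphere, eq. (A.3)."""
        rc = r * math.cos(math.radians(z_deg))
        return math.sqrt(rc * rc + 2.0 * r + 1.0) - rc

    for z in (0, 60, 75, 85, 90):
        flat = "inf" if z >= 90 else f"{am_flat(z):.2f}"
        print(f"z={z:2d} deg  flat={flat:>6}  "
              f"KY={am_kasten_young(z):5.2f}  shell={am_spherical(z):5.2f}")

Near the zenith the three models agree to within about a percent; they diverge appreciably only above roughly z = 80°, where the flat-Earth secant grows without bound.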
The spectrum outside the atmosphere, approximated by the 5,800 K black body, is referred to as "AM0", meaning "zero atmospheres". Solar cells used for space power applications, like those on communications satellites are generally characterized using AM0.
The spectrum after travelling through the atmosphere to sea level with the sun directly overhead is referred to, by definition, as "AM1". This means "one atmosphere". AM1 (z = 0°) to AM1.1 (z = 25°) is a useful range for estimating the performance of solar cells in equatorial and tropical regions.
Solar panels do not generally operate under exactly one atmosphere's thickness: if the sun is at an angle to the Earth's surface the effective thickness will be greater. Many of the world's major population centres, and hence solar installations and industry, across Europe, China, Japan, the United States of America and elsewhere (including northern India, southern Africa and Australia) lie in temperate latitudes. An AM number representing the spectrum at mid-latitudes is therefore much more common.
"AM1.5", 1.5 atmosphere thickness, corresponds to a solar zenith angle of =48.2°. While the summertime AM number for mid-latitudes during the middle parts of the day is less than 1.5, higher figures apply in the morning and evening and at other times of the year. Therefore, AM1.5 is useful to represent the overall yearly average for mid-latitudes. The specific value of 1.5 has been selected in the 1970s for standardization purposes, based on an analysis of solar irradiance data in the conterminous United States. Since then, the solar industry has been using AM1.5 for all standardized testing or rating of terrestrial solar cells or modules, including those used in concentrating systems. The latest AM1.5 standards pertaining to photovoltaic applications are the ASTM G-173 and IEC 60904, all derived from simulations obtained with the SMARTS code
AM2 (z = 60°) to AM3 (z = 70°) is a useful range for estimating the overall average performance of solar cells installed at high latitudes such as in northern Europe. Similarly, AM2 to AM3 is useful to estimate wintertime performance in temperate latitudes, e.g. the airmass coefficient is greater than 2 at all hours of the day in winter at latitudes as low as 37°.
AM38 is generally regarded as being the airmass in the horizontal direction (z = 90°) at sea level. However, in practice there is a high degree of variability in the solar intensity received at angles close to the horizon, as described in the next section on solar intensity.
At higher altitudes
The relative air mass is only a function of the sun's zenith angle, and therefore does not change with local elevation. Conversely, the absolute air mass, equal to the relative air mass multiplied by the local atmospheric pressure and divided by the standard (sea-level) pressure, decreases with elevation above sea level. For solar panels installed at high altitudes, e.g. in an Altiplano region, it is possible to use lower absolute AM numbers than for the corresponding latitude at sea level: AM numbers less than 1 towards the equator, and correspondingly lower numbers than listed above for other latitudes. However, this approach is approximate and not recommended. It is best to simulate the actual spectrum based on the relative air mass (e.g., 1.5) and the actual atmospheric conditions for the specific elevation of the site under scrutiny.
Solar intensity at the collector reduces with increasing airmass coefficient, but due to the complex and variable atmospheric factors involved, not in a simple or linear fashion. For example, almost all high energy radiation is removed in the upper atmosphere (between AM0 and AM1) and so AM2 is not twice as bad as AM1. Furthermore, there is great variability in many of the factors contributing to atmospheric attenuation, such as water vapor, aerosols, photochemical smog and the effects of temperature inversions. Depending on level of pollution in the air, overall attenuation can change by up to ±70% towards the horizon, greatly affecting performance particularly towards the horizon where effects of the lower layers of atmosphere are amplified manyfold.
One empirical approximation for the solar intensity versus air mass is given by:

$$I = 1.1 \times I_o \times 0.7^{\,(AM^{0.678})} \qquad \text{(I.1)}$$

where the solar intensity external to the Earth's atmosphere is I_o = 1.353 kW/m², and the factor of 1.1 is derived assuming that the diffuse component is 10% of the direct component.
This formula fits comfortably within the mid-range of the expected pollution-based variability:
|z||AM||range due to pollution (W/m²)||formula (I.1) (W/m²)||ASTM G-173 (W/m²)|
|0°||1||840 .. 1130 = 990 ± 15%||1040|
|23°||1.09||800 .. 1110 = 960 ± 16%||1020|
|30°||1.15||780 .. 1100 = 940 ± 17%||1010|
|45°||1.41||710 .. 1060 = 880 ± 20%||950|
|48.2°||1.5||680 .. 1050 = 870 ± 21%||930||1000.4|
|60°||2||560 .. 970 = 770 ± 27%||840|
|70°||2.9||430 .. 880 = 650 ± 34%||710|
|75°||3.8||330 .. 800 = 560 ± 41%||620|
|80°||5.6||200 .. 660 = 430 ± 53%||470|
|85°||10||85 .. 480 = 280 ± 70%||270|
This illustrates that significant power is available at only a few degrees above the horizon. For example, when the sun is more than about 60° above the horizon (z < 30°) the solar intensity is about 1000 W/m² (from equation I.1, as shown in the table above), whereas when the sun is only 15° above the horizon (z = 75°) the solar intensity is still about 600 W/m², or 60% of its maximum level; and at only 5° above the horizon, still 27% of the maximum.
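The numbers in the table are straightforward to regenerate from equations (A.3) and (I.1); a short Python sketch (again, the names are ours):

    import math

    I0 = 1.353  # solar constant used in the text, kW/m^2

    def air_mass(z_deg, r=708.0):
        """Spherical-shell air mass, eq. (A.3)."""
        rc = r * math.cos(math.radians(z_deg))
        return math.sqrt(rc * rc + 2.0 * r + 1.0) - rc

    def intensity(am):
        """Ground-level intensity from eq. (I.1), kW/m^2."""
        return 1.1 * I0 * 0.7 ** (am ** 0.678)

    for z in (0, 23, 30, 45, 48.2, 60, 70, 75, 80, 85):
        am = air_mass(z)
        print(f"z={z:4.1f} deg  AM={am:5.2f}  I={1000 * intensity(am):4.0f} W/m^2")

For z = 48.2° this yields AM ≈ 1.5 and I ≈ 930 W/m², matching the table.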
At higher altitudes
One approximation for the increase in intensity with altitude is:

$$I = 1.1 \times I_o \times \left[ (1 - h/7.1)\, 0.7^{\,(AM^{0.678})} + h/7.1 \right] \qquad \text{(I.2)}$$

where h is the solar collector's height above sea level in km and AM is the air mass (from A.2) as if the collector were installed at sea level.
Alternatively, given the significant practical variabilities involved, the homogeneous spherical model could be applied to estimate AM, using:

$$AM = \sqrt{\big((r+c)\cos z\big)^2 + (2r+1+c)(1-c)} \; - \; (r+c)\cos z \qquad \text{(A.4)}$$

where the normalized heights of the atmosphere and of the collector are respectively r = R_E / y_atm ≈ 708 (as above) and c = h / y_atm.
These approximations at I.2 and A.4 are suitable for use only to altitudes of a few kilometres above sea level, implying as they do reduction to AM0 performance levels at only around 6 and 9 km respectively. By contrast much of the attenuation of the high energy components occurs in the ozone layer - at higher altitudes around 30 km. Hence these approximations are suitable only for estimating the performance of ground-based collectors.
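A sketch combining (A.4) and (I.2) for an elevated collector (the sample zenith angle and heights are arbitrary, chosen for illustration):

    import math

    def air_mass_elevated(z_deg, h_km, y_atm=9.0, r=708.0):
        """Eq. (A.4): air mass seen by a collector h_km above sea level."""
        c = h_km / y_atm
        cz = math.cos(math.radians(z_deg))
        return (math.sqrt(((r + c) * cz) ** 2 + (2 * r + 1 + c) * (1 - c))
                - (r + c) * cz)

    def intensity_elevated(am, h_km, i0=1.353):
        """Eq. (I.2): intensity in kW/m^2 for a collector h_km high."""
        return 1.1 * i0 * ((1 - h_km / 7.1) * 0.7 ** (am ** 0.678) + h_km / 7.1)

    for h in (0.0, 1.5, 3.0):  # sea level, mid-mountain, Altiplano-like site
        am = air_mass_elevated(48.2, h)
        print(f"h={h:3.1f} km  AM={am:4.2f}  I={1000 * intensity_elevated(am, h):4.0f} W/m^2")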
Solar cell efficiency
Silicon solar cells are not very sensitive to the portions of the spectrum lost in the atmosphere. The resulting spectrum at the Earth's surface more closely matches the bandgap of silicon so silicon solar cells are more efficient at AM1 than AM0. This apparently counter-intuitive result arises simply because silicon cells can't make much use of the high energy radiation which the atmosphere filters out. As illustrated below, even though the efficiency is lower at AM0 the total output power (Pout) for a typical solar cell is still highest at AM0. Conversely, the shape of the spectrum does not significantly change with further increases in atmospheric thickness, and hence cell efficiency does not greatly change for AM numbers above 1.
|AM||Solar intensity Pin (W/m²)||Output power Pout (W/m²)||Efficiency Pout/Pin|
This illustrates the more general point that given that solar energy is "free", and where available space is not a limitation, other factors such as total Pout and Pout/$ are often more important considerations than efficiency (Pout/Pin).
Notes and references
- or more precisely 5,777 K as reported in NASA Solar System Exploration - Sun: Facts & Figures retrieved 27 April 2011 "Effective Temperature ... 5777 K"
- See also the article Diffuse sky radiation.
- Yellow is the color negative of blue — yellow is the aggregate color of what remains after scattering removes some blue from the "white" light from the sun.
- Peter Würfel (2005). The Physics of Solar Cells. Weinheim: Wiley-VCH. ISBN 3-527-40857-6.
- Kasten, F. and Young, A. T. (1989). Revised optical air mass tables and approximation formula. Applied Optics 28:4735–4738.
- The main article Airmass reports values in the range 36 to 40 for different atmospheric models
- Schoenberg, E. (1929). Theoretische Photometrie, g) Über die Extinktion des Lichtes in der Erdatmosphäre. In Handbuch der Astrophysik. Band II, erste Hälfte. Berlin: Springer.
- The main article Airmass reports values in the range 8 to 10 km for different atmospheric models
- Gueymard, C.; Myers, D.; Emery, K. (2002). "Proposed reference irradiance spectra for solar energy systems testing". Solar Energy. 73 (6): 443–467. Bibcode:2002SoEn...73..443G. doi:10.1016/S0038-092X(03)00005-7.
- Reference Solar Spectral Irradiance: Air Mass 1.5 NREL retrieved 1 May 2011
- Reference Solar Spectral Irradiance: ASTM G-173 ASTM retrieved 1 May 2011
- Planning and installing photovoltaic systems: a guide for installers, architects and engineers, 2nd Ed. (2008), Table 1.1, Earthscan with the International Institute for Environment and Development, Deutsche Gesellshaft für Sonnenenergie. ISBN 1-84407-442-0.
- PVCDROM retrieved 1 May 2011, Stuart Bowden and Christiana Honsberg, Solar Power Labs, Arizona State University
- Meinel, A. B. and Meinel, M. P. (1976). Applied Solar Energy Addison Wesley Publishing Co.
- The Earthscan reference uses 1367 W/m2 as the solar intensity external to the atmosphere.
- The ASTM G-173 standard measures solar intensity over the band 280 to 4000 nm.
- Interpolated from data in the Earthscan reference using least-squares-fitted variants of equation I.1:
- for polluted air: $I = 1.1 \times I_o \times 0.56^{(AM^{0.715})}$
- for clean air: $I = 1.1 \times I_o \times 0.76^{(AM^{0.618})}$
- The ASTM G-173 standard measures solar intensity under "rural aerosol loading" i.e. clean air conditions - thus the standard value fits closely to the maximum of the expected range.
- Laue, E. G. (1970), The measurement of solar spectral irradiance at different terrestrial elevations, Solar Energy, vol. 13, no. 1, pp. 43-50, IN1-IN4, 51-57, 1970.
- R.L.F. Boyd (Ed.) (1992). Astronomical photometry: a guide, section 6.4. Kluwer Academic Publishers. ISBN 0-7923-1653-3.
Where should runners start the 200m race so that they have all run the same distance by the finish?
Could nanotechnology be used to see if an artery is blocked? Or is this just science fiction?
Explore the properties of isometric drawings.
Make your own pinhole camera for safe observation of the sun, and find out how it works.
Examine these estimates. Do they sound about right?
When a habitat changes, what happens to the food chain?
Can you work out what this procedure is doing?
Is it really greener to go on the bus, or to buy local?
If I don't have the size of cake tin specified in my recipe, will the size I do have be OK?
Make an accurate diagram of the solar system and explore the concept of a grand conjunction.
Can you sketch graphs to show how the height of water changes in different containers as they are filled?
Work out the numerical values for these physical quantities.
Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.
Get some practice using big and small numbers in chemistry.
Formulate and investigate a simple mathematical model for the design of a table mat.
Explore the properties of perspective drawing.
Can you work out which drink has the stronger flavour?
Two trains set off at the same time from each end of a single straight railway line. A very fast bee starts off in front of the first train and flies continuously back and forth between the...
Is it cheaper to cook a meal from scratch or to buy a ready meal? What difference does the number of people you're cooking for make?
Imagine different shaped vessels being filled. Can you work out what the graphs of the water level should look like?
How much energy has gone into warming the planet?
How would you design the tiering of seats in a stadium so that all spectators have a good view?
In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book.
Use trigonometry to determine whether solar eclipses on earth can be perfect.
How efficiently can you pack together disks?
Work with numbers big and small to estimate and calculate various quantities in physical contexts.
Can you deduce which Olympic athletics events are represented by the graphs?
Explore the relationship between resistance and temperature
Andy wants to cycle from Land's End to John o'Groats. Will he be able to eat enough to keep him going?
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
Which units would you choose best to fit these situations?
Estimate these curious quantities sufficiently accurately that you can rank them in order of size
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
Which dilutions can you make using only 10ml pipettes?
When you change the units, do the numbers get bigger or smaller?
What shapes should Elly cut out to make a witch's hat? How can she make a taller hat?
Are these estimates of physical quantities accurate?
Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from?
Can Jo make a gym bag for her trainers from the piece of fabric she has?
These Olympic quantities have been jumbled up! Can you put them back together again?
To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling...
Have you ever wondered what it would be like to race against Usain Bolt?
Starting with two basic vector steps, which destinations can you reach on a vector walk?
This problem explores the biology behind Rudolph's glowing red nose.
Use your skill and knowledge to place various scientific lengths in order of size. Can you judge the length of objects with sizes ranging from 1 Angstrom to 1 million km with no wrong attempts?
Analyse these beautiful biological images and attempt to rank them in size order.
Does weight confer an advantage to shot putters?
How would you go about estimating populations of dolphins?
Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct.
What shape would fit your pens and pencils best? How can you make it?
Smooth, dark buildings, vehicles and even roads can be mistaken by insects and other creatures for water, according to a Michigan State University researcher, creating "ecological traps" that jeopardize animal populations and fragile ecosystems.
It's the polarized light reflected from asphalt roads, windows -- even plastic sheets and oil spills -- that to some species mimics the surface of the water they use to breed and feed. The resulting confusion could drastically disrupt mating and feeding routines and lead insects and animals into contact with vehicles and other dangers, Bruce Robertson said.
An ecologist studying at the W.K. Kellogg Biological Station in Hickory Corners, north of Kalamazoo, Robertson said polarized light reflected from man-made structures can overwhelm natural cues to animal behavior. Dragonflies can be prompted to lay eggs on roads or parking lots instead of water, for example, and such aquatic insects are at the center of the food web. Insect population crashes can impact higher levels of the food chain.
"Any kind of shiny, black object -- oil, solar cells, asphalt -- the closer they are to wetlands, the bigger the problem," he said.
Predators following misdirected insect prey can then also find themselves in danger. The importance of natural light to creatures' ability to navigate -- and the impacts of visible light pollution from man-made sources -- are well understood. Those include the tendency of newly hatched sea turtles to move from their beach nests toward landward light sources instead of following moonlight to the safety of open water. Horizontally polarized light has been found to be a reliable cue for creatures to locate water, Robertson said, and now he and fellow researchers are discovering the effects of light reflected from man-made structures.
Although the research highlights new concerns about human impact on native species and ecological communities, it suggests the importance of building with alternative materials and, when necessary, employing mitigation strategies. Those might include adding white curtains to dark windows or adding white hatching marks to asphalt.
There also might be potential for turning it to an advantage, Robertson said. In locations where trees are being destroyed by insect infestations, for example, "you may be able to create massive polarized light traps to crash bark beetle populations," if such species are found to be responsive to polarized light cues.
dark, funnel-shaped cloud containing violently rotating air that develops below a heavy cumulonimbus cloud mass and extends toward the earth. The funnel twists about, rises and falls, and where it reaches the earth causes great destruction. The diameter of a tornado varies from a few feet to a mile; the rotating winds may attain velocities of 200 to 300 mi (320–480 km) per hr, and the updraft at the center may reach 200 mi per hr. The Enhanced Fujita scale is the standard scale for rating the severity of a tornado as measured by the damage it causes. A tornado is usually accompanied by thunder, lightning, heavy rain, and a loud "freight train" noise.
In comparison with a cyclone or hurricane, a tornado covers a much smaller area but can be very violent and destructive. Under the right conditions, however, a large storm system can produce multiple (more than a hundred in rare cases) and longer-lasting tornadoes over a wide area, leading to widespread damage. The atmospheric conditions typically required for the formation of a tornado include great thermal instability, high humidity, and the convergence of warm, moist air at low levels with cooler, drier air aloft. Wind shear at the back of a large thunderstorm can create horizontally spinning vortices that are pulled into the storm cloud by updrafts to form a mesocyclone, a rotating, upward-flowing columnar air mass; a tornado may form from the base of an intense mesocyclone.
Although tornadoes have occurred on every continent except Antarctica, they are most common in the continental United States, where tornadoes typically form over the central and southern plains, the Ohio valley, and the Gulf states. The area where the most violent storms commonly occur in the United States is known as Tornado Alley, which is usually understood to encompass the plains from north-central Texas north to the Dakotas, with the peak frequency located in Oklahoma. A tornado typically travels in a northeasterly direction at a speed of 20 to 40 mi (32–64 km) per hr, but tornadoes have been reported to move in a variety of directions and as fast as 73 mi (117 km) per hr, or to hover in one place. The length of a tornado's path along the ground varies from less than one mile to several hundred. Tornadoes occurring over water are called waterspouts.
- See Under the Whirlwind: Everything You Need to Know about Tornadoes but Didn't Know Who to Ask (1998);
- Tornado Alley: Monster Storms of the Great Plains (1999).
The atmosphere of Earth is the layer of gases, commonly known as air, that surrounds the planet Earth and is retained by Earth's gravity. The atmosphere of Earth protects life on Earth by creating pressure allowing for liquid water to exist on the Earth's surface, absorbing ultraviolet solar radiation, warming the surface through heat retention (greenhouse effect), and reducing temperature extremes between day and night (the diurnal temperature variation).
By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level, and 0.4% over the entire atmosphere. Air content and atmospheric pressure vary at different layers, and air suitable for use in photosynthesis by terrestrial plants and breathing of terrestrial animals is found only in Earth's troposphere and in artificial atmospheres.
The atmosphere has a mass of about 5.15 × 10^18 kg, three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface. The atmosphere becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km (62 mi), or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km (75 mi). Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition.
The three major constituents of Earth's atmosphere are nitrogen, oxygen, and argon. Water vapor accounts for roughly 0.25% of the atmosphere by mass. The concentration of water vapor (a greenhouse gas) varies significantly from around 10 ppm by volume in the coldest portions of the atmosphere to as much as 5% by volume in hot, humid air masses, and concentrations of other atmospheric gases are typically quoted in terms of dry air (without water vapor). The remaining gases are often referred to as trace gases, among which are the greenhouse gases, principally carbon dioxide, methane, nitrous oxide, and ozone. Filtered air includes trace amounts of many other chemical compounds. Many substances of natural origin may be present in locally and seasonally variable small amounts as aerosols in an unfiltered air sample, including dust of mineral and organic composition, pollen and spores, sea spray, and volcanic ash. Various industrial pollutants also may be present as gases or aerosols, such as chlorine (elemental or in compounds), fluorine compounds and elemental mercury vapor. Sulfur compounds such as hydrogen sulfide and sulfur dioxide (SO2) may be derived from natural sources or from industrial air pollution.
Notes on the composition figures: (A) volume fraction is equal to mole fraction for an ideal gas only (see also volume (thermodynamics)); (B) ppmv means parts per million by volume; (C) water vapor is about 0.25% by mass over the full atmosphere; (D) water vapor varies strongly locally. The relative concentration of gases remains constant until about 10,000 m (33,000 ft).
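As a quick consistency check on those volume fractions, the mean molar mass of dry air follows directly from them (a sketch; the molar masses are standard handbook values):

    # Mean molar mass of dry air from the volume fractions quoted above
    # (volume fraction equals mole fraction for an ideal gas).
    composition = {          # gas: (volume fraction, molar mass in g/mol)
        "N2":  (0.7809, 28.013),
        "O2":  (0.2095, 31.999),
        "Ar":  (0.0093, 39.948),
        "CO2": (0.0004, 44.010),
    }

    mean_molar_mass = sum(frac * m for frac, m in composition.values())
    print(f"{mean_molar_mass:.2f} g/mol")  # ~28.97, close to the accepted 28.96 for dry air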
As we all know, "every program needs memory, and unfortunately memory is finite".....:)

Software must cope with memory usage, and there are two ways to manage it: manually or automatically. Manual management is more prone to errors, especially with exceptions and in asynchronous code. This is why modern managed environments (.NET, Java, Erlang, and many more) implement automatic memory management with garbage collection.
Let's see what garbage collection means in the C# world.
When a C# program instantiates a class, it creates an object.
The program manipulates the object, and at some point the object may no longer be needed. When the object is no longer accessible to the program, it becomes a candidate for garbage collection.
There are two places in memory where the CLR stores items while your code executes.
- Stack : stack keeps track of what’s executing in your code (like your local variables)
- Heap : heap keeps track of your objects.
For a reachable object on the heap, there is always at least one reference (on the stack or in another reachable object) that points to it.
The garbage collector starts cleaning up only when there is not enough room on the heap to construct a new object.
The stack is automatically cleared at the end of a method. The CLR takes care of this and you don’t have to worry about it.
The heap is managed by the garbage collector.
In unmanaged environments without a garbage collector, you have to keep track of which objects were allocated on the heap and you need to free them explicitly. In the .NET Framework, this is done automatically by the garbage collector.
How does the Garbage Collector work?
Let's look at the diagram below, adapted from Microsoft's documentation, to understand it better.
Before Garbage Collector Runs:
In the diagram above, before the garbage collector runs, the application root holds references to object 1, object 3 and object 5. Object 1 depends on object 2, and object 5 depends on object 6. The application root therefore has no dependency on object 4 or object 7.
When the Garbage Collector runs:

It marks all heap memory as not in use, then examines all of the program's reference variables, parameters that hold object references, and other items that point to heap objects.

For each reference, the garbage collector marks the object to which the reference points as in use.

Then it compacts the heap memory that is still in use and updates the program's references.

Finally, the garbage collector updates the heap itself so that the program can allocate memory from the unused portion.
After the Garbage collector runs:
It discards object 4 and object 7, since no dependency on them exists, and compacts the heap memory. When it destroys an object, the garbage collector frees the object's memory and any unmanaged resources it holds.
Forcing Garbage Collection:

You can force a collection by adding a call to GC.Collect, but it is generally recommended not to invoke the garbage collector explicitly.
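The reachability rule described above is not specific to .NET; any tracing collector behaves the same way. As a quick cross-language illustration (Python shown here, with gc.collect() playing the role of GC.Collect), an object kept alive only by a reference cycle is reclaimed once a collection runs:

    import gc
    import weakref

    class Node:
        pass

    a, b = Node(), Node()
    a.partner, b.partner = b, a   # a reference cycle keeps both objects alive
    probe = weakref.ref(a)        # watch 'a' without keeping a strong reference

    del a, b                      # no root reaches the pair now, only the cycle does
    gc.collect()                  # force a collection (the analogue of GC.Collect)

    print(probe() is None)        # True: the collector reclaimed the unreachable cycle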
Hope it helps!
wcsstr − locate a substring in a wide-character string
wchar_t *wcsstr(const wchar_t *haystack, const wchar_t *needle);
The wcsstr() function is the wide-character equivalent of the strstr(3) function. It searches for the first occurrence of the wide-character string needle (without its terminating L'\0' character) as a substring in the wide-character string haystack.
The wcsstr() function returns a pointer to the first occurrence of needle in haystack. It returns NULL if needle does not occur as a substring in haystack.
Note the special case: If needle is the empty wide-character string, the return value is always haystack itself.
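A short usage sketch (the example strings are arbitrary):

    #include <stdio.h>
    #include <wchar.h>

    int
    main(void)
    {
        const wchar_t *haystack = L"wide-character haystack";
        const wchar_t *needle   = L"char";

        wchar_t *hit = wcsstr(haystack, needle);
        if (hit != NULL)
            wprintf(L"found \"%ls\" at offset %td\n", needle, hit - haystack);
        else
            wprintf(L"substring not found\n");
        return 0;
    }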
This page is part of release 3.22 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/. | <urn:uuid:e3494f1d-9684-4005-af43-19eecd019107> | 2.65625 | 209 | Documentation | Software Dev. | 67.76713 | 95,533,548 |
posted by Anonymous
AN EMPTY DENSITY BOTTLE WEIGHS 25GRAMS. IT WEIGHS 65GRAMS WHEN FILL WITH WATER. CALCULATE THE DENSITY OF THE LIQUID.
Please type your post in "proper case", i.e. use lower case except for the first letter of a paragraph, sentence, or proper nouns.
In posts, we use all uppercase only when we mean to SHOUT at someone.
"Density bottle" is also called a specific gravity bottle, or pycnometer, which has a known capacity.
They come in different capacities. Common capacities are 2, 5, 10, 25, 50 ml.
The question does not specify the capacity, so an assumption is required. Assuming a 25 ml capacity:
net volume = 25 ml
net mass = 65 g - 25 g = 40 g
density = 40 g / 25 ml = 1.6 g/ml
Note that the calculated density might change when the real capacity of the bottle is known.
(This second answer appears to solve a variant of the problem in which the empty bottle weighs 30 g, weighs 75 g when filled with water, and 65 g when filled with liquid X.)
75 g - 30 g = 45 g, so the mass of water is 45 g, which means the bottle's volume is 45 cm³.
65 g - 30 g = 35 g, so the mass of liquid X is 35 g.
Density = mass/volume = 35 g / 45 cm³ ≈ 0.78 g/cm³ | <urn:uuid:6f7b4055-bd25-412b-9fd7-d0c32f90df6d> | 3.140625 | 263 | Q&A Forum | Science & Tech. | 79.762647 | 95,533,569 |
This year’s meeting will highlight the interdisciplinary nature of ecology and linking research with education. A wide range of University of Wisconsin-Madison research will be presented at the meeting, including a number of presentations of interest to environmental reporters. Highlights are included in this tipsheet.
New book explores ecological transformations in Wisconsin
A new book, titled “The Vanishing Present: Wisconsin’s Changing Lands, Waters, and Wildlife,” examines how human pressures – urbanization, population growth, and land use changes – are reshaping the state’s ecology and environment.
Edited by UW-Madison botany and environmental studies professor Don Waller and Wright State University biology professor Thomas Rooney, the book brings together viewpoints of dozens of scientists, natural resource managers, and policy experts to offer insight into Wisconsin’s ecological past, present, and future. The book will be available at the meeting.
Contact: Don Waller, firstname.lastname@example.org, (608) 263-2042
TUESDAY, AUGUST 5, 2008
Increasing climate variability predicted to change vegetation patterns and wildfire risk
Climate models generally predict an increase in climate variability and extremes due to the enhanced greenhouse effect. Michael Notaro, a scientist in the UW-Madison Center for Climatic Research, has applied a dynamic global vegetation model to determine how year-to-year climate fluctuations will affect vegetation types and patterns and fire risk.
He has found that interannual climate variability reduces net global vegetation cover, particularly in semi-arid regions such as the southwest U.S. At the same time, the model reveals that year-to-year variability in precipitation supports greater frequency and intensity of wildfires.
High interannual climate variability can change vegetation patterns, favoring the expansion of grass cover at the expense of tree cover and, within forested areas, the expansion of deciduous forest at the expense of evergreen forests. These results offer insight into future global and regional ecosystem distributions and boundaries.
Contact: Michael Notaro, email@example.com, (608) 261-1503
COS 18-1, Response of the mean global vegetation distribution to interannual climate variability (Tuesday, Aug. 5 at 8:00 a.m.)
To save fish, researchers look to trees
Native brook trout in streams on Wisconsin’s Bayfield Peninsula have struggled for decades, mainly due to springtime floods of snowmelt that blanket their gravel spawning beds with sand and clay. Logging and denuded stream banks are often to blame when streams experience intense runoff. Yet, after being heavily logged in the late 1800s, this area along Lake Superior’s south shore is mostly reforested. So, why is the problem continuing?
Drainage from farm fields and roads is partly to blame, but forest ecologists Jordan Muss and David Mladenoff think another answer may lie in the treetops. Though historically dominated by spruce, pine and other evergreens, the peninsula's forests today are mostly composed of deciduous trees. So, the pair hypothesized, when evergreens were abundant, perhaps their foliage-covered branches held more snow in winter, allowing it to evaporate from the canopy rather than accumulate below. Two winters of data collection under a variety of the peninsula's forest types now support this: snowpack drops by as much as 55 percent as canopy density increases.
The findings suggest that managing forests for more evergreen species could help curb runoff. The scientists will explore this possibility next through a modeling study.
Contact: Jordan Muss, firstname.lastname@example.org, (608) 265-6321
COS 23-2, Using forest canopy density to model beneath canopy snowpack (Tuesday, August 5, 2008, 8:20 a.m.)
Where in Wisconsin do wolves call home?
In 1992, forest and wildlife ecology professor David Mladenoff began identifying and mapping Wisconsin’s most preferred wolf habitat, both to assist in the management of existing packs and predict where new ones might establish. Though he included a wide variety of landscape characteristics in his model, including the presence of forests and wetlands, and the densities of streams and deer, what wolves seemed to prefer above all else were areas with fewer roads. Now, with more than 500 wolves roaming the state and much of that top habitat occupied, does this prediction still hold?
Yes, says Mladenoff, with some nuances. While roads still play a part in his new model, the most critical factor is agriculture, whose presence appears to have a strong negative effect on wolves’ ability to establish. Wolves are least successful in places with many roads and farms because these landscape features represent contact and conflict with humans, he says.
While this suggests that wolves likely won’t colonize central and southern Wisconsin heavily in the future, he cautions that habitat preferences could shift again now that wolves no longer enjoy protection as endangered species in the state.
Contact: David Mladenoff, email@example.com, (608) 262-1992
COS 42-9, A new habitat selection model for gray wolves in Wisconsin after 30 years of recovery (Tuesday, August 5, 2008, 4:20 p.m.)
Tracking long-term ecological changes in Wisconsin
Ecological changes are complex and occur over large spans of space and time, making the specific causes and effects of change difficult to track. UW-Madison botany professor Don Waller and his research group are studying ecological changes in Wisconsin over time by comparing modern data with a rare and valuable historical resource – a detailed dataset of plant communities in the state compiled by John Curtis in the 1950s.
Four presentations at the ESA meeting will describe some of the group’s approaches to understanding what factors drive ecological change and how such changes are affecting Wisconsin’s environment today. Their studies show that forest habitat fragmentation, invasive plant species, and deer have each profoundly impacted Wisconsin’s forests over the past 50 years. In many areas, native species have declined, invasive plants have gained a foothold, and overall plant diversity has dropped as previously diverse areas become more and more similar. An understanding of the factors underlying ecological change may help guide efforts to identify vulnerable habitats and management strategies to protect the state’s natural resources.
Contact: Don Waller, firstname.lastname@example.org, (608) 263-2042
SYMP 8-7, Drivers of long-term ecological change and hysteresis in Midwestern forest communities (Tuesday, Aug. 5 at 3:50 p.m.)
PS 18-19, A functional approach to analyzing long-term change in plant communities in Wisconsin, USA (Tuesday, Aug. 5 at 5:00 p.m.)
PS 18-23, Forty-seven year changes in vegetation at the Apostle Islands: Effects of deer on the forest understory (Tuesday, Aug. 5 at 5:00 p.m.)
PS 26-120, Colonization, establishment, and impacts of three notorious invasive species over five decades in southern Wisconsin broadleaf forests (Tuesday, Aug. 5 at 5:00 p.m.)
WEDNESDAY, AUGUST 6, 2008
Lakeshore development impacts sport fish populations in Wisconsin
Lakeshores offer prime real estate for residences and recreation, but this development comes at an ecological price. Two studies to be presented at the ESA meeting offer new insights into determining how lakeshore development impacts sport fish populations.
Van Butsic, David Lewis, and Volker Radeloff will present an economic model to predict housing density around lakes based on zoning policies and other development constraints. They report that decreasing the minimum frontage zoning will increase lakefront housing density. Since bluegills are known to be adversely affected by increasing development, they predict that reducing frontage minimums would substantially compromise bluegill growth.
Jereme Gaeta, Stephen Carpenter, and colleagues have found that growth rates of most size classes of largemouth bass, a popular sport fish, are also negatively impacted by development. In 16 northern Wisconsin lakes, they found that larger fish grow more slowly in lakes surrounded by more extensive building than in less developed lakes. The effect is most pronounced on the largest bass, those over about 10 inches, which show a strong decline in growth rate with increasing numbers of neighboring homes.
Contacts: Van Butsic, email@example.com, (608) 345-7201; Jereme Gaeta, firstname.lastname@example.org
COS 20-9, Predicting lakefront housing growth and changing bluegill growth rates using a linked economic-ecological model (Tuesday, Aug. 5 at 10:50 a.m.)
COS 47-4, Coarse woody habitat density and largemouth bass (Micropterus salmoides) growth rates (Wednesday, Aug. 6 at 9:00 a.m.)
THURSDAY, AUGUST 7, 2008
Wisconsin Healthy Grown Potato program: Conservation potential in agricultural ecosystems
There is growing interest in using the uncultivated portions of farms for biodiversity conservation purposes. The Wisconsin Healthy Grown Potato eco-label provides an economic incentive for growers to implement conservation and restoration plans on their non-crop lands.
Environmental studies professor Paul Zedler and UW-Extension scientist Deana Knuteson are leading efforts to assess current levels of biodiversity in non-croplands adjacent to potato fields in central Wisconsin. Participating farms – currently around a dozen – conduct one or more management practices chosen to enhance the conservation value of non-crop lands.
Current projects being presented at the ESA meeting include analyses of the diversity and abundance of native plants, birds, and beneficial insects. Early results show that even without ecological management these farms contain a significant proportion of native biodiversity and highlight the conservation value of non-field habitats next to cultivated potato fields.
Contacts: Paul Zedler, email@example.com, (608) 265-8018; Deana Knuteson, firstname.lastname@example.org, (608) 265-9798
COS 21-10, Assessment of avian communities for conservation potential in central sands Wisconsin agroecosystems (Tuesday, Aug. 5 at 11:10 a.m.)
COS 78-1, Influence of non-crop habitat on weed seed predation within potato crops (Thursday, Aug. 7 at 8:00 a.m.)
COS 78-5, Applied plant ecology in an agricultural landscape: Linking potato production with plant conservation in Wisconsin (Thursday, Aug. 7 at 9:20 a.m.)
PS 53-15, Achieving conservation objectives on farms: A case study of the ecolabel approach in Wisconsin (Thursday, Aug. 7 at 5:00 p.m.)
Scientists attempt prairie restoration after dam removal
Not only does Wisconsin have more dams than any other state, it's removing dams more quickly than nearly any other state as well. By restoring natural flows to streams and rivers, dam removal offers many benefits to fish communities. But once a reservoir is drained, freshly exposed sediments can become hotbeds of invasive plants. Little is known about the factors that control plant communities in these basins or whether they offer favorable sites for restoration.
As part of a project to characterize the soils in a recently drained basin in southwestern Wisconsin, soil scientists Sam Eldred, Ana Wells and Nick Balster spread the seeds of prairie plants at densities ranging from very high to none. Three years of monitoring have now revealed that despite the altered state of the sediments after 60 years of impoundment, prairie species seeded at the highest densities seem to be holding their own against invasive plants, says Balster. However, the amount of seed this takes – 1,000 seeds per roughly 10 square feet – would be prohibitively expensive for most landowners.
Balster is now interested to see if, over time, prairie growth in plots that were seeded at mid-range or even low densities might also prove successful.
Contact: Nick Balster, email@example.com, (608) 263-5719
PS 67-163, Effects of seed application rates and soil properties on the interaction between restored native prairie and invasive species in dewatered sediments following a recent dam removal in southwestern Wisconsin (Thursday, Aug. 7, 2008, 5:00 pm)
FRIDAY, AUGUST 8, 2008
What makes a perfect rain garden?
Homeowners hoping to do their part for urban water quality have made rain gardens – small garden plots for capturing stormwater – one of the fastest-growing features of the home landscape. But as their popularity has risen, so have opinions about the plants they should contain. Some insist that prairie species are needed to penetrate compacted soil and allow stormwater to permeate the ground. Others claim that typical urban plants, such as turfgrasses and shrubs, work just as well.
As a first foray into this issue, soil scientists Nick Balster and Marie Johnston examined the soils beneath a dozen prairie gardens, ranging from one to 15 years old, at private homes in Madison, Wisconsin, and compared them with those in adjacent stretches of manicured lawn. Soil parameters, such as density and compaction, differed little between the prairie garden and turfgrass plots, the pair found. Instead, the age of the residence was the biggest factor, with soils at homes built before the 1970s showing less compaction than those at younger sites.
Prairie garden soils did, however, have higher levels of organic matter and soil aggregation – changes that, over time, may lead to less dense soil and higher infiltration rates, says Balster.
Contact: Nick Balster, firstname.lastname@example.org, (608) 263-5719
COS 121-4, The effect of native prairie on rain garden function and qualitative assessment of their implementation by homeowners (Friday, Aug. 8, 2008, 9:00 am)
Christine Buckley | Newswise Science News | <urn:uuid:4d693b1e-a619-4d96-b3b5-2bb8cbb58922> | 2.625 | 3,535 | Content Listing | Science & Tech. | 39.203514 | 95,533,581 |