Dataset columns:
- text: string, lengths 174 to 655k characters
- id: string, length 47
- score: float64, range 2.52 to 5.25
- tokens: int64, range 39 to 148k
- format: string, 24 classes
- topic: string, 2 classes
- fr_ease: float64, range -483.68 to 157
- __index__: int64, range 0 to 1.48M
Deterministic Chaos: Sensitivity, Mixing, and Periodic Points. Mathematical research in chaos can be traced back at least to 1890, when Henri Poincaré studied the stability of the solar system. He asked whether the planets would continue on indefinitely in roughly their present orbits, or whether one of them might wander off into eternal darkness or crash into the sun. He did not find an answer to his question, but he did create a new analytical method, the geometry of dynamics. Today his ideas have grown into the subject called topology, which is the geometry of continuous deformation. Poincaré made the first discovery of chaos in the orbital motion of three bodies that mutually exert gravitational forces on one another. Keywords: Chaotic System, Periodic Point, Unit Interval, Shift Operator, Periodic Cycle.
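The keywords above mention the unit interval and the shift operator; as a minimal sketch (an illustration, not taken from the source), the doubling map x → 2x mod 1 on the unit interval acts as a shift on binary digits and exhibits the sensitive dependence on initial conditions referred to in the title:

# Illustrative only: the doubling map on the unit interval, which acts as a
# shift operator on binary expansions. Two orbits that start a tiny distance
# apart separate rapidly -- the separation roughly doubles each step until it
# is of order one, the hallmark of sensitive dependence on initial conditions.

def doubling_map(x):
    return (2.0 * x) % 1.0

x, y = 0.1234567, 0.1234567 + 1e-9   # two nearly identical starting points
for n in range(1, 31):
    x, y = doubling_map(x), doubling_map(y)
    if n % 10 == 0:
        print(f"step {n:2d}: separation = {abs(x - y):.6f}")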
<urn:uuid:2973e5fb-77d6-40eb-9b2a-03b17ad3cee0>
3.390625
175
Truncated
Science & Tech.
31.090833
95,617,166
Driving to work becomes routine--but could you drive the entire way in reverse gear? Humans, like many animals, are accustomed to seeing objects pass behind us as we go forward. Moving backwards feels unnatural. In a new study, scientists from The Scripps Research Institute (TSRI) reveal that moving forward actually trains the brain to perceive the world normally. Hollis Cline, PhD, is the Hahn Professor of Neuroscience and a member of the Dorris Neuroscience Center at The Scripps Research Institute. Credit: Photo courtesy of The Scripps Research Institute The findings also show that the relationship between neurons in the eye and the brain is more complicated than previously thought--in fact, the order in which we see things could help the brain calibrate how we perceive time, as well as the objects around us. "We were trying to understand how that happens and the rules used during brain development," said the study's senior author Hollis Cline, who is the Hahn Professor of Neuroscience and member of the Dorris Neuroscience Center at TSRI. This research, published this week in the journal Proceedings of the National Academy of Sciences, could have implications for treating sensory processing disorders such as autism. Reversing the Map The new study began when Masaki Hiramoto, a staff scientist in Cline's lab, asked an important question: "How does the visual system of the brain get better 'tuned' over time?" Previous studies had shown that people use the visual system to create an internal map of the world. The key to creating this map is sensing the "optic flow" of objects as we walk or drive forward. "It's natural because we've learned it," said Cline. To study how this system develops, Hiramoto and Cline used transparent tadpoles to watch as nerve fibers, called axons, developed between the retina and the brain. The scientists marked the positions of the axons using fluorescent proteins. The tadpoles were split into groups and raised in small chambers. One group was shown a computer screen with bars of light that moved past the tadpoles from front to back--simulating a normal optic flow as if the animal were moving forward. A second group saw the bars in reverse--simulating an unnatural backwards motion. Using the TSRI Dorris Neuroscience Center microscopy facility, Hiramoto then captured high-resolution images of these neurons as they grew over time. The researchers found that tadpoles' visual map developed normally when shown bars moving from front to back. But tadpoles shown the bars in reverse order extended axons to the wrong spots in their map. With those axons out of order, the brain would perceive visual images as reversed or squished. Rewriting the Rules This discovery challenges a rule in neuroscience that dates back to 1949. Until now, researchers knew it was important that neighboring neurons fired at roughly the same time, but didn't realize that the temporal sequence of firing was important. "According to the old rule, if there was a stimulus that went backwards, the map would be fine," said Cline. The new study adds the element of order. The researchers showed that objects moving from front to back in the visual field activated retinal cells in a specific sequence. Cline and Hiramoto believe that this sequence helps the brain perceive the passage of time. For example, if you drive for a few minutes and pass a street sign, your brain will map its position behind you.
If you keep driving and you pass another street sign, your brain will map out not only the street signs' positions relative to each other, but their distance in time as well. This link between time and space in the visual system might also apply to hearing and the sense of touch. The original question of how the visual system gets "tuned" over time might be applicable across the entire brain. The researchers believe this study could have implications for patients with sensory and temporal processing disorders, including autism and a mysterious disorder called Alice in Wonderland syndrome, where a person perceives objects as disproportionately big or small. Cline said the new study offers possibilities for retraining the brain to map the world correctly, for instance after stroke. More information on the study, "Optic flow instructs retinotopic map formation through a spatial to temporal to spatial transformation of visual information," is available at: http://www.pnas.org/content/early/2014/11/05/1416953111.abstract Support for the work came from the National Institutes of Health (EY011261 and DP1OD000458), the Nancy Lurie Marks Family Foundation and an endowment from the Hahn Family Foundation. Madeline McCurry-Schmidt | EurekAlert!
<urn:uuid:c2a92944-6e87-4742-909e-7e41fba9be7f>
3.875
1,634
Content Listing
Science & Tech.
43.672536
95,617,167
Inventors: Minoura N, Rachikofu O. Discrimination of protein. Publication date: 2001. PROBLEM TO BE SOLVED: To selectively discriminate a desired protein with a simple operation at reduced cost by using a polymer having a specific molecular template form. SOLUTION: A protein is selectively discriminated with a polymer having a molecular template form and comprising a porous material, obtained by the dissolution and removal of a template protein that has part of the chemical structure of the protein to be discriminated as a main structure but is otherwise different from the protein to be discriminated, and having pores sufficient to accept the intrusion of the protein to be discriminated, preferably with a pore diameter of 10-100 nm. For example, when the protein to be discriminated is oxytocin, the protein to be removed by dissolution is a peptide of formula proline-leucine-glycine-NH2. The above polymer can be produced by polymerizing the protein to be removed by dissolution, a functional monomer and a polymerizable crosslinking agent in the presence of a diluent, crushing the obtained composite polymer to fine powder and removing the above protein by dissolution.
<urn:uuid:f24044d8-3798-46d7-9233-e4a47dbba87c>
2.625
322
Knowledge Article
Science & Tech.
8.670816
95,617,198
1. In forward-swimming Paramecium the direction of metachronal wave propagation is turned progressively clockwise from forward-right to backward-left if the viscosity of the medium is increased to more than 100 cP.
2. With increasing viscosity the direction of the power stroke is turned clockwise at a lower rate than the direction of waves. This leads to a gradual transformation of the dexioplectic metachrony toward a symplectic pattern.
3. As viscosity is raised the polarization of the ciliary cycle in time and space is progressively reduced, so that the beat becomes increasingly helicoidal.
4. Metachronal coordination gradually breaks down at viscosities of more than about 100 cP, but is retained better at the anterior end of the cell than in more posterior regions.
5. At viscosities above 12 cP the left-handed swimming helix of Paramecium is changed into a right-handed helix. This is produced primarily by the viscosity-dependent clockwise shift in the direction of the power stroke from backward-right to backward-left.
6. The frequency of peristomal cilia (32/s at 20°C) decreases with rising viscosity. Under constant conditions, a posteriorly directed gradient of decreasing frequency can be observed with the stroboscope.
7. Raising the viscosity leads to an increase of the average wavelength from 10.7 µm at 1 cP to 14.3 µm at 40 cP. In the same range of viscosity the wave velocity, which is the product of frequency and wavelength, is reduced from 340 to 200 µm/s, since the drop in frequency exceeds the increase in wavelength.
8. The wave velocity tends to be stabilized by reciprocal relations between frequency and wavelength, if all other factors are kept constant. However, the wavelength is found to be different in forward-swimming and backward-swimming animals at 40 cP without a change in frequency (14.1 beats per second; 14.3 compared to 12.7 µm). This is explained if the metachronal wavelength is increased by decreasing polarization of the ciliary cycle.
9. A working hypothesis is put forward which explains the origin of a metachronal system by the distribution of forces parallel to the cell surface produced by polarized or unpolarized cycles of ciliary movement.
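A quick numerical check (mine, not the authors') of the relation in point 7 above, that wave velocity is the product of beat frequency and wavelength, using the figures quoted in points 6-8:

# Sanity check of point 7: wave velocity = frequency x wavelength.
# The frequencies and wavelengths are the values quoted in the abstract;
# the helper function itself is just an illustration.

def wave_velocity(frequency_hz, wavelength_um):
    """Return metachronal wave velocity in micrometres per second."""
    return frequency_hz * wavelength_um

print(wave_velocity(32.0, 10.7))   # ~342 um/s, close to the quoted 340 um/s at 1 cP
print(wave_velocity(14.1, 14.3))   # ~202 um/s, close to the quoted 200 um/s at 40 cP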
<urn:uuid:ea92ab00-3e41-46f6-b662-3b805d13026a>
2.625
513
Academic Writing
Science & Tech.
48.822544
95,617,205
What's the difference between the rate equation for a reaction and the average rate?
Thread starter - 02-01-2004 15:56
Reply - 02-01-2004 19:31
If you're talking about the A-level chemistry rate equation then there isn't any difference: Rate = k[A][B]. The units of rate are mol dm⁻³ s⁻¹, so the rate is really the average of the reaction. Also the rate is never the same throughout the reaction, so there's further proof that you always work out the average rate.
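As a concrete illustration (the numbers are made up, not from the thread), plugging values into a second-order rate law of the form quoted in the reply:

# Illustrative numbers only -- k and the concentrations below are invented,
# not taken from the thread. Rate = k[A][B] for a reaction first order in each
# reactant; the units come out as mol dm^-3 s^-1 when k is in dm^3 mol^-1 s^-1.

k = 0.05            # rate constant, dm^3 mol^-1 s^-1 (hypothetical)
conc_A = 0.10       # mol dm^-3 (hypothetical)
conc_B = 0.20       # mol dm^-3 (hypothetical)

rate = k * conc_A * conc_B
print(f"rate = {rate:.4e} mol dm^-3 s^-1")   # 1.0000e-03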
<urn:uuid:2266ed3d-470b-43ee-8dea-40a7729c95f7>
3
127
Comment Section
Science & Tech.
58.931667
95,617,206
March 6th, 2018 | by Faye Oney
Researchers have developed a process that uses silver nanowires to print electronic circuits on flexible surfaces. Their method could be promising for the future of flexible and wearable electronics, especially for the medical industry.
February 13th, 2018 | by April Gocha
Residential LEDs use at least 75% less energy and last 25 times longer than incandescent bulbs, but R&D challenges still exist for LED lighting. However, new materials research continues to push LED technologies further forward.
February 13th, 2018 | by Faye Oney
Researchers have developed a triboelectric nanogenerator that uses body movements to generate electricity. Their device could someday generate enough power to operate our mobile devices and wearable electronics.
February 2nd, 2018 | by Faye Oney
Most current energy-saving window technology requires electricity to power the windows. But a research team has devised a fluidic window that uses magnetic nanoparticles to control the window to capture solar energy.
January 26th, 2018 | by Faye Oney
Researchers have created a high-performance ceramic composite that is strong, durable, and resistant to heat and radiation. The findings could be useful in industries that require highly functional and durable ceramic materials—such as nuclear power plants, aerospace, and oil and gas industries.
January 19th, 2018 | by Faye Oney
Researchers have discovered that a layer of fullerenes can enable electrons to travel farther in organic solar cells. Their findings are a major breakthrough in organic solar research, and could lead to less expensive solar power in the future.
January 16th, 2018 | by Faye Oney
By observing lithium ion movement in nanoparticles, researchers have discovered that, instead of increasing, the ions reverse at a certain point. Their discovery could be a breakthrough in faster-charging and longer-lasting batteries.
January 9th, 2018 | by Faye Oney
Inspired by origami, researchers have created a tiny robot exoskeleton that bends and moves in response to chemical or thermal changes. These tiny machines can be used in electronics applications as well as semiconductor manufacturing.
January 9th, 2018 | by April Gocha
After collecting extensive data, researchers at Rice University (Houston, Texas) can definitively say that, when it comes to porous nanoparticles, size matters—and, in the process, they’ve made some surprising discoveries about how size affects the materials’ intrinsic properties.
January 2nd, 2018 | by Faye Oney
Scientists at Rice University have developed a device that uses microfluidics to implant carbon nanotube fibers into brain tissue. Their device could help scientists learn more about cognitive processes and improve therapies for patients with neurological disorders.
<urn:uuid:1399ba81-6cdf-4753-8735-27bb61e9a5fb>
2.578125
564
Content Listing
Science & Tech.
21.106949
95,617,208
Although native enemies in an exotic species' new range are considered to affect its ability to invade, few studies have evaluated predation pressures from native enemies on exotic species in their new range. The exotic prey naïveté hypothesis (EPNH) states that exotic species may be at a disadvantage because of their naïveté towards native enemies and, therefore, may suffer higher predation pressures from the enemy than native prey species. Corollaries of this hypothesis include the native enemy preferring exotic species over native species and the diet of the enemy being influenced by the abundance of the exotic species. We comprehensively tested this hypothesis using introduced North American bullfrogs (Lithobates catesbeianus, referred to as bullfrogs), a native red-banded snake (Dinodon rufozonatum, the enemy) and four native anuran species in permanent still water bodies as a model system in Daishan, China. We investigated reciprocal recognition between snakes and anuran species (bullfrogs and three common native species) and the diet preference of the snakes for bullfrogs and the three species in laboratory experiments, and the diet preference and bullfrog density in the wild. Bullfrogs are naïve to the snakes, but the native anurans are not. However, the snakes can identify bullfrogs as prey, and in fact, prefer bullfrogs over the native anurans in manipulative experiments with and without a control for body size and in the wild, indicating that bullfrogs are subjected to higher predation pressures from the snakes than the native species. The proportion of bullfrogs in the snakes' diet is positively correlated with the abundance of bullfrogs in the wild. Our results provide strong evidence for the EPNH. The results highlight the biological resistance of native enemies to naïve exotic species.
<urn:uuid:3d05ff89-3bc6-4848-8862-4115628a001e>
3.40625
390
Academic Writing
Science & Tech.
8.059197
95,617,219
SpaceX founder and CEO Elon Musk published a paper revealing more details about his plan of building a self-sustaining city on Mars. SpaceX is also working on an interplanetary transport system that is capable of sending humans to Mars or even beyond. The giant gas planet Jupiter now has 69 known moons. This was discovered after a survey meant to look beyond the Solar System accidentally spotted them in the regions near the planet. Chinese scientists will attempt to grow potatoes on the moon. Potatoes and silkworm eggs will be placed inside a container that will be sent during China's lunar mission in 2018. Cassini photographed Saturn's moon Iapetus and discovered its contrasting properties. The spacecraft also completed its 8th dive between Saturn and its rings. Baking bread in space could soon be possible with the help of a special oven and dough mixture. An amputated flatworm grew two heads after spending five weeks aboard the International Space Station (ISS). Scientists are observing how microgravity affects cell regeneration in worms. Ikea sought the help of NASA to redefine the design for small space storage. Ikea designers spent three days inside a Martian-simulation spacecraft in Utah. An astronomer presented a paper that says the "Wow! signal" from 1977 could have originated from a comet and not from aliens. Hydrogen gas from comets as they accelerate could have caused the radio emissions. NASA will create colorful artificial clouds in the sky in order to study the Earth's ionosphere. They will launch canisters filled with vapor and chemicals that are expected to react when they come in contact with sunlight. The new NASA Mars Rover concept car is on display at the Kennedy Space Center. The vehicle is used to promote the 'Summer in Mars' program. Astronomers used the sharp vision of NASA's Hubble Space Telescope to measure the mass of a type of dead star known as a "white dwarf". A mini strawberry moon will light up the sky on June 9. Although it has no special features, the strawberry moon signals the start of the harvesting season, as named by the Native Americans. The European Southern Observatory (ESO) announced that ALMA discovered a prebiotic molecule, or building block of life, near an infant-like star. The molecule methyl isocyanate was discovered around a young star in the IRAS 16293-2422 region. NASA introduced 12 new astronauts. The team will join the 44 others who all have a chance to go to Mars in a future mission.
<urn:uuid:cf209da4-ffa1-4b7d-a019-8b286bc5504f>
3.21875
507
Content Listing
Science & Tech.
49.857539
95,617,238
Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks.
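The abstract does not reproduce its equations here; as a hedged illustration of the general idea (a generic multiplicative rule, not the specific model from this paper), synaptic scaling can be sketched as slowly rescaling all of a neuron's incoming weights toward a target firing rate:

# Generic multiplicative synaptic-scaling rule -- an illustration of the
# concept described in the abstract, NOT the model analyzed in the paper.
# All incoming weights of a neuron are multiplied by the same factor, which
# slowly pulls its firing rate back toward a target value while preserving
# the relative pattern of synaptic strengths.

import numpy as np

def scale_weights(weights, firing_rate, target_rate, tau=100.0):
    """One slow scaling step; tau sets how gradual the correction is."""
    factor = 1.0 + (target_rate - firing_rate) / (target_rate * tau)
    return weights * factor

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=10)                      # hypothetical incoming weights
w_scaled = scale_weights(w, firing_rate=8.0, target_rate=5.0)
print(w_scaled / w)                                     # same factor (<1) for every synapse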
<urn:uuid:fe0afb3b-8afb-41c8-9673-523bd0e441c0>
2.578125
268
Academic Writing
Science & Tech.
-6.569592
95,617,257
VICTORIA — While humans have mapped the surfaces of the moon and Mars, our own deep sea remains largely a mystery — but Ocean Networks Canada is working to change that. The University of Victoria facility has pioneered ocean observatories in the Strait of Georgia, along coastal British Columbia and in the Arctic that stream live data and video from the sea floor. Now, a five-year $46.6-million investment from the federal government will ensure the world-leading facility continues to advance understanding of the deep ocean, said president and CEO Kate Moran. “About 5 per cent of the planet’s sea floor is mapped,” she said. “Our own planet is not well understood yet. We also don’t have very good measurements in the very deep ocean to understand how it’s changing from climate change.” Ocean Networks Canada was one of 17 facilities that received a total of $328.5 million in funding from the federal government, through the Canada Foundation for Innovation, on Monday. The facility’s first ocean observatory was established in 2006 in Saanich Inlet, northwest of Victoria. Over the past decade, the number of sites that transmit data has grown to more than 50. The sites allow researchers all over the world to study a wide range of areas including climate change, fish abundance, plate tectonics, tsunamis, deep-sea ecosystems and ocean management. Human pressures are impacting the ocean at an ever-increasing pace and understanding the change is key to ensuring a sustainable future, Moran said. All the data is accessible on Ocean Networks Canada’s website, allowing anyone to watch live video of the deep-sea floor. One camera, off Vancouver Island’s west coast, is on 24 hours a day, while others are on a rotating schedule. Moran said occasionally discoveries have been made by ordinary observers, including a Ukrainian teenager named Kirill Dudko, who in 2013 saw something unusual on one of the video streams. “He didn’t know what it was,” Moran recalled with a chuckle. “He sent us an email saying, ‘What was that monster that ate the fish?'” The facility’s researchers looked at the video and couldn’t immediately name the “monster.” Eventually, the creature was identified as a female elephant seal that dove 900 metres below the surface to slurp up a hagfish. “That helped the research community understand how those kinds of marine mammals feed, and now this young man is going to be a marine biologist,” Moran said. The facility also has high-precision hydrophones in place that can assess shipping noise. It’s developing technology to identify the locations of whales, so alerts can be sent to nearby ships, said Moran. The number of tankers leaving Metro Vancouver’s waters will increase seven-fold if Kinder Morgan Canada’s Trans Mountain oil pipeline expansion is built. But Moran said there is already a need to mitigate shipping noise, regardless of whether the project goes ahead. Other institutions that received funding from the federal government on Monday included Laval University, where researchers are operating Canada’s only dedicated research icebreaker the CCGS Amundsen, and Queen’s University, where faculty members are developing clinical trials to improve treatment and prevention of cancer. Science Minister Kirsty Duncan said in a statement that the government’s investment demonstrates the value it places in the role that science plays in building a vibrant, healthy society. 
— By Laura Kane in Vancouver The Canadian Press
<urn:uuid:dd399325-a242-40a2-b90c-2ecfa1e5b41a>
3.140625
882
News Article
Science & Tech.
42.484672
95,617,270
Optically stimulated luminescent (OSL) dosimeters are devices used for measuring doses of ionizing radiation. Signal is stored within an OSL material so that when stimulated with light, light of a specific wavelength is emitted in proportion to the integrated ionizing radiation dose. Each interrogation of the material results in the loss of a small fraction of signal, thus allowing multiple interrogations leading to more accurate measurements of dose. In order to reuse a dosimeter, the residual signals from prior doses must be taken into account and subtracted from current readings, adding uncertainty to any future measurements. To reduce these errors when they become large, it is desirable to completely clear the stored signal or anneal the dosimeter. Traditionally, heating the material has accomplished this. In a commercially available dosimeter badge system, the OSL material Al2O3:C is incorporated into a plastic slide that would melt at the necessary high temperatures, which can reach 900 °C, required for annealing. Fortunately, due to the material's high sensitivity to light, OSLs can be optically annealed instead. In order to do this, an affordable OSL dosimeter annealer was designed with inexpensive, exchangeable blue, green, and white high intensity light-emitting diodes (LEDs). Several dosimeters were repeatedly annealed for recorded intervals and then read out. A single dosimeter was partially annealed through repeated interrogations with the LED array from a commercial reader. The signal loss due to the exposure to each light was analyzed to determine the practicality and efficiency of each color. The rate and extent of signal loss was dependent not only on the spectrum of annealing light but on the initial signal levels as well. These findings suggest that blue LEDs are the most promising for effective and rapid clearing of the OSL material Al2O3:C.
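The paragraph above notes that every optical interrogation removes a small fraction of the stored signal; as a toy illustration only (the fractions below are invented, and the study itself found that the loss rate depends on the light spectrum and on the signal level), the remaining signal after repeated readouts can be modelled as a geometric decay:

# Toy model only: assumes each readout or light exposure removes a fixed
# fraction of the remaining OSL signal, which the text itself notes is an
# oversimplification. The fractions below are hypothetical.

def remaining_signal(initial, fraction_lost_per_read, n_reads):
    return initial * (1.0 - fraction_lost_per_read) ** n_reads

print(remaining_signal(1.0, 0.001, 100))   # ~0.90 left after 100 routine readouts
print(remaining_signal(1.0, 0.20, 20))     # ~0.01 left after 20 annealing exposures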
<urn:uuid:541cdf6a-7baa-4b0a-88b3-27342b36c753>
2.96875
381
Knowledge Article
Science & Tech.
22.269873
95,617,293
Title: Fire effects in the coastal lowlands: Hawai'i Volcanoes National Park
LC Subject Headings: Grassland fires -- Hawaii -- Hawaii Island; Ground cover fires -- Hawaii -- Hawaii Island; Hawaii Volcanoes National Park (Hawaii); Plants -- Effect of fires on -- Hawaii -- Hawaii Island
Issue Date: Apr 1994
Publisher: Cooperative National Park Resources Studies Unit, University of Hawaii at Manoa, Department of Botany
Citation: Tunison JT, Leialoha JAK, Loh RL, Pratt LW, Higashino PK. 1994. Fire effects in the coastal lowlands: Hawai'i Volcanoes National Park. Honolulu (HI): Cooperative National Park Resources Studies Unit, University of Hawaii at Manoa, Department of Botany. PCSU Technical Report, 88.
Series/Report no.: Technical Report
Abstract: Since 1975 fire frequency has increased sharply in the coastal lowlands of Hawai'i Volcanoes National Park. This was due largely to increases in grass biomass following the removal of feral goats and/or the spread of fire-tolerant species. Fire effects were studied in 13 sites within five lava or lightning caused burns occurring between 1985 and 1989. The study sites were located in five major plant communities and two ecotones. Grasslands characterized the vegetation of the central coastal lowland sites, and lowland scrub with native shrub overstory and alien grass understory characterized the sites in the eastern lowlands. All of the coastal lowlands were severely impacted by Polynesian cultivation and burning practices, nineteenth century cattle grazing, and 150 years of feral goat browsing and grazing. Cover was determined by point-intercept methods along unreplicated transects established prior to the fire or by replicated burned and unburned pairs of transects. Density of shrubs was determined in plots along the paired transects. Frequency of resprouting of woody plants was determined by monitoring individual plants for one year. The results differed from those observed by Hughes et al. (1991) and Smith and Parman (1981) in the lower submontane seasonal zone. Alien grass cover did not increase in most sites, and total native cover usually increased or remained the same. In the eastern coastal lowlands, fire characteristically stimulated the spread of the native subshrub Waltheria indica, bunchgrass Heteropogon contortus, and shrubs Dodonaea viscosa and Osteomeles anthyllidifolia. Fire however depleted the tall native shrub component by nearly eliminating Wikstroemia sandwicensis. Waltheria and Heteropogon generally were also stimulated in the central lowlands but not sufficiently to increase total native plant cover in most sites. These findings lead to the conclusions that the Park's policy of total fire suppression should continue to protect the native shrub component of the rare Wikstroemia shrubland, allow natural recolonization of the central grasslands by native trees and shrubs, and prevent the spread of the disruptive fire-stimulated Melinis and Hyparrhenia. However, a judicious use of prescribed burning, on an experimental basis, may be useful in establishing fuel breaks and stimulating the recovery of native species such as Heteropogon and Dodonaea.
Description: Reports were scanned in black and white at a resolution of 600 dots per inch and were converted to text using Adobe Paper Capture Plug-in.
Sponsor: National Park Service Cooperative Agreement CA 8007 2 9004
Appears in Collections: The PCSU and HPI-CESU Technical Reports 1974 - current
<urn:uuid:49c13931-4367-438a-ad1c-bef97b093ae3>
3.03125
859
Academic Writing
Science & Tech.
26.54157
95,617,294
A Way to Sequence DNA of Rare Animals Has Been Discovered Credit: Eddy Perez, LSU Rare and extinct animals are preserved in jars of alcohol in natural history museum collections around the world, which provide a wealth of information on the changing biodiversity of the planet. These preserved specimens of snakes, lizards, frogs, fish and other animals can last up to 500 years when processed in a chemical called formalin. While formalin helps preserve the specimen making it rigid and durable, it poses a challenge to extracting and sequencing DNA. DNA degrades and splits into small fragments over time. This fragmented DNA is difficult to amplify into long informative stretches of DNA that can be used to examine evolutionary relationships among species when using older DNA sequencing technology. Therefore, scientists have not been able to effectively sequence DNA from these specimens until now. LSU Museum of Natural Science Curator and Professor Christopher Austin and his collaborator Rutgers-Newark Assistant Professor Sara Ruane developed a protocol and tested a method for DNA sequencing thousands of genes from these intractable snake specimens. Their research was published today in the international scientific journal Molecular Ecology Resources. “Natural history museums are repositories for extinct species. Unfortunately, naturalists in the 1800s were not collecting specimens for analyses we conduct today such as DNA sequencing. Now with these new methods, we can get the DNA from these very old specimens and sequence extinct species like the Ivory Billed Woodpecker, the Tasmanian Wolf and the Dodo Bird,” Austin said. Austin and Ruane found and tested an approach that includes taking a small piece of liver tissue from the snake specimen, heating it up over a longer period of time and applying an enzyme that digests the tissue sample and enables the DNA to be extracted. Their minimally invasive protocol preserves the specimen so additional information can be collected from the specimen in the future. It also includes applying the latest technology to chemically sequence the specimens’ DNA. “A genome is a complex jigsaw puzzle broken up into hundreds of millions of small pieces. We can sequence those pieces and computationally put them back together,” Austin said. They extracted and sequenced the DNA of 13 historic or rare snake specimens from all over the world many of which had never been analyzed using modern genetic methods. Some of the specimens were more than 100 years old. They also integrated these data with modern samples to create a genetic family tree, or phylogeny, that maps the evolutionary relationships of various snake species. This work resulted in thousands of genetic markers for snake specimens collected as far back as the early 1900s. “The exciting thing about this work is that it makes species that have been essentially lost to science, due to extirpation, rarity or general secretiveness, which applies to many animals and not just snakes, available for scientific research in the modern age of genomics,” Ruane said. “We also believe this research will benefit scientists working with rare animals that are either hard to collect or extinct but are represented in fluid-preserved historical collections. It also underscores the continued importance of museum collections in modern science,” Austin said. This article has been republished from materials provided by Louisiana State University. Note: material may have been edited for length and content.
For further information, please contact the cited source.
<urn:uuid:8113c010-fe03-4545-9fbe-a3e374adce74>
3.9375
857
News Article
Science & Tech.
23.242241
95,617,296
Radiocarbon dating is a technique that utilizes the decay of carbon-14 to estimate the age of organic materials. This method is based on the principle that carbon exists in more than one isotopic form, including radioactive carbon-14 and stable carbon-12. Through photosynthesis, plants absorb both forms of carbon from carbon dioxide in the atmosphere. When an organism dies it contains a ratio of carbon-14 to carbon-12. As the carbon-14 decays it cannot be replenished, and the ratio decreases at a regular rate set by its half-life. The measurement of carbon-14 decay therefore provides a measurement of the age of the carbon-based material. There are fluctuations of carbon-14 and carbon-12 in the atmosphere over periods of time. Scientists use sequences of tree rings and cave deposits to fine-tune and calibrate radiocarbon dating of materials. The process of radioactive decay gradually decreases the fraction of the carbon-14 isotope relative to the other two isotopes of carbon. The half-life of carbon-14 is 5730 ± 40 years. The equation of radioactive decay of carbon-14 is N(t) = N₀ e^(−λt), where N₀ is the initial amount of carbon-14 and the decay constant λ equals ln 2 divided by the half-life. Radiocarbon dating was developed by Willard Libby at the University of Chicago in 1949. Libby estimated that the steady state radioactivity concentration of exchangeable carbon-14 would be about 14 disintegrations per minute per gram. He won the Nobel Prize in Chemistry in 1960 for his work.
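A small worked example of the decay law quoted above; only the 5730-year half-life comes from the text, and the measured fractions below are illustrative values:

# Standard radioactive-decay relation N(t) = N0 * exp(-lambda * t),
# with lambda = ln(2) / t_half. Only the half-life (5730 years) is taken
# from the text; the fractions passed in below are made-up examples.

import math

T_HALF = 5730.0                     # years

def age_from_fraction(fraction_remaining):
    """Age in years, given the remaining C-14 fraction relative to a living sample."""
    decay_constant = math.log(2) / T_HALF
    return -math.log(fraction_remaining) / decay_constant

print(round(age_from_fraction(0.5)))    # 5730  -- one half-life
print(round(age_from_fraction(0.25)))   # 11460 -- two half-lives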
<urn:uuid:d7178c11-015d-44d0-af53-950dccadf865>
3.875
295
Knowledge Article
Science & Tech.
48.362772
95,617,309
capture() — finds and tabulates sub-strings which match the target regular expression.
Returns: Array — the captured sub-strings, each in tabular form.
Parameters:
- A string against which the regular expression is matched.
- An optional character index within the string at which to start matching.
This method compares the target regular expression against the characters comprising the passed string and returns an array of tables, each of which indicates the location of a sub-string whose characters match the regular expression. Each table contains just two keys, begin and end, whose values are the indexes delimiting the sub-string within the main string. The array will contain as many tables as there are matches. If there are no matches, capture() returns null, not an empty array. A second, optional parameter may be passed to the method: the index within the passed string at which to begin the search for pattern matches. If this parameter is not provided, the search commences at the start of the string. Compare this method with search(), which returns only the first of the matched sub-strings found within the source string.
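The host language of this reference page is not named in the excerpt, so the sketch below is a Python analogue (using the standard re module) of the documented behaviour — begin/end index pairs for every match, or a null-like value when there are none — rather than the API itself:

# Python analogue of the documented capture() behaviour -- not the real API.
# For each sub-string matching the pattern, report its begin/end indexes
# within the full string; return None (the analogue of null) when there are
# no matches at all. The optional start argument mirrors the second parameter.

import re

def capture(pattern, text, start=0):
    matches = [{"begin": m.start(), "end": m.end()}
               for m in re.compile(pattern).finditer(text, start)]
    return matches if matches else None

print(capture(r"\d+", "abc 12 de 345"))   # [{'begin': 4, 'end': 6}, {'begin': 10, 'end': 13}]
print(capture(r"\d+", "no digits here"))  # None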
<urn:uuid:22a2123c-823a-40aa-b0b1-6a3f591b8ef9>
2.96875
217
Documentation
Software Dev.
44.248297
95,617,310
Using Star Charts and Measuring Distance
How to find objects in the sky using celestial coordinates. What are arc minutes and arc seconds? How big is the Plough compared to the Square of Pegasus?
- Tips for Getting Started in Astronomy
- Dark Eye Adaption - How We See In the Dark
- Light Pollution
- Using Star Charts and Measuring Distance
- Constellation Guide
- Binocular Astronomy
- Moon Watching - How to Observe the Moon
- Buying Your First Telescope
- Your First Night With Your First Telescope
- Sky Orientation through a Telescope
- Polar Alignment of an Equatorial Telescope Mount
- Useful Astronomy Filters for Astrophotography
If you are looking for a particular object in the sky, chances are you have a set of celestial coordinates measured in Right Ascension and Declination. For this example, we'll use the star Deneb, which can be located using the coordinates 20h 41m 25.9s Right Ascension, +45° 16' 49" Declination. Clearly this is not as easy to understand as most coordinate systems, so let's have a closer look at what all these numbers mean. Right Ascension, measured in time, is the projection of longitude onto the celestial sphere. Zero hours starts at the First Point of Aries (the point in the sky at which the Celestial Meridian, the Celestial Equator and the Ecliptic meet) and measures the full 360° of the celestial sphere. One hour of Right Ascension describes the movement of the sky due to the Earth's spin over an hour and is equal to 15° (15° * 24 hours = 360°). As with time, an hour of Right Ascension is divided into minutes, which are further divided into seconds. These all likewise describe the movement of the sky over the specified time. Declination is akin to the measurement of latitude projected into the sky. Zero degrees of declination represents the celestial equator (a projection of Earth's equator), with +90° representing the North Pole and -90° declination the South Pole. A single degree of declination is further divided into 60 arcminutes, and each arcminute into arcseconds for greater precision.
Using Star Charts
With the help of star charts, you'll be able to find a multitude of objects under a dark sky. It may look confusing at first, with the directions being backwards - West is on the right and East on the left. Unlike most maps, star charts show what is above you, not beneath you. If you hold the chart above your head with North pointing North you'll find East and West are pointing correctly. To use it to look at things in different directions, hold it so that the bottom edge corresponds to the compass direction you are facing. It will then show the sky as it looks from that horizon and on up over your head. If you don't know which direction is which, use the Plough to find Polaris. Once you find North, the rest will follow. It's best to use a red LED torch to see the chart in the dark - that way it won't ruin your night vision. Distances between objects in the night sky are measured in angles using degrees of arc, a bit like the angles of latitude and longitude on the Earth's globe. One degree is equal to 1/360th of a circle. Hands and fingers are very useful for getting to grips with sizes in the sky. Held out at arm's length, the width of your little finger is around 1° and the width of your thumb about 2°. Three fingers are about 5° and the width of your fist is around 10°.
If you want to find a dim object in the sky and a star chart shows that it is about 15° in a certain direction from a brighter, known star, then you can use your outstretched hand as a ruler to measure off the distance on the night sky. Between the two Pointer stars of the Plough, for example, the distance is about 5°.
Watch the Sky Move
The night sky changes in appearance over time. The stars keep the same positions relative to one another but seem to move as a whole. This is because the Earth spins on its axis every 24 hours, so the sky appears to turn 15° to the west every hour. The Earth orbits the Sun in about 365 days, resulting in stars being in slightly different positions at the same time each night. If you stand in one spot at 9 pm and see a star rising above a rooftop to the East, then look again at 10 pm, that star will be 15° higher. Fourteen days later it would be close to that position at 9 pm, as the sky will seem to have moved 1° west each night. Observing this change will give you a good sense of the motion of our planet and how it affects our view of the heavens.
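A small helper (an illustration, not part of the article) that applies the conversions described above: Right Ascension in hours, minutes and seconds to degrees at 15° per hour, plus the 15°-per-hour westward drift of the sky:

# Applies the conversions explained in the article. The Deneb coordinates are
# the ones quoted in the text; everything else is straightforward arithmetic.

def ra_to_degrees(hours, minutes, seconds):
    """Right Ascension to degrees: 1 hour = 15 deg, so minutes and seconds scale likewise."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

def sky_drift_degrees(hours_elapsed):
    """Apparent westward rotation of the sky: 15 degrees per hour."""
    return 15.0 * hours_elapsed

print(round(ra_to_degrees(20, 41, 25.9), 3))   # Deneb: ~310.358 degrees
print(sky_drift_degrees(1.0))                   # 15.0 degrees in one hour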
<urn:uuid:b7860434-a42c-44d8-bc90-6b1e40774134>
3.671875
1,097
Tutorial
Science & Tech.
60.359031
95,617,313
For more than 40 years, Landsat satellites have collected millions of images of this region and others worldwide. And as Landsat 8 begins its new mission, collecting more than 400 images per day, scientists are anticipating what the program’s trove of images will reveal about Earth’s surface. Landsat satellites have captured hundreds of images of the region surrounding El Paso, Texas. On May 30, 2013, Landsat 8 began adding to the program’s extensive image archive. Credit: Robert Simmon, NASA's Earth Observatory, using data from USGS and NASA “These are scientific data, as much as they’re beautiful images,” said Doug Morton, a physical scientist at NASA's Goddard Space Flight Center in Greenbelt, Md., who uses Landsat data to study changes in forest ecosystems over time. Orbiting 438 miles above Earth, Landsat satellites collect visible and infrared light reflected from the surface and thermal infrared light emitted by the surface. The different wavelengths can provide information not just about visible elements of the land cover, but also about the health of vegetation, water use and more. Transparencies and crayons When the first Landsat satellite, originally called the Earth Resources Technology Satellite, or ERTS, started orbiting in 1972, it was no small feat to visualize the data and conduct research. “When ERTS was first launched, there was one cathode ray tube in the country that could take in the digital data and display an image,” said Jeff Masek, Landsat project scientist at Goddard. In the early years, satellite observations of the light reflected off of Earth were transmitted down to receiving stations and mailed to processing centers. Computers translated the image data into photographic prints or transparencies that could be placed on light tables for interpretation. Alternatively, computers translated the numbers in each pixel into alpha-numeric symbols that were printed on large reams of paper. Analysts, often graduate students, could then color-in the symbols with crayon or magic markers. Standing on ladders over the colored-in data, they’d try to visualize the landscape represented by the maps. “Things were pretty primitive in those days,” Masek said. “People say, ‘Well why didn’t they produce a global land cover map in those first few years?’ They were lucky to be able to look at one image for a Ph.D. dissertation.” The process is significantly quicker with Landsat 8, a joint mission between NASA and the U.S. Geological Survey, or USGS, which took operational control of the satellite May 30. NASA and USGS scientists have been working for months to properly calibrate each of the thousands of detectors on Landsat 8, so scientists using the data can access images that are accurate and consistent with previous Landsat missions. The latest satellite will collect at least 400 images per day, and the USGS facility in Sioux Falls, S.D., will receive, process and archive those scenes within 24 hours. Scientists, analysts and the general public will have access to these images within hours, as the USGS distributes requested digital images over the internet for free. Millions of scenes, free to explore Since 2008, all Landsat digital images in the USGS archive have been freely available online. Scientists, land managers, planners and others have downloaded scenes more than 11 million times. Researchers have been studying scientific information from Landsat for decades now, Morton said. 
Landsat data are the backbone for a range of research and operational programs, including monitoring agricultural productivity, water use and forest fire damages in the United States. But opening the Landsat archive, combined with modern computing power, allows scientists to take a wider view than just one scene, and to look over time. “The ability to use every image in the archive has revolutionized the way we do science, conduct business and share information worldwide,” Morton said. “Instead of picking an anniversary date – like the middle of summer – and looking at that date over as many years as there are cloud-free data available, we’re starting to mine every pixel in the archive.” Due to cost and computing restrictions, a scientist studying deforestation, for example, previously might have only used two images – one before and one after – to see the extent of logging. Now, with free images and powerful computers, researchers can assemble dozens of monthly or seasonal images to examine the ebb or flow of cutting and regrowth. “There’s been a proliferation of very large-scale applications of the scientific analysis we know how to do,” Morton said. “The range of Landsat data uses continues to grow, and our planet is constantly changing. That’s the exciting part about Landsat 8 – the continuity of observations and the next chapter in the story of our changing planet.” While the methods for analyzing data have vastly improved in the last 40 years, the reasons for collecting data have remained constant, said Bruce Cook, Landsat deputy project scientist at Goddard. “It’s still inventorying natural resources, still looking at their utilization,” Cook said. “It’s still looking at the interactions between people and their environment. But now you’re talking about a world population that has doubled, almost, since the first Landsat. So there’s even more pressure on our natural resources, and more reason to be studying these things.” Building a complete record The USGS Sioux Falls Landsat archive already stores more than 3 million scenes. But it could take millions of images -- and billions of pixels -- to get a complete record of issues like how ecosystems are responding to a warming world, Masek said. “There are a bunch of interesting ecological questions related to climate change, which Landsat could be applied to,” he said. “But that takes a lot of data. You don’t get a reliable answer if you just look at a couple of images. You really have to look at a deep-time series.” He and Morton have studied a strip of land in Quebec, Canada, to measure shrub density, for example, and found that some areas are getting greener as more plants sprout in areas that were previously too cold. Others have used Landsat to scour Siberia and study how permafrost thaws affect the number of lakes, Masek said. “I’m very interested in, now that we have a 40-, going on 50-year record, the signs of climate change in ecosystems around the world,” he said. Researchers can also make full use of all of the different wavelengths, Morton said. Natural color images like the El Paso animation are created from wavelengths within the range of human vision. But Landsat 8’s Operational Land Imager detects a total of nine spectral bands, including four infrared wavelengths beyond what people can see. Scientists use different combinations of bands to measure the health of the vegetation on the ground, watch young forests grow and perhaps change in species composition. 
Landsat data are also used to study the atmosphere: for example, a time series can help researchers study the reduction of sulfur dioxide pollution over Pennsylvania and the eastern United States after the shuttering of Midwestern coal plants. “A scientist, looking at these images, sees more than initially meets the eye,” Morton said. “The information in different Landsat bands allows scientists to quantify subtle changes over time—the response of a forest to droughts, the change in reflectance as a forest canopy grows taller and more variable over time. We see that ecosystems are changing all the time, and Landsat data captures these changes like no other program in the world.” Over Landsat’s 40 years, scientists have made significant strides in understanding, evaluating and interpreting the data that comes back from the program’s satellites, Morton said. With the new computer power and the continuously growing archive, scientists can take a closer, more informed look at the planet. “History is made every day,” he said. “You better have an image to capture it.” For more information about Landsat, visit: Kate Ramsayer | EurekAlert!
<urn:uuid:9d855621-071e-410e-9750-4cebaa20153b>
4.03125
2,358
Content Listing
Science & Tech.
40.77759
95,617,317
They may look like stained-glass windows but these dazzling images reveal the kaleidoscope of colours found inside METEORITES thought to date back 4.5 billion years
- The images were created using a standard SLR camera, a microscope, a filter, a spectrograph and simple software
- Texan photographer, Jeff Barton, taught himself how to identify minerals within the meteorite in thin sections
- Variations in filter lenses altered the polarisation of light passing through the microscope to create the colours
These stunning images may look like beautiful, psychedelic prints, but they are in fact photographs of the colourful interiors of meteorites. The space rocks are usually known for their dull exterior, but scientist Jeff Barton has shown the dazzling world beneath the grey surface. He has photographed hundreds of 4.5 billion-year-old meteorites by using a standard SLR camera and a powerful microscope to zoom in on fragments of the rocks. Scientist Jeff Barton has created a set of stunning saturated images that look like stained-glass windows, but are actually photographs of wafer thin slices of meteorites. The space rocks are usually known for their dull appearance, but Mr Barton, who is 66 and from Texas, has shown the dazzling world beneath the grey surface. The above image shows loose crystal grains in dark matrix from a thin section of Vaca Muerta meteorite. The horizontal dimension of this field of view is about 0.085 inches. When viewed from a close-up angle, the crystals inside the 0.5 by 0.9 inch slivers appear as a myriad of colours. Mr Barton captures the kaleidoscope of patterns by attaching a polarising filter to his camera and affixing this to his microscope. He purchases the miniature pieces of meteorites from collectors, who painstakingly spend hours cutting, grinding, and polishing them until they are thin enough to use. Mr Barton has photographed hundreds of thin fragments of meteorites by using a standard digital SLR camera attached to a petrographic microscope. Two polarising filters are required to create a clear image and enhance the colours from the light shining through the slice from underneath. He purchases the miniature pieces of meteorites from collectors, who painstakingly spend hours cutting, grinding, and polishing them until they are thin enough to use. At Vaca Muerta in Chile, many fragments of meteorites were found, including this thin section of a Vaca Muerta meteorite. The left half of the slide, along a diagonal arc from upper left to lower right, is part of a large chondrule that has been melted. Chondrules are small, rounded particles embedded in most stony meteorites. 'I began studying thin sections of rocks and meteorites in 2004,' said Mr Barton, 66, from Texas, USA. 'I taught myself how to identify minerals in thin sections by measuring index of refraction with a microscope. 'Now I use a spectrograph and computer software, and I started taking photographs of the colourful bits in the meteorites. 'Variations in optical glass and in mounting lenses can alter the polarisation of light passing through the lenses. 'Making the photographs takes a matter of seconds to minutes, depending on how thoroughly you want to document the section and what you are trying to learn from it.'
Mr Barton used variations in optical glass and in mounting lenses to alter the polarisation of light passing through the microscope, creating these dazzling colours. Most meteorites, such as the one shown, are fragments of asteroids which have fallen from the asteroid belt between Mars and Jupiter. The biggest meteorite that has been found on Earth weighs 60 tons and plummeted from space 80,000 years ago. 'I taught myself how to identify minerals in thin sections by measuring index of refraction with a microscope,' said Mr Barton. 'Now I use a spectrograph and computer software, and I started taking photographs of the colourful bits in the meteorites.' All the meteorites and thin sections were acquired from other collectors, who spend between two and 20 hours making them, depending on the type of material. 'The reactions I get are mostly along the lines of "What is that?" but I quite often get "Wow, those are pretty"', said Mr Barton. Most meteorites are fragments of asteroids which have fallen from the asteroid belt between Mars and Jupiter. This volume of space contains more than 1 million objects ranging in size from 1m to 800km. The biggest meteorite that has been found on Earth weighs 60 tons and plummeted from space 80,000 years ago. It landed in the Otjozondjupa Region of Namibia in Africa and was discovered by the owner of the land in 1920 while he was ploughing. A piece of a Vaca Muerta meteorite; these meteorites are thought to have cooled by about one-half degree per million years. This means they must have been buried deep within a larger piece of rock. This would require an object (like an asteroid) to be between 400km and 800km in diameter. A pair of chondrules from the Allende Meteorite, which fell in 1969 at Pueblito de Allende, Mexico. Chondrules form as molten or partially molten droplets in space before being accreted to their parent asteroids. They represent one of the oldest solid materials within our solar system.
<urn:uuid:b10d1cf7-6ffc-4e2c-b68d-5879b51704e8>
3.1875
1,308
Truncated
Science & Tech.
37.057272
95,617,324
Access a range of climate-related reports issued by government agencies and scientific organizations. Browse the reports listed below, or filter by scope, content, or focus in the boxes above. To expand your results, click the Clear Filters link. The goal of this concerted effort is to help Thurston County (Washington) and the broader South Puget Sound region prepare for and adjust to climate change. The Thurston Regional Planning Council crafted this document with a $250,000 National Estuary Program grant from the U.S. Environmental Protection Agency and significant in-kind support from the community. Partners included representatives from tribes, municipalities, universities, nonprofits, businesses, and other entities within the project area: three geographically diverse watersheds (Nisqually, Deschutes, and Kennedy-Goldsborough) within Thurston County that drain into Puget Sound. The watersheds encompass beaches, rivers, lakes, wetlands, highlands, forests, farms, ranches, cities, towns, and tribal reservations. It is the Council's hope that other communities throughout the Puget Sound region, state, and nation will replicate this project’s science-based assessments, innovative public-engagement efforts (including development of a resilience game), collaborative planning processes, economic analyses, and comprehensive actions. The city of Cambridge, Massachusetts, is developing a Climate Change Preparedness and Resilience Plan as a practical guide to implement specific strategies in response to climate change threats (heat, flooding from precipitation, flooding from sea level rise and storm surge). The Alewife Preparedness Plan—the first neighborhood plan to be developed—will test how the proposed strategies might create a new framework for resiliency in Alewife. It comprises two parts: a Report and a Handbook. The Report provides the context, framework, and strategies to create a prepared and resilient Alewife neighborhood; the Handbook, a companion document, is a practical compendium of specific preparedness and resiliency strategies and best practices. A coalition of 26 businesses, environmental organizations, community groups, and universities in the Detroit area has produced the “Detroit Climate Action Plan.” The proposition intends to address public health and environmental justice issues through a plan that individuals and businesses can practice. The 77-page report contains 20 major goals for the coming years, including calls for the reduction of greenhouse gas emissions by Detroit businesses by 10 percent in the next 5 years, and 80 percent by 2050. Additionally, the plan recommends improvements to the energy efficiency and durability of homes, better stormwater runoff management, expanded use of renewable energy, and broadened recycling and organic waste collection by 2022. The Tampa Bay region is known as one of the most vulnerable in the world to wind damage, coastal flooding from storm surge, and rising sea levels. The City of St. Petersburg—with over 60 miles of coastal frontage—has already felt the impacts of storms. The adverse effects from these types of environmental events often impact low-income communities the hardest, as they have the most difficulty bouncing back from stresses and shocks. The City of St. Petersburg is committed to ensuring that investments in making the city resilient are equitable and create a range of opportunities that everyone can benefit from. 
The Urban Land Institute of Tampa Bay convened top experts in climate resilience from New Orleans, Miami, Boston, and the Tampa Bay region to provide technical assistance to the city on creating an equitable culture of resilience. A grant from the ULI Foundation and Kresge Foundation funded this effort. This plan sets forth the 2017 federal policy platform of the Mississippi River Cities and Towns Initiative, an association of 75 U.S. mayors along the Mississippi River. The document sets forth the mayors’ recommendation of federal programs to support and strengthen the built and natural infrastructure of the Mississippi River corridor, proposing specific funding levels and support of several federal programs. Suggestions for finance mechanisms to restore Mississippi River infrastructure are also included. This guidebook results from the culmination of a year of dialogue among diverse stakeholders in southeastern Connecticut who defined challenges and solutions from extreme weather, climate change, and shifting social and economic conditions. Participants included representatives from nine municipalities, public and private utilities, public health departments, chambers of commerce, major employers, conservation organizations, academic institutions, community non-profits, and state agencies, among others. The dialogue captured six themed planning sectors (water, food, ecosystem services, transportation, energy, and regional economy) in a process that used surface and integrated solutions to address singular and multiple challenges across planning sectors. The guidebook provides a quick reference resource to help shape and inform actions that will advance a regional resilience framework for southeastern Connecticut; an accompanying Summary of Findings captures the project's final outcomes and conclusions, as well as providing a comprehensive account of the objectives, process, and details. This user-friendly summary is based on the 2015 report “City of Long Beach Climate Resiliency Assessment Report" and “Appendices” prepared by the Aquarium of the Pacific at the request of Mayor Robert Garcia. The report includes clear infographics that describe current and projected conditions in the city. It also describe what the city is currently doing and what else the city and its residents can do. This draft Regional Action Plan in support of NOAA Fisheries Climate Science Strategy helps communicate a regional vision for climate-related science in the South Atlantic, providing a framework for scientists and managers to prioritize and accomplish research on climate-related impacts to marine and coastal ecosystems. It promotes scientists working with partners and the management community to construct management approaches that ensure the development of science-based strategies to sustain marine resources and resource-dependent coastal communities in a changing climate. Highlights include establishing a NOAA Fisheries South Atlantic Climate Science Team, expanding scientific expertise and partnerships, conducting vulnerability assessments for South Atlantic species, and drafting a South Atlantic Ecosystem Status Report. The draft was available for public comment through March 24, 2017; the Plan will be finalized in summer 2017. This report summarizes findings from a workshop held in El Paso, Texas, on July 13, 2016. 
The El Paso-Juárez-Las Cruces region is home to approximately 2.4 million people, most of whom are living in or near the urban centers of Ciudad Juárez (Chihuahua), El Paso, and Las Cruces (New Mexico). These cities share characteristics, such as a high proportion of residents of Hispanic origin, median income below the U.S. national average, and a range of climate-related environmental issues that include drought, flooding, air pollution, dust storms, and frequent occurrences of extremely high temperatures during the late spring and early summer. With hotter temperatures and more frequent and persistent heat waves projected for the El Paso-Juárez-Las Cruces region, it is critical to develop more robust systems of institutions, social learning, and partnerships to understand risks and strengthen public health resilience. The Northeast Regional Action Plan was developed to increase the production, delivery, and use of climate-related information to fulfill the NOAA Fisheries mission in the region, and identifies priority needs and specific actions to implement the NOAA Fisheries Climate Science Strategy in the Northeast over the next three to five years. The U.S. Northeast Shelf Large Marine Ecosystem supports a number of economically important fisheries and a wide variety of other important marine and coastal species, from river herring to marine mammals and sea turtles. The region has experienced rising ocean temperatures over the past several decades, along with shifts in the distribution of many fish stocks poleward or deeper. Other expected climate-related changes include sea level rise, decreasing pH (acidification), and changing circulation patterns that could impact marine resources, their habitats, and the people, businesses, and communities that depend on them. The Gulf of Mexico Regional Action Plan was developed to increase the production, delivery, and use of climate-related information to fulfill the NOAA Fisheries mission in the region, and identifies priority needs and specific actions to implement the NOAA Fisheries Climate Science Strategy in the Gulf of Mexico over the next three to five years. The Gulf contains a diverse range of habitats, including unique coral systems atop salt domes, high relief carbonate banks, and shallow coastal ecosystems that support a variety of commercially and recreationally important marine fish and shellfish. Understanding how the major climate drivers affect the distribution and abundance of marine species, their habitats, and their prey is important to effective management. Climate-related factors expected to impact the Gulf of Mexico include warming ocean temperatures, sea level rise, and ocean acidification. The Pacific Islands Regional Action Plan was developed to increase the production, delivery, and use of climate-related information to fulfill the NOAA Fisheries mission in the region, and identifies priority needs and specific actions to implement the NOAA Fisheries Climate Science Strategy in the Pacific Islands over the next three to five years. The Pacific Islands Region spans a large geographic area including the North and South Pacific subtropical gyres and the archipelagic waters of Hawaii, American Samoa, Guam, the Commonwealth of the Northern Marianas Islands, and the U.S. Pacific remote island areas. The Pacific Islands region supports a wide variety of ecologically and economically important species and habitats, from coral reefs to pelagic fish stocks. 
Climate-related changes in the region include a rise in ocean temperatures, reduced nutrients in the euphotic zone, an increase in ocean acidity, a rise in sea level, and changes in ocean currents. Many of these changes have already been observed and are projected to increase further. These changes will directly and indirectly impact insular and pelagic ecosystems and the communities that depend upon them. Emeryville is the first city in California's Bay Area to update its Climate Action Plan and align its greenhouse gas (GHG) emissions targets with the State of California’s climate targets. This Climate Action Plan 2.0 includes updates to Emeryville’s 2008 Climate Action Plan, looking towards state targets for reducing 40 percent below baseline levels of GHG emissions by 2030 and 80 percent below baseline levels by 2050. The CAP 2.0 meets the compliance for the Global Covenant of Mayors, a platform for standardizing climate change action planning for local city governments and demonstrating local commitment to climate change mitigation and adaptation. The plan contains GHG targets, updated GHG community and municipal inventories, business-as-usual GHG forecast, deep decarbonization vision for 2050, adaptation and mitigation action plans, and a monitoring plan. With 17 mitigation goals, five adaptation goals, over 100 combined initiatives for 2030, and five long-term strategies for 2050, this CAP 2.0 represents a strong step in reducing emissions and building climate resilience. The Western Regional Action Plan was developed to increase the production, delivery, and use of climate-related information to fulfill the NOAA Fisheries mission in the region, and identifies priority needs and specific actions to implement the NOAA Fisheries Climate Science Strategy in the West over the next three to five years. The California Current Large Marine Ecosystem (CCLME) spans the entire west coast of the continental U.S. and has significant seasonal, inter-annual, and inter-decadal fluctuations in climate that impact the marine food-web and fisheries. The CCLME is highly important economically and ecologically. Commercial and recreational fisheries in the CCLME contribute significantly to the U.S. economy, and a host of fish, bird, and mammal species depend on the productive waters and lipid-rich food web of the CCLME for their annual feeding migrations. Migrant species include several million metric tons of hake and sardine from the waters off southern California, several hundred million juvenile salmon from U.S. West Coast rivers, millions of seabirds from as far as New Zealand (sooty shearwaters) and Hawaii (Laysan and black-footed albatrosses), and tens of thousands of grey whales from Baja California and humpback whales from the Eastern North Pacific. These feeding migrations allow species to load up on energy reserves as an aid to survival during their winter months in southern extremes of their distribution. Climate-related physical processes that disrupt the CCLME ecosystem may result in negative impacts to U.S. fisheries, migrant species, and the people and communities that depend on these living marine resources. As climate changes and ocean temperatures rise, the abundance, distribution, and life cycles of fish in federally managed ocean fisheries may change too. Federal agencies managing ocean fisheries have limited information to determine exactly how climate change might harm specific fish populations, and may not always understand the potential effects. 
To better manage climate-related risks, the report recommends (1) the development of guidance on how to incorporate climate information into the fisheries management process, and (2) finalizing Regional Action Plans for implementing the NOAA Fisheries Climate Science Strategy that incorporate performance measures for tracking achievement of the Strategy’s Objectives. This annual report details the progress made in reducing costs and ramping up deployments of clean energy technologies. In particular, the report highlights the progress of five clean energy technologies: wind turbines, solar technologies for both utility-scale and distributed photovoltaic (PV), electric vehicles, and light-emitting diodes (LEDs). The report also highlights emerging technologies that the Department of Energy believes have the potential to transform our energy sector over the next five to ten years. These include fuel-efficient technologies for heavy trucks, smart building controls, and vehicle lightweighting. Along with updates in these areas, the report also highlights the accomplishments and potential of fuel cells, industrial energy management, grid-scale batteries, and big area additive manufacturing. The Alaska Regional Action Plan (ARAP) for the Southeastern Bering Sea was developed to increase the production, delivery, and use of climate-related information to fulfill the NOAA Fisheries mission in the region, and identifies priority needs and specific actions to implement the NOAA Fisheries Climate Science Strategy in the region over the next three to five years. NOAA’s Alaska Fisheries Science Center is responsible for marine resources in five large marine ecosystems—the southeastern Bering Sea, the Gulf of Alaska, the Aleutian Islands, the northern Bering and Chukchi seas, and the Beaufort Sea. The first ARAP focuses on the southeastern Bering Sea because it supports large marine mammal and bird populations and some of the most profitable and sustainable commercial fisheries in the United States. Climate-related changes in ocean and coastal ecosystems are already impacting the fish, seabirds, and marine mammals as well as the people, businesses, and communities that depend on these living marine resources. Blacksburg's Climate Action Plan represents both a short- and long-term set of strategies to pursue to reach the community’s energy and climate action goals. The long-range goal, established by Town Council in 2007, is to reduce community-wide greenhouse gas emissions by 80 percent below 1990 levels by 2050. Blacksburg’s Climate Action Plan is divided into six chapters covering the major sectors of the community responsible for Blacksburg’s greenhouse gas emissions. Citizens' priority strategies are reflected in each of the sector chapters in three ways: a set of “Individual Actions” that citizens can choose to adopt in their own lives, shorter time-horizon “Let’s Get Started” strategies, and longer-term “Looking Ahead” strategies. Shaktoolik, a community on the eastern edge of Norton Sound in Alaska, faces considerable threats from erosion and flooding. The community decided to take a “defend in place” approach to erosion, allowing residents to remain at the current village site for the immediate future, although residents have indicated that they are interested in eventually relocating. This Strategic Management Plan provides the “blueprint” or framework for how the community and agencies will proceed to make Shaktoolik a more resilient community and to support their “defend in place” efforts. 
President Obama issued this Memorandum and Action Plan on building long-term drought resilience under his Climate Action Plan. The document elucidates the role of the National Drought Resilience Partnership, a team of federal agencies, in helping communities manage the impact of drought by linking information—such as forecasts and early warnings—with drought preparedness strategies in critical sectors like agriculture, municipal water systems, tourism, and transportation. On Earth Day 2015, Connecticut Governor Malloy issued Executive Order 46 creating the Governor’s Council on Climate Change, also known as the GC3. The Council is to examine the effectiveness of existing policies and regulations designed to reduce greenhouse gas emissions and identify new strategies to meet the state’s greenhouse gas emissions reduction target of 80 percent below 2001 levels by 2050. It will do so, in part, by developing interim state-wide greenhouse gas reduction targets for years between 2020 and 2050 and by identifying short- and long-term statewide strategies to achieve the necessary reductions. In January 2015, Long Beach Mayor Robert Garcia asked the Aquarium of the Pacific to take a lead in assessing the primary threats that climate change poses to Long Beach, to identify the most vulnerable neighborhoods and segments of the population, and to identify and provide a preliminary assessment of options to reduce those vulnerabilities. Over the course of 2015, the Aquarium hosted and participated in meetings and workshops with academic and government scientists, business and government leaders, local stakeholders, and Long Beach residents to discuss key issues facing our community as the result of climate change. This report, completed in December 2015, represents the culmination of these efforts. The report offers detailed assessments of the five main threats of climate change to Long Beach: drought, extreme heat, sea level rise and coastal flooding, deteriorating air quality, and public health and social vulnerability. It also provides an overview of what is currently being done to mitigate and adapt to these threats, and other options to consider. Finally, this report presents a series of steps and actions that city leaders and community stakeholders can use as a template for making Long Beach a model of a climate resilient city. This technical report focuses on sharing the collective efforts of the Inuit Circumpolar Council-Alaska, 146 Inuit contributing authors, a 12-member Food Security Advisory Committee, and many other Inuit who provided input and guidance to the process. The report aspires to strengthen the evidence base of (1) what Inuit food security is, (2) what the drivers of food (in)security are, and (3) identify information needed to conduct an assessment through the development of a conceptual framework. The assessment tool is designed to build the baseline of information needed to understand the Arctic environment and allow a pathway for assessments (food security, ecosystem, political, cultural, etc.) to link eco- and socio- components of sciences and indigenous knowledge. The Global Warming Solutions Act of 2008 required the Massachusetts Secretary of Energy and Environmental Affairs to set a limit on greenhouse gas emissions that would lead to a 10–20 percent reduction in emissions by 2020, and an 80 percent reduction by 2050. This update to Massachusetts' 2010 Climate Action Plan includes recommendations on how to achieve this goal. 
With the goal of creating a cleaner San Diego for future generations, the City of San Diego’s Climate Action Plan calls for eliminating half of all greenhouse gas emissions in the City and aims for all electricity used in the city to be from renewable sources by 2035. The Climate Action Plan is a package of policies that will benefit San Diego’s environment and economy. It will help create new jobs in the renewable energy industry, improve public health and air quality, conserve water, more efficiently use existing resources, increase clean energy production, improve quality of life, and save taxpayer money. The plan identifies steps the City of San Diego can take to achieve the 2035 targets, including creating a renewable energy program, implementing a zero waste plan, and changing policy to have a majority of the City’s fleet be electric vehicles. The Climate Action Plan helps achieve the greenhouse gas reduction targets set forth by the State of California. The City’s first Climate Action Plan was approved in 2005 and a commitment to update the plan was included in the City’s 2008 General Plan update. Flooding and sea level rise are challenges the City of Charleston has taken seriously for centuries. However, this City that we love is experiencing the effects more frequently than ever. In the 1970s Charleston experienced an average of 2 days of tidal flooding per year and it is projected that the City could experience 180 days of tidal flooding in 2045. Identifying initiatives that will improve our ability to withstand these effects is timely. This Sea Level Rise Strategy Plan is that comprehensive inventory of initiatives. King County, Washington's Strategic Climate Action Plan sets forth strategies for reducing greenhouse gas emissions and preparing for climate change impacts. This report updates the information contained within Maryland's 2012 Greenhouse Gas Reduction Act (GGRA) Plan. This document summarizes the state’s progress toward achieving the 2020 emissions reduction goal established by the GGRA and shows that Maryland is on target to not only meet, but to exceed, its emission reduction goal. The Hawai‘i Fresh Water Initiative was launched in 2013 to bring multiple, diverse parties together to develop a forward-thinking and consensus-based strategy to increase water security for the Hawaiian Islands. This Blueprint is the result of the work of the Hawai‘i Fresh Water Council, and provides Hawai‘i policy and decision makers with a set of solutions that have broad, multisector support in the fresh water community that should be adopted over the next three years to put Hawai‘i on a path toward water security. The ultimate goal of the initiative is to create 100 million gallons per day in additional, reliable fresh water capacity for the islands by 2030. The report outlines three aggressive water strategy areas with individual targets. The purpose of this document is to promote state policy recommendations and actions that aim to help improve Colorado’s ability to adapt to future climate change impacts and increase Colorado’s state agencies' levels of preparedness, while simultaneously identifying opportunities to mitigate greenhouse gas emissions at the agency level. Under the Clean Air Act and President Obama's Climate Action Plan, this plan would cut carbon pollution from existing power plants, the largest source of greenhouse gas emissions in the U.S. In 1993, Portland was the first U.S. city to create a local action plan for cutting carbon. 
Portland’s Climate Action Plan is a strategy to put Portland and Multnomah County on a path to achieve a 40 percent reduction in carbon emissions by 2030 and an 80 percent reduction by 2050 (compared to 1990 levels). The 2015 Climate Action Plan builds on the accomplishments to date with ambitious new policies, fresh research on consumption choices, and engagement with community leaders serving low-income households and communities of color to advance equity through the City and County’s climate action efforts. This document guides federal land managers in the effective and efficient use of available resources and engaging public and private partnerships in taking action for the conservation and management of pollinators and pollinator habitat on federal lands. Successfully negotiating climate change challenges will require integrating a sound scientific basis for climate preparedness into local planning, resource management, infrastructure, and public health, as well as introducing new strategies to reduce greenhouse gas emissions or increase carbon sequestration into nearly every sector of California’s economy. This Research Plan presents a strategy for developing the requisite knowledge through a targeted body of policy-relevant, California-specific research over three to five years (from early 2014), and determines California’s most critical climate-related research gaps. In September of 2013, Governor Jack Markell of Delaware signed Executive Order 41 directing state agencies to address the causes and consequences of climate change. The order asked for recommendations to reduce greenhouse gas emissions, increase resilience to climate impacts, and find ways to avoid or minimize flood risks caused by rising sea level. The order also created a committee to oversee the development and implementation of recommendations. A five-year effort by the California Department of Water Resources, this report presents the status and trends of California's water-dependent natural resources, water supplies, and agricultural, urban, and environmental water demands for a range of plausible future scenarios. Update 2013, as it is known, is designed to work in tandem and help implement the Governor’s Water Action Plan. At more than 3,500 pages, Update 2013 covers a variety of information, from detailed descriptions of current and potential regional and statewide water conditions to a “Roadmap For Action” intended to achieve desired benefits and outcomes.
<urn:uuid:4d1dea8c-1c9a-4101-a763-989e066c46b9>
2.625
4,994
Content Listing
Science & Tech.
19.37621
95,617,327
electrolysis: Commercial Applications of Electrolysis Various substances are prepared commercially by electrolysis, e.g., chlorine by the electrolysis of a solution of common salt; hydrogen by the electrolysis of water; heavy water (deuterium oxide) for use in nuclear reactors, also by electrolysis of water. A metal such as aluminum is refined by electrolysis. A solution of aluminum oxide in a molten mineral decomposes into pure aluminum at the cathode and into oxygen at the anode. In these examples the electrodes are inert. In electroplating, the plating metal is generally the anode, and the object to be plated is the cathode. A solution of a salt of the plating metal is the electrolyte. The plating metal is deposited on the cathode, and the anode replenishes the supply of positive ions, thus gradually being dissolved. Electrotype printing plates, silverware, and chrome automobile trim are plated by electrolysis. The English scientist Michael Faraday discovered that the amount of a material deposited on an electrode is proportional to the amount of electricity used. The ratio of the amount of material deposited in grams to the amount of electricity used is the electrochemical equivalent of the material. Actual electric consumption may be as high as four times the theoretical consumption because of such factors as heat loss and undesirable side reactions. An electric cell is an electrolytic system in which a chemical reaction causes a current to flow in an external circuit; it essentially reverses electrolysis. A battery is a single electric cell (or two or more such cells linked together for additional power) used as a source of electrical energy. Metal corrosion can take place by electrolysis in an unintentionally created electric cell. The Italian physicist Alessandro Volta discovered the principle of the electric cell (see voltaic cell) in 1800. Within a few weeks William Nicholson and Sir Anthony Carlisle, English scientists, performed the first electrolysis, breaking water down into oxygen and hydrogen.
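Faraday's observation can be written compactly. In the sketch below, the silver-plating numbers are illustrative values chosen for this note, not figures from the article: the mass m deposited by a charge Q is

$$m = \frac{Q}{F}\cdot\frac{M}{z}$$

where F ≈ 96,485 C/mol is the Faraday constant, M the molar mass, and z the number of electrons transferred per ion. For silver (M = 107.9 g/mol, z = 1), passing Q = 9,650 C would deposit roughly m = (9,650 / 96,485) × 107.9 ≈ 10.8 g, before the losses to heat and side reactions mentioned above.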
<urn:uuid:3a99445b-1816-43e6-a5ef-dff2734bba88>
3.296875
573
Knowledge Article
Science & Tech.
11.084582
95,617,334
Does radiation affect carbon dating? The method was developed by Willard Libby in the late 1940s and soon became a standard tool for archaeologists. Libby received the Nobel Prize in Chemistry for his work in 1960. Carbon-14 decays to nitrogen-14 by emitting an electron and a neutrino, and it does so with a half-life of 5,730 years. Thus, if one started with 1,024 atoms of carbon-14, after 5,730 years, only 512 would remain. After 11,460 years (two half-lives), only 256 atoms are left. After ten half-lives (or 57,300 years), less than one-thousandth of the original amount remains. The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire it by eating the plants. The amount of radiocarbon in a sample from a dead plant or animal, such as a piece of wood or a fragment of bone, provides information that can be used to calculate when the animal or plant died. The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. Even with these weird--and challenging from an old-earth perspective--results, radiocarbon (or carbon-14) dating remains one of the best tools for determining the ages of things that lived from 500 to 50,000 years ago. Carbon-14 (14C) is a naturally occurring radioisotope of carbon and is found in trace amounts on Earth.
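The counts quoted above follow directly from the exponential decay law; as a quick check (my own arithmetic, not part of the original article):

$$N(t) = N_0\left(\tfrac{1}{2}\right)^{t/5{,}730\ \text{yr}}$$

so starting from N_0 = 1,024 atoms gives 512 at 5,730 years, 256 at 11,460 years, and 1,024 × 2^-10 = 1 atom at 57,300 years, which is indeed less than one-thousandth of the original amount.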
<urn:uuid:6594ab7e-baf6-455a-acd9-6b4fe6f957ac>
3.671875
355
Knowledge Article
Science & Tech.
46.933295
95,617,344
Candid Critter: Library trail cams will track backyard fauna RALEIGH, N.C. (AP) — People in North Carolina will soon be able to check out motion-activated trail cameras from the library to get close-up pictures of the deer, coyotes and other critters that wander through their backyards. It's part of a plan by state wildlife officials to enlist the public's help in learning more about the habits of mammals across the state. The News & Observer of Raleigh reports (http://bit.ly/2eQDIN5) organizers of the Candid Critters program hope to have the camera traps at 20,000 to 30,000 locations in backyards, state and national parks, game lands and forests over a three-year period. The cameras will be camouflaged and use an infrared flash so they don't disturb the animals when they go off. "For a long time, scientists have wanted to collect this kind of large-scale data using camera traps," said Roland Kays, the head of the Biodiversity Research Lab at the state Museum of Natural Sciences. "But it's daunting to do by yourself. We basically have built an e-Mammal data management system so that researchers can see and use the information that comes from citizen scientists' camera traps." Kays said scientists are especially interested in studying the distribution patterns of deer across the state, and how that relates to the number of coyotes. Coyotes aren't native to North Carolina, but they have increased in recent years. Kays said scientists also hope the project will yield new information about bears, skunks, chipmunks, feral hogs and other animals. Beginning next month, citizens in the eastern third of the state will be able to check out the cameras from their local libraries, said project coordinator Arielle Parsons, a research associate with the museum's Biodiversity Research Lab. The project will expand statewide in March, Parsons said. She said the project is beginning with 300 cameras purchased with grant funds. The Candid Critters program is being operated by the state Museum of Natural Sciences, the Wildlife Resources Commission and North Carolina State University. Information from: The News & Observer, http://www.newsobserver.com
<urn:uuid:e8d4351b-5b1f-4e8b-a2d6-6b8691bfd887>
2.546875
478
News Article
Science & Tech.
49.725068
95,617,345
About This Workshop: Python is a versatile and widely used programming language with many applications. This workshop explores basic programming concepts in Python and discusses real-world applications across multiple industries. Designed as a stand-alone introduction to programming in Python, this class is also a recommended refresher for students planning to enroll in General Assembly's upcoming Data Science course. The course is hands-on and exercise-based, so students will get plenty of practice. This class is aimed at new programmers, or those who are looking to brush up on the basics and get a hands-on introduction to the Python programming language. -Ability to write and run Python scripts -Ability to understand basic programming concepts -Understanding of Python's role as a tool in the tech industry -Understanding of the possibilities opened up through a better understanding of Python
<urn:uuid:e1823815-6794-49ad-a525-e034705c3086>
2.703125
167
Product Page
Software Dev.
22.212993
95,617,390
Gravity Satellite Blasts Off on Climate Mission The spacecraft will measure the earth's gravitational field with new accuracy. Today the European Space Agency (ESA) launched one of the most advanced Earth-observing satellites ever built. The GOCE (Gravity field and steady-state Ocean Circulation Explorer) satellite blasted off from the Plesetsk Cosmodrome in Northern Russia aboard a modified intercontinental ballistic missile. GOCE will orbit the earth at an altitude of around 260 kilometers for 24 months, circling from pole to pole as the planet turns beneath it. From August, the satellite will measure the earth's gravitational field with better accuracy and higher resolution than any previous mission. Measuring the earth's gravitational field will provide scientists with new insights into the composition of the planet, the movements of its oceans, and the thickness and movement of polar ice sheets. Crucially, it will also help scientists build computer models to predict the impact of climate change more accurately. Monitoring ice coverage, sea-level changes, and ocean circulation will be a particularly important part of the mission. "One of the major impacts of climate change is loss of ice mass and increase in sea-level rise," says Prasad Gogineni, the director of the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas. Data from the new satellite will be coupled with sea-level models to better predict regional and global sea-level rises, adds Claude Laird, a research associate at CReSIS. Danilo Muzi, project manager of GOCE, says that the spacecraft's sensors are 100 times more sensitive than anything flown up until now. The spacecraft will measure the earth's gravitational field using a gradiometer. This type of instrument is already used for ground-based measurements and on airplanes, but it has never before been placed on a satellite. The gradiometer consists of six ultrasensitive accelerometers–sensors that measure acceleration and were specifically built for the mission. These accelerometers are arranged in three pairs and are aligned along three different axes of the gradiometer. By measuring subtle differences in the gravitational pull felt by each pair of accelerometers, the satellite will produce a better map of gravity measurements for the entire globe. GOCE's instruments will also let scientists measure the earth's gravity field down to very small spatial scales. "The smaller detail you have, the more you can learn about the earth underneath the surface," says John Wahr, a professor of physics at the University of Colorado. The satellite will provide a resolution better than 100 kilometers. "To get the whole earth–including the oceans–at the small scales and the amazing resolution that GOCE is going to provide is just remarkable and extremely useful," Wahr says. NASA currently has a pair of satellites that measure the earth's gravity field in orbit. These twin spacecraft, known as GRACE, were launched in 2002 and orbit 500 kilometers above the earth's surface. They have already provided valuable information about how Antarctic and Greenland ice-sheet mass is changing, but their spatial resolution is not as good as GOCE, says Gogineni. "The [new] satellite will provide an improved data set that will allow us to do a much better job of measuring the ice-sheet mass, particularly in Greenland," he says. In order to make the measurement as accurate as possible, the researchers had to compensate for the air drag created by the atmosphere at low-earth orbit.
This drag creates a tiny deceleration of the satellite, which would be sensed by the accelerometers as acceleration. Therefore, the researchers added an ion engine to the tail of the satellite that will emit ions at a rate that perfectly matches this deceleration. GOCE is part of a larger ESA mission called the Living Planet Program, which will involve launching seven more satellites over the next two years, each designed to measure a different feature of the planet. For example, this summer, ESA plans to launch a satellite called SMOS to measure the earth's moisture and ocean salinity. Another satellite, called CryoSat-2, will blast off at the end of the year to map ice coverage. In the past, both ESA and NASA have focused on launching larger satellites carrying many instruments. In 2002, ESA launched Envisat, a 10-instrument satellite, and NASA has an ongoing program called Landsat, which started in 1972 and is considered the gold standard for earth-science missions. Wahr, who worked on the GRACE mission, says that the new mission is very exciting. "For those of us in the business, it is going to be wonderful," he says.
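In essence, each accelerometer pair measures one component of the gravity-gradient tensor as a finite difference over the instrument's baseline; a simplified sketch, with the notation and the half-metre baseline assumed here for illustration rather than taken from the mission specification:

$$\Gamma_{xx} \approx \frac{a_x^{(1)} - a_x^{(2)}}{d}$$

where a_x^(1) and a_x^(2) are the accelerations sensed by the two accelerometers of a pair separated by a baseline d (of order 0.5 m) along one axis. Non-gravitational accelerations such as residual drag act on both accelerometers almost equally, so they largely cancel in the difference, which is why the differential measurement isolates the gravitational signal.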
<urn:uuid:504455c2-8072-4335-ab26-822903853767>
3.734375
1,026
News Article
Science & Tech.
33.520976
95,617,414
Sydney: Polar scientists said Thursday they had successfully drilled a 2,000-year-old ice core in the heart of Antarctica in a bid to retrieve a frozen record of how the planet`s climate has evolved. The Aurora Basin North project involves scientists from Australia, China, France, Denmark, Germany and the United States who hope it will also advance the search for the scientific "holy grail" of the million-year-old ice core. The five-week expedition, in a hostile area that harbours some of the deepest ice in the frozen continent, over three kilometres (1.9-miles) thick, will give experts access to some of the most detailed records yet of past climate in the vast region. About two tonnes of ice core sections drilled at Aurora Basin, 500 kilometres (310 miles) inland of Australia`s Casey station, is now being distributed to Australian and international ice core laboratories. They will conduct an analysis of atmospheric gases, particles and other chemical elements that were trapped in snow as it fell and compacted to form ice. Australian Antarctic Division glaciologist and project leader Mark Curran said it will help fill a gap in the science community`s knowledge of climate records. "Using a variety of scientific tests on each core, we`ll be able to obtain information about the temperature under which the ice formed, storm events, solar and volcanic activity, sea ice extent, and the concentration of different atmospheric gases over time," he said. The team, working in temperatures of minus 30 Celsius, used a Danish Hans Tausen drill to extract the main 303-metre-long ice core, which will provide annual climate records for the past 2,000 years. "There are only a handful of records with comparable resolution that extend to 2,000 years from the whole of Antarctica, and this is only the second one from this sector of East Antarctica," added Curran. Additionally two smaller drills were used to take out 116 and 103-metre cores spanning the past 800 to 1,000 years. Data collected during the drill should help scientists locate a suitable site for a more ambitious expedition to collect a one million year-old ice core in the future. "Such an ice core would help us understand what caused a dramatic shift in the frequency of ice ages about 800,000 years ago, and further understand the role of carbon dioxide in climate change," said Curran.
<urn:uuid:32099d44-81cf-48db-a9cf-09f1c6e1f358>
2.9375
496
News Article
Science & Tech.
34.066311
95,617,421
Will the Parker Solar Probe really touch the surface of the sun and what science will it do? Tag Archives | Moon Astronomers hate the Moon because it ruins perfectly good observing nights. But is it possible that we all need the Moon for our very existence? Thanks to Cassini and other spacecraft, we’ve learned a tremendous amount about the icy worlds in the Solar System, from Jupiter’s Europa to Saturn’s Enceladus, to Pluto’s Charon. Geysers, food for bacteria, potential oceans under the ice and more. What new things have we learned about these places? June is more exciting than normal, by providing us a chance to see all the naked eye planets by month’s end, lunar close encounters with all but Mercury, and an especially close one with Saturn (only 1˚!), with most planets visible for long periods of the night. So get out your scope and try to […] Hey there’s another extrasolar asteroid and China moving toward a moon rover How to define lunar calendar based on the first visibility of the lunar crescent. And the recent scientific estimates suggest that a typical human being has the same number of bacteria and other microbes as they do actual human cells. Asteroid hunters now frequently spot these small asteroids as they come between the Moon and the Earth’s surface. What to look out, and up, for in March.
<urn:uuid:571c6449-f88b-4111-b097-dcc6aac069ee>
2.765625
297
Content Listing
Science & Tech.
52.677917
95,617,423
Jaguar breaks the world speed record with its electric boat. Jaguar is best known as the luxury vehicle brand of Jaguar Land Rover, and increasingly for its electric cars, but electric boats? It sounds fantastic, yet the company has not only released an electric boat but also broken the speed record with it. According to scientists, the approaching comet, which may well turn out to be a "comet of the century", could create an unusual type of meteor shower. When comet ISON flies by Earth this year, dust from the comet's tail may produce a meteor stream: a flow of very fine particles, once part of the tail, entering the planet's atmosphere. According to Paul Veygert, instead of burning up in flashes of light, they will drift gently down to Earth. In Veygert's computer model the motes travel at a speed of 125,000 mph (201,168 km/h), but as soon as they reach Earth's atmosphere they are slowed until they lose their speed entirely. Because of this, observers on Earth will not be able to see meteors falling through the atmosphere in January 2014, the scientist added. The invisible shower of comet dust, if it occurs at all, will be very slow: it could take months or even years for the fine dust to settle out of the upper atmosphere. But hope for a brilliant show is not lost. Dust from ISON could seed silvery, noctilucent clouds: brilliant icy night clouds over Earth's poles that shine blue. Blue ripples over the polar regions of Earth may be the only visible sign that the meteor shower is under way. ISON is headed through the solar system on its way to the Sun; at its closest approach on November 28, the comet will pass within about 730,000 miles (nearly 1.2 million kilometers) of the Sun's surface.
<urn:uuid:f63eadb8-5251-4367-bbde-931ec988f3fa>
2.78125
431
News Article
Science & Tech.
65.885992
95,617,439
In software application development, Agile is a methodology that anticipates the need for flexibility and applies a level of pragmatism to the delivery of the finished product. Agile requires a cultural shift in many companies because it focuses on the clean delivery of individual pieces or parts of the software and not on the entire application. Twelve principles of the Agile Manifesto In 2001, 17 software development professionals gathered to discuss concepts around the idea of lightweight software development and ended up creating the Agile Manifesto. The Manifesto outlines the core values of Agile, and although there has been debate about whether the Manifesto has outlived its usefulness, it continues at the core of the Agile movement. Included in the Manifesto are concepts that were revolutionary at the time, including the emphasis on people and communication, rather than on processes and tools. Other key parts of the Manifesto include working directly with and satisfying customers, breaking all work down into small chunks, meeting daily to ensure work is on track and being open to changes even at the very end of the process. Types of Agile methodologies In any Agile environment, it is likely there are several Agile methodologies being used. One of the oldest of these is extreme programming, which is based on the idea that for successful development to happen quickly, testing must be done regularly. In many cases, the tests must be written even before the code. Another Agile methodology that is widely used is Scrum. Scrum brings everyone on the team, including the business stakeholders, together to agree on features. Then, specific goals are set for a 30-day sprint, at which point the agreed-upon software is delivered. Some Agile proponents emphasize Lean development, or Lean programming, which strips software development down to the basics. Feature-driven, test-driven or behavior-driven development can also be used in an Agile environment, depending on the needs of the organization. Advantages of Agile Much has been written over the years comparing Agile and Waterfall approaches. In the Waterfall era of software development, coders worked alone, with little to no input before handing the software to testers and then on to production. Bugs, complications and feature changes either weren't handled well, or were dealt with so late in the process that projects were seriously delayed or even scrapped. The idea behind the Agile model, in which everyone -- including the business side -- stayed involved and informed in the development process, represented a profound change in both the culture and a company's ability to get better software to market more quickly. Collaboration and communication became as important as technology, and because the Agile Manifesto is open to interpretation, Agile has been adapted and modified to fit organizations of all sizes and types. The Agile cultural shift also paved the way for the latest software development evolution, DevOps. Disadvantages of Agile Many would say the biggest disadvantage of Agile is the fact it has been modified -- some would say diluted -- by many organizations. This phenomenon is so widespread that the "Agile my way" practitioners are known as "Scrumbuts," as in, "We do Scrum in our organization, but...". Although Agile opens up the lines of communication among developers and the business side, it's been less successful bringing testing and operations into that mix -- an omission that may have helped the idea of DevOps gain traction.
Another potential concern about Agile is its lack of emphasis on technology, which can make the concept a difficult sell to upper managers who don't understand the role that culture plays in software development.
<urn:uuid:7ef86242-fdfd-453a-bbe3-a3e379831011>
3.171875
746
Knowledge Article
Software Dev.
22.171369
95,617,440
This week I wrap up my college class on Java programming. We spent a lot of time on some basic Java concepts in class. That meant we had to rush through a lot of the final chapters in the book. Unfortunately we rushed through inheritance, and skipped many sections. One such section we skipped was the use of super() to reach up to a superclass. On first glance, I assumed you had to prefix any method of the base class with super when you call it from a child class. However that is not normally the case. You can just call a method. If it is defined in the superclass, Java will know what code to execute normally. That is because subclasses are extensions of the superclass. That is, they have access to the same public methods defined in the superclass. There are two exceptions which require the use of super. One of them is the constructor of the child class. If you do nothing special in this constructor, Java will call the default constructor of the parent class for you. However if you want to call one of the other constructors of the parent class, you must make that call with super(). The other occasion when you need to use super is when you have a method that is overridden in the child class. If you make a call to an overridden method in the child class, Java chooses the overridden method to execute. You might want to execute the code for the overridden method in the base class. To do that, you have to put the super keyword before the method name.
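A minimal sketch of both cases; the class and method names below are invented for illustration, not taken from the course materials:

class Animal {
    private final String name;
    Animal() { this("unnamed"); }              // default constructor
    Animal(String name) { this.name = name; }  // non-default constructor
    String describe() { return "an animal called " + name; }
}

class Dog extends Animal {
    Dog(String name) {
        super(name); // explicitly call a non-default parent constructor
    }
    @Override
    String describe() {
        // super.describe() runs the base-class version of the overridden method;
        // calling describe() here without "super." would recurse into this method.
        return super.describe() + " that barks";
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new Dog("Rex").describe()); // prints: an animal called Rex that barks
    }
}

Ordinary calls such as describe() on a Dog still resolve without any super prefix; super is only needed in the two situations described in the post.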
<urn:uuid:d3989430-1145-4730-bfd4-c08a676dcc2f>
3.15625
362
Personal Blog
Software Dev.
62.458132
95,617,459
In a laboratory test, a car of mass 1200 kg is driven into a concrete wall, as shown in Fig. 2.1. A video recording of the test shows that the car is brought to rest in 0.36 s when it collides with the wall. The speed of the car before the collision is 7.5 m/s. Calculate the change of momentum of the car. Please help me with this; I know it's supposed to be easy, but it isn't for me. Reply: Can you work out what the momentum before the collision is? Do you know the formula for the change of momentum? How would you use this to find the change in momentum?
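For reference, a worked sketch using the values quoted in the question (the arithmetic is mine, not part of the original thread):

$$\Delta p = m\,\Delta v = 1200\ \text{kg} \times 7.5\ \text{m s}^{-1} = 9000\ \text{kg m s}^{-1}$$

and, if the average force on the car is also wanted,

$$F_{\text{avg}} = \frac{\Delta p}{\Delta t} = \frac{9000\ \text{kg m s}^{-1}}{0.36\ \text{s}} = 25\,000\ \text{N} = 25\ \text{kN}.$$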
<urn:uuid:48d60b12-339c-4863-8865-19018a835b92>
3.390625
194
Comment Section
Science & Tech.
104.230857
95,617,469
The equation of state of one mole of a van der Waals gas is given by (P + a/V^2)(V - b) = RT, where a and b are constants. a) Calculate the work W in an isothermal reversible process when the volume changes from V1 to V2. b) Using the energy equation, show that (dU/dV)_T = a/V^2. c) Calculate the change in internal energy U in the process of (a). d) Calculate the heat exchanged in this process. e) Calculate the change in the quantity PV, i.e. Δ(PV). f) Deduce the change in enthalpy. I've included in this text an Appendix containing part of a previous problem I did for you which is useful for this problem. The equation of state for one mole is: (P + a/V^2)(V - b) = RT. Solving for P yields: P = RT/(V - b) - a/V^2. The work done by the gas when its volume changes from V1 to V2 at constant temperature T is W = Integral from V1 to V2 of P dV = RT ln[(V2 - b)/(V1 - b)] + a/V2 - a/V1 (1). Then Eq. A9 of the Appendix says that: dU = Cv dT + a n^2/V^2 dV. The coefficient of dV here is the derivative of U w.r.t. V at constant T: (dU/dV)_T = a n^2/V^2 = a/V^2 for n = 1 mole. So, when the volume changes from V1 to V2, the change in U is: Delta U = Integral from V1 to V2 of a/V^2 dV = -a/V2 + a/V1 (2). Then from the First Law of thermodynamics we have that: Delta U ... Detailed solution is given where everything is derived from first principles.
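A sketch of how the truncated first-law step would continue, combining results (1) and (2); this completion is mine, not part of the original excerpt:

$$Q = \Delta U + W = \left(\frac{a}{V_1} - \frac{a}{V_2}\right) + RT\ln\frac{V_2 - b}{V_1 - b} + \frac{a}{V_2} - \frac{a}{V_1} = RT\ln\frac{V_2 - b}{V_1 - b},$$

i.e. in the isothermal reversible process the attractive-term contributions cancel between ΔU and W, leaving the heat absorbed equal to RT ln[(V2 - b)/(V1 - b)].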
<urn:uuid:382358bf-c591-4c3a-9781-3e71377d2387>
3.15625
458
Q&A Forum
Science & Tech.
89.467596
95,617,476
A Surrey scientist claims to have an answer to what is often considered to be the hardest problem in science (sometimes just known as the "Hard Problem"): why we are aware. Johnjoe McFadden, Professor of Molecular Genetics at the University of Surrey, has previously proposed that consciousness is generated by the brain's electromagnetic field, the cemi field. The cemi field theory – that our thoughts are electric fields in the brain – has generated a lot of interest both in the UK and across the world. In McFadden's theory nerve signals – the wires of the brain – are responsible for driving our unconscious actions (like walking or driving to work every day, when our conscious mind seems to be elsewhere) but our conscious thoughts are the electric fields that ebb and flow through the brain. Nerves and wires can only encode (know) ones and zeros but fields can encode the complexity of our thoughts. Now, in a paper published in the latest issue of Journal of Consciousness Studies (Johnjoe McFadden, 2002 "The Conscious Electromagnetic Information (Cemi) Field Theory: The Hard Problem Made Easy?") McFadden proposes an answer to the hard problem, claiming that awareness is electromagnetic field information, viewed from the inside.
<urn:uuid:ce7a6f2c-88bb-498c-b7eb-9b50d4ca80db>
3.484375
854
Content Listing
Science & Tech.
39.932312
95,617,480
We studied variation in C and energy flow in stream food webs by examining primary consumer diets and potential food sources at 8 sites of different drainage areas in the South Fork Eel River drainage. Both heptageniid mayfly nymphs and Glossosoma caddisfly larvae are considered scrapers in traditional functional feeding group classification, but past studies suggested that they differed in their relative use of terrestrial and algal C in some streams. In our study, microscopic examination and stable C isotope ratios (delta13C) suggested an increasing contribution of algae to both epilithic biofilms and fine particulate organic matter as stream drainage area and productivity increased. The proportion of algal cells in biofilms of small, unproductive streams was low, and biofilm delta13C values were similar to those of terrestrial detritus, suggesting that biofilms were composed primarily of heterotrophic microorganisms. Glossosoma larvae fed selectively on algae where it was scarce within the biofilms of small forested streams. In contrast, heptageniid mayfly nymphs did not appear to feed selectively on algae, but consumed algae and other materials in proportion to their abundance in the environment. These feeding patterns may have consequences for energy flow through food webs. Heptageniid mayflies feeding on biofilms in unproductive streams may augment the flow of dissolved organic C from terrestrial sources through food webs. In contrast, selective feeding by abundant Glossosoma larvae may reduce the flow of algal C through food webs because they are resistant to aquatic predators.
<urn:uuid:45316aa1-393e-4520-a049-b5ede0de027a>
3.0625
341
Academic Writing
Science & Tech.
14.712326
95,617,493
2. Calculate the pressure needed to prevent osmosis when 10.0 g KCl is added to 150 mL of water at 25°C. 20.1 atm / 40.1 atm / 65.7 atm / 80.2 atm / 160 atm
3. Which statement is false: if a nonvolatile solute is added to a solvent, the vapor pressure of the solvent is lowered / salts have a low ΔHsolution / the amount of gas dissolved is directly proportional to the pressure of the gas above the solution / heating always helps the solute dissolve in the solvent / All statements are FALSE
4. A solution contains the same mass of two substances, substance A, which weighs 200 g, and substance B, which weighs 50 g. What is the mole fraction of substance A in this solution? 1/5 / 1/4 / 4/5 / 5/4 / 2/3
5. Which of the following are colligative properties? Boiling Point Elevation / Freezing Point Depression / Osmotic Pressure / Vapor Pressure Law / All of the above
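As a rough illustration of the method behind questions 2 and 4 (not an endorsement of any particular answer choice), here is a short Python sketch using the van 't Hoff relation Π = iMRT for osmotic pressure and simple mole counting for the mole fraction. The molar masses, the reading of "200 g" and "50 g" as molar masses, and the assumption that KCl dissociates completely (i = 2) are added here and are not stated in the quiz.

# Question 2: osmotic pressure needed to prevent osmosis (assumes ideal behaviour,
# full KCl dissociation, and that 150 mL of water gives roughly 150 mL of solution).
R = 0.08206                      # L·atm/(mol·K)
T = 298.15                       # 25 °C in kelvin
n_kcl = 10.0 / 74.55             # moles of KCl (molar mass 74.55 g/mol assumed)
molarity = n_kcl / 0.150         # mol per litre
pressure = 2 * molarity * R * T  # Π = iMRT with i = 2 for KCl
print(f"osmotic pressure ≈ {pressure:.1f} atm")

# Question 4: equal masses of A (molar mass 200 g/mol) and B (50 g/mol), assumed values.
mass = 100.0                     # any common mass cancels out
n_a, n_b = mass / 200.0, mass / 50.0
print(f"mole fraction of A = {n_a / (n_a + n_b):.2f}")   # 0.20, i.e. 1/5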
<urn:uuid:ac7812db-17c5-43c7-8b40-148750514552>
3.1875
262
Q&A Forum
Science & Tech.
69.529011
95,617,506
DNA may seem an unlikely molecule from which to build nanostructures, but this is not correct. The specificity of interaction that enables DNA to function so successfully as genetic material also enables its use as a smart molecule for construction on the nanoscale. The key to using DNA for this purpose is the design of stable branched molecules, which expand its ability to interact specifically with other nucleic acid molecules. The same interactions used by genetic engineers can be used to make cohesive interactions with other DNA molecules that lead to a variety of new species. Branched DNA molecules are easy to design, and they can assume a variety of structural motifs. These can be used for purposes both of specific construction, such as polyhedra, and for the assembly of topological targets. A variety of two-dimensional periodic arrays with specific patterns have been made. DNA nanomechanical devices have been built with a series of different triggers: small molecules, nucleic acid molecules and proteins. Recently, progress has been made in self-replication of DNA nanoconstructs, and in the scaffolding of other species into DNA arrangements.
<urn:uuid:32b8e7ca-29ff-434d-a450-ec3a60607a32>
3.875
247
Academic Writing
Science & Tech.
18.025662
95,617,521
Nothing smells quite like the familiar, earthy aroma that arrives with rain at the end of a dry spell. What you’re smelling is known as “petrichor,” which brings back many fond memories for anyone who attended summer camp as a kid. What creates this telltale yet pleasant odor? The term petrichor was coined in the 1960s by scientists who theorized that the smell originates from oils and chemicals released after rain strikes the ground. The lighter the raindrops, the stronger the odor. All of these assumptions entered the fount of common pop-sci knowledge without question. Until now, nobody thought to get down and dirty and observe petrichor’s creation in action. Using high-speed cameras, researchers at the Massachusetts Institute of Technology believe they’ve isolated the mechanism that causes petrichor. In a new MIT paper, scientists Youngsoo Joung and Cullen R. Buie describe what happened when they observed raindrops landing on 28 different surfaces, from porous alumina to sandy clay. Here’s just a taste of the action: As you can see, champagne-like bubbles — some of which are nearly microscopic in size — shoot upward and release aerosols containing aromatics, which distribute the earthy aroma. Buie said, “It’s a very common phenomenon, and it was intriguing to us that no one had observed this mechanism before.” He and Joung look towards future research on how these chemicals “can be delivered in the environment, and possibly to humans.” Throughout the video, the 16 different soil types combine with the magic of raindrops in different ways. According to raindrop rate, speed, and the porousness of the surface, the size of the aerosol bubbles differs. Check out the actual research video from MIT at this link.
<urn:uuid:9e0053e1-f9d3-4c45-90b2-99d46308d500>
3.109375
391
News Article
Science & Tech.
46.738565
95,617,528
Two articles published in the magazines “Science” and “Nature Astronomy” describe two studies also presented at the American Geophysical Union meeting taking place in San Francisco that reported new evidence of the presence of water ice below the surface of the dwarf planet Ceres. The researchers used the data collected by NASA’s Dawn space probe to find two sets of evidence that in Ceres’s subsoil there’s more ice than expected and that it can exist for a very long time. Ceres seemed water-poor, a dark body with some white patches made bright by the presence of salts that reflect sunlight. However, the various research carried out thanks to the data collected by the Dawn space probe after it started studying the dwarf planet from its orbit, are showing a different situation and the research just presented suggest that there’s actually an abundance of ice water. The team led by Thomas Prettyman of the Planetary Science Institute, Tucson, Arizona, which produced the article published in “Science”, used the GRaND (Gamma Ray and Neutron Detector) instrument to determine the concentrations of hydrogen, iron and potassium on Ceres’ surface layer and measure the energy generated by gamma rays and neutrons emitted by the dwarf planet. The neutron measure is important because those particles are produced by cosmic rays interacting with the surface of Ceres. Hydrogen slows them down so detecting those that escape the dwarf planet allow to assess the amount of this element near the surface, most likely combined with oxygen in water that in those conditions is frozen. A confirmation of the possibility that water ice exists near the surface of Ceres a few billion years after its formation is important because it opens up interesting possibilities concerning asteroids. The elements detected on the surface of Ceres indicate that liquid water is rising from its interior by altering the dwarf planet’s top layer. It’s possible that some process had warmed the water, perhaps the chemical one called serpentinization of olivine or the decay of radioactive materials. According to the authors of a study published in the journal “Nature” in June 2016 it’s possible that even today there’s liquid water in Ceres’ underground. Even if today it was all frozen, it would constitute an explanation of the difference between the rocky interior and the surface layer containing ice with different chemical compositions. An interesting comparison is the one with the meteorites called carbonaceous chondrites, which were altered by water. They probably originate from bodies smaller than Ceres, which contains more hydrogen and less iron than them. One explanation is a hypothesis already offered by other researchers, namely that Ceres formed in a region different from the meteorites. The presence of ammonia may indicate that it formed in the outer solar system. The team led by Thomas Platz at the Max Planck Institute in Göttingen, Germany, who produced the article published in “Nature Astronomy”, focused on what have been called “cold traps”, craters where temperatures drop to 110 Kelvin (-163° Celsius, -260° Fahrenheit). In at least 10 craters deposits of reflective materials were identified and in one of them partially illuminated by sunlight the Dawn space probe’s VIR (Visible and infrared spectrometer) instrument confirmed the presence of ice. Scientists think that at those conditions very little ice can sublimate over billions of years. 
Cold traps have also been found on the Moon and even on Mercury there are icy areas because some craters are always in the shadow. On Ceres the situation is different and ice could have come from the dwarf planet’s icy crust or from space. In any case, water molecules can move from warmer regions and fall back into cold traps: a lot of vapor gets lost in space but a part falls on the surface where it freezes. The Dawn space probe began its extended mission in July and at the moment its orbit is elliptical at a distance of over 7,200 kilometers (4,500 miles) from Ceres. Its data collection continues but research already confirmed a presence of water greater than expected. It’s another proof of the abundance of water in the solar system.
<urn:uuid:8f8bfbba-7603-4a50-af49-d30d3b7207cc>
4.09375
858
News Article
Science & Tech.
27.047016
95,617,538
What is it? Many people have probably seen the terms "service-oriented architecture" (SOA) and microservices. If you're curious about the differences between them, there's a free ebook from O'Reilly available. Today I want to focus on the latter. So - you've heard microservices are awesome and you should consider them in your project. But how do you know when and where it's a good choice to use them? Most of the time it'll be overengineering to go with microservices, especially when you're building an MVP. There are, however, a few circumstances under which you should at least consider using microservices as your application architecture. A bit of history Back in the late 2000s when I started my adventure with building web applications, the most common usage of the MVC coding pattern was to use models as objects mapped to the database, managing associations and providing validations and callbacks. Then some logic in the views and most of the code in controllers. Some time later people came to their senses and rethought how it should work (still far from perfection, but better) - a controller should only accept the request and pass data to the model, but it resulted in a situation where instead of fat controllers and skinny models, we had skinny controllers and fat models - still not the best scenario. Fast forward - skinny controllers, skinny models, fat services. Services are classes that are reusable across different models (think of a notification service that gets parameters from a model and, based on those parameters, sends a proper notification). But services also tend to grow bigger and bigger. And even if they're not, you may end up having dozens of new services in a single app, which is hard to maintain. Another case - your userbase is growing, you're getting more and more data, more things to calculate, transform and visualize - and the technology initially used for your app isn't necessarily the best one to do such jobs. When to use It's easy to get tempted and use it everywhere, but that would be overkill and overengineering in a lot of cases. However, there are a few scenarios where it is a very good idea: - If you have a monolithic application, parts of which tend to eat a lot of memory or computing power. - If you want a higher level of security and data separation. - If you know from the beginning that different parts of the system will need to be scaled a lot, and in different proportions. - And last but not least - if you are at a point where you need to optimize a lot of various pieces of code and want to use the technologies that suit each particular job best. Pros & cons Like every decision, switching to a microservices-oriented architecture has its good and bad sides. Good things are: - scalability - you can deploy more instances of only the services you need, not an entire monolithic application - performance - you may use Python for crawling pages, Golang for high-rate operations, Elixir for real-time features etc.
- elasticity - you're able to scale the app more easily - security - every part of the system can be separated and has its own database, so your users' data is in a few places rather than one And by bad sides I mean things like: - monitoring - now you have more things to monitor and log, BUT it's also a good thing as this makes it easier to find a bug - deployment - to keep everything consistent the only way is to automate builds using Continuous Integration and Continuous Deployment services (and let's say Docker for development mode) - configuration provisioning and service discovery - if you don't want to keep thinking about adding new instances to some configs (which is ridiculous by the way) you should think of tools like etcd, Zookeeper or Consul that are responsible for service discovery - communication - you will need to share data between different services, for sure. How would you do that? It depends - you have choices like Apache Thrift, REST APIs, gRPC, message brokers like ZeroMQ or RabbitMQ, or Apache Kafka. It's hard to choose right, so take some time to do some research on this matter. It comes with costs - approaching a microservices-oriented architecture requires skilled developers, as it takes some experience to understand more complex systems, and it could also take more time to launch new features as now a few apps may require changes. On the other side, changing technologies to those that perform better may slow down the rising costs of infrastructure. Building a team capable of managing microservices in different technologies has some positive side effects: - people with backgrounds in different technologies may easily share the best solutions with each other - communication between groups improves as they need to keep things consistent and make some compromises - you need to keep API documentation updated (well, you need to actually have documentation - many companies still forget about this) A microservices-based architecture is not a golden solution for every problem you may have, but it's certainly worth considering if you need a modular, scalable system. It also helps your team grow and learn from each other, especially when you have a polyglot company. If you'd like to discuss how microservices may help you and your company, drop me an e-mail.
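To make the communication point concrete, here is a minimal, self-contained Python sketch of one service calling another over plain HTTP using only the standard library; the service names, port and route are illustrative and not taken from the post above. In a real deployment the hard-coded host and port would come from a service-discovery tool such as the Consul or etcd mentioned earlier, or the exchange would go through a message broker instead.

# One tiny "pricing" service, and a "checkout" caller that fetches data over the network
# instead of making an in-process function call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

class PricingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/price/42":
            body = json.dumps({"sku": 42, "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):      # keep the demo quiet
        pass

server = HTTPServer(("localhost", 8081), PricingService)
Thread(target=server.serve_forever, daemon=True).start()

with urlopen("http://localhost:8081/price/42") as resp:   # the "checkout" side
    print(json.load(resp))             # {'sku': 42, 'price': 9.99}
server.shutdown()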
<urn:uuid:5dcd7673-0c5e-4300-b785-2128c53bf348>
2.625
1,101
Personal Blog
Software Dev.
30.074784
95,617,593
Bernhard Riemann and the Greatest Unsolved Problem in Mathematics Book - 2003 In August 1859 Bernhard Riemann, a little-known 32-year-old mathematician, presented a paper to the Berlin Academy titled: "On the Number of Prime Numbers Less Than a Given Quantity." In the middle of that paper, Riemann made an incidental remark - a guess, a hypothesis. What he tossed out to the assembled mathematicians that day has proven to be almost cruelly compelling to countless scholars in the ensuing years. Today, after 150 years of careful research and exhaustive study, the question remains. Is the hypothesis true or false? Riemann's basic inquiry, the primary topic of his paper, concerned a straightforward but nevertheless important matter of arithmetic - defining a precise formula to track and identify the occurrence of prime numbers. But it is that incidental remark - the Riemann Hypothesis - that is the truly astonishing legacy of his 1859 paper. Because Riemann was able to see beyond the pattern of the primes to discern traces of something mysterious and mathematically elegant shrouded in the shadows - subtle variations in the distribution of those prime numbers. Brilliant for its clarity, astounding for its potential consequences, the Hypothesis took on enormous importance in mathematics. Indeed, the successful solution to this puzzle would herald a revolution in prime number theory. Proving or disproving it became the greatest challenge of the age. It has become clear that the Riemann Hypothesis, whose resolution seems to hang tantalizingly just beyond our grasp, holds the key to a variety of scientific and mathematical investigations. The making and breaking of modern codes, which depend on the properties of the prime numbers, have roots in the Hypothesis. In a series of extraordinary developments during the 1970s, it emerged that even the physics of the atomic nucleus is connected in ways not yet fully understood to this strange conundrum. Hunting down the solution to the Riemann Hypothesis has become an obsession for many - the veritable "great white whale" of mathematical research. Yet despite determined efforts by generations of mathematicians, the Riemann Hypothesis defies resolution. Alternating passages of extraordinarily lucid mathematical exposition with chapters of elegantly composed biography and history, Prime Obsession is a fascinating and fluent account of an epic mathematical mystery that continues to challenge and excite the world. Posited a century and a half ago, the Riemann Hypothesis is an intellectual feast for the cognoscenti and the curious alike. Not just a story of numbers and calculations, Prime Obsession is the engrossing tale of a relentless hunt for an elusive proof - and those who have been consumed by it.
<urn:uuid:bf2b80e6-a816-416a-a6d5-09cd4deee5d6>
3.03125
553
Content Listing
Science & Tech.
24.304412
95,617,594
At the end of the article, you will be able to distinguish between the two types of properties – intensive and extensive – with examples. Let's start discussing them one by one. Intensive Property Examples The properties which do not depend upon either the size of the system or the quantity of matter present in it are known as intensive properties. For example: pressure, temperature, density, specific heat, surface tension, viscosity, refractive index, melting and boiling points, etc. Intensive properties are unlike the extensive properties: they depend on the type of matter and not on the amount or quantity of the matter. Examples: Suppose we have two beakers of water. - Beaker one is filled with a larger quantity of water than beaker two. - An equal amount of heat is given to raise the temperature of both beakers. - The whole process is watched with two separate thermometers. When the temperature reaches 100 degrees Celsius (the boiling point of water), the water starts boiling in both beakers, regardless of the quantity found in each. This means that the boiling point is not affected by the different quantities of water in the two beakers, so the boiling point is an intensive property. Another example of an intensive property is color. Let's suppose we take a green liquid. The liquid does not change its color even though we place it in different containers with different shapes and volumes. Here the color was not affected by the amount of matter, so color is also an intensive property. Other intensive properties include hardness and density. Extensive Property Examples The properties which depend upon the quantity of the matter present in the system are known as extensive properties. For example: mass, volume, energy, enthalpy, work, etc. Examples: Let's take an example. If we have a ruler with a certain height and cut a certain piece from the ruler, this action will change the dimension (the height of the ruler). So what did we do here? We removed matter, and removing matter decreased the height. We call the height an extensive property because it depends on the amount of matter. Another example of an extensive property: suppose that we have a balloon with a certain volume. We expand the balloon by pushing more air and gases inside it, so the volume of the balloon increases. What does this mean? Adding extra matter increased the volume of the balloon, so the volume is an extensive property. These properties are the ones which depend on the quantity of the matter. This is all about the basics of intensive and extensive properties, with examples.
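A tiny numerical sketch of the same idea (the sample values are made up): splitting a sample in half halves the extensive properties but leaves the intensive ones unchanged.

# Splitting a sample changes extensive properties but not intensive ones.
mass, volume = 200.0, 100.0                      # grams, millilitres (made-up sample)
density = mass / volume                          # intensive: 2.0 g/mL

half_mass, half_volume = mass / 2, volume / 2    # take half the sample
print(half_mass, half_volume)                    # extensive properties halve: 100.0 50.0
print(half_mass / half_volume)                   # density is still 2.0 g/mL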
<urn:uuid:4df0d9e7-7d33-448d-9c4c-b4bf3635e9eb>
3.390625
876
Spam / Ads
Science & Tech.
47.128332
95,617,614
This animation illustrates the science objectives of the SOFIE instrument flown on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft. The Solar Occultation For Ice Experiment (SOFIE) instrument uses solar occultation to measure cloud particles, temperature and atmospheric gases involved in forming the noctilucent clouds studied by the Aeronomy of Ice in the Mesosphere (AIM) spacecraft. The instrument will reveal the recipe of chemicals that prompt formation of polar mesospheric clouds. It will provide the most accurate and comprehensive look to date at ice particles and chemicals within the clouds as well as at the environment in which these clouds form. GCMD keywords can be found on the Internet with the following citation: Olsen, L.M., G. Major, K. Shein, J. Scialdone, S. Ritz, T. Stevens, M. Morahan, A. Aleman, R. Vogel, S. Leicester, H. Weir, M. Meaux, S. Grebas, C. Solomon, M. Holland, T. Northcutt, R. A. Restrepo, R. Bilodeau, 2013. NASA/Global Change Master Directory (GCMD) Earth Science Keywords. Version 184.108.40.206.0
<urn:uuid:0ec19ac6-e0d3-4911-86ed-286490c9cb41>
3.0625
270
Knowledge Article
Science & Tech.
60.151
95,617,625
Authors: Raymond W. Jensen Using a three-particle entangled system (triple), it is possible in principle to transmit signals faster than the speed of light from sender to receiver in the following manner: From an emitter, for every triple, particles 1 and 2 are sent to the receiver and 3 to the sender. The sender is given the choice of whether or not to measure polarization of particle 3. Meanwhile the receiver measures particle correlation vs. relative polarization angle for the polarizers of particles 1 and 2. The particle 1 and 2 correlation statistics depend on whether or not particle 3 polarization was measured, instantaneously. This dependence is a basis for faster-than-light communication. Comments: 20 Pages. Russian translation by V.A. Kasimov from http://vixra.org/pdf/1007.0044v1.pdf [v1] 2018-06-29 11:52:56 Unique-IP document downloads: 10 times
<urn:uuid:b7d91f82-b552-4209-a284-1e149b2165e6>
2.9375
336
Knowledge Article
Science & Tech.
51.449529
95,617,630
“Crowdsourcing”, the cooperation of a large number of interested nonscientists, has helped to find a new fungus from which American researchers have now isolated and characterized an unusual metabolite with interesting antitumor activity. To date, fewer than 7 % of the more than 1.5 million species of fungi thought to exist have been investigated for bioactive components. To change this situation, a research group headed by Robert H. Cichewicz at the University of Oklahoma has prepared a collection of several thousand fungal isolates from three regions: arctic Alaska, tropical Hawaii, and subtropical to semiarid Oklahoma. The fungal extracts were analyzed and subjected to biological tests, including antitumor activity, by Susan L. Mooberry at the University of Texas at San Antonio. This resulted in the discovery of a number of interesting substances. The researchers soon realized that the efforts of a single research team were insufficient to acquire samples representing the immense diversity of the thousands of fungi they hoped to test. Therefore, the team turned to a “crowdsourcing” approach, in which lay people with an interest in science, known as “citizen scientists”, were invited to take part in the collection process by submitting soil samples from their properties. Crowdsourcing is becoming an increasingly important tool, giving research groups access to information and samples that could otherwise not be subjected to scientific study. Crowdsourcing has previously been used in a variety of projects, including the analysis of historic weather data and the classification of newly discovered galaxies. Putting this approach into practice, the research team uncovered a new fungal strain identified as a Tolypocladium species in a crowdsourced soil sample from Alaska. The fungal isolate, which was identified by Andrew Miller at the University of Illinois, was highly responsive to changes in the way it was grown, leading to the production of several new compounds, including a unique metabolite with significant antitumor activity. This substance may represent a valuable new approach to cancer treatment because it avoids certain routes that lead to resistance. To obtain this substance, a biosynthetic pathway that is not active under normal conditions was activated by the addition of specific chemicals, cultivation in a special medium, and in the presence of Pseudomonas bacteria. The scientists were able to isolate and characterize the metabolite they called maximiscin. Spectroscopic techniques revealed that maximiscin has a rather unusual structure, having been produced through a combination of diverse biosynthetic pathways unique to this fungus. The researchers point out the essential roles that citizen scientists can play. “Many of the groundbreaking discoveries, theories, and applied research during the last two centuries were made by scientists operating from their own homes. Although much has changed, the idea that citizen scientists can still participate in research is a powerful means for reinvigorating the public’s interest in science and making important discoveries,” says Cichewicz. About the Author: Robert H. Cichewicz, University of Oklahoma, Norman (USA), http://chem.ou.edu/robert-h-cichewicz. Title: Crowdsourcing Natural Products Discovery to Access Uncharted Dimensions of Fungal Metabolite Diversity, Angewandte Chemie International Edition. Permalink to the article: http://dx.doi.org/10.1002/anie.201306549
<urn:uuid:486f895c-d420-4416-8b9f-d73a191c9b01>
3.015625
1,374
Content Listing
Science & Tech.
31.600187
95,617,636
Live-cell microscopy reveals cell migration by direct forces The images show the orientation of integrins (yellow and green lines) at the leading edge (red lines) of a migrating white blood cell and within the focal adhesions (magenta lines) of a migrating fibroblast, analyzed using the instantaneous fluorescence polarization microscope developed at MBL. Circular histograms show that integrins are consistently oriented relative to the direction of migration in both cell types. How do cells move in a certain direction in the body -- go to a wound site and repair it, for example, or hunt down infectious bacteria and kill it? Two new studies from the Marine Biological Laboratory (MBL) show how cells respond to internal forces when they orient, gain traction, and migrate in a specific direction. The research, which began as a student project in the MBL Physiology Course and was developed in the MBL Whitman Center, is published in Proceedings of the National Academy of Sciences (PNAS) and this week in Nature Communications. Both papers focus on the activation of integrins, proteins that allow cells to attach to their external environment and respond to signals coming from other cells. Integrins are transmembrane proteins: part lies on the cell surface and part lies inside the cell. Using a microscope invented at the MBL, the authors showed that when integrins unfurl from the cell surface and bind extracellularly, they simultaneously align in the same direction as a force inside the cell (actin retrograde flow). "If you think of a cell as a car, the actin flow is the engine," says Clare Waterman, a Whitman Center Scientist from the National Heart, Lung and Blood Institute. "The cell can sit there, idling its engine. But when the integrins activate and bind externally, they are like the tires hitting the road, providing friction. The engine goes into gear and the car moves." Timothy Springer of Harvard University, who co-discovered the integrin family of proteins in the 1980s and has largely defined their mechanism of activation, and Satyajit Mayor of The National Centre for Biological Sciences, Bangalore, were principal collaborators with Waterman on the project. The team used a fluorescence polarized light microscope developed by MBL Associate Scientist Tomomi Tani and former Staff Scientist Shalin Mehta (now at Chan Zuckerberg Biohub) to measure -- in real time and with high precision -- the orientation of the integrins on the cell surface. "It's quite remarkable that you can do that with a microscope," Springer says. "I don't know of any other examples where people have actually measured the orientation of a cell surface molecule." There are 24 different types of integrins found on human cells. The PNAS paper studies an integrin on fibroblast cells while the Nature Communications paper analyzes an integrin on white blood cells. "The two integrins we worked on were about as structurally different as you can get in the integrin family," says Springer, yet both types, when activated, oriented in a direction dictated by intracellular actin flow. "This is really beautiful basic research," Springer says. "While we knew a lot about highly purified integrins in solution, this research gives us specific information about their activation state in living cells." Waterman was co-directing the MBL Physiology Course when she initiated this research with a group of students, including Vinay Swaminathan and Pontus Nordenfelt. After the course ended, the team added members, including Joseph Mathew Kalappurakkal and Travis I. Moore, and continued to collaborate in the MBL Whitman Center with support from a Lillie Research Innovation Award from the University of Chicago and the MBL. "The MBL is known for its ability to convene scientific teams with deep interdisciplinary expertise through the communication that flows between its advanced courses, its resident scientists, and the Whitman Center," says David Mark Welch, MBL Director of Research. "In this case, insightful scientists with very different skills -- cell biologists, microscope developers, computational scientists, molecular modelers, protein chemists -- synergized to reveal a fundamentally important driver of cellular migration." Vinay Swaminathan, Joseph Mathew Kalappurakkal, Shalin B. Mehta, Pontus Nordenfelt, Travis I. Moore, Nobuyasu Koga, David A. Baker, Rudolf Oldenbourg, Tomomi Tani, Satyajit Mayor, Timothy A. Springer, and Clare M. Waterman; "Actin retrograde flow actively aligns and orients ligand-engaged integrins in focal adhesions"; PNAS; 2017. Pontus Nordenfelt, Travis I. Moore, Shalin B. Mehta, Joseph Mathew Kalappurakkal, Vinay Swaminathan, Nobuyasu Koga, Talley J. Lambert, David Baker, Jennifer C. Waters, Rudolf Oldenbourg, Tomomi Tani, Satyajit Mayor, Clare M. Waterman & Timothy A. Springer; "Direction of actin flow dictates integrin LFA-1 orientation during leukocyte migration"; Nature Comm.; 2017.
<urn:uuid:670404be-6b99-4ec9-9d14-0c6fa31faed8>
2.90625
1,234
News Article
Science & Tech.
27.11381
95,617,649
In a laboratory, researchers created a miniature river delta that replicates flooding patterns seen in natural rivers, resulting in a mathematical model capable of aiding in the prediction of the next catastrophic flood. The results appear in the current issue of Geophysical Research Letters. Slow deposition of sediment within rivers eventually fills channels, forcing water to spill into surrounding areas and find a new, steeper path. The process is called avulsion. The result, with the proper conditions, is catastrophic flooding and permanent relocation of the river channel. The goal of the Penn research was to improve prediction of why and where such flooding will occur and to determine how this avulsion process builds deltas and fans over geologic time. Research was motivated by the Aug. 18, 2008, flooding of the Kosi River fan in northern India, where an artificial embankment was breached and the resulting floodwaters displaced more than a million people. Looking at satellite pictures, scientists from Penn and University of Minnesota Duluth noticed that floodwaters principally filled abandoned channel paths. Meredith Reitz, lead author of the study and a graduate student in the Department of Physics and Astronomy in Penn’s School of Arts and Sciences, conducted a set of four laboratory experiments to study the avulsion process in detail. Reitz injected a mixture of water and sediment into a bathtub-sized tank and documented the formation and avulsion of river channels as they built a meter-sized delta. “Reducing the scale of the system allows us to speed up time,” Reiz said. “We can observe processes in the lab that we could never see in nature.” The laboratory experiments showed flooding patterns that were remarkably similar to the Kosi fan and revealed that flooding and channel relocation followed a repetitive cycle. One major finding was that the formation of a river channel on a delta followed a random path; however, once a network of channels was formed, avulsion consistently returned flow to these same channels, rather than creating new ones. An additional important finding was that the average frequency of flooding was determined by how long it took to fill a channel with sediment. Researchers constructed a mathematical model incorporating these two ideas, which was able to reproduce the statistical behavior of flooding. “Avulsions on river deltas and fans are like earthquakes,” said Douglas Jerolmack, director of the Sediment Dynamics Laboratory in the Department of Earth and Environmental Science at Penn and a co-author of the study. “It is impossible to predict exactly where and when they will occur, but we might be able to predict approximately how often they will occur and which areas are most vulnerable. Just as earthquakes occur along pre-existing faults, flooding occurs along pre-existing channel paths. If you want to know where floodwaters will go, find the old channels.” The authors derived a simple method for estimating the recurrence interval of catastrophic flooding on real deltas. When used in conjunction with satellite images and topographic maps, this work will allow for enhanced flood hazard prediction. Such prediction is needed to protect the hundreds of millions of people who are threatened by flooding on river deltas and alluvial fans. The work could also help in exploration for oil reservoirs, because sandy river channels are an important source of hydrocarbons. 
The study was funded by the National Science Foundation and was conducted by Reitz and Jerolmack at Penn and John Swenson of the University of Minnesota Duluth.
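The article does not reproduce the model's formula, but the channel-filling idea it describes can be sketched in its simplest assumed form, with the recurrence interval taken as channel depth divided by the in-channel deposition rate; the specific relation and the numbers below are illustrative assumptions, not taken from the paper.

# Rough sketch of "time to fill a channel" as an avulsion recurrence estimate.
channel_depth_m = 5.0            # assumed typical channel depth
aggradation_m_per_yr = 0.05      # assumed in-channel sediment deposition rate
recurrence_yr = channel_depth_m / aggradation_m_per_yr
print(f"avulsion expected roughly every {recurrence_yr:.0f} years")   # ~100 years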
<urn:uuid:9576c557-be9d-4876-8949-630dede0d9d4>
4.28125
1,346
Content Listing
Science & Tech.
36.797978
95,617,659
- Research Article - Open Access Hardware Architecture for Pattern Recognition in Gamma-Ray Experiment © Sonia Khatchadourian et al. 2009 Received: 19 March 2009 Accepted: 21 July 2009 Published: 10 September 2009 The HESS project has been running successfully for seven years. In order to take into account the sensitivity increase of the entire project in its second phase, a new trigger scheme is proposed. This trigger is based on a neural system that extracts the interesting features of the incoming images and rejects the background more efficiently than classical solutions. In this article, we present the basic principles of the algorithms as well as their hardware implementation in FPGAs (Field Programmable Gate Arrays). For many years, the study of gamma photons has led scientists to understand more deeply the complex processes that occur in the Universe, for example, remnants of supernova explosions, cosmic-ray interactions with interstellar gas, and so forth. In the 1960s, it finally became possible to develop efficient measuring instruments to detect gamma-ray emissions, thus allowing the theoretical concepts to be validated. Most of these instruments were built in order to identify the direction of gamma rays. Since gamma photons are not deflected by interstellar magnetic fields, it becomes possible to determine the position of the source accurately. In this context, Imaging Atmospheric Cherenkov Telescopes constitute the most sensitive technique for the observation of high-energy gamma rays. Such telescopes provide a large effective collection area and achieve excellent angular and energy resolution for detailed studies of cosmic objects. The technique relies upon Cherenkov light produced by the secondary particles once the gamma ray interacts with the atmosphere at about 10 km of altitude. The result is a shower of secondary particles that may also interact with the atmosphere, producing other particles according to well-known physical rules. By detecting shower particles (electrons, muons, protons), it is then possible to reconstruct the initial event and determine the precise location of a source within the Universe. In order to determine the nature of the shower, it is important to analyze its composition, that is, determine the types of particles that have been produced during the interaction with the atmosphere. This is performed by studying the different images that are collected by the telescopes and that are generally representative of the particle type. For example, gamma-ray showers usually have thin, high-density structures. On the other hand, proton showers are quite broad with low density. The major problem in these experiments is that the number of images to be collected is generally huge and the complete storage of all events is impossible. This is mainly due to the fact that data-storage capacity is limited and that it is impossible to keep track of all incoming images for off-line analysis. In order to circumvent this issue, a trigger system is often used to select the events that are interesting (from a physicist's point of view). This processing must be performed in real time and is very tightly constrained in terms of latency, since it must be compatible with the data acquisition rate of the cameras. The role of such a triggering system is to rapidly decide whether an event is to be recorded for further studies or rejected by the system. The organization of this paper is given as follows: the context of our work is presented in Section 2.
Section 3 describes the algorithms that are envisaged in order to build a new trigger system. Considerations on hardware implementations are then provided in Section 4, and Section 5 describes the results in terms of timing and resource usage. 2. The HESS Project The High-Energy Stereoscopic System (HESS) is a system of imaging Cherenkov telescopes that strives to investigate cosmic gamma rays in the 100 GeV to 100 TeV energy range. It is located in Namibia at an altitude of 1800 m, where the optical quality is excellent. Phase-I of this project went into operation in Summer 2002 and consists of four Large Cherenkov Telescopes (LCT), each with 107 m² of mirror area, in order to provide good stereoscopic viewing of the air showers. The telescopes are arranged on a square of 120 m sides, thus optimizing the collection area. The cameras of the four telescopes serve to capture and record the Cherenkov images of air showers. They have excellent resolution since the pixel size is very small: each camera is equipped with 960 photomultiplier tubes (PMTs) that are treated as pixels. An efficient trigger scheme has also been designed in order to reject background such as the light of the night sky that interferes with measurements. The next sections describe both phases of the project in terms of triggering issues. The trigger system of the HESS Phase-I project is devised in order to make use of the stereoscopic approach: simultaneous observation of interesting images is required in order to store a specific event. This coincidence requirement reduces the rate of background events, that is, events that may be assimilated to night-sky noise. It is composed of two separate levels (L1 and the central trigger). At the first level, a basic threshold is applied to the signals collected by the camera. A trigger occurs if the signals in a minimum number of pixels within a 64-pixel sector of the camera exceed a given photoelectron threshold. This gets rid of isolated pixels and thus eliminates noise. The pixel signals are sampled using 1 GHz Analogue Ring Samplers (ARSs) with a ring buffer depth of 128 cells. Following a camera trigger, the ring buffer is stopped and its content is digitized, summed and written to an FPGA buffer. After read-out, the camera is ready for the next event, and further processing may be performed, including the transmission of data via optical cable to the PC processor farm located in the control building. Since its inception in 2002, the HESS project has kept delivering very significant results. In this very promising context, researchers of the collaboration have decided to improve the initial project by adding a new Very Large Central Telescope (VLCT) in the middle of the four existing ones. This new telescope should increase the sensitivity of the global system as well as improve the resolution for high-energy particles. It is composed of 2048 pixels which represent the energy of the incident event. Considering the new approach, the quantity of data to be collected would drastically increase, and it becomes necessary to build a new trigger system in order to be compatible with the new requirements of the project. One of the most challenging objectives of the HESS project is to detect particles whose energy is below 50 GeV. In this energy range, it is not conceivable to use all telescopes (since the smallest ones cannot trigger), and only the fifth telescope may be used in a monoscopic mode.
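As a rough sketch of the Phase-I sector-coincidence idea described above: the first-level camera trigger asks whether enough pixels in any 64-pixel sector exceed a photoelectron threshold. The actual HESS pixel-multiplicity and photoelectron values are not given in the text, so the two thresholds below are placeholders.

import numpy as np

PIXEL_THRESHOLD_PE = 4.0      # placeholder per-pixel threshold (photoelectrons)
MIN_PIXELS_ABOVE = 3          # placeholder pixel multiplicity

def sector_trigger(camera_signal, sectors):
    # camera_signal: 1-D array of pixel amplitudes; sectors: list of 64-pixel index arrays.
    for sector in sectors:
        if np.count_nonzero(camera_signal[sector] > PIXEL_THRESHOLD_PE) >= MIN_PIXELS_ABOVE:
            return True
    return False

signal = np.random.exponential(0.5, size=960)              # toy night-sky background, 960-pixel camera
sectors = [np.arange(i, i + 64) for i in range(0, 960, 64)]  # contiguous toy sectors
print(sector_trigger(signal, sectors))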
3. The HESS2 L2 Triggering System In order to cope with the new performances of the HESS Phase-II system, an efficient L2 trigger scheme is currently being built. Like all triggers, it aims to provide a decision regarding the interest of a particular event. In this context, two parallel studies have been led in order to identify the best algorithms to implement at that level. The first study relied on the Hillas parameters, which are seen as a classical solution in astrophysics pattern recognition. The second study that has been envisaged is to use pattern-recognition tools such as neural networks associated with an intelligent preprocessing. Both approaches are described in the next sections. 3.1. The First Approach 3.1.1. Hillas Parameters 3.1.2. The Classifier In this first approach, the classifier consists in applying thresholds to the Hillas parameters (or a combination of these parameters) computed on the incoming images in order to distinguish gamma signatures among all collected images. One of the best parameters identified as a discriminator is the Center of Gravity. This parameter represents the center of gravity of all illuminated pixels within the ellipse. In this case, the recognition of particles is performed according to the following rule: (c) otherwise, the event is rejected. The major drawback of such an approach is that the considered thresholds consist of constant values, hence a lack of flexibility. For example, it does not allow the various conditions of the experiment, which may have a significant impact on the shape of the signatures, to be taken into consideration.
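A minimal sketch of what a Hillas-style selection looks like in practice: an amplitude-weighted center of gravity plus the length and width of the second-moment ellipse, followed by fixed cuts. The parameter set and the cut values are placeholders, not the trigger's actual thresholds.

import numpy as np

def hillas_parameters(x, y, amp):
    # x, y: pixel coordinates; amp: pixel amplitudes after image cleaning.
    w = amp / amp.sum()
    cog_x, cog_y = np.sum(w * x), np.sum(w * y)          # amplitude-weighted centre of gravity
    dx, dy = x - cog_x, y - cog_y
    cov = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    eigvals = np.linalg.eigvalsh(cov)                    # ascending: [width^2, length^2]
    width, length = np.sqrt(eigvals)
    return cog_x, cog_y, length, width

def passes_cuts(cog_x, cog_y, length, width,
                max_cog_radius=0.8, max_width_over_length=0.5):   # placeholder cuts
    return np.hypot(cog_x, cog_y) < max_cog_radius and width / length < max_width_over_length

x = np.array([0.0, 0.1, 0.2, 0.3])                        # toy cleaned image
y = np.array([0.0, 0.02, 0.01, 0.04])
amp = np.array([1.0, 3.0, 2.0, 1.0])
print(passes_cuts(*hillas_parameters(x, y, amp)))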
3.2. Intelligent Preprocessing The second studied approach aims to make use of algorithms that have already brought significant results in terms of pattern recognition. Neural networks are good candidates because they are a powerful computational model. On the other hand, their inherent parallelism makes them suitable for a hardware implementation. Although used in different fields of physics, algorithms based on neural networks have successfully been implemented and have already proved their efficiency [5, 6]. Typical applications include particle recognition in tracking systems, event classification problems, off-line reconstruction of events, and online triggers in high-energy physics. From the assumption that neural networks may be useful in such experiments, we have proposed a new Level 2 (L2) trigger system enabling rather complex processing to be implemented on the incoming images. The major issue with neural networks resides in the learning phase, which strives to identify optimal parameters (weights) in order to solve the given problem. This is true when considering supervised learning, in which representative patterns have to be iteratively presented to the network in a first learning phase until the global error has reached a predefined value. One of the most important drawbacks of this type of algorithm is that the number of weights strongly depends on the dimensionality of the problem, which is often unknown in practice. This implies finding the optimal structure of the network (number of neurons, number of layers) in order to solve the problem. Moreover, the curse of dimensionality constitutes another challenge when dealing with neural networks. This problem expresses a correlation between the size of the network and the number of examples to furnish. This relation is exponential, that is, if the network's size becomes significant, the number of training examples may become relatively huge. This cannot be considered in practice. In order to reduce the size of the network, it is possible to simplify its task, that is, to reduce the dimensionality of the problem. In this case, a preprocessing step aims at finding correlations in the data and at applying basic transformations in order to ease the resolution. In this study, we advise using an "intelligent" preprocessing based on the extraction of the intrinsic features of the incoming images. 3.2.1. The Rejection Step The rejection step has two significant roles. First, it aims to remove isolated pixels that are typically due to background. These pixels are eliminated by applying a filtering mask to the entire image in order to keep only the relevant information, that is, clusters of pixels. This consists in testing the neighborhood of each pixel of the image. As the image has a hexagonal mesh grid, a hexagonal neighborhood is used. The direct neighborhood of each pixel of the image is tested; if none of the neighbors are activated, the corresponding central pixel is considered isolated and deactivated. Second, the rejection step permits the elimination of particles that cannot be distinguished by the classifier. Very small images are discarded since they contain poor information that cannot be deciphered. 3.2.2. The Preprocessing Step The envisaged system is based on a preprocessing step whose role consists in applying basic transformations to incoming images in order to isolate the main characteristics of a given image. The most important role of the preprocessing is to guarantee invariance in orientation (rotation and translation) of the incoming images. Since the signature of a particle within the image depends on the impact point of the incident particle, the image may result in a series of pixels located anywhere on the camera. Without a preprocessing stage based on orientation invariance, the 2048 inputs of the classifier would completely differ from one image to another although the basic shape of the particle would remain the same. The retained preprocessing is based on the use of Zernike moments. These moments are mainly considered in shape reconstruction and can easily be made invariant to changes in object orientation. They are defined as a set of orthogonal functions based on complex polynomials originally introduced by Zernike. The Zernike polynomials can be expressed as V_nm(ρ, θ) = R_nm(ρ)·exp(imθ), where n is a nonnegative integer, m is an integer such that n − |m| is even and |m| ≤ n, ρ is the length of the vector from the origin to the point (x, y), with ρ ≤ 1, and θ is the angle between the x axis and the vector extending from the origin to the point (x, y). The Zernike moment of order n with repetition m is obtained by projecting the image onto these polynomials, Z_nm = ((n + 1)/π) Σ_x Σ_y f(x, y) V*_nm(ρ, θ), where f(x, y) refers to the pixel value at coordinates (x, y). The rotation invariance property of Zernike moments is due to the intrinsic nature of such moments: a rotation of the image only changes the phase of Z_nm, not its modulus. In order to guarantee translation invariance as well, it is necessary to align the center of the object with the center of the unit circle. This may be performed by replacing the coordinates x and y of each processing point by x − x̄ and y − ȳ, where x̄ and ȳ refer to the center of the signature and may be obtained from the low-order geometric moments of the image, x̄ = m10/m00 and ȳ = m01/m00. In the context of our application, it has been found that considering the Zernike moments up to order 8 was sufficient to obtain the best performances. This implies computing 25 polynomials for each of the pixels within an image.
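For illustration, a plain NumPy sketch of the textbook Zernike-moment computation on a square pixel grid is given below; it is not the paper's hexagonal-grid, accumulation-based implementation, and normalization conventions vary between references. For max_order = 8 it returns exactly 25 moduli, matching the 25 values mentioned above.

import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    # Standard Zernike radial polynomial R_nm(rho).
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        out += c * rho ** (n - 2 * s)
    return out

def zernike_moments(img, max_order=8):
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total   # centre the object (translation invariance)
    xn, yn = (x - cx) / (w / 2), (y - cy) / (h / 2)              # scale into the unit disc
    rho, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
    inside = rho <= 1.0
    moments = {}
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:
                continue
            V = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
            Z = (n + 1) / np.pi * np.sum(img[inside] * V[inside])
            moments[(n, m)] = abs(Z)          # the modulus is rotation invariant
    return moments

print(len(zernike_moments(np.random.rand(32, 32))))   # 25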
3.2.3. The Neural Classifier

According to the nature of the network, the value of an output neuron may be computed according to (6):

y_k = f( sum_{j=1..Nh} w2_kj * f( sum_{i=1..Ni} w1_ji * x_i ) ).      (6)

In (6), y_k represents the output; w2 and w1, respectively, represent the weights connecting the output layer to the hidden layer and the weights connecting the hidden layer to the input nodes. Nh is the number of neurons in the hidden layer and Ni is the number of inputs. x_i denotes the value of an input node and f is an activation function. In our case, a bounded nonlinear function taking values between -1 and 1 (of the hyperbolic-tangent type) has been used.

In the considered application, the output layer is composed of three neurons (whose values range between -1 and 1) corresponding to the type of particle to identify. The outputs refer to gamma, proton, and muon particles, respectively. If the value of an output neuron is positive, it may be assumed that the corresponding particle has been identified by the network. In the case where more than one output neuron is activated, the maximum value is taken into account.

The learning phase has been performed off-line on a set of 4500 patterns computed on simulated images, since the HESS2 telescope is not yet installed. The simulated images are generated by means of a series of Monte Carlo simulations. These patterns covered all ranges of energies and types of particles, with 1500 patterns considered for each class of particles. A previous study had assessed the reliability of the patterns in order to retain the most representative patterns that may be collected by the telescope. A classical backpropagation algorithm has been programmed off-line in order to obtain the optimal values of the weights. The training has been performed with two sets of patterns (a learning set and a testing set); once the error on the testing set reached its minimum, the training was stopped, ensuring that the weights had an optimal value. The size of the input layer was determined according to the type of preprocessing that was envisaged: in the case of the Zernike preprocessing, this number has been set to 25, since it corresponds to the number of outputs furnished by the preprocessing step. The number of nodes in the hidden layer has been evaluated from the results obtained on a specific validation set of patterns. This precaution ensures that the neural network is able to generalize to new data (i.e., that it has not simply memorized the training patterns).
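A minimal software model of the classifier just described - 25 Zernike inputs, one hidden layer, three bounded outputs for gamma, proton, and muon - is sketched below. The use of tanh as the bounded activation, the hidden-layer size of 7, and the random weights are assumptions made for illustration; in the real system the weights come from the off-line backpropagation training described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 25, 7, 3                        # assumed layer sizes
W1 = rng.normal(scale=0.3, size=(N_HID, N_IN))       # hidden-layer weights (trained off-line)
W2 = rng.normal(scale=0.3, size=(N_OUT, N_HID))      # output-layer weights

def classify(features):
    """Forward pass of the L2 neural classifier, as in (6): three scores in [-1, 1];
    the largest output decides the class when several outputs are positive."""
    hidden = np.tanh(W1 @ features)                  # bounded activation
    outputs = np.tanh(W2 @ hidden)
    labels = ("gamma", "proton", "muon")
    return labels[int(np.argmax(outputs))], outputs

label, scores = classify(rng.uniform(-1.0, 1.0, size=N_IN))
print(label, np.round(scores, 2))
```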
3.3. Simulated Performances

Table 1 compares the performances obtained with both approaches. According to Table 1, the neural solution provides a significant improvement over the classical method in terms of classification. This improvement stems from the fact that a larger dimensionality of the problem has been taken into account: whereas the Hillas processing takes only five parameters into consideration, the number of inputs in the case of the neural preprocessing is 25. Moreover, as the Hillas approach only consists of applying hard "cuts" on predefined parameters, the neural approach is more flexible and provides nonlinear decision boundaries. It may be assumed that the considered neural network is capable of extracting the relevant information and of discriminating efficiently between all images.

The major drawback of the neural approach is its relative complexity in terms of computation and hardware implementation. Although the Hillas algorithms may be implemented in software, it is impossible to implement both the neural network and the preprocessing step in the same manner. In this context, dedicated circuits have to be designed in order to comply with the strong timing constraints imposed by the entire system: in our case, an L2 decision has to be taken at a rate of 3.5 kHz, which corresponds to a timing constraint of 285 microseconds.

4. Hardware Implementation

The complete L2 trigger system is currently being built, making intensive use of reconfigurable technology. Components such as FPGAs constitute an attractive alternative to classical circuits such as ASICs (Application Specific Integrated Circuits). This type of reconfigurable circuit is becoming increasingly efficient in terms of speed and logic resources and is increasingly considered for deeply constrained applications.

4.1. Hardware Implementation of Zernike Moments

Although very efficient, Zernike moments are known for their computational complexity. Many solutions have been proposed for the fast implementation of Zernike moments; some algorithms are based on recursion, on the reuse of previously computed terms, or on moment generators. Since using a moment generator allows a reduction of the number of operations, we have decided to follow this approach, that is, to compute the Zernike moments from accumulation moments.

4.1.1. Zernike Moments via Accumulation Moments

The mechanism of a moment generator can be summarized by expressing the geometric moments, taken with respect to a reference point, as linear combinations of the accumulation moments (8); according to (8), geometric moments may therefore be obtained directly from accumulation moments. In the context of our application, the derivation leading to (23) shows how to calculate the Zernike moments from the geometric moments, and thus from the accumulation moments. Starting from the definition of the Zernike moments, the polynomials are expanded by means of the binomial theorem, (a + b)^n = sum_{k=0..n} C(n, k) a^k b^(n-k), so that each Zernike moment becomes a weighted sum of geometric moments (13).

Since the coordinates of the pixels in the image are expressed as real numbers, these coordinates must be re-expressed with integer indices in order to formulate the Zernike moments as a function of the geometric moments. As can be seen in Figure 7, the even rows of the hexagonal grid have to be distinguished from the odd rows; the horizontal coordinate is therefore expressed in two different ways according to the type of row (even or odd). The integer indices run over the number of columns and the number of rows, the grid spacings give the distances between two adjacent columns and rows, and the origin of the image is moved to the upper left corner. In the resulting equations, only the part of the expressions that depends on the row type needs to be developed; the final relation (23) shows that the Zernike moments can be computed from the geometric moments.

If we consider two accumulation grids, the first computes the accumulation moments on the odd lines of the image and the second on the even lines. Since the computation is divided into two parts, the image is arranged in two components: the odd component and the even one. Therefore, by analogy with (8), the Zernike moments can be expressed as a function of the accumulation moments computed from the odd component of the image and of those computed from the even component (24). By reinjecting (24) into (23), the Zernike moments are reformulated accordingly (25).

We have thus developed an algorithm enabling the computation of Zernike moments with a moment generator based on accumulation moments. A first advantage of this algorithm is that, thanks to the second accumulation grid, it can be applied to images with particular topologies whose mesh is regular or semiregular. A second advantage is its simplicity of implementation, on an FPGA for instance: the core of the algorithm relies on the accumulation moments, which are easily computed with a simple accumulation grid.
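To make the moment-generator idea concrete, the sketch below shows, for a single row of pixels, how three cascaded accumulators - each of which only ever adds its input, so its hardware cost is a single adder - deliver values from which the geometric moments m0, m1, and m2 are recovered as fixed linear combinations, the one-dimensional analogue of relation (8). The combination coefficients shown are derived for this illustration only and are not taken from the paper.

```python
import numpy as np

def accumulation_moments(row):
    """Final outputs of three cascaded accumulators scanning one row of pixels."""
    s1 = np.cumsum(row)          # first accumulator: running sum of the pixels
    s2 = np.cumsum(s1)           # second accumulator: running sum of the running sums
    s3 = np.cumsum(s2)           # third accumulator
    return s1[-1], s2[-1], s3[-1]

def geometric_moments_from_accumulation(row):
    """Geometric moments m0, m1, m2 as fixed linear combinations of the accumulation
    moments; the coefficients depend only on the row length n."""
    n = len(row)
    a1, a2, a3 = accumulation_moments(row)
    m0 = a1
    m1 = n * a1 - a2
    m2 = n * n * a1 - (2 * n + 1) * a2 + 2 * a3
    return m0, m1, m2

row = np.array([3.0, 0.0, 5.0, 1.0, 2.0])
x = np.arange(len(row))
print(geometric_moments_from_accumulation(row))           # (11.0, 21.0, 61.0)
print(row.sum(), (x * row).sum(), (x ** 2 * row).sum())   # 11.0 21.0 61.0, computed directly
```

In the two-dimensional case the same accumulators are cascaded along rows and columns, and the odd and even rows of the hexagonal grid are handled by the two separate accumulation grids introduced above.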
4.1.2. Architecture Description

To make the exploitation of (25) easier, the terms are reordered so that each Zernike moment is obtained as a weighted sum of accumulation moments. The registers used between the column accumulators are synchronized at the end of each row, so their clock enable depends on the image topology. In our case, the corners have been filled with zeros before dividing the image, so that both image components have the same size; the accumulation moments are then computed in a fixed number of clock cycles from the moment when the first pixel arrives in the accumulation grid. A Zernike computation block computes the modules of the Zernike moments from the accumulation moments provided by the grids and from a module that furnishes the coefficients (see Figure 8). This block consists of summing the different weighted terms and of computing the module of each moment. In order to reduce the amount of logic resources required, the computation of the square root is simplified by an approximation that is often used in image processing and does not significantly affect the final results.
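The exact square-root approximation used in the Zernike computation block is not reproduced above. A common choice in hardware image processing - assumed here purely as an illustration - is the "alpha max plus beta min" estimate of the modulus sqrt(a^2 + b^2), which needs only comparisons, a shift, and an add.

```python
import math

def magnitude_approx(re, im):
    """Approximate sqrt(re^2 + im^2) as max + min/2; with these coefficients the
    estimate never errs by more than about 12%, which is usually acceptable for a
    trigger-level feature while avoiding a hardware square root."""
    a, b = abs(re), abs(im)
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + 0.5 * lo                      # the factor 1/2 is a single shift in hardware

# Quick comparison against the exact modulus.
for re, im in [(3.0, 4.0), (1.0, 1.0), (-7.0, 2.0)]:
    exact, approx = math.hypot(re, im), magnitude_approx(re, im)
    print(f"{exact:6.3f}  {approx:6.3f}  {abs(approx - exact) / exact:5.1%}")
```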
4.1.3. FPGA Implementation of Zernike Moment Computation

In order to compute the Zernike moments from the accumulation moments, we have proposed an original architecture, presented in Figure 10. This architecture is very regular, which simplifies the implementation on an FPGA target. Furthermore, the hardware required is simple to design for both the moment accumulation and the moment computation: the computations are based on a multiplier and an adder, which together constitute the MAC (Multiply-ACcumulate) operator widely available in current FPGA devices. In order to improve performance, MAC operators are integrated in some FPGA devices as hardwired components, such as the DSP48 blocks in the Xilinx Virtex4. Two implementation approaches are possible, in which either hardware or time optimization is favored.

4.2. Hardware Implementation of the Neural Network

The parallel nature of neural networks makes them very suitable for hardware implementation, and several studies have already allowed complex configurations to be implemented in reconfigurable circuits [15, 16]. In the example considered here, the neural architecture implements a 5-input MLP with 7 hidden nodes and 3 outputs; these parameters are easily modifiable since the proposed circuit is scalable. Input data are accepted sequentially and applied to the series of multipliers: at each clock cycle a new input value arrives, the values of the current input set being immediately followed by those of the next set. At each clock cycle, at any particular level of adder, apart from the addition operation between the multiplier output and the sum from the previous level, the multiplication operation of the next set of inputs at the adjacent multiplier is also performed simultaneously. The sum thus ripples and accumulates through the central adders (48 bits) until it is fed to a barrel shifter that translates the data into a 16-bit address. The obtained sum addresses a sigmoid block memory (SIGMOID0) containing 65536 values of 18 bits. This block feeds the outputs of the hidden layer sequentially to three MAC units for the output-layer calculation. Finally, a multiplexer distributes serially the results of the output layer to another sigmoid block memory (SIGMOID1).

After a study of the data representation, it has been decided to code the incoming data on 18 bits. Weights are stored in ROMs (Read-Only Memories) containing 256 words of 18 bits. The control of the entire circuit is performed by a simple state machine that organizes the sequence of computations and the memory management. The number of multipliers required for the network depends only on the number of inputs and on the number of outputs; considering that the number of hidden nodes may be large compared to the number of inputs and outputs, the adopted solution does not increase the number of multipliers, which is a significant advantage. In this context, it is also important to note that the design is easily scalable to accommodate more hidden, input, or output nodes. For example, adding a hidden node does not impact the number of resources but requires an additional cycle of computation. Adding an input may be accommodated by the addition of another ROM, multiplier, and adder set to the series of adders at the centre (part HL of the figure). Moreover, the addition of an output node can be fulfilled by adding another ROM, MAC unit, and sigmoid block to the part OL of the figure. Another advantage of the architecture is that a single activation function (sigmoid block) is required to compute the complete hidden layer; this block consists of a Look-up Table (LUT) that stores 65536 values of the function. In general, the time required to obtain the outputs after the arrival of the first input is fixed and depends only on the number of inputs and on the number of hidden units, and in every cycle a fixed number of multiplications, determined by the number of output units, is performed.

The complete architecture (preprocessing + neural network) has been implemented in a Xilinx Virtex4 (xc4vlx100) FPGA, which is the part that has been retained for the trigger implementation. This type of reconfigurable circuit provides many dedicated resources, such as memory blocks and DSP blocks, that allow a MAC to be computed very efficiently.
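The activation memories described above store 65536 precomputed values of the activation function as 18-bit words, addressed by the 16-bit value produced by the barrel shifter. The sketch below builds such a table in software; the choice of tanh as the stored function, the input range covered by the address, and the fixed-point scaling are assumptions made for the illustration, not values taken from the actual design.

```python
import numpy as np

ADDR_BITS, WORD_BITS = 16, 18
N_ENTRIES = 1 << ADDR_BITS                    # 65536 table entries
SCALE = (1 << (WORD_BITS - 1)) - 1            # map [-1, 1] onto signed 18-bit integers
IN_MIN, IN_MAX = -8.0, 8.0                    # assumed input range seen by the activation

# Precompute the table: address -> quantized activation value.
x = np.linspace(IN_MIN, IN_MAX, N_ENTRIES)
table = np.round(np.tanh(x) * SCALE).astype(np.int32)

def activation(value):
    """Evaluate the activation the way the SIGMOID block would: clamp the input,
    turn it into a 16-bit address, and read back one 18-bit word."""
    value = min(max(value, IN_MIN), IN_MAX)
    addr = int(round((value - IN_MIN) / (IN_MAX - IN_MIN) * (N_ENTRIES - 1)))
    return table[addr] / SCALE                # back to a real number in [-1, 1]

print(round(activation(2.0), 5), round(float(np.tanh(2.0)), 5))   # nearly identical
```

Storing the full table is what drives the memory-block usage discussed in the next section; the shift-and-add activation mentioned there trades this memory for a small amount of extra logic.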
5.1. Resources Requirements

Table 2 summarizes the occupied resources in terms of the number of used logic slices, used DSP blocks, and used memory bits; the memory usage amounts to 2829/4320 (65.5%). Concerning the hardware implementation of the neural network, it is important to notice that, independently of the configuration, the amount of used logic is very low. Nevertheless, an important share of the memory blocks is devoted to storing the values of the sigmoid functions. This issue may be circumvented in cases where hardware resources become a limitation: a modified activation function could be used instead. Such a function has a shape quite similar to the sigmoid, is very easy to implement in hardware with just a number of shifts and adds, and can be executed with only a small approximation error.

According to Table 3, it is clear that the entire system fits in an FPGA without consuming too much logic (4%). Moreover, the complete architecture has been devised to take full benefit of the intrinsic dedicated resources of the FPGA, that is, the DSP and memory blocks. It is important to note that most of the computation time is monopolized by the computation of the Zernike moments from the accumulation moments. This is mainly due to the fact that the number of accumulations to perform is huge (104663 accumulations for an order-8 decomposition) and that these computations are performed iteratively. Even though we have decided to parallelize the architecture in five stages, the number of iterations remains high, and work is currently under way to optimize the computations in this block for further improvements. The maximum clock frequency has been estimated at 120 MHz, and at 366 MHz for the DSP blocks.

In this article, we have presented an original solution that may be seen as an intelligent way of triggering data in the HESS Phase-II experiment. The system relies on the use of image processing algorithms in order to increase the trigger efficiency. The hardware implementation has represented a challenge because of the relatively strong timing constraint (285 microseconds to process all algorithms); this problem has been circumvented by taking advantage of the nature of the algorithms. All these concepts are implemented making intensive use of FPGA circuits, which are interesting for several reasons. First, the current advances in reconfigurable technology make FPGAs an attractive alternative compared to very powerful circuits such as ASICs. Moreover, their relatively small cost makes it possible to rapidly implement a prototype design without major development constraints. The reconfigurability also constitutes a major asset: it allows the whole system to be configured according to the application needs, enabling flexibility and adaptivity. For example, in the context of the HESS project, it is conceivable to reconfigure the chip according to the surrounding noise or to deal with specific experimental conditions.

- Hinton JA: The status of the HESS project. New Astronomy Reviews 2004, 48(5-6):331-337. doi:10.1016/j.newar.2003.12.004
- Funk S, Hermann G, Hinton J, et al.: The trigger system of the HESS telescope array. Astroparticle Physics 2004, 22(3-4):285-296. doi:10.1016/j.astropartphys.2004.08.001
- Delagnes E, Degerli Y, Goret P, Nayman P, Toussenel F, Vincent P: SAM: a new GHz sampling ASIC for the HESS-II front-end electronics. Nuclear Instruments and Methods in Physics Research 2006, 567(1):21-26. doi:10.1016/j.nima.2006.05.052
- Hillas AM: Cerenkov Light Images of EAS Produced by Primary Gamma Rays and by Nuclei. Proceedings of the 19th International Cosmic Ray Conference (ICRC '85), August 1985, San Diego, Calif, USA
- Denby B: Neural networks in high energy physics: a ten year perspective. Computer Physics Communications 1999, 119(2):219-231. doi:10.1016/S0010-4655(98)00199-4
- Kiesling C, Denby B, Fent J, et al.: The H1 neural network trigger project. Advanced Computing and Analysis Techniques in Physics Research 2001, 583: 36-44.
- Bishop CM: Neural Networks for Pattern Recognition. Oxford University Press, Oxford, UK; 1995.
- Teague MR: Image analysis via the general theory of moments.
Journal of the Optical Society of America 1979, 70: 920-930.
- Zernike F: Beugungstheorie des Schneidenverfahrens und seiner verbesserten Form, der Phasenkontrastmethode. Physica 1934, 1: 689-704. doi:10.1016/S0031-8914(34)80259-5
- Khatchadourian S, Prévotet J-C, Kessal L: A neural solution for the level 2 trigger in gamma ray astronomy. In Proceedings of the 11th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT '07), April 2007, Amsterdam, The Netherlands, Proceedings of Science. Nikhef.
- Belkasim SO, Ahmadi M, Shridhar M: Efficient algorithm for fast computation of Zernike moments. Journal of the Franklin Institute 1996, 333: 577-581. doi:10.1016/0016-0032(96)00017-8
- Kintner EC: On the mathematical properties of the Zernike polynomials. Journal of Modern Optics 1976, 23: 679-680.
- Kotoulas L, Andreadis I: Real-time computation of Zernike moments. IEEE Transactions on Circuits and Systems for Video Technology 2005, 15(6):801-809.
- Hatamian M: A real-time two-dimensional moment generating algorithm and its single chip implementation. IEEE Transactions on Acoustics, Speech, and Signal Processing 1986, 34: 546-553. doi:10.1109/TASSP.1986.1164853
- Prévotet J-C, Denby B, Garda P, Granado B, Kiesling C: Moving NN triggers to level-1 at LHC rates. Nuclear Instruments and Methods in Physics Research A 2003, 502(2-3):511-512. doi:10.1016/S0168-9002(03)00484-4
- Omondi AR, Rajapakse JC: FPGA Implementations of Neural Networks. Springer; 2006.
- Skrbek M: Fast neural network implementation. Neural Network World 1999, 9: 375-391.

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
<urn:uuid:42d7a1f4-1651-42a9-8c90-ba90287fb221>
3.375
6,920
Academic Writing
Science & Tech.
38.468152
95,617,674
New geological evidence indicates the Grand Canyon may be so old that dinosaurs once lumbered along its rim, according to a study by researchers from the University of Colorado at Boulder and the California Institute of Technology. The team used a technique known as radiometric dating to show the Grand Canyon may have formed more than 55 million years ago, pushing back its assumed origins by 40 million to 50 million years. The researchers gathered evidence from rocks in the canyon and on surrounding plateaus that were deposited near sea level several hundred million years ago before the region uplifted and eroded to form the canyon. A paper on the subject will be published in the May issue of the Geological Society of America Bulletin. CU-Boulder geological sciences Assistant Professor Rebecca Flowers, lead author and a former Caltech postdoctoral researcher, collaborated with Caltech geology Professor Brian Wernicke and Caltech geochemistry Professor Kenneth Farley on the study. "As rocks moved to the surface in the Grand Canyon region, they cooled off," said Flowers. "The cooling history of the rocks allowed us to reconstruct the ancient topography, telling us the Grand Canyon has an older prehistory than many had thought." The team believes an ancestral Grand Canyon developed in its eastern section about 55 million years ago, later linking with other segments that had evolved separately. "It's a complicated picture because different segments of the canyon appear to have evolved at different times and subsequently were integrated," Flowers said. The ancient sandstone in the canyon walls contains grains of a phosphate mineral known as apatite -- hosting trace amounts of the radioactive elements uranium and thorium -- which expel helium atoms as they decay, she said. An abundance of the three elements, paired with temperature information from Earth's interior, provided the team a clock of sorts to calculate when the apatite grains were embedded in rock a mile deep -- the approximate depth of the canyon today -- and when they cooled as they neared Earth's surface as a result of erosion. Apatite samples from the bottom of the Upper Granite Gorge region of the Grand Canyon yield similar dates as samples collected on the nearby plateau, said Caltech's Wernicke. "Because both canyon and plateau samples resided at nearly the same depth beneath the Earth's surface 55 million years ago, a canyon of about the same dimensions of today may have existed at least that far back, and possibly as far back as the time of dinosaurs at the end of the Cretaceous period 65 million years ago." One of the most surprising results from the study is the evidence showing the adjacent plateaus around the Grand Canyon may have eroded away as swiftly as the Grand Canyon itself, each dropping a mile or more, said Flowers. Small streams on the plateaus appear to have been just as effective at stripping away rock as the ancient Colorado River was at carving the massive canyon. "If you stand on the rim of the Grand Canyon today, the bottom of the ancestral canyon would have sat over your head, incised into rocks that have since been eroded away," said Flowers. The ancestral Colorado River was likely running in the opposite direction millions of years ago, she said. When the canyon was formed, it probably looked like a much deeper version of present-day Zion Canyon, which cuts through strata of the Mesozoic era dating from about 250 million to 65 million years ago, Wernicke said. 
From 28 million to 15 million years ago, a pulse of erosion deepened the already-formed canyon and also scoured surrounding plateaus, stripping off the Mesozoic strata to reveal the Paleozoic rocks visible today, he said. The prevailing belief is that the canyon was incised by an ancient river about six million years ago as the surrounding plateau began rising from sea level to the current elevation of about 7,000 feet. The new scenario described in the GSA Bulletin by Flowers and her colleagues is consistent with recent evidence by other geologists using radiometric dating techniques indicating the Grand Canyon is significantly older than scientists had long believed. Rebecca Flowers | EurekAlert!
<urn:uuid:adf0d417-6a84-48e0-95b8-6aadc6d3917e>
4.21875
1,397
Content Listing
Science & Tech.
38.370228
95,617,699
News Jan 27, 2017 | Original Story By Bob Marcotte of the University of Rochford For the past several years, Jessica Cantlon has been working to understand how humans develop the concept of numbers, from simple counting to complex mathematical reasoning. Early in her career at the University of Rochester, the assistant professor of brain and cognitive sciences began studying primates in her search for the origins of numeric understanding. In 2013, she, PhD candidate Steve Ferrigno, and colleagues at Rochester and the Seneca Park Zoo made a surprising discovery: in an experiment using varying quantities of peanuts, baboons (even as young as one year of age) clearly showed an ability to distinguish between large and small quantities of objects. But the finding raised another question. To what extent might that ability be influenced by other dimensions of those objects—such as their relative surface area—in addition to their number? This month Cantlon, Ferrigno, and two additional coauthors—Steven Piantadosi, an assistant professor of brain and cognitive sciences at Rochester, and Julian Jara-Ettinger, a postdoctoral researcher in brain and cognitive sciences at MIT—are publishing the results of a new study suggesting that primates do, in fact, have the ability to distinguish large and small quantities of objects, irrespective of the surface area they appear to occupy. Study subjects included rhesus monkeys, young children and adults in the United States, and adult members of the Tsimane', a predominately “low numeracy” cultural group that inhabits an area of remote rain forest in Bolivia. Study subjects included rhesus monkeys, young children and adults in the United States, and adult members of the Tsimane’, a predominately “low numeracy” cultural group that inhabits an area of remote rain forest in Bolivia. Study subjects included both humans and primates: adults and children in the United States; adults of the Tsimane’, a predominately “low numeracy” cultural group that inhabits an area of remote rainforest in Bolivia, and that has been long studied by Piantadosi and Jara-Ettinger; and rhesus monkeys, a species with strong neural and cognitive similarities to humans. The researchers found that all groups showed a bias toward numbers over surface area in their estimations. “This shows that the spontaneous aspect of extracting numerical information likely has an evolutionary basis, because this has been seen across all humans and also with other primate species,” said Ferrigno. The study also showed that the bias toward the numerical dimension was strongest in humans compared to primates, and was correlated with increasing age and math education in humans. “As children get older, they are more likely to represent numerical information as opposed to other quantitative information,” Ferrigno added. “Similarly, when Tsimane’ adults had more math education, they were more likely to represent numbers as opposed to other dimensions.” The study, published in Nature Communications, is an exciting development for anyone interested in improving early math education. Because the testing process was nonverbal, it could be especially useful in assessing math abilities in young children. “It’s very hard to test young children at age four on their math abilities because it’s hard to differentiate what they know, and what they know, but just can’t express,” Ferrigno said. 
“With further refinements, this type of numerical bias test could in the future be an indicator of how they are progressing in their education.” The study is the first to compare number perception with a single task performed across a diverse testing population. This illustration from the study shows the images subjects were presented with at top. Each sample they viewed, for example, was a dot array, followed by two icons for categorizing the array as little (star) or a lot (diamond). At bottom is a sampling of dot arrays that were used, varying in number of dots and the percentage of surface area they occupied. To test the relative importance of numerical quantities versus surface area, researchers presented subjects with dot arrays, varying in both the number of dots and the relative surface area they occupied. For each array the subjects then selected one of two icons to categorize the array as a large or small quantity. To keep the task the same across groups, no verbal description of the categories was provided; instead, subjects learned from nonverbal demonstration by the experimenters, and trial and error feedback. The tests with primates and children and adults in the United States were conducted with touch screen monitors; Tsimane’ adults, who have limited exposure to such devices, were tested with laminated printouts. Cantlon says the study shows “that the initial step toward becoming mathematically sophisticated likely had to do with focusing in on the number of objects, not just total mass or size.” In a broader sense, she adds, it shows “how humans got to be the way they are. “This is about understanding human origins and how humans evolved thought processes that are mathematically sophisticated.” This article has been republished from materials provided by University of Rochester. Note: material may have been edited for length and content. For further information, please contact the cited source. Ferrigno, S., Jara-Ettinger, J., Piantadosi, S.T. and Cantlon, J.F. (2017) ‘Universal and uniquely human factors in spontaneous number perception’, Nature Communications, 8, p. 13968. doi: 10.1038/ncomms13968. What Makes Good Brain Proteins Turn Bad?News The protein FUS is implicated in two neurodegenerative diseases: amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD). Using a newly developed fruit fly model, researchers have zoomed in on the protein structure of FUS to gain more insight into how it causes neuronal toxicity and disease. Researchers are One Step Closer to Developing Eye Drops to Treat Age-Related Macular DegenerationNews Scientists at the University of Birmingham are one step closer to developing an eye drop that could revolutionise treatment for age-related macular degeneration (AMD).READ MORE A Bad Mood May Help Your Brain With Everyday TasksNews New research found that being in a bad mood can help some people’s executive functioning, such as their ability to focus attention, manage time and prioritize tasks.READ MORE
<urn:uuid:ac0fb657-1097-40f0-b5ce-62ab3e0bfd54>
3.5
1,364
News Article
Science & Tech.
27.801033
95,617,701
Mathematical models of seasonally migrating populations MetadataShow full item record This item's downloads: 220 (view details) The phenomenon of seasonal migration has attracted a wealth of attention from biologists. However, the dynamics of migratory populations have been little considered. In this thesis, we use differential equations to model the variation in abundance of seasonally migrating populations. Our contribution to the field begins with a representation of seasonal breeding. We use piecewise-smooth differential equations to model the variation in the size of a population that has a short interval each year during which successful reproduction is possible. We first consider a one-species model which illustrates the dynamics of a population of specialist feeders over the course of a single breeding season and use it to examine how reproductive success depends on the population's distribution of breeding dates. We then introduce time-dependent switches to extend the model to a broader class of species. This allows us to consider the effect of climate change on populations that annually travel long distances. We then shift focus to consider interactions between migrants and species at higher levels in the food web. Predatory pressure influences almost all populations to some extent. Here, however, interactions may occur for just a brief period each year before the populations involved become spatially separated. The range of a migrating population may overlap with that of a population of predators for a single season. We outline a framework for examining how this kind of "transient" predation influences the dynamics of the prey population. We are then able to examine how a migratory population may be overwhelmed by the fleeting influence of members of other species. Finally, as an alternative to the aforementioned models, we outline a different approach to modelling migration, namely using partial differential equations instead of ordinary differential equations. In this way, we provide two distinct templates for the future exploration of the dynamical features of such populations. This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland. No item may be reproduced for commercial purposes. Please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply. The following license files are associated with this item:
<urn:uuid:c51e0cc7-09a3-410c-93d9-2c6f93b6cbb3>
3
443
Truncated
Science & Tech.
20.759752
95,617,711
Visualization of Flow Past Inclined Bluff Cylinders Most of the engineering structures like buildings normally have either square, rectangular, trapezoidal or triangular cross-sectional shape. It is thus essential to know the aerodynamic loading, both in magnitude and frequency, on prismatic bodies. Some preliminary measurement had been carried out by the authors. The objective of the present paper is to conduct flow visualization to gain better understanding of the flow past prismatic bodies with different cross-sectional shape, which in turn will help the analysis of the experimental data. KeywordsShear Layer Flow Visualization Aerodynamic Loading Separate Shear Layer Side Face These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves. Gerrard, J. H.: The mechanics of the formation region of vortices behind bluff bodies, J. Fld. Mech. 25 (1966) 401–413.CrossRefGoogle Scholar Bearman, P. W. & Trueman, D. M.: An investigation of flow around rectangular cylinders, Aero. Quart. 23 (1972) 229–236.Google Scholar Vickery, B. J.: Fluctuating lift and drag on a long cylinder of square cross-section in a smooth and turbulent stream, J. Fld. Mech. 25 (1966) 481–494.CrossRefGoogle Scholar Lee, B. E.: The surface pressure field experienced by a two-dimensional square prism, Central Electricity Generating Board, RD/L/N17/74 (1974).Google Scholar © Springer-Verlag Berlin Heidelberg 1992
<urn:uuid:81e0cfb9-510a-49b1-932c-7c7091f3df85>
2.75
341
Academic Writing
Science & Tech.
55.045326
95,617,713
Consider a two-dimensional spatial coordinate system S' whose coordinates (u,v) are defined by x = u + v, y = u - v in terms of the coordinates of a Cartesian coordinate system S. Suppose you are given a vector in S whose contravariant components are A^m = (2,8). Determine the contravariant components of this vector in S'. © BrainMass Inc. brainmass.com You'll find different notations for tensors in the literature. The so-called kernel-index notation makes it particularly easy to remember the transformation rules. In this notation you denote the components of a tensor in a transformed coordinate system (S') by putting a prime on the index, instead of using a different name for the tensor itself. You also do this for the coordinates. A detailed solution is given: a two-dimensional spatial coordinate system is defined and the contravariant components of the vector in it are determined.
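A short numerical sketch of the transformation rule involved here: contravariant components transform with the Jacobian of the new coordinates with respect to the old ones, A'^i = (dx'^i/dx^j) A^j. The inverse relations u = (x + y)/2 and v = (x - y)/2 follow directly from the definitions given above, and the code below simply evaluates that Jacobian acting on A^m = (2, 8).

```python
import numpy as np

# S' coordinates in terms of S, obtained by inverting x = u + v, y = u - v:
#   u = (x + y) / 2,   v = (x - y) / 2
J = np.array([[0.5,  0.5],    # [du/dx, du/dy]
              [0.5, -0.5]])   # [dv/dx, dv/dy]

A = np.array([2.0, 8.0])      # contravariant components in S
A_prime = J @ A               # A'^i = (dx'^i/dx^j) A^j
print(A_prime)                # [ 5. -3.]  ->  (A^u, A^v) = (5, -3)
```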
<urn:uuid:71700952-f37c-4236-aae8-b8e4acc05528>
3.046875
215
Tutorial
Science & Tech.
48.470221
95,617,724
Radical Factors in the Evolution of Animal LifeJune 22, 2018 / Posted by: Miki Huynh Complex animal life appeared in the earth’s record soon after the second major rise in atmospheric oxygen roughly 800 million years ago. The evolution of enzymatic reduction of oxygen yielded a several-fold increase in energy production by life on Earth, enabling this progression to multicellular animal life. However, higher atmospheric oxygen concentrations would also have been expected to result in increased levels of reactive oxygen species (ROS) – derivatives of oxygen that are harmful to life and that, for humans, can accelerate aging and cardiovascular disease. Yannick Taverne and his team, with support from Alternative Earths, the NASA Astriobiology Institute Team based at UC Riverside, have developed a molecular timeline for the production of ROS in the evolution of life. The paper is published in BioEssays. The results indicate that the production of hydrogen peroxide (H2O2) and other reactive oxygen species was not a result of increased O2 levels in the atmosphere, but rather originates back to the very early Earth, and was a crucial consequence of early Earth’s weakly oxic microenvironments. Earth’s oxygen cycle did coincide with the evolution of major redox pathways that were preserved throughout time, and it played a major role in metazoan diversification and longevity. Earlier lab data placed ROS at the center of human cardiovascular disease; however, the study’s new data suggest that ROS are essential in normal cellular physiology. Only when produced in excess or when antioxidants are depleted can ROS inflict damage on the cells leading to cardiovascular disease. The results of this work, in addition to exploring how radical factors evolved, suggest a new focus on reactive oxygen species centered on the possible beneficial effects of ROS, rather than trying to eradicate them in clinical trials in order to modify human disease. March 2018 cover of BioEssays. Source: WIley Online https://onlinelibrary.wiley.com Source: [BioEssays (via UC Riverside)] - Electron Acceptors and Carbon Sources for a Thermoacidophilic Archaea - Yosemite Granite Tells New Story About Earth's Geologic History - Supporting SHERLOC in the Detection of Kerogen as a Biosignature - New Estimates of Earth's Ancient Climate and Ocean pH - How Microbes From Spacecrafts Survive Clean Rooms - Understanding Oxygen as an Exoplanet Biosignature - Recap of the 2018 Astrobiology Graduate Conference (AbGradCon) - Astrobiologist Rebecca Rapf Receives Inaugural Maggie C. Turnbull Early Career Award - Searching for the Great Oxidation Event in North America - Astrobiology Activities at the Chickasaw Nation Aviation and Space Academy
<urn:uuid:f89b26bc-97e4-4e74-b156-756dbe2a3889>
3.34375
580
News (Org.)
Science & Tech.
7.678365
95,617,726
In a study published today in Nature Climate Change researchers used the latest emissions scenarios and climate models to show how varying levels of carbon emissions are likely to result in more frequent and severe coral bleaching events. In a new article in Nature Climate Change scientists from NOAA's Cooperative Institute for Marine and Atmospheric Studies show maps that illustrate how rising sea temperatures are likely to affect all coral reefs, including those in Polynesia, in the form of annual coral bleaching events under various different emission scenarios. Their results emphasize that without significant reductions in emissions most coral reefs on the planet are at risk for bleaching within the next several decades. Credit: Thomas Vignaud Large-scale 'mass' bleaching events on coral reefs are caused by higher-than-normal sea temperatures. High temperatures make light toxic to the algae that reside within the corals. The algae, called 'zooxanthellae', provide food and give corals their bright colors. When the algae are expelled or retained but in low densities, the corals can starve and eventually die. Bleaching events caused a reported 16 percent loss of the world's coral reefs in 1998 according to the Global Coral Reef Monitoring Network. If carbon emissions stay on the current path most of the world's coral reefs (74 percent) are projected to experience coral bleaching conditions annually by 2045, results of the study show. The study used climate model ensembles from the upcoming Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Around a quarter of coral reefs are likely to experience bleaching events annually five or more years earlier than the median year, and these reefs in northwestern Australia, Papau New Guinea, and some equatorial Pacific islands like Tokelau, may require urgent attention, researchers warn. "Coral reefs in parts of the western Indian Ocean, French Polynesia and the southern Great Barrier Reef, have been identified as temporary refugia from rising sea surface temperatures," said Ruben van Hooidonk, Ph.D., from the Cooperative Institute for Marine and Atmospheric Studies (CIMAS) at the University of Miami and NOAA's Atlantic Oceanographic and Meteorological Laboratory. "These locations are not projected to experience bleaching events annually until five or more years later than the median year of 2040, with one reef location in the Austral Islands of French Polynesia protected from the onset of annual coral bleaching conditions until 2056." The findings emphasize that without significant reductions in emissions most coral reefs are at risk, according to the study. A reduction of carbon emissions would delay annual bleaching events more than two decades in nearly a quarter (23 percent) of the world's reef areas, the research shows. "Our projections indicate that nearly all coral reef locations would experience annual bleaching later than 2040 under scenarios with lower greenhouse gas emissions." said Jeffrey Maynard, Ph.D., from the Centre de Recherches Insulaires et Observatoire de l'Environnement (CRIOBE) in Moorea, French Polynesia. "For 394 reef locations (of 1707 used in the study) this amounts to at least two more decades in which some reefs might conceivably be able to improve their capacity to adapt to the projected changes." "More so than any result to date, this highlights and quantifies the potential benefits for reefs of reducing emissions in terms of reduced exposure to stressful reef temperatures." 
"This study represents the most up-to-date understanding of spatial variability in the effects of rising temperatures on coral reefs on a global scale," said researcher Serge Planes, Ph.D., also from the French research institute CRIOBE in French Polynesia. The researchers involved in the study all concur that projections that combine the threats posed to reefs by increases in sea temperature and ocean acidification will further resolve where temporary refugia may exist. The study was funded by the Pacific Islands Climate Change Cooperative based in Hawaii, the U.S. National Research Council and CNRS. AOML, a federal research laboratory, is part of NOAA's Office of Oceanic and Atmospheric Research, located in Miami, Fla. AOML's research spans hurricanes, coastal ecosystems, oceans and human health, climate studies, global carbon systems, and ocean observations. For more information, please visit http:/www.aoml.noaa.gov CIMAS is a research institute based at the University of Miami, within the Rosenstiel School of Marine & Atmospheric Science. It serves as a mechanism to bring together the research resources of nine major public and private research universities in Florida and the U.S. Caribbean with those of NOAA in order to develop a Center of Excellence that is relevant to understanding the Earth's oceans and atmosphere within the context of NOAA's mission. For more information, please visit http://cimas.rsmas.miami.edu/ Barbra Gonzalez | EurekAlert! Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. 
<urn:uuid:aa2f6fe8-f4a4-47d9-9de3-ec249f4d6852>
3.796875
1,662
Content Listing
Science & Tech.
37.70967
95,617,729
In places with large concentrations of people and animals, natural fresh water is usually not enough, especially if it is also used to collect sewage and transport it away from populated areas. If wastes seep into the soil in modest amounts, soil organisms recycle them, re-using the nutrients, and the water that reaches neighboring streams is clean. But if sewage is discharged directly into the water, it rots, and its oxidation consumes oxygen. This creates the so-called biochemical oxygen demand. The higher the demand, the less oxygen remains in the water for living organisms, especially fish and algae. Sometimes the lack of oxygen kills every living thing. The water becomes biologically dead – only anaerobic bacteria remain, which thrive without oxygen and, in the course of their life, release hydrogen sulphide – a toxic gas with the characteristic smell of rotten eggs. The already lifeless water acquires a putrid smell and becomes quite unfit for humans and animals. The same thing can happen when the water contains an excess of substances such as nitrates and phosphates, which enter it from agricultural fertilizers applied to fields or from waste water contaminated with detergents. These nutrients stimulate the growth of algae; the algae begin to consume large amounts of oxygen, and when it becomes insufficient, they die. Under natural conditions a lake silts up and disappears in about 20 thousand years. An excess of nutrients accelerates this aging process, or eutrophication, and shortens the life of the lake, besides making it unattractive.
<urn:uuid:8c56a433-77f3-4ccb-b354-3322f30cf06d>
3.28125
297
Knowledge Article
Science & Tech.
32.940471
95,617,747
"We uncovered a novel mechanism that allows proteins that direct pre-mRNA splicing – RNA-binding proteins – to induce a regulatory effect from greater distances than was thought possible," said first author Michael T. Lovci, of the Department of Cellular and Molecular Medicine, the Stem Cell Research Program and Institute for Genomic Medicine at UC San Diego. Researchers from California, Oregon, Singapore and Brazil made this finding while working toward an understanding of the most basic signals that direct cell function. According to Lovci, the work broadens the scope that future studies on the topic must consider. More importantly, it expands potential targets of rationally designed therapies which could correct molecular defects through genetic material called antisense RNA oligonucleotides (ASOs). "This study provides answers for a decade-old question in biology," explained principal investigator Gene Yeo, PhD, assistant professor of Cellular and Molecular Medicine, member of the Stem Cell Research Program and Institute for Genomic Medicine at UC San Diego, as well as with National University of Singapore. "When the sequence of the human genome was fully assembled, under a decade ago, we learned that less than 3 percent of the entire genome contains information that encodes for proteins. This posed a difficult problem for genome scientists – what is the other 97 percent doing?" The role of the rest of the genome was largely a mystery and was thus referred to as "junk DNA." Since then sequencing of other, non-human, genomes has allowed scientists to delineate the sequences in the genome that are remarkably preserved across hundreds of millions of years of evolution. It is widely accepted that this evidence of evolutionary constraint implies that, even without coding for protein, certain segments of the genome are vital for life and development. Using this evolutionary conservation as a benchmark, scientists have described varied ways cells use these non-protein-coding regions. For instance, some exist to serve as DNA docking sites for proteins which activate or repress RNA transcription. Others, which were the focus of this study, regulate alternative mRNA splicing. Eukaryotic cells use alternative pre-mRNA splicing to generate protein diversity in development and in response to the environment. By selectively including or excluding regions of pre-mRNAs, cells make on average ten versions of each of the more than 20,000 genes in the genome. RNA-binding proteins are the class of proteins most closely linked to these decisions, but very little is known about how they actually perform their roles in cells. "For most genes, protein-coding space is distributed in segments on the scale of islands in an ocean," Lovci said. "RNA processing machinery, including RNA-binding proteins, must pick out these small portions and accurately splice them together to make functional proteins. Our work shows that not only is the sequence space nearby these 'islands' important for gene regulation, but that evolutionarily conserved sequences very far away from these islands are important for coordinating splicing decisions." Since this premise defies existing models for alternative splicing regulation, whereby regulation is enacted very close to protein-coding segments, the authors sought to define the mechanism by which long-range splicing regulation can occur. 
They identified RNA structures – RNA that is folded and base-paired upon itself – that exist between regulatory sites and far-away protein-coding "islands." Dubbing these types of interactions "RNA-bridges" for their capacity to link distant regulators to their targets, the authors show that this is likely a common and under-appreciated mechanism for regulation of alternative splicing. These findings have foreseeable implications in the study of biomedicine, the researchers said, as the RNA-binding proteins on which they focused – RBFOX1 and RBFOX2 – show strong associations with neurodevelopmental disorders, such as autism and also certain cancers. Since these two proteins act upstream of a cascade of effects, understanding how they guide alternative splicing decisions may lead to advancements in targeted therapies which correct the inappropriate splicing decisions that underlie many diseases. Additional contributors to the paper include Justin Arnold, Tiffany Y. Liang, Thomas J. Stark, Katlin B. Massirer and Gabriel A. Pratt, UC San Diego; Sherry Gee, Marilyn Parra, Dana Ghanem, Henry Marr and John G. Conboy, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley; Lauren T. Gehman and Douglas Black, UCLA Department of Microbiology, Immunology and Molecular Genetics and Howard Hughes Medical Institute, UCLA; Shawn Hoon, Nanyang Technological University, Singapore; and Joe W. Gray, Oregon Health and Science University. Support was provided in part by the National Institutes of Health, (U54 HG007005, R01 HG004659, R01 GM084317 and R01 NS075449, HL045182, DK094699, CA112970, CA126551 and DK032094); and by the Director, Office of Science, and Office of Biological & Environmental Research of the US Department of Energy. Debra Kain | EurekAlert!
<urn:uuid:e46bfe92-1f9e-4422-a1b7-ee2ae0c8b299>
3.765625
1,709
Content Listing
Science & Tech.
32.183831
95,617,756
why do we need light to see Does this seem like a strange question? It could be if you don't really think about it, but, could we survive without light? Food: Light is the sole source of food generation for all living organisms on the earth. Except for a few, almost all living beings depend on light for their food and energy. Plants and other autotrophs synthesize their own food materials by use of light. The light converts to reserve energy in the form of food by a process known as photosynthesis. From these plants animals get the required food materials and the required energy. So plants depend on sunlight for their own food and animals depend on plants. Thus all the living beings are dependent on light for food. Answer 2: Great question! It's a complex topic but I will try my best. So, things that have color consist of different materials, which consist of different chemicals, which consist of different atoms. All these elements are responsible for the properties of the surface of the thing that reflects light. The way the thing reflects light is what we see as color. Think of atoms like little bricks and chemicals like the way the wall is made from those bricks. So now you throw a ball at the wall. Depending on whether the wall is smooth or has sharp corners, or is bent, or has big holes or has holes where the ball may get stuck, your ball may jump back in different directions, or just go through the wall, or be stuck in one of the tricky corners. Same with every surface when light hits it: the surface may reflect the light back; it can absorb light or just let it go through (transparent things). So now, we need to talk more about light: we were talking about one ball hitting the wall, while light is more like a lot of balls of different sizes hitting the wall at the same time. Because the balls are of different sizes, some of them will be mostly bounced back, while others may be mostly stuck in the wall. Different objects (like lamps or the sun or fire) emit energy (we imagine it as "balls" for now). Those balls are of different sizes. If you collect them from all over the universe and put them all in order from the very little to the very big, you get what scientists call a "spectrum" - the range of energy from ultraviolet to radio waves. Our sun emits energy of a certain range (ball sizes are, let's say, only from 2 to 5 cm, not more, not less). So our eyes developed to be sensitive only to those sizes of "balls" (others were not important, as there were not many hitting our earth). So when sunlight hits a surface, for example a plant leaf, the balls of size 4 are mostly bounced back (while others are absorbed), and the size-4 balls reach our eyes. Our eyes react to "balls" of a certain size as color. So ball size 4 we call "green", ball size 2 we call "blue", etc. So now, the interesting conclusions: things do not have color by themselves; only when light (energy) hits them can we see colors. You probably noticed that when it is dark, things get grayish and it's hard to distinguish colors. Another interesting conclusion - our eyes are only sensitive to a little part of the spectrum, or in other words can see only a little part of all possible colors. That's why we develop instruments, like different spectrometers, to learn about those "balls" we cannot see with our eyes. And, finally, summing up the story: colors are our interpretation of the ability of things (surfaces) to reflect a certain part of light. And different things have different colors because their light-reflecting properties are different.
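As a small aside that is not part of the original answers: the "ball size" picture maps onto real wavelengths. Visible light occupies roughly the 380–750 nm band of the electromagnetic spectrum, green sits near 530 nm, and a photon's energy is fixed by its wavelength. The standard textbook relation and ranges below are included only for illustration.

```latex
% Illustrative mapping of the "ball size" analogy onto wavelength and photon energy.
% E is photon energy, h Planck's constant, c the speed of light, \lambda the wavelength.
\[
  E = \frac{hc}{\lambda}, \qquad
  \lambda_{\text{visible}} \approx 380\text{--}750\ \text{nm}, \qquad
  \lambda_{\text{green}} \approx 530\ \text{nm}
\]
```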
<urn:uuid:00049ba6-08c2-48db-98c7-ad5ef656d886>
3.515625
805
Q&A Forum
Science & Tech.
60.837059
95,617,766
Routing in Microsoft MVC or Web API applications serves as a map that defines the routes for all incoming requests. There are three basic components of routing in MVC and Web API applications: the controller, the action method, and the parameters. In a URL such as /account/login/24, account is the controller, login is the action method, and 24 is the parameter or query string. Let's review the different ways to build routing in ASP.NET Core. Creating Default Routes: By convention, the default route can be defined in your project's startup class. We need to be sure that the essential configuration for the MVC pattern of Controller + Action + ID exists in our project; the routing pattern can also be declared explicitly as a route template (see the sketch below). Extending Default Routes: For specific needs, the default route can be extended by adding customized routes, and these configurations can be added with the MapRoute() method. Adding an extra route for the Home controller allows access to the About action via the /about route. The About page can still be accessed via the conventional /home/about route, as the default pattern route is still present. Using attributes on controllers and actions, you can also configure routes, and controller actions can then be accessed through the corresponding routes. The two tokens [controller] and [action] indicate that we refer to the controller and action names that have been declared. In this case, "Analytics" is the name of the controller and "Charts" the name of the action, and together they form the route. Building RESTful Routes: To declare a RESTful controller, we use a route configuration that tells the service to accept calls under the /api/values route. Here we do not use the Route attribute on the individual actions; instead, we decorate them with the HttpGet, HttpPost, HttpPut, and HttpDelete attributes. - Deependra is a Senior Developer with Microsoft technologies, currently working with Opteamix India business private solution. In my free time, I write blogs and make technical YouTube videos. I have a good understanding of service-oriented architecture and of designing microservices using domain-driven design.
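The code listings this tutorial originally referred to did not survive extraction, so the following is a minimal sketch of the three configurations described above, assuming an ASP.NET Core 2.x MVC project. The AnalyticsController, ValuesController, HomeController, and the extra /about route are illustrative stand-ins taken from the prose, not the author's original code.

```csharp
// Hedged sketch of ASP.NET Core 2.x routing: conventional, attribute, and RESTful.
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(); // registers MVC and Web API services
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc(routes =>
        {
            // Default convention: Controller + Action + optional ID,
            // e.g. /account/login/24 -> AccountController.Login(24).
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");

            // Extending the defaults: /about maps to HomeController.About,
            // while /home/about keeps working through the default route.
            routes.MapRoute(
                name: "about",
                template: "about",
                defaults: new { controller = "Home", action = "About" });
        });
    }
}

// Conventionally routed controller (no Route attribute needed).
public class HomeController : Controller
{
    public IActionResult Index() => Content("Home page");
    public IActionResult About() => Content("About page");
}

// Attribute routing with the [controller] and [action] tokens:
// this controller answers at /Analytics/Charts.
[Route("[controller]/[action]")]
public class AnalyticsController : Controller
{
    public IActionResult Charts() => Content("Charts");
}

// RESTful controller reachable under /api/values; actions are selected
// by HTTP verb attributes rather than per-action Route attributes.
[Route("api/values")]
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get() => new[] { "value1", "value2" };

    [HttpGet("{id}")]
    public string Get(int id) => $"value {id}";

    [HttpPost]
    public void Post([FromBody] string value) { }

    [HttpPut("{id}")]
    public void Put(int id, [FromBody] string value) { }

    [HttpDelete("{id}")]
    public void Delete(int id) { }
}
```

With this setup, requests such as /home/about or /about hit HomeController.About, /Analytics/Charts is matched through the attribute tokens, and GET/POST/PUT/DELETE calls to /api/values are dispatched by verb.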
<urn:uuid:4fbbd60b-1745-46aa-a430-286a89c91138>
3.234375
449
Tutorial
Software Dev.
36.049798
95,617,767
Estimating a Probability Law Estimating the probability law that gave rise to given data is one of the chief aims of statistics. Once known, its variance, confidence limits, and all other parameters describing fluctuation may be determined. There are two main schools of thought — the classical and the Bayesian — regarding what may be assumed while making the estimate. These guiding philosophies are discussed more generally in Chap. 16. Keywords: Prior Knowledge, Maximum Entropy, Laser Speckle, Resolution Cell, Orthogonal Expansion
<urn:uuid:72c8099a-7511-4d65-aeab-1a390719497f>
2.546875
242
Truncated
Science & Tech.
46.223539
95,617,769
Using particles that are 1/100,000 the width of a human hair to deliver drugs to cells or assist plants in fighting off pests may sound like something out of a science fiction movie, but these scenarios may be a common occurrence in the near future. Carbon nanotubes, cylindrically shaped carbon molecules with a diameter of about 1 nanometer, have many potential applications in a variety of fields, such as biomedical engineering and medical chemistry. Proteins, nucleic acids, and drugs can be attached to these nanotubes and delivered to cells and organs. Carbon nanotubes can be used to recognize and fight viruses and other pathogens. However, results of studies in animals have also raised concerns about the potential toxicity of nanoparticles. Recent research by a team of researchers from China, led by Dr. Nan Yao, explored the effects of nanoparticles on plant cells. The findings of Dr. Yao and his colleagues are published in the October issue of the American Journal of Botany (http://www.amjbot.org/cgi/reprint/97/10/1602). Dr. Yao and his team of researchers isolated cells from rice as well as from the model plant species Arabidopsis. The researchers treated these cells with carbon nanotubes, and then assessed the cells for viability, damage to DNA, and the presence of reactive oxygen species. The researchers found an increase in levels of the reactive oxygen species hydrogen peroxide. Reactive oxygen species cause oxidative stress to cells, and this stress can result in programmed cell death. Dr. Yao and his colleagues discovered that the effect of carbon nanotubes on cells was dosage dependent—the greater the dose, the greater the likelihood of cell death. In contrast, cells exposed to carbon particles that were not nanotubes did not suffer any ill effects, demonstrating that the size of the nanotubes is a factor in their toxicity. "Nanotechnology has a large scope of potential applications in the agriculture industry, however, the impact of nanoparticles have rarely been studied in plants," Dr. Yao said. "We found that nanomaterials could induce programmed cell death in plant cells." Despite the scientists' observations that carbon nanotubes had toxic effects on plant cells, the use of nanotechnology in the agriculture industry still has great promise. The scientists only observed programmed cell death as a temporary response following the injection of the nanotubes and did not observe further changes a day and a half after the nanotube treatments. Also, the researchers did not observe death at the tissue level, which indicates that injecting cells with carbon nanotubes caused only limited injury. "The current study has provided evidence that certain carbon nanoparticles are not 100% safe and have side effects on plants, suggesting that potential risks of nanotoxicity on plants need to be assessed," Dr. Yao stated. In the future, Dr. Yao and colleagues are interested in investigating whether other types of nanoparticles may also have toxic effects on plant cells. "We would like to create a predictive toxicology model to track nanoparticles." Only once scientists have critically examined the risks of nanoparticles can they take advantage of the tremendous potential benefits of this new technology. CITATION: Cong-Xiang Shen, Quan-Fang Zhang, Jian Li, Fang-Cheng Bi, and Nan Yao (2010). Induction of programmed cell death in Arabidopsis and rice by single-wall carbon nanotubes. American Journal of Botany 97(10): 1602-1609. 
DOI: 10.3732/ajb.1000073 The full article in the link mentioned is available for no charge for 30 days following the date of this summary at http://www.amjbot.org/cgi/reprint/97/10/1602. After this date, reporters may contact Richard Hund at email@example.com for a copy of the article. The Botanical Society of America (www.botany.org) is a non-profit membership society with a mission to promote botany, the field of basic science dealing with the study and inquiry into the form, function, development, diversity, reproduction, evolution, and uses of plants and their interactions within the biosphere. It has published the American Journal of Botany (www.amjbot.org) for nearly 100 years. In 2009, the Special Libraries Association named the American Journal of Botany one of the Top 10 Most Influential Journals of the Century in the field of Biology and Medicine. For further information, please contact the AJB staff at firstname.lastname@example.org. Richard Hund | EurekAlert!
<urn:uuid:6aed26a6-94f0-4bb8-814e-e0f40ea7af01>
3.734375
1,587
Content Listing
Science & Tech.
41.980449
95,617,774
Washington State University researchers have created a sustainable alternative to traditional concrete using coal fly ash, a waste product of coal-based electricity generation. The advance tackles two major environmental problems at once by making use of coal production waste and by significantly reducing the environmental impact of concrete production. Xianming Shi, associate professor in WSU’s Department of Civil and Environmental Engineering, and graduate student Gang Xu, have developed a strong, durable concrete that uses fly ash as a binder and eliminates the use of environmentally intensive cement. They report on their work in the August issue of the journal, Fuel. Reduces energy demand, greenhouse emissions Production of traditional concrete, which is made by combining cement with sand and gravel, contributes between five and eight percent of greenhouse... more Opening up a pathway to cost-effective, autonomous IoT application Objects in our daily lives, such as speakers, refrigerators, and even cars, are becoming “smarter” day by day as they connect to the internet and exchange data, creating the Internet of Things (IoT), a network among the objects themselves. Toward an IoT-based society, a miniaturized thermoelectric generator is anticipated to charge these objects, especially for those that are portable and wearable. Due to advantages such as its relatively low thermal conductance but high electric conductance, silicon nanowires have emerged as a promising thermoelectric material. Silicon-based thermoelectric generators conventionally employed long, silicon nanowires of about 10-100 nanometers, which were... more Folding and cutting thin metal films could enable microchip-based 3-D optical devices. Nanokirigami has taken off as a field of research in the last few years; the approach is based on the ancient arts of origami (making 3-D shapes by folding paper) and kirigami (which allows cutting as well as folding) but applied to flat materials at the nanoscale, measured in billionths of a meter. Now, researchers at MIT and in China have for the first time applied this approach to the creation of nanodevices to manipulate light, potentially opening up new possibilities for research and, ultimately, the creation of new light-based communications, detection, or computational devices. The findings are described today in the journal Science Advances, in a paper by MIT professor of mechanical... more For the first time, researchers have created a nanocomposite of ceramics and a two-dimensional material, opening the door for new designs of nanocomposites with such applications as solid-state batteries, thermoelectrics, varistors, catalysts, chemical sensors and much more. Sintering uses high heat to compact powder materials into a solid form. Widely used in industry, ceramic powders are typically compacted at temperatures of 1472 degrees Fahrenheit or higher. Many low-dimensional materials cannot survive at those temperatures. But a sintering process developed by a team of researchers at Penn State, called the cold sintering process (CSP), can sinter ceramics at much lower temperatures, less than 572 degrees F, saving energy and enabling a new form of material with high commercial potential. "We have industry people who are already very interested in this work," said Jing Guo, a post-doctoral scholar working in the group of Clive Randall, professor... 
more Rice lab creates conductive 3D carbon blocks that can be shaped for applications Rice University scientists have developed a simple way to produce conductive, three-dimensional objects made of graphene foam. The squishy solids look and feel something like a child’s toy but offer new possibilities for energy storage and flexible electronic sensor applications, according to Rice chemist James Tour. The technique detailed in Advanced Materials is an extension of groundbreaking work by the Tour lab that produced the first laser-induced graphene (LIG) in 2014 by heating inexpensive polyimide plastic sheets with a laser. The laser burns halfway through the plastic and turns the top into interconnected flakes of 2D carbon that remain attached to the bottom half. LIG can... more Jes Linnet, University of Southern Denmark Researchers demonstrate silver-based electrode films that could be used for flexible touch displays, televisions and solar cells Researchers have demonstrated large-scale fabrication of a new type of transparent conductive electrode film based on nanopatterned silver. Smartphone touch screens and flat panel televisions use transparent electrodes to detect touch and to quickly switch the color of each pixel. Because silver is less brittle and more chemically resistant than materials currently used to make these electrodes, the new films could offer a high-performance and long-lasting option for use with flexible screens and electronics. The silver-based films could also enable flexible solar cells for installation on windows, roofs and even personal devices. In the journal Optical Materials Express, the researchers... more University of Delaware/ Illustration by Joy Smoker UD engineers convert commonly discarded material into high-performance adhesive Whether you’re wrapping a gift or bandaging a wound, you rely on an adhesive to get the job done. These sticky substances often are made from petroleum-derived materials, but what if there was a more sustainable way to make them? Now, a team of engineers at the University of Delaware has developed a novel process to make tape out of a major component of trees and plants called lignin—a substance that paper manufacturers typically throw away. What’s more, their invention performs just as well as at least two commercially available products. The researchers recently described their results in ACS Central Science, and they are working on more ways to upcycle scrap wood and plants into... more Ever wonder why paint peels off the wall during summer’s high humidity? It’s the same reason that bandages separate from skin when we bathe or swim. Interfacial water, as it’s known, forms a slippery and nonadhesive layer between the glue and the surface to which it is meant to stick, interfering with the formation of adhesive bonds between the two. Overcoming the effects of interfacial water is one of the challenges facing developers of commercial adhesives. To find a solution, researchers here at The University of Akron are looking to one of the strongest materials found in nature — spider silk. ‘Nature’s best glue’ The sticky glue that coats the silk threads of spider webs is a hydrogel, meaning it is full of water. One would think, then, that spiders would have difficulty catching prey, especially in humid conditions... 
more Waste heat can be converted to electricity more efficiently using one-dimensional nanoscale materials as thin as an atom – ushering a new way of generating sustainable energy – thanks to new research by the University of Warwick. Led by Drs Andrij Vasylenko, Samuel Marks, Jeremy Sloan and David Quigley from Warwick’s Department of Physics, in collaboration with the Universities of Cambridge and Birmingham, the researchers have found that the most effective thermoelectric materials can be realised by shaping them into the thinnest possible nanowires. Thermoelectric materials harvest waste heat and convert it into electricity - and are much sought-after as a renewable and environmentally friendly sources of energy. Dr Andrij Vasylenko, from the University of Warwick’s Department of Physics and the paper’s first author, commented: “In contrast to 3-dimensional material, isolated nanowires conduct less heat... more A centuries-old materials bonding process is being tested aboard the International Space Station in an experiment that could pave the way for more materials research of its kind aboard the orbiting laboratory. Sintering is the process of heating different materials to compress their particles together. “In space the rules of sintering change,” said Rand German, principal investigator for the investigation titled NASA Sample Cartridge Assembly-Gravitational Effects on Distortion in Sintering (MSL SCA-GEDS-German). “The first time someone tries to do sintering in a different gravitational environment beyond Earth or even microgravity, they may be in for a surprise. There just aren’t enough trials yet to tell us what the outcome could be. Ultimately we have to be empirical, give it a try, and see what happens.” If the disparities between sintering on Earth and sintering in space can be... more
<urn:uuid:b64f7d42-fba9-4772-a72f-870346dd7c8b>
2.984375
1,761
Content Listing
Science & Tech.
28.172967
95,617,815
Edit by Evo:Due to claims of plagiarism, certain posts have been deleted from this thread. There has been no direct evidence provided that proves the industrial revolution caused the current changes in the earth's climate. One could just as easily declare that climate change caused the Industrial Revolution. As a warming trend continued through the 1600s and 1700s there was less emphasis on the populus surviving through heavy winters and more emphasis toward industrial inventions such as, lighter clothing (cotton weaves and production of looms) as well as abundant crops from a longer warm period (in the UK). The conditions were such that efforts were put toward satisfying (and profiting from) a more leasurely lifestyle amongst the former peasants and fiefdoms. Its actually a matter of proving what came first: a warming climate or the industrial revolution? Standby to be surprised: The first known accurate measurement of CO2 is: Thenard, 1812 Traité élém. de chimie, 5 edit., vol1, p.303. Value: 385,0 ppm We also have: W. Kreutz 1941, Kohlensäure Gehalt der unteren Luft schichten in Abhangigkeit von Witterungsfaktoren,” Angewandte Botanik, vol. 2, 1941, pp. 89-117 Average 1939-41: 438ppm. (Current value ~381ppm) The pile of ignored papers about measurements, before CO2 was structurally measured at Mauna loa, is about just under two feet high. Many are consistent with each other, showing two very weird short living decadal size spikes. I wonder how it is possible that people still believe in mans significant role in global warming. The narration on CO2 levels is based on the ice cores in Antarctica. Due to the very slow accumulation of snow (it is desert climate), the snow stays open for a very long time, a few thousand years. As long as the snow is open, air passes freely and variation in mixing ratios gets smooted. Shorter spikes are no longer visible. Another technique for measuring paleo CO2 levels is by some (not all) plant leaf reactions on the CO2 concentration in the stomata count. The more CO2 the less stomata. So if fossil leafs in peat bogs can be counted an assessment can be made of the CO2 level. http://home.wanadoo.nl/bijkerk/8100BPevent.gif [Broken] is such as assesment during the "cold event" of 8200 years ago. We compare the spikes of two different fossil leaf stomata counts (red and blue) with two level CO2 lines in ice core proxies (orange and black). The plusses indicate the temperature reconstruction in the Greenland ice cores, showing that the cold dip preceeded the reaction of the CO2 and also that there is no feedback whatsoever of the CO2 to the temperature. As I've stated before, I'm neutral on the subject. I have actually talked to climatologists here at http://www.iarc.uaf.edu/" [Broken] who say that while there is no doubt that our role in global warming is overplayed by alarmists, it is underplayed by the skeptics. While they spend most of their papers showing how the alarmists are overplaying the idea, they almost always put a disclaimer in the beginning stating that we should all try to reduce CO_2 levels, regardless. (I understand this is nearly impossible from an economists point of view). Also, my biggest issue is that I don't trust the measurements being made, simply because we can't measure everywhere at once, and also (I don't know how carbon cycles work) it seems impossible to ever actually measure something that could somehow 'hide' from our observation window given certain weather patterns. 
Not just wind blowing it to where our sensors aren't, but what if CO_2 saturates liquids or solids (or chemically reacts) and we aren't able to detect it? I seek understanding here, not argument. I'd actually prefer a simplified response and not a list of complex journal citings that I don't understand. THAT technique for argument is silly, as it seems to take the stance "here, I understand this and it backs up my statement, you're not understanding it is further proof that you're wrong." I think we'll always see a spike of CO2 around a glacial maximum period since isostatic rebound causes increased volcanic activity. How would one go about convincingly explaining things if they didn't have scientific data to back up what they said? Not to mention that we require people here to back up what they say with the scientific data unless they are just voicing a personal opinion, which is just that, a personal opinion. I guess a summary in layman's terms is what you are asking for but isn't it fairly clear already what the gist of the opposing posters is? Because I've already seen them do the citing, over and over. I don't understand the statistics. I've pulled the journals from the shelves with the same problem. At this point, I'd accept an uncited, laymen explaination from Andre, having seen him carefully document and cite everything already. The gist of the opposing poster is that it's not antrhopogenic; that I can deduce. Most of the arguments, however, are how the alarmists are wrong (which I already partly accept). I'm just curious if there's a way to explain or analogize the details behind the stance. I'm not, by any means, requiring it. edit: actually, Andre's last post is exactly what I'm talking about. Carbon cycle (singular)? Of course, there are a lot of them --- probably as many as there are people studying the carbon cycle: 1) break the earth into reservoirs (atmosphere, hydrosphere, biosphere, carbonate rocks, fossil fuel deposits, marine sediments --- as much detail as you want); 2) for each of "n" reservoirs, there are n-1 fluxes between the selected reservoir and the other reservoirs, combinatorially, (n2 - 2n + 1) total fluxes to measure; 3) measure those fluxes, and the chemistries (organic, inorganic, solid, liquid, gas, plus other details); 4) calculate residence times for carbon in each reservoir, residence time being defined as total C content of reservoir (assumed to be constant at some steady state) divided by the sum of rates at which C is added, or the sum of rates at which C is subtracted, to or from other reservoirs; 5) be consistent in the use of the reservoirs you define (Trenberth at NCAR is a good example of how not to do this --- atmospheric reservoir suddenly turns into all "mobile" C on the planet when calculating residence time of fossil fuel derived CO2 in the atmosphere); 6) take up residence in the nearest padded cell when you find out that most reservoir and flux data are order of magnitude estimates. The C-cycle is a transport and mass balance game --- old-fashioned, smash-mouth physics, not the carny shell-game you see in the popular press. Tricky chemistry? No. Run away from sensors? Atmospheric mixing and general flow patterns are well enough known that those measurements are fairly reliable --- downwind from power plants, and surface measurements in California's Mammoth Basin are obvious outliers. Hidden reservoirs? 
Probably not significant --- "hidden" means low flux and little interaction --- might be a fair-sized hydrate reservoir to be considered for deep ocean studies, plus frozen tundra and peat bogs. thanks! I'm going to have to look over the math later. I can see how to do it, but I don't understand how it works conceptually. I'll re-think it later; I'm anxious to leave my particular setting at the moment. About the skeptics role in climate change, it may be interesting to take note of Richard Courtney's analysis of the structural social powers in the global warming industry here. I toyed a little with the psychologic elements of global warming here. Actually, we have a very intense discussion about atmospheric CO2 about the same elements http://www.ukweatherworld.co.uk/forum/forums/thread-view.asp?tid=4567&start=1 [Broken]. (six pages and counting) But if you want to compare laymen and specialists, check where the knowledge comes from. I do not believe any of us here at this small Earth forum on PF are funded by oil or coal companies. I sure as hell am not. :rofl: Ah, the great phrase uttered all throughout history. I feel mild today. One should wonder what this would add to the substantiation of catastrophic global warming. It suggests that the skeptics use all kind of devious tricks to convinces others that it is not true. Consequently they are crooks so they are wrong. This red herring or fallacy is known as Argumentum ad hominem Now study the arguments of the sceptics and jot down how many times they contend that alarmists climatologists are either funded by global warming promoting goverments in the Kyoto threaty and hence that they are obliged to produce global warming or climatologists have noted that alarming about global warming places them in the limelight which is good for social status, building up autority and hence collecting the required funding. Happen to see that reasoning lately? No? That's because the sceptics don't need red herrings, since they can simply point to the evidence that there is no such things as catastrophic antropogenic global warming. Of course the basic physics of greenhouse effect are well understood and I spend some threads about that here last year, the complex chaotic interaction of all the players in the climatology is definetely not. Both sides agree on a rather weak basic greenhouse effect of CO2. But allegdly it is positive feedback that amplifies the greenhouse forcing of CO2. This is highly disputed. Olavi Karner has some very interesting publications about that. So the best thing to do is consulting the empiric evidence of the paleo climate in the last era's, like the Quartenary and of course that has happened, but that should include all geologic evidence. Unfortunately in reports of IPCC it's all about modelling, ice cores and hockeysticks and very little about Mammoths and Horses being able to live in high arctic Siberia during the "coldest" part of the Last Glacial Maximum. If you ignore enigma's like that you're bound to go wrong and modelling with wrong data leads to nothing, garbage in garbage out. Let's go on with one of the elements, the stable water isotopes (dD and d18O in the ice cores are supposed to represent temperatures, as fractination processes with isotopes are temperature sensitive, nothing wrong with the physics here. But the problems start when we think seasonality. The annual overal average of the isotope value is the weighted overage of the indivual snow shower values times the volume of snow that they bring. 
In other words if you have a wet summer and a dry winter, the isotope record will registrate a lot of "warm" summer isotopes and a few "cold" winter isotopes, as the winters in the Arctic are usually dry, it's too cold to snow. Now when we happen to have a dry summer (which may be warmer due to the abundance of sun) there are much less "warm" isotopes accumulated and the average annual value will appear to be much lower, which spuriously suggests a colder period. Now is this important? and can we see that happening? Here is a clue, compare the "temperature" spikes (actually mainly processed isotope ratio values) of Greenland of Alley 2000, the same as my previous graph) with the snow accumulation: How can Alley know if those precipitation changes is summer or winter heavy and thus whether or not those isotopes are affected by changing seasonal precipitation spikes? Is there any reference to that from other geologic proxies? It may be clear that whether this wild rollercoaster "temperature" graph is true or not is one of the most essential elements of the global warming idea. The next post the fun will start. Continuing the narration. See the "Younger Dryas" on the last link in the graph? It's utterly frustrating that the img feature is not working here and not being able to illustrate the narrative. Anyway, lesson one, paragraph one, sub A of Paleo climatology is about the Younger Dryas, the most intense studied period, as being a sudden but brief return to ice age conditions. If the isotopes were temperatures then Alleys graph of the ice cores clearly shows how cold it suddenly got. However less than 1000 miles south of those ice cores, this happened: Younger dryas arid with mild summers? Wouldn't that be quite consistent with the isotopes in the ice cores? No summer precipitation so no warm summer isotopes and hence a spurious cold signal while the preceding (and successing) humid period with cooler summers brought lots of warm summer precipitations and warm isotopes to produce a spurious warm signal. Nevertheless the discoverers don't want to rock boats and don't want to challenge textbooks, so they invent an "ad hoc" hypothesis to force the square reality down into the round cannister of paradigms: Always those models, nicely predictable without the erratic chaotic behavior of reality. But how many more ad hoc hypotheses do we accept (got a bunch to follow) before we realize that three strikes is out. This study is simply very consistent with the isotopes reflecting seasonal precipitation changes as the ice cores are indicating in reality. That's the problem, the believing part. That's what made global warming big, believing because somehow it's appealing to believe it. Furthermore, every measurement is a local event, then and there. Also the Manau loa CO2 measurements, so why should I believe that this would be representing the global CO2 signal? But be patient, we're editing presently a paper with 320 peer reviewed scientific references with about 70,000 measurements of CO2 from three continents from 1812 to 1961, before the Mauno Loa CO2 records. None of those are the IPCC reports. Why? I wonder. I's hard work though and it may take another year but we need to make it completely fail safe. That is, avoiding the data mining and other statistical tricks as had happened with the hockey stick. Solin4, Do you realize that your post still contains a few fallacies. Like the truth holding the opinion of complete mankind in contempt. The bandwagon fallacy. 
Let me give an example of interpretation of nature that has accumulated more and more adherents, reaching a larger and larger majority, over decade after decade, and in the light of more and more data, that has turned out to be radically incorrect. Stomach / peptic ulcers! last year the Nobel price for medicine went to the discovers of Helicobacter, the bacteria that causes stomach ulcers. Before 1981 99,99% of mankind knew that peptic ulcers were caused by a wrong life style and stress. NOT! And it took 20 years or so and a lot of scolding before it became accepted. While the first demonstration in 1982 on the top of my head, convincingly showed that they were right. But nobody wanted to believe it, it was just too outrageous. i'm happy that it's accepted and I was easily cured. But how many people died needlessly from peptic ulcer just because mankind happens to be the most stubborn species of the world. There is a reason why it is formulated this way. Advise, listen to anybody who has a verifiable story and don't judge on fallacies. The "incontrovertible fact" has been discussed in detail in P&WA, https://www.physicsforums.com/showthread.php?t=123372 . Go through it if you wish, don't if you don't --- I ain't gonna go through another tutorial on temperature measurement --- none of the "greenhousers" have ever bothered to review the quality, uncertainty, and systematic errors in meteorological temperature measurements. No one but a complete idiot uses other peoples' data without such a review. It is inconclusive. It cannot be used to demonstrate an increase in temperature, nor a decrease in temperature, nor a constant temperature over the past century. There we go, politeness gone. Nobody has managed to comunicate anything. No convincing power at all in some plain objective factual observation. Only fallacies. the aggravating spiral up until Godwins law is reached. For climate it is irrelevan that CO2 goes up, the effects are minor and I can proof that beyound doubt, that is, I can show where the proof is and I was only at some 2-3% with the Greenland ice core misinterpretation. but I will never be able to penetrate the pachyderm fallacies of the positive feedback loop of the urge to scaremonger and the urge to be scared. That's why there will always be tales of devils and dragons. Global warming is just a pseudo rationalized version of that, replacing the Y2K millenium bug, which replaced the nuclear winter threath and the mutual assured destruction. Before that we had the eugenics treath which was casus belli for World War II. There must alway be a treath regardless if it's true or not. We're still a long way away from fallacy free science. Or, is overexploitation of marine fisheries driving an increase in surface CO2 concentrations, leading to oceanic outgassing, driving up atmospheric CO2? Don't stampede yourself into "solving" a problem that doesn't exist while ignoring something that might become a problem. I'd have to say that the overfishing issue is the culprit here and is a serious one that needs to be addressed. Solin4, here is what happens when people have a knee jerk reaction and pass laws without understanding the impact those laws may have. We are our own worst enemy. 
(It's not proven that any of this ends with "Global Warming", something that really doesn't have enough evidence to back it up based on the fact that we just have not been able to obtain good information until recently and the fact that global climate warming and cooling has been going on since the beginning of time.) This article is just meant to point out that making rash decisions often results in more problems than if we had done nothing at all. How can we let things like this happen? "Cool your home, warm the planet. When more than two dozen countries undertook in 1989 to fix the ozone hole over Antarctica, they began replacing chloroflourocarbons in refrigerators, air conditioners and hair spray. But they had little idea that using other gases that contain chlorine or fluorine instead also would contribute greatly to global warming. In theory, the ban should have helped both problems. But the countries that first signed the Montreal Protocol 17 years ago failed to recognize that CFC users would seek out the cheapest available alternative. That effect is at odds with the intent of a second treaty, drawn up in Kyoto, Japan, in 1997 by the same countries behind the Montreal pact. In fact, the volume of greenhouse gases created as a result of the Montreal agreement's phaseout of CFCs is two times to three times the amount of global-warming carbon dioxide the Kyoto agreement is supposed to eliminate. Some of the replacement chemicals whose use has grown because of the Montreal treaty -- hydrochloroflourocarbons, or HCFCs, and their byproducts, hydrofluorocarbons, or HFCs -- decompose faster than CFCs because they contain hydrogen. But, like CFCs, they are considered potent greenhouse gases that harm the climate -- up to 10,000 times worse than carbon dioxide emissions. edited to change broken link Okay then let's cross swords Wrong on two counts. The information that I presented included a clear tendency of CO2 to follow temperature, which is also happening today. Second, the notion of a gradual increasing CO2 level since the industrial revoltution dismisses an abundance of CO2 measurements between 1812 and 1961 which action has never been challenged. It is now. I did not, I also showed that we are not using fallacies, consequently this very sentence is a strawman fallacy as well as a bandwagon / appeal to authority. I did not however others did notice that something was very wrong: http://energycommerce.house.gov/108/home/07142006_Wegman_Report.pdf [Broken] trashed the hockey stick: Then we have: I've repeat myself: the elemantary physics of IR absorbtion by various molecules is well understood. All sides agree would on that. Calculation very simple settings using Stefan Boltzmann and basic emission parameters for a black body one will arrive at the immediate/dynamic value of 0.7K degrees increase for doubling CO2 and thermal equilibrium at 1.2K degrees for doubling CO2 after a few centuries of settling. So if IPCC makes that some -what is it-, 1.5K to 5.4K degrees, it is assumed that positive feedback factors amplify the greenhouse effect. However, there is not a trace of evidence for positive feedback on the contrary there is evidence enough showing that there is no positive feedback. Ask the same Lindzen. There is nothing catastrophic going on even if the CO2 levels would rise to the 1000-2000 ppmv range where they were assumed to have been in the Early Tertiary. Evo, unfortunately your link doesn't work. 
But indeed we are; fear of the unknown and the quest for security guide our behavior. Clever manipulation of that fear is the cause of the global warming myth. That is apart from the trigger: those misunderstood spikes in the ice cores that changed decent people like Richard Alley into alarmists. Alley tells why, under the first "listen" button. I was aiming to show why he is wrong.
<urn:uuid:3db8f6d6-5795-47c6-a5e8-cb91b3d15357>
3.28125
4,681
Comment Section
Science & Tech.
49.481581
95,617,831
The Interplay of Continental Evolution, Plate Tectonics, and Evolution of LifeJune 29, 2017 / Written by: University of Wisconsin Satellite view of the Red Lakes region in Ontario, Canada with four of the sampling areas from the UW-Madison study of Sr isotope compositions of Archaen carbonates. Image source: Google Maps As the complexity and diversity of life on Earth keeps getting pushed further back in time with more and more data from the geologic record, the issue of the role of continents in the evolution of the early biosphere has become increasingly prominent. Are emergent continents required for life’s origin? Are nutrients such as P dependent on exposure of evolved continental crust? Are the ecological niches provided by extensive continental shelves required for a diverse ecosystem? Scientists with the NASA Astrobiology Institute team based at the University of Wisconsin conducted a detailed study of Sr isotope compositions of Archean carbonates, which adds to an earlier study of Archean barite published in 2016, to provide a fairly complete view of the Sr isotope composition of Archean seawater. The paper, “Initiation of modern-style plate tectonics recorded in Mesoarchean marine chemical sediments,” is published in Geochimica et Cosmochimica Acta. Their results show that emergent and evolved (granitic) continental crust was extensive since at least 3.2 Ga. When considered along with geochemical, metamorphic, and thermal models for crustal evolution, this time period likely coincides with the initiation of modern-style plate tectonics, highlighting the intimate linkages that exist between plate tectonics, continental evolution, and the biosphere in the Archean Earth. Although it has been said that early life simply “enjoyed” continental crust and plate tectonics, the convergence of multiple lines of evidence suggests a more important role for solid-Earth processes in the evolution of the biosphere than previously thought. The research was funded by the NASA Astrobiology Institute and the National Science Foundation, with support from the Natural Sciences and Engineering Research Council of Canada. Related story: Life in Ancient Oceans Enabled by Erosion from Land Source: [University of Wisconsin] - Electron Acceptors and Carbon Sources for a Thermoacidophilic Archaea - Yosemite Granite Tells New Story About Earth's Geologic History - Supporting SHERLOC in the Detection of Kerogen as a Biosignature - New Estimates of Earth's Ancient Climate and Ocean pH - How Microbes From Spacecrafts Survive Clean Rooms - Radical Factors in the Evolution of Animal Life - Understanding Oxygen as an Exoplanet Biosignature - Recap of the 2018 Astrobiology Graduate Conference (AbGradCon) - Astrobiologist Rebecca Rapf Receives Inaugural Maggie C. Turnbull Early Career Award - Searching for the Great Oxidation Event in North America
<urn:uuid:d395a9df-c4ff-47a9-8824-9ac13d591f38>
2.953125
607
News (Org.)
Science & Tech.
1.253124
95,617,840
Differential of a function In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by dy = f'(x) dx, where f'(x) is the derivative of f with respect to x and dx is an additional real variable, so that the equation dy = (dy/dx) dx holds, where the derivative is represented in the Leibniz notation dy/dx, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes df(x) = f'(x) dx. The precise meaning of the variables dy and dx depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables dx and dy are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis. History and usage The differential was first introduced via an intuitive or heuristic definition by Gottfried Wilhelm Leibniz, who thought of the differential dy as an infinitely small (or infinitesimal) change in the value y of the function, corresponding to an infinitely small change dx in the function's argument x. For that reason, the instantaneous rate of change of y with respect to x, which is the value of the derivative of the function, is denoted by the fraction dy/dx. The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy (1823) defined the differential without appeal to the atomism of Leibniz's infinitesimals. Instead, Cauchy, following d'Alembert, inverted the logical order of Leibniz and his successors: the derivative itself became the fundamental object, defined as a limit of difference quotients, and the differentials were then defined in terms of it. That is, one was free to define the differential dy by an expression such as dy = f'(x) dx. According to Boyer (1959, p. 12), Cauchy's approach was a significant logical improvement over the infinitesimal approach of Leibniz because, instead of invoking the metaphysical notion of infinitesimals, the quantities dy and dx could now be manipulated in exactly the same manner as any other real quantities in a meaningful way. Cauchy's overall conceptual approach to differentials remains the standard one in modern analytical treatments, although the final word on rigor, a fully modern notion of the limit, was ultimately due to Karl Weierstrass. In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John (1999, p. 184) reconcile the physical use of infinitesimal differentials with the mathematical impossibility of them as follows. The differentials represent finite non-zero values that are smaller than the degree of accuracy required for the particular purpose for which they are intended. Thus "physical infinitesimals" need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense. Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is more desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment Δx.
This approach allows the differential (as a linear map) to be developed for a variety of more sophisticated spaces, ultimately giving rise to such notions as the Fréchet or Gâteaux derivative. Likewise, in differential geometry, the differential of a function at a point is a linear function of a tangent vector (an "infinitely small displacement"), which exhibits it as a kind of one-form: the exterior derivative of the function. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing (see differential (infinitesimal)). The differential is defined in modern treatments of differential calculus as follows. The differential of a function f(x) of a single real variable x is the function df of two independent real variables x and Δx given by df(x, Δx) = f'(x) Δx. One or both of the arguments may be suppressed, i.e., one may see df(x) or simply df. If y = f(x), the differential may also be written as dy. Since dx(x, Δx) = Δx it is conventional to write dx = Δx, so that the following equality holds: dy = f'(x) dx. This notion of differential is broadly applicable when a linear approximation to a function is sought, in which the value of the increment Δx is small enough. More precisely, if f is a differentiable function at x, then the difference in y-values Δy = f(x + Δx) − f(x) satisfies Δy = f'(x) Δx + ε = df(x, Δx) + ε, where the error ε in the approximation satisfies ε/Δx → 0 as Δx → 0. In other words, one has the approximate identity Δy ≈ dy, in which the error can be made as small as desired relative to Δx by constraining Δx to be sufficiently small; that is to say, (Δy − dy)/Δx → 0 as Δx → 0. For this reason, the differential of a function is known as the principal (linear) part in the increment of a function: the differential is a linear function of the increment Δx, and although the error ε may be nonlinear, it tends to zero rapidly as Δx tends to zero. Differentials in several variables Following Goursat (1904, I, §15), for functions of more than one independent variable, the partial differential of y with respect to any one of the variables x1 is the principal part of the change in y resulting from a change dx1 in that one variable. The partial differential is therefore (∂y/∂x1) dx1, involving the partial derivative of y with respect to x1. The sum of the partial differentials with respect to all of the independent variables is the total differential dy = (∂y/∂x1) dx1 + ⋯ + (∂y/∂xn) dxn, which is the principal part of the change in y resulting from changes in the independent variables xi. More precisely, the increment of y satisfies Δy = (∂y/∂x1) Δx1 + ⋯ + (∂y/∂xn) Δxn + ε1 Δx1 + ⋯ + εn Δxn, where the error terms εi tend to zero as the increments Δxi jointly tend to zero. The total differential is then rigorously defined as dy(x, Δx1, ..., Δxn) = (∂y/∂x1) Δx1 + ⋯ + (∂y/∂xn) Δxn. Since, with this definition, dxi(Δx1, ..., Δxn) = Δxi, one may write dy = (∂y/∂x1) dx1 + ⋯ + (∂y/∂xn) dxn. As in the case of one variable, the approximate identity Δy ≈ dy holds, in which the total error can be made as small as desired relative to √(Δx1² + ⋯ + Δxn²) by confining attention to sufficiently small increments. Application of the total differential to error estimation In measurement, the total differential is used in estimating the error Δf of a function f based on the errors Δx, Δy, ... of the parameters x, y, .... Assuming that the interval is short enough for the change to be approximately linear: - Δf(x) = f'(x) × Δx and that all variables are independent, then for all variables, Δf = fx Δx + fy Δy + ⋯. This is because the derivative fx with respect to the particular parameter x gives the sensitivity of the function f to a change in x, in particular the error Δx. As they are assumed to be independent, the analysis describes the worst-case scenario. The absolute values of the component errors are used, because after simple computation, the derivative may have a negative sign.
From this principle the error rules of summation, multiplication etc. are derived, e.g.: - Let f(a, b) = a × b; - Δf = faΔa + fbΔb; evaluating the derivatives - Δf = bΔa + aΔb; dividing by f, which is a × b - Δf/f = Δa/a + Δb/b That is to say, in multiplication, the total relative error is the sum of the relative errors of the parameters. To illustrate how this depends on the function considered, consider the case where the function is f(a, b) = a ln b instead. Then, it can be computed that the error estimate is - Δf/f = Δa/a + Δb/(b ln b) with an extra 'ln b' factor not found in the case of a simple product. This additional factor tends to make the error smaller, as ln b is not as large as a bare b. Higher-order differentials of a function y = f(x) of a single variable x can be defined via: and, in general, Informally, this justifies Leibniz's notation for higher-order derivatives When the independent variable x itself is permitted to depend on other variables, then the expression becomes more complicated, as it must include also higher order differentials in x itself. Thus, for instance, and so forth. Similar considerations apply to defining higher order differentials of functions of several variables. For example, if f is a function of two variables x and y, then Higher order differentials in several variables also become more complicated when the independent variables are themselves allowed to depend on other variables. For instance, for a function f of x and y which are allowed to depend on auxiliary variables, one has Because of this notational infelicity, the use of higher order differentials was roundly criticized by Hadamard 1935, who concluded: - Enfin, que signifie ou que représente l'égalité - A mon avis, rien du tout. That is: Finally, what is meant, or represented, by the equality [...]? In my opinion, nothing at all. In spite of this skepticism, higher order differentials did emerge as an important tool in analysis In these contexts, the nth order differential of the function f applied to an increment Δx is defined by or an equivalent expression, such as where is an nth forward difference with increment tΔx. This definition makes sense as well if f is a function of several variables (for simplicity taken here as a vector argument). Then the nth differential defined in this way is a homogeneous function of degree n in the vector increment Δx. Furthermore, the Taylor series of f at the point x is given by The higher order Gâteaux derivative generalizes these considerations to infinite dimensional spaces. A number of properties of the differential follow in a straightforward manner from the corresponding properties of the derivative, partial derivative, and total derivative. These include: - Linearity: For constants a and b and differentiable functions f and g, - Product rule: For two differentiable functions f and g, - If y = f(u) is a differentiable function of the variable u and u = g(x) is a differentiable function of x, then - If y = f(x1, ..., xn) and all of the variables x1, ..., xn depend on another variable t, then by the chain rule for partial derivatives, one has - Heuristically, the chain rule for several variables can itself be understood by dividing through both sides of this equation by the infinitely small quantity dt. - More general analogous expressions hold, in which the intermediate variables x i depend on more than one variable. 
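To make the error-estimation rules derived above concrete, here is a short worked example; the numerical values are illustrative only and do not come from the source.

```latex
% Worked example of relative-error propagation for a product, f(a,b) = a*b,
% with assumed measurements a = 2.0 +/- 0.1 and b = 5.0 +/- 0.2.
\[
  \frac{\Delta f}{f} \approx \frac{\Delta a}{a} + \frac{\Delta b}{b}
  = \frac{0.1}{2.0} + \frac{0.2}{5.0} = 0.05 + 0.04 = 0.09,
\]
\[
  f = a b = 10.0, \qquad \Delta f \approx 0.09 \times 10.0 = 0.9 .
\]
```

For the same assumed inputs, the f(a, b) = a ln b variant discussed above gives a smaller relative contribution from b, since Δb/(b ln b) = 0.2/(5.0 × ln 5.0) ≈ 0.025 rather than 0.04, which illustrates the "extra ln b factor" remark.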
If there exists an m × n matrix A such that in which the vector ε → 0 as Δx → 0, then f is by definition differentiable at the point x. The matrix A is sometimes known as the Jacobian matrix, and the linear transformation that associates to the increment Δx ∈ Rn the vector AΔx ∈ Rm is, in this general setting, known as the differential df(x) of f at the point x. This is precisely the Fréchet derivative, and the same construction can be made to work for a function between any Banach spaces. Another fruitful point of view is to define the differential directly as a kind of directional derivative: which is the approach already taken for defining higher order differentials (and is most nearly the definition set forth by Cauchy). If t represents time and x position, then h represents a velocity instead of a displacement as we have heretofore regarded it. This yields yet another refinement of the notion of differential: that it should be a linear function of a kinematic velocity. The set of all velocities through a given point of space is known as the tangent space, and so df gives a linear function on the tangent space: a differential form. With this interpretation, the differential of f is known as the exterior derivative, and has broad application in differential geometry because the notion of velocities and the tangent space makes sense on any differentiable manifold. If, in addition, the output value of f also represents a position (in a Euclidean space), then a dimensional analysis confirms that the output value of df must be a velocity. If one treats the differential in this manner, then it is known as the pushforward since it "pushes" velocities from a source space into velocities in a target space. Although the notion of having an infinitesimal increment dx is not well-defined in modern mathematical analysis, a variety of techniques exist for defining the infinitesimal differential so that the differential of a function can be handled in a manner that does not clash with the Leibniz notation. These include: - Defining the differential as a kind of differential form, specifically the exterior derivative of a function. The infinitesimal increments are then identified with vectors in the tangent space at a point. This approach is popular in differential geometry and related fields, because it readily generalizes to mappings between differentiable manifolds. - Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry. - Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced. - Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers which contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson. Examples and applications Differentials may be effectively used in numerical analysis to study the propagation of experimental errors in a calculation, and thus the overall numerical stability of a problem (Courant 1937a). Suppose that the variable x represents the outcome of an experiment and y is the result of a numerical computation applied to x. The question is to what extent errors in the measurement of x influence the outcome of the computation of y. 
If the x is known to within Δx of its true value, then Taylor's theorem gives the following estimate on the error Δy in the computation of y: where ξ = x + θΔx for some 0 < θ < 1. If Δx is small, then the second order term is negligible, so that Δy is, for practical purposes, well-approximated by dy = f'(x)Δx. The differential is often useful to rewrite a differential equation in the form in particular when one wants to separate the variables. - For a detailed historical account of the differential, see Boyer 1959, especially page 275 for Cauchy's contribution on the subject. An abbreviated account appears in Kline 1972, Chapter 40. - Cauchy explicitly denied the possibility of actual infinitesimal and infinite quantities (Boyer 1959, pp. 273–275), and took the radically different point of view that "a variable quantity becomes infinitely small when its numerical value decreases indefinitely in such a way as to converge to zero" (Cauchy 1823, p. 12; translation from Boyer 1959, p. 273). - Boyer 1959, p. 275 - Boyer 1959, p. 12: "The differentials as thus defined are only new variables, and not fixed infinitesimals..." - Courant 1937a, II, §9: "Here we remark merely in passing that it is possible to use this approximate representation of the increment Δy by the linear expression hƒ(x) to construct a logically satisfactory definition of a "differential", as was done by Cauchy in particular." - Boyer 1959, p. 284 - See, for instance, the influential treatises of Courant 1937a, Kline 1977, Goursat 1904, and Hardy 1905. Tertiary sources for this definition include also Tolstov 2001 and Ito 1993, §106. - Cauchy 1823. See also, for instance, Goursat 1904, I, §14. - Goursat 1904, I, §14 - In particular to infinite dimensional holomorphy (Hille & Phillips 1974) and numerical analysis via the calculus of finite differences. - Goursat 1904, I, §17 - Goursat 1904, I, §§14,16 - Eisenbud & Harris 1998. - See Kock 2006 and Moerdijk & Reyes 1991. - See Robinson 1996 and Keisler 1986. - Boyer, Carl B. (1959), The history of the calculus and its conceptual development, New York: Dover Publications, MR 0124178. - Cauchy, Augustin-Louis (1823), Résumé des Leçons données à l'Ecole royale polytechnique sur les applications du calcul infinitésimal. - Courant, Richard (1937a), Differential and integral calculus. Vol. I, Wiley Classics Library, New York: John Wiley & Sons (published 1988), ISBN 978-0-471-60842-4, MR 1009558. - Courant, Richard (1937b), Differential and integral calculus. Vol. II, Wiley Classics Library, New York: John Wiley & Sons (published 1988), ISBN 978-0-471-60840-0, MR 1009559. - Courant, Richard; John, Fritz (1999), Introduction to Calculus and Analysis Volume 1, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 3-540-65058-X, MR 1746554 - Eisenbud, David; Harris, Joe (1998), The Geometry of Schemes, Springer-Verlag, ISBN 0-387-98637-5. - Fréchet, Maurice (1925), "La notion de différentielle dans l'analyse générale", Annales Scientifiques de l'École Normale Supérieure, Série 3, 42: 293–323, ISSN 0012-9593, MR 1509268. - Goursat, Édouard (1904), A course in mathematical analysis: Vol 1: Derivatives and differentials, definite integrals, expansion in series, applications to geometry, E. R. Hedrick, New York: Dover Publications (published 1959), MR 0106155. - Hadamard, Jacques (1935), "La notion de différentiel dans l'enseignement", Mathematical Gazette, XIX (236): 341–342, JSTOR 3606323. 
- Hardy, Godfrey Harold (1908), A Course of Pure Mathematics, Cambridge University Press, ISBN 978-0-521-09227-2. - Hille, Einar; Phillips, Ralph S. (1974), Functional analysis and semi-groups, Providence, R.I.: American Mathematical Society, MR 0423094. - Ito, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4. - Kline, Morris (1977), "Chapter 13: Differentials and the law of the mean", Calculus: An intuitive and physical approach, John Wiley and Sons. - Kline, Morris (1972), Mathematical thought from ancient to modern times (3rd ed.), Oxford University Press (published 1990), ISBN 978-0-19-506136-9 - Keisler, H. Jerome (1986), Elementary Calculus: An Infinitesimal Approach (2nd ed.). - Kock, Anders (2006), Synthetic Differential Geometry (PDF) (2nd ed.), Cambridge University Press. - Moerdijk, I.; Reyes, G.E. (1991), Models for Smooth Infinitesimal Analysis, Springer-Verlag. - Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3. - Tolstov, G.P. (2001) , "Differential", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4. - Differential Of A Function at Wolfram Demonstrations Project
<urn:uuid:d6924447-a12d-4f32-be12-4f612e075ce4>
3.609375
4,399
Knowledge Article
Science & Tech.
42.734964
95,617,848
Natural defence against natural disasters World Wetlands Day focuses on conservation that minimizes nature’s extremes Unprecedented. Extraordinary. Historic. Words like these were once reserved for the rarest of weather events. Today, we’re hearing them more often as the rare becomes the regular. Case in point: when flooding hit the Canadian Prairies in 2014, the excess water devastated rural communities. Then, severe drought challenged some of those same communities the next year. “It used to be rare to see these events piggybacking each other so quickly,” says Pascal Badiou, PhD, a research scientist with DUC’s Institute for Wetland and Waterfowl Research. Badiou is leading several studies that examine the impacts of wetland loss in Canada, particularly on the Prairies. Badiou views this more frequent cycling between droughts and floods as symptomatic of climate change. And a growing body of research, including his own, shows wetlands are the frontlines of defense against these kind of disasters. It’s knowledge like this that inspired the 2017 theme for World Wetlands Day – “Wetlands for Disaster Risk Reduction.” Celebrated annually on February 2, the global event emphasizes how everyone benefits from healthy wetlands. Research shows that wetlands can reduce the severity of flooding and drought, holding excess water during wet periods and slowly releasing it during dry periods. They also store carbon and provide essential habitat for migratory and threatened species. Wetlands are essential to Canada’s fresh water too, filtering out pollutants to protect water quality. Even though wetland conservation is important for the health and safety of all Canadians, we continue to lose wetlands to unsustainable development. Conservation organizations are working hard to reverse this trend. By securing and restoring wetlands, influencing policy, sharing knowledge and spearheading groundbreaking research, DUC is leading the way in protecting the wetlands that reduce our risk of suffering more natural disasters. About World Wetlands Day: February 2 marks the adoption of the Convention of Wetlands, which took place in 1971 in the Iranian city of Ramsar. It’s a treaty negotiated by countries and non-governmental organizations, which provides the framework for the conservation and wise use of wetlands. Canada is one of the treaty’s 169 contracting parties and currently has 37 designated Ramsar sites. Of these, 17 are national wildlife areas or migratory bird sanctuaries. Read These Stories NextFind more stories Habitat restoration project provides business and environmental benefits at Alberta cattle farm. School field trips connect Alberta youngsters to nature.
<urn:uuid:1047a154-b92d-444e-bd61-67e9f3353462>
3.640625
536
News Article
Science & Tech.
31.992436
95,617,867
Transcript of Plate Tectonics Identifying Plate Boundaries There are three types of plate boundaries: convergent, divergent and transform boundaries. Transform boundaries are when two plates slide past one another. Convection currents are caused by very hot material at the deepest part of the mantle rising, then cooling, sinking again and then heating, rising and repeating the cycle over and over. The Mid-Atlantic Ridge was created by convection currents that pushed hot mantle up, creating the ridge. Plate Boundary Identification Transform Boundaries: occur when two plates slide past each other. As the plates jam against each other, earthquakes happen through a wide boundary zone. Divergent Boundaries: occur when two plates move away from each other. Frequent earthquakes are caused along the rift. Convergent Boundaries: occur when two plates come together. The impact of the colliding plates causes the plates to form a rugged mountain range. Resulting Features or Events Earthquakes: occur when tectonic plates suddenly slide past one another, or when there is other movement in the outer crust. The biggest earthquake recorded was known as The Great Chilean Earthquake. This was a 9.5 on the magnitude scale. Approximately 1,655 died and 3,000 were injured. Rift Valley: a lowland valley that forms where Earth's tectonic plates move apart (rift). Rift valleys can be found on land and on the seafloor. Tsunamis: a series of waves caused by earthquakes. Results of a 2011 tsunami in Japan: leaking radioactive water, 230,000 people lost their homes and $300 billion in damages. Modern data support the plate tectonics theory because nowadays we have GPS, which has been the most useful tool for studying the Earth's crustal movements. This supports the theory because we know the creep rate at the San Andreas fault is approximately 28 to 34 millimeters, or a little over 1 inch, per year. Scientists create large networks of GPS receivers near plate boundaries so they know how much the plates move. Another piece of evidence that supports the theory of plate tectonics is robotic study of the sea floor. We use sound waves, which use pressure to move through gases, liquids and solids. Light cannot be used, as it is absorbed by water very quickly, usually illuminating no further than 30 meters. So by using sound, scientists can find out different properties of the sea floor. Nuclear Waste Storage Site Ice Sheet Storage, Antarctica By: Amelia Perdue, Lauren Meadows and Kirsten Moore There are many pieces of historical evidence that suggest the continents were once together. One piece of evidence is that similar fossils of species are found on continents that are nowhere near touching. This indicates that because the fossils are similar, and because they are found on continents that are nowhere near each other, these continents were once connected in those areas. National Oceanography Centre Website United States Geological Survey Ice sheet storage for nuclear waste is beneficial because it is away from most people, with only 4000 people living there in the summer, and 1000 people living there in the winter. 
The container's own heat would melt a shaft in the ice, and the container would sink one or two kilometers below the surface, where it would come to rest on a rock/ice interface. Earthquakes would cause few problems because they occur only rarely there. With only two volcanic eruptions in the last 50 years, there is also little chance that volcanoes will cause a problem.
<urn:uuid:f187bcbe-71dc-43c6-bd3f-b5f77b8eeed5>
4.375
796
Truncated
Science & Tech.
40.819408
95,617,899
Game Programming using Qt 5.x Beginner's Guide - Second Edition: Design and build fun games with Qt and Qt Quick 2 using associated toolsets If you are planning to learn about Qt and its associated toolsets to build apps and games, this book is a must have. - Learn to create simple 2D (and complex 3D) graphics and games using all of Qt's tools and widgets available for game development - Get acquainted with a small yet powerful addition-the Qt Gamepad module that makes it possible to integrate gamepad support in C++ and QML applications - Delve into OpenGL and learn how it is used in Qt applications Qt is the leading cross-platform toolkit for all significant desktop, mobile, and embedded platforms and is becoming more popular by the day, especially on mobile and embedded devices. It's a powerful tool that perfectly fits the needs of game developers. You only need to create your game once and deploy it on all major platforms such as iOS, Android, and WinRT, without changing a single source file. This book will help you learn the nitty-gritty of Qt and will equip you with the necessary toolsets to build apps and games. The book begins with a brief introduction to creating an application and preparing a working environment for both desktop and mobile platforms. You will learn how to use built-in Qt widgets and Form Editor to create a classic GUI application. You'll then explore the basics of creating graphical interfaces and Qt's core concepts (data processing and display) that will help you create high-performance games. As you progress through the chapters, you'll learn to enrich your games by implementing network connectivity and employing scripting. You will learn about Qt's capabilities for handling strings and files, data storage, and serialization. Moving on, you will also learn about the new Qt Gamepad module and how to add it in your game. You'll then delve into OpenGL, and how it can be used in Qt applications to implement hardware-accelerated 2D and 3D graphics. You will then explore various facets of Qt Quick: how it can be used in games to add game logic and design animations, add game physics, and build astonishing UIs for your games. By the end of this book, you will have developed the skillset to develop interesting games with Qt. What you will learn - Install the latest version of Qt on your system - Understand the basic concepts of every Qt game and application - Develop 2D object-oriented graphics using Qt Graphics View - Build multiplayer games or add a chat function to your games with Qt Network module - Script your game with Qt QML - Explore the Qt Gamepad module in order to integrate gamepad support in C++ and QML applications - Program resolution-independent and fluid UIs using QML and Qt Quick - Control your game flow in line with mobile device sensors - Test and debug your game easily with Qt Creator and Qt Test Who This Book Is For If you want to create great graphical user interfaces and astonishing games with Qt, this book is ideal for you. No previous knowledge of Qt is required; however knowledge of C++ is mandatory.
<urn:uuid:1cf97c07-dcee-4dce-be65-c0b6da7e3595>
2.78125
654
Product Page
Software Dev.
43.908075
95,617,936
WASHINGTON (AP) ? Worldwide levels of the chief greenhouse gas that causes global warming have hit a milestone, reaching an amount never before encountered by humans, federal scientists said Friday. Carbon dioxide was measured at 400 parts per million at the oldest monitoring station which is in Hawaii sets the global benchmark. The last time the worldwide carbon level was probably that high was about 2 million years ago, said Pieter Tans of the National Oceanic and Atmospheric Administration. That was during the Pleistocene Era. "It was much warmer than it is today," Tans said. "There were forests in Greenland. Sea level was higher, between 10 and 20 meters (33 to 66 feet)." Other scientists say it may have been 10 million years ago that Earth last encountered this much carbon dioxide in the atmosphere. The first modern humans only appeared in Africa about 200,000 years ago. The measurement was recorded Thursday and it is only a daily figure, the monthly and yearly average will be smaller. The number 400 has been anticipated by climate scientists and environmental activists for years as a notable indicator, in part because it's a round number ? not because any changes in man-made global warming happen by reaching it. "Physically, we are no worse off at 400 ppm than we were at 399 ppm," Princeton University climate scientist Michael Oppenheimer said. "But as a symbol of the painfully slow pace of measures to avoid a dangerous level of warming, it's somewhat unnerving." Environmental activists, such as former Vice President Al Gore, seized on the milestone. "This number is a reminder that for the last 150 years ? and especially over the last several decades ? we have been recklessly polluting the protective sheath of atmosphere that surrounds the Earth and protects the conditions that have fostered the flourishing of our civilization," Gore said in a statement. "We are altering the composition of our atmosphere at an unprecedented rate." Carbon dioxide traps heat just like in a greenhouse and most of it stays in the air for a century; some lasts for thousands of years, scientists say. It accounts for three-quarters of the planet's heat-trapping gases. There are others, such as methane, which has a shorter life span but traps heat more effectively. Both trigger temperatures to rise over time, scientists say, which is causing sea levels to rise and some weather patterns to change. When measurements of carbon dioxide were first taken in 1958, it measured 315 parts per million. Some scientists and environmental groups promote 350 parts per million as a safe level for CO2, but scientists acknowledge they don't really know what levels would stop the effects of global warming. The level of carbon dioxide in the air is rising faster than in the past decades, despite international efforts by developed nations to curb it. On average the amount is growing by about 2 parts per million per year. That's 100 times faster than at the end of the Ice Age. Back then, it took 7,000 years for carbon dioxide to reach 80 parts per million, Tans said. Because of the burning of fossil fuels, such as oil and coal, carbon dioxide levels have gone up by that amount in just 55 years. Before the Industrial Revolution, carbon dioxide levels were around 280 ppm, and they were closer to 200 during the Ice Age, which is when sea levels shrank and polar places went from green to icy. There are natural ups and downs of this greenhouse gas, which comes from volcanoes and decomposing plants and animals. 
But that's not what has driven current levels so high, Tans said. He said the amount should be even higher, but the world's oceans are absorbing quite a bit, keeping it out of the air. "What we see today is 100 percent due to human activity," said Tans, a NOAA senior scientist. The burning of fossil fuels, such as coal for electricity and oil for gasoline, has caused the overwhelming bulk of the man-made increase in carbon in the air, scientists say. The world pumps on average 2.4 million pounds of carbon dioxide into the air every second for a total of 38.2 billion tons in 2011, according international calculations published in a scientific journal in December. China spews 10 billion tons of carbon dioxide into the air per year, leading all countries, and its emissions are growing about 10 percent annually. The U.S. at No. 2 is slowly cutting emissions and is down to 5.9 billion tons per year. The speed of the change is the big worry, said Pennsylvania State University climate scientist Michael Mann. If carbon dioxide levels go up 100 parts per million over thousands or millions of years, plants and animals can adapt. But that can't be done at the speed it is now happening. Last year, regional monitors briefly hit 400 ppm in the Arctic. But those monitoring stations aren't seen as a world mark like the one at Mauna Loa, Hawaii. Generally carbon levels peak in May then fall slightly, so the yearly average is usually a few parts per million lower than May levels. NOAA monitoring at Mauna Loa: http://www.esrl.noaa.gov/gmd/ccgg/trends/weekly.html Seth Borenstein can be followed at http://twitter.com/borenbears
<urn:uuid:0840d846-ed6d-44d4-95da-15c4b0354acb>
3.484375
1,085
News Article
Science & Tech.
58.444714
95,617,938
Perl 5 to Perl 6 guide - Overview How do I do what I used to do? These documents should not be mistaken for a beginner tutorial or a promotional overview of Perl 6; it is intended as a technical reference for Perl 6 learners with a strong Perl 5 background and for anyone porting Perl 5 code to Perl 6. Perl 6 in a Nutshell provides a quick overview of things changed in syntax, operators, compound statements, regular expressions, command-line flags, and various other bits and pieces. The Syntax section provides an overview of the syntactic differences between Perl 5 and Perl 6: how it is still mostly free form, additional ways to write comments, and how switch is very much a Perl 6 thing. The Functions section describes all of the Perl 5 functions and their Perl 6 equivalent and any differences in behaviour. It also provides references to ecosystem modules that provide the Perl 5 behaviour of functions,either existing in Perl 6 with slightly different semantics (such as shift), or non-existing in Perl 6 (such as The Special Variables section describes if and how a lot of Perl 5's special (punctuation) variables are supported in Perl 6.
<urn:uuid:40b0d9f4-e310-4e40-9319-4bb34dd779f1>
2.5625
245
Documentation
Software Dev.
43.302048
95,617,942
Export fluxes of calcite in the eastern equatorial Pacific from the Last Glacial Maximum to present MetadataShow full item record The eastern equatorial Pacific (EEP) is an important center of biological productivity, generating significant organic carbon and calcite fluxes to the deep ocean. We reconstructed paleocalcite flux for the past 30,000 years in four cores collected beneath the equatorial upwelling and the South Equatorial Current (SEC) by measuring ex230Th-normalized calcite accumulation rates corrected for dissolution with a newly developed proxy for “fraction of calcite preserved.” This method produced very similar results at the four sites and revealed that the export flux of calcite was 30–50% lower during the LGM compared to the Holocene. The internal consistency of these results supports our interpretation, which is also in agreement with emerging data indicating lower glacial productivity in the EEP, possibly as a result of lower nutrient supply from the southern ocean via the Equatorial Undercurrent. However, these findings contradict previous interpretations based on mass accumulation rates (MAR) of biogenic material in the sediment of the EEP, which have been taken as reflecting higher glacial productivity due to stronger wind-driven upwelling. Author Posting. © American Geophysical Union, 2004. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Paleoceanography 19 (2004): PA2018, doi:10.1029/2003PA000986. Suggested CitationArticle: Loubere, Paul, Mekik, Figen, Francois, Roger, Pichat, Sylvain, "Export fluxes of calcite in the eastern equatorial Pacific from the Last Glacial Maximum to present", Paleoceanography 19 (2004): PA2018, DOI:10.1029/2003PA000986, https://hdl.handle.net/1912/3428 Showing items related by title, author, creator and subject. Lower export production during glacial periods in the equatorial Pacific derived from (231Pa/230Th)xs,0 measurements in deep-sea sediments Pichat, Sylvain; Sims, Kenneth W. W.; Francois, Roger; McManus, Jerry F.; Leger, Susan Brown; Albarede, Francis (American Geophysical Union, 2004-12-16)The (231Pa/230Th)xs,0 records obtained from two cores from the western (MD97-2138; 1°25′S, 146°24′E, 1900 m) and eastern (Ocean Drilling Program Leg 138 Site 849, 0°11.59′N, 110°31.18′W, 3851 m) equatorial Pacific display ... Radiocarbon evidence for a possible abyssal front near 3.1 km in the glacial equatorial Pacific Ocean Keigwin, Lloyd D.; Lehman, Scott J. (2014-09)We investigate the radiocarbon ventilation age in deep equatorial Pacific sediment cores using the difference in conventional 14C age between coexisting benthic and planktonic foraminifera, and integrate those results with ... Seismic interpretation of pelagic sedimentation regimes in the 18–53 Ma eastern equatorial Pacific : basin-scale sedimentation and infilling of abyssal valleys Tominaga, Masako; Lyle, Mitchell; Mitchell, Neil C. (American Geophysical Union, 2011-03-10)Understanding how pelagic sediment has been eroded, transported, and deposited is critical to evaluating pelagic sediment records for paleoceanography. We use digital seismic reflection data from an Integrated Ocean Drilling ...
<urn:uuid:a6709d91-2b8f-4d73-8512-eba3d775d6e9>
2.625
764
Academic Writing
Science & Tech.
34.872003
95,617,955
General Theory of Finite Probability Spaces If S is any probability space (Definition 1.6), we have defined an event A as any subset of S (Definition 1.12). In Chapter 2 we worked with a uniform space S and considered the problem of computing the probability p(A) by counting the elements in A and in S and using Equation 1.11: p(A) = n(A)/n(S). Keywords: Conditional Probability, Probability Space, Sample Space, Product Rule, Exclusive Event
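To make the counting rule concrete (an added illustration, not from the book): take the uniform space S = {1, 2, 3, 4, 5, 6} for a single roll of a fair die and let A be the event "an even number is rolled", so that A = {2, 4, 6}. Equation 1.11 then gives p(A) = n(A)/n(S) = 3/6 = 1/2.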
<urn:uuid:6f5eebf3-979a-4354-99bc-4b9cdb81d420>
2.65625
117
Truncated
Science & Tech.
50.741744
95,617,970
With the latest piece of the puzzle just published in a scientific journal, a solar system mystery that has perplexed people for more than 20 years has been solved, truly thanks to the support of Planetary Society members. If you want to test your planetary knowledge, or just have a masochistic love of tests, I’ve posted the midterm I’ve given to my students in my online Introductory Astronomy and Planetary Science class at California State University Dominguez Hills. I gave the first lecture of my Introduction to Astronomy and Planetary Science course last Wednesday, starting with a tour of the solar system. The course is Physics 195 at California State University Dominguez Hills. We explore space for the noblest goals of science and exploration, and we often persevere in spite of challenges. But space exploration is fraught with bad things happening, or, to use the technical term, ouchies. The Planetary Society's Phobos LIFE biomodule will re-enter the Earth's atmosphere in the next few days with the rest of the Phobos-Grunt mission. I am ecstatic to report that at 20:16 UTC, millions of passengers on board the Planetary Society's Phobos LIFE biomodule launched into space inside the Phobos Sample Return (also known as Phobos Grunt or Phobos Soil) spacecraft. Years in the making, our Phobos LIFE (Living Interplanetary Flight Experiment) is nearing launch this November. Phobos LIFE will send millions of passengers on a 34-month journey to Mars’ moon Phobos and back. We are super excited that the Planetary Society’s Phobos LIFE (Living Interplanetary Flight Experiment) is about ready to launch to Mars’ moon Phobos and back. We have been working for years preparing this unique test of the effects of long term exposure to deep space on a wide variety of life. In the middle of the night on June 1, 2011, millions of passengers returned safely to Earth as part of the great conclusion to space shuttle Endeavour's last flight, STS-134. Many of those millions of passengers were part of the Planetary Society's Shuttle LIFE experiment. Five different kinds of creatures from all three domains of life are part of Shuttle LIFE. NASA has selected the OSIRIS-REx mission as the next New Frontiers mission. OSIRIS-REx (Origins-Spectral Interpretation-Resource Identification-Security-Regolith Explorer) will be the first U.S. asteroid sample return.
<urn:uuid:b965809c-b24e-4621-8fe0-493c50b6cb49>
2.875
519
Personal Blog
Science & Tech.
40.818145
95,617,982
What do we have in this chapter 11 part 4? Using IP Header Include Option The one limitation of raw sockets is that you can work only with certain protocols that are already defined, such as ICMP and IGMP. You cannot create a raw socket with IPPROTO_UDP and manipulate the UDP header; likewise with TCP. To manipulate the IP header as well as either the TCP or UDP header (or any other protocol encapsulated in IP), you must use the IP_HDRINCL socket option with a raw socket. For IPv6, the option is IPV6_HDRINCL. This option allows you to build your own IP header as well as other protocols' headers. In addition to manipulating well-known protocols such as UDP, using raw sockets with the header include option allows you to implement your own protocol scheme that is encapsulated in IP. This is done by creating a raw socket and using the IPPROTO_RAW value as the protocol. This allows you to set the protocol field in the IP header manually and build your own custom protocol header. However, in this section we will take a look at how to build your own UDP packets so that you can gain a good understanding of the steps involved. Once you understand how to manipulate the UDP header, creating your own protocol header or manipulating other protocols encapsulated in IP is fairly trivial. Before getting into the details of using the header include option, you need to know one important difference between using this option with IPv4 and IPv6. For IPv4, the stack still verifies some fields within the supplied IPv4 header. For example, the IPv4 identification field is set by the stack and the stack will fragment the packet if necessary. That is, if you create a raw IPv4 packet and set IP_HDRINCL and send a packet larger than the MTU size, the stack will fragment the data into multiple packets for you. For IPv6, if the IPV6_HDRINCL option is set, it is your responsibility to compute all the headers and fields necessary. If you submit a send larger than the MTU size, your application must create the IPv6 fragment headers and compute the offsets correctly; otherwise, the IPv6 stack will drop the packet without sending it. When you use the header include option, you are required to fill in the IP header yourself for every send call, as well as the headers of any other protocols wrapped within. The UDP header is quite a bit simpler than the IP header. It is only 8 bytes long and contains only four fields, as shown in Figure 11-3. The first two fields are the source and destination port numbers. They are 16 bits each. The third field is the UDP length, which is the length, in bytes, of the UDP header and data. The fourth field is the checksum, which we will discuss shortly. The last part of the UDP packet is the data. Figure 11-3 UDP header format Because UDP is an unreliable protocol, calculating the checksum is optional. Unlike the IPv4 checksum, which covers only the IPv4 header, the UDP checksum covers the data and also includes part of the IPv4 header. The additional fields required to calculate the UDP checksum are known as a pseudo-header. The IPv4 UDP pseudo-header is composed of the following items: Added to these items are the UDP header and data. The method of calculating the checksum is the 16-bit one's complement sum. Because the data can be an odd number of bytes, it might be necessary to pad a zero byte to the end of the data to calculate the checksum. This pad field is not transmitted as part of the data. Figure 11-4 illustrates all of the fields required for the checksum calculation. 
The first three 32-bit words make up the UDP pseudo-header. The UDP header and its data follows that. Notice that because the checksum is calculated on 16-bit values, the data might need to be padded with a zero byte. Figure 11-4 IPv4 pseudo-header with UDP packet and data For IPv6, you have already seen how to calculate the IPv6 pseudo-header as is required to calculate the checksum for ICMPv6 packets. The calculation is the same for UDP with the IPv6 pseudo-header coming first and is followed by the UDP header and payload (zero padded to the next 16-bit boundary if necessary). The IPv6 pseudo-header is shown in Figure 11-5. Figure 11-5 IPv6 pseudo-header with UDP packet and data The following code snippet shows how to build an IPv4 and UDP header: // Define the ICMP header typedef struct icmp_hdr unsigned char icmp_type; unsigned char icmp_code; unsigned short icmp_checksum; unsigned short icmp_id; unsigned short icmp_sequence; unsigned long icmp_timestamp; } ICMP_HDR, *PICMP_HDR, FAR *LPICMP_HDR; char buf[sizeof(ICMP_HDR) + 32]; // IPv4 header typedef struct ip_hdr unsigned char ip_verlen; // 4-bit IPv4 version 4-bit header length (in 32-bit words) unsigned char ip_tos; // IP type of service unsigned short ip_totallength; // Total length unsigned short ip_id; // Unique identifier unsigned short ip_offset; // Fragment offset field unsigned char ip_ttl; // Time to live unsigned char ip_protocol; // Protocol(TCP,UDP etc) unsigned short ip_checksum; // IP checksum unsigned int ip_srcaddr; // Source address unsigned int ip_destaddr; // Source address } IPV4_HDR, *PIPV4_HDR, FAR * LPIPV4_HDR; // Define the UDP header typedef struct udp_hdr unsigned short src_portno; // Source port no. unsigned short dst_portno; // Dest. port no. unsigned short udp_length; // Udp packet length unsigned short udp_checksum; // Udp checksum (optional) } UDP_HDR, *PUDP_HDR; char buf[MAX_BUFFER], // large enough buffer USHORT sourceport=5000, Destport=5001; int payload=512, // size of UDP data // Initialize the IPv4 header v4hdr = (IPV4_HDR *)buf; v4hdr->ip_verlen = (4 << 4) │ (sizeof(IPV4_HDR) / sizeof(ULONG)); v4hdr->ip_tos = 0; v4hdr->ip_totallength = htons(sizeof(IPV4_HDR) + sizeof(UDP_HDR) + payload); v4hdr->ip_id = 0; v4hdr->ip_offset = 0; v4hdr->ip_ttl = 8; // Time-to-live is eight v4hdr->ip_protocol = IPPROTO_UDP; v4hdr->ip_checksum = 0; v4hdr->ip_srcaddr = inet_addr("184.108.40.206"); v4hdr->ip_destaddr = inet_addr("220.127.116.11"); // Calculate checksum for IPv4 header // The checksum() function computes the 16-bit one's // complement on the specified buffer. v4hdr->ip_checksum = checksum(v4hdr, sizeof(IPV4_HDR)); // Initialize the UDP header udphdr = (UDP_HDR *)&buf[sizeof(IPV4_HDR)]; udphdr->src_portno = htons(sourceport); udphdr->dst_portno = htons(destport); udphdr->udp_length = htons(sizeof(UDP_HDR) + payload); udphdr->udp_checksum = 0; // Initialize the UDP payload to something data = &buf[sizeof(IPV4_HDR) + sizeof(UDP_HDR)]; memset(data, '^', payload); // Calculate the IPv4 and UDP pseudo-header checksum - this routine // extracts all the necessary fields from the headers and calculates // the checksum over it. See the iphdrinc sample for the implementation // of Ipv4PseudoHeaderChecksum(). 
udphdr->udp_checksum = Ipv4PseudoHeaderChecksum(v4hdr, udphdr, data, sizeof(IPV4_HDR) + sizeof(UDP_HDR) + payload); // Create the raw UDP socket s = socket(AF_INET, SOCK_RAW, IPPROTO_UDP); // Set the header include option optval = 1; setsockopt(s, IPPROTO_IP, IP_HDRINCL, (char *)&optval, sizeof(optval)); // Send the data ((SOCKADDR_IN *)&dest)->sin_family = AF_INET; ((SOCKADDR_IN *)&dest)->sin_port = htons(destport); ((SOCKADDR_IN *)&dest)->sin_addr.s_addr = inet_addr("18.104.22.168"); sendto(s, buf, sizeof(IPV4_HDR) + sizeof(UDP_HDR) + payload, 0, (SOCKADDR *)&dest, sizeof(dest)); This code is straightforward and easy to follow. The IPv4 header is initialized with valid entries. In this case, a bogus source IPv4 address is used (22.214.171.124) but a valid destination address is supplied. Also, we set the TTL value to 8. Lastly, the checksum is calculated for the IPv4 header only. After the IPv4 header is the UDP header, as indicated by the ip_protocol field of the IPv4 header being set to IPPROTO_UDP. For that header, the source and destination ports are set in addition to the length of the UDP header and its payload. The last piece is to compute the pseudo-header checksum, which isn't shown but is an easy computation. The necessary fields are extracted out of the various headers after which the checksum can be computed. The following program example creates raw UDP packets over IPv4 and IPv6. This sample also has a routine to compute the pseudo-header checksum for both IPv4 and IPv6.
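The sample above calls two helper routines, checksum() for the IPv4 header and Ipv4PseudoHeaderChecksum() for the UDP pseudo-header, without listing them. As a rough sketch (an added illustration, not the book's exact code), a 16-bit one's complement checksum over a buffer is typically computed along these lines:

    // Illustrative 16-bit one's complement checksum (RFC 1071 style).
    // A sketch of what the checksum() routine referenced above usually
    // looks like; the actual implementation in the sample may differ.
    unsigned short checksum(void *buffer, int size)
    {
        unsigned short *ptr = (unsigned short *)buffer;
        unsigned long   sum = 0;

        // Sum the buffer as a sequence of 16-bit words, in memory order.
        while (size > 1)
        {
            sum  += *ptr++;
            size -= 2;
        }

        // If an odd byte remains, pad it with a zero byte.
        if (size == 1)
            sum += *(unsigned char *)ptr;

        // Fold any carries above 16 bits back into the low word.
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);

        // The checksum is the one's complement of the folded sum.
        return (unsigned short)(~sum);
    }

Because the 16-bit words are summed in the order they sit in memory and the complemented result is stored back the same way, no byte-order conversion is needed on the returned value. A pseudo-header checksum routine runs the same folding sum over the pseudo-header fields, the UDP header, and the payload (zero-padded to a 16-bit boundary if necessary).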
<urn:uuid:b20f3488-8f91-4b55-9bba-9580236aaae8>
2.828125
2,349
Documentation
Software Dev.
54.984873
95,617,983
BIG Corporation produces just about everything but is currently interested in the lifetimes of its batteries, hoping to obtain its share of a market boosted by the popularity of portable CD and MP3 players. To investigate its new line of Ultra batteries, BIG randomly selects 1000 Ultra batteries and finds that they have a mean lifetime of 919 hours. Suppose that this mean applies to the population of all Ultra batteries. Complete the following statements about the distribution of lifetimes of all Ultra batteries. According to Chebyshev's theorem, at least ? of the lifetimes lie within 1.5 standard deviations of the mean, 919 hours. Suppose that the distribution is bell-shaped. If approximately 99.7% of the lifetimes lie between 664 and 1174 hours, then the approximate value of the standard deviation for the distribution, according to the empirical rule, is ?. The solution gives a step-by-step method for computing the bound from Chebyshev's theorem and the standard deviation from the empirical rule.
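A worked sketch of the two calculations (added for clarity; the numbers follow directly from the problem statement): Chebyshev's theorem guarantees that at least 1 − 1/k² of the values lie within k standard deviations of the mean, so with k = 1.5 at least 1 − 1/(1.5)² = 1 − 1/2.25 ≈ 0.56, i.e. at least about 56% of the lifetimes lie within 1.5 standard deviations of 919 hours. For the empirical rule, approximately 99.7% of a bell-shaped distribution lies within 3 standard deviations of the mean, so the interval from 664 to 1174 hours spans about 6 standard deviations; the standard deviation is therefore approximately (1174 − 664)/6 = 85 hours (and indeed 919 ± 3 × 85 gives 664 and 1174).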
<urn:uuid:b6659895-d896-42b0-bc4a-88f7863036da>
2.65625
229
Tutorial
Science & Tech.
50.788966
95,618,003
Astronomers have discovered 41 new alien planets in one sweep by analyzing how each world gravitationally yanks on its neighbors. The newly confirmed exoplanets were spotted by NASA's prolific Kepler space telescope, which has detected more than 2,300 potential alien worlds since its March 2009 launch. The new finds, announced in two separate papers, bring the number of verified Kepler worlds to 115 and the total exoplanet tally to nearly 800. "Typically planets are announced one or two at a time — it's quite exceptional to have 27 announced in a single paper, or 41 in two," said Jason Steffen, an astrophysicist at the Fermilab Center for Particle Astrophysics in Batavia, Ill. Steffen is lead author of one of the studies. "It goes to show how rich the Kepler data are and how useful these new methods can be," Steffen told SPACE.com. Kepler flags exoplanet candidates via the transit method, which looks for dips in a star's brightness caused by a planet crossing in front of it. Confirming these candidates can be a tricky and laborious process, however, requiring follow-up observations by ground-based instruments or further analysis of Kepler's data. Two independent teams of researchers took the latter tack to confirm the 41 new alien planets. They delved deep into the telescope's observations, studying how each world's gravity tugs on its sibling planets. These slight pulls cause regular variations in the planets' orbits, affecting when they cross in front of their stars. One paper, by Jiwei Xie at the University of Toronto, confirms 24 new planets in 12 systems. Another study, by Steffen and his colleagues, confirms 27 planets in 13 systems. Five of the systems, and 10 of the planets, are the same in both papers. All in all, the new research adds 20 new planetary systems to the 47 that Kepler had previously confirmed, marking a more than 40% increase. "With systems like these, we can get really good information about the interactions among the planets in them," Steffen said. "This, in turn, helps us place the Earth and our solar system into the context of all planetary systems. Note that the number of planets in our solar system is now only 1% of the number of planets that are known. So, unlike 15 or 20 years ago, we can start to answer questions about how lucky we really are." Steffen and his colleagues submitted their paper to the Monthly Notices of the Royal Astronomical Society, while Xie submitted his study to the Astrophysical Journal. The $600 milllion Kepler observatory's main mission is to find Earth-size planets in the so-called habitable zones of their parent stars — a just-right range of distances that could support liquid water and, perhaps, life as we know it. - Observations of the Early Universe Reaffirm the Existence of Dark Matter and Dark Energy - It's Inspection Time for the James Webb Space Telescope's Huge Sun Shield (Photo) - Most Stars with Jupiters Have Giant Super-Earths - NASA Takes Over Fenway Park for Space-Mission Show (Video) This article originally published at Space.com here
<urn:uuid:ed2cdef4-544b-492f-bc67-f9df5dbb5a87>
3.15625
664
Truncated
Science & Tech.
47.230117
95,618,007
|Posted: Sep 30, 2013| Do black holes have hair? |(Nanowerk News) A black hole. A simple and clear concept, at least according to the hypothesis by Roy Kerr, who in 1963 proposed a “clean” black hole model, which is the current theoretical paradigm. From theory to reality things may be quite different. According to a new research carried out by a group of scientists that includes Thomas Sotiriou, a physicist of the International School for Advanced Studies (SISSA) of Trieste, black holes may be much “dirtier” than what Kerr believed ("Black Holes with Surrounding Matter in Scalar-Tensor Theories ").| |According to the traditional model, black holes are defined by only two quantities: mass and angular momentum (a black hole rotation velocity). Once their progenitor has collapsed (a high mass star, for instance, that at the end of its life cycle implodes inwards) its memory is lost forever. All that is left is a quiescent black hole, with almost no distinctive features: all black holes, mass and angular momentum aside, look almost the same.| |Black hole (Image: Alain Riazuelo, NASA)| |According to Sotiriou, things may not have occurred this way. “Black holes, according to our calculations, may have hair”, explains Sotiriou, referring to a well-known statement by physicist John Wheeler, who claimed that “black holes have no hair”. Wheeler meant that mass and angular momentum are all one needs to describe them.| |“Although Kerr’s ‘bald’ model is consistent with General Relativity, it might not be consistent with some well-known extensions of Einstein’s theory, called tensor-scalar theories”, adds Sotiriou. “This is why we have carried out a series of new calculations that enabled us to focus on the matter that normally surrounds realistic black holes, those observed by astrophysicists. This matter forces the pure and simple black hole hypothesized by Kerr to develop a new ‘charge’ (the hair, as we call it) which anchors it to the surrounding matter, and probably to the entire Universe.”| |The experimental confirmation of this new hypothesis may come from the observations carried out with the interferometers, instruments capable of recording the gravitational waves. “According to our calculations, the growth of the black hole’s hair,” concludes Sotiriou “is accompanied by the emission of distinctive gravitational waves. In the future, the recordings by the instrument may challenge Kerr’s model and broaden our knowledge of the origins of gravity.”| |Source: Sissa Medialab| Nanowerk Newsletter Email Digests with a compilation of all of the day's news. These articles might interest you as well:
<urn:uuid:44b31770-c9ef-4282-ada2-a52958cc7801>
3.53125
621
News Article
Science & Tech.
36.200741
95,618,014
User controls enable you to group logically related content and controls together so they can be used as a single unit in content pages, master pages, and inside other user controls. A user control is actually a sort of mini-ASPX page in that it has a markup section and optionally a Code Behind file in which you can write code for the control. To some extent, they look a bit like server controls in that they can contain programming logic and presentation that you can reuse in your pages. User controls have the following similarities with normal ASPX pages: 1-They have a markup section where you can add standard markup and server controls. 2-They can be created and designed with Visual Web Developer in Markup, Design, and 3-They can contain programming logic, either inline or with a Code Behind file. User controls have an .ascx extension instead of the regular .aspx extension. User controls are added to the site like any other content type: through the Add New Item dialog box. Similar to pages, you get the option to choose the programming language and whether you want to place the code in a separate Code Behind file. Although using controls for repeating content is already quite useful, they become even more useful when you add custom logic to them. By adding public properties or methods to a user control, you can influence its behavior at runtime. When you add a property to a user control, it becomes available automatically in IntelliSense and in the Properties Grid for the control in the page you’re working with, making it easy to change the behavior from an external file like a page. User controls can greatly improve the maintainability of your site. Instead of repeating the same markup and code on many different pages in your site, you encapsulate the code in a single control,which can then be used in different areas of your site. Some tips on User Controls: Don’t overuse user controls. User controls are great for encapsulating repeating content, but they also make it a little harder to manage your site because code and logic is contained in Keep user controls focused on a single task. Don’t create a user control that is able to display five different types of unrelated content with a property that determines what to display. When you create user controls that contain styled markup, don’t hardcode style information like the CssClass for the server controls contained in the user control.
<urn:uuid:033a1d8f-e952-4ba7-8b00-e56e962bb021>
2.703125
512
Documentation
Software Dev.
39.526455
95,618,025
Part of the Middle East and North Africa may become uninhabitable due to climate change The number of climate refugees could increase dramatically in future. Researchers of the Max Planck Institute for Chemistry and the Cyprus Institute in Nicosia have calculated that the Middle East and North Africa could become so hot that human habitability is compromised. The goal of limiting global warming to less than two degrees Celsius, agreed at the recent UN climate summit in Paris, will not be sufficient to prevent this scenario. The temperature during summer in the already very hot Middle East and North Africa will increase more than two times faster compared to the average global warming. This means that during hot days temperatures south of the Mediterranean will reach around 46 degrees Celsius (approximately 114 degrees Fahrenheit) by mid-century. Such extremely hot days will occur five times more often than was the case at the turn of the millennium. In combination with increasing air pollution by windblown desert dust, the environmental conditions could become intolerable and may force people to migrate. More than 500 million people live in the Middle East and North Africa - a region which is very hot in summer and where climate change is already evident. The number of extremely hot days has doubled since 1970. “In future, the climate in large parts of the Middle East and North Africa could change in such a manner that the very existence of its inhabitants is in jeopardy,” says Jos Lelieveld, Director at the Max Planck Institute for Chemistry and Professor at the Cyprus Institute. Lelieveld and his colleagues have investigated how temperatures will develop in the Middle East and North Africa over the course of the 21st century. The result is deeply alarming: Even if Earth’s temperature were to increase on average only by two degrees Celsius compared to pre-industrial times, the temperature in summer in these regions will increase more than twofold. By mid-century, during the warmest periods, temperatures will not fall below 30 degrees at night, and during daytime they could rise to 46 degrees Celsius (approximately 114 degrees Fahrenheit). By the end of the century, midday temperatures on hot days could even climb to 50 degrees Celsius (approximately 122 degrees Fahrenheit). Another finding: Heat waves could occur ten times more often than they do now. By mid-century, 80 instead of 16 extremely hot days In addition, the duration of heat waves in North Africa and the Middle East will prolong dramatically. Between 1986 and 2005, it was very hot for an average period of about 16 days, by mid-century it will be unusually hot for 80 days per year. At the end of the century, up to 118 days could be unusually hot, even if greenhouse gas emissions decline again after 2040. “If mankind continues to release carbon dioxide as it does now, people living in the Middle East and North Africa will have to expect about 200 unusually hot days, according to the model projections,” says Panos Hadjinicolaou, Associate Professor at the Cyprus Institute and climate change expert. Atmospheric researcher Jos Lelieveld is convinced that climate change will have a major impact on the environment and the health of people in these regions. “Climate change will significantly worsen the living conditions in the Middle East and in North Africa. Prolonged heat waves and desert dust storms can render some regions uninhabitable, which will surely contribute to the pressure to migrate,” says Jos Lelieveld. 
The research team recently also published findings on the increase of fine particulate air pollution in the Middle East. It was found that desert dust in the atmosphere over Saudi Arabia, Iraq and in Syria has increased by up to 70 percent since the beginning of this century. This is mainly attributable to an increase of sand storms as a result of prolonged droughts. It is expected that climate change will contribute to further increases, which will worsen environmental conditions in the area. In the now published study, Lelieveld and his colleagues first compared climate data from 1986 to 2005 with predictions from 26 climate models over the same time period. It was shown that the measurement data and model predictions corresponded extremely well, which is why the scientists used these models to project climate conditions for the period from 2046 to 2065 and the period from 2081 to 2100. Largest temperature increase in already hot summers The researchers based their calculations on two future scenarios: The first scenario, called RCP4.5, assumes that the global emissions of greenhouse gases will start decreasing by 2040 and that the Earth will be subjected to warming by 4.5 Watt per square meter by the end of the century. The RCP4.5 scenario roughly corresponds to the target set at the most recent UN climate summit, which means that global warming should be limited to less than two degrees Celsius. The second scenario (RCP8.5) is based on the assumption that greenhouse gases will continue to increase without further limitations. It is therefore called the “business-as-usual scenario”. According to this scenario, the mean surface temperature of the Earth will increase by more than four degrees Celsius compared to pre-industrial times. In both scenarios, the strongest rise in temperature in the Middle East and North Africa is expected during summer, when it is already very hot, and not during winter, which is more common in other parts of the globe. This is primarily attributed to a desert warming amplification in regions such as the Sahara. Deserts do not buffer heat well, which means that the hot and dry surface cannot cool by the evaporation of ground water. Since the surface energy balance is controlled by heat radiation, the greenhouse effect by gases such as carbon dioxide and water vapor will increase disproportionately. Regardless of which climate change scenario will become reality: both Lelieveld and Hadjinicolaou agree that climate change can result in a significant deterioration of living conditions for people living in North Africa and the Middle East, and consequently, sooner or later, many people may have to leave the region. J. Lelieveld, Y. Proestos, P. Hadjinicolaou, M. Tanarhte, E. Tyrlis, G. Zittis: Strongly increasing heat extremes in the Middle East and North Africa (MENA) in the 21st century. Climatic Change, doi:10.1007/s10584-016-1665-6, 2016, http://link.springer.com/article/10.1007/s10584-016-1665-6. K. Klingmüller, A. Pozzer, S. Metzger, G. Stenchikov, and J. Lelieveld: Aerosol optical depth trend over the Middle East; Atmospheric Chemistry and Physics, 16, 5063-5073, 2016, http://www.atmos-chem-phys.net/16/5063/2016/. Prof. Dr. Jos Lelieveld Cyprus Institute and Max Planck Institute for Chemistry Telephone: +49 (0) 6131-3053000 Prof. Dr. Panos Hadjinicolaou Telephone: +357 22 208627 Dr. 
Susanne Benner | Max-Planck-Institut für Chemie Global study of world's beaches shows threat to protected areas 19.07.2018 | NASA/Goddard Space Flight Center NSF-supported researchers to present new results on hurricanes and other extreme events 19.07.2018 | National Science Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:9727616a-01a8-4f40-8c31-dbb53da38705>
3.328125
2,064
Content Listing
Science & Tech.
44.583818
95,618,058
34 Limestone is a sedimentary rock consisting mostly of calcium carbonate (CaCO3). Which process is most likely to cause a chemical change to limestone?
F Freezing water cracking limestone
G Flowing water eroding a limestone riverbed
H Acid rain forming puddles on limestone
J Coastal waves dissolving limestone sediments

35 A 500 mL quantity of vanilla ice cream has a mass of 400 grams. The manufacturer then bubbles air into the ice cream so that its volume increases by 300 mL. What is the ice cream’s approximate final density?
F 0.30 g/cm3
G 0.50 g/cm3
H 0.80 g/cm3
J 1.30 g/cm3

36 Calcium ions play an important role in the function of neurons in the brain. Elements that are chemically similar to calcium can interfere with the function of neurons. Which of the following is most likely to imitate calcium’s role in the function of neurons?

37 The mass of a rusty bicycle is found to be slightly greater than the mass of the same bicycle before it rusted. The change in mass indicates that the rusting process —
A is a physical change
B involves an energy-to-matter conversion
C decreases the density of the metal
D involves metal bonding with other atoms

38 The diagram on the right shows water molecules and ions from an NaCl crystal. What is the most likely reason that each water molecule is arranged so that the oxygen part of the molecule faces a sodium ion?
F The oxygen in a water molecule contains a partial negative charge.
G Gravity rotates the oxygen atoms to face the more-massive sodium ions.
H Hydrogen atoms create repulsive forces with chloride ions.
J Oxygen atoms form covalent bonds with sodium ions.

39 Water acts as a solvent of ionic compounds because —
F water is liquid over a wide range of temperatures
G water molecules are polar
H water is found in three states of matter
J water takes the shape of its container

40 The table shows data from an investigation designed to find a liquid solution that is both an acid and a strong electrolyte. Based on the data, a solution that is both an acid and a strong electrolyte is —
A Solution 1
B Solution 2
C Solution 3
D Solution 4

41 Which of these remains the same while water molecules go through the water cycle?
F The ratio of oxygen to hydrogen in the molecules
G The rate of vibration of the molecules
H The kinds of dissolved substances between the molecules
J The amount of energy the molecules can absorb

42 A bar of soap produced by this soap-making process normally sinks to the bottom of a container of water. Which of these processes could cause the bar of soap to float in water?
A Making grooves in the surface of the thick paste
B Adding air bubbles to the thick paste
C Letting the thick paste sit for four days
D Chilling the mold filled with the thick paste

43 Which of the following is an example of a chemical change?
A Combustion of gasoline
B An apple being bitten
C An ice cube being swallowed
D Absorption of a water molecule

44 Aluminum metal and oxygen gas combine to produce aluminum oxide (Al2O3). Which of these is the balanced equation for this reaction?
F Al + O2 → Al2O3
G 2Al + 2O2 → 2Al2O3
H 2Al + 3O2 → 5Al2O3
J 4Al + 3O2 → 2Al2O3
<urn:uuid:718bc41d-c9a8-4cb7-a23f-9d92ec5da198>
3.625
769
Content Listing
Science & Tech.
54.459827
95,618,118
Scientific Name: Hippocampus ingens Girard, 1858
Synonyms: Hippocampus ecuadorensis Fowler, 1922; Hippocampus gracilis Gill, 1862; Hippocampus hildebrandi Ginsburg, 1933
Taxonomic Source(s): Girard, C. 1858. Fishes. In: General report upon the zoology of the several Pacific railroad routes, 1857. United States Senate Miscellaneous Document 78: 1-400.
Red List Category & Criteria: Vulnerable A2cd ver 3.1
Contributor(s): Czembor, C.A., Rojas, A., Acero, A. & Wiswedel, S.

Hippocampus ingens is an Eastern Pacific endemic seahorse that inhabits mangroves, coral and rocky reefs, seagrasses, and macroalgae to a depth of 60 m. Although there is limited information on changes in population numbers of this species, local estimates of population declines of between 50 and 90% were reported in the early 2000s, relative to 15-30 years prior. More recent population estimates are not available. However, declines are suspected to be continuing, as fishing pressure has not ceased and recent substantial illegal trade interceptions have indicated past levels of offtake for this species may have been underestimated. It is therefore conservatively suspected that population declines of at least 30% have taken place over the past 10 years (more than three generation lengths for this short-lived seahorse). Hippocampus ingens is therefore listed as Vulnerable under Criterion A2cd.

Range Description: Hippocampus ingens is endemic to the Eastern Pacific, and is found from Long Beach, California through the Gulf of California to Peru, including the Cocos, Malpelo and Galápagos Islands (Saarman et al. 2010, Lourie et al. 2016, Mathewson 2016).
Native: Colombia; Costa Rica (Cocos I., Costa Rica (mainland)); Ecuador (Ecuador (mainland), Galápagos); El Salvador; Guatemala; Honduras (Honduras (mainland)); Mexico (Baja California, Baja California Sur, Chiapas, Colima, Guerrero, Jalisco, Michoacán, Nayarit, Oaxaca, Sinaloa, Sonora); Nicaragua (Nicaragua (mainland)); Panama; Peru; United States (California)
FAO Marine Fishing Areas: Pacific – southeast; Pacific – eastern central

Population: Interviews with shrimp fishers on the Pacific coast of Mexico in 2000 estimated that catch per unit effort of H. ingens had declined from hundreds or thousands caught per month to tens or none (a decline of 75–90% of estimated catch relative to the previous 15–30 years), attributed to over-exploitation and trade (Baum and Vincent 2005). Declines were also seen in Ecuador, and were likely due to heavy fishing pressure. Target H. ingens fisheries on the Pacific coasts of Mexico, Costa Rica, Panama and Peru have experienced declines of approximately 50% in a similar time period (Baum and Vincent 2005). It is therefore conservative to estimate that a decline of at least 30% over its entire range occurred over the preceding three generations. Although there is no more up-to-date information, pressures and declines are suspected to have continued and possibly even accelerated. This species has repeatedly been confiscated in Peruvian waters en route to China, the most recent seizure totalling 8 million animals (Actman 2016). Such a high level of exploitation, whether through bycatch or targeted fishing, is likely impacting the population.
Current Population Trend: Decreasing

Habitat and Ecology: Hippocampus ingens inhabits waters to 60 m depth. Habitats include mangroves, macroalgae, seagrasses, and rocky and coral reefs (Lourie et al. 2004). It is also known to be associated with flotsam, as it has been collected at the surface and from the stomachs of the Pacific Yellowfin Tuna (Thunnus albacares) and Bluefin Tuna (Thunnus orientalis) (Humann and Deloach 1993, Lourie et al. 2004). This species is sometimes caught by tuna purse seiners in the open ocean, possibly from drifting algae.

Little is known about feeding, but this species likely consumes small benthic and/or planktonic crustaceans such as harpacticoid and cyclopoid copepods, gammarid shrimps, and mysids (Woods et al. 2002, Kendrick and Hyndes 2005, Kitsos et al. 2008, Yip et al. 2015, Valladares et al. 2016). Like all seahorses, they are ovoviviparous, and females transfer eggs to the male’s brood pouch, where the embryos are nurtured prior to live birth (Foster and Vincent 2004). All seahorse species also have vital parental care, and many species studied to date have high site fidelity (e.g., Perante et al. 2002), highly structured social behaviour (e.g., Vincent and Sadler 1995), and relatively sparse distributions (Lourie et al. 1999).
Generation Length (years): 0-2
Movement patterns: Not a Migrant

Use and Trade: This species is of commercial importance for the international aquarium trade (Sánchez 1997), the traditional medicinal trade, and as curios (Baum and Vincent 2005, Vincent et al. 2011). It is commercially exported from Peru, Mexico, and the US (UNEP-WCMC 2012). Recently, Peruvian authorities have purportedly made multiple seizures of large hauls of dried Hippocampus ingens that were destined for China. In 2012 an illegal shipment of 16,000 animals was confiscated (BBC 2012), and in June of 2016 a substantially larger shipment of 8 million animals was intercepted (Actman 2016). Both seizures were presumably entirely composed of H. ingens, as it is the only species present in the Eastern Pacific (Lourie et al. 2016). Trade documented through CITES has ranged from hundreds to several thousand individuals over the past several years, but illegal trade is suspected to dwarf this number. Although the number of individuals caught per vessel as bycatch is low, the number of vessels in the water means that substantial numbers are taken (Lawson et al. 2017).

This species is threatened by being caught as by-catch in the shrimp trawl fisheries in Mexico, Guatemala, Nicaragua, Panama, Ecuador, and Peru. Surveys of Latin America in the early 2000s estimated that between 199,000 and 380,000 seahorses are incidentally caught on the Pacific coast each year (Baum and Vincent 2005). There was also anecdotal evidence from fishers and traders of declines in seahorse availability, which raises conservation concerns for this species (Baum and Vincent 2005). This species may be particularly susceptible to decline resulting from degradation of habitat by coastal development, tourism and fisheries, because it inhabits relatively shallow areas (Lourie et al. 2004) where these threats are most pronounced. Like most seahorses, H. ingens has been shown to have high site fidelity and relatively small broods (Lourie et al. 2004, Saarman et al. 2010), which makes it sensitive to disturbance and limits its potential for recovery.

Hippocampus ingens is listed along with all seahorses on CITES Appendix II. This species' distribution falls into a number of Marine Protected Areas in the Eastern Tropical Pacific (WDPA 2016). Hippocampus ingens is listed on Mexico’s NOM-059-SEMARNAT-2001 as a species subject to special protection; intentional capture and trade of wild seahorses is prohibited. In Panama, H. ingens is included under the Ministry of Agriculture’s decree 19.450, which regulates the extraction of coral reef fishes. Further research is needed in order to estimate population size and to determine levels of offtake.

Citation: Pollom, R. 2017. Hippocampus ingens. The IUCN Red List of Threatened Species 2017: e.T10072A54905720. Downloaded on 20 July 2018.
<urn:uuid:aad86446-2ad9-4d00-acb1-5753fb9f7c0b>
2.515625
1,820
Knowledge Article
Science & Tech.
39.537914
95,618,119
Introduction To Random Processes
by William A. Gardner
Publisher: McGraw-Hill 1990
Number of pages: 560

Intended to serve primarily as a first course on random processes for graduate-level engineering and science students, particularly those with an interest in the analysis and design of signals and systems. This new edition includes over 350 exercises, new material on applications of cyclostationary processes, detailed coverage of minimum-mean-squared-error estimation, and much more. Includes coverage of spectral analysis, dynamical systems, and statistical signal processing.

- by Sophocles J. Orfanidis - Prentice Hall: An applications-oriented introduction to digital signal processing. The author covers all the basic DSP concepts, such as sampling, DFT/FFT algorithms, etc. The book emphasizes the algorithmic, computational, and programming aspects of DSP.
- by Daniel N. Rockmore, Jr, Dennis M. Healy - Cambridge University Press: A book about the mathematical basis of signal processing and its many areas of application, for graduate students. The text emphasizes current challenges, new techniques adapted to new technologies, and recent advances in algorithms and theory.
- Agilent Technologies: This text is a primer for those who are unfamiliar with the advantages of analysis in the frequency and modal domains and Dynamic Signal Analyzers. The authors avoid the use of rigorous mathematics and instead depend on heuristic arguments.
- by C. Sidney Burrus, et al. - Connexions: This book uses an index map, a polynomial decomposition, an operator factorization, and a conversion to a filter to develop a very general description of fast algorithms to calculate the discrete Fourier transform. Computer programs are provided.
<urn:uuid:55713e41-a31b-4739-aac0-884ca444d711>
2.734375
365
Content Listing
Science & Tech.
24.163436
95,618,136
The tubulin homolog FtsZ is the major cytoskeletal protein in bacterial cytokinesis. It can generate a constriction force on the bacterial membrane or inside tubular liposomes. Several models have recently been proposed for how this force might be generated. These fall into two categories. The first is based on a conformational change from a straight to a curved protofilament. The simplest "hydrolyze and bend" model proposes a 22 degree bend at every interface containing a GDP. New evidence suggests another curved conformation with a 2.5 degree bend at every interface, and that the relation of curvature to GTP hydrolysis is more complicated than previously thought. However, FtsZ protofilaments do appear to be mechanically rigid enough to bend membranes. A second category of models is based on lateral bonding between protofilaments, postulating that a contraction could be generated when protofilaments slide to increase the number of lateral bonds. Unfortunately these lateral bond models have ignored the contribution of subunit entropy when adding bond energies; if included, the mechanism is seen to be invalid. Finally, I address recent models that try to explain how protofilaments one subunit thick show a cooperative assembly.
<urn:uuid:f2d275e0-1ac7-4854-84f5-e9ddc068ce40>
2.859375
275
Academic Writing
Science & Tech.
21.352337
95,618,176
PARIS, Jan 30 – As fish get smaller under Man’s environmental impact, they become more exposed to predators, which means a crucial food source will become more endangered than thought, scientists said on Wednesday. Previous research has found that some key fish species dwindle in size as larger specimens are trawled out and climate change starts to affect the food chain. But, until now, the broader impact of this shrinkage has not been explored. A team from Australia and Finland used computers to predict what would happen when five species of fish decline in average length over a 50-year period. The shrinkage was quite small, up to four percent. Yet mortality from predators increased by as much as 50 percent, they found. The repercussions for catches are significant. Total biomass for four of the five species declined by as much as 35 percent, and catches by the same margin, the researchers wrote in a paper published in the Royal Society journal Biology Letters. “Even small decreases in the body size of fish species can have large effects on their natural mortality,” the team wrote. The researchers looked at five southeast Australian trawl fisheries species – the jackass morwong, the tiger flathead, silver warehou, blue grenadier and pink ling. Species biomass decreased for all but the grenadier, which also shrank in size but whose numbers actually rose by up to 10 percent as the fish moved to more coastal areas where it was less vulnerable to predators, according to the simulation. Man is changing marine ecosystems worldwide – directly through fishing and indirectly through global warming, the researchers wrote. “Fisheries management practices that ignore contemporary life-history changes are likely to overestimate long-term yields and can lead to overfishing,” they warned.
<urn:uuid:4c6c6172-8797-4a9d-b0ae-e99be5a911b1>
3.765625
370
News Article
Science & Tech.
37.747175
95,618,192
- This scientific paper is a massive achievement for the Jumeirah Group's sea turtle conservation work, with the collation of nearly 12 years of green sea turtle …
- Thirty-year recovery trend in the once depleted Hawaiian green sea turtle …
- Officials from the Supreme Council for Environment (SCE) have blamed fishermen, who go trawling for shrimp, for killing the green sea turtles (Chelonia mydas) which get caught in fishing nets.
- The road to recovery for the green sea turtle has been long, and conservationists say it will be another 20-plus years before success can be truly declared.
- During the meeting Arif Ahmed Khan was informed of the dwindling numbers of numerous animal and plant species, such as the green sea turtle and vultures, that were on the verge of extinction in Pakistan, and for which immediate and lasting conservation measures were required.
- A logistic regression, performed for green sea turtle data, showed a significant relationship between the CCL and ingestion of anthropogenic debris (estimate = -0. …)
- In 1989, biologists counted only 464 nests on 26 "index" beaches in Florida, the state with the largest green sea turtle …
- In the context of invertebrates, top shell, green sea turtle, whale, dolphin and crabs are studied.
- In Florida, the Hobe Sound National Wildlife Refuge had nearly 1,150 green sea turtle nests, more than double the record it set two years ago.
- An endangered green sea turtle was found dead in Famagusta's Ayia Napa with its shell missing, just a few days after a diver uploaded a video on YouTube of a turtle swimming in a nearby area.
- The green sea turtle is also found in Qatari waters.
- Isla, a Hawaiian green sea turtle resting on the beach at Waikiki, notices a couple practicing a partner routine for a dance competition.
<urn:uuid:cd6ea083-cd0a-46b5-91bb-0c219ed72633>
2.671875
391
Knowledge Article
Science & Tech.
34.472304
95,618,196
Tokyo: Japanese scientists on Friday began a test run of an underground telescope to detect gravitational waves and gain a better understanding of the universe through their observations. The test run, which will continue until March 31, comes a month after a US-led team of scientists said it had identified the gravitational waves, theorised 100 years ago by Albert Einstein, EFE news reported. The KAGRA telescope is installed inside a tunnel located more than 200 metres underground at the Kamioka mine site in the Gifu prefecture to minimise seismic noise. The facility uses laser beams moving back and forth inside vacuum pipes that have mirrors placed at each end to detect the very small waves. The Japanese efforts to detect gravitational waves are being led by 2015 Physics Nobel laureate Takaaki Kajita from the University of Tokyo. After checking the telescope's performance with another test run in April, the Japanese team plans to make modifications to boost its sensitivity and start full-fledged operation between 2017 and 2018. "We want to join the international network of gravitational wave observation as soon as possible," Kajita said in a statement. Gravitational waves GW150914 were discovered on September 14, 2015, by twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors in the USA's Livingston, Louisiana, and Hanford, Washington.
<urn:uuid:a27db6b4-d5c8-41f0-865c-ecafc057122a>
3.25
278
News Article
Science & Tech.
20.239531
95,618,203
Other Statistical Websites Google and YouTube are your friends – you can find information and explanations (much of excellent quality) on virtually any topic of interest. Here are some of our favorite links. - Dance of the p-values by Geoff Cumming is an unforgettable multimedia masterpiece. Additional videos that accompany his book on The New Statistics can be found on YouTube. - Khan Academy includes thousands of short video presentations (most are about 10-12 minutes) on a wide range of topics in science and mathematics, including more than 100 on statistics and probability. - coursera provides many excellent free online courses, including statistics, data science, and programming. - μ-Tube by Andy Field provides clear and entertaining explanations on many statistical topics, with humor that may amuse, annoy, or offend. - Open Online Statistics Courses will take you to links to free statistics courses, compiled by Jasmine Parker. - David Howell’s Statistical Homepage An author of popular statistics textbooks, Howell offers data sets, exercises, archived discussion lists, lists of errors, and examples that correspond with the texts. The resources may be used independent of the texts. - College-level booklets on many statistical topics David Garson provides .pdf and/or Kindle versions of papers on a wide range of statistical topics. Many of the papers are free for registered users, though the Kindle versions are not. - The Rice Virtual Lab in Statistics contains statistical simulations and Hyperstat, an online text book. - Karl Wuensch’s Statistical Help Page has a vast number of links to statistics resources, including discussions of specific topics. - Alan Reifman’s Collection of Practical Statistical Resources includes links to a wide range of useful resources. - Choosing the Right Statistical Test from UCLA is a table depicting guidelines for choosing the appropriate statistical test, with links to guides on how to conduct the corresponding test in SAS, Stata, and SPSS. Under ‘Resources’ see ‘Which Statistical Test?’ - CAUSE (Consortium for the Advancement of Undergraduate Statistics Education) is an excellent source of resources, professional development, and research in support of teaching statistics. Also see CAUSE Resources - Merlot (Multimedia Educational Resource for Learning and Online Teaching) is a portal to a collection of peer reviewed online learning resources, with over 1000 statistics resources. - Statistics Online Computational Resource (SOCR) at UCLA provides online aids for statistics education, technology based instruction, and statistical computing. Resources include applets, computational and graphing tools, and instructional materials. - Journal of Statistics Education Information Service is the homepage for the Journal of Statistics Education Information Service from which users can access the journal and other resources. - ARTIST (Assessment Resource Tool for Improving Statistical Thinking) provides assessment resources for teaching introductory statistics courses, including an assessment builder that generates test items drawn from a database. Be sure to consider the CAOS test. - Archives of EDSTAT-L@LISTS.PSU.EDU A free subscription allows participation in ongoing discussions and access to archived discussion since 1991 regarding teaching and learning statistics. 
- Teaching Statistics and Research Methods: Tips from ToP is an organized collection of over 100 articles on statistics and research methods from the journal Teaching of Psychology, from the years 1999 through June 2012.
- British Government Statistical Services includes links to better data presentation and open data sets.
- David Kenny’s Structural Equation Modeling site is a good source for information on structural equation modeling and much more.
- Multiple Imputation Online discusses multiple imputation, a technique for dealing with missing data.
- Gallery of Data Visualization is maintained by Michael Friendly, York University — the good, the bad, and the ugly of graphing techniques in statistics.
- Andrew Hayes provides PROCESS, a wonderful SPSS macro for analysis of simple, complex, and very complex mediation and moderation models.
- p-rep Excel workbook downloads Peter Killeen’s Excel workbook for computing p-rep (probability of replication) from raw data or summary statistics.
- Statistical Methods of Rater Agreement covers a range of topics related to rater agreement, including latent class analysis, Kappa, and odds ratio.
- Raynald’s SPSS Tools offers an FAQ and many tips to help out the SPSS user. Includes example syntax, macros, and scripts.
- Free Statistical Software is a listing of software packages that generally offer more specialized analysis capabilities than those available in the major packages.
- Tutorials for R R is a free but very powerful statistical and graphical programming language. There is a large and enthusiastic user base who provide extensive support with tutorials and sample programs.
<urn:uuid:a8160f2c-5d9c-4b36-a2ae-ebc98c02b59a>
2.53125
993
Content Listing
Science & Tech.
23.377195
95,618,210
Authors: Osvaldo Domann

Special relativity as derived by Einstein presents time and space distortions and paradoxes. This paper presents an approach in which the Lorentz transformations are built on equations with speed variables instead of the space and time variables used by Einstein. The results are transformation rules between inertial frames that are free of time dilation and length contraction for all relativistic speeds. Particles move according to Galilei relativity, and the transformed speeds (virtual speeds) describe the non-linearity of the physical magnitudes relative to the Galilei speeds. All the transformation equations already existing for the electric and magnetic fields, deduced on the basis of the invariance of the Maxwell wave equations, remain valid. The present work argues for the importance of including the characteristics of the measuring equipment in the chain of physical interactions, in order to avoid what the author regards as unnatural conclusions like time dilation and length contraction.

Comments: 22 Pages. Copyright. All rights reserved. The content of the present work, its ideas, axioms, postulates, definitions, derivations, results, findings, etc., can be reproduced only by making clear reference to the author.

Unique-IP document downloads: 147 times

Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary.
<urn:uuid:96823102-56f5-46b4-be4e-030529a6d9e4>
2.640625
378
Academic Writing
Science & Tech.
29.30157
95,618,245
1) Which material would have the strongest compressive strength?
i) egg shell
ii) a ream of paper
iii) pile of dry sand
iv) rope
v) shopping bag

2) Hydrogen is to a proton as helium is to:
a) an alpha particle
b) a beta particle
c) a gamma particle
d) an electron
e) a neutron

3) A high-energy physicist might request research funding for studies pertaining to:
a) aerobic physiology
b) hydrogen bombs
c) elementary particles
d) ultimate field theories
e) tertiary particles

4) Which elementary particle cannot participate in the strong force?
a) lepton b) electron c) tau d) neutrino e) all of the above

5) The force that holds the nucleus together despite the electrical repulsion between protons is:
a) gravitational force b) weak force c) electromagnetic force d) strong force e) electroweak force

1. iv) rope - of all these options, the rope is the odd one out. The others can be crushed rather easily, but a rope, due to its elastic properties, has a higher compressive strength.

2. a) an alpha particle - hydrogen has one proton; helium has two protons. An alpha particle is defined as having 2 protons ...

Solution ranks materials in terms of compressive strength as well as answering four other multiple choice questions about atomic physics forces.
<urn:uuid:5a6345ad-7af9-4479-a347-d9d112d37e77>
3.609375
328
Q&A Forum
Science & Tech.
57.083064
95,618,249
The ability to spin silk is one of the things that distinguish spiders (and a few other insects) from the other animals in the animal kingdom. Spider webs are made up of chains of amino acids, which are dissolved into a water-based solution before being released into the air and stretched. It is this silk fiber that spiders use to make their webs.

** Fun fact - spider silk is five times stronger than steel of the same diameter.

Although observing a spider web under the microscope is not complicated, it's important to note that the web is delicate and should be handled with care for the best possible results.

Materials:
- Mounting fluid or clear nail polish
- Microscope slide and cover slips
- Dry spider web

Procedure:
- Find a dry spider web
- Make a thin layer of nail polish at the center of the slide (about the size of the cover slip)
- Allow the layer to dry for about one minute
- Holding the slide firmly, touch the slide onto the central part (the thickest part) of the spider web to obtain some web (you can hold the slide using the index finger and the thumb on either end of the slide). It is important to avoid spiders you are not familiar with, and to avoid harming or killing the spider
- Remove any excess spider web
- Cover the slide using a cover slip, pressing it down gently
- Place the slide on the microscope for observation

** If nail polish is not available, a slide mounting fluid may be used to attach the spider web to the slide.

Under the microscope, the student will be able to observe many shiny (silver-like), thread-like structures of the web/silk. An electron microscope, on the other hand, will show the silk as thicker strands, some of which may be single, double or multi-stranded. Spider silk serves a number of purposes for different types of spiders, including, among others, catching prey.

Although students will be able to see long, smooth strands of the spider web under the microscope, it may be interesting to note that some strands are coated with other substances to make the web more sticky or waterproof, and the fiber material may vary in consistency depending on the species.

This is an easy and fun experiment that will allow the student to observe a spider web closely. Silk produced by spiders has captured the attention of scientists, who are still fascinated by its elasticity and strength and continue to study it more closely. This should therefore be an important starting point for learning more about spider-produced silk. However, students should avoid any spiders they are not familiar with.
<urn:uuid:99057535-f23c-461c-8e8b-955e33c633cd>
3.109375
704
Tutorial
Science & Tech.
38.621489
95,618,253
Authors: W.C. Wilson, G.M. Atkinson
Affiliation: NASA Langley Research Center, United States
Pages: 298 - 301
Keywords: space, aerospace, wireless sensors, applications

Reducing the weight of spacecraft will reduce both fabrication and launch costs. The elimination of wiring and wiring harnesses reduces the total mass of the vehicle, so wireless sensor technology can reduce the weight and therefore the costs of spacecraft. The Decadal Survey of Civil Aeronautics identified that “self-powered, wireless microelectromechanical sensors” warrant attention over the next decade. Current wireless sensor systems have low data rates and require batteries. The environment of aerospace vehicles is often very harsh, with temperature extremes ranging from cryogenic to very high temperatures during re-entry. For example, the X-37B mini unmanned shuttle will require high temperature sensors mounted on the structure, as well as cryogenic sensors for monitoring fuel tanks. Batteries do not work well at either temperature extreme. Also, sensors are typically located in internal structures with limited access, making the periodic changing of batteries costly and time consuming. Passive wireless sensors are needed that operate across an extremely large temperature range and do not require batteries. NASA recently instrumented an all-composite Crew Module for structural testing on the ground; wireless sensors could have reduced the time needed to instrument the module and check out the sensor wiring. The European Space Agency (ESA), the Indian Space Agency and NASA are all investigating wireless sensor systems for operation on the lunar surface. In addition, NASA is developing wireless sensor networks that can handle integrated system health monitoring of spacecraft together with health monitoring of astronauts. From ground tests to operation on orbit to operation on the Moon, many applications could benefit from small, passive, wireless sensors. It is for these reasons that NASA is investigating the use of wireless technology for a variety of spacecraft applications. This paper will present a survey of opportunities for universities, industry, and the government to partner in developing new wireless sensors to address the future sensing needs of space vehicles.
<urn:uuid:cd3f2a03-2a93-4ce9-85d4-8d213a13df3e>
3.53125
427
Academic Writing
Science & Tech.
17.526957
95,618,267
Act III. Equivalence, Conservation, Interconvertibility: When and of What?

That heat could sometimes cause mechanical effect, and much of it, had been known since the disaster that befell Strepsiades while he was cooking the haggis for the feast of Zeus, but apparently it was the sooty proliferation of the steam engine in the early nineteenth century that first roused physicists to pay much attention to the phenomenon. As Carnot had seen, and as Clapeyron had made widely known, by absorbing and emitting heat a given body undergoing a cyclic process may do a definite amount of work, and by doing work cyclically a body may absorb and emit definite amounts of heat. Certain ideal bodies, described by the theory of calorimetry, give out in undergoing the reverse of a given process the heat they would gain and the work they would do in the given process.

Keywords: Thermal Agency, Cyclic Process, Motive Power, Isothermal Process, Saturated Steam
<urn:uuid:fbb6ecb2-e706-489b-a24c-1a5bcceaa940>
3.109375
216
Truncated
Science & Tech.
33.722699
95,618,279
12 July 2018
Red Sea corals will stand the test of time
Published online 19 November 2017

Robust corals in the northern region could outlive the rest of the population by a century.

Despite warming temperatures and destructive human practices, the Red Sea may be able to sustain its robust coral population in the north for another 100 to 150 years, according to new research in Global Change Biology.1 Scientists from the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, the University of Essex, UK, and Al-Azhar University, Egypt, believe that the northern part of the Red Sea and its 1,800-kilometer stretch of coastline could become one of the world's largest coral refuges.

“All corals located north of the 24th parallel north in the Red Sea are way below their thermal temperature threshold,” explains Christian Voolstra, co-author of the study and marine biologist at KAUST. Scientists had previously singled out the Gulf of Aqaba as a safe haven for corals, but this study extends the refuge's geographical scope dramatically. “We are not talking about small pockets of healthy corals,” says Voolstra, “we are talking about roughly 2000 km of coastline.”

The northern area of the Red Sea is where the surface water temperature is the coolest, and while heat-tolerant corals exist across the Red Sea, the population located further south is alarmingly close to its tipping point. “Bleaching usually occurs at 1°C over the summer mean average temperature,” says Voolstra. But in the cooler waters of the north, corals enjoy a 5°C margin.

To reach these conclusions, the scientists first investigated patterns of coral heat sensitivity in relation to ambient temperatures across the Red Sea, as well as compiling a dataset of coral bleaching since 1982 to identify the key areas least susceptible to thermal stress. They contrasted the thermal histories of Hurghada in Egypt and Thuwal in Saudi Arabia with their respective bleaching patterns, and by observing the impacts of the 2015-2016 El Niño events on corals from both areas, concluded that northern corals' susceptibility to spikes in water temperature was much lower in the Red Sea.

“This anomaly, which is only found in the Red Sea, gives us a window of opportunity to take action,” he says. He adds that local solutions, as well as collaboration between Egypt, Jordan, Israel and Saudi Arabia, are necessary to protect this unique ecosystem. In addition to threats from pollution and coastal development, infrastructure projects such as the planned bridge linking Saudi Arabia to Egypt could further unsettle this ecosystem.

Ameer Abdulla, a senior research fellow at the Center of Biodiversity and Conservation Sciences at the University of Queensland in Australia, who was not involved in the study, believes that these findings reinforce the global conservation importance of this zone as what he refers to as a regional genetic seed bank. “However, the resilience of the corals will be greatly influenced by local external conditions such as oceanography and human impacts,” says Abdulla.

- Osman, E.O. et al. Thermal refugia against coral bleaching throughout the northern Red Sea. Global Change Biology http://dx.doi.org/10.1111/gcb.13895 (2017)
<urn:uuid:114a1841-d354-4697-84b5-349251427b1f>
3.359375
697
Truncated
Science & Tech.
38.931517
95,618,315
An international team of scientists have put forward a blueprint for a purely space-based system to solve the growing problem of space debris. The proposal, published in Acta Astronautica, combines a super-wide field-of-view telescope, developed by RIKEN’s EUSO team, which will be used to detect objects, and a recently developed high-efficiency laser system, the CAN laser that was presented in Nature Photonics in 2013, that will be used to track space debris and remove it from orbit. Space debris, which is continuously accumulating as a result of human space activities, consists of artificial objects orbiting the earth. The number of objects nearly doubled from 2000 to 2014 and they have become a major obstacle to space development. The total mass of space debris is calculated to be about 3,000 tons. It consists of derelict satellites, rocket bodies and parts, and small fragments produced by collisions between debris. Because the debris exists in different orbits, it is difficult to capture. The objects can collide with space infrastructure such as the International Space Station (ISS) and active satellites. As a result, developing remediation technology has become a major challenge. The EUSO telescope, which will be used to find debris, was originally planned to detect ultraviolet light emitted from air showers produced by ultra-high energy cosmic rays entering the atmosphere at night. “We realized,” says Toshikazu Ebisuzaki, who led the effort, “that we could put it to another use. During twilight, thanks to EUSO’s wide field of view and powerful optics, we could adapt it to the new mission of detecting high-velocity debris in orbit near the ISS.” The second part of the experiment, the CAN laser, was originally developed to power particle accelerators. It consists of bundles of optical fibers that act in concert to efficiently produce powerful laser pulses. It achieves both high power and a high repetition rate. The new method combining these two instruments will be capable of tracking down and deorbiting the most dangerous space debris, around the size of one centimeter. The intense laser beam focused on the debris will produce high-velocity plasma ablation, and the reaction force will reduce its orbital velocity, leading to its reentry into the earth's atmosphere. The group plans to deploy a small proof-of-concept experiment on the ISS, with a small, 20-centimeter version of the EUSO telescope and a laser with 100 fibers. “If that goes well,” says Ebisuzaki, “we plan to install a full-scale version on the ISS, incorporating a three-meter telescope and a laser with 10,000 fibers, giving it the ability to deorbit debris with a range of approximately 100 kilometers. Looking further to the future, we could create a free-flyer mission and put it into a polar orbit at an altitude near 800 kilometers, where the greatest concentration of debris is found.” According to Ebisuzaki, “Our proposal is radically different from the more conventional approach that is ground based, and we believe it is a more manageable approach that will be accurate, fast, and cheap. We may finally have a way to stop the headache of rapidly growing space debris that endangers space activities. We believe that this dedicated system could remove most of the centimeter-sized debris within five years of operation.” The research was done by Toshikazu Ebisuzaki, Satoshi Wada, Lech Wiktor Piotrowski, Yoshiyuki Takizawa, and Marco Casolino of RIKEN, Toshiki Tajima of the University of California at Irvine, Mark N. 
Quinn, Remi Soulard and Gerard Mourou of IZEST, Ecole Polytechnique, Philippe Gorodetzky and Etienne Parizot of the AstroParticle and Cosmology laboratory/University of Paris 7, and Mario Bertaina of the University of Torino.

Full bibliographic information:
Toshikazu Ebisuzaki, Mark N. Quinn, Satoshi Wada, Lech Wiktor Piotrowski, Yoshiyuki Takizawa, Marco Casolino, Mario E. Bertaina, Philippe Gorodetzky, Etienne Parizot, Toshiki Tajima, Rémi Soulard, and Gérard Mourou, "Demonstration designs for the remediation of space debris from the International Space Station", Acta Astronautica, doi:10.1016/j.actaastro.2015.03.004

Dr. Toshikazu Ebisuzaki, RIKEN Computational Astrophysics Laboratory, Japan
<urn:uuid:37bfae86-f637-4dd3-a2d3-1d23168e59ea>
3.78125
1,613
Content Listing
Science & Tech.
34.287896
95,618,317
This discusses methods of long-term software preservation. It is briefly about hardware that will not degrade over time, but the majority of the paper is about how to design a software stack that can be executed in the far future. In order to achieve this, they recommend building everything in terms of a machine with a short, simple specification.

In-depth literate programming describing a complete implementation of Forth. Bootstrapped from Intel 32-bit assembly with lots of assembler macros into a fully self-extensible Forth. This is a really illuminating read, teaching a lot of details about Forth as well as showing just how minimal a runtime a programming language can be built on.

These slides outline the development of rowl and amber. This is a programming language bootstrapped up from assembly. rowl is implemented directly in assembly, then parts of the amber VM and compiler are implemented in rowl, and the rest of amber is implemented by self-hosting.

This project builds a SICP-style Scheme interpreter with a REPL in Go. The blog post describes each phase. They're simple-looking. The Github integrates it into a total of 240 lines of code. Being a simple language, the Go implementation could be ported to anything else in our collection or straight hand-assembled. Then, more complex stuff can be built on it, like nineties or other LISPers do.

A big concern in dealing with trust in hardware is whether it's subverted or not. Intel, AMD, and many other big names have backdoors in their chips for management purposes. Among other things... ;) One cheat to get a trustworthy image is to just use a computer you have no reason to believe is subverted. Acquire it under a boring buyer, it itself is a boring tech, do your bootstrapping thing on it air gapped, and use what it produces. It will likely *not* be subverted *by default* since the interdictors and TAO folks have limited resources w/ no reason to target the system. Use several that are different for best results. To help with that, I (Nick P.) put together a list of all kinds of CPU's and execution strategies on Schneier's blog. Something I left off the list are old TI-82 calculators, Palm Pilots, etc. Lots of old stuff lying around you can get in person with cash that is probably unsubverted.

"It's time for the Go compilers to be written in Go, not in C. I'll talk about the unusual process the Go team has adopted to make that happen: mechanical conversion of the existing C compilers into idiomatic Go code". They wrote the compiler in C then translated the source code from C into Go almost automatically (with some manual fixing up). This is an interesting approach. Let's name it the transpile approach to self-hosting.

asmutils: a Linux distro/userland implemented in assembly. This is a Linux distribution implemented entirely in assembly. It doesn't depend on libc or anything.

This is a teaching document that explains how to make an assembler in Forth! It shows a very Forth-idiomatic style of programming, and how easy it is to make an advanced assembler once you have a working Forth.

This is a Rust compiler written in C++; it translates Rust to C. It makes the normally self-hosted rustc compiler bootstrappable! It neglects the borrow checker but is still able to compile valid input source correctly.

CakeML is really, really fascinating. They have created a theory of SML programs inside HOL, allowing them to prove properties of SML programs embedded inside HOL.
They have created a (serious) compiler from SML down to assembly and proved that it preserves semantics all the way. They are then able to compile the compiler, simultaneously bootstrapping the proof, to create a verified compiler binary for which it is proven that it compiles input programs and preserves their semantics. To my knowledge this is the first such development. This is an incredibly well developed bootstrapping project.

Hex assembler. ELF maker. x86 assembler. Linker. B compiler. C compiler. Includes implementations of various POSIX-style libc functions along the way. It is extremely well written and worth studying!

The asmc project is a small bootable kernel that loads up a payload. Payloads exist for assembly compilers and "G language" compilers. The G language is a low-level language below C which was invented to ease bootstrapping. An assembler (which can build the kernel) has been implemented in G.

cmeta - using ideas from the META compiler-compiler, Pim builds the meta language up from raw hex.

blc - a binary lambda calculus implementation, capable of computing Matt Might's factorial program. Built using the cmeta system. Incredibly terse. Surprising that the techniques of metacompilers can be applied at such a low level. The amount of leverage may be highest in this project.
<urn:uuid:f9000d1b-bbe2-48d7-81f0-1f0ca6bc4602>
2.796875
1,022
Content Listing
Software Dev.
53.514412
95,618,328
Osmium tetroxide, OsO4, is the most important and most easily prepared compound of osmium. It is one of the few volatile oxides of a heavy metal, and although the osmium is octavalent (of all elements only osmium and ruthenium reach so high an oxidation state) it is a reasonably controllable oxidizing agent. Most of its applications derive from this property. One of the major uses of OsO4 is the oxidation of olefins. The compound is extensively used (normally as a 2 per cent aqueous solution, often called osmic acid) for cell and tissue studies, because of its unique fixation and staining properties. Its ability to passivate iron electrodes provides corrosion protection in electrolytes.
<urn:uuid:407be39c-8224-4126-9403-65e314801853>
2.640625
184
Knowledge Article
Science & Tech.
24.041096
95,618,342
If a prediction by two US scientists were true then the world could see an increase in the number of strong earthquakes in 2018 and the next few years, a media report said. Roger Bilham of the University of Colorado and Rebecca Bendick of the University of Montana, who presented their research at the annual conference of the Geological Society of America recently, argued that there is a clear correlation between the speed of earth’s rotation and global earthquake activity, Xinhua news agency reported. “On five occasions in the past century, a 25-30 per cent increase in annual numbers of (earthquakes of magnitude 7.0 or greater) has coincided with a slowing in the mean rotation velocity of the earth, with a corresponding decrease at times when the length-of-day (LoD) is short,” the duo said in a research abstract. “The correlation … can be shown to precede seismicity by 5-6 years, permitting societies at risk from earthquakes an unexpected glimpse of future seismic hazard.” Fluctuations in earth’s rotation are tiny, changing the length of the day by several milliseconds, but these minute changes could be enough to release vast amounts of underground energy, the two scientists said. They could not explain exactly what happened but they suspected that slight changes in the behaviour of earth’s core may be responsible for this effect. They also said the observed relationship is unable to indicate precisely when and where these future earthquakes will occur, but most of the additional strong earthquakes have historically occurred near the equator in the West and East Indies. “Whatever the mechanism, the 5-6 year advanced warning of increased seismic hazards afforded by the first derivative of the LoD is fortuitous, and has utility in disaster planning,” they wrote. “The year 2017 marks six years following a deceleration episode that commenced in 2011, suggesting that the world has now entered a period of enhanced global seismic productivity with a duration of at least five years.”
<urn:uuid:94d5ed63-86d7-476e-8549-790965fe29f8>
3.5
415
News Article
Science & Tech.
29.240905
95,618,355
Collaborative Research: Links Between Magma Source Characteristics, Shallow Plumbing, and Eruptive Styles in Mafic Intraplate Volcanic Fields (Lunar Crater Volcanic Field, Nevada)

Greg Valentine, Principal Investigator

Intellectual merit. Volcanoes that form in intraplate fields, although small and (mainly) monogenetic, result from eruptive phenomena ranging from quiet effusion of lava to relatively explosive Strombolian activity and, if external water interacts with ascending magma, hydrovolcanic activity. The generation and eruption of magmas in relatively small batches over dispersed areas, often with evidence for direct ascent from the upper mantle to the surface, suggests a driving process that is hybrid between strong hotspot-style mantle upwelling and scattered pockets of incipient melt that are passively mobilized by tectonic deformation (the high- and low-magma-flux end members of volcanic fields, respectively).

Integrated approaches involving physical volcanology, petrology, and geochemistry can provide important insights into these processes and the links between magma dynamics at depth and eruption processes on the surface. The lack of long-lived crustal magma reservoirs allows investigation of the influences of deep magma source(s) and shallow plumbing systems on eruption styles. It is hypothesized that eruption style is influenced by physical and chemical characteristics of magma sources. To test this hypothesis, comprehensive data will be integrated from the well-exposed Lunar Crater Volcanic Field (central Nevada). Direct studies of eruptive facies, of exposed shallow conduits (at older, eroded volcanoes), and of abundant mantle xenoliths will provide insights into the magma sources and ascent processes. Existing data on broad geochemical trends and age relationships will be integrated with new EarthScope seismic tomography data to provide a framework for understanding: (1) the interplay between pre-existing structure, topography, and vent location; (2) shallow plumbing geometries; (3) shallow controls on magmatic eruption styles, including relationships between eruption style, clast texture and shape, volatile content, and mineral chemistry; (4) spatial and temporal variations in magma sources and magma differentiation processes at individual volcanoes and across the field as a whole; (5) depth of melting and volatile contents of parent magmas; and (6) correspondence between volcano location, melting depths, and upper mantle seismic structure.

Broader impacts. This work will support improvements in volcanic risk assessment, both in terms of the probability (volcano timing and location) of future events and in terms of their consequences (related to eruption processes). The project will also support the training of three Ph.D. students (two of whom are female minorities) and will have a component of international collaboration with the volcanology and geochemistry group at the Universidad Autónoma de México. The proposing team is in communication both with the Shoshone Tribe (Duckwater Reservation) and the Bureau of Land Management so that we can share our results with them and (through BLM) with the public.
<urn:uuid:0c7eba9f-772c-430c-aaab-b2ea6b96ada2>
2.734375
655
Academic Writing
Science & Tech.
5.143701
95,618,371
The continue statement jumps to the top of the closest enclosing loop. The following example uses continue to skip odd numbers; it prints all even numbers less than 10 and greater than or equal to 0. Remember, 0 means false and % is the remainder-of-division (modulus) operator. This loop counts down to 0, skipping numbers that aren't multiples of 2, so it prints 8 6 4 2 0:

x = 10
while x:
    x = x - 1            # Or, x -= 1
    if x % 2 != 0:       # Odd? -- skip the print
        continue
    print(x, end=' ')

The code might be clearer if the print were nested under the if:

x = 10
while x:
    x = x - 1
    if x % 2 == 0:       # Even? -- print
        print(x, end=' ')
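For comparison, the same output can be produced without continue at all by iterating over a range with a negative step. This alternative is not part of the original example; it is a minimal sketch assuming Python 3 (for the print(..., end=' ') syntax):

# Count down from 8 to 0 in steps of 2, so no numbers need to be skipped.
for x in range(8, -1, -2):
    print(x, end=' ')    # prints: 8 6 4 2 0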
<urn:uuid:f92fd249-86d1-47a4-9803-288ca3f1a38b>
3.546875
202
Tutorial
Software Dev.
108.025
95,618,393
Developing a Basic Web Application Using Python

In this in-depth, code-heavy tutorial, learn how to use Python to create basic web apps. Once you know how to do this, you can make any web app to fit your needs!

There are a few things we need to explain before getting into the thick of things. Let's focus on the overall picture with a few analogies:
- The internet is a network of computers. Its goal is to enable communication between them.
- A network is composed of nodes and edges. Visually, it is a set of dots and connections. The London tube map is an example.
- Your family, friends, colleagues, and acquaintances can be thought of as a network of people. (This is how social networks model our relationships.)
- To communicate, we must have a means by which our messages reach the intended destination.
- On one hand, we need something physical to connect the computers. These are the wires.
- On the other hand, we need some conventions (software) to ensure messages reach their destinations.
- One way this is done over the internet is called TCP/IP.
- TCP ensures the messages arrive safely with nothing missing. Every computer has an IP, which is a unique address.
- You can think of TCP as an envelope and IP as the address on it.

HTTP and the Request/Response Cycle

To communicate effectively, the elements of a network need to agree on some protocol. That protocol for humans can be English, but there are other protocols (Chinese, for example). Many computers on the internet use HTTP to communicate. Every time you click on a link, or type a URL and press enter in a browser, you are making what is called an HTTP request. Here is an example that uses curl from the command line as a client:

$ curl -sv www.example.com -o /dev/null
* About to connect() to www.example.com port 80 (#0)
* Trying 126.96.36.199...
* Connected to www.example.com (188.8.131.52) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: www.example.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Cache-Control:
< Content-Type: text/html
< Date: Thu, 21 Aug 2014 12:09:46 GMT
< Etag: "359670651"
< Expires: Thu, 28 Aug 2014 12:09:46 GMT
< Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
< Server: ECS (iad/182A)
< Content-Length: 1270
<!doctype html>
<html>
<head>
  <title>Example Domain</title>
</head>
<body>
  <div>
    <h1>Example Domain</h1>
    <p>This domain is established to be used for illustrative examples in documents.</p>
  </div>
</body>
</html>

Lines starting with * are informational output from curl. Lines starting with > are the HTTP request text that curl sends. Lines starting with < are the HTTP response text that curl receives. Note that the response includes the HTML page that will be rendered in a browser.

Tip: HTTP is just text. We send text requests and we receive text responses. All complex pretty pages in the browser are created from these text responses.

In software development, architecture is a way of organizing code that you see time and time again. It's also called a pattern. A browser is a great example of a client. It sends HTTP requests to a server. A server returns an HTTP response, which the browser then renders as a web page. We will see other examples of a client-server architecture when we introduce using databases.

Browsers understand how to render HTML. HTML is a way to structure text.
<!doctype html>
<html>
<head>
  <title>Example Domain</title>
</head>
<body>
  <div>
    <h1>A Header</h1>
    <p>Here is some text between p elements</p>
  </div>
</body>
</html>

Note it consists of elements like this: <el>content</el>. We don't need to dive any deeper than this.

Data needs to be stored somewhere. Typically, we save data in files. Databases are another way of saving data which has some advantages over plain files. Web applications often save data in databases rather than files. You can think of a database much as you would spreadsheet software. It stores information in a collection of tables.

Using Chrome, open the developer tools: View/Developer/Developer Tools. A tab will pop up. Click on the Network tab. Now type a URL (web address) that is familiar to you and inspect the HTTP GET request. Here, we try it with www.example.com. Note that we see the same information we found with curl above; it is presented in a more user-friendly way, however. Explore one of your favorite websites using the developer tools to inspect what is going on at the HTTP network level. All internet experiences (online shopping, news, videos, sending texts, and so on) boil down to computers sending messages, much like what we have described above. HTTP is not the only protocol in town, but the concept of computers acting as clients and servers communicating by sending requests and responses is almost universal.

Let's prepare to create our web services. First, create a project directory:

mkdir website
cd website

pip is a way to install Python code. Python code is installed as a package. To list all currently installed Python packages: $ pip freeze. To install Django: $ pip install django.

Creating the Django/Python Project

We use a script supplied by Django to set up a new project:

$ django-admin.py startproject website

You should see this folder structure and these files generated:

website
- manage.py
- website
  - __init__.py
  - settings.py
  - urls.py
  - wsgi.py

The important files here are manage.py and website/settings.py. A lot of configuration is needed to set up a web application. website/settings.py contains a lot of names that define all the configuration for our website. All the defaults are good for now. Note that the INSTALLED_APPS name is defined as a tuple of strings; we will be adding to that tuple shortly. Note also that the DATABASES name is defined as a dictionary.

Creating the Database

Notice that the current directory doesn’t include a db.sqlite3 file. Django, like all web frameworks, stores its data in a database. Let's create that database now:

python manage.py syncdb

You will see some output such as:

(django) website $ ./manage.py syncdb
Creating tables ...
Creating table django_admin_log
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'greg'):
Email address:
Password:
Password (again):
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)

Now, the top-level folder website contains a file called db.sqlite3. This is your database.

Inspecting the Database

Download the SQLite command-line shell from the SQLite website and choose the sqlite-shell-win32-x86-....zip file. Unzip it by double-clicking it. Then, drag and drop it into C:\BOOTCAMP\Python34.
The last step is to add it to a directory on your PATH.

A database application is like a server. We send requests using clients. The clients in this case aren’t the browser but typically programs such as our Python website. We will use another client to independently inspect our database. You launch the client by typing sqlite3 db.sqlite3. The sqlite3 program provides a new type of shell, which is meant for inspecting our database. Here is an example interaction:

(django) website $ sqlite3 db.sqlite3
SQLite version 3.7.13 2012-07-17 17:46:21
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables
auth_group                  auth_user_user_permissions
auth_group_permissions      django_admin_log
auth_permission             django_content_type
auth_user                   django_session
auth_user_groups
sqlite> select * from auth_user;
1|pbkdf2_sha256$12000$YqWBCAkWemZC$+hazwa/dPJNczpPitJ2J0KR8UuAX11txLlSkrtAXk5k=|2014-08-21 14:59:05.171913|1|greg||||1|1|2014-08-21 14:59:05.171913

The .tables command lists all the tables that exist in the database. We recognize these as being the same tables that were created earlier by running the ./manage.py syncdb command. select * from auth_user; is SQL. SQL is a language dedicated to programming databases. This command means, "Give me everything in the auth_user table." Type .quit to exit the sqlite3 shell.

Running the Server

You run the server with ./manage.py runserver. Now you can send HTTP requests using your browser as the client. Enter http://127.0.0.1:8000/. You should see Django's default welcome page. You can quit the server at any point with Ctrl+C.

Creating and Installing the Blog App

Tip: Django, like any framework, provides a way of organizing your code. It provides a proven architecture that you learn to work within. A good web framework makes a lot of decisions for you. You build on the combined experience of the developers who created it.

Django introduces the concept of an app as a way to organize code. Our blog will be an app. We create it with: ./manage.py startapp blog. We now have a generated folder structure that looks like:

- blog
|   - __init__.py
|   - admin.py
|   - models.py
|   - tests.py
|   - views.py
- db.sqlite3
- manage.py
- website
    - __init__.py
    - settings.py
    - urls.py
    - wsgi.py

We now need to tell our website about the blog app's existence. We do this by adding it to the INSTALLED_APPS tuple in website/settings.py:

INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'website',
    'blog',
)

Creating Web Services

We will start by programming the server to return a response to an HTTP GET request. We will always need to do two things: map a URL to a view function, and define the view function.

The file website/urls.py matches URLs to view functions. When the Django server receives a URL, it searches in this file for one that matches. If it finds a match, it executes the mapped function. If it doesn’t find anything, you get a 404 Page Not Found error.

Django provides us with what it calls view functions. These are ordinary Python functions, but they take a request object and they respond with a string or what is called an HTTP Response object. In your blog app, open the blog/views.py file and add this to it:

from django.http import HttpResponse

def hello(request):
    return HttpResponse('hello')

Now we need to configure our website with which request will trigger this view function. We do this by adding a line to website/urls.py:

urlpatterns = patterns('',
    url(r'^hello$', 'blog.views.hello'),
    url(r'^admin/', include(admin.site.urls)),
)

In our browser, http://localhost:8000/hello responds with "hello". We have responded to a GET request.
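Note: the patterns()/url() syntax above reflects the older Django release this tutorial was written against. If you are following along on a recent Django version (2.0 or later), patterns() no longer exists. The sketch below is not from the original tutorial; it shows one equivalent wiring using django.urls.path:

# website/urls.py -- equivalent URL wiring for modern Django (2.0+); illustrative sketch only
from django.contrib import admin
from django.urls import path

from blog import views

urlpatterns = [
    path('hello', views.hello),        # http://localhost:8000/hello
    path('admin/', admin.site.urls),
]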
We will often follow this pattern of creating a view function and hooking it up to a URL.

GET requests can pass parameters in the URL. Here is an example: http://localhost:8000/whoami/?name=greg&sex=male. The parameter section is introduced by ? and consists of &-separated keys and values. Here, we have the parameters: name, equal to greg; sex, equal to male. As usual, we need to do two things: create a view function and hook it up in website/urls.py. First, the view function:

def whoami(request):
    sex = request.GET['sex']
    name = request.GET['name']
    response = 'You are ' + name + ' and of sex ' + sex
    return HttpResponse(response)

Note that we can extract anything passed in the URL after the ? character using the request.GET dictionary. Then hook the view up in website/urls.py:

urlpatterns = patterns('',
    url(r'^$', 'blog.views.hello'),
    url(r'^time$', 'blog.views.time'),
    url(r'^whoami/$', 'blog.views.whoami'),
    url(r'^admin/', include(admin.site.urls)),
)

You should now get a response: You are greg and of sex male

Here are two exercises.

You can get the exact time by doing the following:

>>> import datetime
>>> datetime.datetime.now()

Program your server to respond with the time when it receives an HTTP GET request to this URL: http://localhost:8000/time. You will need to create a view function in blog/views.py and hook it up to a URL in website/urls.py.

Body Mass Index Service

You have just been contracted by the NHS to provide a service that calculates the BMI. Both other websites and mobile apps will be using your service. The endpoint (URL) will respond successfully to the following type of URL: bmi/?mass=75&height=182. Look up the BMI equation on Wikipedia, write a BMI view function, and hook it up to the website URLs. You may have to revisit the notion of type in Python. Remember there is a difference between '5' and 5. To transform a number held as a string into a number, you can cast it using either float or int:

>>> float('5')
5.0
>>> int('5')
5

By now you have discovered that you can trigger any kind of program by sending a GET request to your server. You simply hook up a URL to a view function. Come up with something that is useful to you!
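If you get stuck on the exercises, here is one possible sketch of the two views. It is not the only solution and is not from the original tutorial; it assumes the ^time$ and a bmi/ URL pattern are wired up in website/urls.py as described above, and that the height parameter in the example URL is given in centimetres.

# blog/views.py -- illustrative sketch of the exercise solutions
import datetime

from django.http import HttpResponse

def time(request):
    # Respond with the current server time as plain text.
    now = datetime.datetime.now()
    return HttpResponse(str(now))

def bmi(request):
    # GET parameters arrive as strings, so cast them to numbers first.
    mass = float(request.GET['mass'])          # kilograms
    height_cm = float(request.GET['height'])   # assumed to be centimetres
    height_m = height_cm / 100.0
    value = mass / (height_m ** 2)             # BMI = mass / height^2
    return HttpResponse('Your BMI is ' + str(round(value, 1)))

With mass=75 and height=182, this returns a BMI of about 22.6.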
<urn:uuid:af8f1569-f595-4871-b5a2-7f797fbbf8ae>
3.78125
3,249
Tutorial
Software Dev.
66.279326
95,618,394
Animals Using Solar Energy for Photosynthesis or Electric Power Plants and Animals That Use Light Energy Most people consider plants to be simpler creatures than animals, but green plants have one big advantage that animals lack. They have the wonderful ability to make food inside their bodies by photosynthesis, using simple chemicals that they absorb from their environment and the energy of sunlight. Photosynthesis takes place inside the chloroplasts of plant cells. Despite their more advanced structure and functions, the bodies of humans and most animals can’t use the sun’s energy (except in reactions such as the production of vitamin D in human skin) and can't produce food. Their cells have no chloroplasts, so they are dependent on plants for their survival, either directly or indirectly. Researchers have discovered that some animals can use the sun’s energy, however. Some have incorporated plant chloroplasts into their bodies and even genes from the plant cell nucleus into their DNA. The chloroplasts carry out photosynthesis inside the animal, producing a carbohydrate and oxygen. The animal uses the carbohydrate for food. At least one animal has developed a solar cell—one that converts solar energy into electricity. Four amazing, solar-powered animals are a green sea slug named the eastern emerald elysia, an animal known as the mint-sauce worm, an insect named the oriental hornet, and the embryos of the spotted salamander. Solar-Powered Sea Slugs: Elysia chlorotica The Eastern Emerald Elysia The beautiful eastern emerald elysia (Elysia chlorotica) is a type of sea slug. It's found along the east coast of the United States and Canada in shallow water. The slug is about an inch long and is green in colour. Its body is often decorated with small white spots. Elysia chlorotica has wide, wing-like structures called parapodia that extend from the sides of its body as it floats. The parapodia undulate and contain vein-like structures, making the slug look like a leaf that has fallen into the water. This appearance may help to camouflage the animal. The parapodia are folded over the body when the animal is crawling over a solid surface. Algae in the Eastern Emerald Elysia The eastern emerald elysia feeds on a filamentous green alga called Vaucheria litoria that lives in the intertidal zone. When it takes a filament into its mouth, the slug pierces it with its radula (a band covered with tiny chitinous teeth) and sucks the contents out. Due to a process that is not completely understood, the chloroplasts in the filament are not digested and are retained. The process of acquiring chloroplasts from the alga is known as kleptoplasty. The chloroplasts collect in the branches of the slug's digestive tract, where they absorb sunlight and carry out photosynthesis. The branches of the digestive tract extend throughout the animal's body, including the parapodia. The slug's expanded "wings" provide a greater surface area for the chloroplasts to absorb light. Young slugs that haven't collected chloroplasts are brown in color and have red spots. The chloroplasts build up as the animal feeds. Eventually they become so numerous that the slug no longer needs to eat. The chloroplasts make glucose, which the slug's body absorbs. Researchers have discovered that the slugs can survive as long as nine months without eating. Gene Transfer for Photosynthesis Chloroplasts contain DNA, which in turn contains genes. 
Scientists have discovered that a chloroplast doesn't contain all the genes needed to direct the process of photosynthesis. The other required genes are present in the DNA of a plant cell's nucleus. Scientists have found that at least one of the required algal genes is also present in the DNA of the eastern emerald elysia's cells, however. At some point in time, the algal gene became incorporated into the slug's DNA. The fact that the chloroplast—a plant organelle—can survive and function in an animal's body is amazing enough, but even more amazing is the fact that the sea slug's genome (genetic material) is made of both its own DNA and algal DNA. Mint sauce is made from mint leaves, vinegar, and sugar. It's a popular addition to lamb dishes in Britain. The name of the sauce is used for a tiny beach worm found in Europe. A collection of mint-sauce worms looks very much like culinary mint sauce under certain lighting conditions. The Mint-Sauce Worm A green worm (Symsagittifera roscoffensis) can be found on certain beaches on the Atlantic coast of Europe. The animal is only a few millimetres long and is often known as the mint-sauce worm. Its colour comes from the photosynthetic algae living in its tissues. The adult worms rely entirely on substances made by photosynthesis for their nutrition. They are found in shallow water, where their algae can absorb sunlight. The worms collect to form a circular group when their population is sufficiently dense. Furthermore, the circle rotates—almost always in a clockwise direction. At lower densities the worms move in a linear mat, as shown in the video below. Researchers are very interested in the reasons why the worms move as a group and in the factors that control this movement. The Mint-Sauce Worm Moving Over a Beach in Portugal The Oriental Hornet The oriental hornet, or Vespa orientalis, is a red-brown insect with yellow markings. There are two wide, yellow stripes next to each other near the end of the insect's abdomen. The hornet also has a narrow yellow stripe near the start of its abdomen and a yellow patch on its face. Oriental hornets are found in southern Europe, southwest Asia, northeast Africa, and Madagascar. They have also been introduced to part of South America. The hornets live in colonies and usually build their nest underground. The nests are occasionally constructed above ground in a sheltered area, however. Like bees, the hornet colony consists of one queen and many workers, which are all females. The queen is the only hornet in the colony that reproduces. The workers take care of the nest and colony. The male hornets, or drones, die after fertilizing the queens. The hard outer covering of an insect is called an exoskeleton or cuticle. Scientists have discovered that the exoskeleton of the oriental hornet produces electricity from sunlight and acts as a solar cell. Oriental Hornets Cooling Their Nest Down on a Hot Day How Does the Oriental Hornet Produce Electricity? By examining the hornet's exoskeleton under very high magnification and investigating its composition and properties, scientists have discovered the following facts. - The brown areas of the exoskeleton contain grooves that split incoming sunlight into diverging beams. - The yellow areas are covered by oval protrusions which each have a tiny depression that resembles a pinhole. - The grooves and holes are thought to reduce the amount of sunlight that bounces off the exoskeleton. 
- Lab results have shown that the surface of the hornet absorbs most of the light that strikes it. - The yellow areas contain a pigment called xanthopterin, which can turn light energy into electrical energy. - Scientists think that the brown areas pass light to the yellow areas, which then produce electricity. - In the lab, shining light on the oriental hornet's exoskeleton generates a small voltage, showing that it can act as a solar cell. Inside an Oriental Hornet Nest Why Does the Oriental Hornet Need Electrical Energy? It's not yet known why the oriental hornet needs electrical energy, although researchers have made some suggestions. The electricity might give the insect's muscles extra energy or it might increase the activity of certain enzymes. Unlike most insects, the oriental hornet is most active in the middle of the day and early afternoon when the sunlight is most intense. Its exoskeleton is thought to provide a boost in energy as sunlight is absorbed and converted into electrical energy. The Spotted Salamander The spotted salamander (Ambystoma maculatum) lives in the eastern United States and Canada, where it's a widespread amphibian. The adults are black, dark brown, or dark grey in colour and have yellow spots. Researchers have discovered that the embryos of the spotted salamander contain chloroplasts. The discovery is exciting because the salamander is the only vertebrate known to incorporate chloroplasts into its body. Spotted salamanders live in deciduous forests. They are rarely seen because they spend most of their time under logs or rocks or in burrows. They emerge at night to feed under the cover of darkness. The salamanders are carnivores and eat invertebrates such as insects, worms, and slugs. Spotted salamanders also emerge from their hiding place in order to mate. The female generally finds a vernal (temporary) pool in which to lay her eggs. The advantage of a pool of water compared to many ponds is that the pool doesn't contain fish that would eat the eggs. Adult Spotted Salamanders How Do Spotted Salamander Embryos Obtain Chloroplasts? Once the salamander's eggs are laid in a pool, a single-celled green alga called Oophila amblystomatis enters them within a few hours. The relationship between the developing embryo and the alga is mutually beneficial. The alga uses the wastes made by the embryos and the embryos use oxygen produced by the alga during photosynthesis. Researchers have found that in eggs with algae, embryos grow faster and have a better survival rate. It used to be thought that the algae entered the salamander eggs but not the embryos inside the eggs. Now scientists know that some of the algae do enter the embryo's body, and some even enter the embryo's cells. The algae survive and continue to photosynthesize, producing food for the embryo as well as oxygen. Embryos without the algae can survive, but they grow more slowly and their survival rate is lower. Spotted Salamander Eggs and Embryos Animals and Photosynthesis Now that one vertebrate has been found to carry out photosynthesis, scientists are on the lookout for more. They feel that it's more likely in vertebrates that reproduce by releasing eggs into water, where the eggs can be penetrated by algae. The young of mammals and birds are well protected and aren't likely to absorb algae. The idea that animals can use solar energy vía isolated chloroplasts or algae or entirely on their own is a fascinating one. It will be interesting to see if more animals with these abilities are discovered. 
Questions & Answers We use plant material like alfalfa (lucerne) to make pellets for animal feeds. Is it at all possible to "manufacture" pellets from sunlight with artificial photosynthesis and thus bypass the plants' processes? At the moment, this isn’t possible. Researchers are exploring artificial photosynthesis, however, so it may one day be feasible. During natural photosynthesis, plants convert the energy of sunlight into chemical energy, which is then stored in the molecules of carbohydrates. At the moment, the focus of the artificial photosynthesis research seems to be the creation of a different type of energy from sunlight instead of the chemical energy stored in molecules. New goals for the research may be established in the future, though. © 2013 Linda Crampton
<urn:uuid:a1152af0-3be3-423d-8853-f890f77f3d0b>
3.6875
2,431
Knowledge Article
Science & Tech.
40.570199
95,618,419
This last decade has witnessed a revolution in our observations of galaxies; in particular, deep imaging with HST and spectroscopy with 10m-class ground-based telescopes have uncovered many objects that are difficult to place along the Hubble sequence. High-resolution spectroscopy of extremely faint objects has enabled the study of the kinematic evolution and, hence, the mass assembly of galaxies to unprecedented look-back times for direct comparison with cosmological structure formation scenarios. Thus, it is now possible to study all three aspects of galaxy evolution - their morphological-dynamical, chemical and spectral evolution - out to redshifts larger than six, exploring more than 95% of the age of the universe. These Proceedings of IAU Symposium 235 report the considerable progress made in recent years on galaxy formation and evolution, and look forward to the expected breakthroughs in the domain of remote galaxies with ALMA, the ELT and the next-generation space telescopes.

'Galaxy evolution is in a phase of astonishingly rapid development ... over 200 papers in this volume [give] a snapshot of [this] progress.' - The Observatory
<urn:uuid:d159fdec-d333-486c-a755-4be6bb0dd57a>
2.546875
298
Product Page
Science & Tech.
29.348386
95,618,420