|Time limit||Memory limit||Submissions||Accepted||Solvers||Success rate| |3 s||512 MB||60||11||10||32.258%|

For every positive integer we may obtain a non-negative integer by multiplying its digits. This defines a function f, e.g. f(38) = 24. The function gets more interesting if we allow other bases: in base 3 the number 80 is written as 2222, so f3(80) = 16. We want you to solve the reverse problem: given a base B and a number N, what is the smallest positive integer X such that fB(X) = N?

The input consists of a single line containing two integers B and N, satisfying 2 < B ≤ 10 000 and 0 < N < 2^63. Output the smallest positive integer solution X of the equation fB(X) = N. If no such X exists, output the word "impossible". The input is carefully chosen such that X < 2^63 holds (if X exists).
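A natural way to attack the reverse problem is greedy factorization: peel digit divisors off N from the largest digit down (fewer digits means a smaller number), then emit the collected digits in ascending order. The sketch below handles only the classic base-10 variant; the judge's general case (B up to 10 000, N up to 2^63) needs the same factorization idea but with 64-bit arithmetic and numeral handling for digits above 9:

```python
def smallest_with_digit_product(n, base=10):
    """Smallest positive integer whose digit product is n (base 10 sketch).
    Greedy: divide out the largest usable digits first, then arrange the
    surviving digits in ascending order. Returns None if impossible.
    (Digits are concatenated as decimal characters, so base <= 10 here.)"""
    if n == 1:
        return 1
    digits = []
    for d in range(base - 1, 1, -1):   # try 9, 8, ..., 2 for base 10
        while n % d == 0:
            n //= d
            digits.append(d)
    if n != 1:
        return None                    # a prime factor >= base remains
    digits.sort()                      # smallest digits first
    return int("".join(map(str, digits)))
```

For example, 24 = 8 × 3, so the smallest solution is 38, matching f(38) = 24 from the statement; a prime such as 13 has no base-10 digit factorization and is "impossible".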
Calcium phosphate is a typical component of teeth and bones. It has recently been shown that plants of the rock nettle family also use this very hard mineral in their "teeth" to defend themselves against their animal enemies. Botanists at Bonn University have now demonstrated that calcium phosphate is far more widespread in plants than previously suspected. Even thale cress (Arabidopsis thaliana) uses trichomes hardened with an incrustation of this biomineral to defend itself against enemies such as aphids. The results have been published online first in the scientific journal "Planta"; the print version will appear in January.

In agriculture the unspectacular thale cress (Arabidopsis thaliana) is simply considered a weed. In science, however, the plant has for decades been the model organism for studies in genetics, molecular biology and physiology. "It is certainly the most thoroughly investigated plant of all," says Prof. Dr. Maximilian Weigend of the Nees-Institut for Plant Biodiversity at Bonn University.

Aphid on the leaf surface of Caiophora deserticola (Loasaceae): the sharp mineralized trichomes represent a deadly forest of needles that the animal has to walk over. © Adeel Mustafa/Uni Bonn

"It is all the more surprising that calcium phosphate in the tips of the trichomes of Arabidopsis was discovered only now." The team around Prof. Weigend identified the hard-as-teeth substance with the help of electron microscopy and Raman spectroscopy. That "teeth" are not restricted to animals but are also found in plants had previously been demonstrated by the Bonn botanists, with the help of Hans-Jürgen Ensikat, in the rock nettle family (Loasaceae). Subsequently, the scientists extended their studies to various other plant orders. They demonstrated calcium phosphate biomineralization in several dozen plant species, e.g. in the orders Rosales, Boraginales and Brassicales – thale cress belongs to the latter.
Deceptively soft hairs are sharp weapons

"It has long been known that many plants use glass-like silica or calcium carbonate to stiffen their trichomes," reports Adeel Mustafa of the Weigend working group. "The surprising thing was that very hard calcium phosphate is also used by a whole range of species, yet had been completely overlooked until recently." Thale cress lacks spectacular spines or stinging hairs like those of stinging nettles, which use them to deter browsing mammals such as cows. In Arabidopsis the trichomes are small and comparatively soft – only the tiny tips are incrusted with the particularly hard calcium phosphate. "The biomineral is apparently deposited precisely where maximum mechanical stability is required," explains Weigend.

Microscopic images show impaled aphids

Thale cress uses its hairs to defend itself mostly against small insects such as aphids. Microscopic images demonstrate how the mineralized trichomes represent an insurmountable obstacle. Like an iron maiden, the medieval instrument of torture, the particularly hardened hairs impale the aphids. "We are dealing with a microscale defense weapon, deterring many types of insects from damaging these plants," says Weigend. "In a way it is surprising that not all plants use calcium phosphate in structural biomineralization," concludes Mustafa. Calcium and phosphate are nearly universally present in plants in the form of other chemical compounds, but their use as a biomineral is not universal. Silica and calcium phosphate are far superior to calcium carbonate – the most common biomineral overall – due to their much higher hardness. The ability to harden hairs with calcium phosphate appears to have a genetic basis. Weigend outlines possible future research topics: "Unravelling the genetic basis for the production of these defense weapons would be the next logical step.
This would enable us to use these self-defending plants as models for breeding more insect-resistant crops."

Publication: Maximilian Weigend, Adeel Mustafa, Hans-Jürgen Ensikat: Calcium phosphate in plant trichomes: the overlooked biomineral. Planta, DOI: 10.1007/s00425-017-2826-1

Media contact: Prof. Dr. Maximilian Weigend, Nees-Institut for Plant Biodiversity

Johannes Seiler | idw - Informationsdienst Wissenschaft
As climate change alters habitats for birds and bees and everything in between, so too does the way humans decide to use land. Researchers at the University of Wisconsin-Madison and Aarhus University in Denmark have, for the first time, found a way to determine the potential combined impacts of both climate and land-use change on plants, animals and ecosystems across the country. The study, which looks at estimates of climate and land-use change speeds, is from Jack Williams, UW-Madison professor of geography; Volker Radeloff, UW-Madison associate professor in the Department of Forest and Wildlife Ecology; and postdoctoral researchers Alejandro Ordonez, from Aarhus University and UW-Madison, and Sebastian Martinuz, of UW-Madison. It was published today (Aug. 18, 2014) in the journal Nature Climate Change. The estimates — relevant to the first 50 years of the 21st century — provide a basis for national, regional and local policy discussions about how to conserve biodiversity and ecosystems in a rapidly changing world. Combining climate and land-use change, the researchers say, may lead to different actions than consideration of either alone. "For conservation, as the world is changing, we want to know, how will wildlife respond," Radeloff says. "We need to take both land use and climate into account as we look at the future." For example, flat areas of the Midwest are more vulnerable to climate change than mountainous regions of the country. Conversely, areas in the northeastern U.S. may experience more intensive rates of land use. High demand for cropland in New England would lead to greater destruction of forest, while, in the upper Midwest, it would lead to slower growth of cities. The analyses thus show different impacts for different regions. Regions exposed to high climate change rates and reductions in habitat due to more rapid land-use change may be higher priority for policy efforts than other areas. 
In some regions, such as the Great Plains, high rates of land-use change may actually lead to increased forest cover. "There are lots of studies that look at climate change and a lot of studies that look at land-use change, but very few quantitatively integrate the two together," says Williams, who is also director of the Center for Climatic Research in the UW-Madison Nelson Institute for Environmental Studies. In their approach, the researchers used the Intergovernmental Panel on Climate Change 5th Assessment Report and socioeconomic parameters from the U.S. Natural Resources Inventory to create scenarios that looked at the rate of change of both climate and land use, referred to as the speed of climate and land-use change. The land-use scenarios came from models previously developed by Radeloff and his team in 2012. For climate, this meant looking at changes in variables like precipitation, water deficit, and temperature. For land use, it meant assessing changes to housing prices, agricultural taxes, carbon subsidies and more. The speed of climate change in a particular place matters because it determines how quickly a given species of plant or animal must migrate from one region to another to stay within its optimal climate, or how quickly it must adapt to new conditions. Similarly, land-use speeds measure how quickly land cover changes, which can lead to new or lost habitat, species isolation, or barriers to species entering or leaving an area. The combined scenarios are not, Williams and Radeloff say, meant to advise policymakers what to do, but rather, to show what is likely given specific changes to policy in the context of a changing climate and changing land. It's not "what's going to happen, but a range of what might be likely," says Radeloff. "If we change these policies, this is what's likely to happen." The team found that, overall, climate change has an order of magnitude more impact than land use, but the relative impact of both differs by region. 
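The "speed of climate change" the article describes is commonly computed as a climate velocity: the temporal trend of a variable divided by its local spatial gradient. The sketch below follows that standard formulation, not necessarily the authors' exact implementation; the function and parameter names are illustrative:

```python
import numpy as np

def climate_velocity(temp_now, temp_future, years, cell_km):
    """Climate-change speed (km/yr) on a gridded temperature field:
    temporal trend (degC/yr) divided by spatial gradient (degC/km).
    Flat terrain (small gradient) yields high velocity, matching the
    article's point that flat regions are more exposed than mountains."""
    trend = (temp_future - temp_now) / years            # degC per year
    gy, gx = np.gradient(temp_now, cell_km)             # degC per km
    gradient = np.hypot(gx, gy)
    return np.abs(trend) / np.maximum(gradient, 1e-9)   # km per year
```

On a field warming 1 °C per century with a 0.01 °C/km spatial gradient, species would need to shift about 1 km per year to stay within their optimal climate.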
"Across the U.S., the rates of climate change are a big deal," says Williams. "If we are thinking about land use and conservation planning, these results put both into perspective." The researchers joined forces across fields because, they say, the sweep of global change requires coordinated research. Change is inevitable, they say, but humans have the chance to mitigate their impact in ways that give the world's wildlife a chance to thrive. "We won't stop climate change but maybe we can slow it … we may be able to give species time to adapt," says Radeloff. "Now we have geese living on golf courses, but Aldo Leopold was worried they were going to go extinct. That's probably not going to happen." The work was supported by the Bryson Climate, People and Environment Program; the HISTFUNC project; the National Science Foundation; and NASA's Land Cover and Land Use Change Program. Kelly April Tyrrell, firstname.lastname@example.org Jack Williams | EurekAlert!
What is CIN? CIN stands for Convective INhibition: a measure of the amount of energy needed to initiate convection. Values of CIN typically reflect the strength of the cap. CIN is obtained from a sounding by computing the area enclosed between the environmental temperature profile and the path of a rising air parcel, over the layer within which the latter is cooler than the former. (This area is sometimes called the negative area.) See CAPE. Reference: National Weather Service Glossary
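That "negative area" can be approximated numerically from sounding data. A minimal sketch (the array layout and the simple trapezoidal integration are illustrative choices, not part of the glossary definition):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cin(z_m, t_env_k, t_parcel_k):
    """Convective inhibition (J/kg): trapezoidal integral of parcel
    buoyancy g*(Tp - Te)/Te over the layers where the parcel is cooler
    than the environment (the 'negative area'). Heights in metres,
    temperatures in kelvin; the result is <= 0 by this convention."""
    buoyancy = G * (t_parcel_k - t_env_k) / t_env_k
    negative = np.minimum(buoyancy, 0.0)   # keep only the negative area
    return float(np.sum(0.5 * (negative[1:] + negative[:-1]) * np.diff(z_m)))
```

A parcel 3 K cooler than a 300 K environment over a 1 km layer contributes roughly -98 J/kg, the kind of modest cap that convection must overcome.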
Natural [Language|Logic] Inference

NaturalLI is a Natural Logic reasoning engine aimed at fast inference from a large database of known facts. The project's primary goal is to infer whether arbitrary common-sense facts are true, given a large database of known facts. The system is described in: Gabor Angeli and Christopher D. Manning. "NaturalLI: Natural Logic for Common Sense Reasoning." EMNLP 2014.

On Ubuntu, the program can be built (including all dependencies) with:

./install_deps.sh  # optional; read this before running it!
./autogen.sh
./configure
make
make install  # optional; make sure to set --prefix appropriately

The code compiles with both g++ (4.8+; 4.9+ highly recommended) and clang++ (3.5+). With GCC 4.8, regular expressions will not work properly, which means sending flags to the C program will not work properly (an exception will be raised and the program will crash). On Ubuntu, you can install g++ 4.9 with:

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install g++-4.9

Clang should already be at a new enough version as of Ubuntu 14.04 and onwards.

The following is a list of options that can be passed into the configure script, and a description of each.

--enable-debug: Enables code coverage and assertions.
--with-java: The root of the Java runtime to use. This should be Java 8+, and must include javac (i.e., not just a JRE).
--with-corenlp: The location of CoreNLP – generally, a jar file. This must be CoreNLP version 3.5.3 or newer. Must be an absolute path!
--with-corenlp-models: The location of the CoreNLP models jar. Must be an absolute path!
--with-corenlp-caseless-models: The location of the CoreNLP caseless models jar. Must be an absolute path!

In addition, a number of environment variables are relevant for the configure script. These are listed below:

CXX: The C++ compiler to use. Both g++ (4.8+; 4.9+ highly recommended) and clang++ (3.5+) should compile.
MAX_FACT_LENGTH: The maximum number of tokens a fact can be.
This has to be less than 255.
MAX_QUERY_LENGTH: The maximum length of a query. Note that memory during search scales linearly with this value. Default is 39, which has nice cache-alignment properties.
MAX_QUANTIFIER_COUNT: The maximum number of quantifiers in a sentence. Note that search memory and search runtime scale linearly with this. Default is 6, which has nice cache-alignment properties.
SERVER_PORT: The port to serve the server from. Default is 1337.
SEARCH_TIMEOUT: The maximum number of ticks to search for. Default is 1000000.
SEARCH_CYCLE_MEMORY: The depth to backtrack during search when looking for cycles. Lower values cause search to run faster, but nodes to be repeated more often. Default is 3.
SEARCH_CYCLE_FULL_MEMORY: Check all the way back to the start node for cycles. Default is false (0).
MAX_FUZZY_MATCHES: The number of premises to consider from the explicit premise set for fuzzy alignment matches. Default is 0.
MAX_BRANCHOUT: The maximum branching factor of the graph search. This is to prevent runaway nodes from taking too much time. Default is 100.

Command Line Inference

The NaturalLI interface takes as input lines from standard in, and outputs JSON to standard out (with debugging information on stderr). You can therefore run the program simply by running: src/naturalli. You can then enter premise/hypothesis pairs as a series of sentences, one per line. All but the last sentence are treated as premises; the last sentence is treated as the hypothesis. A double newline (i.e., a blank line) marks the end of an example. You can also mark hypotheses as True or False; this will cause the program to exit with a nonzero error code if some of the hypotheses are not what they are annotated as. The error code corresponds to the number of failed examples.
# This is a comment
All cats have tails
Some cats have tails

# This is a new example
# The "True: " prepended to the hypothesis denotes that we expect
# this fact to be true given the premises
Some cats have tails
An irrelevant premise
True: Some animals have tails

A useful side-effect of this framework is the ability to pipe in test files and get annotated JSON as output. For example, the test cases for the program can be run with:

cat test/data/testcase_wordnet.examples | src/naturalli

In addition to the command line interface, you can also talk to the program over a socket. By default, NaturalLI listens on port 1337; the communication protocol is exactly the same as the command line interface. For example:

$ telnet localhost 1337
Trying 127.0.0.1...
Connected to localhost (127.0.0.1).
Escape character is '^]'.
all cats have tails
some cats have tails
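A client for that socket takes only a few lines in any language. Here is a sketch in Python; the function name and the blank-line framing are assumptions derived from the CLI protocol described above, not part of the NaturalLI distribution:

```python
import socket

def query_naturalli(premises, hypothesis, host="localhost", port=1337):
    """Send one example over NaturalLI's socket protocol: one sentence
    per line, a blank line to end the example; the server replies with
    JSON. Assumes a NaturalLI server is already listening on `port`."""
    payload = "\n".join(list(premises) + [hypothesis]) + "\n\n"
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload.encode("utf-8"))
        conn.shutdown(socket.SHUT_WR)  # signal end of input
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8")
```

For example, `query_naturalli(["all cats have tails"], "some cats have tails")` mirrors the telnet session above.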
Astronomers see more planets than stars in galaxy

The whole Milky Way 'must be just swarming with little habitable planets.'

It's not your father's galaxy. With breathtaking speed, astronomers are rewriting our understanding of the universe in which we live. It was only 15 years ago that scientists spied the first planet around another star. On Wednesday, astronomers announced there are at least 100 billion planets in our Milky Way galaxy alone, many of them rocky and Earth-size. Separately, another scientific group announced the discovery of the three smallest exoplanets ever found, one about half the size of Earth and the others three-quarters the size of Earth. Each of these likely rocky planets closely orbits its small red dwarf star, moving so fast that the outermost planet has an orbital period — a “year” — of just two days. Based upon their survey of nearby red dwarf stars, the astronomers believe many of these small stars, the most common type in the galaxy, must harbor Earth-size worlds. “If these planets are as common as they appear, then the whole galaxy must be just swarming with little habitable planets around faint red dwarfs,” said John Johnson, a California Institute of Technology astronomer who helped make the discovery.

A decade or two ago scientists wondered whether our solar system, filled with planets, was a rare thing, as they could only theorize about how planets formed during the fiery birth of stars. But in the past 15 years, using half a dozen improvised astronomical methods, scientists have pushed their observational technology to the point where they can not only measure faint dips and increases in light from distant stars, but also parse out what the pulses mean. The process of discovering distant worlds greatly accelerated with the Kepler telescope, which NASA launched in 2009 and which produced its first results in early 2010. The instrument has so far identified more than 2,000 worlds around other stars, about 200 of which are similar in size to Earth. “For the last decade-plus the process of finding planets has been a duct-tape-like effort,” Johnson said. “With the launch of the Kepler mission you had a single-minded focus: find planets. And find planets it did.” Johnson's discovery is one. The star, KOI-961, is about 120 light-years from Earth and has 13 percent the mass of the Sun. It is colder than the Sun, and therefore has a reddish hue. The tightly packed system is similar in size to Jupiter and its moons. Until Wednesday astronomers had found just one planet, Kepler-20e, that was smaller than Earth. Johnson said it's likely that as many as one in three red dwarf stars have rocky planets around them. These stars, which have 40 percent or less of the mass of the Sun, are incredibly common throughout the galaxy.
Atomic bonding point of view

Pure aluminum is a ductile metal with low tensile strength and hardness. Its oxide Al2O3 (alumina) is extremely strong, hard, and brittle. Can you explain this difference from an atomic bonding point of view?

If a solution of caffeine in chloroform (CHCl3) as a solvent has a concentration of 0.0870 m, calculate the following. (a) the percent caffeine by mass (b) the mole fraction of caffeine.

A compound contains 78.14% boron and 21.86% oxygen. Determine the empirical formula for this compound.

What will be the concentration of Ca2+(aq) when Ag2SO4(s) begins to precipitate? What percentage of the Ca2+(aq) can be separated from the Ag+(aq) by selective precipitation?

Calcium oxide reacts with water in a combination reaction to produce calcium hydroxide: CaO(s) + H2O(l) --> Ca(OH)2(s). In a beaker, 1.45 g of H2O is added to 1.50 g of CaO.

An insecticide has the weight percents C = 55.6%, H = 4.38%, Cl = 30.8% and O = 9.26%. The approximate molar mass is 345 g/mol. What is the molecular formula?

For each of the following, determine whether the calculated molarity would be too low, too high or unaffected, and explain why: a. you did not react all of the excess zinc before filtering the reaction mixture.

The acid dissociation constant for hypochlorous acid (HClO) is 3.0x10-8. Calculate the concentrations of H3O+, ClO- and HClO at equilibrium if the initial concentration of the acid is 0.05 M. What is the pH? I made an ice table and found x to be 3

How many grams of dry NH4Cl need to be added to 2.50 L of a 0.500 M solution of ammonia, NH3, to prepare a buffer solution that has a pH of 8.74? Kb for ammonia is 1.8*10^-5.

897.4 grams of sodium sulfide reacts with excess nitric acid. What volume of H2S gas is produced if the reaction is carried out at 760. Torr and 298 K?

A baseball is 150 g and thrown at 100 miles/hour with the same uncertainty. Determine the uncertainty in its position.
The uncertainty was set at 1% in the previous problem; not sure if it matters for this problem.
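The caffeine exercise above is a unit-conversion problem: a molality (mol solute per kg solvent) fixes both the mass percent and the mole fraction once the molar masses are known. A sketch using standard molar masses (194.19 g/mol for caffeine, 119.38 g/mol for CHCl3); the function name is illustrative:

```python
M_CAFFEINE = 194.19  # g/mol, C8H10N4O2
M_CHCL3 = 119.38     # g/mol

def caffeine_mass_pct_and_mole_fraction(molality):
    """From molality (mol caffeine per kg CHCl3), compute (a) the mass
    percent of caffeine and (b) its mole fraction, using a 1 kg solvent
    basis: mass % = g solute / (g solute + 1000 g), and the mole
    fraction divides by the total moles of solute plus solvent."""
    g_caffeine = molality * M_CAFFEINE
    mass_pct = 100.0 * g_caffeine / (g_caffeine + 1000.0)
    mol_chcl3 = 1000.0 / M_CHCL3
    mole_frac = molality / (molality + mol_chcl3)
    return mass_pct, mole_frac
```

For the stated 0.0870 m solution this gives roughly 1.66% caffeine by mass and a mole fraction of about 0.0103.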
Nucleic Acids

Function: genetic material
- stores information: genes, the blueprint for building proteins (DNA, RNA, proteins)
- transfers information: blueprint for new cells, blueprint for the next generation

Bases: T, G, A, C
- Nitrogen base: I'm the A, T, C, G or U part!
- Are nucleic acids charged molecules?
- Purine = A, G
- Dangling bases? Why is this important?
- Matching bases? Why is this important?
- H bonds? Why is this important?
- Matching halves? Why is this a good system?

"It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material."

ester bond (in a fat)

Passing on information? Why is this important?
Scientists, engineers, and others create geologic maps to determine the best places for people to settle, build, farm, and use land in a variety of ways. They also use geologic maps to monitor the ways that human activity might be changing the land itself over time. In this activity, you will examine geologic maps — and other maps — to consider the relationship between karst and human activity. Fossil fuels play an important role in allowing us to have the lifestyles we're accustomed to, but they do emit carbon dioxide, and we all want to be good stewards of our resources. The goal of this activity is to become aware of how much energy you use at school — and the financial and environmental costs. You may have seen or used Global Positioning System (GPS) devices in cars or on camping trips. These devices use data from satellites orbiting the Earth to locate places on our planet. GPS devices describe the locations to us in the form of latitude and longitude coordinates. Static electricity can be used to demonstrate the electricity of lightning. This activity will demonstrate the attraction of positive and negative charges and what happens when those opposite charges meet each other. When it comes to slipping, sliding, and stability in soils, the key word is “liquefaction.” During an event like an earthquake, liquefaction is the process by which saturated soil behaves like a liquid. This can be problematic, as a liquid soil loses structure and can cause buildings to sink, foundations to crack, and soil to slide down slopes all at once. How does the type of soil affect how much a house will sink or shift during an earthquake? Conduct an experiment to test your ideas! Dendrochronologists use tree rings to go back in time to learn more about past climate. Using straws to recreate tree rings, you can learn how dendrochronologists work. The following activity is designed to help you learn to listen, read, and communicate in both written and oral formats about the sky.
In this activity, students will explore local places with wild elements, such as wildlife refuges. Students also will create maps showing spatial relationships between wild places and school, and they will find creative ways to record experiences. Learn about the Earth's magnetic poles and paleomagnetism in this activity from Consortium for Ocean Leadership. When warm and cold air masses meet, a thunderstorm can grow. Thunderstorms also cause heavy rain, flash flooding, hail, strong winds and tornadoes. In this activity, you will learn about convection and how air moves.
Supermassive black hole A supermassive black hole (SMBH or SBH) is the largest type of black hole, on the order of hundreds of thousands to billions of solar masses (M☉), and is found in the centre of almost all currently known massive galaxies. In the case of the Milky Way, the SMBH corresponds with the location of Sagittarius A*. Supermassive black holes have properties that distinguish them from lower-mass classifications. First, the average density of a SMBH (defined as the mass of the black hole divided by the volume within its Schwarzschild radius) can be less than the density of water in the case of some SMBHs. This is because the Schwarzschild radius is directly proportional to mass, while density is inversely proportional to the volume. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher mass black holes have lower average density. In addition, the tidal forces in the vicinity of the event horizon are significantly weaker for massive black holes. As with density, the tidal force on a body at the event horizon is inversely proportional to the square of the mass: a person on the surface of the Earth and one at the event horizon of a 10 million M☉ black hole experience about the same tidal force between their head and feet. Unlike with stellar mass black holes, one would not experience significant tidal force until very deep into the black hole. History of research Donald Lynden-Bell and Martin Rees hypothesized in 1971 that the center of the Milky Way galaxy would contain a supermassive black hole. Sagittarius A* was discovered and named on February 13 and 15, 1974, by astronomers Bruce Balick and Robert Brown using the baseline interferometer of the National Radio Astronomy Observatory. 
They discovered a radio source that emits synchrotron radiation; it was found to be dense and immobile because of its gravitation. This was, therefore, the first indication that a supermassive black hole exists in the center of the Milky Way.

The origin of supermassive black holes remains an open field of research. Astrophysicists agree that once a black hole is in place in the center of a galaxy, it can grow by accretion of matter and by merging with other black holes. There are, however, several hypotheses for the formation mechanisms and initial masses of the progenitors, or "seeds", of supermassive black holes. One hypothesis is that the seeds are black holes of tens or perhaps hundreds of solar masses that are left behind by the explosions of massive stars and grow by accretion of matter. Another model hypothesizes that before the first stars, large gas clouds could collapse into a "quasi-star", which would in turn collapse into a black hole of around 20 M☉. The "quasi-star" becomes unstable to radial perturbations because of electron-positron pair production in its core and could collapse directly into a black hole without a supernova explosion (which would eject most of its mass, preventing the black hole from growing as fast). Given sufficient mass nearby, the black hole could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists. Another model involves a dense stellar cluster undergoing core-collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds. Finally, primordial black holes could have been produced directly from external pressure in the first moments after the Big Bang. These primordial black holes would then have more time than any of the above models to accrete, allowing them sufficient time to reach supermassive sizes. Formation of black holes from the deaths of the first stars has been extensively studied and corroborated by observations. The other models for black hole formation listed above are theoretical.

The difficulty in forming a supermassive black hole resides in the need for enough matter to be in a small enough volume. This matter needs to have very little angular momentum in order for this to happen. Normally, the process of accretion involves transporting a large initial endowment of angular momentum outwards, and this appears to be the limiting factor in black hole growth. This is a major component of the theory of accretion disks.

Gas accretion is the most efficient and also the most conspicuous way in which black holes grow. The majority of the mass growth of supermassive black holes is thought to occur through episodes of rapid gas accretion, which are observable as active galactic nuclei or quasars. Observations reveal that quasars were much more frequent when the Universe was younger, indicating that supermassive black holes formed and grew early. A major constraining factor for theories of supermassive black hole formation is the observation of distant luminous quasars, which indicate that supermassive black holes of billions of solar masses had already formed when the Universe was less than one billion years old. This suggests that supermassive black holes arose very early in the Universe, inside the first massive galaxies.

A vacancy exists in the observed mass distribution of black holes. Black holes that spawn from dying stars have masses 5–80 M☉. The minimal supermassive black hole is approximately a hundred thousand solar masses. Mass scales between these ranges are dubbed intermediate-mass black holes. Such a gap suggests a different formation process. However, some models suggest that ultraluminous X-ray sources (ULXs) may be black holes from this missing group.

There is, however, an upper limit to how large supermassive black holes can grow. So-called ultramassive black holes (UMBHs), which are at least ten times the size of supermassive black holes, appear to have a theoretical upper limit of around 50 billion solar masses, as anything above this slows growth down to a crawl (the slowdown tends to start around 10 billion solar masses) and causes the unstable accretion disk surrounding the black hole to coalesce into stars that orbit it.

A small minority of sources argue that distant supermassive black holes whose large size is hard to explain so soon after the Big Bang, such as ULAS J1342+0928, may be evidence that our universe is the result of a Big Bounce, instead of a Big Bang, with these supermassive black holes being formed before the Big Bounce.

Some of the best evidence for the presence of black holes is provided by the Doppler effect, whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. For matter very close to a black hole the orbital speed must be comparable with the speed of light, so receding matter will appear very faint compared with advancing matter, which means that systems with intrinsically symmetric discs and rings will acquire a highly asymmetric visual appearance. This effect has been allowed for in modern computer-generated images such as the example presented here, based on a plausible model for the supermassive black hole in Sgr A* at the centre of our own galaxy. However, the resolution provided by presently available telescope technology is still insufficient to confirm such predictions directly. What already has been observed directly in many systems are the lower non-relativistic velocities of matter orbiting further out from what are presumed to be black holes. Direct Doppler measures of water masers surrounding the nuclei of nearby galaxies have revealed a very fast Keplerian motion, only possible with a high concentration of matter in the center.
Currently, the only known objects that can pack enough matter in such a small space are black holes, or things that will evolve into black holes within astrophysically short timescales. For active galaxies farther away, the width of broad spectral lines can be used to probe the gas orbiting near the event horizon. The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers active galaxies.

In the Milky Way

- The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8×10^13 m or 120 AU) from the center of the central object.
- From the motion of star S2, the object's mass can be estimated as 4.1 million M☉, or about 8.2×10^36 kg.
- The radius of the central object must be less than 17 light-hours, because otherwise, S2 would collide with it. In fact, recent observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit.
- No known astronomical object other than a black hole can contain 4.1 million M☉ in this volume of space.

The Max Planck Institute for Extraterrestrial Physics and UCLA Galactic Center Group have provided the strongest evidence to date that Sagittarius A* is the site of a supermassive black hole, based on data from ESO's Very Large Telescope and the Keck telescope. On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.

Outside the Milky Way

Unambiguous dynamical evidence for supermassive black holes exists only in a handful of galaxies; these include the Milky Way, the Local Group galaxies M31 and M32, and a few galaxies beyond the Local Group, e.g.
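The S2 figures quoted for Sgr A* mix light-hours, AU, and solar masses; a quick sanity check, assuming standard constants to four significant figures, converts them to SI units:

```python
# Convert the quoted S2 orbit numbers for Sgr A* into SI units.
C = 2.998e8       # speed of light, m/s
AU = 1.496e11     # astronomical unit, m
M_SUN = 1.989e30  # solar mass, kg

pericenter_m = 17 * 3600 * C      # 17 light-hours in meters
print(pericenter_m)               # ~1.8e13 m
print(pericenter_m / AU)          # ~120 AU

mass_kg = 4.1e6 * M_SUN           # 4.1 million solar masses
print(mass_kg)                    # ~8.2e36 kg
```

Both conversions reproduce the figures given in the list above, which is a useful consistency check on the reconstructed numbers.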
NGC 4395. In these galaxies, the mean square (or rms) velocities of the stars or gas rises proportionally to 1/r near the center, indicating a central point mass. In all other galaxies observed to date, the rms velocities are flat, or even falling, toward the center, making it impossible to state with certainty that a supermassive black hole is present. Nevertheless, it is commonly accepted that the center of nearly every galaxy contains a supermassive black hole. The reason for this assumption is the M-sigma relation, a tight (low scatter) relation between the mass of the hole in the 10 or so galaxies with secure detections, and the velocity dispersion of the stars in the bulges of those galaxies. This correlation, although based on just a handful of galaxies, suggests to many astronomers a strong connection between the formation of the black hole and the galaxy itself.

The nearby Andromeda Galaxy, 2.5 million light-years away, contains a (1.1–2.3)×10^8 (110–230 million) M☉ central black hole, significantly larger than the Milky Way's. The largest supermassive black hole in the Milky Way's vicinity appears to be that of M87, at a mass of (6.4±0.5)×10^9 (c. 6.4 billion) M☉ at a distance of 53.5 million light-years. On December 5, 2011, astronomers discovered the largest supermassive black hole yet found in the universe, that of the supergiant elliptical galaxy NGC 4889, with a mass of 2.1×10^10 (21 billion) M☉ at a distance of 336 million light-years away in the Coma Berenices constellation.

Black holes in distant, highly luminous quasars are much larger. The hyperluminous quasar APM 08279+5255 has a supermassive black hole with a mass of 2.3×10^10 (23 billion) M☉. Larger still is that at another hyperluminous quasar, S5 0014+81, one of the largest supermassive black holes yet found, which has a mass of 4.0×10^10 (40 billion) M☉, or 10,000 times the size of the black hole at the Milky Way Galactic Center. Both quasars are 12.1 billion light-years away. The most massive black hole ever discovered, TON 618, weighs in at 6.6×10^10 (66 billion) M☉. It is located 10.4 billion light-years away from us.

Some galaxies, such as the galaxy 4C +37.11, appear to have two supermassive black holes at their centers, forming a binary system. If they collided, the event would create strong gravitational waves. Binary supermassive black holes are believed to be a common consequence of galactic mergers. The binary pair in OJ 287, 3.5 billion light-years away, contains the most massive black hole in a pair, with a mass estimated at 18 billion M☉. A supermassive black hole was recently discovered in the dwarf galaxy Henize 2-10, which has no bulge. The precise implications of this discovery for black hole formation are unknown, but it may indicate that black holes formed before bulges.

On March 28, 2011, a supermassive black hole was seen tearing a mid-size star apart. That is the only likely explanation of the observations that day of sudden X-ray radiation and the follow-up broad-band observations. The source was previously an inactive galactic nucleus, and from study of the outburst the galactic nucleus is estimated to be a SMBH with mass of the order of a million solar masses. This rare event is assumed to be a relativistic outflow (material being emitted in a jet at a significant fraction of the speed of light) from a star tidally disrupted by the SMBH. A significant fraction of a solar mass of material is expected to have accreted onto the SMBH. Subsequent long-term observation will allow this assumption to be confirmed if the emission from the jet decays at the expected rate for mass accretion onto a SMBH.

In 2012, astronomers reported an unusually large mass of approximately 17 billion M☉ for the black hole in the compact, lenticular galaxy NGC 1277, which lies 220 million light-years away in the constellation Perseus.
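The M-sigma relation mentioned above is usually written as a power law in the bulge velocity dispersion. A minimal sketch; the fit coefficients (intercept ≈ 8.12, slope ≈ 4.24, pivot at 200 km/s) are assumptions taken from one published calibration, and other calibrations differ somewhat:

```python
import math

def m_sigma_mass(sigma_kms, intercept=8.12, slope=4.24):
    """Predicted black-hole mass in solar masses from the bulge velocity
    dispersion sigma (km/s): log10(M) = intercept + slope*log10(sigma/200).
    Coefficients are illustrative, not a definitive fit."""
    return 10 ** (intercept + slope * math.log10(sigma_kms / 200.0))

# A bulge with sigma = 200 km/s predicts a ~1.3e8 M_sun black hole;
# doubling sigma raises the prediction by about 2**4.24 ~ 19x, which is
# why the relation is called "tight": small dispersion changes map to
# large, but predictable, mass changes.
print(m_sigma_mass(200.0))                          # ~1.3e8
print(m_sigma_mass(400.0) / m_sigma_mass(200.0))    # ~19
```

The steep slope is the reason a handful of secure detections suffices to anchor the relation across several decades of black-hole mass.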
The putative black hole has approximately 59 percent of the mass of the bulge of this lenticular galaxy (14 percent of the total stellar mass of the galaxy). Another study reached a very different conclusion: this black hole is not particularly overmassive, estimated at between 2 and 5 billion M☉ with 5 billion M☉ being the most likely value. On 28 February 2013 astronomers reported on the use of the NuSTAR satellite to accurately measure the spin of a supermassive black hole for the first time, in NGC 1365, reporting that the event horizon was spinning at almost the speed of light. In September 2014, data from different X-ray telescopes has shown that the extremely small, dense, ultracompact dwarf galaxy M60-UCD1 hosts a 20 million solar mass black hole at its center, accounting for more than 10% of the total mass of the galaxy. The discovery is quite surprising, since the black hole is five times more massive than the Milky Way's black hole despite the galaxy being less than five-thousandths the mass of the Milky Way. Some galaxies, however, lack any supermassive black holes in their centers. Although most galaxies with no supermassive black holes are very small, dwarf galaxies, one discovery remains mysterious: The supergiant elliptical cD galaxy A2261-BCG has not been found to contain an active supermassive black hole, despite the galaxy being one of the largest galaxies known; ten times the size and one thousand times the mass of the Milky Way. Since a supermassive black hole will only be visible while it is accreting, a supermassive black hole can be nearly invisible, except in its effects on stellar orbits. In December 2017, astronomers reported the detection of the most distant quasar currently known, ULAS J1342+0928, containing the most distant supermassive black hole, at a reported redshift of z = 7.54, surpassing the redshift of 7 for the previously known most distant quasar ULAS J1120+0641.
UNsolving Quadratic Equations & Inequalities - Judah L Schwartz

The solution of a quadratic equation is a pair of numbers - let's call them a and b [assume a smaller than or equal to b]. The solution of a quadratic inequality is EITHER all the numbers between a and b, OR all the numbers less than a AND all the numbers greater than b. To UNsolve a quadratic equation or inequality, drag the GOLD dots in the left panel to fix the solution set. The right-hand panel will show you a quadratic equation or inequality that has that solution set. You can drag the WHITE dots in the right-hand panel to see other quadratic equations or inequalities that have the same solution set. The GREEN dot and the BLUE dot each control one function - each of the WHITE dots controls both functions. Why is it permissible to change only one function in an equation or inequality that is a comparison of two functions? For a given solution set, how many equations or inequalities are there that have that solution set? How do you know? Can you prove it?
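The closing question — how many equations share one solution set — can be explored numerically. A minimal sketch, not part of the applet itself: for fixed roots a ≤ b, every f(x) = k(x − a)(x − b) with k ≠ 0 vanishes exactly at a and b, so infinitely many quadratics share one solution set.

```python
# Infinitely many quadratics share the solution set {a, b}:
# f(x) = k*(x - a)*(x - b) has exactly those roots for every k != 0.
def make_quadratic(a, b, k):
    """Return f(x) = k*(x - a)*(x - b)."""
    return lambda x: k * (x - a) * (x - b)

a, b = -1.0, 3.0
for k in (1.0, 2.5, -4.0):
    g = make_quadratic(a, b, k)
    print(g(a), g(b))   # both 0.0 for every k: the same equation solution set

# For k > 0, the inequality f(x) < 0 holds exactly for the numbers between
# a and b, and f(x) > 0 holds on the two outer rays (x < a or x > b).
f = make_quadratic(a, b, 2.0)
print(f(1.0) < 0, f(-2.0) > 0, f(5.0) > 0)   # True True True
```

This also hints at the answer to "how do you know": the roots pin down the quadratic only up to the nonzero scale factor k (and, for inequalities, up to the sign of k).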
ERIC Number: ED071895
Record Type: Non-Journal
Publication Date: 1968
Reference Count: N/A

Project Physics Text 3, The Triumph of Mechanics.
Harvard Univ., Cambridge, MA. Harvard Project Physics.

Mechanical theories are presented in this unit of the Project Physics text for senior high students. Collisions, Newton's laws, isolated systems, and Leibniz' concept are discussed, leading to conservation of mass and momentum. Energy conservation is analyzed in terms of mechanical energy, heat energy, steam engines, Watt's engine, Joule's experiment, and energy in biological systems. Kinetic theory of gases is studied in connection with molecular sizes and speeds, ideal gas, second thermodynamic law, statistical representations, time's arrow, and recurrence paradox. Wave models are introduced to deal with the superposition principle, sound properties, and wave interference, diffraction, reflection, and refraction. Historical developments are stressed in the description of this unit. Included is a chart of renowned people's life spans from 1700 to 1850. Besides illustrations for explanation use, problems with their answers are also provided in two categories: study guide and end of section questions. The work of Harvard Project Physics has been financially supported by: the Carnegie Corporation of New York, the Ford Foundation, the National Science Foundation, the Alfred P. Sloan Foundation, the United States Office of Education, and Harvard University. (CC)

Publication Type: N/A
Education Level: N/A
Sponsor: Office of Education (DHEW), Washington, DC. Bureau of Research.
Authoring Institution: Harvard Univ., Cambridge, MA. Harvard Project Physics.
Master of Science (M.Sc.)
John P Swaddle

Bird strike is the often fatal collision between a bird and a surface, such as a window or tower. Collisions kill millions of birds each year in the US alone, and cost industries millions of dollars per year. As more buildings, wind turbines, communication towers and other structures are built, bird strikes and their associated costs are predicted to increase. Researchers have explored mitigative measures to alleviate bird strikes, but to date none have solved this growing problem. Recent research suggests that current technologies fail because their design does not take into account birds' sensory ecology, including habituation to loud sounds and the fact that some species may lack the ability to effectively see visual deterrents while flying. In this study we explored an acoustic mitigative measure against bird strike. Our goal was to use directional sound as an instrument to warn flying birds of an upcoming visible barrier in their flight path. We hypothesized that when birds experienced a strong sound field (80 dB SPL) in the presence of a visible mist net, they would increase their body and tail angles of attack, enabling them to slow down. Our results show that when flying zebra finches (Taeniopygia guttata) encountered a loud sound field in front of a visible barrier, they slowed their flight (relative to a control flight) by approximately 25% and simultaneously increased their body and tail angles of attack by 25° and 50°, respectively. This alteration of velocity and flight posture will likely increase birds' capacity to maneuver, due to increased tail drag and improved tail lift, and potentially afford individuals more time to initiate avoidance maneuvers. Collectively, our results support the conclusion that a conspicuous sound can decrease birds' risk of striking a static surface or object.
Our study suggests that emitting sound in front of windows, wind turbines, power lines, as well as cell, radio and communication towers could decrease bird strikes and associated damage and costs.

© The Author

Ingrassia, Nicole, "Does Sound Help Prevent Birds From Flying Into Objects?" (2016). Dissertations, Theses, and Masters Projects. Paper 1477068550.
Global Ocean De-Oxygenation Quantified

The first in-depth study on the observed global ocean oxygen content was just published by Kiel scientists in "Nature"

Photo to download: www.uni-kiel.de/download/pm/2017/2017-043-1.jpg

Ongoing global change causes rising ocean temperatures and changes the ocean circulation. Therefore less oxygen is dissolved in surface waters and less oxygen is transported into the deep sea. This reduction of the oceanic oxygen supply has major consequences for the organisms in the ocean. In the international journal Nature, oceanographers of GEOMAR Helmholtz Centre for Ocean Research Kiel have now published the most comprehensive analysis so far of oxygen loss in the world's oceans and its causes.

Oxygen is an essential necessity of life on land. The same applies for almost all organisms in the ocean. However, the oxygen supply in the oceans is threatened by global warming in two ways: warmer surface waters take up less oxygen than colder waters, and warmer water stabilizes the stratification of the ocean. This weakens the circulation connecting the surface with the deep ocean, so less oxygen is transported into the deep sea. Therefore, many models predict a decrease in the global oceanic oxygen inventory due to global warming. The first global evaluation of millions of oxygen measurements seems to confirm this trend and points to first impacts of global change.

In the renowned scientific journal Nature, the oceanographers Dr. Sunke Schmidtko, Dr. Lothar Stramma and Professor Martin Visbeck from GEOMAR Helmholtz Centre for Ocean Research Kiel just published the most comprehensive study on global oxygen content in the world's oceans so far. It demonstrates that the ocean's oxygen content has decreased by more than two percent over the last 50 years.
“Since large fishes in particular avoid or do not survive in areas with low oxygen content, these changes can have far-reaching biological consequences,” says Dr. Schmidtko, the lead-author of the study. The researchers used all historic oxygen data available around the world for their work, supplemented it with current measurements and refined the interpolation procedures to more accurately reconstruct the development of the oxygen budget over the past 50 years. In some areas previous research had already shown a decrease in oxygen. “To quantify trends for the entire ocean, however, was more difficult since oxygen data from remote regions and the deep ocean is sparse,” explains Dr. Schmidtko, “we were able to document the oxygen distribution and its changes for the entire ocean for the first time. These numbers are an essential prerequisite for improving forecasts for the ocean of the future.” The study also shows that, with the exception of a few regions, the oxygen content decreased throughout the entire ocean during the period investigated. The greatest loss was found in the North Pacific. “While the slight decrease of oxygen in the atmosphere is currently considered non-critical, the oxygen losses in the ocean can have far-reaching consequences because of the uneven distribution. For fisheries and coastal economies this process may have detrimental consequences,” emphasizes the co-author Dr. Lothar Stramma. “However, with measurements alone, we cannot explain all the causes,” adds Professor Martin Visbeck, “natural processes occurring on time scales of a few decades may also have contributed to the observed decrease.” However, the results of the research are consistent with most model calculations that predict a further decrease in oxygen in the oceans due to higher atmospheric carbon dioxide concentrations and consequently higher global temperatures. 
The new study is an important result for the ongoing work in the Collaborative Research Center (SFB) 754, funded by the German Research Foundation (DFG) at Kiel University and GEOMAR. The SFB 754's aim is to better understand the interaction between climate and the biogeochemistry of the tropical ocean. "From the beginning of March onwards, four expeditions aboard the German research vessel METEOR will investigate the tropical oxygen minimum zone in the eastern Pacific off Peru. We hope to obtain further data on regional development which will also help us to better understand the global trends," emphasizes Dr. Stramma, the expedition coordinator for the SFB.
Note: This study was supported by the project MIKLIP, which is funded by the German Federal Ministry of Education and Research, and by the Collaborative Research Center (SFB) 754 "Climate – Biogeochemical Interactions in the Tropical Ocean".
Schmidtko, S., L. Stramma and M. Visbeck (2017): Decline in global oxygen content during the past five decades. Nature, http://dx.doi.org/10.1038/nature21399
Jan Steffen (GEOMAR), phone: +49 431 600 2811, email@example.com
<urn:uuid:e11562fd-03a8-4863-b208-a404c2dcfc35>
3.0625
1,163
News (Org.)
Science & Tech.
41.259818
95,559,138
Researchers develop new super high-resolution imaging technique Scientists from UMass Lowell and King’s College London in the U.K. have demonstrated a new way of capturing ultrasharp images of structures of extremely tiny objects measuring billionths of a meter in size. Called “interscale mixing microscopy,” or IMM, the technique can obtain details in viruses and nanoparticles much smaller than the wavelength of light. Such technology would be helpful in developing new vaccines against pathogens as well as innovative nanomaterials for industrial applications and novel pharmaceutical drugs to fight diseases. “Our research addresses a fundamental problem in the field of microscopy,” says physics Prof. Viktor Podolskiy, who is the principal investigator for the UMass Lowell team. “When an object is smaller than the wavelength of light, you cannot really resolve the object’s size, shape or structure. Our technique is designed to go beyond this so-called ‘diffraction limit.’ ” Podolskiy adds: “Sub-wavelength imaging with IMM can potentially be used to obtain the colors, or spectra, of small objects such as bacteria, viruses and nanoparticles. By knowing their color signatures we can rapidly identify and characterize the objects and determine their precise chemical composition.” The team’s findings were recently published in Optica, the prestigious journal of The Optical Society. Funding for the research was provided by the U.S. National Science Foundation and the U.K.’s Engineering and Physical Sciences Research Council, Royal Society and Wolfson Foundation. A Cost-effective Alternative to Electron Microscopes “Conventional optical microscopes, such as those found in biology classrooms and hospital labs, use lenses to bend light and form images of everything, from tissues down to dust, pollen and blood cells,” explains Podolskiy. 
“However, objects whose size is smaller than the wavelength of light cannot be seen or even detected with these optical microscopes.” He says while other imaging techniques, such as fluorescence microscopy, electron microscopy or scanning near-field microscopy, can in principle be used to assess the properties of small objects, none of these techniques is versatile or rapid enough in imaging and characterizing relatively large objects.“A scanning electron microscope [SEM] has a tip that scans the surface of an object point by point. You then record the backscattering of light from that tip to build up an image,” says Podolskiy. “For large objects, this can take a long time.” The IMM technique uses a conventional optical microscope and ingenious signal processing to decode the object’s properties based on the measurement of light that gets scattered by the object in close proximity to a special, finely ruled plate called a diffraction grating. The researchers showed that a single measurement with the grating may be enough to decipher with great precision the position, size and optical spectrum of the object. SEMs can typically resolve details down to 5 to 10 nanometers. Right now, the IMM is constrained to about 70 nanometers. “Although our technique is not yet as powerful, a brand-new scanning electron microscope can cost anywhere from hundreds of thousands of dollars to a million. The IMM can be retrofitted to older, existing research optical microscopes, thereby saving universities and companies a lot of money,” notes Christopher Roberts, a Ph.D. student in physics who conducted the project’s data processing and analysis. He adds: “Moreover, you can’t observe living cells in an electron microscope; you have to kill and prepare them first. The IMM, in principle, can be used to observe live specimens in real time. Our technique could pave the way for the next generation of optical microscopy and nanoscale spectroscopy.”
<urn:uuid:673f4245-bff0-47be-a666-ee464fd4a2fa>
3.53125
811
News (Org.)
Science & Tech.
33.416814
95,559,142
Scientists are sceptical political leaders can meet climate goals The scientific studies suggest that every year that goes by without global emissions peaking would require larger pollution cuts in the future New York: Climate negotiators inserted a dramatic charge in the 2015 Paris accord, asking world leaders to strive to keep global temperatures at just 1.5 degrees Celsius above pre-industrial levels. Now new studies have begun to sketch out what the tighter target — compared to the longtime benchmark goal of 2 degrees (3.6 degrees Fahrenheit) — actually means. Their overall message to climate envoys meeting in Bonn, Germany this week: Better get cracking. “We would need an incredibly dramatic reduction in emissions in the very near future,” said Zeke Hausfather, a climate scientist with Berkeley Earth. He called the 1.5 degree target “a little ridiculous and implausible.” The scientific studies suggest that every year that goes by without global emissions peaking would require larger pollution cuts in the future. As it stands, the world has “room” left in the atmosphere for less than 20 years of emissions at current rates. An essay Hausfather published on the website CarbonBrief estimates that if emissions peak in 2020, then by 2030, the carbon-emissions rate will have to drop by 9% a year. If the peak had come in 1995, required cuts in 2030 would have been just 2% — and off a much lower baseline. But emissions are still rising. In 2017, they’re expected to go up by 2%, according to researchers in the Global Carbon Project. That’s much lower than rates seen in the early 21st century, but still the wrong direction. Already, even the most optimistic scenarios can’t hit a 2-degree goal without assuming that whiz-bang future technology will emerge to pull carbon dioxide out of the air. The climate models tend to show that it’s unrealistic to reduce pollution by more than 5% or 6% a year, Hausfather said. 
To get around that sticking point, the models build in “negative emissions” later in the century -- perhaps the most significant TBD of all time. “The idea that we’re going to depend on this largely unknown technology to get us to these targets is a little worrisome,” Hausfather said. The world has already warmed by about 1 degree Celsius since the end of the 19th century, and there’s momentum in the system. In a thought experiment, authors of a new US National Climate Assessment found that if atmospheric carbon dioxide stayed at its current level, that would lock in another 0.6 degrees of warming. Another study found that if all emissions magically stopped, the planet would eventually warm between 1.1 degrees to 1.5 degrees. Veerabhadran Ramanathan of the Scripps Institution of Oceanography in September envisioned an aggressive scenario in which nations yank hard on three main “levers” —zeroing out carbon emissions, slashing other greenhouse gases that don’t hang in the air as long, and deploying machines that suck carbon out of smokestacks and stick it underground. In that scenario, efforts to “bend the warming curve to a cooling trend” should begin by 2020. Negative emissions would come later. “Since 2020 is just a few years away, this is a highly optimistic option,” they write, with an understatement characteristic of the climate scientists. A June study in Earth’s Future led by Xuanming Su of Japan’s National Institute for Environmental Studies concluded that temperatures could stabilize below 1.5 degrees, after shooting past it for a brief time, with immediate action including a tripling of carbon prices and doubling of funds for preventing emissions above what would be needed to meet the 2 degree target. 
The most forgiving of the studies came in September, when Richard Millar of the University of Oxford and colleagues reported findings that the carbon budget — a gauge of how much humanity can pump out before it enters the danger zone — may be bigger than previously believed. While peers criticized the paper for making optimistic assumptions, the paper itself demonstrates that fixing the climate won’t be a cakewalk. The Paris goal “is not chasing a geophysical impossibility, but is likely to require a significant strengthening” of national commitments, Millar and his co-authors wrote, suggesting “sustained reductions at historically unprecedented rates after 2030.” The biggest challenge for the scientists may be that the largest source of uncertainty has less to do with the thermal physics of the Earth, or the melt rate of glaciers, than with the actions of the very people who have asked them to study the problem. A study led by Swiss scientist Reto Knutti assigns the greatest unpredictability to politicians. “Current and proposed” policies under the Paris Agreement “are inconsistent with what would be required for the 1.5 degree or 2 degree target, and even these are politically difficult,” Knutti and his co-authors write in a study released in September. They brush aside the thought that to make policy more work needs to be done on assessing temperature paths, saying more precision “is not necessary for eliminating those roadblocks.” What to do about climate change is so self-evident at this point that non-scientists needn’t even consider degree targets, said Kate Marvel, a climate researcher affiliated with Nasa and Columbia University in New York. “Things that make us not put as much carbon dioxide in the air are good, and things that make us put more carbon dioxide in the air are bad,” Marvel said. 
“If you want to think about this in a binary sense, don’t use 2 degrees/not-2 degrees.” Bloomberg
<urn:uuid:32c2afed-a271-42f4-9c6f-7c421d299ba7>
3.3125
1,368
Truncated
Science & Tech.
40.774907
95,559,204
However, the paleontologists Malvina Lak and her colleagues from the University of Rennes and the ESRF paleontologist Paul Tafforeau, together with the National Museum of Natural History of Paris, have applied to opaque amber a synchrotron X-ray imaging technique known as propagation phase contrast microradiography. It sheds light on the interior of this dark amber, which resembles a stone to the human eye. "Researchers have tried to study this kind of amber for many years with little or no success. This is the first time that we can actually discover and study the fossils it contains", says Paul Tafforeau.

The scientists imaged 640 pieces of amber from the Charentes region in southwestern France. They discovered 356 fossil animals, ranging from wasps and flies to ants, and even spiders and mites. The team was able to identify the family of 53% of the inclusions. Most of the organisms discovered are tiny. For example, one of the discovered mites measures 0.8 mm and a fossil wasp is only 4 mm. "The small size of the organisms is probably due to the fact that bigger animals would be able to escape from the resin before getting stuck, whereas little ones would be captured more easily", explains Malvina Lak.

Water to see tiny fossils better
The surface features of amber pieces, like cracks, stand out more in the images than the fossil organisms in the interior when using synchrotron radiation. In order to solve this problem, the scientists soaked the amber pieces in water before the experiment. Because water and amber have very similar densities, immersion made the outlines of the amber pieces and the cracks almost invisible. At the same time, it increased overall inclusion visibility, leading to better detection and characterization of the fossils.

Classification of species
Once discovered on the radiographs, some of the organisms were imaged in three dimensions and virtually extracted from the resin.
The high quality of these 3D reconstructions enables paleontologists to precisely study and describe the organisms. The success of this experiment shows the high value of the ESRF for the study of fossils. "Opaque amber hosts many aspects of past life on our planet that are still unknown, and the use of third generation synchrotron sources will continue to play an important role in unveiling them", asserts Malvina Lak.
M. Lak, D. Néraudeau, A. Nel, P. Cloetens, V. Perrichot and P. Tafforeau, Phase Contrast X-ray Synchrotron Imaging: Opening Access to Fossil Inclusions in Opaque Amber, Microscopy and Microanalysis, Forthcoming article doi:10.1017/S1431927608080264.
Montserrat Capellas | EurekAlert!
<urn:uuid:ed66ca74-5584-460c-8950-36787ea44c49>
3.546875
1,162
Knowledge Article
Science & Tech.
40.186586
95,559,233
The food-poisoning bacterium C. jejuni is one of the major causes of gastroenteritis in humans, causing diarrhoea, stomach cramps and in rare cases a nervous condition called Guillain-Barré syndrome. Humans are commonly infected by eating undercooked poultry meat, which is contaminated during processing of the chickens. Surprisingly, the Campylobacter bacterium is commonly carried in the gut of birds without causing disease in the birds. Like many bacteria, C. jejuni is able to avoid our body’s defences by altering the nature and content of its surface. These alterations are achieved by having regions of the bacterial chromosome that are able to make small random variations, resulting in different surface structures. Genomic variability has been a problem for researchers investigating C. jejuni, since it potentially also causes differences between laboratories and even between experiments. In a project funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and Intervet, the Campylobacter group at IFR has determined and analysed the complete genome sequence of Campylobacter jejuni strain 81116 (also known as NCTC11828). This strain was selected because of its previously reported genomic stability over time. The genome sequence reported by IFR and Intervet is 1,628,114 bases in length and notable for having fewer of the variable regions than the previously reported C. jejuni sequences. Strain 81116 is widely studied as it is amenable to genetic alterations, and grows well in poultry allowing this important natural reservoir to be studied. 
Thus the reported sequence will provide useful information for Campylobacter researchers worldwide, and is predicted to be a valuable resource for the research community.
Zoe Dunford | alfa
<urn:uuid:e7883d04-7edf-45e8-9b89-dc939a92c466>
3.375
951
Content Listing
Science & Tech.
32.455212
95,559,240
Special Issue "Remote Sensing in Coastal Zone Monitoring and Management—How Can Remote Sensing Challenge the Broad Spectrum of Temporal and Spatial Scales in Coastal Zone Dynamic?"
Deadline for manuscript submissions: 1 September 2018

Guest Editors:
Dr. David Doxaran, Laboratoire d'Océanographie de Villefranche, UMR 7093 - CNRS / UPMC, France. Interests: ocean colour remote sensing; optical properties of turbid estuarine and coastal waters; bio-optical modelling; atmospheric corrections; river plumes; sediment transport modelling
Dr. Ana Ines Dogliotti
Dr. Tim J Malthus, Coastal Sensing and Modelling Group - Coastal Development and Management Program - CSIRO Oceans and Atmosphere Business Unit, Australia. Interests: coastal management; field spectroscopy; airborne and satellite Earth observations data; management of land and water resources

Coastal zones are sensitive areas responding at various scales (events to long-term trends) where the monitoring and management of physico-chemical, biological, morphological processes, and fluxes are highly challenging. They are directly affected by anthropization (urbanization, industrialization, agri- and aquaculture) and climate change (e.g., river discharges, waves, sea-level rise). Coastal waters only represent 15% of the global ocean, but concentrate 90% of commercial fisheries, contribute to 25% of global biological productivity, and represent 80% of the marine biodiversity, while being associated with an intensive tourism-related economy. The monitoring and management of coastal zones requires past, present, and future observations adapted to quite diverse and dynamic environments.
To complement field measurements, the use of remote sensing data provides useful information to map the hydromorphological (freshwater discharge, currents, shoreline evolution), physico-chemical (water transparency, temperature, salinity, oxygen, nutrients, and pollutants), and biological (habitats, phytoplankton blooms) properties of the coastal zones. This Special Issue will highlight how remote sensing can tackle the monitoring of nearshore dynamics thanks to recent progress made in terms of sensors’ radiometric, spatial, and temporal resolutions, together with new data processing methods, products, and applications.

We are inviting submissions including, but not limited to:
- high spatial and high temporal resolution remote sensing observations,
- atmospheric correction in optically complex waters,
- synergetic use of multi-mission remote sensing datasets,
- techniques for assessing change in the coastal zone,
- dredging activities,
- mangrove systems,
- coastal geomorphology and change,
- turbidity evolution in coastal waters,
- monitoring changes in river discharge,
- beach morphology evolution,
- mapping submerged aquatic vegetation,
- change dynamic in coastal marshes,
- coastal urbanization trends.

Dr. Javier Bustamante
Dr. Ana Ines Dogliotti
Dr. Tim J Malthus
Dr. Nadia Senechal

Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access monthly journal published by MDPI. Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions. - coastal zones - remote sensing - river plumes - optically complex waters
<urn:uuid:4ab6be04-7f68-4442-a4e9-175473104ae6>
2.703125
939
Content Listing
Science & Tech.
11.631081
95,559,249
Space Storms and Space Weather Hazards - NATO Science Series II 38 (Paperback)
I.A. Daglis (editor)

Space storms, the manifestation of bad weather in space, have a number of physical effects in the near-Earth environment: acceleration of charged particles in space, intensification of electric currents in space and on the ground, impressive aurora displays, and global magnetic disturbances on the Earth's surface. Space weather has been defined as `conditions on the Sun and in the solar wind, magnetosphere, ionosphere, and atmosphere that can influence the performance and reliability of space- and ground-based technological systems and can endanger human life'. The 19 chapters of this book, written by some of the foremost experts on the topic, present the most recent developments in space storm physics and related technological issues, such as malfunction of satellites, communication and navigation systems, and electric power distribution grids.

Readership: researchers, teachers and graduate students in space physics, astronomy, geomagnetism, space technology, electric power and communication technology, and non-specialist physicists and engineers. As recommended in the United Nations Space & Atmospheric Science Education Curriculum booklet. Please find it amongst classics such as T.J.M. Boyd, J.J. Sanderson, J.K. Hargreaves and M.C. Kelly etc.

Publisher: Springer-Verlag New York Inc.
Number of pages: 482
Weight: 833 g
Dimensions: 240 x 160 x 25 mm
Edition: Softcover reprint of the original 1st ed. 200
<urn:uuid:48a83bf9-f8fd-4e53-b62d-9b29f230aa27>
2.546875
349
Product Page
Science & Tech.
43.474756
95,559,254
Development is challenged, at least until 2050, by strong population growth, more severe environmental strains, growing mobility, and dwindling energy resources. All these factors will lead to serious consequences for humankind. Inadequate agricultural resources, water supply and non-renewable energy sources, epidemics, climate change, and natural disasters will further heavily impact human life. The European Space Policy Institute (ESPI) sheds a new light on threats, risks and sustainability by combining approaches from various disciplines. It analyzes what could be the contribution of space tools to predict, manage and mitigate those threats. It aims at demonstrating that space is not a niche but has become an overarching tool in solving today's problems.

Publisher: Springer Verlag GmbH
Number of pages: 321
Weight: 689 g
Dimensions: 235 x 155 x 19 mm
Edition: 2009 ed.
<urn:uuid:e91765c6-84f7-4cbf-8c04-1e9821936f5a>
3.109375
190
Product Page
Science & Tech.
37.881753
95,559,255
The Monsoon Rainfall Manipulation Experiment (MRME) aims to understand changes in ecosystem structure and function of a semiarid grassland caused by increased precipitation variability, which alters the pulses of soil moisture that drive primary productivity, community composition, and ecosystem functioning. The overarching hypothesis being tested is that changes in event size and variability will alter grassland productivity, ecosystem processes, and plant community dynamics. In particular, we predict that many small events will increase soil CO2 effluxes by stimulating microbial processes but not plant growth, whereas a small number of large events will increase aboveground NPP and soil respiration by providing sufficient deep soil moisture to sustain plant growth for longer periods of time during the summer monsoon.

Additional Study Area Information
Study Area Name: Monsoon site
Study Area Location: Monsoon site is located just north of the grassland Drought plots
Vegetation: dominated by black grama (Bouteloua eriopoda); other highly prevalent grasses include Sporobolus contractus, S. cryptandrus, S. flexuosus, Muhlenbergia arenicola and Bouteloua gracilis.
North Coordinate: 34.20143
South Coordinate: 34.20143
East Coordinate: 106.41489
West Coordinate: 106.41489
Additional Information on the Data Collection Period
See all Sevilleta Publications
<urn:uuid:3aac88c1-5f87-427e-9cfd-8fa448a216b0>
2.78125
290
Academic Writing
Science & Tech.
10.453
95,559,258
Working with other processes and programs
by vroom (Pope) on Jan 04, 2000 at 23:31 UTC

Perl makes it very easy to interface with other programs. In this tutorial you'll learn how to do some very basic things: writing to another program, reading the output from another program, and simply running an outside program.

The easiest and most often used way to run a program is the system function. When system is called, a child process is created and executed; once it has finished, control returns to the parent (your script), which continues with its execution. For example, if you had an image-processing script you might want to allow a user to view the resulting image at some point. You could do this with a call to system. When the system call is made, it launches the program, in this case xv. The execution of our program stops until we have closed xv. After we close xv, our Perl script continues to run the code after the system statement.

Now let's say you want to collect the output from a program and do something with it. There are at least two ways to do this. One is with backticks (usually on the key to the left of your 1 key), which allow you to collect all of the output from a program into a variable. Another way is to use open with a pipe. Basically this works the same as working with a filehandle that you're reading from. The open call returns the process id of the process it spawns. Then you just read from the handle with the <> operator and close it when you're finished.

If you think about how you write to files you can probably guess how you write to processes. All you have to do is open the process with the pipe on the left side, and then handle it like you would handle printing to a file.

If you want to read from and write to the same process, take a look at IPC::Open2; if you want to handle STDERR in addition to that, check out IPC::Open3.
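Put together, the calls described above might look like this in practice (a sketch; xv, the image name, and the commands run here are placeholders, not from the original node):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run an external program and wait for it to finish.
system("xv", "picture.gif");

# Backticks: collect all of a program's output into one variable.
my $dir_listing = `ls -l`;
print length($dir_listing), " bytes of output collected\n";

# Reading from a process: open with the pipe on the right side.
# open returns the pid of the child it spawns.
open(my $ps_fh, "ps aux |") or die "can't fork: $!";
while (my $line = <$ps_fh>) {
    # ...do something with each line of ps output...
}
close($ps_fh);

# Writing to a process: open with the pipe on the left side,
# then print to it just like a filehandle.
open(my $sort_fh, "| sort") or die "can't fork: $!";
print $sort_fh "pearls\nbefore\nswine\n";
close($sort_fh);    # sorted lines appear on STDOUT
```
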
<urn:uuid:b219e7b8-1c04-4a9e-9ede-662ac8ec0cce>
3.421875
453
Comment Section
Software Dev.
65.451261
95,559,268
Authors: Harry Watson
We make the following Ansatz for the mass ratio of the neutron to the electron:
m_n/m_e ≈ (4π)(4π − 1/π)(4π − 2/π) + ln(4π) = 1838.682763,
where m_n is the neutron rest mass and m_e is the electron rest mass. The CODATA value is 1838.68366158. The neutron decays into a proton and an electron. If ln(4π) is the neutron-proton mass difference (in units of the electron mass), then m_p/m_e = (4π)(4π − 1/π)(4π − 2/π), where m_p is the proton rest mass.
email@example.com
Comments: 2 Pages.
[v1] 2017-12-24 19:34:26
Unique-IP document downloads: 22 times
Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
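The arithmetic in the Ansatz is easy to verify numerically (a quick sketch, using only the expression quoted above):

```python
import math

# (4*pi)(4*pi - 1/pi)(4*pi - 2/pi) + ln(4*pi), as stated in the note
four_pi = 4 * math.pi
ansatz = four_pi * (four_pi - 1 / math.pi) * (four_pi - 2 / math.pi) + math.log(four_pi)
codata = 1838.68366158  # CODATA neutron-to-electron mass ratio quoted in the note

print(ansatz)           # ≈ 1838.68276, matching the quoted value
print(codata - ansatz)  # ≈ 0.0009, the residual mismatch with CODATA
```
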
<urn:uuid:e8db5cfe-4219-40e0-9289-93c4c5281879>
2.71875
339
Academic Writing
Science & Tech.
65.849563
95,559,271
Please use this identifier to cite or link to this item:
Physics of drained paddy soils
|uhm_phd_7004306_r.pdf||Version for non-UH users. Copying/Printing is not permitted||4.5 MB||Adobe PDF||View/Open|
|uhm_phd_7004306_uh.pdf||Version for UH users||4.44 MB||Adobe PDF||View/Open|
|Title:||Physics of drained paddy soils|
|Authors:||Briones, Aurelio Aguila|
|Abstract:||One way in which the tropical world can surpass the temperate regions in food and fiber production is to employ a system of continuous cropping. In tropical Asia, for example, large areas of paddy land remain idle during the dry season owing to a lack of water and the inability of the farmers to obtain adequate soil tilth. When the water resources of these areas are fully developed, soil tilth will become a limiting factor for implementing a system of continuous cropping. In Southeast Asia alone, 27 to 54 million hectares of additional land can be planted to crops if the problems of water and soil tilth can be solved. This dissertation concerns itself with the problem of regenerating soil structure in paddy soils so that crops other than rice can be grown on these lands. It focuses its attention on the problem of obtaining adequate tilth so that non-paddy crops might be grown as a second, third or even fourth crop on an annual basis. In this study paddy soils are viewed as rheological bodies which behave viscously in the puddled and saturated state, plastically in the moist state, and elastically in the driest state. Since this dissertation concerns itself with the physics of drained paddy soils, the Hookean or elastic model finds the widest application. Procedures for obtaining elastic constants from sound velocity measurements are described. Physical models are employed to describe shrinking, strengthening and ultimate cracking in drying paddy soils. An attempt is made to explain how and why soil material breaks down into aggregates.
Stress-strain relations in drying paddy soils are discussed, and the resultant rupture at critical stresses is described by several failure criteria. Lastly, the structural regenerative capacity of a paddy soil is predicted on the basis of a number of soil physical parameters.|
Bibliography: leaves 181-188. xiv, 188 l. illus., tables
|Rights:||All UHM dissertations and theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission from the copyright owner.|
|Appears in Collections:||Ph.D. - Soil Science|
Items in ScholarSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
<urn:uuid:cc9c3888-ad5c-4017-b906-6c59cc099e7b>
2.703125
613
Academic Writing
Science & Tech.
44.389423
95,559,275
"The New Science of Strong Materials" by J.E. Gordon: A Review This book looks at the science of materials. As Professor Gordon explains in his introduction this science is concerned with problems such as "Why do things break? Why do materials have any strength at all? Why are some solids stronger than others? Why is steel tough and why is glass brittle? Why does wood split? What do we really mean by 'strength' and ' toughness ' and ' brittleness ' ." These problems, so stated, may appear simple, easily solved by common sense; the task of material science, however is to look for universal theories to explain the behaviour of materials, not to rely upon 'rule of thumb' methods, and to criticise and improve upon what may appear, at first sight, to be intuitively obvious. For instance, if you have a hole in a material, such as a hole in a sheet of metal, then this causes a local stress which weakens the metal around that point. Intuition may suggest that the stress is related to the size of the hole, but intuition would be wrong. In fact, as Professor Gordon points out, "the increase in local stress, which can be calculated, depends solely upon the shape of the hole and has nothing at all to do with its size.. The root cause of the Comet aircraft disasters was a rivet hole perhaps an eighth of an inch in diameter." It is interesting how many things come under the purview of materials science. Professor Gordon takes the reader on a guided tour which looks at architecture, submarines, aeroplanes, timber, glue, metals and plastics. As well as considering the aspect of materials science involved, his narrative is enlivened by historical sketches of the construction methods, explaining why some constructs succeeded, and other failed, often with catastrophic consequences. Professor Gordon is not without a sense of humour, and this book contains many apposite anecdotes. My personal favourite is the comment made on Pliny the Elder. 
"Pliny the Elder", he writes, "gives directions for distinguishing a genuine diamond. It should be put, he says, on a blacksmith's anvil and smitten with a heavy hammer as hard as possible. If it breaks it is not a true diamond. It is likely that a good many valuable stones were destroyed in this way because Pliny was muddling up hardness and toughness." This is not only an amusing story, it also illustrates, with great clarity, the difference between hardness and toughness; such clarity is a characteristic of Professor Gordon's approach. After reading this book, the reader will understand how a good engineer must consider both forces and materials. As an example, consider the building of a dam. The engineer must not only consider forces on the dam wall, and forces exerted by the dam wall (a problem in statics), but also the structure of the material involved, and how well it can stand up to stress. One dam wall may have the same size and weight as another, but it is a fallacy to assume that it must, therefore, be as reliable. Only precise measurements on a sample of the materials used can enable the engineer to calculate this, which is why specifications given are so important. This is an interesting and informative book. It contains a very small amount of elementary algebra, but nothing that is beyond the reach of anyone of moderate intelligence, and that can be skipped by those not mathematically inclined. SCRUTINY is BACK - but where's Charlie, Richard and Kate ? - The new States term really got under way today 17 July 2018 with the first open to the public scrutiny hearing. But although there is a new smart table in ... 1 day ago
<urn:uuid:7714126e-fd98-4c18-91a5-83ca8a77381c>
3.375
760
User Review
Science & Tech.
51.524097
95,559,299
Substructural type system Substructural type systems are a family of type systems analogous to substructural logics where one or more of the structural rules are absent or only allowed under controlled circumstances. Such systems are useful for constraining access to system resources such as files, locks and memory by keeping track of changes of state that occur and preventing invalid states. Different substructural type systems - Linear type systems (allow exchange, not weakening or contraction): Every variable is used exactly once. - Affine type systems (allow exchange and weakening, not contraction): Every variable is used at most once. - Relevant type systems (allow exchange and contraction, not weakening): Every variable is used at least once. - Ordered type systems (discard exchange, contraction and weakening): Every variable is used exactly once in the order it was introduced. The explanation for affine type systems is best understood if rephrased as "every occurrence of a variable is used at most once". Linear type systems Linear type systems allow references but not aliases. To enforce this, a reference goes out of scope after appearing on the right-hand side of an assignment, thus ensuring that only one reference to any object exists at once. Note that passing a reference as an argument to a function is a form of assignment, as the function parameter will be assigned the value inside the function, and therefore such use of a reference also causes it to go out of scope. A linear type system is similar to C++'s unique_ptr class, which behaves like a pointer but can only be moved (i.e. not copied) in an assignment. Although the linearity constraint is checked at compile time, dereferencing an invalidated unique_ptr causes undefined behavior at run-time. The single-reference property makes linear type systems suitable as programming languages for quantum computation, as it reflects the no-cloning theorem of quantum states. 
From the category theory point of view, no-cloning is a statement that there is no diagonal functor which could duplicate states; similarly, from the combinator point of view, there is no K-combinator which can destroy states. From the lambda calculus point of view, a variable x can appear exactly once in a term. Linear type systems are the internal language of closed symmetric monoidal categories, much in the same way that simply typed lambda calculus is the language of Cartesian closed categories. More precisely, one may construct functors between the category of linear type systems and the category of closed symmetric monoidal categories. Affine type systems Affine types are a version of linear types imposing weaker constraints, corresponding to affine logic. An affine resource can only be used once, while a linear one must be used once. Relevant type system Relevant types correspond to relevant logic which allows exchange and contraction, but not weakening, which translates to every variable being used at least once. Ordered type system Ordered types correspond to noncommutative logic where exchange, contraction and weakening are discarded. This can be used to model stack-based memory allocation (contrast with linear types which can be used to model heap-based memory allocation). Without the exchange property, an object may only be used when at the top of the modelled stack, after which it is popped off, resulting in every variable being used exactly once in the order it was introduced. The following programming languages support linear or affine types:
- Walker 2002, p. X.
- Walker 2002, p. 4.
- Walker 2002, p. 6.
- Walker 2002, p. 43.
- std::unique_ptr reference
- John C. Baez and Mike Stay, "Physics, Topology, Logic and Computation: A Rosetta Stone", (2009) ArXiv 0903.0340 in New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95-174.
- S. Ambler, "First order logic in symmetric monoidal closed categories", Ph.D. thesis, U. of Edinburgh, 1991.
- Walker 2002, pp. 30–31.
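Since the article compares linear references to C++'s unique_ptr, here is a minimal sketch of that analogy (the function is ours, purely illustrative): ownership can be transferred by a move, while copying is rejected at compile time.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// unique_ptr as an approximation of an affine resource:
// ownership may be transferred (moved) but never duplicated (copied).
int consume_once() {
    std::unique_ptr<int> a = std::make_unique<int>(42);
    // std::unique_ptr<int> b = a;         // ill-formed: the copy constructor is deleted
    std::unique_ptr<int> b = std::move(a); // the single reference is transferred
    // 'a' is now guaranteed to be empty; dereferencing it would be undefined
    // behavior, which is exactly the run-time hazard the article mentions.
    return (b && !a) ? *b : -1;
}
```

Note the article's caveat in action: the compiler enforces single ownership, but using a moved-from pointer is only caught, if at all, at run time, unlike in a true linear type system.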
<urn:uuid:fe46b516-6da2-496a-b565-5fecafa1d43b>
2.9375
879
Knowledge Article
Software Dev.
36.998854
95,559,333
The obvious question to ask about the sea floor is how deep it is and why. The overall depth distribution first became known through the voyage of HMS Challenger (Fig. 1.1). We see that there are two most common depths: a shallow one near sea level (the shelf seas) and a deep one between 1 and 5 km (the normal deep ocean). The sea floor connecting shelves and deep ocean is of intermediate depth and makes up the continental slopes and rises. There is a portion of sea floor which is twice as deep as normal: such depths occur only in narrow trenches, mainly in a ring around the Pacific Ocean (Table 2.1).
Keywords: Oceanic Crust, Magnetic Anomaly, Ocean Basin, Lower Mantle, Ocean Floor
<urn:uuid:4bebd906-9d71-4bc0-887c-78d4e444f8bd>
3.921875
294
Academic Writing
Science & Tech.
58.659997
95,559,334
For the first time, researchers at CERN have found evidence for the direct decay of the Higgs boson into fermions – another strong indication that the particle discovered in 2012 behaves in the way the standard model of particle physics predicts. Researchers from the University of Zurich made a significant contribution to the study published in Nature Physics. For the first time, scientists from the CMS experiment on the Large Hadron Collider (LHC) at CERN have succeeded in finding evidence for the direct decay of the Higgs boson into fermions. Previously, the Higgs particle could only be detected through its decay into bosons. “This is a major step forwards,” explains Professor Vincenzo Chiochia from the University of Zurich’s Physics Institute, whose group was involved in analyzing the data. “We now know that the Higgs particle can decay into both bosons and fermions, which means we can exclude certain theories predicting that the Higgs particle does not couple to fermions.” As a group of elementary particles, fermions form the matter while bosons act as force carriers between fermions. According to the standard model of particle physics, the interaction strength between the fermions and the Higgs field must be proportional to their mass. “This prediction was confirmed,” says Chiochia; “a strong indication that the particle discovered in 2012 actually behaves like the Higgs particle proposed in the theory.” Combined data analysis The researchers analyzed the data gathered at the LHC between 2011 and 2012, combining the Higgs decays into bottom quarks and tau leptons, both of which belong to the fermion particle group. The results reveal that an accumulation of these decays comes about at a Higgs particle mass near 125 gigaelectron volts (GeV) and with a significance of 3.8 sigma. This means that the probability of the background alone fluctuating up by this amount or more is about one in 14,000. 
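The quoted odds can be reproduced from the one-sided Gaussian tail probability (a quick sketch; the one-sided convention is our assumption, but it matches the article's figure):

```python
import math

def sigma_to_odds(n_sigma):
    """One-sided tail probability of an n-sigma Gaussian excess,
    returned as 'one in N' odds."""
    p = 0.5 * math.erfc(n_sigma / math.sqrt(2))
    return 1.0 / p

print(round(sigma_to_odds(3.8)))  # ≈ 13800, i.e. "about one in 14,000"
print(round(sigma_to_odds(5.0)))  # ≈ 3.5 million, the discovery threshold
```
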
In particle physics, a discovery is deemed confirmed from a significance of five sigma.
Measuring the Higgs decay modes
Three different processes were studied, whereby the UZH researchers analyzed the Higgs decay into taus. Because the Higgs particle is extremely short-lived, it cannot be detected directly, but rather only via its decay products. The bottom quarks and taus, however, have a long enough lifetime to be measured directly in the CMS experiment's pixel detector.
The University of Zurich and the Large Hadron Collider
The University of Zurich is actively involved in the LHC at CERN with five experimental research groups: the groups headed by Professors Florencia Canelli, Vincenzo Chiochia and Ben Kilminster conduct research with the CMS detector, and Professor Ulrich Straumann's and Professor Nicola Serra's groups with the LHCb detector. For the analysis and interpretation of the data, they are supported by the groups of Professors Thomas Gehrmann, Stefano Pozzorini and Gino Isidori and of PD Dr. Massimiliano Grazzini.
The CMS detector at CERN
The CMS detector measures the energy and momentum of photons, electrons, muons and other charged particles with high precision. Different measuring instruments are arranged in tiers inside the 12,500-ton detector. 179 institutions worldwide are involved in the construction and operation of the CMS detector. The Swiss institutions are the University of Zurich, ETH Zurich and the Paul Scherrer Institute, which jointly developed and constructed the CMS pixel detector.
The CMS Collaboration. «Evidence for the direct decay of the 125 GeV Higgs boson to fermions», Nature Physics Online. DOI: 10.1038/nphys3005
Prof. Vincenzo Chiochia
Physics Institute of the University of Zurich
Tel. +41 22 767 60 41
Mobile: +41 76 487 57 50
University of Zurich
Tel.
+41 44 634 44 39
Bettina Jakob | Universität Zürich
<urn:uuid:c17b2b4b-78f5-4ec9-b2f0-ef8d48105e2e>
3
1,425
Content Listing
Science & Tech.
40.620955
95,559,336
- Research article - Open Access
Physicochemical study of extractants for extraction of rare earth element
© The Author(s). 2016
Received: 22 March 2016; Accepted: 21 September 2016; Published: 29 September 2016
In nuclear technology, solvent extraction is an important step in the recovery of rare earth materials, purification, radionuclide production, and the preparation of nuclear reactor materials. Tributyl phosphate (TBP) and toluene are taken as extractant and diluent, respectively, to study the extraction efficiency for a rare earth compound such as CeO2. In the present paper, ultrasonic theory is applied intensively to study the physicochemical properties of the extractant and diluent pair. The experimental values of ultrasonic velocity, density, molar volume, and viscosity are used for the computation of acoustical parameters and their excess values. The variations of the physical and acoustical parameters are discussed in terms of molecular interactions. The prepared samples are treated for extraction of CeO2 by separating the organic and aqueous phases. The recovery of this reactor material with the help of the ultrasonic technique is explained in terms of the nature and extent of the intermolecular interactions present in the binary mixture. The ultrasonic treatment provides an optimum composition of the binary mixture for the recovery of CeO2.
The reprocessing of spent nuclear fuels is one of the key processes in the nuclear fuel cycle. As the conventional method suffers from many drawbacks in the reprocessing of nuclear fuel from the viewpoint of cost and waste minimization, the development of a new, cost-effective process that minimizes waste is a great challenge for next-generation reprocessing. Liquid-liquid extraction is one of the most promising methods of separating rare earth element (REE) materials.
The repeated use of different organic solvents in this process results in severe corrosion of the equipment, and the use of large amounts of volatile organic solvents may lead to severe environmental pollution. The real picture of the role of the extractant-diluent pair (EDP) is still not well defined, in spite of work by many researchers in this field (Laxmi et al. 2015; Thirumaran and Jayakumar 2009; Liu et al. 2016; Mahapatra et al. 2014; Matsumiya et al. 2014; Bhatanagar et al. 2010; Joshi et al. 2010; Patel and Parsania 2010; Palani and Kalavathy 2011; Sakthipandi et al. 2012; Srivastava et al. 2014; Aswar and Chudhary 2014; Giri and Nath 2015). Thus, an optimal concentration, or range of concentrations, of extractant with a particular diluent is a serious concern in the solvent extraction (PUREX) process, and the aim of the present investigation is to find an optimum concentration of the EDP used for the extraction process. The exact concentration of the EDP and its variation with different physical factors can be studied well through the propagation of ultrasonic waves in the medium. To understand the basic process in terms of a few fundamental parameters in an easier way, ultrasonic irradiation is an effective and efficient method in this regard. The high frequency and short wavelength of ultrasonic waves make it possible to interact with the atoms and molecules of the medium without any destruction of the medium or of the individual properties of its components. Tributyl phosphate (TBP) has been extensively used as a solvent in the nuclear industry for fuel reprocessing owing to its excellent chemical resistance and physical properties, which result in better separation than with other solvents. The extracting power of TBP is mainly due to the presence of the phosphoryl group, which forms solvates with metal ions. A diluent such as toluene improves the physical properties of TBP by lowering its density and viscosity for better phase separation.
Hence, it is important to study various physical properties of TBP in the presence of a diluent. In applications of liquid-liquid extraction processes for recycling used nuclear fuel, the aqueous phase co-exists with an organic extracting phase which consists of a mixture of an extracting agent and a diluent. Cerium is a member of the lanthanide series of metals and is the most abundant of the rare earth elements in the earth's crust. When present in compounds, cerium exists in both the trivalent (Ce3+) and the tetravalent (Ce4+) states. Cerium is found in nature along with other lanthanide elements in minerals such as allanite, bastnasite, monazite, cerite, and samarskite; however, only bastnasite and monazite are commercially important sources. Because of its unique stability in the tetravalent state, cerium can be separated from the other rare earth elements through oxidation (forming CeO2) followed by variable-solubility filtration.
Materials and instruments
High-purity, analytical-grade samples of TBP (AR > 98 %), toluene (AR > 99 %), HNO3 (AR > 15.5 mol/L), and CeO2 (AR > 99 %) procured from CDH chemicals were used as received. The binary mixtures were prepared on a percentage basis (w/w) by mixing known masses of toluene with appropriate masses of TBP, the masses being measured with a high-precision electronic balance (WENSAR, PGB 100, accuracy ±0.001 g). The densities of all mixtures as well as of the pure liquids were measured with a specific gravity bottle calibrated with deionized double-distilled water of density 0.9960 × 10³ kg/m³ at 303.15 K. The precision of the density measurement was within ±0.0001 kg/m³. The ultrasonic velocities in the mixtures as well as in the component liquids were measured at 303.15 K (to within ±0.01 m/s) by a single-crystal variable-path multifrequency ultrasonic interferometer operating at frequencies of 1–4 MHz (Mittal Enterprises, New Delhi, Model M-81S).
The temperature of the mixture was maintained constant within ±0.01 K by circulating water from a thermostatically regulated constant-temperature water bath (B-206) through the water-jacketed cell. Viscosities of the mixtures were measured with a Redwood apparatus (MAC, #RWV-5271, precise to ±0.0001 Ns m−2). Different concentrations of extractant were prepared by dissolving various amounts of TBP in toluene. All samples were stored in ground-glass-stoppered bottles to prevent evaporation. The concentrations of extractant were studied and optimized by the ultrasonic method in terms of the existence of different intermolecular interactions, as reflected in the various acoustic parameters and their deviated values. The ultrasonic velocities of the pure liquids and their freshly prepared TBP-toluene mixtures were measured using the multifrequency ultrasonic interferometer operating at different frequencies (1–4 MHz). The working principle of the sound-velocity measurement is the accurate determination of the wavelength of ultrasonic waves of known frequency produced by a quartz crystal in the measuring cell. The temperature of the solution was controlled by circulating water at the desired temperature through the jacket of the double-walled cell.
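Two of the acoustical parameters tabulated in studies of this kind follow directly from the measured density and sound velocity via standard relations (the Newton-Laplace equation for isentropic compressibility). A minimal sketch with illustrative numbers for water near 303 K, not values taken from this paper:

```python
def acoustic_params(density, velocity):
    """Standard relations used in ultrasonic liquid studies:
    acoustic impedance  Z      = rho * c        [kg m^-2 s^-1]
    isentropic compressibility beta_s = 1 / (rho * c^2)  [Pa^-1]
    (Newton-Laplace)."""
    Z = density * velocity
    beta_s = 1.0 / (density * velocity ** 2)
    return Z, beta_s

# Illustrative only: water at ~303 K, rho ≈ 996 kg/m^3, c ≈ 1509 m/s
Z, beta_s = acoustic_params(996.0, 1509.0)
print(Z)       # ≈ 1.50e6 kg m^-2 s^-1
print(beta_s)  # ≈ 4.4e-10 Pa^-1
```

The excess quantities reported in the tables below are then the differences between the measured mixture values and the ideal mole-fraction-weighted averages of the pure-component values.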
Results and discussion
[Table: Experimental density (ρ) and viscosity (η) values for pure liquids compared with literature values]
[Table: Experimental values of density (ρ), viscosity (η), and molar volume at 303.15 K versus mole fraction of TBP]
[Table: Ultrasonic velocity (C), excess acoustic impedance (ZE), and excess isentropic compressibility (Δβs) of pure TBP, toluene, and their binary mixtures at different mole fractions and frequencies (1–4 MHz) at 303.15 K]
[Table: Excess intermolecular free length (LfE), excess surface tension (σE), excess molar volume (VmE), and excess viscosity (ηE) of pure TBP, toluene, and their binary mixtures at different mole fractions and frequencies (1–4 MHz) at 303.15 K]
[Table: Percent extraction of cerium from CeO2 versus mole fraction of the extractant-diluent pair (EDP)]
The ultrasonic study of TBP and toluene is a nondestructive investigation used for probing the nature of the acoustical and molecular interactions in the solvent mixture. The acoustic data (ultrasonic velocity, density, viscosity, molar volume, and the acoustic parameters with their excess values) for TBP with toluene over the concentration range studied suggest the existence of strong molecular interactions of the dipole-induced-dipole, dipole-dipole, and hydrogen-bonding types. The frequency of the ultrasonic wave also influences the apparent intermolecular interaction, as all the parameters are based on the computed ultrasonic velocity. The change in the deviated and excess physicochemical parameters beyond a certain concentration hints at the compatibility of the solvent mixture.
Again, the extraction of cerium at each concentration of TBP and toluene indicates that the maximum efficiency of the TBP-toluene mixture is the same as that demonstrated by each physicochemical parameter. The nature of the interactions present in the TBP-toluene mixture provides an optimized value for the extraction process. As such, toluene with TBP may be used as an effective diluent/modifier in the extraction of cerium from cerium oxide material.
The authors are thankful to the Hon'ble Vice-Chancellor and Dean (PGS & R) for providing the financial support and laboratory facilities to carry out the research work. RG carried out the sample preparation for the experimental work and measured and computed the different experimental data under the guidance of GN, and participated in the scientific analysis and discussion of the different results. All authors read and approved the final manuscript. The authors declare that they have no competing interests.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Ali A, Nabi F. J Disp Sci Tech. 2010;31:1326.
- Ali A, Nain AK. Acoust Lett. 1996;19:181.
- Ali A, Nain AK, Kamil M. Thermochim Acta. 1996;274:209.
- Ali A, Abida, Hyder S. Phys Chem Liq. 2004;42:411.
- Aswar AS, Chudhary DS. J Pure Appl Ultrason. 2014;36:46–50.
- Bhatanagar D, Joshi D, Kumar A, Jain CL. Ind J Pure Appl Phys. 2010;48:31.
- Bottcher CJF. Theory of electric polarization. Amsterdam: Elsevier; 1973. p. 1.
- Dey R, Raghuvanshi KS, Saini A, Harshavardhan A. Int J Sci Res.
Variability of ion density due to solar flares as measured by SROSS-C2 satellite

Ion densities were measured from 1995 to 1998 using the RPA payload of the SROSS-C2 satellite to study the effect of solar flares on ion density. Solar flare data were obtained from the National Geophysical Data Center (NGDC), Boulder, Colorado (USA). The study indicates a considerable decrease in total ion density during flare time compared to normal time; this decrease varies from 1.2 to 2.8 times. Of the four ion species measured by SROSS-C2 (O+, O2+, H+ and He+), the O+ density is the most affected by a flare: it decreases considerably, while the O2+, H+ and He+ densities show negligible change during flare time compared to normal time. Furthermore, the relation between the change in ion density (ΔN) and the change in ion temperature (ΔT) during flare time relative to normal days has been estimated. A comparison shows that the IRI-2012 model underestimates the O+ density during flare time.
The PolyCam imager aboard NASA's OSIRIS-REx spacecraft captured this composite image of Jupiter (center) and three of its moons: Callisto (left), Io and Ganymede. Credit: NASA/Goddard/University of Arizona

During Earth-Trojan asteroid search operations, the PolyCam imager aboard NASA's OSIRIS-REx spacecraft captured this image of Jupiter (center) and three of its moons: Callisto (left), Io, and Ganymede. The image, which shows the bands of Jupiter, was taken at 3:34 a.m. EST on Feb. 12, when the spacecraft was 76 million miles (122 million kilometers) from Earth and 418 million miles (673 million kilometers) from Jupiter. PolyCam is OSIRIS-REx's longest-range camera, capable of capturing images of the asteroid Bennu from a distance of two million kilometers. This image was produced by taking two copies of the same image, adjusting the brightness of Jupiter separately from the significantly dimmer moons, and compositing them back together so that all four objects are visible in the same frame.

NASA's Goddard Space Flight Center in Greenbelt, Maryland, provides overall mission management, systems engineering, and safety and mission assurance for OSIRIS-REx. Dante Lauretta of the University of Arizona, Tucson, is the principal investigator, and the University of Arizona also leads the science team and the mission's observation planning and processing. Lockheed Martin Space Systems in Denver built the spacecraft and is providing flight operations. Goddard and KinetX Aerospace are responsible for navigating the OSIRIS-REx spacecraft. OSIRIS-REx is the third mission in NASA's New Frontiers Program. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the agency's New Frontiers Program for its Science Mission Directorate in Washington.

Provided by: NASA's Goddard Space Flight Center
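The two-exposure compositing described in the article can be sketched in a few lines: scale the bright object and the dim objects with separate gains, then merge. This is a generic illustration of the technique, not NASA's actual image-processing pipeline, and the pixel values and gains are invented.

```python
# Sketch of two-exposure compositing: scale the bright object (Jupiter) and
# the dim moons separately, then merge into one frame. Purely illustrative --
# not the actual OSIRIS-REx processing; all values are invented.

def composite(raw, bright_mask, bright_gain, dim_gain):
    """Apply separate gains inside/outside the bright mask, clip to [0, 1]."""
    out = []
    for value, is_bright in zip(raw, bright_mask):
        gain = bright_gain if is_bright else dim_gain
        out.append(min(1.0, max(0.0, value * gain)))
    return out

# Toy 1-D "image": one bright pixel (Jupiter) and three faint moons.
raw = [0.02, 0.90, 0.03, 0.01]
jupiter = [False, True, False, False]

frame = composite(raw, jupiter, bright_gain=1.0, dim_gain=20.0)
print(frame)  # moons boosted 20x so all four bodies are visible together
```

In a real pipeline the mask would be a 2-D region around the bright body and the merge would feather the seam, but the principle, separate brightness scaling followed by recombination, is the same.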
Roughness length (z0) is a parameter of some vertical wind profile equations that model the horizontal mean wind speed near the ground; in the log wind profile, it is equivalent to the height at which the wind speed theoretically becomes zero. In reality the wind at this height no longer follows a mathematical logarithm. It is so named because it is typically related to the height of terrain roughness elements. While it is not a physical length, it can be considered a length scale representing the roughness of the surface.

As an approximation, the roughness length is roughly one-tenth of the height of the surface roughness elements. For example, short grass of height 0.01 m has a roughness length of approximately 0.001 m. Surfaces are rougher if they have more protrusions: forests have much larger roughness lengths than tundra, for example. Roughness length is an important concept in urban meteorology, as the building of tall structures such as skyscrapers affects roughness length and wind patterns.

|Terrain description||z0 (m)|
|Open sea, fetch at least 5 km||0.0002|
|Mud flats, snow; no vegetation, no obstacles||0.005|
|Open flat terrain; grass, few isolated obstacles||0.03|
|Low crops; occasional large obstacles, x/H > 20||0.10|
|High crops; scattered obstacles, 15 < x/H < 20||0.25|
|Parkland, bushes; numerous obstacles, x/H ≈ 10||0.5|
|Regular large-obstacle coverage (suburb, forest)||1.0|
|City centre with high- and low-rise buildings||≥ 2|
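The roughness length enters the neutral-stability log wind profile, u(z) = (u*/κ) ln(z/z0), where u* is the friction velocity and κ ≈ 0.4 is the von Karman constant. A common practical use, sketched below, is extrapolating a wind speed measured at one height to another height over the same surface; the measured speed and heights here are made-up example values.

```python
import math

KAPPA = 0.4  # von Karman constant (dimensionless)

def log_profile_speed(u_star, z, z0):
    """Neutral-stability log wind profile: u(z) = (u*/kappa) * ln(z/z0)."""
    return (u_star / KAPPA) * math.log(z / z0)

def extrapolate(u_ref, z_ref, z_target, z0):
    """Scale a speed measured at z_ref to z_target; u* cancels out."""
    return u_ref * math.log(z_target / z0) / math.log(z_ref / z0)

# Made-up example: 5 m/s measured at 10 m over short grass (z0 = 0.03 m,
# the "open flat terrain" row of the table above).
u80 = extrapolate(u_ref=5.0, z_ref=10.0, z_target=80.0, z0=0.03)
print(f"estimated speed at 80 m: {u80:.2f} m/s")
```

Because z0 appears inside the logarithms, the same measurement extrapolated over a rougher surface (larger z0) yields a stronger increase of speed with height.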
Around half were introduced deliberately (ornamental, crop or forest species), and the other half accidentally, along with imported seeds. In addition to chemical, mechanical and more recently biological control measures, it is now crucial to intervene upstream of invasion to prevent the introduction of potentially invasive species or regulate the spread of plants that are already on the island. Lake Gol is a particularly eloquent example. In early 2006, it was completely covered by a mixture of water hyacinths (Eichhornia crassipes) and water lettuces (Pistia stratiotes). In March 2006, cyclone Diwa cleared almost 99% of the lake by flushing it out and washing the plants into the sea, but eight months later, it was again totally covered, to the detriment of the aquatic ecosystem as a whole. These aquatic plants are estimated to produce some 250 tonnes of biomass per hectare per fortnight, a rate that rules out mechanical control and calls for biological measures, which have already proved their efficacy in many tropical countries. Rational joint management of natural environments and rural areas There are four main ways of intervening upstream of invasion. The first is to ban the introduction of risky plants. This means assessing the risks beforehand. To this end, analyses by CIRAD under the EU POSEIDOM* programme served to identify around fifty potentially invasive plants, most of them ornamental. A ban is due to be placed on introducing those plants in the French overseas regions. Of the ornamental species grown in the highlands of Reunion, 34 are highly invasive. Moreover, once plants are introduced, there is a latency period before they begin to multiply and spread. However, once they have spread over an area of 100 hectares, they are impossible to eradicate, which confirms the need for early intervention. It is also vital to take account of the movement of species between the various elements of the landscape: forests, rangelands, crops and inhabited areas. 
This is the second line of intervention. For instance, grasslands, which contain a high proportion of species from outside (80%), can play a major role in plant movements on a landscape level, depending on how well they are managed. In sensitive areas such as the highlands, rearing herbivorous animals is one way of managing the environment. Recent work under a rangeland management project (PASTOFOR)** showed that productive, well-kept grasslands help to maintain biodiversity in surrounding natural environments by limiting the spread of invasive species. On the other hand, grasslands in which weeds are not sufficiently controlled are a threat to neighbouring natural environments. The interfaces between different environments are key to the circulation and development of invasive species: rational joint management of natural environments and rural areas is the only way of controlling them.

Taking account of how plants spread

Classing invasive plants according to their ecological impact and ability to conquer new areas has meant that it is now possible to define the priorities more effectively. However, it is also necessary to determine how they spread depending on the ecological context and on how the environments concerned are managed. In fact, these plants may spread in different ways depending on the type of environment. For instance, the false pepper tree (Schinus terebinthifolius) grows from seed in humid environments and from suckers under dry conditions. Likewise, the giant bramble (Rubus alceifolius) bears fruit at low altitudes but only propagates vegetatively at heights of more than 1000 metres above sea level. The results of these studies mean that it is now possible to take more effective action against the threat posed to the island's biodiversity by these invasive plants.
Managing invasive species on an island means taking account of all the processes that lead to invasion and considering various levels of intervention, from preventing risky introductions, through early detection of the initial signs of invasions, to control and the subsequent restoration of environments. Lastly, over and above preventing invasion, the main priority is to make local populations aware of the problem.

* POSEIDOM: Programme of options specific to the remote and insular nature of the French overseas department.
** PASTOFOR: Management of pastoralism on the fringes of highly protected natural environments.
Conservation of mass

The law of conservation of mass, or principle of mass conservation, states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time, since mass can be neither added to nor removed from such a system; the quantity of mass is therefore conserved over time. The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may change in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic process in an isolated system, the total mass of the reactants, or starting materials, must equal the mass of the products.

The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was demonstrated independently by Mikhail Lomonosov and later rediscovered by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry.

The conservation of mass holds only approximately and is considered part of a series of assumptions coming from classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass-energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems the conservation of mass alone is shown not to hold, as is the case in nuclear reactions and particle-antiparticle annihilation in particle physics. Mass is also not generally conserved in open systems. Such is the case when various forms of energy and matter are allowed into, or out of, the system.
However, unless radioactivity or nuclear reactions are involved, the amount of energy escaping (or entering) such systems as heat, mechanical work, or electromagnetic radiation is usually too small to be measured as a decrease (or increase) in the mass of the system. For systems involving large gravitational fields, general relativity has to be taken into account; there, mass-energy conservation becomes a more complex concept, subject to different definitions, and neither mass nor energy is as strictly and simply conserved as in special relativity.

Formulation and examples

The law of conservation of mass can only be formulated in classical mechanics when the energy scales associated with an isolated system are much smaller than $mc^2$, where $m$ is the mass of a typical object in the system, measured in the frame of reference where the object is at rest, and $c$ is the speed of light.

The law can be formulated mathematically in the fields of fluid mechanics and continuum mechanics, where the conservation of mass is usually expressed using the continuity equation, given in differential form as

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,$$

where $\rho$ is the density (mass per unit volume), $t$ is the time, $\nabla \cdot$ is the divergence, and $\mathbf{v}$ is the flow velocity field. The interpretation of the continuity equation for mass is the following: for a given closed surface in the system, the change in time of the mass enclosed by the surface is equal to the mass that traverses the surface, positive if matter goes in and negative if matter goes out. For the whole isolated system, this condition implies that the total mass $M$, the sum of the masses of all components in the system, does not change in time, i.e.

$$\frac{dM}{dt} = \frac{d}{dt}\sum_i m_i = 0.$$

In chemistry, the calculation of the amount of reactants and products in a chemical reaction, or stoichiometry, is founded on the principle of conservation of mass.
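The fluid-mechanical statement can be checked numerically before turning to chemistry: a conservative finite-difference update of the continuity equation on a periodic grid preserves the total mass to rounding error, because the flux leaving one cell is exactly the flux entering its neighbour. This is a minimal sketch with an arbitrary velocity field, not a production CFD scheme.

```python
# Minimal 1-D, periodic, conservative update of the continuity equation:
# d(rho)/dt = -d(rho*v)/dx.  Fluxes cancel in pairs, so total mass is exact.

def step(rho, v, dt, dx):
    """One upwind update; flux[i] crosses the face between cells i-1 and i."""
    n = len(rho)
    flux = [rho[i - 1] * v[i - 1] for i in range(n)]  # periodic wrap at i=0
    return [rho[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

rho = [1.0, 2.0, 0.5, 1.5]   # arbitrary initial density
v = [0.3, 0.1, 0.2, 0.4]     # arbitrary (positive) velocity field
mass0 = sum(rho)

for _ in range(100):
    rho = step(rho, v, dt=0.1, dx=1.0)

print(sum(rho), mass0)  # equal up to floating-point rounding
```

Summing the update over all cells makes the flux differences telescope to zero, which is the discrete analogue of the integral form of the law.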
The principle implies that during a chemical reaction the total mass of the reactants is equal to the total mass of the products. For example, in the reaction

$$\mathrm{CH_4 + 2\,O_2 \rightarrow CO_2 + 2\,H_2O},$$

one molecule of methane (CH4) and two oxygen molecules (O2) are converted into one molecule of carbon dioxide (CO2) and two of water (H2O). The number of molecules resulting from the reaction can be derived from the principle of conservation of mass: since four hydrogen atoms, four oxygen atoms and one carbon atom are present initially (as well as in the final state), the number of water molecules produced must be exactly two per molecule of carbon dioxide produced.

History

An important idea in ancient Greek philosophy was that "nothing comes from nothing", so that what exists now has always existed: no new matter can come into existence where there was none before. An explicit statement of this, along with the further principle that nothing can pass away into nothing, is found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." A further principle of conservation was stated by Epicurus around the 3rd century BC, who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira (6th century BC), states that the universe and its constituents such as matter cannot be destroyed or created. The Jain text Tattvarthasutra (2nd century CE) states that a substance is permanent, but its modes are characterised by creation and destruction. A principle of the conservation of matter was also stated by Nasir al-Din al-Tusi (around the 13th century CE). He wrote that "A body of matter cannot disappear completely.
It only changes its form, condition, composition, color and other properties and turns into a different complex or elementary matter".

Mass conservation in chemistry

By the 18th century the principle of conservation of mass during chemical reactions was widely used, and it was an important assumption during experiments even before a definition was formally established, as can be seen in the works of Joseph Black, Henry Cavendish, and Jean Rey. The first to outline the principle was Mikhail Lomonosov in 1756. He demonstrated it by experiments and had discussed the principle earlier, in 1748, in correspondence with Leonhard Euler, though his claim on the subject is sometimes challenged. A more refined series of experiments was later carried out by Antoine Lavoisier, who expressed his conclusion in 1773 and popularized the principle of conservation of mass. The demonstrations of the principle rendered alternative theories obsolete, such as the phlogiston theory, which claimed that mass could be gained or lost in combustion and heat processes.

The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases. For example, a piece of wood weighs less after burning; this seemed to suggest that some of its mass disappears, is transformed, or is lost. This was not disproved until careful experiments were performed in which chemical reactions such as rusting were allowed to take place in sealed glass ampoules; it was found that the chemical reaction did not change the weight of the sealed container and its contents. Weighing of gases with scales was not possible until the invention of the vacuum pump in the 17th century. Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry.
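The bookkeeping behind the methane example above can be automated: count atoms on each side of CH4 + 2 O2 -> CO2 + 2 H2O, confirm every element balances, then multiply by atomic masses to confirm the reactant and product masses agree. The atomic weights below are standard rounded values.

```python
from collections import Counter

# Approximate standard atomic weights (g/mol).
MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def atoms(formula_counts, coefficient):
    """Scale a formula's atom counts by its stoichiometric coefficient."""
    return Counter({el: n * coefficient for el, n in formula_counts.items()})

# CH4 + 2 O2 -> CO2 + 2 H2O
reactants = atoms({"C": 1, "H": 4}, 1) + atoms({"O": 2}, 2)
products = atoms({"C": 1, "O": 2}, 1) + atoms({"H": 2, "O": 1}, 2)

assert reactants == products  # every element balances

mass_in = sum(MASS[el] * n for el, n in reactants.items())
mass_out = sum(MASS[el] * n for el, n in products.items())
print(f"reactants: {mass_in:.3f} g/mol, products: {mass_out:.3f} g/mol")
```

The two totals agree exactly because the per-element counts agree; this is precisely the quantitative check that became possible once mass conservation was accepted.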
Once early chemists realized that chemical substances never disappeared but were only transformed into other substances with the same weight, these scientists could for the first time embark on quantitative studies of the transformations of substances. The idea of mass conservation, plus a surmise that certain "elemental substances" also could not be transformed into others by chemical reactions, in turn led to an understanding of chemical elements, as well as the idea that all chemical processes and transformations (such as burning and metabolic reactions) are reactions between invariant amounts or weights of these chemical elements.

Following the pioneering work of Lavoisier, the prolonged and exhaustive experiments of Jean Stas supported the strict accuracy of this law in chemical reactions, even though they were carried out with other intentions. His research indicated that in certain reactions the loss or gain could not have been more than 2 to 4 parts in 100,000. The difference in the accuracy aimed at and attained by Lavoisier on the one hand, and by Morley and Stas on the other, is enormous.

Generalization

The law of conservation of mass was challenged with the advent of special relativity. In one of the Annus Mirabilis papers of Albert Einstein in 1905, he suggested an equivalence between mass and energy. This theory implied several assertions, such as the idea that the internal energy of a system could contribute to the mass of the whole system, or that mass could be converted into electromagnetic radiation. However, as Max Planck pointed out, a change in mass as a result of extraction or addition of chemical energy, as predicted by Einstein's theory, is so small that it could not be measured with the available instruments and could not be presented as a test of special relativity.
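Planck's point is easy to quantify with the mass-energy relation Δm = ΔE/c². For illustration, burning one mole (about 16 g) of methane releases roughly 890 kJ, a standard handbook value used here only to set the scale:

```python
C = 299_792_458.0  # speed of light, m/s

def mass_change(delta_e_joules):
    """Mass equivalent of an energy change: delta_m = delta_E / c^2 (kg)."""
    return delta_e_joules / C ** 2

# ~890 kJ released when one mole (~16 g) of methane burns.
dm = mass_change(890e3)
print(f"mass carried away by the heat: {dm:.2e} kg")
```

The answer is on the order of ten nanograms out of roughly a hundred grams of reactants, about one part in 10^10, far below what any 18th- or 19th-century balance (or Stas's 2-4 parts in 100,000) could resolve.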
Einstein speculated that the energies associated with newly discovered radioactivity were significant enough, compared with the mass of the systems producing them, for their mass change to be measured once the energy of the reaction had been removed from the system. This later proved to be possible, although it was eventually the first artificial nuclear transmutation reaction in 1932, demonstrated by Cockcroft and Walton, that provided the first successful test of Einstein's theory regarding mass loss with energy loss.

The law of conservation of mass and the analogous law of conservation of energy were finally overruled by a more general principle known as the mass–energy equivalence. Special relativity also redefines the concepts of mass and energy, which can be used interchangeably and are defined relative to the frame of reference. Several definitions had to be introduced for consistency, such as the rest mass of a particle (mass in the rest frame of the particle) and the relativistic mass (in another frame). The latter term is usually less frequently used. For more discussion, see Mass in special relativity.

In special relativity, the conservation of mass does not apply if the system is open and energy escapes. However, it does continue to apply to totally closed (isolated) systems. If energy cannot escape a system, its mass cannot decrease. In relativity theory, so long as any type of energy is retained within a system, this energy exhibits mass. Also, mass must be differentiated from matter (see below), since matter may not be perfectly conserved in isolated systems, even though mass is always conserved in such systems.
However, matter is so nearly conserved in chemistry that violations of matter conservation were not measured until the nuclear age, and the assumption of matter conservation remains an important practical concept in most systems in chemistry and other studies that do not involve the high energies typical of radioactivity and nuclear reactions.

The mass associated with chemical amounts of energy is too small to measure

The change in mass of certain kinds of open systems, where atoms or massive particles are not allowed to escape but other types of energy (such as light or heat) are allowed to enter or escape, went unnoticed during the 19th century, because the change in mass associated with addition or loss of small quantities of thermal or radiant energy in chemical reactions is very small. (In theory, mass would not change at all for experiments conducted in isolated systems where heat and work were not allowed in or out.)

Mass conservation remains correct if energy is not lost

The conservation of relativistic mass implies the viewpoint of a single observer (or the view from a single inertial frame), since changing inertial frames may result in a change of the total energy (relativistic energy) of the system, and this quantity determines the relativistic mass. The principle that the mass of a system of particles must equal the sum of their rest masses, though true in classical physics, may be false in special relativity. The reason rest masses cannot simply be added is that this does not take into account other forms of energy, such as kinetic and potential energy, and massless particles such as photons, all of which may (or may not) affect the total mass of systems.
For moving massive particles in a system, examining the rest masses of the various particles amounts to introducing many different inertial observation frames (which is prohibited if total system energy and momentum are to be conserved); moreover, even in the rest frame of one particle, this procedure ignores the momenta of the other particles, which affect the system mass if those particles are in motion in that frame. For the special type of mass called invariant mass, changing the inertial frame of observation for a whole closed system has no effect on the measure of the invariant mass of the system, which remains both conserved and invariant (unchanging), even for different observers who view the entire system. Invariant mass is a combination of the system's energy and momentum which is invariant for any observer, because in any inertial frame the energies and momenta of the various particles always add to the same quantity (the momentum may be negative, so the addition amounts to a subtraction). The invariant mass is the relativistic mass of the system when viewed in the center-of-momentum frame. It is the minimum mass which a system may exhibit, as viewed from all possible inertial frames.

The conservation of both relativistic and invariant mass applies even to systems of particles created by pair production, where energy for new particles may come from the kinetic energy of other particles, or from one or more photons as part of a system that includes other particles besides a photon. Again, neither the relativistic nor the invariant mass of totally closed (that is, isolated) systems changes when new particles are created. However, different inertial observers will disagree on the value of this conserved mass if it is the relativistic mass (i.e., relativistic mass is conserved but not invariant). In contrast, all observers agree on the value of the conserved mass if the mass being measured is the invariant mass (i.e., invariant mass is both conserved and invariant).
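The frame-independence of invariant mass is easiest to see with the simplest massless example: a system of photons. Each photon alone has zero mass, yet the system mass, from M²c⁴ = (ΣE)² − |Σp|²c², is nonzero unless the photons are parallel. The sketch below works in units where energies and momenta are both in MeV (so |p| = E for a photon); the photon energies are arbitrary example values.

```python
import math

def invariant_mass(photons):
    """Invariant mass (MeV/c^2) of photons given as (E, px, py, pz) in MeV.

    For a photon |p| = E (in these units), so the system mass is
    sqrt((sum E)^2 - |sum p|^2); the max() guards against tiny negative
    rounding residue.
    """
    e = sum(p[0] for p in photons)
    px = sum(p[1] for p in photons)
    py = sum(p[2] for p in photons)
    pz = sum(p[3] for p in photons)
    return math.sqrt(max(0.0, e * e - (px * px + py * py + pz * pz)))

# Two 5-MeV photons:
back_to_back = [(5.0, 5.0, 0.0, 0.0), (5.0, -5.0, 0.0, 0.0)]
parallel = [(5.0, 5.0, 0.0, 0.0), (5.0, 5.0, 0.0, 0.0)]

print(invariant_mass(back_to_back))  # massless parts, massive system
print(invariant_mass(parallel))      # parallel photons stay massless
```

The back-to-back pair has total momentum zero, so its invariant mass equals its total energy (10 MeV/c²): massless constituents forming a massive system, exactly the point made in the text.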
The mass-energy equivalence formula gives a different prediction in non-isolated systems, since if energy is allowed to escape a system, both relativistic mass and invariant mass will escape also. In this case, the mass-energy equivalence formula predicts that the change in mass of a system is associated with the change in its energy due to energy being added or subtracted:

$$\Delta m = \frac{\Delta E}{c^2}.$$

This form, involving changes, was the form in which this famous equation was originally presented by Einstein. In this sense, mass changes in any system are explained simply if the mass of the energy added to or removed from the system is taken into account.

The formula implies that bound systems have an invariant mass (rest mass for the system) less than the sum of their parts, if the binding energy has been allowed to escape the system after the system has been bound. This may happen by converting system potential energy into some other kind of active energy, such as kinetic energy or photons, which easily escape a bound system. The difference in system masses, called a mass defect, is a measure of the binding energy in bound systems; in other words, it is the energy needed to break the system apart. The greater the mass defect, the larger the binding energy. The binding energy (which itself has mass) must be released (as light or heat) when the parts combine to form the bound system, and this is the reason the mass of the bound system decreases when the energy leaves the system. The total invariant mass is actually conserved when the mass of the binding energy that has escaped is taken into account.

In general relativity, the total invariant mass of photons in an expanding volume of space will decrease, due to the redshift of such an expansion. The conservation of both mass and energy therefore depends on various corrections made to energy in the theory, due to the changing gravitational potential energy of such systems.
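A standard worked example of the mass defect: a helium-4 nucleus weighs measurably less than its two protons and two neutrons, and the deficit times c² is the binding energy. The masses below are standard values in unified atomic mass units, rounded to six decimals.

```python
# Mass defect of helium-4 from its constituent nucleon masses
# (rounded standard values, in unified atomic mass units, u).
M_PROTON = 1.007276
M_NEUTRON = 1.008665
M_HE4_NUCLEUS = 4.001506
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
binding = defect * U_TO_MEV

print(f"mass defect   : {defect:.6f} u")
print(f"binding energy: {binding:.1f} MeV")
```

The defect is about 0.03 u (roughly 0.75% of the total), corresponding to about 28.3 MeV of binding energy, i.e. about 7 MeV per nucleon; this is the energy that left the system when the nucleus formed, carrying its mass with it.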
See also

- Charge conservation
- Conservation law
- Fick's laws of diffusion
- Law of definite proportions
- Law of multiple proportions
<urn:uuid:9d5176af-5261-483c-ad72-9c1a3276328f>
3.640625
4,114
Knowledge Article
Science & Tech.
43.791018
95,559,416
Vail Daily science column: Glass is all around us October 31, 2014 In the spirit of Halloween, Denver's Botanic Gardens is in full costume. Seuss-like silica spikes, crazy crystalline citadels and psychedelic petals ooze from the landscape. Rather than being tropical plants gone wild, these features are made of glass. Glass is all around us, although typically less conspicuously than in the Gardens' Chihuly exhibition. It's on our faces, our phones and our homes. But where does it come from? Glass comes from sand. Much of that sand begins its journey right here in the mountains. On the mountaintops just north of Vail, rocks such as granite are broken down by snow, wind, rain, plants and microbes. Bits and pieces of broken-up rock roll down the mountain into streams and rivers, where they're carried out to the plains or to mountain-bound parks. Translation: A "park" is what Coloradans call a "basin" or "valley". The most famous one has its own TV show — South Park. As particles make their downhill journey, they're broken down into sand and dust-sized sediments. This mix includes minerals like shiny mica, pink feldspar, and if you're lucky, a bit of gold. However the most common constituent of sediments is the mineral quartz — the key ingredient in glass. Such sand is everywhere, but to produce sand needed for glass, something special must happen. Nature first has to break sediments down and get rid of all the stuff that isn't quartz. This happens naturally in lakes, beaches, rivers and streams. Wind is the most important cook in the kitchen, though. Because air isn't very viscous and doesn't have much mass, wind can only pick up tiny particles. It whisks away most of the dust-sized mineral grains, carrying them far, far away. Such minuscule grains can be blown clear over mountaintops. Sometimes this process is visible from outer space — just Google a satellite image of the Saharan dust cloud off western Africa.
Heavier, bigger mineral fragments get left behind. But the "middle-sized" grains, such as quartz, get rolled, bounced, and sometimes picked up by the wind. These grains can't be blown far nor high, so they tend to pile up over millennia at the feet of mountain ranges, or where wind regularly slows or becomes turbulent. In our very own San Luis Valley, giant quartz-rich dunes have formed by these very processes. If the dunes weren't in a national park, then they'd make good glass sand! The best glass sands, though, are underfoot. There, thousands of ancient dune fields have become petrified as vast layers of sandstone. A great example is Colorado's Lyons Sandstone, which represents a 265 million-year-old dune field that stretched from Mexico to Canada. You've probably seen this rock before — cladding buildings at the University of Colorado, as sidewalks in Old Denver, or perhaps even in your garden pavers. Sandstones like this are mined around the world for glass sand — largely because Mother Nature has already sorted the quartz grains and broken down all the other mineral constituents. Quartz sand from such rocks is also the key ingredient in fracking fluids. In its purest form, quartz sand ought to make clear glass. But even the best dune sand has some impurities in it. A common one is iron, the same element that helps give Red Rocks Amphitheater its vivid colors. To quell the effects of such contaminants, powdered mineral compounds can be added to clarify glass. One of the most common historical additives was the mineral pyrolusite, whose manganese helped neutralize the effects of iron impurities. But it had a side effect — the manganese caused the glass to turn purple after long exposure in sunlight. Once viewed as an unfortunate flaw, this purple window glass is now a sought-after mark of authenticity for antique windows and bottles. And those fabulous colors used by Dale Chihuly? Most of them derive from mineral additives blended into melted sand.
Such compounds help impart color to the glass, make it easier to melt and give it better structural properties. Even metals are added, like the silver compounds that help prescription glasses darken when exposed to sunlight. But my favorite is red glass. To give glass a red color you've got to add gold to it. And I'll bet some of it comes from Colorado. James Hagadorn, Ph.D., is a scientist at the Denver Museum of Nature & Science. Suggestions and comments are welcome at firstname.lastname@example.org.
<urn:uuid:cb4a633d-8e02-463a-8d42-82b9eb60c4fb>
3.234375
1,150
Truncated
Science & Tech.
53.884346
95,559,422
Every HTML document must have a Title element. The title should identify the contents of the document in a global context, and may be used in history lists and as a label for the windows displaying the document. Unlike headings, titles are not typically rendered in the text of a document itself. Normally, browsers will render the text contained within the <TITLE> ... </TITLE> elements in the title bar of the browser window. The Title element must occur within the head of the document and may not contain anchors, paragraph elements, or highlighting. Only one title is allowed in a document. NOTE: The length of a title is not limited; however, long titles may be truncated in some applications. To minimise the possibility, titles should be kept as succinct as possible. Also keep in mind that a short title, such as 'Introduction', may be meaningless out of context. An example of a meaningful title might be 'Introduction to HTML elements'. This is the only element that is required within the HEAD element. The other elements described are optional and can be implemented when appropriate. <TITLE>Welcome to the HTML Reference</TITLE> The <TITLE> element, in accordance with the Internet Explorer Dynamic HTML, supports some of the standard properties and methods. Of the Standard Dynamic HTML properties, the <TITLE> element supports document, id, parentElement, sourceIndex and tagName. See the Standard Dynamic HTML properties topic for more details. Of the Standard Dynamic HTML methods, the <TITLE> element supports contains, getAttribute, removeAttribute and setAttribute. See the Standard Dynamic HTML methods topic for more details. © 1995-1998, Stephen Le Hunte
<urn:uuid:a148c009-5618-4922-aa03-78e5ef890809>
3.09375
540
Documentation
Software Dev.
47.456742
95,559,427
Geophysical Investigation of St. Catherines Island Using Electrical Resistivity and Ground Penetrating Radar St. Catherines Island is a barrier island experiencing saltwater intrusion via structural pathways that may include joints, faults, or sag structures. This study used the geophysical methods of electrical resistivity (ER) and ground penetrating radar (GPR) to locate and determine the modes of transportation of the saltwater. The geophysical study was conducted in November of 2016 near a shallow (6-7 m deep) aquifer well traverse that has shown recent spikes in chloride concentration. Three geophysical transects were collected using ER and GPR. The ER data were collected using 56 electrodes with either 2 m or 3 m spacing. A dipole-dipole array with a strong gradient was used for data collection and then inverted using EarthImager 2D (Advanced Geosciences, Inc.). The GPR data were collected along the same transects as the ER data using a 100 MHz shielded antenna set at a shallow time window and a 250 MHz shielded antenna set at a deep time window. The GPR profiles were processed using Object Mapper (MALA). The ER data show a low resistivity layer at 1-6 m depth that correlates with the sandy surficial aquifer, a higher resistivity layer at 6-13 m depth that may represent a clay aquitard, and a low resistivity layer from 13-26 m depth that may represent a deeper aquifer. The GPR data suggest lateral and vertical variation in water-saturated porosity of the sandy surficial aquifer and a sharp reflector below the aquifer that is interpreted as the top of the clay aquitard. The geophysical data correlate and will be combined to investigate structures causing the saltwater intrusion. This research will allow us to gain a better understanding of the hydrogeology of St. Catherines Island and how it may be impacted by rising sea level, along with the other barrier islands along the Georgia coast.
Georgia Water Resources Council Annual Conference (GWRC) Diederich, Ryan G., Jacque L. Kelly, Robert K. Vance, Anne M. DeLua. "Geophysical Investigation of St. Catherines Island Using Electrical Resistivity and Ground Penetrating Radar." Geology and Geography Faculty Presentations.
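As a rough illustration of how dipole-dipole ER readings of the kind described in the abstract become resistivity values, the sketch below applies the standard dipole-dipole geometric factor. The voltage and current numbers are invented for the example and are not measurements from this survey.

```python
import math

def apparent_resistivity(a, n, delta_v, current):
    """Apparent resistivity (ohm*m) for a dipole-dipole array with
    electrode spacing a (metres) and dipole separation factor n.
    Standard geometric factor: k = pi * a * n * (n + 1) * (n + 2)."""
    k = math.pi * a * n * (n + 1) * (n + 2)
    return k * delta_v / current

# Hypothetical reading at the 2 m electrode spacing used in the survey:
rho_a = apparent_resistivity(a=2.0, n=1, delta_v=0.05, current=0.1)
print(round(rho_a, 2))  # about 18.85 ohm*m for these made-up values
```

Inversion software such as the EarthImager 2D package named in the abstract then fits a subsurface resistivity model to many such readings along a transect.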
<urn:uuid:d7de53d5-f48a-4965-b806-818a94f87007>
2.5625
477
Academic Writing
Science & Tech.
41.399526
95,559,445
Melting Behavior [76–84] The melting of polymers is, in principle, a first order transition. Nevertheless, it is not possible to describe all the experimental observations, which have been made for polymer melts, in terms of equilibrium thermodynamics. Due to the limited mobility of the long polymer chains, they do not reach their equilibrium conformation within a finite time. Thus, in order to completely describe the state of a system, not only the usual variables of state but also inner ordering parameters, which reflect the thermal history of the system, are required. Keywords: Gibbs Free Energy, Natural Rubber, Melting Behavior, Usual Variable, Entropy Effect
<urn:uuid:f907ff0a-5afe-4276-8ce1-9ca9b45de7ae>
2.84375
144
Truncated
Science & Tech.
26.717576
95,559,448
Spectral Characteristic of Polar Motion in the 2005-2006 and 1999-2000 Winter Seasons
The Earth's pole moves on the Earth's surface along a spiral curve known as the "polhody". Polhodies computed from the IERS C04 pole coordinates were compared with those computed after successively removing oscillations with periods shorter than 150, 30, 10, and 2 days. The comparison of these polhodies shows that the loops in the winter seasons of 2005-2006 and 1999-2000 are caused by oscillations of polar motion with periods shorter than 30 days. These short-period oscillations of the geodetic excitation function of polar motion were correlated with those of the atmospheric and oceanic excitation functions. There are high correlations, with coefficients equal to 0.8-0.9, during epochs when loops occur.
Barnes R. T. H., R. Hide, A. A. White, and C. A. Wilson (1983) Atmospheric angular momentum fluctuations, length-of-day changes and polar motion, Proc. R. Soc. Lond., A387, 31-73.
Brzeziński A. (1992) Polar motion excitation by variations of the effective angular momentum functions: considerations concerning deconvolution problem, Manuscr. Geodet., 17, 3-20.
Brzeziński A., R. M. Ponte, and A. H. Ali (2004) Nontidal oceanic excitation of nutation and diurnal/semidiurnal polar motion revisited, J. Geophys. Res., 109, B11407, doi:10.1029/2004JB003054.
Eubanks T. M., J. A. Steppe, J. O. Dickey, R. D. Rosen, and D. A. Salstein (1988) Causes of rapid motions of the Earth's pole, Nature, 334, 115-119.
Gross R. S. (2000) The excitation of the Chandler wobble, Geophys. Res. Lett., 27(15), 2329-2332.
Gross R. S., Fukumori I., and Menemenlis D. (2005) Atmospheric and Oceanic Excitation of Decadal-Scale Earth Orientation Variations, J. Geophys. Res., vol. 110, B09405.
Kalnay E., et al. (1996) The NMC/NCAR 40-year reanalysis project, Bull. Am. Meteorol. Soc., 77(3), 437-471.
Kołaczek B. (1993) Variations of Short Periodical Oscillations of Earth Rotation. Proc. of the 156th Symp. of the IAU held in Shanghai, China, Sep. 15-19, 1992. Developments in Astrometry and Their Impact on Astrophysics and Geodynamics. I. I. Mueller and B. Kolaczek (eds). pp. 291-296. Kluwer Academic Publishers. Dordrecht/Boston/New York.
Kołaczek B. (1995) Short Period Variations of Earth Rotation. Proc. Journees 1995 "Systemes de Reference Spatio-Temporels", Warsaw, Poland, Sep. 18-20, 1995. pp. 147-154.
Kołaczek B., W. Kosek, H. Schuh (2000) Short period oscillations of Earth rotation, Proceedings of the IAU Colloquium 178, Polar Motion: Historical and Scientific Problems, pp. 533-544.
Kosek W. (1995) Time Variable Band Pass Filter Spectra of Real and Complex-Valued Polar Motion Series, Artificial Satellites, Planetary Geodesy, 30, Warsaw, Poland, 27-43.
Kosek W., J. Nastula, B. Kolaczek (1995a) Variability of Polar Motion Oscillations with Periods from 20 to 150 Days in 1979-1991. Bulletin Geodesique, Springer Verlag (1995) 69: 308-319.
Lambert S. B., C. Bizouard, and V. Dehant (2006) Rapid variations in polar motion during the 2005-2006 winter season. Geophysical Research Letters, Vol. 33, LXXXXX, doi:10.1029/2006GL026422.
Otnes R. K., and L. Enochson (1972) Digital Time Series Analysis, John Wiley and Sons, New York.
Salstein D. A., D. M. Kann, A. J. Miller, and R. D. Rosen (1993) The subbureau for atmospheric angular momentum of the International Earth Rotation Service: A meteorological data center with geodetic applications, Bull. Am. Meteorol. Soc., 10, 67-80.
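The correlation analysis described in the abstract boils down to a standard Pearson coefficient between excitation time series. The sketch below uses short synthetic series purely for illustration; the real study compared the geodetic excitation derived from IERS C04 with the atmospheric and oceanic excitation functions.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic stand-ins for geodetic vs. atmospheric excitation samples:
geodetic = [0.0, 1.0, 2.0, 1.5, 0.5, -0.5]
atmospheric = [0.1, 0.9, 2.1, 1.4, 0.6, -0.4]  # closely tracks geodetic
r = pearson(geodetic, atmospheric)  # close to 1 for tracking series
```

Coefficients of 0.8-0.9, as reported for the loop epochs, indicate that the atmospheric and oceanic series reproduce most of the short-period geodetic signal.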
<urn:uuid:e9bdbb30-16a6-4b67-8738-62220e77cb6b>
3
1,016
Academic Writing
Science & Tech.
78.290014
95,559,465
Shell Recycling - Big Gains From Small Things Author: Elise Catterall Australia’s coastline has seen extensive losses in its shellfish reefs, which has many environmental impacts including fewer shellfish. But now your dinner plate leftovers may be coming to the rescue. Two great recycling initiatives are shaping up to transform reefs along Australia’s east coast and to support marine life and reduce landfill at the same time - and they both involve the recycling of shells. The initiatives draw on the understanding that mature shells – from oysters, scallops, and mussels – provide the ideal environment to grow young shellfish and that returning used shells to the water helps restore damaged and depleted reef environments. It also helps with erosion control and siltation. The first initiative, coordinated by The Nature Conservancy Australia, is focussed on the Victorian coastline near Port Phillip Bay. By collecting shells donated by restaurants and seafood wholesalers in Geelong, Nature Conservancy is working to restore the once abundant shellfish reefs of the area and to resurrect shellfish populations. These depleted reefs and their shellfish populations are a result of historic dredge fishing in the area. After collecting and cleaning the shells, they weather them for around six months, exposing them to wind and sun. They then put them in bulk bags that are placed on the shoreline to produce a new reef, on which young shellfish grow. So far, with the support of Little Creatures Brewery, Mantzaris Fisheries, Wah Wah Gee, and the Geelong Disabled People’s Industries, the initiative has collected 300 cubic meters of discarded shells that would otherwise have gone to landfill. The second initiative, coordinated by OceanWatch Australia, is also targeting depleted shellfish populations and damaged shorelines, but this time at five river sites around Sydney. 
In NSW 99% of wild oyster populations are functionally extinct because of pollution, sedimentation, disease, and habitat loss or degradation from coastal development. The program also relies on donations – namely from Sydney’s Star Casino and from oyster farmers in Port Stephens on the NSW mid-north coast. It uses biodegradable coconut fibre bags filled with old oyster shells to line the shore. Aquaculture program manager of OceanWatch Australia, Andy Myers, explains: “….the high lime content of oyster shells makes them really attractive to baby oysters. When oyster larvae settle on other oysters, when they grow they secrete a natural cement and bind the structures together. When the bag breaks down the structural complexity will still be there for a multitude of marine organisms.” Approximately eight tonnes of shells are being recycled for this purpose. A similar program to restore the natural shellfish cycle has been in place in the US for several years now, under the management of the Shell Recycling Alliance (SRA), and now has over 300 restaurants participating in the scheme. The Nature Conservancy Australia’s US counterpart also uses similar techniques. - Follow OceanWatch Australia guidelines for protecting marine environments. - Watch The Nature Conservancy Australia’s video about the project. - Watch OceanWatch Australia’s Living Shorelines Program video. - Support the efforts of The Nature Conservancy Australia and OceanWatch Australia. - The Nature Conservancy Australia - OceanWatch Australia - Australian Broadcasting Corporation - Australian Broadcasting Corporation - Oyster Recovery - The Age Subscribe to Positive Environment News. Positive Environment News has been compiled using publicly available information. Planet Ark does not take responsibility for the accuracy of the original information and encourages readers to check the references before using this information for their own purposes. 
Author: Elise CatterallElise is a writer, photographer, and naturopath with a passion for nature. She completed a Master of Public Health in 2017 through the University of Sydney. Her photographic work focuses on flowers and plants as a way of celebrating nature. She has been writing for Planet Ark since 2017, sharing positive environment stories, personal environmental experiences and perspectives. - Paperbark review: a sleepy wombat and a powerful story » - Everyday Enviro with Elise - New life for old things » - Packaging industry moves towards better plastic recycling outcomes » - Mexico City is turning its beltways into vertical gardens » - A sustainable future for fashion » - Trading trash for a hot cuppa »
<urn:uuid:59b9339c-5c0f-4ab2-bbd2-c6fa6d7ebe58>
3.1875
903
News (Org.)
Science & Tech.
23.727926
95,559,468
Spectroscopic Observations by Hubble reveal Sunscreen Snow on Hot Exoplanet This illustration shows the seething hot planet Kepler-13Ab that circles very close to its host star, Kepler-13A. In the background is the star's binary companion, Kepler-13B, and the third member of the multiple-star system is the orange dwarf star Kepler-13C. Credit: NASA, ESA, and G. Bacon (STScI). "In many ways, the atmospheric studies we're doing now on these gaseous 'hot Jupiter' kinds of planets are test beds for how we're going to do atmospheric studies of terrestrial, Earth-like planets," said Thomas Beatty, assistant research professor of astronomy at Penn State and the lead author of the study. "Understanding more about the atmospheres of these planets and how they work will help us when we study smaller planets that are harder to see and have more complicated features in their atmospheres." The team's results are published in the October, 2017 issue of The Astronomical Journal. Beatty's team targeted planet Kepler-13Ab because it is one of the hottest of the known exoplanets. Its dayside temperature is nearly 5,000 degrees Fahrenheit. Kepler-13Ab is so close to its parent star that it is tidally locked, so one side always faces the star while the other side is in permanent darkness. The team discovered that the sunscreen snowfall happens only on the planet's permanent nighttime side. Any visitors to this exoplanet would need to bottle up some of that sunscreen, because they won't find it on the sizzling-hot daytime side. The astronomers didn't go looking for titanium oxide specifically. Instead, their studies revealed that this giant planet's atmosphere is cooler at higher altitudes -- which was surprising because it is the opposite of what happens on other hot Jupiters. Titanium oxide in the atmospheres of other hot Jupiters absorbs light and reradiates it as heat, making the atmosphere grow warmer at higher altitudes. 
Even at their much colder temperatures, most of our solar system's gas giants also have warmer temperatures at higher altitudes. Intrigued by this surprising discovery, researchers concluded that the light-absorbing gaseous form of titanium oxide has been removed from the dayside of planet Kepler-13Ab's atmosphere. Without the titanium oxide gas to absorb incoming starlight on the daytime side, the atmospheric temperature there grows colder with increasing altitude. The astronomers suggest that powerful winds on Kepler-13Ab carry the titanium oxide gas around, condensing it into crystalline flakes that form clouds. Kepler-13Ab's strong surface gravity -- six times greater than Jupiter's -- then pulls the titanium oxide snow out of the upper atmosphere and traps it in the lower atmosphere on the nighttime side of the planet. "Understanding what sets the climates of other worlds has been one of the big puzzles of the last decade," said Jason Wright, associate professor of astronomy at Penn State, and one of the study's co-authors. "Seeing this cold-trap process in action provides us with a long sought and important piece of that puzzle." The team's observations confirm a theory from several years ago that this kind of precipitation could occur on massive, hot planets with powerful gravity. "Presumably, this precipitation process is happening on most of the observed hot Jupiters, but those gas giants all have lower surface gravities than Kepler-13Ab," Beatty explained. "The titanium oxide snow doesn't fall far enough in those atmospheres, and then it gets swept back to the hotter dayside, revaporizes, and returns to a gaseous state." The researchers used Hubble's Wide Field Camera 3 to conduct spectroscopic observations of the exoplanet's atmosphere in near-infrared light. Hubble made the observations as the distant world traveled behind its star, a transit event called a secondary eclipse. 
This type of transit yields information on the temperature of the components of the atmosphere on the exoplanet's dayside. "These observations of Kepler-13Ab are telling us how condensates and clouds form in the atmospheres of very hot Jupiters, and how gravity will affect the composition of an atmosphere," Beatty explained. "When looking at these planets, you need to know not only how hot they are, but also what their gravity is like." This article has been republished from materials provided by Penn State University. Note: material may have been edited for length and content. For further information, please contact the cited source. Thomas G. Beatty, Nikku Madhusudhan, Angelos Tsiaras, Ming Zhao, Ronald L. Gilliland, Heather A. Knutson, Avi Shporer, Jason T. Wright. Evidence for Atmospheric Cold-trap Processes in the Noninverted Emission Spectrum of Kepler-13Ab Using HST/WFC3. The Astronomical Journal, 2017; 154 (4): 158 DOI: 10.3847/1538-3881/aa899b.
<urn:uuid:433d8e24-24d8-4460-8b81-f0cb2be352e3>
3.1875
1,106
News Article
Science & Tech.
40.718004
95,559,473
Cholesterol can bind important molecules into pairs, enabling human cells to react to external signals. Researchers at Friedrich-Alexander University Erlangen-Nürnberg’s (FAU) Chair of Biotechnology have studied these processes in more detail using computer simulations. Their findings have now been published in the latest volume of the journal PLOS Computational Biology. FAU researchers Kristyna Pluhackova and Stefan Gahbauer discovered that cholesterol strongly influences signal transmission in the body. Their study focused on the chemokine receptor CXCR4, which belongs to a group known as G protein-coupled receptors (GPCRs). These receptors sense external stimuli such as light, hormones or sugar and pass these signals on to the interior of the cell, which reacts to them. CXCR4 normally supports the human immune system. However, it also plays an important role in the formation of metastases and the penetration of HIV into the cell interior. There is evidence to suggest that certain GPCRs must form pairs known as dimers in order to sense and pass on external stimuli. The FAU researchers’ simulations show that cholesterol strongly influences the formation of CXCR4 pairs, which in turn suggests that it affects their function. This means that cholesterol is required for these pairs to form correctly. In this process, cholesterol molecules selectively ‘glue’ specific regions of two CXCR4 molecules together, resulting in a complex structure that is believed to sense signals and pass them on through the cell membrane. Although the receptors can still bind to one another without sufficient cholesterol, in this case different structures are formed which most likely suppress the transmission of signals to the cell interior. These processes had not been studied in depth on the molecular level until now. The two researchers from the Computational Biology group at FAU’s Chair of Biotechnology used over 1000 computer simulations to examine them.
A better understanding of the influence of cholesterol and dimerisation on the function of GPCRs could pave the way for new medications to be developed. Prof. Dr. Rainer Böckmann, Phone: +49 9131 8525409. Dr. Susanne Langer | idw - Informationsdienst Wissenschaft
3.203125
1,047
Content Listing
Science & Tech.
38.012813
95,559,503
First scientist to create a model of an atom. Came up with the theory of atoms, which is named after the Greek word "atomos", meaning uncuttable. He used a grain of sand to represent an atom.
Created the atomic theory of matter. The theory states that atoms cannot be created or destroyed and that different elements can combine in whole-number ratios to form chemical compounds.
Discovered the electron. Came up with the 'Plum Pudding' model. In this model he thought that the atom was mostly positive and negative electrons wandered around inside it.
Discovered the nucleus. Said that positive matter was concentrated in the middle, that the atom was mostly empty space, and that electrons surrounded the positive nucleus.
Changed the orbit of the electron in his model and also created energy levels in the atom, where only a certain number of electrons could fit in one energy level. This model is still used to this day.
Created his own atomic model. Thought that the only way to find the location and energy of an electron was to calculate the probability of its being a certain distance from the nucleus.
Said that you can't know both the exact velocity (momentum) and the exact location of the electron at the same time.
Believed that electrons can act like both particles and waves, just like light. He said that waves produced by electrons contained in orbit around the nucleus set up a standing wave of certain energy, frequency and wavelength.
Discovered the neutron.
<urn:uuid:8c76a7e3-9d8b-43d3-9cdd-25fcff5a6d09>
3.5
302
Structured Data
Science & Tech.
49.448732
95,559,504
We’re surrounded by water molecules in the air. Even in the driest environments, it swirls around us as airborne vapors. For decades researchers have worked to develop ways to extract and collect water from the air, but these concepts have typically required large amounts of energy and have often proven unwieldy. However, two new projects, outlined in papers published this month in Science Advances, have refined water extraction technologies and built upon their efficiencies, demonstrating remarkable innovation and promise. The first development is a water production device designed by a team of scientists at UC Berkeley that works by using a metal-organic framework—a powdery combination of zirconium and carbon atoms—that acts as a sponge, absorbing water molecules from the air. When the device is heated by the sun, the material releases the water molecules, which condense on the walls. The liquid flows into a collection vessel. “If you expose this material to humid air, the framework will get saturated with water molecules,” says chemist Eugene Kapustin, coauthor on the paper. “And then, because the water molecules don’t stick too tightly to the interior of the framework, we can release this water by heating the powder.” Kapustin and his fellow researchers are currently improving upon the system by testing a higher-efficiency aluminum-based collection medium. They report that the material can complete at least 150 cycles of filling and drying without experiencing any degradation. The second innovative water production project builds on fog catching devices.
As Janice Kaspersen, editor of Erosion Control, recently explained, communities in Chile’s Coquimbo region are successfully harvesting fog with nets that capture moisture and pipe it to a storage container. The efficiency of these passive fog devices is relatively low, however, yielding only 1–2%. By using electric fields to charge the air, a team of researchers led by Maher Damak and Kripa K. Varanasi recently found that they could ionize water droplets and make them attracted to the mesh collector. The ionization is so effective, in fact, that droplets that initially miss the mesh collector turn back around and are drawn into it, resulting in 99% efficiency. Operating at 60 watts per square meter of mesh, the device is surprisingly efficient, especially in comparison with other conventional air-water generators that chill air and allow it to condense. The team is considering a wide range of applications, including cooling towers at power plants. These units consume massive amounts of water and release it to the atmosphere as they reject waste heat in vapor plumes. Those water molecules, the researchers explain, could be captured and collected for on-site reuse. What other applications would you suggest for these water production technologies?
<urn:uuid:8bbb81d4-6672-4ba3-a6ab-6e0cb5dfaac4>
3.734375
648
News Article
Science & Tech.
32.065131
95,559,532
MLA Citation: Bloomfield, Louis A. "Question 791." How Everything Works. 17 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=791>. You would also see special sources of radio waves, microwaves, and infrared light. Radio antennas, cellular telephones, and microwave communication dishes would be dazzlingly bright, and infrared remote controls would light up when you pressed their buttons. You would see ultraviolet light in sunlight and from the black lights in dance halls. But there wouldn't be much other ultraviolet light around to see, particularly indoors. X-rays would be rare, and you might only see them if you walked into a hospital or a dentist's office. Gamma rays would be even rarer, visible mostly in hospitals.
<urn:uuid:d1c3cd0d-39e6-4ffd-8ade-9925c609ddc8>
2.6875
169
Knowledge Article
Science & Tech.
63.988602
95,559,546
Theia is a hypothesized ancient planetary-mass object in the early Solar System that, according to the giant-impact hypothesis, collided with another planetary-mass object, Gaia (the early Earth), around 4.5 billion years ago. According to the hypothesis, Theia was an Earth trojan about the size of Mars, with a diameter of about 6,102 km (3,792 miles). Geologist Edward Young of the University of California, Los Angeles, drawing on an analysis of rocks collected by Apollo missions 12, 15, and 17, proposes that Theia collided head-on with Earth, in contrast to the previous theory that suggested a glancing impact. Models of the impact indicate that Theia's debris gathered around Earth to form the early Moon. Some scientists think the material thrown into orbit originally formed two moons that later merged to form the single moon we know today. The Theia hypothesis also explains why Earth's core is larger than would be expected for a body its size: according to the hypothesis, Theia's core and mantle mixed with Earth's. Theia is thought to have orbited at the L4 or L5 Lagrangian point of the Earth–Sun system, where it would tend to remain. In that case, it would have grown, potentially to a size comparable to Mars. Gravitational perturbations by Venus could have eventually put it onto a collision course with the Earth. Theia was named for the titaness Theia, who in Greek mythology was the mother of Selene, the goddess of the Moon, a parallel to the planet Theia's theorized collision with the early Earth that created the Moon. An alternative name, Orpheus, is also used. According to the giant-impact hypothesis, Theia orbited the Sun nearly along the orbit of the proto-Earth, staying close to one or the other of the Sun–Earth system's two more stable Lagrangian points (i.e. either L4 or L5).
Theia was eventually perturbed away from that relationship by the gravitational influence of Jupiter and/or Venus, resulting in a collision between Theia and Earth. Originally, the hypothesis supposed that Theia had struck Earth with a glancing blow and ejected many pieces of both the proto-Earth and Theia, those pieces either forming one body that became the Moon or forming two moons that eventually merged to form the Moon. Such accounts assumed that a head-on collision would have destroyed both planets, creating a short-lived second asteroid belt between the orbits of Venus and Mars. In contrast, evidence published in January 2016 suggests that the impact was indeed a head-on collision and that Theia's remains can be found in both the Earth and the Moon. From the beginning of modern astronomy, there have been at least four hypotheses for the origin of the Moon:
- that a single body split into Earth and the Moon;
- that the Moon was captured by Earth's gravity (as most of the outer planets' smaller moons were captured);
- that Earth and the Moon formed at the same time when the protoplanetary disk accreted; and
- the Theia scenario.
<urn:uuid:3a6d89d9-8e0f-4585-b614-4fe29bd430ca>
3.859375
1,308
Knowledge Article
Science & Tech.
79.001851
95,559,567
Green energy has attracted much attention in the last 10 years because of its cost-effective and Earth-friendly ways of gathering energy. This constant and reliable energy source lasts as long as the Earth does, which encourages the adoption of energy-friendly policies. The main proponent of green energy is solar, since it is the most convenient and profitable renewable energy source. Solar energy refers to the radiant energy produced by the sun. This energy is garnered through solar cells, also known as "solar chips," which are photoelectric semiconductor wafers driven by solar energy. Solar cells can be divided into single crystal silicon solar cells, polycrystalline silicon solar cells and thin film solar cells. The earliest type of solar cell is the single crystal silicon solar cell. Silicon solar cells fall into two further categories: polycrystalline silicon solar cells and amorphous silicon solar cells. As an abundant natural element, silicon is vital for the solar industry and is used in almost all types of cells. The single crystal silicon solar cell is the type currently being developed at the fastest rate. This type of cell is so widely used that astronauts have brought it into space! It requires silicon of high purity, close to 99.999%. Another type is the polycrystalline silicon solar cell, whose main advantage is to save costs. The production process of polycrystalline silicon is similar to that of single crystal silicon, but its photoelectric conversion efficiency is about 12%, which is lower than that of the single crystal silicon solar cell. While these cells are generally cheaper, they use jade, making them less convenient to produce. The final type is the thin film solar cell. These cells are produced with lower-priced materials, such as glass, plastics, ceramics, graphite, metal strips, etc., and are only a few nanometers thick.
Experts believe that within the next 5 years, thin film solar cells will be widely used in our watches, calculators, and even clothing.
<urn:uuid:2da1f884-d511-4098-b48b-317617b836cb>
3.5625
415
Knowledge Article
Science & Tech.
37.956214
95,559,574
New research by a Florida State University geography professor shows that climate change may be playing a key role in the strength and frequency of tornadoes hitting the United States. Published Wednesday in the journal Climate Dynamics, Professor James Elsner writes that though tornadoes are forming fewer days per year, they are forming at a greater density and strength than ever before. So, for example, instead of one or two forming on a given day in an area, there might be three or four occurring. Professor James Elsner's new study shows that climate change may be causing more and deadlier tornadoes. Credit: Bill Lax/Florida State University "We may be less threatened by tornadoes on a day-to-day basis, but when they do come, they come like there's no tomorrow," Elsner said. Elsner, an expert in climate and weather trends, said in the past, many researchers dismissed the impact of climate change on tornadoes because there was no distinct pattern in the number of tornado days per year. In 1971, there were 187 tornado days, but in 2013 there were only 79 days with tornadoes. But a deeper dive into the data showed more severity in the types of storms and that more were happening on a given day than in previous years. "I think it's important for forecasters and the public to know this," Elsner said. "It's a matter of making sure the public is aware that if there is a higher risk of a storm, there may actually be multiple storms in a day." The United States experiences more tornadoes than any other country, and despite advances in technology and warning systems, they still remain a hazard to residents in storm-prone areas. The 2011 tornado season, for example, had nearly 1,700 storms and killed more than 550 people. So far, in 2014, there have been 189 storms with a death toll of 43, according to the NOAA/National Weather Service Storm Prediction Center. 
One bright spot of news in the research, Elsner added, was that the geographic areas impacted most regularly by tornadoes do not appear to be growing. Elsner was joined on the paper by independent researcher Thomas H. Jagger, formerly a research associate at Florida State University, and meteorologist Svetoslava Elsner. Kathleen Haughney | EurekAlert!
3.171875
1,120
Content Listing
Science & Tech.
45.377936
95,559,581
A team of scientists of the Max Planck Institute for the Structure and Dynamics of Matter (MPSD) at the Center for Free-Electron Laser Science in Hamburg investigated optically-induced superconductivity in the alkali-doped fulleride K3C60 under high external pressures. This study allowed, on the one hand, the nature of the transient state to be uniquely assessed as a superconducting phase. In addition, it unveiled the possibility of inducing superconductivity in K3C60 at temperatures far above the -170 degrees Celsius hypothesized previously, and rather all the way up to room temperature. The paper by Cantaluppi et al. has been published in Nature Physics. Unlike ordinary metals, superconductors have the unique capability of transporting electrical currents without any loss. Nowadays, their technological application is hindered by their low operating temperature, which in the best case can reach -70 degrees Celsius. Researchers in the group of Prof. A. Cavalleri at the Max Planck Institute for the Structure and Dynamics of Matter (MPSD) in Hamburg have routinely used intense laser pulses to stimulate different classes of superconducting materials. Under specific conditions, they have detected evidence of superconductivity at unprecedentedly high temperatures, although this state persisted only briefly, just for a small fraction of a second. An important example is that of K3C60, an organic molecular solid formed by weakly-interacting C60 “buckyball” molecules (60 carbon atoms bonded in the shape of a football), which is superconducting at equilibrium below a critical temperature of -250 degrees Celsius. In 2016, Mitrano and coworkers at the MPSD discovered that tailored laser pulses, tuned to induce vibrations of the C60 molecules, can induce a short-lived, highly conducting state with properties identical to those of a superconductor up to a temperature of at least -170 degrees Celsius - far higher than the equilibrium critical temperature. In their most recent investigation, A.
Cantaluppi, M. Buzzi and colleagues at the MPSD in Hamburg went a decisive step further by monitoring the evolution of the light-induced state in K3C60 as external pressure was applied by a diamond anvil cell. At equilibrium, when pressure is applied, the C60 molecules in the potassium-doped fulleride are held closer to each other. This weakens the equilibrium superconducting state and significantly reduces the critical temperature. “Understanding whether the light-induced state found in K3C60 responds in the same way as the equilibrium superconductor is a key step towards uniquely determining the nature of this state and can provide new hints to unveil the physical mechanism behind light-induced high-temperature superconductivity”, says Alice Cantaluppi. K3C60 was systematically investigated, in the presence of photo-excitation, for pressures varying from ambient conditions up to 2.5 GPa, which corresponds to 25,000 times the atmospheric pressure. The authors measured a strong reduction in photo-conductivity with increasing pressure. Such behaviour is very different from that found in conventional metals, but it is fully compatible with the phenomenology of a superconductor, thus providing a first unambiguous interpretation of the light-induced state in K3C60 as a transient superconducting phase. “Importantly”, says Michele Buzzi, “we observed that for stronger optical excitations, we can obtain an incipient, transient superconductor at temperatures far above the -170 degrees Celsius hypothesized previously, and rather all the way to room temperature.” A universal picture able to describe the physical mechanism behind the phenomenon of light-induced high-temperature superconductivity in K3C60 is still missing, and the ultimate goal of obtaining a stable room-temperature superconductor is not around the corner yet.
Nonetheless, the novel approach introduced by the MPSD team, which combines optical excitation with the application of other external stimuli, such as external pressure or magnetic fields, shall pave the way in this direction, allowing for the generation, control, and understanding of new phenomena in complex materials. This work was supported by the ERC Synergy Grant “Frontiers in Quantum Materials’ Control” (Q-MAC), the Hamburg Centre for Ultrafast Imaging (CUI), and the priority program SFB925 of the Deutsche Forschungsgemeinschaft. The experiments were performed in the laboratories of the Center for Free-Electron Laser Science (CFEL), a joint enterprise of DESY, the Max Planck Society, and the University of Hamburg. The research was carried out in close collaboration with scientists of the University of Parma and of the ELETTRA Synchrotron Facility, Trieste, Italy. Further information available from Jenny Witt, MPSD PR officer, Tel: +49 40 8998 6593. Jenny Witt | Max-Planck-Institut für Struktur und Dynamik der Materie
<urn:uuid:1d74a580-bd5b-4c16-9481-bcae011a190b>
3
1,669
Content Listing
Science & Tech.
28.687959
95,559,596
An nth root of a number x is a number r which, when raised to the power n, yields x: rⁿ = x, where n is the degree of the root. A root of degree 2 is called a square root and a root of degree 3, a cube root. Roots of higher degree are referred to by using ordinal numbers, as in fourth root, twentieth root, etc.
- 3 is a square root of 9, since 3² = 9.
- −3 is also a square root of 9, since (−3)² = 9.
Any non-zero number, considered as a complex number, has n different "complex roots of degree n" (nth roots), including those with zero imaginary part, i.e. any real roots. The root of 0 is zero for all degrees n, since 0ⁿ = 0. In particular, if n is even and x is a positive real number, one of its nth roots is positive, one is negative, and the rest (when n > 2) are complex but not real; if n is even and x is a negative real, none of the nth roots is real. If n is odd and x is real, one nth root is real and has the same sign as x, while the other (n − 1) roots are not real. Finally, if x is not real, then none of its nth roots is real. Roots are usually written using the radical symbol or radix, with √x denoting the principal square root of x, ∛x denoting the principal cube root, ∜x denoting the principal fourth root, and so on. In the expression ⁿ√x, n is called the index, √ is the radical sign or radix, and x is called the radicand. Since the radical symbol denotes a function, it is defined to return only one result for a given argument x, which is called the principal nth root of x. Conventionally, a real root, preferably non-negative, if there is one, is designated as the principal nth root. A complementary definition of principal root (though not formally defined or universally accepted) is to say that it is always the complex root that has the least value of the argument among all roots; here "argument" is bound to [0, 2π) and means the counterclockwise angle in radians between the positive real axis and the line joining the complex number to the origin.
- −8 has three cube roots: 1 + i√3, −2 and 1 − i√3, with arguments π/3, π and 5π/3 respectively. Of these, 1 + i√3 has the least argument and hence in some contexts is considered the principal cube root, while in other contexts −2 is said to be the principal cube root because it is the only real one.
- 16 has four fourth roots: 2, 2i, −2 and −2i, having arguments 0, π/2, π and 3π/2 respectively. So 2 is always considered the unique principal fourth root, because it is a positive real, which necessarily has the least argument possible: 0.
An unresolved root, especially one using the radical symbol, is sometimes referred to as a surd or a radical. Any expression containing a radical, whether it is a square root, a cube root, or a higher root, is called a radical expression, and if it contains no transcendental functions or transcendental numbers it is called an algebraic expression. Roots are particularly important in the theory of infinite series; the root test determines the radius of convergence of a power series. Roots can also be defined for complex numbers, and the complex roots of 1 (the roots of unity) play an important role in higher mathematics. Galois theory can be used to determine which algebraic numbers can be expressed using roots and to prove the Abel–Ruffini theorem, which states that a general polynomial equation of degree five or higher cannot be solved using roots alone; this result is also known as "the insolubility of the quintic". Definition and notation An nth root of a number x, where n is a positive integer, is any of the n real or complex numbers r whose nth power is x: rⁿ = x. Every positive real number x has a single positive nth root, called the principal nth root, which is written ⁿ√x. For n equal to 2 this is called the principal square root and the n is omitted. The nth root can also be represented using exponentiation as x^(1/n). For even values of n, positive numbers also have a negative nth root, while negative numbers do not have a real nth root. For odd values of n, every negative number x has a real negative nth root.
For example, −2 has a real 5th root, but −2 does not have any real 6th roots. Every non-zero number x, real or complex, has n different complex number nth roots. (In the case x is real, this count includes any real nth roots.) The only complex root of 0 is 0. The nth roots of almost all numbers (all integers except the nth powers, and all rationals except the quotients of two nth powers) are irrational. For example, √2 is irrational. All nth roots of integers are algebraic numbers. The term surd traces back to al-Khwārizmī (c. 825), who referred to rational and irrational numbers as audible and inaudible, respectively. This later led to the Arabic word "أصم" (asamm, meaning "deaf" or "dumb") for irrational number being translated into Latin as "surdus" (meaning "deaf" or "mute"). Gerard of Cremona (c. 1150), Fibonacci (1202), and then Robert Recorde (1551) all used the term to refer to unresolved irrational roots. A square root of a number x is a number r which, when squared, becomes x: r² = x. Every positive real number has two square roots, one positive and one negative. For example, the two square roots of 25 are 5 and −5. The positive square root is also known as the principal square root, and is denoted with a radical sign: √25 = 5. Since the square of every real number is a non-negative real number, negative numbers do not have real square roots. However, every negative number has two imaginary square roots. For example, the square roots of −25 are 5i and −5i, where i represents a square root of −1. A cube root of a number x is a number r whose cube is x: r³ = x. Every real number x has exactly one real cube root, written ∛x. For example, ∛8 = 2 and ∛(−8) = −2. Every real number has two additional complex cube roots. Identities and properties Expressing the degree of an nth root in its exponent form, as in x^(1/n), makes it easier to manipulate powers and roots.
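The count of n distinct complex nth roots described above can be sketched numerically. The snippet below (the helper name `all_nth_roots` is illustrative, not from the source) builds all n roots from the polar form of x, i.e. the principal modulus times the nth roots of unity:

```python
import cmath

def all_nth_roots(x, n):
    """All n complex nth roots of a non-zero number x (illustrative helper)."""
    r = abs(x) ** (1.0 / n)       # common modulus of every nth root
    theta = cmath.phase(x)        # argument of x
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

# The three cube roots of -8: one is real (-2); the others are 1 ± i*sqrt(3).
roots = all_nth_roots(-8, 3)
```

Each returned value z satisfies z**n ≈ x up to floating-point error; the real cube root −2 appears among the three roots of −8, matching the worked example in the text.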
Every positive real number has exactly one positive real nth root, and so the rules for operations with surds involving positive radicands are straightforward within the real numbers:
- ⁿ√(ab) = ⁿ√a · ⁿ√b
- ⁿ√(a/b) = ⁿ√a / ⁿ√b
- ⁿ√(aᵐ) = (ⁿ√a)ᵐ
Subtleties can occur when taking the nth roots of negative or complex numbers. For instance:
- √(−1) · √(−1) = i · i = −1, but rather √((−1)·(−1)) = √1 = 1.
Since the rule ⁿ√(ab) = ⁿ√a · ⁿ√b strictly holds for non-negative real radicands only, its application leads to the inequality in the first step above. Simplified form of a radical expression A non-nested radical expression is said to be in simplified form if
- There is no factor of the radicand that can be written as a power greater than or equal to the index.
- There are no fractions under the radical sign.
- There are no radicals in the denominator.
For example, to write the radical expression √(32/5) in simplified form, we can proceed as follows. First, look for a perfect square under the square root sign and remove it: √(32/5) = √(16 · 2/5) = 4√(2/5). Next, there is a fraction under the radical sign, which we change as follows: 4√(2/5) = 4√2/√5. Finally, we remove the radical from the denominator as follows: 4√2/√5 = (4√2 · √5)/(√5 · √5) = 4√10/5. When there is a denominator involving surds it is always possible to find a factor to multiply both numerator and denominator by to simplify the expression. For instance using the factorization of the sum of two cubes: 1/(∛a + ∛b) = (∛a² − ∛(ab) + ∛b²)/(a + b), since (∛a + ∛b)(∛a² − ∛(ab) + ∛b²) = a + b. Simplifying radical expressions involving nested radicals can be quite difficult. It is not obvious for instance that: √(3 + 2√2) = 1 + √2. The above can be derived through: 3 + 2√2 = 1 + 2√2 + 2 = (1 + √2)². The radical or root may be represented by the infinite series: (1 + x)^(s/t) = Σ_{n=0}^{∞} [ ∏_{k=0}^{n−1} (s − kt) / (n! · tⁿ) ] · xⁿ, with |x| < 1. This expression can be derived from the binomial series. Computing principal roots The principal nth root of most numbers is irrational. For example, √2 = 1.41421356… and the fifth root of 34 is 2.024397…, where here the dots signify not only that the decimal expression does not end after a finite number of digits, but also that the digits never enter a repeating pattern, because the number is irrational. Since for positive real numbers a and b the equality ⁿ√(ab) = ⁿ√a · ⁿ√b holds, the above property can be extended to positive rational numbers.
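The restriction of the product rule to non-negative radicands can be checked numerically. A minimal sketch using Python's `math` and `cmath` modules (whose `sqrt` returns the principal square root):

```python
import math
import cmath

# The product rule holds for non-negative real radicands:
assert math.sqrt(4 * 9) == math.sqrt(4) * math.sqrt(9)   # both sides equal 6.0

# For negative radicands it fails: sqrt((-1)*(-1)) = sqrt(1) = 1,
# while the product of the principal square roots is i * i = -1.
lhs = cmath.sqrt((-1) * (-1))          # (1+0j)
rhs = cmath.sqrt(-1) * cmath.sqrt(-1)  # (-1+0j)
```

Here `cmath.sqrt(-1)` yields the principal complex root i, so the two sides differ exactly as the counterexample in the text describes.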
Let r = p/q, with p and q coprime positive integers, be a rational number. Then r has a rational nth root if and only if both p and q have integer nth roots, i.e., ⁿ√(p/q) = ⁿ√p / ⁿ√q is rational. If one or both nth roots of p or q are irrational, the quotient is irrational, too. nth root algorithm The principal nth root of a positive number A can be computed with Newton's method: start from an initial guess x₀ and iterate x_{k+1} = ((n − 1)·x_k + A / x_k^(n−1)) / n until the desired precision is reached. Depending on the application, it may be enough to use only the first Newton approximant: ⁿ√(xⁿ + y) ≈ x + y/(n·x^(n−1)). For example, to find the fifth root of 34, note that 2⁵ = 32 and thus take x = 2, n = 5 and y = 2 in the above formula. This yields ⁵√34 = ⁵√(32 + 2) ≈ 2 + 2/(5·2⁴) = 2.025. The error in the approximation is only about 0.03%. Newton's method can be modified to produce a generalized continued fraction for the nth root, which can be modified in various ways as described in that article. Digit-by-digit calculation of principal roots of decimal (base 10) numbers Building on the digit-by-digit calculation of a square root, it can be seen that the formula used there, x·(20p + x) ≤ c, or equivalently 10⁰·x² + 10¹·2·p·x ≤ c, follows a pattern involving Pascal's triangle. For the nth root, P(i) is defined as the value of element i in row n of Pascal's Triangle, so that P(i) = C(n, i), and we can rewrite the expression as y = Σ_{i=0}^{n−1} 10^i · P(i) · p^i · x^(n−i). For convenience, call the result of this expression y. Using this more general expression, any positive principal root can be computed, digit-by-digit, as follows. Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into groups of n digits, n being the degree of the root being taken, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the radicand. One digit of the root will appear above each group of digits of the original number.
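The Newton iteration and the one-step approximant can be sketched as follows (the function names are illustrative, not from the source):

```python
def nth_root_newton(A, n, x0, iterations=30):
    """Newton's method for the nth root: x_{k+1} = ((n-1)*x_k + A/x_k**(n-1)) / n."""
    x = float(x0)
    for _ in range(iterations):
        x = ((n - 1) * x + A / x ** (n - 1)) / n
    return x

def first_approximant(x, n, y):
    """One-step estimate of the nth root of x**n + y."""
    return x + y / (n * x ** (n - 1))

# Fifth root of 34: 2**5 = 32, so take x = 2, y = 2.
print(first_approximant(2, 5, 2))   # 2.025
print(nth_root_newton(34, 5, 2))    # converges to 34 ** 0.2 ≈ 2.0244
```

The one-step estimate 2.025 differs from the converged value by roughly 0.03%, matching the error quoted in the text.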
Beginning with the left-most group of digits, do the following procedure for each group:
- Starting on the left, bring down the most significant (leftmost) group of digits not yet used (if all the digits have been used, write "0" the number of times required to make a group) and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 10^n and add the digits from the next group. This will be the current value c.
- Find p and x, as follows:
- Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.)
- Determine the greatest digit x such that y ≤ c.
- Place the digit x as the next digit of the root, i.e., above the group of digits you just brought down. Thus the next p will be the old p times 10 plus x.
- Subtract y from c to form a new remainder.
- If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.

Find the square root of 152.2756.

        1  2.  3  4
      / \/ 01 52.27 56

  01        10^0·1·0^0·1^2 + 10^1·2·0^1·1^1 ≤ 1 < 10^0·1·0^0·2^2 + 10^1·2·0^1·2^1              x = 1
  01        y = 10^0·1·0^0·1^2 + 10^1·2·0^1·1^1 = 1 + 0 = 1
  00 52     10^0·1·1^0·2^2 + 10^1·2·1^1·2^1 ≤ 52 < 10^0·1·1^0·3^2 + 10^1·2·1^1·3^1             x = 2
  00 44     y = 10^0·1·1^0·2^2 + 10^1·2·1^1·2^1 = 4 + 40 = 44
  08 27     10^0·1·12^0·3^2 + 10^1·2·12^1·3^1 ≤ 827 < 10^0·1·12^0·4^2 + 10^1·2·12^1·4^1        x = 3
  07 29     y = 10^0·1·12^0·3^2 + 10^1·2·12^1·3^1 = 9 + 720 = 729
  98 56     10^0·1·123^0·4^2 + 10^1·2·123^1·4^1 ≤ 9856 < 10^0·1·123^0·5^2 + 10^1·2·123^1·5^1   x = 4
  98 56     y = 10^0·1·123^0·4^2 + 10^1·2·123^1·4^1 = 16 + 9840 = 9856
  00 00     Algorithm terminates: Answer is 12.34

Find the cube root of 4192 to the nearest hundredth.
        1  6.  1  2  4
    3/ \/ 004 192.000 000 000

  004              10^0·1·0^0·1^3 + 10^1·3·0^1·1^2 + 10^2·3·0^2·1^1 ≤ 4 < 10^0·1·0^0·2^3 + 10^1·3·0^1·2^2 + 10^2·3·0^2·2^1                 x = 1
  001              y = 10^0·1·0^0·1^3 + 10^1·3·0^1·1^2 + 10^2·3·0^2·1^1 = 1 + 0 + 0 = 1
  003 192          10^0·1·1^0·6^3 + 10^1·3·1^1·6^2 + 10^2·3·1^2·6^1 ≤ 3192 < 10^0·1·1^0·7^3 + 10^1·3·1^1·7^2 + 10^2·3·1^2·7^1              x = 6
  003 096          y = 10^0·1·1^0·6^3 + 10^1·3·1^1·6^2 + 10^2·3·1^2·6^1 = 216 + 1,080 + 1,800 = 3,096
  096 000          10^0·1·16^0·1^3 + 10^1·3·16^1·1^2 + 10^2·3·16^2·1^1 ≤ 96000 < 10^0·1·16^0·2^3 + 10^1·3·16^1·2^2 + 10^2·3·16^2·2^1       x = 1
  077 281          y = 10^0·1·16^0·1^3 + 10^1·3·16^1·1^2 + 10^2·3·16^2·1^1 = 1 + 480 + 76,800 = 77,281
  018 719 000      10^0·1·161^0·2^3 + 10^1·3·161^1·2^2 + 10^2·3·161^2·2^1 ≤ 18719000 < 10^0·1·161^0·3^3 + 10^1·3·161^1·3^2 + 10^2·3·161^2·3^1   x = 2
  015 571 928      y = 10^0·1·161^0·2^3 + 10^1·3·161^1·2^2 + 10^2·3·161^2·2^1 = 8 + 19,320 + 15,552,600 = 15,571,928
  003 147 072 000  10^0·1·1612^0·4^3 + 10^1·3·1612^1·4^2 + 10^2·3·1612^2·4^1 ≤ 3147072000 < 10^0·1·1612^0·5^3 + 10^1·3·1612^1·5^2 + 10^2·3·1612^2·5^1   x = 4

The desired precision is achieved: The cube root of 4192 is about 16.12

The principal nth root of a positive number can be computed using logarithms. Starting from the equation that defines r as an nth root of x, namely r^n = x, with x positive and therefore its principal root r also positive, one takes logarithms of both sides (any base b of the logarithm will do) to obtain

n · log_b r = log_b x,   hence   log_b r = (log_b x) / n.

The root r is recovered from this by taking the antilog:

r = b^((log_b x) / n).

(Note: That formula shows b raised to the power of the result of the division, not b multiplied by the result of the division.) For the case in which x is negative and n is odd, there is one real root r, which is also negative. This can be found by first multiplying both sides of the defining equation by −1 to obtain |r|^n = |x|, then proceeding as before to find |r|, and using r = −|r|.

The ancient Greek mathematicians knew how to use compass and straightedge to construct a length equal to the square root of a given length. In 1837 Pierre Wantzel proved that an nth root of a given length cannot be constructed if n is not a power of 2.
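The Newton recurrence and the logarithm method above can be sketched in a few lines of Python. This is a minimal illustration; the function name `nth_root_newton` and the fixed iteration count are our choices, not part of the text:

```python
import math

def nth_root_newton(a: float, n: int, x0: float, iters: int = 20) -> float:
    """Iterate x_(k+1) = ((n - 1)*x_k + a / x_k**(n - 1)) / n,
    the special case of Newton's method described above."""
    x = x0
    for _ in range(iters):
        x = ((n - 1) * x + a / x ** (n - 1)) / n
    return x

# Fifth root of 34, starting from x0 = 2 (since 2**5 = 32 is close to 34)
r = nth_root_newton(34, 5, 2.0)

# First Newton approximant only: (x**n + y)**(1/n) ~ x + y/(n*x**(n-1))
approx = 2 + 2 / (5 * 2 ** 4)       # 2.025, within about 0.03% of the root

# Logarithm method: r = b**(log_b(x)/n); with natural logs, r = exp(ln(34)/5)
r_log = math.exp(math.log(34) / 5)
```

All three values agree with 34^(1/5) ≈ 2.0244 to the accuracy each method promises.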
Every complex number other than 0 has n different nth roots.

The two square roots of a complex number are always negatives of each other. For example, the square roots of −4 are 2i and −2i, and the square roots of i are

(1 + i)/√2   and   −(1 + i)/√2.

If we express a complex number in polar form, then the square root can be obtained by taking the square root of the radius and halving the angle:

√(r·e^(iθ)) = √r · e^(iθ/2).

A principal root of a complex number may be chosen in various ways, for example by restricting the angle of the radicand to a chosen branch cut. Depending on the branch cut used, the principal square root maps to the half plane with non-negative imaginary part or to the half plane with non-negative real part; the latter convention is presupposed in mathematical software like Matlab or Scilab.

Roots of unity

The number 1 has n different nth roots in the complex plane, namely

e^(2πik/n),   k = 0, 1, …, n − 1.

These roots are evenly spaced around the unit circle in the complex plane, at angles which are multiples of 2π/n. For example, the square roots of unity are 1 and −1, and the fourth roots of unity are 1, i, −1, and −i.

Every complex number has n different nth roots in the complex plane. These are

η, η·ω, η·ω^2, …, η·ω^(n−1),

where η is a single nth root, and 1, ω, ω^2, … ω^(n−1) are the nth roots of unity. For example, the four different fourth roots of 2 are

2^(1/4), i·2^(1/4), −2^(1/4), and −i·2^(1/4).

In polar form, a single nth root may be found by the formula

(r·e^(iθ))^(1/n) = r^(1/n) · (cos(θ/n) + i·sin(θ/n)).

Here r is the magnitude (the modulus, also called the absolute value) of the number whose root is to be taken; if the number can be written as a + bi then r = √(a^2 + b^2). Also, θ is the angle formed as one pivots on the origin counterclockwise from the positive horizontal axis to a ray going from the origin to the number; it has the properties that cos θ = a/r and sin θ = b/r. Thus finding nth roots in the complex plane can be segmented into two steps. First, the magnitude of all the nth roots is the nth root of the magnitude of the original number. Second, the angle between the positive horizontal axis and a ray from the origin to one of the nth roots is θ/n, where θ is the angle defined in the same way for the number whose root is being taken.
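The two-step recipe above (nth root of the magnitude, angle divided by n, then equal steps around the circle) maps directly onto Python's `cmath` module. A small sketch; the helper name `nth_roots` is ours:

```python
import cmath

def nth_roots(z: complex, n: int) -> list[complex]:
    """All n nth roots of z: take the real nth root of the magnitude,
    divide the angle by n, then step around the circle by 2*pi/n."""
    r, theta = cmath.polar(z)
    mag = r ** (1.0 / n)
    return [cmath.rect(mag, (theta + 2 * cmath.pi * k) / n) for k in range(n)]

# The four fourth roots of 2 from the text: 2**(1/4) times 1, i, -1, -i
for w in nth_roots(2, 4):
    assert abs(w ** 4 - 2) < 1e-9

# The square roots of -4 are 2i and -2i
sq = nth_roots(-4, 2)
assert any(abs(w - 2j) < 1e-9 for w in sq)
assert any(abs(w + 2j) < 1e-9 for w in sq)
```

Each returned root differs from its neighbour by the same factor e^(2πi/n), matching the evenly spaced picture above.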
Furthermore, all n of the nth roots are at equally spaced angles from each other. If n is even, a complex number's nth roots, of which there are an even number, come in additive inverse pairs, so that if a number r1 is one of the nth roots then r2 = −r1 is another. This is because raising the latter's coefficient −1 to the nth power for even n yields 1: that is, (−r1)^n = (−1)^n × r1^n = r1^n.

It was once conjectured that all polynomial equations could be solved algebraically (that is, that all roots of a polynomial could be expressed in terms of a finite number of radicals and elementary operations). However, while this is true for third degree polynomials (cubics) and fourth degree polynomials (quartics), the Abel–Ruffini theorem (1824) shows that this is not true in general when the degree is 5 or greater. For example, the solutions of the equation

x^5 = x + 1

cannot be expressed in terms of radicals. (cf. quintic equation)

References
- Bansal, R. K. (2006). New Approach to CBSE Mathematics IX. Laxmi Publications. p. 25. ISBN 978-81-318-0013-3.
- Silver, Howard A. (1986). Algebra and Trigonometry. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-13-021270-9.
- "Definition of RADICATION". www.merriam-webster.com.
- "radication - Definition of radication in English by Oxford Dictionaries". Oxford Dictionaries - English.
- "Earliest Known Uses of Some of the Words of Mathematics". Mathematics Pages by Jeff Miller. Retrieved 2008-11-30.
- McKeague, Charles P. (2011). Elementary Algebra. p. 470.
- B. F. Caviness, R. J. Fateman, "Simplification of Radical Expressions", Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation, p. 329.
- Richard Zippel, "Simplification of Expressions Involving Radicals", Journal of Symbolic Computation 1:189–210 (1985). doi:10.1016/S0747-7171(85)80014-6.
- Wantzel, M. L.
(1837), "Recherches sur les moyens de reconnaître si un Problème de Géométrie peut se résoudre avec la règle et le compas", Journal de Mathématiques Pures et Appliquées, 1 (2): 366–372.
12 July 2018

Smart organic crystals for gears, valves and nanocircuits

Published online 29 May 2018

Scientists create a new brand of bendable, self-healing single crystals. A joint research team from the United Arab Emirates and India created organic crystals that can bend and twist in response to heat or light, and then recover from both without cracking, properties that haven't been previously demonstrated in single crystals. "With such crystals, one could think of applications that range from making devices with miniature moving parts such as gears, valves and ratchets in microfluidics to active components in soft robots, dynamic systems that do not include any metal parts," says Pance Naumov, associate professor of chemistry at New York University Abu Dhabi, United Arab Emirates, and co-author of the study. "With such crystals" applications could also extend to nanoelectrical circuits where the key components are rapidly moving organic materials, he adds. The crystals, shaped like needles and blocks, are prepared by slow evaporation from a solution of organic compounds. When heated, the crystals twisted, and when cooled they reverted to their original shape without any damage. They responded similarly to mechanical stress and to ultraviolet light. The researchers attribute such flexibility to weak intermolecular interactions between the building blocks of the crystals, allowing them to heal heat- and light-induced structural deformities on their own.

- Gupta, P. et al. All-in-one: thermally twistable, photobendable, elastically deformable and self-healable soft crystal. Angew. Chem. Int. Ed. https://doi.org/10.1002/anie.201802785 (2018)
The Importance of Immutability in Java

We all know immutability is important, but do you know why, or how to achieve it in Java? This post is a one-stop guide to the challenge.

One of the consistent criticisms of Java is that it lacks a formal immutable type. We can (and should) make a good attempt at creating immutable objects, even though they may be inherently flawed due to the nature of the JVM. An immutable object is one whose state cannot and will not change after its initial creation. Immutable objects are great, mostly because they are thread-safe (and threaded code should be avoided as much as possible). You can pass them around without fear they will be changed. I highly recommend you spend some time with a functional language like Scala to really appreciate the amazing power immutable objects can have (but then come straight back, because Scala has a whole different bag of problems). Interestingly, almost all of the new features in Java 8 (Date and Time, Optionals and Streams) have been implemented in an immutable fashion. This allows much of the performance benefit that can come from things such as parallel Streams. Immutable objects allow us to create side-effect-free functions, as seen in functional programming languages, which are the basis for creating fast, lock-free code.

How to Create an Immutable Object in Java?

The main action is to mark all fields as final. This obviously means they cannot change after initial construction. Beware, though, that you can still have a final field whose contained object is mutable. In that case it is necessary to copy the object when it is initially set in the constructor, and to provide a copy of the object when it is accessed from outside the class.
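The defensive-copying advice above is Java-specific, but the idea translates to any language. As a minimal cross-language sketch (illustrative only, not the article's Java code), here is a Python analogue in which a frozen dataclass plays the role of final fields:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int
    tags: tuple  # an immutable container, so the whole object stays immutable

    @classmethod
    def of(cls, x, y, tags):
        # Defensive copy: freeze the caller's (possibly mutable) list,
        # mirroring the copy-in-the-constructor advice above.
        return cls(x, y, tuple(tags))

tags = ["a"]
p = Point.of(1, 2, tags)
tags.append("b")             # mutating the caller's list afterwards...
assert p.tags == ("a",)      # ...does not leak into the object

try:
    p.x = 10                 # post-construction writes are rejected
except FrozenInstanceError:
    pass
```

The same two moves, fixing state at construction and copying mutable inputs, are exactly what the final-field recipe achieves in Java.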
This obviously adds complication to our code and design. Ideally you should follow the original advice from Effective Java: "Classes should be immutable unless there's a very good reason to make them mutable... If a class cannot be made immutable, limit its mutability as much as possible." If you have an object field in your class, endeavour where possible to make it immutable too. If you succeed in making all your fields immutable then you may choose to also make them public, as I do: the fields cannot be changed post-construction and are only used for reading. This makes the addition of a getter redundant. In my code bases, if I'm accessing data using the fields then I know that class is immutable. The main exception to this is when I'm writing libraries, as it's much harder to refactor if you need to introduce mutability later on. If you own all the code, though, moving from field access to method access is one shortcut in IntelliJ. You should also make your class final to defend it from subclassing. Otherwise it would be possible to create mutable subclasses, ruining your hard work. It takes a conscious effort to create immutable classes, but it should be your target wherever possible. A common anti-pattern I've seen in developers is that after creating a constructor and set of fields they generate getters/setters for all of them. Do not do this! Firstly, code should only be written if it is needed; if you have no test or production code that is accessing a field then it does not need a getter; if no other code is trying to change the field after creation then you don't need a setter. Create code on demand and do not optimize early. One of the main criticisms of immutable objects is that they can lead to a proliferation of objects and, as a result, performance issues: every state change requires creating a new object, which has the potential for a significant amount of churn.
Unless you're in a crazy high-performance environment (and you're almost certainly not), it really isn't an issue. Objects are cheap. Even Oracle thinks so: "The impact of object creation is often overestimated and can be offset by some of the efficiency associated with immutable objects. These include decreased overhead due to garbage collection, and the elimination of code needed to protect mutable objects from corruption." This is a classic case of eagerly optimizing code. If you create immutable code and it turns out you're hitting massive performance roadblocks, then refactor. For most cases you'll be fine. There are obvious exceptions; data structures tend to be much easier to implement and more performant if using mutability. As always, apply common sense to your approach, but default to immutability. Opinions expressed by DZone contributors are their own.
Whether it be studying for an exam, working for that promotion, playing your favorite sport, fixing up the messed-up car, or decluttering our homes, every day we do things. Some people accomplish them better than others; some don't. This is not necessarily because some people are just born better, or are more talented, but because there are right ways and wrong ways of doing things, and among the right ways, there is a best way, and some people are able to approximate that best way better than others. In physics, we might be familiar with the formula Work = Force multiplied by Distance multiplied by the cosine of the angle between the direction of the force exerted and the required direction. The angle theta is the deviation from the best way of doing a certain task. The smaller the deviation from the "best" way, the smaller the angle. When the angle is 0, cos 0 = 1, and the force multiplied by distance is at its "purest", so to speak. When the angle is 90 degrees, cos 90 = 0, and no matter how much force one exerts, that number is multiplied by zero, hence the work is zero. If the angle exceeds 90, the cosine is negative. In real life, we can think of F as the best effort we can put into trying to accomplish a task, so for simplicity's sake, let's assume that our F will be the same. The factor here that sets apart good from great will be how smartly one exerts the effort: how one approaches a task such that none of the effort put in is wasted. Hence, our goal here would be to get as close as possible to the ideal: the best method for doing a certain task, then focusing the best effort we can in that direction so that we attain maximum capacity. Of course there are different methods for different tasks, and that's the fun part: we get to explore…
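The cosine factor in the formula can be seen numerically. A small Python sketch (the force and distance values are arbitrary, chosen only to illustrate):

```python
import math

def work(force: float, distance: float, angle_deg: float) -> float:
    """W = F * d * cos(theta), with theta the deviation from the
    ideal direction, as in the formula above."""
    return force * distance * math.cos(math.radians(angle_deg))

# Same effort (F = 10, d = 5, arbitrary units) at different angles:
w0 = work(10, 5, 0)     # fully aligned: all 50 units of work count
w60 = work(10, 5, 60)   # cos 60 = 0.5, so half the effort is wasted: 25
w90 = work(10, 5, 90)   # perpendicular: effectively zero useful work
```

The drop from 50 to 25 to 0 is the point of the post: the same F buys less and less as the deviation angle grows.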
Fish can live in almost any aquatic environment on Earth, but when the climate changes and temperatures go up many species are pushed to the limit. The amount of time needed to adjust to new conditions could prove critical for how different species cope in the future, reveals a new study from researchers at the University of Gothenburg, published in the scientific journal Proceedings of the Royal Society B. Climate change continues apace thanks to increasing levels of greenhouse gases in the atmosphere. The greenhouse effect has led not only to an increase in average temperatures but also to more extreme weather conditions, such as major heatwaves. More than just survival In contrast to birds and mammals, fish are ectothermic, which means that their body temperature fluctuates in line with the temperature of their surroundings. Fish that live at different temperatures can generally do so because they are able to optimise their bodily functions to that particular temperature. Changes in the ambient temperature can therefore disrupt this balance. "Previous research has focused almost exclusively on whether different species will be able to survive an increase in temperature or not," says Erik Sandblom, researcher at the University of Gothenburg's Department of Biological and Environmental Sciences. "We were interested in finding out how species that survive actually manage to do so, how long it takes and the limitations they have to contend with during the acclimation period." Most vulnerable during the first few weeks In the published trial the researchers simulated a temporary heatwave and then monitored how the physiology of the shorthorn sculpin, a common marine bottom-dwelling fish species, was affected. The results show that during the first week of the heatwave the fish were severely restricted and were forced to forego high-energy processes such as eating or swimming in order to survive. 
"During the first few weeks of a sudden heatwave the fish do survive but are vulnerable to events that would otherwise pass without problem. Dealing with extra challenges such as escaping from predators or coping with disease can be fatal." Amount of time decisive The trial took eight weeks and the results show that the physiological load reduces with each passing week as the fish gradually manage to reset their bodily functions and acclimate to the new environment. The results also show that the "cost" to the fish correlates closely with how long it takes to adjust. In a future that is both warmer and more variable, it is therefore likely to be important not only to adjust to new conditions, but to do so quickly. The research was carried out with: Michael Axelsson, Albin Gräns and Henrik Seth at the University of Gothenburg. Researcher at the Department of Biological and Environmental Sciences Tel: +46 (0)31 786 4548, +46 (0)703 286 358 Henrik Axlid | idw - Informationsdienst Wissenschaft
Most of the programs that we will write for this class will be stand-alone programs. However, Java also enables a way to write Applets, which are programs that run through a browser. For this lab warm-up you can learn how to create Java applets and incorporate them into a webpage. The applet you will compile and use is the blinking text applet above. This will also get you familiar with whatever Java programming environment you choose to use, so you can just start on the first programming assignment next week with all these details worked out. For this lab you should turn in the Java applet you designed and a link to a webpage with this applet running. To turn it in, log in to the UCI Electronic Educational Environment and upload (1) your applet and (2) a link to your webpage. Java Applets are programs that can be run through a web browser. This section will guide you through two different ways of running an Applet: through a web browser or through BlueJ. In order to run any program or Applet through BlueJ, you have to first create a project and put the appropriate files in the project. This is something that is required by BlueJ, and is its way of organizing files for a particular application. Start out by opening up BlueJ through the Start menu. (All the machines in the lab already have BlueJ installed. If you would like to install BlueJ on your own computer, visit http://www.bluej.org.) Since we are creating a new project, pull down the Projects menu in BlueJ and choose New Project. Select the location where you want your project to reside and give your project a name. This will create a folder with that name and two additional files in the folder. To run a Java Applet through a web browser, you will need at least two different files: an html file and at least one file containing the Java bytecode for the applet.
The name of the html file will always have the extension .html, and the names of the files containing the Java bytecode will always have the extension .class. The .class file is created by compiling Java source code, which will be stored in files ending in .java. Copy the Blink.java file into your project folder. If we want to run the Applet through BlueJ, we need to add the new Java source file to the project. From the Edit menu in BlueJ, select Add Class From File. Select Blink.java and click Add. A box with the name of the applet (Blink) will appear in your window. Click on the Compile button to compile the project. This will produce a file called Blink.class in the project folder, which is the Java bytecode for the applet. To run the applet, right-click on the box with the Blink applet and select Run Applet. You can run the applet through the applet viewer or the web browser. Experiment with both. For now, select one and click OK. When you run the Applet, BlueJ automatically produces an html file that points to the bytecode file for that applet. If you right-click on the Blink applet and select Open Editor you can edit the Java source code. You will need to do this when you develop your own Java programs. Go in and edit the text that is blinking in the applet. Go back to the folder containing the files for your project. If you double-click on the Blink.html file, your browser will open and display your new applet. If you open the html file with a text editor, you can see and edit the html file. Note the line that points to the Blink.class file. This tells the browser to execute this bytecode.
Cleaner wrasses perform an important function on reefs by removing parasites from larger fishes. Many types of fish seek out the services of cleaner wrasses, and in some instances this may lead to the establishment of "cleaning stations", where larger fish will queue up to be cleaned by several cleaner wrasses. Many of these cleaning stations are permanent, and most fish will return to the same place to be cleaned. Cleaner wrasses advertise their availability by swimming in a quick jerking motion which is recognized by potential clients. In this symbiotic relationship of trust and mutual benefit, the small wrasses will even enter the mouth and gill cavities of predatory fishes, just as this wrasse has done with a large honeycomb moray eel.
Signals From Mars Strings (SiPjAjk) = S7P5A51 Base Sequence = 12735 String Sequence = 12735 - 5 - 51

A station X on Earth is 8 × 10^7 km away from a surface probe on Mars. The probe transmits radio waves to X at a frequency of 6 × 10^5 per second. How long does it take the radio waves to get to X?

(a) S7P5A51 (Physical Change - Duration). Pj Problem of Interest is of type change. Problems of speed, velocity, acceleration and time (duration) are change problems.

Radio waves are a type of electromagnetic radiation, so their speed is the same as the speed of light, which is 3 × 10^8 m/sec.

Time = distance/speed.

Distance covered by radio waves = 8 × 10^7 × 10^3 m = 8 × 10^10 m.

So, the time it takes the radio waves to get to station X on Earth is:

Time = (8 × 10^10 m) / (3 × 10^8 m/sec) ≈ 2.7 × 10^2 secs = 270 secs.
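The arithmetic can be double-checked in two lines of Python (values taken from the problem statement; the 270 s in the answer is this value rounded to two significant figures):

```python
# Values from the problem statement: distance 8 x 10^7 km, and radio
# waves travel at the speed of light, 3 x 10^8 m/s.
distance_m = 8e7 * 1e3        # convert km to m: 8 x 10^10 m
c = 3e8                       # m/s

travel_time = distance_m / c  # about 266.7 s, i.e. roughly 2.7 x 10^2 s
```

Note the transmission frequency plays no role in the travel time; only the distance and the wave speed matter.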
Because nature doesn’t always behave the same in a lab, test tube or computer program as it does in the real world, scientists and engineers have come up with ideas that didn’t turn out as expected. DDT was considered a panacea for a range of insect pest issues, from controlling disease to helping farmers. But we didn’t understand bioaccumulation back then—toxins concentrating up the food chain, risking the health and survival of animals from birds to humans. Chlorofluorocarbons, or CFCs, seemed so terrific we put them in everything from aerosol cans to refrigerators. Then we learned they damage the ozone layer, which protects us from harmful solar radiation. These unintended consequences come partly from our tendency to view things in isolation, without understanding how all nature is interconnected. We’re now facing the most serious unintended consequence ever: climate change from burning fossil fuels. Some proposed solutions may also result in unforeseen outcomes. Oil, gas and coal are miraculous substances—energy absorbed from the sun by plants and animals hundreds of millions of years ago, retained after they died and concentrated as the decaying life became buried deeper into the Earth. Burning them to harness and release this energy opened up possibilities unimaginable to our ancestors. We could create machines and technologies to reduce our toil, heat and light our homes, build modern cities for growing populations and provide accessible transport for greater mobility and freedom. And because the stuff seemed so plentiful and easy to obtain, we could build vehicles and roads for everyone—big cars that used lots of gas—so that enormous profits would fuel prosperous, consumer-driven societies. We knew fairly early that pollution affected human health, but that didn’t seem insurmountable. We just needed to improve fuel efficiency and create better pollution-control standards. 
That reduced rather than eliminated the problem and only partly addressed an issue that appears to have caught us off-guard: the limited availability of these fuels. But the trade-offs seemed worthwhile. Then, for the past few decades, a catastrophic consequence of our profligate use of fossil fuels has loomed. Burning them has released excessive amounts of carbon dioxide into the atmosphere, creating a thick, heat-trapping blanket. Along with our destruction of natural carbon-storing environments, such as forests and wetlands, this has steadily increased global average temperatures, causing climate change. We’re now faced with ever-increasing extreme weather-related events and phenomena such as ocean acidification, which affects myriad marine life, from shellfish to corals to plankton. The latter produce oxygen and are at the very foundation of the food chain. Had we addressed the problem from the outset, we could have solutions in place. We could have found ways to burn less fossil fuel without massively disrupting our economies and ways of life. But we’ve become addicted to the lavish benefits that fossil fuels have offered, and the wealth and power they’ve provided to industrialists and governments. And so there’s been a concerted effort to stall or avoid corrective action, with industry paying front groups, “experts” and governments to deny or downplay the problem. Now that climate change has become undeniable, with consequences getting worse daily, many experts are eyeing solutions. Some are touting massive technological fixes, such as dumping large amounts of iron filings into the seas to facilitate carbon absorption, pumping nutrient-rich cold waters from the ocean depths to the surface, building giant reflectors to bounce sunlight back into space and irrigating vast deserts. But we’re still running up against those pesky unintended consequences. 
Scientists at the Helmholtz Centre for Ocean Research in Kiel, Germany, studied five geoengineering schemes and concluded they’re “either relatively ineffective with limited warming reductions, or they have potentially severe side effects and cannot be stopped without causing rapid climate change.” That’s partly because we don’t fully understand climate and weather systems and their interactions. That doesn’t mean we should rule out geoengineering. Climate change is so serious that we’ll need to marshal everything we have to confront it, and some methods appear to be more benign than others. But geoengineering isn’t the solution. And it’s no excuse to go on wastefully burning fossil fuels. We must conserve energy and find ways to quickly shift to cleaner sources.
Why do volcanoes occur at constructive plate boundaries?

A volcano is an opening in the Earth's crust. It allows hot magma, ash and gases to escape from below the surface. The main features of a volcano are:

- Magma chamber
- Lava: magma, once it reaches the surface
- Crater
- Vent
- Secondary cones
- Ash, steam and gas
- Volcanic bombs

There are two types of volcano, composite and shield. Composite volcanoes are steep-sided and cone-shaped, made up of layers of ash and lava and containing sticky lava which doesn't flow very far. Mount Etna in Italy is a composite volcano. Shield volcanoes have gently sloping sides and runny lava that covers a wide area. Gases escape very easily from shield volcanoes. Mauna Loa in Hawaii is a shield volcano.

Destructive plate margins are where a plate of higher density is subducted, or forced underneath, another, less dense plate. This occurs due to convection currents in the mantle. These are caused by radioactive decay, which produces heat, causing the hotter, less dense material to rise and then sink when it cools down again (like the heating of a room by a radiator).

Volcanoes

Melting of the mantle occurs above the subducted plate. The magma (melt) is less dense, so it rises up towards the crust. The magma pushes through the crust and erupts at the Earth's surface as a volcano. This can be very explosive (for example, the Mount St Helens eruption, May 1980).

Earthquakes

As the plate is being subducted, it can sometimes become stuck on the overriding plate. The subducting plate is still trying to force its way down into the mantle, so there is a build-up of force. This can also drag the overriding plate down with it (a good way to visualise this is to use your hands as the plates and see what happens when one gets 'stuck'). If enough force builds up, this causes the plates to jolt suddenly past each other, with the overriding plate pinging back up. This releases energy, which is felt as an earthquake.
What is Angular Size?

You may have heard the terms arc-minute and arc-second mentioned on TV, in magazines or on other websites. These are the units of measurement for angular size used in modern astronomy. Angular size is used to describe the dimensions of an object as it appears in the sky.

Angular size is measured in arc-minutes and arc-seconds, which are used to represent angles on a sphere. An arc-second is 1/3600th of one degree, and a radian is 180/π degrees, so one radian equals 3,600 × 180/π arc-seconds, which is about 206,265 arc-seconds. This is useful to astronomers when working out distances between objects or calculating the magnifications required for observations. Angular size is also used as a measure of an optical instrument's resolving power: essentially, how small an object can be seen.

Angular size refers to the object's apparent size as seen from an observer on Earth. For example, the Moon has an angular size of approximately 30 arc-minutes. The angular size of an object is determined by its actual size and its distance from the observer. For an object of fixed size, the larger the distance, the smaller the angular size. For objects at a fixed distance, the larger the actual size of an object, the larger its angular size.
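Since these conversions are fixed definitions, they are easy to verify numerically. A minimal sketch in Python (the helper names are my own, not from this site):

```python
# Angular unit conversions: 1 degree = 3600 arc-seconds, and
# 1 radian = 180/pi degrees, so 1 radian is about 206,265 arc-seconds.
import math

ARCSEC_PER_RAD = 3600.0 * 180.0 / math.pi  # ~206,265

def radians_to_arcsec(theta_rad):
    """Convert an angle from radians to arc-seconds."""
    return theta_rad * ARCSEC_PER_RAD

def radians_to_arcmin(theta_rad):
    """Convert an angle from radians to arc-minutes."""
    return radians_to_arcsec(theta_rad) / 60.0

print(round(ARCSEC_PER_RAD))  # 206265
```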
Many deep sky objects such as galaxies and nebulae appear non-circular and are thus typically given two measures of diameter: Major Diameter and Minor Diameter. For example, the Small Magellanic Cloud has a visual apparent diameter of 5° 20' x 3° 5'.

Experiment: Calculate the Angular Size of the Moon

Let's try to calculate the angular size of the Moon. All you need is a tape measure and a ruler. Hold the ruler at arm's length and measure the diameter of the Moon; you may have to wait for a full moon to be able to measure it accurately. You should get a reading of between 5mm and 8mm depending on the season and the length of your arm. Note down your measured apparent size of the Moon. Next, you need to measure the distance between your hand and your eye (you may need help with this one) and note this down as well. Both measurements need to be in the same units, ideally millimetres. Now, all we have to do is some simple maths.

Equation 12 - Angular Size Calculation:

θ = Sap / l

This will give you the angular size of the object in radians, where Sap is the apparent size you measured and l is the distance between your hand and your eye. You can then use the following formula to calculate its actual size:

Equation 13 - Diameter given Angular Size and Distance:

D = θ × d

where D is the object's actual diameter and d is its distance from the observer. The Moon's angular size can be converted from radians to arc-seconds by multiplying by 206,265. Arc-minutes can be found by dividing by 60. Your answer should be between 30 and 35 arc-minutes in diameter.

Angular Size of the Sun

Do you think that the angular size of the Sun is greater or smaller than the angular size of the Moon? Do not look at the Sun! Blindness or visual impairment will be the result. The Sun and the Moon appear to us the same size - almost exactly the same (hence solar eclipses) - but we know that the Sun is many, many times bigger than the Moon. The Sun is 400 times bigger than the Moon; however, the Sun is also 400 times further away from us, so the result is that the Sun and the Moon have the same angular size.
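The ruler experiment above can be worked through numerically. The measured values below (7 mm apparent size at 700 mm from the eye) are illustrative assumptions, not measurements from the article:

```python
# A sketch of the ruler experiment: Equation 12 gives the angular size
# in radians, Equation 13 converts it to an actual diameter.
import math

ARCSEC_PER_RAD = 3600 * 180 / math.pi  # ~206,265

apparent_size_mm = 7.0    # assumed diameter read off the ruler
eye_to_hand_mm = 700.0    # assumed eye-to-ruler distance

# Equation 12: theta = Sap / l  (radians)
theta_rad = apparent_size_mm / eye_to_hand_mm
theta_arcmin = theta_rad * ARCSEC_PER_RAD / 60
print(f"{theta_arcmin:.1f} arc-minutes")  # 34.4, inside the expected 30-35 range

# Equation 13: D = theta * d, using the Moon's average distance of 384,400 km
moon_distance_km = 384_400
diameter_km = theta_rad * moon_distance_km
print(f"{diameter_km:.0f} km")  # crude estimate; the true value is about 3,474 km
```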
Last updated on: Tuesday 16th January 2018
Magnetosphere of Saturn

Aurorae on the south pole of Saturn as viewed by Hubble

- Radius of Saturn: 60,330 km
- Equatorial field strength: 21 μT (0.21 G)
- Solar wind IMF strength: 0.5 nT
- Bow shock distance: ~27 Rs
- Magnetopause distance: ~22 Rs
- Main ions: O+, H2O+, OH+, H3O+, HO2+, O2+ and H+
- Mass loading rate: ~100 kg/s
- Maximum plasma density: 50–100 cm−3
- Spectrum: radio, near-IR and UV
- Total power: 0.5 TW
- Radio emission frequencies: 10–1300 kHz

The magnetosphere of Saturn is the cavity created in the flow of the solar wind by the planet's internally generated magnetic field. Discovered in 1979 by the Pioneer 11 spacecraft, Saturn's magnetosphere is the second largest of any planet in the Solar System after Jupiter's. The magnetopause, the boundary between Saturn's magnetosphere and the solar wind, is located at a distance of about 20 Saturn radii from the planet's center, while its magnetotail stretches hundreds of Saturn radii behind it.

Saturn's magnetosphere is filled with plasmas originating from both the planet and its moons. The main source is the small moon Enceladus, which ejects as much as 1,000 kg/s of water vapor from the geysers on its south pole, a portion of which is ionized and forced to co-rotate with Saturn's magnetic field. This loads the field with as much as 100 kg of water-group ions per second. This plasma gradually moves out from the inner magnetosphere via the interchange instability mechanism and then escapes through the magnetotail.

The interaction between Saturn's magnetosphere and the solar wind generates bright oval aurorae around the planet's poles, observed in visible, infrared and ultraviolet light. The aurorae are related to the powerful saturnian kilometric radiation (SKR), which spans the frequency interval between 100 kHz and 1300 kHz and was once thought to modulate with a period equal to the planet's rotation.
However, later measurements showed that the periodicity of the SKR's modulation varies by as much as 1%, and so probably does not exactly coincide with Saturn's true rotational period, which as of 2010 remains unknown. Inside the magnetosphere there are radiation belts, which house particles with energies as high as tens of megaelectronvolts. The energetic particles have a significant influence on the surfaces of the inner icy moons of Saturn.

In 1980–1981 the magnetosphere of Saturn was studied by the Voyager spacecraft. Up until September 2017 it was a subject of ongoing investigation by the Cassini mission, which arrived in 2004 and spent over 13 years observing the planet.

Discovery

Immediately after the discovery of Jupiter's decametric radio emissions in 1955, attempts were made to detect a similar emission from Saturn, but with inconclusive results. The first evidence that Saturn might have an internally generated magnetic field came in 1974, with the detection of weak radio emissions from the planet at the frequency of about 1 MHz. These medium wave emissions were modulated with a period of about 10 h 30 min, which was interpreted as Saturn's rotation period. Nevertheless, the evidence available in the 1970s was too inconclusive, and some scientists thought that Saturn might lack a magnetic field altogether, while others even speculated that the planet could lie beyond the heliopause. The first definite detection of the saturnian magnetic field was made only on September 1, 1979, when it was passed through by the Pioneer 11 spacecraft, which measured its magnetic field strength directly. Like Jupiter's magnetic field, Saturn's is created by a fluid dynamo within a layer of circulating liquid metallic hydrogen in its outer core.
Like Earth's, Saturn's magnetic field is mostly a dipole, with north and south poles at the ends of a single magnetic axis. On Saturn, as on Jupiter, the north magnetic pole is located in the northern hemisphere, and the south magnetic pole lies in the southern hemisphere, so that magnetic field lines point away from the north pole and towards the south pole. This is reversed compared to the Earth, where the north magnetic pole lies in the southern hemisphere. Saturn's magnetic field also has quadrupole, octupole and higher components, though they are much weaker than the dipole. The magnetic field strength at Saturn's equator is about 21 μT (0.21 G), which corresponds to a dipole magnetic moment of about 4.6 × 10^18 T·m³. This makes Saturn's magnetic field slightly weaker than Earth's; however, its magnetic moment is about 580 times larger. Saturn's magnetic dipole is strictly aligned with its rotational axis, meaning that the field, uniquely, is highly axisymmetric. The dipole is slightly shifted (by 0.037 Rs) along Saturn's rotational axis towards the north pole.

Size and shape

Saturn's internal magnetic field deflects the solar wind, a stream of ionized particles emitted by the Sun, away from its surface, preventing it from interacting directly with its atmosphere, and instead creates its own region, called a magnetosphere, composed of a plasma very different from that of the solar wind. The magnetosphere of Saturn is the second largest magnetosphere in the Solar System after that of Jupiter. As with Earth's magnetosphere, the boundary separating the solar wind's plasma from that within Saturn's magnetosphere is called the magnetopause. The magnetopause distance from the planet's center at the subsolar point[note 1] varies widely from 16 to 27 Rs (Rs = 60,330 km is the equatorial radius of Saturn). The magnetopause's position depends on the pressure exerted by the solar wind, which in turn depends on solar activity.
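The dipole moment quoted above follows directly from the equatorial field and radius, using the convention implied by the article's T·m³ units (moment = B_eq × R³). The Earth values below are approximate reference numbers assumed for the comparison, not figures from the article:

```python
# Consistency check of the quoted dipole moment and the ~580x Earth ratio.
R_SATURN_M = 60_330e3    # equatorial radius, 60,330 km (from the article)
B_SATURN_T = 21e-6       # equatorial field strength, 21 uT (from the article)

m_saturn = B_SATURN_T * R_SATURN_M**3
print(f"{m_saturn:.2e} T*m^3")  # ~4.6e+18, matching the quoted value

R_EARTH_M = 6_371e3      # mean Earth radius (approximate)
B_EARTH_T = 31e-6        # Earth's equatorial surface field (approximate)

m_earth = B_EARTH_T * R_EARTH_M**3
print(f"ratio: {m_saturn / m_earth:.0f}")  # close to the ~580x quoted
```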
The average magnetopause standoff distance is about 22 Rs. In front of the magnetopause (at a distance of about 27 Rs from the planet) lies the bow shock, a wake-like disturbance in the solar wind caused by its collision with the magnetosphere. The region between the bow shock and the magnetopause is called the magnetosheath. On the opposite side of the planet, the solar wind stretches Saturn's magnetic field lines into a long, trailing magnetotail, which consists of two lobes, with the magnetic field in the northern lobe pointing away from Saturn and the southern pointing towards it. The lobes are separated by a thin layer of plasma called the tail current sheet. Like Earth's, Saturn's tail is a channel through which solar plasma enters the inner regions of the magnetosphere. Similarly to Jupiter, the tail is also the conduit through which plasma of internal magnetospheric origin leaves the magnetosphere. The plasma moving from the tail to the inner magnetosphere is heated and forms a number of radiation belts.

Saturn's magnetosphere is often divided into four regions. The innermost region, co-located with Saturn's planetary rings inside approximately 3 Rs, has a strictly dipolar magnetic field. It is largely devoid of plasma, which is absorbed by ring particles, although the radiation belts of Saturn are located in this innermost region just inside and outside the rings. The second region, between 3 and 6 Rs, contains the cold plasma torus and is called the inner magnetosphere. It contains the densest plasma in the saturnian system. The plasma in the torus originates from the inner icy moons and particularly from Enceladus. The magnetic field in this region is also mostly dipolar. The third region lies between 6 and 12–14 Rs and is called the dynamic and extended plasma sheet. The magnetic field in this region is stretched and non-dipolar, whereas the plasma is confined to a thin equatorial plasma sheet.
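The standoff distances discussed above can be estimated with a standard pressure-balance argument (a textbook sketch, not the article's own calculation): the magnetopause sits roughly where the dipole's magnetic pressure equals the solar wind's dynamic pressure. The solar wind pressure below is an assumed typical value at Saturn's orbit, for illustration only:

```python
# Order-of-magnitude magnetopause standoff from pressure balance:
#   B(r)^2 / (2*mu0) = P_sw, with dipole field B(r) = B_eq * (R/r)^3
# => r/R = (B_eq^2 / (2 * mu0 * P_sw))^(1/6)
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
B_EQ = 21e-6               # Saturn's equatorial field, T (from the article)
P_SW = 2e-11               # assumed solar wind dynamic pressure at ~9.5 AU, Pa

standoff = (B_EQ**2 / (2 * MU0 * P_SW)) ** (1 / 6)
print(f"~{standoff:.0f} Rs")  # ~14 Rs
```

The bare-dipole estimate comes out somewhat low; in practice, currents flowing on the magnetopause roughly double the field there, pushing the standoff toward the observed 16–27 Rs range.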
The fourth, outermost region is located beyond 15 Rs at high latitudes and continues up to the magnetopause boundary. It is characterized by a low plasma density and a variable, non-dipolar magnetic field strongly influenced by the solar wind.

In the outer parts of Saturn's magnetosphere, beyond approximately 15–20 Rs, the magnetic field near the equatorial plane is highly stretched and forms a disk-like structure called the magnetodisk. The disk continues up to the magnetopause on the dayside and transitions into the magnetotail on the nightside. Near the dayside it can be absent when the magnetosphere is compressed by the solar wind, which usually happens when the magnetopause distance is smaller than 23 Rs. On the nightside and flanks of the magnetosphere the magnetodisk is always present. Saturn's magnetodisk is a much smaller analog of the Jovian magnetodisk.

The plasma sheet in Saturn's magnetosphere has a bowl-like shape not found in any other known magnetosphere. When Cassini arrived in 2004, it was winter in the northern hemisphere. The measurements of the magnetic field and plasma density revealed that the plasma sheet was warped and lay to the north of the equatorial plane, looking like a giant bowl. Such a shape was unexpected.

The processes driving Saturn's magnetosphere are similar to those driving Earth's and Jupiter's. Just as Jupiter's magnetosphere is dominated by plasma co-rotation and mass-loading from Io, so Saturn's magnetosphere is dominated by plasma co-rotation and mass-loading from Enceladus. However, Saturn's magnetosphere is much smaller in size, while its inner region contains too little plasma to seriously distend it and create a large magnetodisk.[note 2] This means that it is much more strongly influenced by the solar wind, and that, like Earth's magnetic field, its dynamics are affected by reconnection with the wind similar to the Dungey cycle.
Another distinguishing feature of Saturn's magnetosphere is the high abundance of neutral gas around the planet. As revealed by ultraviolet observations of Cassini, the planet is enshrouded in a large cloud of hydrogen, water vapor and their dissociative products like hydroxyl, extending as far as 45 Rs from Saturn. In the inner magnetosphere the ratio of neutrals to ions is around 60, and it increases in the outer magnetosphere, which means that the entire magnetospheric volume is filled with relatively dense, weakly ionized gas. This is different, for instance, from Jupiter or Earth, where ions dominate over neutral gas, and has consequences for the magnetospheric dynamics.

Sources and transport of plasma

The plasma composition in Saturn's inner magnetosphere is dominated by the water-group ions: O+, H2O+, OH+ and others, the hydronium ion (H3O+), HO2+ and O2+, although protons and nitrogen ions (N+) are also present. The main source of water is Enceladus, which releases 300–600 kg/s of water vapor from the geysers near its south pole. The released water and hydroxyl (OH) radicals (a product of water's dissociation) form a rather thick torus around the moon's orbit at 4 Rs, with densities up to 10,000 molecules per cubic centimeter. At least 100 kg/s of this water is eventually ionized and added to the co-rotating magnetospheric plasma. Additional sources of water-group ions are Saturn's rings and other icy moons. The Cassini spacecraft also observed small amounts of N+ ions in the inner magnetosphere, which probably originate from Enceladus as well.

In the outer parts of the magnetosphere the dominant ions are protons, which originate either from the solar wind or from Saturn's ionosphere. Titan, which orbits close to the magnetopause boundary at 20 Rs, is not a significant source of plasma. The relatively cold plasma in the innermost region of Saturn's magnetosphere, inside 3 Rs (near the rings), consists mainly of O+ and O2+ ions.
These ions, together with electrons, form an ionosphere surrounding the saturnian rings.

For both Jupiter and Saturn, transport of plasma from the inner to the outer parts of the magnetosphere is thought to be related to the interchange instability. In the case of Saturn, magnetic flux tubes loaded with cold, water-rich plasma interchange with flux tubes filled with hot plasma arriving from the outer magnetosphere. The instability is driven by the centrifugal force exerted by the plasma on the magnetic field. The cold plasma is eventually removed from the magnetosphere by plasmoids formed when the magnetic field reconnects in the magnetotail. The plasmoids move down the tail and escape from the magnetosphere. The reconnection or substorm process is thought to be under the control of the solar wind and Saturn's largest moon Titan, which orbits near the outer boundary of the magnetosphere.

In the magnetodisk region, beyond 6 Rs, the plasma within the co-rotating sheet exerts a significant centrifugal force on the magnetic field, causing it to stretch.[note 3] This interaction creates a current in the equatorial plane, flowing azimuthally with rotation and extending as far as 20 Rs from the planet. The total strength of this current varies from 8 to 17 MA. The ring current in the saturnian magnetosphere is highly variable and depends on the solar wind pressure, being stronger when the pressure is weaker. The magnetic moment associated with this current slightly (by about 10 nT) depresses the magnetic field in the inner magnetosphere, although it increases the total magnetic moment of the planet and causes the magnetosphere to become larger.

Saturn has bright polar aurorae, which have been observed in ultraviolet, visible and near-infrared light. The aurorae usually look like bright continuous circles (ovals) surrounding the poles of the planet.
The latitude of the auroral ovals varies in the range 70–80°; the average position is 75 ± 1° for the southern aurora, while the northern aurora is closer to the pole by about 1.5°.[note 4] From time to time either aurora can assume a spiral shape instead of an oval. In this case it begins near midnight at a latitude of around 80°, then its latitude decreases to as low as 70° as it continues into the dawn and day sectors (counterclockwise). In the dusk sector the auroral latitude increases again, although when it returns to the night sector it still has a relatively low latitude and does not connect to the brighter dawn part.

Unlike Jupiter's, Saturn's main auroral ovals are not related to the breakdown of the co-rotation of the plasma in the outer parts of the planet's magnetosphere. The aurorae on Saturn are thought to be connected to the reconnection of the magnetic field under the influence of the solar wind (Dungey cycle), which drives an upward current (about 10 million amperes) from the ionosphere and leads to the acceleration and precipitation of energetic (1–10 keV) electrons into the polar thermosphere of Saturn. The saturnian aurorae are more similar to those of the Earth, where they are also solar wind driven. The ovals themselves correspond to the boundaries between open and closed magnetic field lines (the so-called polar caps), which are thought to reside at a distance of 10–15° from the poles.

The aurorae of Saturn are highly variable. Their location and brightness strongly depend on the solar wind pressure: the aurorae become brighter and move closer to the poles when the solar wind pressure increases. The bright auroral features are observed to rotate at 60–75% of Saturn's angular speed. From time to time bright features appear in the dawn sector of the main oval or inside it.
The average total power emitted by the aurorae is about 50 GW in the far ultraviolet (80–170 nm) and 150–300 GW in the near-infrared (3–4 μm, H3+ emissions) parts of the spectrum.

Saturn kilometric radiation

Saturn is the source of rather strong low-frequency radio emissions called Saturn kilometric radiation (SKR). The frequency of the SKR lies in the range 10–1300 kHz (a wavelength of a few kilometers), with the maximum around 400 kHz. The power of these emissions is strongly modulated by the rotation of the planet and is correlated with changes in the solar wind pressure. For instance, when Saturn was immersed in the giant magnetotail of Jupiter during the Voyager 2 flyby in 1981, the SKR power decreased greatly or even ceased completely. The kilometric radiation is thought to be generated by the cyclotron maser instability of electrons moving along magnetic field lines related to the auroral regions of Saturn. Thus the SKR is related to the aurorae around the poles of the planet. The radiation itself comprises spectrally diffuse emissions as well as narrowband tones with bandwidths as narrow as 200 Hz. In the frequency–time plane, arc-like features are often observed, much as in the case of the Jovian kilometric radiation. The total power of the SKR is around 1 GW.

The modulation of the radio emissions by planetary rotation is traditionally used to determine the rotation period of the interiors of fluid giant planets. In the case of Saturn, however, this appears to be impossible, as the period varies on a timescale of tens of years. In 1980–1981 the periodicity in the radio emissions as measured by Voyager 1 and 2 was 10 h 39 min 24 ± 7 s, which was then adopted as the rotational period of Saturn. Scientists were surprised when Galileo and then Cassini returned a different value: 10 h 45 min 45 ± 36 s. Further observation indicated that the modulation period changes by as much as 1% on a characteristic timescale of 20–30 days, with an additional long-term trend.
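The "as much as 1%" figure can be checked directly against the two quoted modulation periods:

```python
# Arithmetic behind the ~1% SKR period drift quoted in the text.
def hms_to_seconds(h, m, s):
    """Convert an hours/minutes/seconds period to total seconds."""
    return h * 3600 + m * 60 + s

voyager = hms_to_seconds(10, 39, 24)   # Voyager 1/2 measurement, 1980-1981
cassini = hms_to_seconds(10, 45, 45)   # Galileo/Cassini-era measurement

drift_percent = 100 * (cassini - voyager) / voyager
print(f"{drift_percent:.2f}%")  # 0.99%, i.e. about 1%
```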
There is a correlation between the period and the solar wind speed; however, the causes of this change remain a mystery. One reason may be that Saturn's perfectly axisymmetric magnetic field fails to impose strict corotation on the magnetospheric plasma, making it slip relative to the planet. The lack of a precise correlation between the variation period of the SKR and the planetary rotation makes it all but impossible to determine the true rotational period of Saturn.

Saturn has relatively weak radiation belts, because energetic particles are absorbed by the moons and particulate material orbiting the planet. The densest (main) radiation belt lies between the inner edge of the Enceladus gas torus at 3.5 Rs and the outer edge of the A Ring at 2.3 Rs. It contains protons and relativistic electrons with energies from hundreds of kiloelectronvolts (keV) to as high as tens of megaelectronvolts (MeV), and possibly other ions. Beyond 3.5 Rs the energetic particles are absorbed by the neutral gas and their numbers drop, although less energetic particles with energies in the range of hundreds of keV appear again beyond 6 Rs; these are the same particles that contribute to the ring current.[note 3] The electrons in the main belt probably originate in the outer magnetosphere or the solar wind, from which they are transported by diffusion and then adiabatically heated. However, the energetic protons consist of two populations of particles. The first population, with energies of less than about 10 MeV, has the same origin as the electrons, while the second, with the maximum flux near 20 MeV, results from the interaction of cosmic rays with solid material present in the saturnian system (the so-called cosmic ray albedo neutron decay process, CRAND). The main radiation belt of Saturn is strongly influenced by interplanetary solar wind disturbances.
The innermost region of the magnetosphere near the rings is generally devoid of energetic ions and electrons because they are absorbed by ring particles. Saturn, however, has a second radiation belt, discovered by Cassini in 2004 and located just inside the innermost D Ring. This belt probably consists of energetic charged particles formed via the CRAND process, or of ionized energetic neutral atoms coming from the main radiation belt. The saturnian radiation belts are generally much weaker than those of Jupiter and do not emit much microwave radiation (with frequencies of a few gigahertz). Estimates show that their decimetric radio emissions (DIM) would be impossible to detect from the Earth. Nevertheless, the high-energy particles cause weathering of the surfaces of the icy moons and sputter water, water products and oxygen from them.

Interaction with rings and moons

The abundant population of solid bodies orbiting Saturn, including moons as well as ring particles, exerts a strong influence on the magnetosphere of Saturn. The plasma in the magnetosphere co-rotates with the planet, continuously impinging on the trailing hemispheres of slowly moving moons. While ring particles and the majority of moons only passively absorb plasma and energetic charged particles, three moons – Enceladus, Dione and Titan – are significant sources of new plasma. The absorption of energetic electrons and ions reveals itself in noticeable gaps in the radiation belts of Saturn near the moons' orbits, while the dense rings of Saturn completely eliminate all energetic electrons and ions closer than 2.2 Rs, creating a low-radiation zone in the vicinity of the planet. The absorption of the co-rotating plasma by a moon disturbs the magnetic field in its empty wake: the field is pulled towards the moon, creating a region of stronger magnetic field in the near wake. The three moons mentioned above add new plasma into the magnetosphere.
By far the strongest source is Enceladus, which ejects a fountain of water vapor, carbon dioxide and nitrogen through cracks in its south polar region. A fraction of this gas is ionized by hot electrons and solar ultraviolet radiation and is added to the co-rotational plasma flow. Titan was once thought to be the principal source of plasma in Saturn's magnetosphere, especially of nitrogen. The new data obtained by Cassini in 2004–2008 established that it is not a significant source of nitrogen after all, although it may still provide significant amounts of hydrogen (due to the dissociation of methane). Dione is the third moon producing more new plasma than it absorbs. The mass of plasma created in its vicinity (about 6 g/s) is about 1/300 as much as near Enceladus. However, even this low value cannot be explained only by sputtering of its icy surface by energetic particles, which may indicate that Dione is endogenically active like Enceladus. The moons that create new plasma slow the motion of the co-rotating plasma in their vicinity, which leads to a pile-up of magnetic field lines in front of them and a weakening of the field in their wakes: the field drapes around them. This is the opposite of what is observed for the plasma-absorbing moons.

The plasma and energetic particles present in the magnetosphere of Saturn, when absorbed by ring particles and moons, cause radiolysis of water ice. Its products include ozone, hydrogen peroxide and molecular oxygen. The first has been detected on the surfaces of Rhea and Dione, while the second is thought to be responsible for the steep spectral slopes of the moons' reflectivities in the ultraviolet region. The oxygen produced by radiolysis forms tenuous atmospheres around the rings and icy moons. The ring atmosphere was detected by Cassini for the first time in 2004. A fraction of the oxygen gets ionized, creating a small population of O2+ ions in the magnetosphere.
The influence of Saturn's magnetosphere on its moons is more subtle than the influence of Jupiter's on its moons. In the latter case, the magnetosphere contains a significant number of sulfur ions, which, when implanted in surfaces, produce characteristic spectral signatures. In the case of Saturn, the radiation levels are much lower and the plasma is composed mainly of water products, which, when implanted, are indistinguishable from the ice already present.

Exploration

As of 2014 the magnetosphere of Saturn has been directly explored by four spacecraft. The first mission to study the magnetosphere was Pioneer 11 in September 1979. Pioneer 11 discovered the magnetic field and made some measurements of the plasma parameters. In November 1980 and August 1981, the Voyager 1 and 2 probes investigated the magnetosphere using an improved set of instruments. From their fly-by trajectories they measured the planetary magnetic field, plasma composition and density, high-energy particle energies and spatial distribution, plasma waves and radio emissions. The Cassini spacecraft was launched in 1997 and arrived in 2004, making the first measurements in more than two decades. The spacecraft continued to provide information about the magnetic field and plasma parameters of the saturnian magnetosphere until its intentional destruction on September 15, 2017. In the 1990s, the Ulysses spacecraft conducted extensive measurements of the Saturnian kilometric radiation (SKR), which is unobservable from Earth due to absorption in the ionosphere. The SKR is powerful enough to be detected from a spacecraft at a distance of several astronomical units from the planet. Ulysses discovered that the period of the SKR varies by as much as 1%, and therefore is not directly related to the rotation period of the interior of Saturn.

Notes

- The subsolar point is a point on a planet, never fixed, at which the Sun appears directly overhead.
- On the dayside a noticeable magnetodisk only forms when the Solar wind pressure is low, and the magnetosphere has a size larger than about 23 Rs. However, when the magnetosphere is compressed by the Solar wind the dayside magnetodisk is quite small. On the other hand, in the dawn sector of the magnetosphere the disk-like configuration is present permanently. - The contribution of the plasma thermal pressure gradient force may also be significant. In addition, an important contribution to the ring current is provided by energetic ions with energy of more than about 10 keV. - The difference between the southern and northern aurorae is related to the shift of the internal magnetic dipole to the northern hemisphere—the magnetic field in the northern hemisphere is slightly stronger than in the southern one. - Russel, 1993, p. 694 - Belenkaya, 2006, pp. 1145–46 - Blanc, 2005, p. 238 - Sittler, 2008, pp. 4, 16–17 - Tokar, 2006 - Gombosi, 2009, p. 206, Table 9.1 - Zarka, 2005, pp. 378–379 - Bhardwaj, 2000, pp. 328–333 - Smith, 1959 - Brown, 1975 - Kivelson, 2005, p. 2077 - Russel, 1993, pp. 717–718 - Kivelson, 2005, pp. 303–313 - Russel, 1993, p. 709, Table 4 - Gombosi, 2009, p. 247 - Russel, 1993, pp. 690–692 - Gombosi, 2009, pp. 206–209 - Andre, 2008, pp. 10–15 - Andre, 2008, pp. 6–9 - Mauk, 2009, pp. 317–318 - Gombosi, 2009, pp. 211–212 - Gombosi, 2009, pp. 231–234 - Blanc, 2005, pp. 264–273 - Mauk, 2009, pp. 282–283 - Young, 2005 - Smith, 2008 - Gombosi, 2009, pp. 216–219 - Smith, 2008, pp. 1–2 - Gombosi, 2009, pp. 219–220 - Russell, 2008, p. 1 - Gombosi, 2009, pp. 206, 215–216 - Gombosi, 2009, pp. 237–240 - Bunce, 2008, pp. 1–2 - Gombosi, 2009, pp. 225–231 - Bunce, 2008, p. 20 - Kurth, 2009, pp. 334–342 - Clark, 2005 - Nichols, 2009 - Gombosi, 2009, pp. 209–211 - Kurth, 2009, pp. 335–336 - Cowley, 2008, pp. 2627–2628 - Kurth, 2009, pp. 341–348 - Zarka, 2007 - Gurnett, 2005, p. 1256 - Andre, 2008, pp. 11–12 - Gombosi, 2009, pp. 
221–225 - Paranicas, 2008 - Zarka, 2005, pp. 384–385 - Mauk, 2009, pp. 290–293 - Mauk, 2009, pp. 286–289 - Leisner, 2007 - Mauk, 2009, pp. 283–284, 286–287 - Mauk, 2009, pp. 293–296 - Mauk, 2009, pp. 285–286 - Johnson, 2008, pp. 393–394 - Zarka, 2005, p. 372 - Andre, N.; Blanc, M.; Maurice, S.; et al. (2008). "Identification of Saturn's magnetospheric regions and associated plasma processes: Synopsis of Cassini observations during orbit insertion". Reviews of Geophysics. 46 (4): RG4008. Bibcode:2008RvGeo..46.4008A. doi:10.1029/2007RG000238. - Belenkaya, E.S.; Alexeev, I.I.; Kalagaev, V.V.; Blohhina, M.S. (2006). "Definition of Saturn's magnetospheric model parameters for the Pioneer 11 flyby" (pdf). Annales Geophysicae. 24 (3): 1145–56. Bibcode:2006AnGeo..24.1145B. doi:10.5194/angeo-24-1145-2006. - Bhardwaj, Anil; Gladstone, G. Randall (2000). "Auroral emissions of the giant planets" (pdf). Reviews of Geophysics. 38 (3): 295–353. Bibcode:2000RvGeo..38..295B. doi:10.1029/1998RG000046. - Blanc, M.; Kallenbach, R.; Erkaev, N.V. (2005). "Solar System Magnetospheres". Space Science Reviews. 116 (1–2): 227–298. Bibcode:2005SSRv..116..227B. doi:10.1007/s11214-005-1958-y. - Brown, Larry W. (1975). "Saturn radio emission near 1 MHz". Journal of Geophysical Research. 112: L89–L92. Bibcode:1975ApJ...198L..89B. doi:10.1086/181819. - Bunce, E.J.; Cowley, S.W.H.; Alexeev, I.I.; et al. (2007). "Cassini observations of the variation of Saturn's ring current parameters with system size" (pdf). The Astrophysical Journal. 198 (A10): A10202. Bibcode:2007JGRA..11210202B. doi:10.1029/2007JA012275. - Clark, J.T.; Gerard, J.-C.; Grodent D.; et al. (2005). "Morphological differences between Saturn's ultraviolet aurorae and those of Earth and Jupiter" (PDF). Nature. 433 (7027): 717–719. Bibcode:2005Natur.433..717C. doi:10.1038/nature03331. PMID 15716945. Archived from the original (pdf) on 2011-07-16. - Cowley, S.W.H.; Arridge, C.S.; Bunce, E.J.; et al. (2008). 
"Auroral current systems in Saturn's magnetosphere: comparison of theoretical models with Cassini and HST observations". Annales Geophysicae. 26 (9): 2613–2630. Bibcode:2008AnGeo..26.2613C. doi:10.5194/angeo-26-2613-2008. - Gombosi, Tamas I.; Armstrong, Thomas P.; Arridge, Christopher S.; et al. (2009). "Saturn's Magnetospheric Configuration". Saturn from Cassini–Huygens. Springer Netherlands. pp. 203–255. doi:10.1007/978-1-4020-9217-6_9. ISBN 978-1-4020-9217-6. - Gurnett, D.A.; Kurth, W.S.; Hospodarsky, G.B.; et al. (2005). "Radio and Plasma Wave Observations at Saturn from Cassini's Approach and First Orbit". Science. 307 (5713): 1255–59. Bibcode:2005Sci...307.1255G. doi:10.1126/science.1105356. PMID 15604362. - Johnson, R.E.; Luhmann, J.G.; Tokar, R.L.; et al. (2008). "Production, ionization and redistribution of O2 in Saturn's ring atmosphere" (pdf). Icarus. 180 (2): 393–402. Bibcode:2006Icar..180..393J. doi:10.1016/j.icarus.2005.08.021. - Kivelson, Margaret Galland (2005). "The current systems of the Jovian magnetosphere and ionosphere and predictions for Saturn" (pdf). Space Science Reviews. Springer. 116 (1–2): 299–318. Bibcode:2005SSRv..116..299K. doi:10.1007/s11214-005-1959-x. - Kivelson, M.G. (2005). "Transport and acceleration of plasma in the magnetospheres of Earth and Jupiter and expectations for Saturn" (pdf). Advances in Space Research. 36 (11): 2077–89. Bibcode:2005AdSpR..36.2077K. doi:10.1016/j.asr.2005.05.104. - Kurth, W.S.; Bunce, E.J.; Clarke, J.T.; et al. (2009). "Auroral Processes". Saturn from Cassini–Huygens. Springer Netherlands. pp. 333–374. doi:10.1007/978-1-4020-9217-6_12. ISBN 978-1-4020-9217-6. - Leisner, S.; Khurana, K.K.; Russell, C.T.; et al. (2007). "Observations of Enceladus and Dione as Sources for Saturn's Neutral Cloud". Lunar and Planetary Science. XXXVIII: 1425. Bibcode:2007LPI....38.1425L. - Mauk, B.H.; Hamilton, D.C.; Hill, T.W.; et al. (2009). "Fundamental Plasma Processes in Saturn's Magnetosphere". 
Saturn from Cassini–Huygens. Springer Netherlands. pp. 281–331. doi:10.1007/978-1-4020-9217-6_11. ISBN 978-1-4020-9217-6. - Nichols, J.D.; Badman, S.V.; Bunce, E.J.; et al. (2009). "Saturn's equinoctial auroras" (pdf). Geophysical Research Letters. 36 (24): L24102:1–5. Bibcode:2009GeoRL..3624102N. doi:10.1029/2009GL041491. - Paranicas, C.; Mitchell, D.G.; Krimigis, S.M.; et al. (2007). "Sources and losses of energetic protons in Saturn's magnetosphere" (pdf). Icarus. 197 (2): 519–525. Bibcode:2008Icar..197..519P. doi:10.1016/j.icarus.2008.05.011. - Russell, C.T. (1993). "Planetary Magnetospheres" (pdf). Reports on Progress in Physics. 56 (6): 687–732. Bibcode:1993RPPh...56..687R. doi:10.1088/0034-4885/56/6/001. - Russell, C.T.; Jackman, C.M.; Wei, H.Y.; et al. (2008). "Titan's influence on Saturnian substorm occurrence" (pdf). Geophysical Research Letters. 35 (12): L12105. Bibcode:2008GeoRL..3512105R. doi:10.1029/2008GL034080. - Sittler, E.C.; Andre, N.; Blanc, M.; et al. (2008). "Ion and neutral sources and sinks within Saturn's inner magnetosphere: Cassini results" (pdf). Planetary and Space Science. 56 (1): 3–18. Bibcode:2008P&SS...56....3S. doi:10.1016/j.pss.2007.06.006. - Smith, H.T.; Shappirio, M.; Johnson, R.E.; et al. (2008). "Enceladus: A potential source of ammonia products and molecular nitrogen for Saturn's magnetosphere" (pdf). Journal of Geophysical Research. 113 (A11): A11206. Bibcode:2008JGRA..11311206S. doi:10.1029/2008JA013352. - Smith, A.L.; Carr, T.D (1959). "Radio frequency observations of the planets in 1957–1958". The Astrophysical Journal. 130: 641–647. Bibcode:1959ApJ...130..641S. doi:10.1086/146753. - Tokar, R.L.; Johnson, R.E.; Hill, T.V.; et al. (2006). "The Interaction of the Atmosphere of Enceladus with Saturn's Plasma". Science. 311 (5766): 1409–12. Bibcode:2006Sci...311.1409T. doi:10.1126/science.1121061. PMID 16527967. - Young, D.T.; Berthelier, J.-J.; Blanc, M.; et al. (2005). 
"Composition and Dynamics of Plasma in Saturn's Magnetosphere". Science. 307 (5713): 1262–66. Bibcode:2005Sci...307.1262Y. doi:10.1126/science.1106151. PMID 15731443. - Zarka, P.; Kurth, W.S. (2005). "Radio wave emissions from the outer planets before Cassini". Space Science Reviews. 116 (1–2): 371–397. Bibcode:2005SSRv..116..371Z. doi:10.1007/s11214-005-1962-2. - Zarka, Phillipe; Lamy, Laurent; Cecconi, Baptiste; Prangé, Renée; Rucker, Helmut O. (2007). "Modulation of Saturn's radio clock by solar wind speed" (PDF). Nature. 450 (7167): 265–267. Bibcode:2007Natur.450..265Z. doi:10.1038/nature06237. PMID 17994092. Archived from the original (pdf) on 2011-06-03. - Arridge, C.S.; Russell, C.T.; Khurana, K.K.; et al. (2007). "Mass of Saturn's magnetodisc: Cassini observations" (pdf). Geophysical Research Letters. 34 (9): L09108. Bibcode:2007GeoRL..3409108A. doi:10.1029/2006GL028921. - Burger, M.H.; Sittler, E.C.; Johnson, R.E.; et al. (2007). "Understanding the escape of water from Enceladus" (pdf). Journal of Geophysical Research. 112 (A6): A06219. Bibcode:2007JGRA..112.6219B. doi:10.1029/2006JA012086. - Hill, T.W.; Thomsen, M.F.; Henderson, M.G.; et al. (2008). "Plasmoids in Saturn's magnetotail" (pdf). Journal of Geophysical Research. 113 (A1): A01214. Bibcode:2008JGRA..11301214H. doi:10.1029/2007JA012626. - Krimigis, S.M.; Sergis, N.; Mitchell, D.G.; et al. (2007). "A dynamic, rotating ring current around Saturn" (pdf). Nature. 450 (7172): 1050–53. Bibcode:2007Natur.450.1050K. doi:10.1038/nature06425. PMID 18075586. - Martens, Hilary R.; Reisenfeld, Daniel B.; Williams, John D.; et al. (2008). "Observations of molecular oxygen ions in Saturn's inner magnetosphere" (pdf). Geophysical Research Letters. 35 (20): L20103. Bibcode:2008GeoRL..3520103M. doi:10.1029/2008GL035433. - Russell, C.T.; Khurana, K.K.; Arridge, C.S.; Dougherty, M.K. (2008). "The magnetospheres of Jupiter and Saturn and their lessons for the Earth" (pdf). Advances in Space Research. 41 (8): 1310–18. 
Bibcode:2008AdSpR..41.1310R. doi:10.1016/j.asr.2007.07.037. - Smith, H.T.; Johnson, R.E.; Sittler, E.C. (2007). "Enceladus: The likely dominant nitrogen source in Saturn's magnetosphere" (pdf). Icarus. 188 (2): 356–366. Bibcode:2007Icar..188..356S. doi:10.1016/j.icarus.2006.12.007. - Southwood, D.J.; Kivelson, M.G. (2007). "Saturnian magnetospheric dynamics: Elucidation of a camshaft model" (pdf). Journal of Geophysical Research. 112 (A12): A12222. Bibcode:2007JGRA..11212222S. doi:10.1029/2007JA012254. - Stallard, Tom; Miller, Steve; Melin, Henrik; et al. (2008). "Jovian-like aurorae on Saturn". Nature. 453 (7198): 1083–85. Bibcode:2008Natur.453.1083S. doi:10.1038/nature07077. PMID 18563160. - Saturn Sends Mixed Signals
NASA: Meteor over California and Nevada was size of minivan What would your Honda Odyssey or Dodge Grand Caravan look like falling to the Earth from space? Probably a lot like the picture above. The picture taken in Reno, Nevada, on Sunday morning shows a meteor the size of a minivan plunging through the Earth’s atmosphere, according to Bill Cooke of the Meteoroid Environments Office at NASA’s Marshall Space Flight Center in Huntsville, Alabama. Of course, this would have been one heavy minivan. Cooke said it weighed about 154,300 pounds. Your minivan probably weighs in at about 4,000 pounds. It was that size and weight that made the fireball visible in the daylight, according to NASA scientists. It was seen from Sacramento, California, in the north to Las Vegas in the south. “Most meteors you see in the night’s sky are the size of tiny stones or even grains of sand and their trail lasts all of a second or two,” NASA’s Don Yeomans said in a press release. “Fireballs you can see relatively easily in the daytime and are many times that size.” Even then, count yourself lucky if you got to see it, said Yeomans, of NASA’s Near-Earth Object Program Office at the Jet Propulsion Laboratory in Pasadena, California. “An event of this size might happen about once a year,” he said. “But most of them occur over the ocean or an uninhabited area, so getting to see one is something special.” The meteor disintegrated before hitting the ground, releasing the energy of a five-kiloton explosion in the process, according to the NASA release.
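As a quick plausibility check on the reported figures (my own back-of-the-envelope arithmetic, not part of NASA's release), the quoted mass and five-kiloton energy imply an entry speed in the typical range for meteors:

```python
import math

LB_TO_KG = 0.453592          # pounds to kilograms
KT_TNT_J = 4.184e12          # joules per kiloton of TNT

mass_kg = 154_300 * LB_TO_KG     # reported mass, roughly 70 metric tons
energy_j = 5 * KT_TNT_J          # reported energy release

# If essentially all of that energy was kinetic: E = (1/2) m v^2  =>  v = sqrt(2E/m)
speed_ms = math.sqrt(2 * energy_j / mass_kg)
print(f"{mass_kg:.0f} kg, {speed_ms / 1000:.1f} km/s")  # roughly 24-25 km/s
```

A speed near 24 km/s sits comfortably inside the 11–72 km/s range possible for objects entering Earth's atmosphere, so the two reported numbers are mutually consistent.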
Chapter 23. The Electric Force

23-1. Two balls each having a charge of 3 μC are separated by 20 mm. What is the force of repulsion between them?
F = kq₁q₂/r² = (9 × 10⁹ N·m²/C²)(3 × 10⁻⁶ C)²/(0.020 m)²; F = 202 N

23-2. Two point charges of −3 and +4 μC are 12 mm apart in a vacuum. What is the electrostatic force between them?
F = kq₁q₂/r² = (9 × 10⁹)(3 × 10⁻⁶)(4 × 10⁻⁶)/(0.012)²; F = 750 N, attraction

23-3. An alpha particle consists of two protons (qₑ = 1.6 × 10⁻¹⁹ C) and two neutrons (no charge). What is the repulsive force between two alpha particles separated by 2 nm?
qα = 2(1.6 × 10⁻¹⁹ C) = 3.2 × 10⁻¹⁹ C
F = kqα²/r²; F = 2.30 × 10⁻¹⁰ N

23-4. Assume that the radius of the electron's orbit around the proton in a hydrogen atom is approximately 5.2 × 10⁻¹¹ m. What is the electrostatic force of attraction?
F = kqₑ²/r²; F = 8.52 × 10⁻⁸ N

23-5. What is the separation of two −4 μC charges if the force of repulsion between them is 200 N?
r = √(kq²/F); r = 26.8 mm

23-6. Two identical charges separated by 30 mm experience a repulsive force of 980 N. What is the magnitude of each charge?
q = √(Fr²/k); q = 9.90 μC

*23-7. A 10 μC charge and a −6 μC charge are separated by 40 mm. What is the force between them? The spheres are placed in contact for a few moments and then separated again by 40 mm. What is the new force? Is it attractive or repulsive?
F = kq₁q₂/r²; F = 338 N, attraction
When the spheres touch, 6 μC of charge are neutralized, leaving 4 μC to be shared by the two spheres, or +2 μC on each sphere. Now they are again separated:
F = k(2 × 10⁻⁶ C)²/(0.040 m)²; F = 22.5 N, repulsion

*23-8. Two point charges initially attract each other with a force of 600 N. If their separation is reduced to one-third of its original distance, what is the new force of attraction?
F₂ = F₁(r₁/r₂)² with r₁ = 3r₂, so F₂ = 9(600 N); F₂ = 5400 N

The Resultant Electrostatic Force

23-9. A +60 μC charge is placed 60 mm to the left of a +20 μC charge. What is the resultant force on a −35 μC charge placed midway between the two charges?
F13 = k(60 × 10⁻⁶)(35 × 10⁻⁶)/(0.030)² = 2.10 × 10⁴ N, directed to the left
F23 = k(20 × 10⁻⁶)(35 × 10⁻⁶)/(0.030)² = 0.700 × 10⁴ N, directed to the right
FR = F13 + F23 = (−2.10 × 10⁴ N) + (0.700 × 10⁴ N); FR = −1.40 × 10⁴ N, directed to the left

23-10. A point charge of +36 μC is placed 80 mm to the left of a second point charge of −22 μC. What force is exerted on a third charge of +10 μC located at the midpoint?
F13 = 2025 N, directed to the right; F23 = 1238 N, directed to the right
FR = F13 + F23 = 2025 N + 1238 N; FR = 3260 N, directed to the right

23-11. For Problem 23-10, what is the resultant force on a third charge of +12 μC placed between the other charges and located 60 mm from the +36 μC charge?
Both forces act to the right, so FR = F13 + F23 = 1080 N + 5940 N; FR = 7020 N, rightward

23-12. A +6 μC charge is 44 mm to the right of a −8 μC charge. What is the resultant force on a −2 μC charge that is 20 mm to the right of the −8 μC charge?
Both forces act to the right, so FR = F13 + F23 = 360 N + 187.5 N; FR = 548 N, rightward

*23-13. A 64-μC charge is located 30 mm to the left of a 16-μC charge. What is the resultant force on a −12 μC charge positioned exactly 50 mm below the 16-μC charge?
F13 = 2033 N, at 59.0° N of W; F23 = 691 N, upward
Fx = −F13 cos 59.0° = −(2033 N) cos 59.0°; Fx = −1047 N
Fy = F23 + F13 sin 59.0° = 691 N + (2033 N) sin 59.0°; Fy = 2434 N
FR = √(Fx² + Fy²); θ = 66.7° N of W
Resultant force: FR = 2650 N, 66.7° N of W (or 113.3°)

*23-14. A charge of +60 nC is located 80 mm above a −40 nC charge. What is the resultant force on a −50 nC charge located 45 mm horizontally to the right of the −40 nC charge?
F13 = 2564 N, 60.64° ...
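These answers can be spot-checked with a few lines of code; a minimal sketch using k = 9 × 10⁹ N·m²/C², the rounded constant used throughout the worked solutions:

```python
K = 9e9  # Coulomb constant, N*m^2/C^2 (rounded, as in the solutions above)

def coulomb(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges (SI units)."""
    return K * abs(q1 * q2) / r ** 2

# Problem 23-1: two 3-uC charges, 20 mm apart
f_23_1 = coulomb(3e-6, 3e-6, 0.020)          # ~202 N

# Problem 23-9: -35-uC charge midway (30 mm) between +60 uC and +20 uC
f13 = coulomb(60e-6, 35e-6, 0.030)           # ~2.10e4 N, toward the +60-uC charge
f23 = coulomb(20e-6, 35e-6, 0.030)           # ~0.70e4 N, toward the +20-uC charge
f_net = f13 - f23                            # ~1.40e4 N net, directed to the left
```

The same helper reproduces every single-pair answer in the set; for the two-dimensional problems (23-13, 23-14) the component sums Fx and Fy are built from the same magnitudes.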
What is a Vertically Stacked System? A low-pressure system, usually a closed low or cutoff low, which is not tilted with height, i.e., located similarly at all levels of the atmosphere. Such systems typically are weakening and slow-moving, and are less likely to produce severe weather than tilted systems. However, cold pools aloft associated with vertically stacked systems may enhance instability enough to produce severe weather. Reference: National Weather Service Glossary
Returns the type of the specified path, indicating whether it is an absolute path, a relative path, or <Not A Path>. This node does not verify that the path exists on the computer; it checks only the syntax of the path. Use the File/Directory Info node to verify that a file or directory exists on the computer. Path whose syntax you want to check. The type of the specified path. UNC file paths are absolute file paths. Use two leading backslashes (\\) to represent UNC file paths. type returns <Not A Path> (numeric value 2) only when you wire the Not A Path constant to this node. Where This Node Can Run: Desktop OS: Windows; FPGA: Not supported
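The syntax-only check can be illustrated in Python (a hypothetical re-implementation for Windows-style paths, not NI's actual code; the classification rules are inferred from the description above):

```python
import ntpath

def path_type(path: str) -> str:
    """Classify a Windows-style path string by syntax alone (no filesystem access)."""
    if not path:
        return "<Not A Path>"
    # UNC paths (two leading backslashes) are treated as absolute paths.
    if path.startswith("\\\\") or ntpath.isabs(path):
        return "absolute"
    return "relative"

print(path_type(r"C:\temp\file.txt"))   # absolute
print(path_type(r"\\server\share"))     # absolute (UNC)
print(path_type(r"data\file.txt"))      # relative
```

Like the node, this never touches the filesystem: an absolute path to a file that does not exist still classifies as "absolute".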
Radiative forcing or climate forcing is the difference between insolation (sunlight) absorbed by the Earth and energy radiated back to space. The influences that alter Earth's radiative equilibrium, forcing temperatures to rise or fall, are called climate forcings. Positive radiative forcing means Earth receives more incoming energy from sunlight than it radiates to space; this net gain of energy causes warming. Conversely, negative radiative forcing means that Earth loses more energy to space than it receives from the Sun, which produces cooling. Radiative forcing is typically quantified at the tropopause or at the top of the atmosphere (often accounting for rapid adjustments in temperature) in units of watts per square meter of the Earth's surface. Causes of radiative forcing include changes in insolation and in the concentrations of radiatively active gases, commonly known as greenhouse gases, and aerosols. Almost all of the energy that affects Earth's climate is received as radiant energy from the Sun. The planet and its atmosphere absorb and reflect some of the energy, while long-wave energy is radiated back into space. The balance between absorbed and radiated energy determines the average global temperature. Because the atmosphere absorbs some of the re-radiated long-wave energy, the planet is warmer than it would be in the absence of an atmosphere: see greenhouse effect. The radiation balance is altered by such factors as the intensity of solar energy, the reflectivity of clouds or gases, absorption by various greenhouse gases or surfaces, and heat emission by various materials. Any such alteration is a radiative forcing, and it changes the balance.
This happens continuously as sunlight hits the surface, clouds and aerosols form, the concentrations of atmospheric gases vary and seasons alter the groundcover. "Radiative forcing is a measure of the influence a factor has in altering the balance of incoming and outgoing energy in the Earth-atmosphere system and is an index of the importance of the factor as a potential climate change mechanism. In this report radiative forcing values are for changes relative to preindustrial conditions defined at 1750 and are expressed in Watts per square meter (W/m²)." In simple terms, radiative forcing is "...the rate of energy change per unit area of the globe as measured at the top of the atmosphere." In the context of climate change, the term "forcing" is restricted to changes in the radiation balance of the surface-troposphere system imposed by external factors, with no changes in stratospheric dynamics, no surface and tropospheric feedbacks in operation (i.e., no secondary effects induced because of changes in tropospheric motions or its thermodynamic state), and no dynamically induced changes in the amount and distribution of atmospheric water (vapour, liquid, and solid forms). Radiative forcing can be used to estimate a subsequent change in steady-state (often denoted "equilibrium") surface temperature (ΔTs) arising from that forcing via the equation

ΔTs = λ ΔF,

where λ is commonly denoted the climate sensitivity parameter, usually with units K/(W/m²), and ΔF is the radiative forcing in W/m². A typical value of λ, 0.8 K/(W/m²), gives an increase in global temperature of about 1.6 K above the 1750 reference temperature due to the increase in CO2 over that time (278 to 405 ppm, for a forcing of 2.0 W/m²), and predicts a further warming of 1.4 K above present temperatures if the CO2 mixing ratio in the atmosphere were to become double its pre-industrial value; both of these calculations assume no other forcings.
Radiative forcing (measured in watts per square meter) can be estimated in different ways for different components. For solar irradiance (i.e., "solar forcing"), the radiative forcing is simply the change in the average amount of solar energy absorbed per square meter of the Earth's area. Since the Earth's cross-sectional area exposed to the Sun (πr²) is equal to 1/4 of the surface area of the Earth (4πr²), the solar input per unit area is one quarter the change in solar intensity. This must be multiplied by the fraction of incident sunlight that is absorbed, F = (1 − R), where R is the reflectivity (albedo) of the Earth. The albedo is approximately 0.3, so F is approximately equal to 0.7. Thus, the solar forcing is the change in the solar intensity divided by 4 and multiplied by 0.7. Likewise, a change in albedo will produce a solar forcing equal to the change in albedo divided by 4 and multiplied by the solar constant.

Forcing due to atmospheric gas

For a greenhouse gas, such as carbon dioxide, radiative transfer codes that examine each spectral line for atmospheric conditions can be used to calculate the change ΔF as a function of changing concentration. These calculations can be simplified into an algebraic formulation that is specific to that gas. For instance, a proposed simplified first-order approximation expression for carbon dioxide is

ΔF = 5.35 ln(C/C₀) W/m²,

where C is the CO2 concentration in parts per million by volume and C₀ is the reference concentration. The relationship between carbon dioxide and radiative forcing is thus claimed to be logarithmic at concentrations up to around eight times the current value, so increased concentrations have a progressively smaller warming effect. Others claim that at higher concentrations, however, it becomes supra-logarithmic, so that there is no saturation in the absorption of infrared radiation by CO2. Radiative forcing is a useful way to compare different causes of perturbations in a climate system.
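The logarithmic CO2 expression and the λ relation combine into a short calculation. This sketch uses the 5.35 W/m² coefficient of Myhre et al. (1998) and the illustrative λ = 0.8 K/(W/m²) quoted above; it reproduces the article's headline numbers but is in no sense a climate model:

```python
import math

ALPHA = 5.35    # W/m^2, simplified CO2 forcing coefficient (Myhre et al. 1998)
LAMBDA = 0.8    # K per (W/m^2), climate sensitivity parameter from the text

def co2_forcing(c_ppm: float, c0_ppm: float = 278.0) -> float:
    """Radiative forcing (W/m^2) of CO2 relative to the 1750 reference concentration."""
    return ALPHA * math.log(c_ppm / c0_ppm)

def equilibrium_warming(delta_f: float) -> float:
    """Steady-state surface temperature change: Delta-Ts = lambda * Delta-F."""
    return LAMBDA * delta_f

df_now = co2_forcing(405.0)               # ~2.0 W/m^2 for 278 -> 405 ppm
dt_now = equilibrium_warming(df_now)      # ~1.6 K above the 1750 reference
dt_2x = equilibrium_warming(co2_forcing(2 * 278.0))  # ~3.0 K for doubled CO2
```

Note that dt_2x − dt_now ≈ 1.4 K, matching the "further warming" figure quoted earlier, and that both numbers assume no other forcings and no lag in reaching equilibrium.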
Other possible tools can be constructed for the same purpose: for example, Shine et al. say "...recent experiments indicate that for changes in absorbing aerosols and ozone, the predictive ability of radiative forcing is much worse... we propose an alternative, the 'adjusted troposphere and stratosphere forcing'. We present GCM calculations showing that it is a significantly more reliable predictor of this GCM's surface temperature change than radiative forcing. It is a candidate to supplement radiative forcing as a metric for comparing different mechanisms...". In this quote, GCM stands for "general circulation model", and the word "predictive" does not refer to the ability of GCMs to forecast climate change. Instead, it refers to the ability of the alternative tool proposed by the authors to help explain the system response.

The table below (derived from atmospheric radiative transfer models) shows changes in radiative forcing between 1979 and 2013. It includes the contributions of carbon dioxide (CO2); methane (CH4); nitrous oxide (N2O); chlorofluorocarbons (CFCs) 12 and 11; and fifteen other minor, long-lived, halogenated gases. The table covers only long-lived greenhouse gases; it does not include other forcings, such as aerosols and changes in solar activity. [Table omitted; its Annual Greenhouse Gas Index column is normalized so that 1990 = 1.] The table shows that CO2 dominates the total forcing, with methane and the chlorofluorocarbons (CFCs) becoming relatively smaller contributors to the total forcing over time. The five major greenhouse gases account for about 96% of the direct radiative forcing by long-lived greenhouse gas increases since 1750. The remaining 4% is contributed by the 15 minor halogenated gases.
It might be observed that the total forcing for the year 2016, 3.027 W m⁻², together with the commonly accepted value of the climate sensitivity parameter λ, 0.8 K/(W m⁻²), results in an increase in global temperature of 2.4 K, much greater than the observed increase of about 1.2 K. Part of this difference is due to lag in the global temperature achieving steady state with the forcing. The remainder of the difference is due to negative aerosol forcing, to climate sensitivity being less than the commonly accepted value, or to some combination thereof. The table also includes an "Annual Greenhouse Gas Index" (AGGI), which is defined as the ratio of the total direct radiative forcing due to long-lived greenhouse gases for any year for which adequate global measurements exist to that which was present in 1990. 1990 was chosen because it is the baseline year for the Kyoto Protocol. This index is a measure of the inter-annual changes in conditions that affect carbon dioxide emission and uptake, methane and nitrous oxide sources and sinks, the decline in the atmospheric abundance of ozone-depleting chemicals related to the Montreal Protocol, and the increase in their substitutes (hydrogenated CFCs (HCFCs) and hydrofluorocarbons (HFCs)). Most of this increase is related to CO2. For 2013, the AGGI was 1.34 (representing an increase in total direct radiative forcing of 34% since 1990). The increase in CO2 forcing alone since 1990 was about 46%. The decline in CFCs considerably tempered the increase in net radiative forcing. An alternative table prepared for use in climate model intercomparisons conducted under the auspices of the IPCC, including all forcings rather than just those of greenhouse gases, is available at http://www.climatechange2013.org/images/report/WG1AR5_AIISM_Datafiles.xlsx - Shindell, Drew (2013). "Radiative Forcing in the AR5" (PDF). Retrieved 15 September 2016. - Lindsey, Rebecca (14 January 2009). "Climate and Earth's Energy Budget: Feature Articles". 
earthobservatory.nasa.gov. Retrieved 3 April 2018. - "NASA: Climate Forcings and Global Warming". 14 January 2009. - "Climate Change 2007: Synthesis Report" (PDF). ipcc.ch. Retrieved 3 April 2018. - Rockström, Johan; Steffen, Will; Noone, Kevin; Persson, Asa; Chapin, F. Stuart; Lambin, Eric F.; Lenton, Timothy F.; Scheffer, M; et al. (23 September 2009). "A safe operating space for humanity". Nature. 461 (7263): 472–475. Bibcode:2009Natur.461..472R. doi:10.1038/461472a. PMID 19779433. - "IPCC Third Assessment Report - Climate Change 2001". Archived from the original on 30 June 2009. - "Atmosphere Changes". Archived from the original on 10 May 2009. - Myhre, G.; Highwood, E.J.; Shine, K.P.; Stordal, F. (1998). "New estimates of radiative forcing due to well mixed greenhouse gases" (PDF). Geophysical Research Letters. 25 (14): 2715–8. Bibcode:1998GeoRL..25.2715M. doi:10.1029/98GL01908. - Huang, Yi; Bani Shahabadi, Maziar (28 November 2014). "Why logarithmic?". J. Geophys. Res. Atmospheres. 119 (24): 13,683–89. Bibcode:2014JGRD..11913683H. doi:10.1002/2014JD022466. - Zhong, Wenyi; Haigh, Joanna D. (27 March 2013). "The greenhouse effect and carbon dioxide". Weather. 68 (4): 100–5. Bibcode:2013Wthr...68..100Z. doi:10.1002/wea.2072. ISSN 1477-8696. - IPCC WG-1 Archived 13 December 2007 at the Wayback Machine. report - Shine, Keith P.; Cook, Jolene; Highwood, Eleanor J.; Joshi, Manoj M. (23 October 2003). "An alternative to radiative forcing for estimating the relative importance of climate change mechanisms". Geophysical Research Letters. 30 (20): 2047. Bibcode:2003GeoRL..30.2047S. doi:10.1029/2003GL018141. - This article incorporates public domain material from the NOAA document: Butler, J.H. and S.A. Montzka (1 August 2013). "THE NOAA ANNUAL GREENHOUSE GAS INDEX (AGGI)". 
NOAA/ESRL Global Monitoring Division CFC-113, tetrachloromethane (CCl 4), 1,1,1-trichloroethane (CH 3); hydrochlorofluorocarbons (HCFCs) 22, 141b and 142b; hydrofluorocarbons (HFCs) 134a, 152a, 23, 143a, and 125; sulfur hexafluoride (SF 6), and halons 1211, 1301 and 2402) - Hansen, J.E.; et al. "GISS Surface Temperature Analysis: Analysis Graphs and Plots". Goddard Institute for Space Studies, National Aeronautics and Space Administration. - Particulates#Climate effects - Schwartz, Stephen E.; Charlson, Robert J.; Kahn, Ralph A.; Ogren, John A.; Rodhe, Henning (2010). "Why hasn't Earth warmed as much as expected?". Journal of Climate (published 15 May 2010). 23 (10): 2453–64. Bibcode:2010JCli...23.2453S. doi:10.1175/2009JCLI3461.1. - IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp., 31 January 2014., - IPCC glossary - CO2: The Thermostat that Controls Earth's Temperature by NASA's Goddard Institute for Space Studies, October, 2010, Forcing vs. Feedbacks - Intergovernmental Panel on Climate Change’s Fourth Assessment Report (2007), Chapter 2, "Changes in Atmospheric Constituents and Radiative Forcing," pp. 133–134 (PDF, 8.6 MB, 106 pp.). - U.S. EPA (2009), Climate Change – Science. Explanation of climate change topics including radiative forcing. - United States National Research Council (2005), Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties, Board on Atmospheric Sciences and Climate - Small volcanoes add up to cooler climate; Airborne particles help explain why temperatures rose less last decade August 13, 2011; Vol.180 #4 (p. 
5) Science News - NASA: The Atmosphere’s Energy Budget - Energy balance: the simplest climate model - Explore Mann's climate projections from Scientific American
Magnetology is an application which permits one to watch the forecast of geomagnetic storms and the state of Earth's geomagnetic field.

WHY IS IT SO IMPORTANT?
Geomagnetic storms adversely affect the health of humans. They can cause aches, dizziness, rapid heart rate, and abnormal blood pressure. People with abnormal conditions of the blood and of the cardiovascular and autonomic nervous systems are particularly susceptible to the influence of geomagnetic storms. The degree of magnetic storms' influence on a particular person depends on many factors, including health status, stress, physical and emotional fatigue, and natural predisposition. Geomagnetic storms can have a negative impact not only on humans, but also on communications, navigation, and power. Drones or unmanned aerial vehicles (UAVs) are most vulnerable to the impact of geomagnetic storms. By taking into account the forecast of geomagnetic storms, you can plan difficult decisions, avoid high physical stress, and protect yourself from negative consequences. Our application will help you with this.

ABOUT THE NATURE OF GEOMAGNETIC STORMS
Changes in the Earth's geomagnetic field occur under the influence of the Sun. The intensity of the impact is not constant and depends on the ongoing processes on the solar surface. The strength of geomagnetic field disturbances is characterized by the value of the Kp index. The Kp index can be assigned values from 0 to 9, where 0 is the absence of disturbances and 9 represents extremely strong perturbations of the Earth's geomagnetic field. Disturbances with a Kp index of 5 and above are called storms. The number of geomagnetic storms per year can reach 50.

Available in the Apple App Store: https://itunes.apple.com/app/id789651124

IN COMING UPDATES OF THE APPLICATION, EXPECT: the ability to view history; personal forecasts.

The app uses data from the National Oceanic and Atmospheric Administration.
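The Kp scale described above maps to storm categories with a simple threshold check; a sketch (the G-levels follow NOAA's public G-scale, added here for context; the app's actual logic is not published):

```python
def kp_category(kp: int) -> str:
    """Classify a Kp index value (0-9): Kp >= 5 counts as a geomagnetic storm."""
    if not 0 <= kp <= 9:
        raise ValueError("Kp index ranges from 0 to 9")
    if kp < 5:
        return "no storm"
    return f"storm G{kp - 4}"   # NOAA G-scale: Kp 5 -> G1, ..., Kp 9 -> G5
```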
<urn:uuid:11ddbd6e-5fb1-4a83-a0c7-1718c8746d1a>
2.921875
396
Product Page
Science & Tech.
37.503417
95,559,978
6 November, 2017Scientific Paper Astronomers found a rich molecular reservoir in the heart of an active star-forming galaxy with the Atacama Large Millimeter/submillimeter Array (ALMA). Among eight clouds identified at the center of the galaxy NGC 253, one exhibits very complex chemical composition, while in the other clouds many signals are missing. This chemical richness and diversity shed light on the nature of the baby boom galaxy. Ryo Ando, a graduate student of the University of Tokyo, and his colleagues observed the galaxy NGC 253 and for the first time, they resolved the locations of star formation in this galaxy down to the scale of a molecular cloud, which is a star formation site with a size of about 30 light-years. As a result, they identified eight massive, dusty clouds aligned along the center of the galaxy. “With its unprecedented resolution and sensitivity, ALMA showed us the detailed structure of the clouds,” said Ando, the lead author of the research paper published in the Astrophysical Journal. “To my surprise, the gas clouds have a strong chemical individuality despite their similarity in size and mass.” Different molecules emit radio waves at different frequencies. Using this feature, the team investigated the chemical composition of the distant clouds by analyzing the radio signals precisely. They identified signals from various molecules including formaldehyde (H2CO), hydrogen cyanide (HCN), and many organic molecules. One of the clouds stood out with its extremely rich chemical composition. The team identified footprints of 19 different molecules in the cloud, such as thioformaldehyde (H2CS), propyne (CH3CCH), and complex organic molecules including methanol (CH3OH) and acetic acid (CH3COOH). “The data are filled with the signals of various molecules,” said Ando. “It is like a forest of molecules.” Many “molecular forests” have been found in our Milky Way Galaxy, but this is the first example outside the Milky Way. 
Researchers assume that the molecular jungle is an aggregate of dense and warm cocoons around bright baby stars. The cocoon gas is heated from inside by hundreds of young stars and a myriad of chemical reactions is driven to form various molecules. Interestingly, the number of chemical signals is different in different clouds. For example, another cloud among the eight has a very sparse chemical composition, even though it is located within dozens of light-years of the chemically rich cloud. Such a diverse nature of star forming clouds has never been seen before and could be a key to understanding the starburst process in this galaxy. NGC 253 is a prototypical active star forming galaxy, or starburst galaxy. It is located 11 million light-years away in the constellation Sculptor. Starburst, or baby boom, galaxies have been the major drivers of star formation and galaxy evolution throughout the whole history of the Universe. Therefore it is crucial to understand what exactly is going on in the heart of such galaxies. These observation results were published as Ando et al. “Diverse nuclear star-forming activities in the heart of NGC 253 resolved with 10-pc-scale ALMA images” in the Astrophysical Journal in November 2017. 
The research team members are: Ryo Ando (The University of Tokyo), Kouichiro Nakanishi (National Astronomical Observatory of Japan/SOKENDAI), Kotaro Kohno (The University of Tokyo), Takuma Izumi (National Astronomical Observatory of Japan/The University of Tokyo), Sergio Martín (European Southern Observatory/Joint ALMA Observatory), Nanase Harada (Academia Sinica Institute of Astronomy and Astrophysics), Shuro Takano (Nihon University), Nario Kuno (University of Tsukuba), Naomasa Nakai (University of Tsukuba), Hajime Sugai (The University of Tokyo), Kazuo Sorai (Hokkaido University), Tomoka Tosaki (Joetsu University of Education), Kazuya Matsubayashi (National Astronomical Observatory of Japan), Taku Nakajima (Nagoya University), Yuri Nishimura (The University of Tokyo/National Astronomical Observatory of Japan), and Yoichi Tamura (Nagoya University/The University of Tokyo) This research was supported by the Japan Society for the Promotion of Science KAKENHI (Grant Number 15K05035 and 25247019). The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of the European Organisation for Astronomical Research in the Southern Hemisphere (ESO), the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI). ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia.
The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.
<urn:uuid:da93d492-486b-4a87-881a-c0522946df65>
3.5
1,166
News (Org.)
Science & Tech.
16.107552
95,560,010
Dataset Details - Northern Ireland European hare (Lepus europaeus) survey 2005 European hare records for Northern Ireland during 2005. Chapter of a PhD thesis: Reid, N. (2006). Conservation ecology of the Irish hare (Lepus timidus hibernicus), published in the peer-reviewed journal Biology & Environment: Proceedings of the Royal Irish Academy. Nocturnal spotlight surveys from driven transects. Mid-Ulster and west Tyrone. All sightings verified by Neil Reid; ambiguous sightings were not included. Published as: Reid, N. & Montgomery, W.I. (2007). Is naturalisation of the brown hare in Ireland a threat to the endemic Irish hare? Biology and Environment: Proceedings of the Royal Irish Academy, 107b(3): 129-138. Restricted (data can be viewed on the mapping system but is not available for download). Dr. Neil Reid, Northern Ireland European hare (Lepus europaeus) survey 2005, National Biodiversity Data Centre, Ireland, accessed 20 July 2018, <https://maps.biodiversityireland.ie/Dataset/61> Dataset species distribution Terrestrial Map - 10km: distribution of the number of species recorded within each 10km grid square (ITM). Marine Map - 50km: distribution of the number of species recorded within each 50km grid square (WGS84). Dataset record distribution Terrestrial Map - 10km: distribution of the number of records within each 10km grid square (ITM). Marine Map - 50km: distribution of the number of records within each 50km grid square (WGS84). Records per year Species per year |Abundance||Measure of abundance of the organism recorded| |Common name||The common name of the taxon| |Determiner name||The name of the person(s) who verified the identification - used to verify records of difficult taxa|
<urn:uuid:03a82d8e-c58d-4244-8c1b-80ecdc2dfd6c>
2.765625
431
Structured Data
Science & Tech.
36.244442
95,560,016
Become an Xcoder: Start Programming the Mac Using Objective-C by B. Altenberg, A. Clarke, P. Mougin Publisher: CocoaLab 2008 Number of pages: 69 A free book for starting with Cocoa using Objective-C. It teaches you the basics of programming, in particular Objective-C programming, using Xcode. This tutorial is written for non-programmers, and is aimed at leveling the learning curve as much as possible. Home page url Download or read it online for free here: by James Duncan Davidson - O'Reilly Media, Inc. This new edition covers the latest updates to the Cocoa frameworks, including examples that use the Address Book and Universal Access APIs. This is the 'must-have' book for people who want to develop applications for Mac OS X. by Neil Smyth - Techotopia The Objective-C 2.0 Essentials free online book contains 34 chapters of detailed information intended to provide everything necessary to gain proficiency as an Objective-C programmer for both Mac OS X and iPhone development. by Ryan Hodson - Smashwords This book is both a concise quick-reference and a comprehensive introduction for newcomers to the Objective-C programming language. It walks through each language feature step-by-step, explaining complex programming concepts via hands-on examples. by Feifan Zhou - Binpress I will attempt to teach C and Objective-C as one language. Obj-C is a strict superset of plain C, which means that any valid C is also valid Obj-C. I will supply plenty of screenshots, and include exercises at the end of each lesson.
<urn:uuid:3691a45e-324a-4948-8169-213677d435bb>
2.8125
345
Product Page
Software Dev.
50.628839
95,560,021
In this section we are exploring the java.net package, which provides support for networking in Java in a generic style. All the Java classes for developing a network program are defined in the java.net package. Here we illustrate an example in which a client sends a request (let's say the request is "POST /index.html HTTP/1.0\n\n") to the server for a file named index.html; as soon as the server establishes the connection successfully, it reads the index.html file and sends it to the client.
|Server||Client|
|Listens to port 8080.||Connects to port 8080.|
|Accepts the connection.||Writes "POST /index.html HTTP/1.0\n\n".|
|Reads up until it gets the second end-of-line (\n).||
|Sees that POST is a known command and that HTTP/1.0 is a valid protocol version.||
|Reads a local file called /index.html.||
|Writes "HTTP/1.0 200 OK\n\n".||"200" means "here comes the file."|
|Copies the contents of the file into the socket.||Reads the contents of the file and displays it.|
The above process is an actual transaction via which a client and a server talk to each other.
Every computer on a network has an address, called an Internet address (IP address), which uniquely identifies it on the network. An IPv4 address is 32 bits long and is written as a sequence of four numbers between 0 and 255 separated by dots (.). However, it is a very cumbersome process to remember the computers on a network by their numeric IP addresses, so the Domain Name System (DNS) is there to help you out: it provides a string format (the domain name) that identifies an IP address, sparing the user from tracking so many numbers over a network.
Now let's quickly move on to the networking part of Java and take a look at how it relates to all of these networking concepts. In Java we can build I/O objects across the network by extending the I/O interface. Java supports most of the common networking protocols, e.g. the TCP and UDP protocol families: TCP is used for reliable stream-based I/O across the network, and UDP supports a point-to-point, datagram-oriented model. All the Java networking classes and interfaces are contained in the java.net package, as given below:
The package's interfaces are listed below as well:
Apart from all of this, the address is the fundamental element in sending mail or establishing a connection across the Internet. In Java the InetAddress class is used to encapsulate both the numerical IP address and the domain name for that address. The InetAddress class has no visible constructors; that is why, to create an InetAddress object, we have to use one of the available factory methods, such as getLocalHost(), getByName(), and getAllByName().
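The request/response transaction in the table above can be sketched end to end with plain sockets. This is an illustrative sketch in Python rather than Java (ServerSocket, Socket and InetAddress are the java.net counterparts); the page contents and the OS-assigned port are made up for the example.

```python
import socket
import threading

PAGE = b"<html>hello</html>"  # stands in for the local /index.html file

def serve_once(server: socket.socket) -> None:
    """Server side: accept one connection, read the request, answer it."""
    conn, _ = server.accept()
    with conn:
        request = b""
        # Read until the blank line that ends the request
        while b"\n\n" not in request and b"\r\n\r\n" not in request:
            chunk = conn.recv(1024)
            if not chunk:
                break
            request += chunk
        if request.startswith(b"POST /index.html"):
            conn.sendall(b"HTTP/1.0 200 OK\n\n" + PAGE)
        else:
            conn.sendall(b"HTTP/1.0 404 Not Found\n\n")

def fetch(port: int) -> bytes:
    """Client side: connect, write the request, read the whole reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(b"POST /index.html HTTP/1.0\n\n")
        sock.shutdown(socket.SHUT_WR)
        reply = b""
        while True:  # read until the server closes the connection
            chunk = sock.recv(1024)
            if not chunk:
                break
            reply += chunk
    return reply

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()
reply = fetch(port)
```

The status line comes back first ("HTTP/1.0 200 OK"), followed by the blank line and the file contents, exactly as in the transaction table.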
<urn:uuid:5c6c9e9d-2b98-4440-bdaf-65144929978b>
3.9375
656
Documentation
Software Dev.
67.678109
95,560,023
What we don't need in object-oriented programming
Once I heard Alberto Brandolini giving a keynote at an Italian conference, saying, among other insights, that Lego bricks were one of the most abused metaphors in software engineering. One of the most abused sayings, instead, is this one:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. -- Antoine de Saint-Exupéry
which is often used in reference to application design. However, it is in general true that you should strive to write code as simple as possible (in length, but in complexity too) to obtain the end result specified by a requirement or a user story. If there are two designs for a feature, we would almost always choose the simpler. This article takes a broader view of object-oriented programming, and asks the following questions:
- What are the constructs we don't need in an object-oriented language? What can we take away without hampering our ability to build software?
- If we actually don't need them, we can eliminate them from the language and make it simpler. At the same time, applications written in it become more consistent, since there are fewer features to abuse.
Forgive me if I refer to patterns instead of code, but I don't want this discussion to become language-specific.
instanceof (type checks)
Usually, when you check whether an object is an instance of a class, you can implement the scenario with polymorphism instead.
goto
Should I say anything? Even Java, the most widely used object-oriented language in the world, has goto as a reserved keyword (I hope it is not implemented).
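Since the article deliberately stays language-agnostic, here is a hypothetical sketch (in Python) of that replacement: the external type check disappears and each class answers for itself. All the names are invented for illustration.

```python
class Invoice:
    def __init__(self, amount: float) -> None:
        self.amount = amount

class DiscountedInvoice(Invoice):
    pass

# Before: an instance-of check selects the behaviour from the outside.
def total_with_check(invoice: Invoice) -> float:
    if isinstance(invoice, DiscountedInvoice):
        return invoice.amount * 0.8   # 20% discount, hard-coded in the caller
    return invoice.amount

# After: each class owns its behaviour, and the type check disappears.
class PolyInvoice:
    def __init__(self, amount: float) -> None:
        self.amount = amount

    def total(self) -> float:
        return self.amount

class PolyDiscountedInvoice(PolyInvoice):
    def total(self) -> float:
        return self.amount * 0.8
```

(Per the article's later advice about concrete subclassing, PolyInvoice could itself be an abstract base.)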
break and continue (disguised jumps)
They are really gotos which have put on camouflage, and any professor of computer science would keep an eye on you if you used them (mine did, and I was too young to recognize the value of clean code).
subclassing
Subclassing is very often abused, with classes extending other ones only to gain access to their methods, in spite of missing is-a relationships; maybe we can prohibit it altogether. We would gain almost automatic conformance to the Liskov Substitution Principle, since it would involve only interfaces and their implementations. The main reason for its infringements, sharing method code, would go away. Since eliminating subclassing is a bit extreme, an idea that struck me while re-reading Uncle Bob's explanation of the SOLID principles is to eliminate subclassing of concrete classes. This way, we would never introduce a dependency on an implementation detail, but we could still make use of abstract classes, which are very valuable in an API.
protected (subclassing scope)
Of course, if we eliminate subclassing, we can throw away the intermediate protected scope altogether. And if we choose a single level of subclassing towards an abstract class, this scope would do less harm than it does today. For example, in Zend Framework the scope is almost always protected instead of private, and the user is free to subclass and access any internal part of the framework's components. Two scenarios then follow: either the framework developers cannot modify the code anymore without breaking backward compatibility (so why did they bother restricting the scope in the first place?), or the user has to revise his classes when upgrading from 1.6.1 to 1.6.2 because of sudden changes.
public (for fields)
Public fields are the death of encapsulation, and I can't remember the last time I introduced one; it was a long time ago. Let's just throw them away.
private (for methods)
How do you test private methods?
Often we arrive at a situation where we have complex private methods we want to test independently. This is often a smell of a class that does more than one thing (Single Responsibility Principle) and should be split up. In this case, we can move the private methods away and transform them into public methods on a private collaborator, safeguarding encapsulation while simplifying the picture, and being able to test them thanks to the new contract established between the main class and the extracted one.
switch (conditional chain)
Switch chains, especially if duplicated in different places, can be replaced with polymorphism. The idea is to push the switch up to the choice of the object, and push the particular operations in each of the switch branches into the chosen object. If we can replace switches, we can also replace ifs, which are a special case of switch with only two branches. The reasoning is the same and the advantages are clear: you would have only one execution path to test for each method, and you would be sure of which lines are executed: all of them. Misko Hevery says that often most of the ifs can be replaced with polymorphism, but you should be pragmatic about it and not try to kill them all. The problem with the last two eliminations is that in our languages we can't remove all ifs yet, but only push them up into the creation of the object graph. But if we had a language or a construct capable of transforming textual configuration into a graph of objects, we could get rid of ifs even in the factories. Dependency Injection containers actually transform configuration into objects wired in different ways, so we may be near a solution.
Some of these crude eliminations are extreme and maybe not even practical. However, before making a trade-off between two scenarios, a programmer has to know the extreme situations that arise from the overuse or absence of a construct or feature.
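The private-method refactoring described above can be sketched like this (all names are hypothetical): the hard-to-test private step becomes the public contract of an extracted collaborator, which the original class still holds privately.

```python
# Before: the parsing step is a private method, hard to exercise in isolation.
class ReportBefore:
    def render(self, raw: str) -> str:
        return ", ".join(self._parse(raw))

    def _parse(self, raw):
        return [field.strip() for field in raw.split(";") if field.strip()]

# After: the step becomes the public contract of an extracted collaborator.
class FieldParser:
    def parse(self, raw):
        return [field.strip() for field in raw.split(";") if field.strip()]

class Report:
    def __init__(self, parser=None):
        self._parser = parser or FieldParser()  # the collaborator stays private

    def render(self, raw: str) -> str:
        return ", ".join(self._parser.parse(raw))
```

FieldParser can now be tested directly, and Report can be tested with a stub parser, without exposing any internals.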
I am the first to recognize having committed (in both senses) if and subclassing abuse. I hope to have made you think a bit about subclassing, scopes and conditionals, so that the next time you start coding you will ask yourself: do I really need yet another subclass, a very long private method, or this duplicated switch construct?
Opinions expressed by DZone contributors are their own.
<urn:uuid:3d852417-2e6a-48eb-8ca1-758b168778be>
2.625
1,313
Truncated
Software Dev.
41.404172
95,560,030
I wrote a basic Python program to pick either the number 1 or 2 (you can think of it like heads or tails). It's just a random number generator that is forced to choose between either 1 or 2, and with every number generated I made the program track the longest streak of consecutive identical numbers. For example, if the RNG picked the number 2 ten times in a row, the high-score streak would be 10. Simple enough. Every time the streak was broken, the streak counter would reset but keep track of the highest streak. I wanted to see how many times it could roll the same number in a row. I ran a few tests: 1 thousand tries, 2 million, 5 million, and 10 million. Obviously the 10-million run took longer; I could have used much higher numbers (50 million, 100 million, 1 billion) but I didn't want to sit and wait very long for the PC to calculate. I just wanted a general feel. This is not for any project or anything; I just did this pointless task for some intuition. Anyway, across all my tests, the highest number of consecutive identical numbers in a row was 28, reached during the 10-million test. This number obviously fluctuated and was not consistent on every test, but 28 was the highest I saw. The computer rolled the number "two" 28 times in a row, which is quite a streak; if you ran longer tests you would see a much higher streak, I'm guessing. Per 1 million flips, I would frequently see 18 as the highest streak, or thereabouts. My question is: the rate at which a computer does these calculations is so fast that the delay between each try is microseconds, compared to if we were to actually sit there and flip a coin 1 million times. Does the delay between each try matter in any way? For example, is flipping a digital coin 10 million times in 1 second the same as flipping a digital coin 10 million times in 1 hour? You flipped the exact same number of times, but the delay between each flip was longer.
Would this in any way change some sort of RNG-god mechanics, or at least the way computers generate random numbers? Any random thoughts welcome.
I was thinking today about how lucky I am and how I should think this way more. I wondered: what are the odds, statistically, of being born in a place that is not a war-torn country, having an education, a car, a job, food, access to clean water, etc.? I know this could be misconstrued as "look at me, I have it better off than other people", but it's not like that; I'm just trying to see how statistically likely we are to have things we take for granted at one time or another.
I'm looking at Connecticut's unclaimed property website (www.CTBigList.com). Based on the statistics at the top of the page, the minimum value of a property is $50, the average value is $543 ($775,974,374.42/1,427,886), and the median value is between $100 and $500. I want to claim that 20 properties would likely be worth thousands of dollars (>= $2,000). My gut tells me that's right, but I don't know how to establish it. It can't be 20 times the average ($543). I think that 20 times the median (> $100) is closer, but it seems I need to know or make assumptions about the distribution of the values; i.e., there are some odd distributions where that might not work. So, is my statement right, and how do you prove it mathematically? Or is there not enough information?
The problem is simple: a box in a store contains 36 packs. One of those packs contains a rare item, but no one knows which pack it is. A first guy buys one pack. Then he goes back home without showing what he got, leaving the shop with 35 packs remaining. A second guy buys one pack. How would you formulate his odds of getting the rare item, considering the first guy may have got it? Then he goes back home without showing what he got, leaving the shop with 34 packs remaining. A third guy buys one pack... a fourth guy... a 36th...
How would you formulate the odds for each guy, and which one has the best chance of getting the rare item? Sorry for my bad English, I'm not a native English speaker. And thanks for your help :-)
In a major city in the northwest U.S., 28% of the days experience some period of rain. What is the probability that at least 12 days out of a random sample of 31 days experience some period of rain? I need to know the steps and answer for this question. Any help much appreciated.
So once you land a third tail, it's game over. What about 15 heads with only 4 tails allowed?
My friend had two dice and told me to guess a number 0-12, and I was wondering what the percent chance was that I would guess correctly. I don't know how to do that math, but somebody here might :)
I have C = Σ_i KL(P_i || Q_i) = Σ_i Σ_j p(j|i) log( p(j|i) / q(j|i) ). This is your basic KL divergence. P_i represents the conditional probability distribution over all data-points given x_i, and Q_i represents the conditional probability distribution over all map points given y_i. I understand that this is asymmetric. The only question is that the paper mentions: "In particular, there is a large cost for using far map points to represent data-points that are close (i.e., using a small q(j|i) to model a large p(j|i))." Can anybody help in understanding this statement? Here is the link to the full paper: https://www.cs.tau.ac.il/~rshamir/abdbm/scribe/17/lec05.pdf, pg 2.
The prior probability that I have the disease is d; if I do, then the probability of my test result being positive is p, and if I don't, the probability of my test result being positive is q. I need to find the posterior probability that I have the disease for each test result (positive or negative). My attempt is below, but I don't know if I've got it right or if I have misunderstood what the probabilities represent.
P(T) = probability that the test is positive
P(F) = probability that the test is negative
P(D) = probability that I have the disease
P(ND) = probability that I don't have the disease
Probabilities from the description:
P(D) = d
P(ND) = 1 - d
P(T|D) = p
P(T|ND) = q
P(F|D) = 1 - p
P(F|ND) = 1 - q
A priori probability of a positive (or negative) test:
P(T) = P(T|D)P(D) + P(T|ND)P(ND) = pd + q(1-d)
P(F) = P(F|D)P(D) + P(F|ND)P(ND) = (1-p)d + (1-q)(1-d)
P(D|T) = P(T|D)P(D)/P(T) = pd / (pd + q(1-d))
P(D|F) = P(F|D)P(D)/P(F) = (1-p)d / ((1-p)d + (1-q)(1-d))
If I put these into Excel (with 25,000 random probabilities for p, q and d) all my probabilities are between 0 and 1, so have I got it right?
Bumping my previous question: we were trying to figure out whether the probability of a flush is higher than the probability of a straight, and got this far (if you're not familiar with "trash": each player is dealt 7 cards, then each player chooses 3 cards to pass on to the next player, assume clockwise):
The probability you are holding 4 cards of the same suit out of 7 dealt is: 52/52 (can be any one card) * 12/51 (12 remaining in the suit) * 11/50 * 10/49. Is that correct? What about the other 3? I think you add this total to itself however many times for all combinations of the 4 out of 7?
For the flush, assume you are now holding 4 cards of the same suit after passing on your "trash". If someone were to pass you one card, the random probability of it being the same suit would be 9/52 (there are 9 more cards of that same suit). Since we don't have any information on whether that player or another is trying to go for a flush of the same suit, we won't try to account for that possibility for now. The probability that one of the 3 cards is the same suit is 9/52 + 9/52 + 9/52 = 27/52.
For straights: the probability you are holding 4 consecutive cards out of the 7 dealt is: 52/52 (any card is fine) * 4/51 (very specific next 4) * 4/50 * 4/49 * 4/48. Is that correct? Add up for however many combinations of 4/7 there are.
- Assuming you are holding 4 consecutive cards (different suits are fine), the probability of getting the next card in the sequence is 4/52. Same with the card before (unless your current cards begin or end with an Ace). So the probability of one of the 3 passed cards being a needed card is 4/52 + 4/52 + 4/52 = 12/52.
- If you almost have a straight but need a card in the middle of the sequence, at best the probability of getting the correct card is 4/52. Is that correct?
What are the correct answers, and can you detail the steps? So a straight is harder to obtain than a flush in this version of the game? Which means a straight should beat a flush and not the other way around. But practically, you should go for the flush more often.
I am working on a project that uses the Martingale betting system. Given that the probability of a win is 18/38 and the probability of a loss is 20/38, how would I find the following probability: what is the probability that I lose 6 times in a row before I win 10 times (not necessarily in a row)? I know how to solve for the probability that I lose 6 times in a row, but I don't know how to incorporate it with the second component of "before I win 10 times". At first I thought the two were independent, but if I consider extremes (e.g. before I win once, before I win 10,000 times) I understand that they are not independent.
I've been going back through some probability material, and upon doing some problems I had the following thoughts and wanted to confirm their truth or be schooled otherwise. Are all of the following statements true?
P(A or B) is always greater than or equal to P(A).
P(A and B) is always less than or equal to the minimum of P(A) and P(B).
Any confirmation or contradiction would be well appreciated.
The 4th Int'l Conference on Probability and Stochastic Analysis (ICPSA 2019)
Conference Date: January 5-7, 2019
Conference Venue: Sanya, China
Website: http://www.engii.org/conference/ICPSA2019/
Online Registration System: http://www.engii.org/RegistrationSubmission/default.aspx?ConferenceID=1075
Publication and Presentation: All accepted papers will be published in the "Journal of Applied Mathematics and Physics" (ISSN: 2327-4352), a peer-reviewed open-access journal that can ensure the widest dissemination of your published work.
Contact Us: The conference is soliciting state-of-the-art research papers in the following areas of interest: probability theory / theoretical probability; infinite dimensional analysis; quantum probability; probability distributions; law of large numbers; central limit theorem; stochastic processes; measure-valued processes; stochastic networks; large deviation theory; stochastic differential equations; stochastic partial differential equations; population and evolutionary models; applied stochastic models; stochastic analysis; hierarchical mean-field analysis; applications of probability and stochastics; other related topics.
I am working on designing a board game with dice, but it's been a decade since high school math, and my probability is a little rusty. I want to chart the probability of how many dice will roll a certain number or higher when rolling different numbers of dice. So for example, let's say I'm rolling three dice: what is the probability that only one die will be >3? What is the probability that 2 dice will be >3? All three dice >3? No dice >3? I want to be able to chart this for 1-6 dice being thrown and 0-6 'hits' being scored, and to be able to change what counts as a 'hit'. Can someone help refresh me on what formulas I need to use? Thanks!
On Saturday, the NHL draft lottery was held.
The rules are as follows: there are 3 lotteries. The first lottery has pre-set odds (listed below), with the worst team having odds of 18.5% (185 combinations out of 1000), 13.5% for the 2nd worst, etc., all the way down to 1% for 15th place. The winner of the 1st lottery receives the 1st overall pick. Following the first lottery, they redraw for the second lottery; if the winning combination belongs to the team who won the first lottery, they redraw. The winner of the 2nd lottery receives the 2nd overall pick. The same rules apply to the 3rd draw; the winner receives the 3rd overall pick. Note that because of the re-draw possibility, the odds for each lottery shift depending on who won the prior lottery (e.g. if Buffalo (18.5%) wins the first lottery, Ottawa's odds in the 2nd lottery move to 16.564%; if Florida (1%) wins the first lottery, Ottawa's odds in the 2nd lottery would be 13.636%). The NHL announced the final draft position of teams 4-15, essentially indicating (by omission) which 3 teams had won the 3 lotteries. The final order of these 3 teams was to be revealed later that night. Prior to learning their true order, and knowing only which 3 teams were in the top 3 but not their order, what is the probability of each team winning the 1st overall pick? The teams were (with original odds in brackets): I have figured out two very different answers to this question, and I cannot disprove either one yet. Thanks for your help!
ORIGINAL ODDS FOR 1ST LOTTERY:
NY Rangers 6%
NY Islanders 3.5%
St. Louis 1.5%
If I have a deck of cards and I can draw one card each turn, and I'm looking for one specific card (let's say the king of hearts), I'm guessing my chances would be 1 in 42, then 1 in 41, etc. But if I drew one card per turn and also shuffled the deck, would my chances stay the same or increase?
Situation 1: I roll 1 die. The result is the number of dice I can roll for the second part. I roll that many dice and need to roll either 5s or 6s.
I can reroll any result of 1, 2, 3 or 4 once only. Example: I roll a die and get a 3. I roll 3 dice and get 1, 4, 5. I reroll the 1 and the 4 and get 2, 6. Therefore I have gotten 2 results of 5 or 6 overall.
The second situation is the same as the first, except I can reroll the first die if I want. So if I roll a 1, I can reroll the die and, say, get a 5. But I can't reroll the dice in the second part. Which method has the higher chance of getting more results of 5 or 6 in the second part?
I was playing some games with my buddy a few weeks back and we each rolled a d20. They both were the same. We re-rolled. Same. Re-rolled. Same. Only on our fourth roll did we get a different pair of numbers. The same thing happened 2 hours later. I can only imagine this is so unlikely that it's possible nobody else has experienced this before, especially in such a short time frame. Can someone please tell me the probability of 2 d20s rolling equal numbers 3 times in a row?
Suppose a random variable X has a p.m.f. P(x). Let Y = aX + b for some a and b. Find the distribution of Y in terms of P(x) when b >= 0. Now I'm really skeptical about this, but after racking my brain I have the feeling that the answer to this is simply P(x)? Any help is highly appreciated.
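Two of the questions above lend themselves to a quick numeric sketch: the disease-test posterior derivation can be checked with made-up values of d, p and q, and the board-game question (how many of n dice land at or above a face threshold) is a binomial tabulation. The input values below are invented for illustration.

```python
from math import comb

def posterior(d, p, q):
    """Posterior P(disease | positive test) and P(disease | negative test),
    where d = prior P(D), p = P(T|D), q = P(T|ND)."""
    pos = p * d / (p * d + q * (1 - d))                    # P(D|T)
    neg = (1 - p) * d / ((1 - p) * d + (1 - q) * (1 - d))  # P(D|F)
    return pos, neg

def hit_chart(n_dice, threshold, sides=6):
    """P(exactly k dice roll >= threshold), for k = 0..n_dice, on fair dice."""
    p = (sides - threshold + 1) / sides  # chance a single die is a 'hit'
    return [comb(n_dice, k) * p**k * (1 - p)**(n_dice - k)
            for k in range(n_dice + 1)]
```

A positive test should raise the prior and a negative one lower it, and the hit-chart probabilities for k = 0..n must sum to 1; both serve as sanity checks on the formulas.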
<urn:uuid:9de4e866-74ed-412a-9380-af3c388a5fde>
2.5625
3,708
Comment Section
Science & Tech.
76.951513
95,560,075
Ukrainian astronomers have discovered a large asteroid that could hit Earth in 2032, though the impact risk is minimal, according to current estimates. The 410-meter-wide (1,350-foot) minor planet, which has been named 2013 TV135, was first discovered last weekend by the Crimean Astrophysical Observatory in southern Ukraine, according to the International Astronomical Union’s Minor Planet Center. As of Thursday, the discovery had been confirmed by five more astronomy groups, including in Italy, Spain, the UK and Russia’s Siberian republic of Buryatia, the center said on its website. The asteroid has been classified as potentially hazardous, a formal tag given to celestial bodies whose orbits bring them closer than 7.5 million km from Earth’s orbit. The minimal distance between the orbits of 2013 TV135 and Earth is currently put at 1.7 million km. However, it also has a 1 in 63,000 chance of colliding with Earth on August 26, 2032, according to available estimates. Astronomers will be able to better evaluate the impact risk of the asteroid – and even determine its possible impact site on Earth – in 2028, Timur Kryachko of the Crimean Astrophysical Observatory told RIA Novosti on Thursday. “Here’s a super-task for our space industry,” Russian Deputy Prime Minister Dmitry Rogozin, who has lobbied for Russia to develop asteroid defense systems, said of the asteroid on Twitter on Thursday. 2013 TV135 has been given a 1 out of 10 rating on the Torino Scale, which estimates asteroid impact hazards. Only one other asteroid currently has the same rating, with collision risks for all others being “effectively zero,” according to NASA’s Near Earth Object Program. 2013 TV135 colliding with Earth would create an explosion estimated to be equivalent to 2,500 megatons of TNT – 50 times greater than the biggest nuclear bomb ever detonated.
<urn:uuid:02f2dfed-b65f-4f4e-bea4-5cf8a0020b8b>
3.296875
551
News Article
Science & Tech.
43.75708
95,560,078
The typical climatic diagram for this zonobiome shows a very high rainfall for all months of the year and an almost horizontal curve for the mean monthly temperature, which is around 27°C in low-lying areas (Fig. 1.1). Daily temperature fluctuations are, by contrast, far greater, the mean annual value varying from 6° to 12°C. When accurate data are available, this mean value is recorded on the climatic diagram, left of the temperature curve; the absolute maximum and the mean daily maximum of the warmest month are also recorded (above on the ordinate), as are the average daily minimum of the coldest month and the absolute minimum (below on the ordinate). Absolute fluctuations in temperature at the equator can, even at sea level, be 13–18°C. This can easily be overlooked if the monthly average alone is taken into account. This is a diurnal climate.
Keywords: Potential Evaporation, Humid Tropic, Congo Basin, Free Water Surface, Virgin Forest
<urn:uuid:0e5644cc-4ed4-4933-bb4c-d66712777c3b>
3.5
218
Truncated
Science & Tech.
38.921626
95,560,116
Writing in the Inderscience publication International Journal of Environment and Waste Management, the team explains how bacteria that grow on particles in a sand filter effectively extract the compounds that produce the taste. Natural earthy and musty smells in our drinking water are not usually a health risk, but many consumers prefer a fresher taste. This represents an ongoing challenge to the water companies. "Although adverse odours do not present a risk to human health, their presence often leads to a misconception that the water is unsafe for drinking," explains Gayle Newcombe, Research Leader at the Applied Chemistry Unit of the Australian Water Quality Centre in Salisbury, South Australia. She and her colleagues have investigated the effect of sand filters in extracting the most common earthy molecules, geosmin and methylisoborneol, from the water supply. These two compounds occur naturally in water and are non-toxic. Newcombe and her colleagues at the Australian Water Quality Centre and Bridget McDowall in the School of Chemical Engineering at The University of Adelaide have now demonstrated that they can remove geosmin and MIB using biologically active sand filters. In such filters, the particles of sand are allowed to accumulate a biological film of beneficial bacteria that absorb and break down the biodegradable odour molecules. The team tested sand filter material taken from working water treatment plants. They found that sand taken from a 26-year old filter had a well-established biofilm and was able to remove any detectable traces of geosmin and MIB in less than two weeks. Fresh filter sand with no biofilm, in contrast, was essentially ineffective, removing less than two-thirds of the geosmin and MIB even after several months of operation. The team is now investigating how to accelerate the development of active biofilms for water purification. 
Jim Corlett | alfa
<urn:uuid:92b7ecbd-4c76-44cf-ab47-591ae88eb17a>
3.109375
1,018
Content Listing
Science & Tech.
37.58222
95,560,118
or log in The animation presents the largest mountains, plains, rivers, lakes and deserts of the Earth. This animation demonstrates the most important relief features, surface waters and their... The rise and drop of sea levels caused by the gravitational force of the Moon. Lateral compressive forces cause rocks to form folds. This is how fold mountains are formed. Natural gas and petroleum are among the most important sources of energy and raw materials today. The duration of solar radiation, the angle of the Sun's rays and the surface albedo all have an... A periodic climate pattern that occurs across the tropical Pacific Ocean every five years. One third of the continents is covered with desert. Today, desertification is a growing problem.
<urn:uuid:ec4cfcb4-55b3-47f5-9e28-663425c4770d>
3.1875
152
Content Listing
Science & Tech.
48.663841
95,560,119
Current Observations Table Explainer
Ucluelet High School

Temperature: 11.7 °C (L: 11.1 °C, H: 17.1 °C)
Humidity: 0 %
Pressure: 1021 hPa
Insolation: 0 W/m2
UV Index: 0
Rain: 0.00 mm
Wind Speed: 1 km/hr W (Gust: 3.2 km/hr)

The current observations table shows measured values from the most recent minute that have reached the main database. We record minute averages of each variable, but these are calculated from up to 30 samples from the instruments. Under the current temperature we show the low (L) and high (H) from the last 24 hours of the

Under the minute average wind speed we show the strongest single observation of wind speed that we recorded in that minute. We call this the wind gust and it's shown on the wind graphs as a cyan line. This value gives some idea of the minute to minute variability of wind speed at the site.

Last Modified: July 27 2017 11:40:58.
<urn:uuid:2bda10e7-7de3-4770-b775-738c02d56682>
2.796875
267
Truncated
Science & Tech.
81.908236
95,560,125
Take any prime number greater than 3 , square it and subtract one. Working on the building blocks will help you to explain what is special about your results. Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make? A 2-Digit number is squared. When this 2-digit number is reversed and squared, the difference between the squares is also a square. What is the 2-digit number? Can you explain the surprising results Jo found when she calculated the difference between square numbers? A man paved a square courtyard and then decided that it was too small. He took up the tiles, bought 100 more and used them to pave another square courtyard. How many tiles did he use altogether?
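Several of these puzzles yield to a quick brute-force check. The snippet below is an illustrative sketch (the function names are my own, not from the puzzle source): it searches for the 2-digit reversal number and verifies the prime pattern numerically. For the first puzzle, the reason the check succeeds is that p² − 1 = (p − 1)(p + 1): both factors are even (one divisible by 4) and one of them is divisible by 3, so the product is always a multiple of 24.

```python
import math

def reversal_square_candidates():
    # Brute-force the 2-digit puzzle: find every n whose square, minus the
    # square of its digit reversal, is a positive perfect square.
    hits = []
    for n in range(10, 100):
        r = int(str(n)[::-1])
        d = n * n - r * r
        if d > 0 and math.isqrt(d) ** 2 == d:
            hits.append(n)
    return hits

def prime_square_minus_one_check(limit=200):
    # Verify that p*p - 1 is divisible by 24 for every prime p > 3 below limit.
    def is_prime(k):
        return k > 1 and all(k % i for i in range(2, math.isqrt(k) + 1))
    return all((p * p - 1) % 24 == 0 for p in range(5, limit) if is_prime(p))
```

Running the search turns up exactly one 2-digit candidate, and every prime tested passes the divisibility check.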
<urn:uuid:b636631e-3c9f-4d95-8ccc-cd21e74bafb5>
3.40625
156
Q&A Forum
Science & Tech.
63.528141
95,560,133
( 8.127446808 t + 0.4850712500, )

At time t = 1/2 minutes the coordinates of the point P are ( 3.1110, ).

Note: at time t = 0 the x-coordinate of the center of the wheel is NOT 0.
<urn:uuid:a7f7d4d3-fbff-4ee4-bab9-b29b227b8a77>
2.53125
123
Truncated
Science & Tech.
85.131125
95,560,148
ATLSTL (http://atlstl.org). The Active Template Library is the leading library for creating COM components, and ATLSTL provides software technologies for assisting in its use. Components are defined within the namespace atlstl.

COMSTL (http://comstl.org). The Component Object Model has become the foundational technology for component software development on the Windows platform. COMSTL provides software technologies for manipulating COM interfaces and APIs. Components are defined within the namespace comstl.

.netSTL (http://dotnetstl.org). The .NET platform is rapidly gaining popularity, and C++.NET is the powerhouse language of the .NET platform. .netSTL enhances the use of Managed C++ by applying STL techniques to the .NET framework. Components are defined within the namespace dotnetstl.

InetSTL (http://inetstl.org). The Internet is the medium of the new millennium, and Internet programming is an important part of many modern C++ programmers' daily lives. InetSTL enhances the use of C++ for Internet Programming by applying STL techniques to Internet APIs. Components are defined within the namespace inetstl.

MFCSTL (http://mfcstl.org). Even if it's showing its age a bit, the Microsoft Foundation Classes are still a widely used technology for the development of user interface applications and components on the Windows platform. MFCSTL provides software technologies for manipulating MFC classes and APIs. Components are defined within the namespace mfcstl.

UNIXSTL (http://unixstl.org). Like its stablemate WinSTL, UNIXSTL provides a number of libraries for programming with today's other main operating system. Components are defined within the namespace unixstl.

WinSTL (http://winstl.org). Microsoft Windows is the pre-eminent desktop platform of our age: whether you love it or hate it, you'll doubtless end up programming on it! WinSTL provides a number of libraries for standardising the dizzying array of different APIs, simplifying your work considerably.
Components are defined within the namespace winstl.
<urn:uuid:7c42a233-8237-4853-a922-f1c3c4ba28d0>
2.625
451
Content Listing
Software Dev.
42.664483
95,560,158
Parametric equations define a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, in which case the equations are collectively called a parametric representation or parameterization. The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction. The reference point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the reference direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is called the angular coordinate, polar angle, or azimuth Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY 3/0 license. Download for free at http://cnx.org/contents/fd53eae1-fa2...email@example.com.
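The two ideas above can be sketched directly in code. This is a minimal illustration with my own function names, not part of the original text: the first function converts a polar point to Cartesian coordinates, and the second gives a parametric representation of a circle, where both coordinates are functions of a single parameter t.

```python
import math

def polar_to_cartesian(r, theta):
    # A point with radial coordinate r and polar angle theta maps to
    # x = r*cos(theta), y = r*sin(theta) in the Cartesian plane.
    return r * math.cos(theta), r * math.sin(theta)

def circle_point(radius, t):
    # Parameterization of a circle centered at the pole: as the parameter t
    # runs over [0, 2*pi), the point traces the full curve.
    return radius * math.cos(t), radius * math.sin(t)
```

For example, the polar point (r, θ) = (2, π/2) maps to the Cartesian point (0, 2).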
<urn:uuid:8e6e975f-24b9-4732-9765-acb2c6dcbd05>
3.859375
240
Truncated
Science & Tech.
37.618606
95,560,190
Assembled bytes conventionally fall into two sections: text and data. You may have separate groups of data in named sections that you want to end up near to each other in the object file, even though they are not contiguous in the assembler source. as allows you to use subsections for this purpose. Within each section, there can be numbered subsections with values from 0 to 8192. Objects assembled into the same subsection go into the object file together with other objects in the same subsection. For example, a compiler might want to store constants in the text section, but might not want to have them interspersed with the program being assembled. In this case, the compiler could issue a ‘.text 0’ before each section of code being output, and a ‘.text 1’ before each group of constants being output. Subsections are optional. If you do not use subsections, everything goes in subsection number zero. Each subsection is zero-padded up to a multiple of four bytes. (Subsections may be padded a different amount on different flavors of as.) Subsections appear in your object file in numeric order, lowest numbered to highest. (All this to be compatible with other people's assemblers.) The object file contains no representation of subsections; other programs that manipulate object files see no trace of them. They just see all your text subsections as a text section, and all your data subsections as a data section. To specify which subsection you want subsequent statements assembled into, use a numeric argument to specify it, in a ‘.text expression’ or a ‘.data expression’ statement. When generating COFF output, you can also use an extra subsection argument with arbitrary named sections: ‘.section name, expression’. When generating ELF output, you can also use the .subsection directive (see SubSection) to specify a subsection: ‘.subsection expression’. Expression should be an absolute expression (see Expressions). If you just say ‘.text’ then ‘.text 0’ is assumed. 
Likewise ‘.data’ means ‘.data 0’. Assembly begins in text 0. For instance:

.text 0
# The default subsection is text 0 anyway.
.ascii "This lives in the first text subsection. *"
.text 1
.ascii "But this lives in the second text subsection."
.data 0
.ascii "This lives in the data section,"
.ascii "in the first data subsection."
.text 0
.ascii "This lives in the first text section,"
.ascii "immediately following the asterisk (*)."

Each section has a location counter incremented by one for every byte assembled into that section. Because subsections are merely a convenience restricted to as there is no concept of a subsection location counter. There is no way to directly manipulate a location counter—but the .align directive changes it, and any label definition captures its current value. The location counter of the section where statements are being assembled is said to be the active location counter.
<urn:uuid:3a0c8b33-d8a6-4625-b3cb-171f40c1fee5>
3.375
660
Documentation
Software Dev.
51.526371
95,560,202
By John Sullivan, Office of Engineering Communications Researchers using satellite imaging have found much greater than expected deforestation since 2000 in the highlands of Southeast Asia, a critically important world ecosystem. Zhenzhong Zeng, a postdoctoral researcher at Princeton University and the lead author of a July 2 article describing the findings in Nature Geoscience, said the researchers used a combination of satellite data and computational algorithms to reach their conclusions. The report shows a loss of 29.3 million hectares of forest (roughly 113,000 square miles or about twice the size of New York state) between 2000 and 2014. Zeng said that represents 57 percent more loss than current estimations of deforestation made by the International Panel on Climate Change. He said most of the forest has been cleared for crops. Because forests absorb atmospheric carbon, and burning forests contribute carbon to the atmosphere, loss of forests could be devastating. An accurate estimation of forest cover also is critical for assessments of climate change. Zeng also said transformation of mountainous regions from old forest to cropland can have widespread environmental impacts from soil retention to water quality in the region. This article addresses professionals like foresters, ecologists, project developers, certifiers, and environmentalists (in the following called natural resources managers) who are new to the field of remote sensing or who want to update their basic knowledge. What will you learn? You will learn to distinguish between various data collection methodologies, understand the pros and cons of different data sources, and select the right data set to answer your questions efficiently. By KATIERA WINFREY Climate scientists credit a new satellite mapping system with helping firefighters battle wildfires, and they say the new system helps better connect fire agencies across the state. 
A lot of the fire-spotting happens at the National Weather Service. The fire-mapping system has proven to be most helpful in rural areas where wildfires popped up recently. Efforts by the Brazilian government over the past 15 years to curb deforestation have been a widely celebrated success, but a new study finds that there’s more deforestation happening in Brazil than official accounts suggest. The study, led by researchers from Brown University, compared data from Brazil’s official Monitoring Deforestation in the Brazilian Amazon by Satellite Project (PRODES) with two independent satellite measures of forest cover. The study found that about 9,000 square kilometers of forestland not included in PRODES monitoring were cleared from 2008 to 2012. That’s an area roughly the size of Puerto Rico. “PRODES has been an incredible monitoring tool and has facilitated the successful enforcement of policies,” said Leah VanWey, co-author of the research and senior deputy director at the Institute at Brown for Environment and Society. “But we show evidence that landowners are working around it in ways that are destroying important forests.” The research is published in the journal Conservation Letters. Read more at: http://phys.org/news/2016-10-significant-deforestation-brazilian-amazon-undetected.html#jCp This research examines the impact of forest management regimes, with various degrees of restriction, on forest conservation in a dry deciduous Indian forest landscape. Forest change is mapped using Landsat satellite images from 1977, 1990, 1999, and 2011. The landscape studied has lost 1478 km2 of dense forest cover between 1977 and 2011, with a maximum loss of 1002 km2 of dense forest between 1977 and 1990. The number of protected forest areas has increased, concomitant with an increase in restrictions on forest access and use outside protected areas. 
Interviews with residents of 20 randomly selected villages indicate that in the absence of alternatives, rather than reducing their dependence on forests, communities appear to shift their use to other, less protected patches of forest. Pressure shifts seem to be taking place as a consequence of increasing protection, from within protected areas to forests outside, leading to the creation of protected but isolated forest islands within a matrix of overall deforestation, and increased conflict between local residents and forest managers. A broader landscape vision for forest management needs to be developed, that involves local communities with forest protection and enables their decision-making on forest management outside strict protected areas.
<urn:uuid:063ac18a-ae88-43ac-b7d6-239f2380c1c6>
3.734375
874
Content Listing
Science & Tech.
26.659066
95,560,266
To help estimate fish populations, scientists experiment with seafloor-mounted sonar systems that monitor fish in the water column above.

Shelikof Strait, in the Gulf of Alaska, is an important spawning area for walleye pollock, the target of the largest--and one of the most valuable--fisheries in the nation. This year, a team of NOAA Fisheries scientists went there to turn their usual view of the fishery upside-down.

The bottom-mounted sonars produce high-quality data. This image shows the abundance of pollock as viewed by the upward-looking sonar at a spawning site on March 15th, 2015. The colors in the image represent the strength of sound reflected from fish, with a strong echo from the sea surface visible at the top of the image.

Scientists have been conducting fish surveys in the Shelikof Strait for decades. They do that in part by riding around in a ship and using sonar systems--basically, fancy fish finders--to see what's beneath them. But in February of this year, scientists moored three sonar devices to the seafloor and pointed them up toward the surface. The devices have been recording the passage of fish above them ever since. Because underwater devices cannot transmit data in real time, the sonar systems have been storing their data internally, leaving scientists in a state of suspense since February. But suspense turned to satisfaction last week when, working in cooperation with local fishermen aboard a 90-foot chartered fishing vessel, scientists retrieved the moorings from the bottom of Shelikof Strait. "The data looked beautiful," said Alex De Robertis, a biologist with NOAA's Alaska Fisheries Science Center, shortly after he cracked open the unit and downloaded the data.

First Attempt with a New Technology

"This was a first trial," De Robertis said. "We're still developing the technology to see how well it works." 
Whether moored on the bottom or carried by a ship, the sonar systems that scientists use work the same way: they emit a ping that echoes off the fish (and anything else in the water column). Based on the strength of the echo, scientists estimate the number of fish in the water. Those estimates are used when setting sustainable catch limits. "Usually we estimate how many fish we have by reading the acoustic echo off their backs," said De Robertis. "In this case, we'll be reading the echo from their bellies." But unlike shipboard sonar, moored sonars are stationary, so the tricky part is choosing the right mooring locations. De Robertis, along with NOAA Fisheries colleagues Chris Wilson and Robert Levine, have analyzed 20 years of survey data to select the three locations used in this study, which they hope will prove representative of the larger Shelikof Strait area. A Long-term Perspective If the technology works, scientists could use it to augment traditional, ship-based surveys. In addition to using sonar, those surveys also involve catching a sample of fish with a trawl, which produces information on the age, size, and physical condition of the fish. However, those surveys offer only a snapshot of what's happening in the water during the time of the survey. In years when the fish aggregate earlier or later than usual, the ship-based surveys might miss some of the action. The experimental sonar system, on the other hand, records over long periods--3 months long in the case of the experimental deployment in Shelikof Strait. "This will give us a new window on what fish populations are doing over time that we wouldn't be able to get any other way," De Robertis said. Scientists will just have to get used to the fact that the window is upside down. Marjorie Mooney-Seuss | EurekAlert! 
4.03125
1,386
Content Listing
Science & Tech.
44.501975
95,560,274
The molecule, nitric oxide (NO), plays critical roles in the human body - from the destruction of invading microorganisms to the relaying of neural signals. But catching NO at work has long eluded scientists because it often exists in minute concentrations and for only short periods of time. Now, MIT chemists have developed a bright fluorescent sensor that, in conjunction with microscopy, captures and illuminates NO in living, functioning cells. The work, reported May 28 in the online issue of Nature Chemical Biology, will aid scientists’ understanding of how and when NO operates. Stephen J. Lippard, the Arthur Amos Noyes Professor of Chemistry at MIT, developed the sensor with an eye toward understanding the role of NO in neural activity. But this work has broad biological applications since NO is produced throughout the body. "Our goal is to detect its formation in spatio-temporal terms, to see where and when it is produced in a cell, and in which collections of cells, and to connect its production with underlying chemical signaling events," Lippard said. Until the 1990s, scientists mainly knew NO as a product of lightning and the combustion engine - and as an ingredient in smog. A simple molecule consisting of one nitrogen and one oxygen atom, it contains an unpaired electron that makes it highly reactive and destructive. "Nobody thought it would be tolerated by a cell, much less used for biological purposes," Lippard said. Then came the stunning discovery that the peculiar blood vessel relaxer Endothelial Derived Relaxation Factor, identified in the 1980s, was actually NO. NO was then unmasked in macrophages (white blood cells), tumors, bones and neurons. In sweat and saliva it has antibacterial properties; in Viagra, rejuvenating effects. Paradoxically, NO often has contradictory behaviors. At some levels, it lowers high blood pressure, destroys invading microorganisms and tumor cells, maintains bone mass and relays neural signals. 
At other levels, it causes septic shock and promotes tumors, arthritis and nerve death. These puzzles make understanding how and when NO operates in cells all the more relevant, and that requires a better means of monitoring it as cells go about their normal business. But existing assays have either been too invasive or measured NO only indirectly. Lippard, together with graduate student Mi Hee Lim, the first author of the study, and postdoctoral researcher Dong Xu, produced a novel NO sensor by attaching a derivative of the widely used cellular imaging agent, fluorescein, to a copper atom. The resulting complex does not fluoresce until the fluorescein, in modified form, is released - which only happens in the presence of NO. The sensor works in real time, in the aqueous, neutral pH conditions of tissues, and at the tiny nanomolar-concentrations of NO found in living cells. How exclusive and selective is the NO detector? To find out, Lim and Xu made a mix of banana-shaped neuroblastomas and M&M-shaped macrophages, which each require different triggers to synthesize NO from a particular amino acid. When they triggered NO production in just the neuroblastomas, they could literally see that the sensor had selectively detected only those cells. "That delighted me the most because we want to detect one cell type selectively in a heterogeneous population of cells," Lippard said. Lippard plans to use this NO sensor to learn about the role of this elusive molecule in neurobiology. In the nervous system, a neuron releases NO at the synapse after receiving a signal from another neuron. NO then diffuses back to the pre-synaptic neuron and surrounding cells, perhaps to say: "I got the message." "The ability to visualize nitric oxide at the nanomolar level in cells and tissues should be of tremendous benefit in determining its effects on long term potentiation (LTP) and neuronal development," commented Michael J. 
Clarke, a chemist at the National Science Foundation, which funded this research. Elizabeth A. Thomson | MIT News Office
<urn:uuid:ac16bdce-6be4-408a-a8d1-774f94952744>
3.484375
1,427
Content Listing
Science & Tech.
37.472561
95,560,297
With climate change, storms are becoming more frequent and ferocious, leading to the accelerated erosion of coastlines. This leaves many coastal communities and low-lying areas increasingly vulnerable to flooding and property damage. Existing methods for protecting coastlines, including seawalls, revetments, and dykes, are expensive, use a lot of concrete and rarely look good. By contrast, coral reefs are a natural and sustainable solution, known to reduce wave heights by over 70%. The accelerated growth of reefs, using Biorock technology powered by CCell paddles, provides long-term coastal protection and enhances marine habitats. The CCell paddles both harness and dampen the energy within waves, with excess power (not needed by Biorock) being sold to the local grid, which helps the system pay for itself. Coral reefs contribute an estimated £108 billion per year to the global economy and provide a habitat for 25% of all known marine species. The world has lost nearly half of all its coral reefs, with the World Resources Institute (WRI) estimating that over 60% of the remaining coral reefs are under threat. Climate change is increasing the stress on corals, with rising water temperatures and acidity leading to coral bleaching and ultimately to their death. Biorock is a proven technique for both repairing existing corals and building new reefs. The technology uses a small electrical current to extract minerals (mostly calcium carbonate) from the sea water to form rock (often called biological concrete) around steel wire frames placed on the seabed. Electricity is the backbone of any modern society, but remote coastal locations often struggle to obtain power, often because of the high cost of importing fuel for their generators. Across many islands, electricity is 3-4 times more expensive than in developed countries like the UK or USA.
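Because the mineral accretion described above is driven by an electrical current, its scale can be roughed out with Faraday's law. The sketch below is a back-of-the-envelope estimate only: the applied current, duration, two-electron stoichiometry and 50% current efficiency are illustrative assumptions, not figures from CCell or Biorock.

```python
# Back-of-the-envelope estimate of mineral accretion on a Biorock-style
# cathode using Faraday's law. All parameters are illustrative
# assumptions, not figures from CCell or Biorock.

F = 96485.0          # Faraday constant, C per mole of electrons
M_CACO3 = 100.09     # molar mass of calcium carbonate, g/mol

def accreted_mass_kg(current_a, days, electrons_per_mol=2, efficiency=0.5):
    """Mass of CaCO3 (kg) deposited by a given current over a given time.

    Assumes `electrons_per_mol` electrons drive precipitation of one mole
    of CaCO3 and that only a fraction `efficiency` of the charge
    contributes to accretion (the rest is lost to side reactions).
    """
    charge = current_a * days * 86400        # total charge, coulombs
    mol_e = charge / F                       # moles of electrons passed
    mol_caco3 = efficiency * mol_e / electrons_per_mol
    return mol_caco3 * M_CACO3 / 1000        # grams -> kilograms

# e.g. a modest 10 A applied continuously for a year (~82 kg under
# these assumed parameters):
print(round(accreted_mass_kg(10, 365), 1))
```

Under these assumptions, a single small paddle-powered electrode could plausibly accrete tens of kilograms of rock per year; real deposition rates depend strongly on current density and water chemistry.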
<urn:uuid:e694903d-cb9d-484e-bad8-ac60b9815d7e>
3.6875
379
Knowledge Article
Science & Tech.
28.029923
95,560,315
- Proceedings of the National Academy of Sciences of the United States of America - Published over 5 years ago The permanent ice cover of Lake Vida (Antarctica) encapsulates an extreme cryogenic brine ecosystem (-13 °C; salinity, 200). This aphotic ecosystem is anoxic and consists of a slightly acidic (pH 6.2) sodium chloride-dominated brine. Expeditions in 2005 and 2010 were conducted to investigate the biogeochemistry of Lake Vida’s brine system. A phylogenetically diverse and metabolically active Bacteria-dominated microbial assemblage was observed in the brine. These bacteria live under very high levels of reduced metals, ammonia, molecular hydrogen (H(2)), and dissolved organic carbon, as well as high concentrations of oxidized species of nitrogen (i.e., supersaturated nitrous oxide and ∼1 mmol⋅L(-1) nitrate) and sulfur (as sulfate). The existence of this system, with active biota, and a suite of reduced as well as oxidized compounds, is unusual given the millennial scale of its isolation from external sources of energy. The geochemistry of the brine suggests that abiotic brine-rock reactions may occur in this system and that the rich sources of dissolved electron acceptors prevent sulfate reduction and methanogenesis from being energetically favorable. The discovery of this ecosystem and the in situ biotic and abiotic processes occurring at low temperature provides a tractable system to study habitability of isolated terrestrial cryoenvironments (e.g., permafrost cryopegs and subglacial ecosystems), and is a potential analog for habitats on other icy worlds where water-rock reactions may co-occur with saline deposits and subsurface oceans. A singular adaptive phenotype of a parthenogenetic insect species (Acyrthosiphon pisum) was selected in cold conditions and is characterized by the remarkable appearance of a greenish colour.
The aphid pigments involve carotenoid genes well characterized in chloroplasts and cyanobacteria and, remarkably, present in the aphid genome, likely by lateral transfer during evolution. The abundant carotenoid synthesis in aphids strongly suggests that a major and unknown physiological role is related to these compounds, beyond their canonical anti-oxidant properties. We report here that the capture of light energy in living aphids results in photo-induced electron transfer from excited chromophores to acceptor molecules. The redox potentials of molecules involved in this process would be compatible with the reduction of the NAD(+) coenzyme. This appears to be an archaic photosynthetic system consisting of photo-emitted electrons that are ultimately funnelled into the mitochondrial reducing power in order to synthesize ATP molecules. We have developed an implantable fuel cell that generates power through glucose oxidation, producing 3.4 μW cm(-2) steady-state power and up to 180 μW cm(-2) peak power. The fuel cell is manufactured using a novel approach, employing semiconductor fabrication techniques, and is therefore well suited for manufacture together with integrated circuits on a single silicon wafer. Thus, it can help enable implantable microelectronic systems with long-lifetime power sources that harvest energy from their surroundings. The fuel reactions are mediated by robust, solid-state catalysts. Glucose is oxidized at the nanostructured surface of an activated platinum anode. Oxygen is reduced to water at the surface of a self-assembled network of single-walled carbon nanotubes, embedded in a Nafion film that forms the cathode and is exposed to the biological environment. The catalytic electrodes are separated by a Nafion membrane. 
The availability of fuel cell reactants, oxygen and glucose, only as a mixture in the physiologic environment, has traditionally posed a design challenge: Net current production requires oxidation and reduction to occur separately and selectively at the anode and cathode, respectively, to prevent electrochemical short circuits. Our fuel cell is configured in a half-open geometry that shields the anode while exposing the cathode, resulting in an oxygen gradient that strongly favors oxygen reduction at the cathode. Glucose reaches the shielded anode by diffusing through the nanotube mesh, which does not catalyze glucose oxidation, and the Nafion layers, which are permeable to small neutral and cationic species. We demonstrate computationally that the natural recirculation of cerebrospinal fluid around the human brain theoretically permits glucose energy harvesting at a rate on the order of at least 1 mW with no adverse physiologic effects. Low-power brain-machine interfaces can thus potentially benefit from having their implanted units powered or recharged by glucose fuel cells. Thirdhand smoke (THS) is the accumulation of secondhand smoke on environmental surfaces. THS is found on the clothing and hair of smokers as well as on surfaces in homes and cars of smokers. Exposure occurs by ingestion, inhalation and dermal absorption. Children living in homes of smokers are at highest risk because they crawl on the floor, touch parents' clothing/hair and household objects. Using mice exposed to THS under conditions that mimic exposure of humans, we show that THS increases cellular oxidative stress by increasing superoxide dismutase (SOD) activity and hydrogen peroxide (H2O2) levels while reducing the activity of antioxidant enzymes catalase and glutathione peroxidase (GPx) that break down H2O2 into H2O and O2. This results in lipid peroxidation, protein nitrosylation and DNA damage. Consequences of these cell and molecular changes are hyperglycemia and insulinemia. 
Indeed, we found reduced levels of insulin receptor, PI3K, and AKT, all important molecules in insulin signaling and glucose uptake by cells. To determine whether these effects on THS-induced insulin resistance are due to an increase in oxidative stress, we treated mice exposed to THS with the antioxidants N-acetyl cysteine (NAC) and alpha-tocopherol (alpha-toc) and showed that the oxidative stress, the molecular damage, and the insulin resistance were significantly reversed. Conversely, feeding the mice chow that mimics a “western diet”, which is known to increase oxidative stress, while exposing the mice to THS, further increased the oxidative stress and aggravated hyperglycemia and insulinemia. In conclusion, THS exposure results in insulin resistance in the form of non-obese type II diabetes (NODII) through oxidative stress. If confirmed in humans, these studies could have a major impact on how people view exposure to environmental tobacco toxins, in particular children, the elderly, and workers in environments where tobacco smoking has taken place. Electronic cigarettes (ECs) are battery-operated devices designed to vaporise nicotine, which may help smokers quit or reduce their tobacco consumption. There is a lack of data on the health effects of EC use among smokers with COPD and whether regular use results in improvement in subjective and objective COPD outcomes. We investigated long-term changes in objective and subjective respiratory outcomes in smokers with a diagnosis of COPD who quit or substantially reduced their tobacco consumption by supplementing with or converting exclusively to EC use. ABSTRACT Fe(II)-oxidizing aerobic bacteria are poorly understood, due in part to the difficulties involved in laboratory cultivation. 
Specific challenges include (i) providing a steady supply of electrons as Fe(II) while (ii) managing rapid formation of insoluble Fe(III) oxide precipitates and (iii) maintaining oxygen concentrations in the micromolar range to minimize abiotic Fe(II) oxidation. Electrochemical approaches offer an opportunity to study bacteria that require problematic electron donors or acceptors in their respiration. In the case of Fe(II)-oxidizing bacteria, if the electron transport machinery is able to oxidize metals at the outer cell surface, electrodes poised at potentials near those of natural substrates could serve as electron donors, eliminating concentration issues, side reactions, and mineral end products associated with metal oxidation. To test this hypothesis, the marine isolate Mariprofundus ferrooxydans PV-1, a neutrophilic obligate Fe(II)-oxidizing autotroph, was cultured using a poised electrode as the sole energy source. When cells grown in Fe(II)-containing medium were transferred into a three-electrode electrochemical cell, a cathodic (negative) current representing electron uptake by bacteria was detected, and it increased over a period of weeks. Cultures scraped from a portion of the electrode and transferred into sterile reactors consumed electrons at a similar rate. After three transfers in the absence of Fe(II), electrode-grown biofilms were studied to determine the relationship between donor redox potential and respiration rate. Electron microscopy revealed that under these conditions, M. ferrooxydans PV-1 attaches to electrodes and does not produce characteristic iron oxide stalks but still appears to exhibit bifurcate cell division. IMPORTANCE Electrochemical cultivation, supporting growth of bacteria with a constant supply of electron donors or acceptors, is a promising tool for studying lithotrophic species in the laboratory. 
Major pitfalls present in standard cultivation methods used for metal-oxidizing microbes can be avoided by the use of an electrode as the sole electron donor. Electrochemical cultivation also offers a window into the poorly understood metabolism of microbes such as obligate Fe(II), Mn(II), or S(0) oxidizers by replacing the electron source with the controlled surface of an electrode. The elucidation of redox-dependent behavior of these microbes could enhance industrial applications tuned to oxidation of specific metals, provide insight into how bacteria evolved to compete with oxygen for reactive metal species, and model geochemical impacts of their metabolism in the environment. - Proceedings of the National Academy of Sciences of the United States of America - Published about 5 years ago The emergence of oxygen-producing (oxygenic) photosynthesis fundamentally transformed our planet; however, the processes that led to the evolution of biological water splitting have remained largely unknown. To illuminate this history, we examined the behavior of the ancient Mn cycle using newly obtained scientific drill cores through an early Paleoproterozoic succession (2.415 Ga) preserved in South Africa. These strata contain substantial Mn enrichments (up to ∼17 wt %) well before those associated with the rise of oxygen such as the ∼2.2 Ga Kalahari Mn deposit. Using microscale X-ray spectroscopic techniques coupled to optical and electron microscopy and carbon isotope ratios, we demonstrate that the Mn is hosted exclusively in carbonate mineral phases derived from reduction of Mn oxides during diagenesis of primary sediments. Additional observations of independent proxies for O2-multiple S isotopes (measured by isotope-ratio mass spectrometry and secondary ion mass spectrometry) and redox-sensitive detrital grains-reveal that the original Mn-oxide phases were not produced by reactions with O2, which points to a different high-potential oxidant. 
These results show that the oxidative branch of the Mn cycle predates the rise of oxygen, and provide strong support for the hypothesis that the water-oxidizing complex of photosystem II evolved from a former transitional photosystem capable of single-electron oxidation reactions of Mn. The extraordinary properties of graphene and carbon nanotubes motivate the development of methods for their use in producing continuous, strong, tough fibres. Previous work has shown that the toughness of the carbon nanotube-reinforced polymer fibres exceeds that of previously known materials. Here we show that further increased toughness results from combining carbon nanotubes and reduced graphene oxide flakes in solution-spun polymer fibres. The gravimetric toughness approaches 1,000 J g(-1), far exceeding spider dragline silk (165 J g(-1)) and Kevlar (78 J g(-1)). This toughness enhancement is consistent with the observed formation of an interconnected network of partially aligned reduced graphene oxide flakes and carbon nanotubes during solution spinning, which act to deflect cracks and allow energy-consuming polymer deformation. Toughness is sensitive to the volume ratio of the reduced graphene oxide flakes to the carbon nanotubes in the spinning solution and the degree of graphene oxidation. The hybrid fibres were sewable and weavable, and could be shaped into high-modulus helical springs. Peat bogs are primarily situated at mid to high latitudes and future climatic change projections indicate that these areas may become increasingly wetter and warmer. Methane emissions from peat bogs are reduced by symbiotic methane oxidizing bacteria (methanotrophs). Higher temperatures and increasing water levels will enhance methane production, but also methane oxidation. To unravel the temperature effect on methane and carbon cycling, a set of mesocosm experiments were executed, where intact peat cores containing actively growing Sphagnum were incubated at 5, 10, 15, 20, and 25°C. 
After two months of incubation, methane flux measurements indicated that, at increasing temperatures, methanotrophs are not able to fully compensate for the increasing methane production by methanogens. Net methane fluxes showed a strong temperature-dependence, with higher methane fluxes at higher temperatures. After removal of Sphagnum, methane fluxes were higher, increasing with increasing temperature. This indicates that the methanotrophs associated with Sphagnum plants play an important role in limiting the net methane flux from peat. Methanotrophs appear to consume almost all methane transported through diffusion between 5 and 15°C. Still, even though methane consumption increased with increasing temperature, the higher fluxes from the methane producing microbes could not be balanced by methanotrophic activity. The efficiency of the Sphagnum-methanotroph consortium as a filter for methane escape thus decreases with increasing temperature. Whereas 98% of the produced methane is retained at 5°C, this drops to approximately 50% at 25°C. This implies that warming at the mid to high latitudes may be enhanced through increased methane release from peat bogs. We describe the first implanted glucose biofuel cell (GBFC) that is capable of generating sufficient power from a mammal’s body fluids to act as the sole power source for electronic devices. This GBFC is based on carbon nanotube/enzyme electrodes, which utilize glucose oxidase for glucose oxidation and laccase for dioxygen reduction. The GBFC, implanted in the abdominal cavity of a rat, produces an average open-circuit voltage of 0.57 V. This implanted GBFC delivered a power output of 38.7 μW, which corresponded to a power density of 193.5 μW cm(-2) and a volumetric power of 161 μW mL(-1). We demonstrate that one single implanted enzymatic GBFC can power a light-emitting diode (LED), or a digital thermometer. In addition, no signs of rejection or inflammation were observed after 110 days implantation in the rat.
<urn:uuid:73948a06-8c0c-47d6-878d-ad87e61f435f>
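The quoted glucose biofuel cell figures can be cross-checked against one another: total power divided by power density gives the implied electrode area, and total power divided by volumetric power gives the implied device volume. A minimal sketch, using only the numbers stated in the abstract:

```python
# Consistency check on the implanted glucose biofuel cell figures quoted
# above: 38.7 uW total power, 193.5 uW/cm^2 power density, 161 uW/mL
# volumetric power.

power_uw = 38.7
density_uw_cm2 = 193.5
volumetric_uw_ml = 161.0

area_cm2 = power_uw / density_uw_cm2      # implied electrode area
volume_ml = power_uw / volumetric_uw_ml   # implied device volume

print(round(area_cm2, 2))   # 0.2 cm^2
print(round(volume_ml, 2))  # 0.24 mL
```

The three figures are mutually consistent with an electrode of about 0.2 cm² and a device volume of roughly a quarter of a millilitre.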
2.78125
3,119
Content Listing
Science & Tech.
12.59475
95,560,318
Depending on the input signal, neurons generate action potentials either near or far away from the cell body, as researchers from Munich predict. This flexibility would improve our ability to localize sound sources. In order to process acoustic information with high temporal fidelity, nerve cells may flexibly adapt their mode of operation according to the situation. At low input frequencies, they generate most outgoing action potentials close to the cell body. Following inhibitory or high-frequency excitatory signals, the cells produce many action potentials more distantly. This way, they are highly sensitive to the different types of input signals. These findings have been obtained by a research team headed by Professor Christian Leibold, Professor Benedikt Grothe, and Dr. Felix Felmy from the Bernstein Center and the Bernstein Focus Neurotechnology in Munich and the LMU Munich, who used computer models in their study. The researchers report their results in the latest issue of The Journal of Neuroscience. Did the bang come from ahead or from the right? In order to localize sound sources, nerve cells in the brain stem evaluate the different arrival times of acoustic signals at the two ears. Being able to detect temporal discrepancies of up to 10 millionths of a second, the neurons have to become excited very quickly. In this process, they change the electrical voltage that prevails on their cell membrane. If a certain threshold is exceeded, the neurons generate a strong electrical signal, a so-called action potential, which can be transmitted efficiently over long axon distances without weakening. In order to reach the threshold, the input signals are summed up. This is achieved more easily the slower the nerve cells alter their electrical membrane potential. 
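The timing argument can be made concrete with a simple far-field model of the interaural time difference (ITD), ITD = d·sin(θ)/c. Only the 10-microsecond neuronal resolution comes from the article; the ear separation of 0.2 m and the speed of sound in air are assumed, illustrative values.

```python
import math

# Far-field model of the interaural time difference (ITD) that
# brainstem neurons evaluate: ITD = d * sin(theta) / c, where theta is
# the sound's azimuth relative to the midline. d = 0.2 m is an assumed,
# illustrative head width; c = 343 m/s is the speed of sound in air.

C = 343.0   # speed of sound, m/s
D = 0.2     # assumed distance between the ears, m

def itd_seconds(theta_deg):
    return D * math.sin(math.radians(theta_deg)) / C

# Maximum ITD (sound directly to one side), in microseconds:
print(round(itd_seconds(90) * 1e6))   # ~583 us

# Azimuth whose ITD equals the article's 10-microsecond resolution:
theta = math.degrees(math.asin(10e-6 * C / D))
print(round(theta, 1))                # ~1 degree off the midline
```

Under these assumptions the full ITD range is only about 0.6 ms, so a 10-microsecond timing resolution corresponds to localizing sources to roughly one degree near the midline.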
These requirements—rapid voltage changes for a high temporal resolution of the input signals, and slow voltage changes for an optimal signal integration that is necessary for the generation of an action potential—represent a paradoxical challenge for the nerve cell. “This problem is solved by nature by spatially separating the two processes. While input signals are processed in the cell body and the dendrites, action potentials are generated in the axon, a cell process,” says Leibold, leader of the study. But how sustainable is the spatial separation? In their study, the researchers measured the axons’ geometry and the threshold of the corresponding cells and then constructed a computer model that allowed them to investigate the effectiveness of this spatial separation. The researchers’ model predicts that depending on the situation, neurons produce action potentials with more or less proximity to the cell body. For high frequency or inhibitory input signals, the cells will shift the location from the axon’s starting point to more distant regions. In this way, the nerve cells ensure that the various kinds of input signals are optimally processed—and thus allow us to perceive both small and large acoustic arrival time differences well, and thereby localize sounds in space. The Bernstein Center Munich is part of the National Bernstein Network Computational Neuroscience in Germany. With this funding initiative, the German Federal Ministry of Education and Research (BMBF) has supported the new discipline of Computational Neuroscience since 2004 with over 170 million Euros. The network is named after the German physiologist Julius Bernstein (1835-1917). Prof. Dr. Christian Leibold Department Biology II Großhaderner Straße 2 82152 Planegg-Martinsried (Germany) Tel: +49 (0)89 2180-74802 S. Lehnert, M. C. Ford, O. Alexandrova, F. Hellmundth, F. Felmy, B. Grothe & C. 
Leibold (2014): Action potential generation in an anatomically constrained model of medial superior olive axons. Journal of Neuroscience, 34(15): 5370–5384. http://neuro.bio.lmu.de/research_groups/res-leibold_ch personal website Christian Leibold http://www.bccn-munich.de Bernstein Center München http://www.uni-muenchen.de LMU Munich http://www.nncn.de National Bernstein Network Computational Neuroscience Mareike Kardinal | idw - Informationsdienst Wissenschaft
3.671875
1,515
Content Listing
Science & Tech.
40.534961
95,560,335
Traditionally, to understand how a gene functions, a scientist would breed an organism that lacks that gene - "knocking it out" - then ask how the organism has changed. Are its senses affected? Its behavior? Can it even survive? Thanks to the recent advance of gene editing technology, this gold standard genetic experiment has become much more accessible in a wide variety of organisms. Now, researchers at Rockefeller University have harnessed a technique known as CRISPR-Cas9 editing in an important and understudied species: the mosquito, Aedes aegypti, which infects hundreds of millions of people annually with the deadly diseases chikungunya, yellow fever, and dengue fever. Researchers led by postdoctoral fellow Benjamin J. Matthews adapted the CRISPR-Cas9 system to Ae. aegypti and were able to efficiently generate targeted mutations and insertions in a number of genes. The immediate goal of this project, says Matthews, is to learn more about how different genes help the species operate so efficiently as a disease vector, and create new ways to control it. "To understand how the female mosquito actually transmits disease," says Matthews, "you have to learn how she finds humans to bite, and how she chooses a source of water to lay her eggs. Once you have that information, techniques for intervention will come." In the study, published March 26 in Cell Reports, Matthews and research assistant Kathryn E. Kistler, both in Leslie B. Vosshall's Laboratory of Neurogenetics and Behavior, adapted the CRISPR-Cas9 system to introduce precise mutations in Ae. aegypti. Previously, to create these types of mutations, scientists relied on techniques that used engineered proteins to bind to specific segments of DNA they wanted to remove, a process that was both expensive and unreliable. CRISPR-Cas9, in contrast, consists of short stretches of RNA that bind to specific regions of the genome where a protein, Cas9, cleaves the DNA. 
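The targeting rule described above (a short guide RNA directing Cas9 to a matching stretch of DNA, which it then cleaves) can be sketched in a few lines. The "NGG" protospacer-adjacent motif (PAM) requirement below applies to the commonly used S. pyogenes Cas9 and is an added detail, not something stated in the article; real design tools also scan the reverse complement and score off-target sites.

```python
# Minimal sketch of CRISPR-Cas9 target-site selection: for S. pyogenes
# Cas9, a 20-nt protospacer must sit immediately 5' of an "NGG" PAM on
# the targeted strand. This scans one strand of an illustrative,
# made-up sequence only.

def find_guides(dna, guide_len=20):
    """Return (position, protospacer, PAM) for every NGG PAM site."""
    dna = dna.upper()
    hits = []
    for i in range(guide_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":               # "NGG": any base, then GG
            hits.append((i - guide_len, dna[i - guide_len:i], pam))
    return hits

demo = "ATGCATGCATGCATGCATGCAGGTTTT"   # toy sequence, not a real gene
for pos, spacer, pam in find_guides(demo):
    print(pos, spacer, pam)            # prints: 0 ATGCATGCATGCATGCATGC AGG
```

Because the PAM rule is this simple, nearly any gene contains many candidate cut sites, which is one reason the technique generalizes so readily across organisms.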
Scientists have been studying how RNA binds to DNA for decades and "the targeting is done with rules that we have a good handle on," says Matthews, which makes it easy to reprogram CRISPR-Cas9 to target any gene. "This amazing technique has worked in nearly every organism that's been tried," says Vosshall, who is Robin Chemers Neustein Professor and a Howard Hughes Medical Institute investigator. "There are lots of interesting animal species out there that could not be studied using genetics prior to CRISPR-Cas9, and as a result this technique is already revolutionizing biology." This work opens the door to learning more about the role of specific genes the Vosshall lab suspects may help mosquitoes propagate, perhaps by finding the perfect spot to lay their eggs. Their protocols will likely also help other scientists apply the same technique to study additional organisms, such as agricultural pests or mosquitoes that carry malaria. "Before starting this project, we thought it would be difficult to modify many genes in the mosquito genome in a lab setting" Matthews says. "With a little tweaking, we were able to make this technique routine and it's only going to get easier, faster, and cheaper from here on out." Zach Veilleux | EurekAlert! NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation Pollen taxi for bacteria 18.07.2018 | Technische Universität München For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. 
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. 
An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.

Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
<urn:uuid:d5f5a2cd-a8da-4855-95d7-8f22b63dc575>
3.640625
1,304
Content Listing
Science & Tech.
40.773254
95,560,336
Why prioritize the Cassava Mosaic Virus?

Cassava is a major food crop in Africa and Asia. Cassava can grow under drought, high temperature and poor soil conditions, but its production is severely limited by viral diseases. Cassava Mosaic Disease (CMD) is one of the most economically important crop diseases in Africa.

MEET OUR TEAM

William Neal Reynolds Professor of Biochemistry, North Carolina State University
William Neal Reynolds Distinguished Professor of Agriculture, North Carolina State University
Assistant Professor in the Department of Ecology, Evolution and Natural Resources, Rutgers University

Recent News and Events

Whiteflies and Cassava Mosaic Virus
<urn:uuid:2470c610-af1e-4197-9e72-e2ec9a15b1cd>
2.8125
337
News (Org.)
Science & Tech.
31.78951
95,560,342
A way of detecting gold deposits beneath the ground has been right in front of our eyes for years. Eucalyptus trees take up gold particles into their leaves, so X-ray imaging of the foliage could replace test drilling. Eucalyptus roots go down a very long way in search of water; some sinker roots reach 40 meters below the surface. If there is gold in the ground it is concentrated in the leaves: it is pushed to the extremities of the tree because gold is toxic to the plant. Research has proven the theory. Trees above gold deposits store gold in their leaves, but in gold-free areas there is no stored gold. The signal is place specific as well: the amount of gold particles in the leaves varies with the level of gold deposits in the ground. There is no threat to the trees, as the amount of stored gold is minuscule. Many types of plants besides eucalyptus store minerals. However, the overwhelming majority of trees in Australia are eucalypts. It is no longer necessary to drill in difficult, rocky places, and taking a few leaves for analysis does not damage the trees. This will make the search for gold much easier and cheaper. Society by Ty Buchanan
<urn:uuid:baa76afd-8074-44b7-8c1a-45f20765635a>
3.828125
242
Personal Blog
Science & Tech.
61.959053
95,560,346
Lipidomic Mass Spectrometry and its Application in Neuroscience
News Jan 09, 2014

The central and peripheral nervous systems are lipid-rich tissues. Lipids, in the context of lipid-protein complexes, surround neurons and provide electrical insulation for the transmission of signals, allowing neurons to remain embedded within a conducting environment. Lipids play a key role in vesicle formation and fusion in synapses. They provide the means of rapid signaling, cell motility and migration for astrocytes and other cell types that surround and support neurons. Unlike many other signaling molecules, lipids are capable of multiple signaling events, based on the different fragments generated from a single precursor during each event.

Until recently, lipidomics suffered from two major disadvantages: (1) the level of expertise required, since an overwhelming amount of chemical detail is needed to correctly identify a vast number of different lipids that can be close in their chemical reactivity; and (2) the high amount of purified compound needed by analytical techniques to determine structures. Advances in mass spectrometry have enabled overcoming these two limitations. Mass spectrometry offers a great degree of simplicity in the identification and quantification of lipids extracted directly from complex biological mixtures.

Mass spectrometers can be regarded as mass analyzers. There are those that separate and analyze the product-ion fragments in space (spatial) and those that separate product ions in time in the same space (temporal). Databases and standardized instrument parameters have further aided the capabilities of the spatial instruments, while recent advances in bioinformatics have made identification and quantification possible using temporal instruments.

The review is published online in World Journal of Biological Chemistry and is free to access.
‘Good Cholesterol’ May Not Always be Good for Postmenopausal Women
News

Postmenopausal factors may have an impact on the heart-protective qualities of high-density lipoproteins (HDL) – also known as ‘good cholesterol’ – according to a study led by researchers in the University of Pittsburgh Graduate School of Public Health. READ MORE

What Makes Good Brain Proteins Turn Bad?
News

The protein FUS is implicated in two neurodegenerative diseases: amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD). Using a newly developed fruit fly model, researchers have zoomed in on the protein structure of FUS to gain more insight into how it causes neuronal toxicity and disease.
<urn:uuid:763e106c-2ebd-4890-ab93-38ba171499f6>
2.6875
497
Content Listing
Science & Tech.
9.559538
95,560,374
Historically, scientists believed that behavioural differences between colonies of chimpanzees were due to variations in genetics. A team at Liverpool, however, has now discovered that variations in behaviour are down to chimpanzees migrating to other colonies, proving that they build their ‘cultures’ in a similar way to humans.

Primatologist Dr Stephen Lycett explains: “We knew there were behavioural differences between chimpanzee colonies, but nobody really knew why. It was assumed that young chimpanzees developed certain behavioural characteristics from the genes passed down from their parents, but there was no evidence to clearly support this. It was also thought that because behaviour was dictated by biology, chimpanzees did not have a ‘culture’ in the same way that humans do.”

By looking at how chimpanzees prepare their food, the research team discovered that one colony used stone tools to crack nuts, whereas another colony used wooden tools as well as stone. They found these methods of preparing food have spread 4,000 km from East to West Africa over more than 100,000 years. The team also found this true of other techniques, such as grooming.

The research suggests that behavioural variety is due to how chimpanzees socialise rather than genetics as previously thought. To investigate the theory further, researchers built an evolutionary tree of chimpanzee behaviour in East and West Africa as well as a genetic family tree. They had expected to find that those with similar genetic patterns also shared behavioural similarities. Instead, they found that some chimpanzees shared behavioural similarities with those that were genetically different from them.

Dr Lycett added: “This explains why some colonies, for example, use similar methods for finding food, adopting certain behaviour and adapting different methods to suit their own environment.
In this sense we can see for the first time that culture exists in our closest relatives.”

Samantha Martin | alfa

Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY

A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
<urn:uuid:0bf5c31b-1bcb-4bd7-a5a2-965205e935d9>
3.78125
957
Content Listing
Science & Tech.
35.714508
95,560,402
Genetics as it applies to evolution, molecular biology, and medical aspects. Moderators: honeev, Leonid, amiradm, BioTeam
- Posts: 21
- Joined: Sun Nov 06, 2005 3:29 pm
At what point in his experiment did Mendel derive the 1:2:1 ratio? What characteristics in pea plants does this ratio represent?
- Posts: 103
- Joined: Thu Jan 05, 2006 10:06 pm
- Location: GA
In a cross between a true-breeding purple pea plant and a true-breeding white pea plant, the F2 generation showed a phenotypic ratio of 3 purple to 1 white (3:1). In terms of genotype, though, there were two categories of purple-flowered plants, PP and Pp, which gives the 1:2:1 genotypic ratio (1 PP : 2 Pp : 1 pp).
"In omnias paratus!"
Who is online
Users browsing this forum: No registered users and 2 guests
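The 1:2:1 genotypic ratio discussed above can be checked by counting the four equally likely gamete combinations in the F1 self-cross (Pp × Pp). A quick sketch of that count (the class and method names here are just for illustration, not from any textbook):

```java
import java.util.Map;
import java.util.TreeMap;

public class Punnett {
    // Cross two Pp heterozygotes: each parent contributes P or p with equal
    // chance, so tallying the four gamete combinations yields the
    // 1 PP : 2 Pp : 1 pp genotypic ratio underlying the 3:1 phenotypes.
    static Map<String, Integer> selfCrossHeterozygote() {
        String[] gametes = {"P", "p"};
        Map<String, Integer> counts = new TreeMap<>();
        for (String fromMother : gametes) {
            for (String fromFather : gametes) {
                // Write the dominant allele first so "pP" and "Pp"
                // count as the same genotype.
                String genotype = fromMother.compareTo(fromFather) <= 0
                        ? fromMother + fromFather
                        : fromFather + fromMother;
                counts.merge(genotype, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(selfCrossHeterozygote()); // {PP=1, Pp=2, pp=1}
    }
}
```

Both Pp combinations collapse into one genotype, which is exactly why the phenotypic ratio is 3:1 while the genotypic ratio is 1:2:1.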
<urn:uuid:0fde3e84-52de-4267-ae44-34312b747bd3>
2.546875
192
Comment Section
Science & Tech.
68.978306
95,560,421
Come to our PaleoTime-BE International Fossil Show in Wijgmaal (BE), on November 11 2018! Contribute knowledge and information to Fossiel.net! How can I help? Most Popular Articles

Parrotfish mouthplate large

The Actinopterygii, or ray-finned fish, are a class of fish within the group of bony fish (Osteichthyes). Unlike cartilaginous fish (Chondrichthyes), bony fish possess a skeleton completely composed of bone. The ray-finned fish are not the only class within the Osteichthyes; the lobe-finned fish (Sarcopterygii) are also placed in this group.

Diplomystus, a well preserved ray-finned fish from the Eocene of Wyoming, USA

Photos or locations for Actinopterygii on this site

Do you have additional information for this article? Please contact the Fossiel.net Team.
<urn:uuid:d9069961-9d9b-4366-b6d8-8f6ba2431e76>
2.796875
212
Knowledge Article
Science & Tech.
37.938624
95,560,460
Gravity: Images and Videos

Gravitational lens, as observed by the Hubble Space Telescope. In this picture…
Effects of gravity on the Moon and Earth: Earth's gravitational force weakens with increasing distance.
The variation in the gravitational field, given in milliGals (mGal), over the Earth's surface gives rise to an imaginary…
Experimental evidence for general relativity: In 1919, observation of a solar…
Laser Interferometer Space Antenna (LISA): LISA, a Beyond Einstein Great Observatory,…
Gravity versus launch speed: The launch speed required for a spacecraft to escape Earth's gravitational pull differs depending on its trajectory.
Gravity map of Earth's ocean surface, computed from radar-altimetry measurements made from orbit by the U.S. satellite Seasat…
Moon gravity map: Gravity maps of the Moon, showing near and far sides, based on Lunar Prospector data.
Figure 9: The gravitational force exerted by the Sun on the Earth produces…
Figure 3: Variation of gravitational acceleration across a finite-sized body leading to differential acceleration relative…
Explanation of gravitational force. (01:57)
Overview of gravity, with a focus on zero gravity.
Sprinting versus falling (01:14): An experiment to demonstrate which is faster over 10 metres: the fastest sprinter in the world or an object pulled by gravity.
General relativity; gravity (03:21): Explore general relativity.
Gravitational waves; relativity: applications (02:37): Learn about the significance of gravitational waves in science and in everyday life.
Apollo 15 commander David Scott dropping a 1.32-kg (2.91-pound) aluminum geological hammer and a 0.03-kg (0.07-pound) falcon… (01:23)
Stability of orbits (01:54): How Earth is able to remain in orbit by balancing speed and gravitational pull from the Sun.
Solar system (02:43): Learn how the solar system, which formed from a roughly spherical cloud, became flat.
Discussion of the forces acting on bodies floating in water.
Sea level measurement (03:25): Learn how sea level is measured.
Learn how gravity dictates the shapes of stones.
Gravitational wave; laser interferometer: LIGO (05:38): Learn about gravitational waves and how scientists in 2015 first directly detected them.
Centrifugal force (03:13): Watch a German air force pilot training to withstand the stresses of flight in a centrifuge that produces centrifugal force…
Dark matter (01:10): A brief lesson on the gravitational effects of dark matter.
Scientists studying the effects of weightlessness.
Various aspects of life in microgravity and how it is simulated on Earth. (02:12)
Gravitational waves (03:23): An overview of gravitational waves, including the announcement of their discovery.
Sir Isaac Newton's formulation of the law of universal gravitation. (02:10)

You may also be interested in...
Media for: Black hole
Media for: Gravitational lens
Media for: Pendulum
Media for: Geomagnetic field
Earth's Atmosphere and Clouds
Merry and Bright: 8 Jolly Christmas Plants
Watch Your Step: 6 Things You Can Fall Into
<urn:uuid:a2dda53f-ca4c-4516-8dcc-80fd44d7193b>
3.796875
813
Content Listing
Science & Tech.
43.441785
95,560,485
CARSON CITY — Reno scientist Joe McConnell was not particularly surprised when he discovered high traces of pollution embedded in ice taken in southern Greenland. After all, talk about greenhouse gases causing global warming and the hole in the ozone layer has been going on for decades. But McConnell noticed that the highest traces of pollution in the 115-meter-long section of ice he examined were deposited around 1900, not in recent years or during the peak of industrialization in Europe and North America in the 1960s and ’70s. In fact, McConnell found that pollution levels in the early 1900s were two to five times the levels of today. In particular, toxic levels of heavy metal pollutants from burning coal — cadmium, thallium and lead — were found in Greenland ice. It didn’t take McConnell, the lead researcher and director of the Desert Research Institute’s Ultra-Trace Chemistry Laboratory, long to figure out why: One hundred years ago was the time when most factories and homes in Europe and the Eastern United States burned coal. “Things are looking better,” McConnell said of today’s pollution emissions in North America and Europe. “The Clean Air Act has made a big difference as has the demise of Soviet industry in the Arctic. A lot of people don’t believe humans can have had such a large amount of impact. But our data showed they have.” Americans should not take his results as a reason to forge ahead with coal-fired power plants and gas-guzzling cars, McConnell said. The study he and fellow DRI researcher Ross Edwards completed was published last month by the National Academy of Sciences. The study was partially funded by the National Science Foundation. 
In their conclusions, the researchers express concern about the effects that pollution generated in China and India will have around the world because of the “rapid coal-driven growth of Asian economies.” This industrialization may pose a risk to the food chain as toxic heavy metals are carried through the atmosphere and deposited in the polar regions, the study contends. “But cleaner burning coal technologies, or better yet, reduced reliance on coal burning, may head off the potential problems,” McConnell wrote. Before taking his DRI job 10 years ago, McConnell worked in the oil industry and in China. He believes the world must end its dependence on fossil fuels and develop non-polluting alternative energy sources. “The United States ought to lead the charge toward renewable energy,” he said. “There is no doubt that coal is a dirty source of power.” Although scrubbers in coal-fired power plants today capture a lot of particulate material, McConnell said, they do not catch all the heavy metals he found in the Greenland ice. He now wants to study ice core samples from Alaska or Russia to determine the effects of pollutants in the Pacific region. In analyzing why pollution was so great 100 years ago, McConnell perused history books. Pittsburgh then was America’s dirtiest city. He said the air over the Steel City was so filled with coal emissions that street lights remained on for days at a time because the sky was dark with the “black fog.” Nearly 70 people died in the Pittsburgh suburb of Donora in 1948 when industrial pollutants from steel plants were trapped by an air inversion. “London fog,” McConnell added, originally wasn’t the term of a line of clothing but referred to the coal-polluted, smoggy air over London that during air inversions killed hundreds of residents. The word “smog” was created in London in 1905. Coal smoke from thousands of chimneys combined with natural fog to form smog. 
McConnell and Edwards collected their ice core samples during a trip to Greenland in May 2004. One-inch by one-inch by 2-foot-long sections of ice were removed from the ice pack and taken to the researchers’ Reno laboratory, where they were melted and analyzed. “We can accurately date ice,” McConnell said. “It is like tree rings.” They analyzed ice dating to 1772. The effects of industrialization started showing up in the 1860s and peaked about 1905. The Greenland ice field is two miles thick, and some of the ice was created 4,000 years ago, McConnell said. McConnell found a nifty use for ice he didn’t need to analyze: He held a party, melted the ice and served the water. “Imagine the feeling of drinking water that was there at the time Thomas Jefferson wrote the Declaration of Independence,” he said. Contact Capital Bureau Chief Ed Vogel email@example.com or 775-687-3901.
<urn:uuid:224f171b-0fa0-4a67-8c11-6913904cec66>
3.25
996
News Article
Science & Tech.
49.915383
95,560,500
The Java language enables you to define a class inside another class. Such a class is called a nested class. Nested classes are classified into:
1) Static nested classes, which are declared with the static modifier.
2) Non-static nested classes, which are not declared static. A non-static nested class is also known as an inner class.
An inner class declared within the body of a method is called a local inner class.

Understand with Example

The tutorial illustrates a local inner class. The code includes an outer class with a method that adds integers to an array; println prints the values stored at the even indices of the array. The inner class GetEvenInteger works much like a Java iterator: it is used to step through the array, reporting whether another element exists and retrieving the current element. The main method instantiates an outer-class object, fills the arrayOfInts array with the specified values, and calls the printEven method, which prints the values at even indices in ascending order.

Here is the code:

Output will be displayed as:
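The original code listing is not reproduced above, so here is a minimal sketch of the local inner class the text describes. The names (Outer, GetEvenInteger, evenValues, arrayOfInts) are taken from or inspired by the prose; the rest, including the sample data, is an assumption rather than the tutorial's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class Outer {
    // Returns the values stored at the even indices of the array,
    // using a local inner class declared inside the method body.
    static List<Integer> evenValues(int[] arrayOfInts) {
        // GetEvenInteger is a local inner class: it may read the enclosing
        // method's (effectively final) parameter directly.
        class GetEvenInteger {
            private int next = 0; // current index in the array

            boolean hasNext() {
                return next < arrayOfInts.length;
            }

            int getNext() {
                int value = arrayOfInts[next];
                next += 2; // step over the odd indices
                return value;
            }
        }

        List<Integer> result = new ArrayList<>();
        GetEvenInteger it = new GetEvenInteger();
        while (it.hasNext()) {
            result.add(it.getNext());
        }
        return result;
    }

    public static void main(String[] args) {
        int[] arrayOfInts = {0, 3, 6, 9, 12, 15};
        System.out.println(evenValues(arrayOfInts)); // [0, 6, 12]
    }
}
```

Because GetEvenInteger is declared inside evenValues, it is invisible to the rest of the program, which is the usual reason to prefer a local inner class over a top-level helper.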
<urn:uuid:d1f08c59-95da-4f04-a3ad-90a493804601>
4
253
Documentation
Software Dev.
47.70148
95,560,504