Dataset columns: text (string, lengths 174 to 655k), id (string, length 47), score (float64, 2.52 to 5.25), tokens (int64, 39 to 148k), format (string, 24 classes), topic (string, 2 classes), fr_ease (float64, -483.68 to 157), __index__ (int64, 0 to 1.48M).
The first burst in the evolution of multicellular organisms falls in the Vendian, the last period of the Proterozoic (Precambrian), about 620-550 million years ago. At that time the climate of our planet was rather cold, and the glaciers that covered the single supercontinent nearly reached the equator. Cold is beneficial for the evolution of sea creatures: in modern seas, significant concentrations of dissolved oxygen, phosphates, and organic matter provide for high biological productivity and the appearance of very large animals. In ancient times the situation was probably similar: the first multicellular organisms lived in cold seawater.

As is well known, there was a sharp increase in faunal diversity in the Cambrian Period. However, the fauna of the preceding period, the Vendian, was rather rich too, as Mikhail A. Fedonkin said in his report of October 17 at the Vavilov Institute of General Genetics (Moscow). Fedonkin is a corresponding member of the Russian Academy of Sciences and head of the Laboratory of Precambrian Organisms at the Paleontological Institute in Moscow.

Elena Kleschenko
<urn:uuid:53bed6f3-0d7d-4691-ae49-1485244b0ce6>
3.3125
819
Content Listing
Science & Tech.
35.771147
95,566,961
Authors: George Rajna. Predicting the properties of subatomic particles before their experimental discovery has been a big challenge for physicists. There's a new particle in town, and it's a double-charmingly heavy beast. Researchers working on the LHCb experiment at CERN's Large Hadron Collider have announced the discovery of the esoterically named Xicc++ particle. One of the fundamental challenges in nuclear physics is to predict the properties of subatomic matter from quantum chromodynamics (QCD), the theory describing the strong force that confines quarks into protons and neutrons and that binds protons and neutrons together. At very high energies, the collision of massive atomic nuclei in an accelerator generates hundreds or even thousands of particles that undergo numerous interactions. The first experimental result has been published from the newly upgraded Continuous Electron Beam Accelerator Facility (CEBAF) at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility. The result demonstrates the feasibility of detecting a potential new form of matter, to study why quarks are never found in isolation. A team of scientists working at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) announced in a statement on Tuesday, Dec. 15, and again on Dec. 16, that it has possibly discovered the existence of a particle integral to nature. In 2012, a proposed observation of the Higgs boson was reported at the Large Hadron Collider at CERN. The observation has puzzled the physics community, as the mass of the observed particle, 125 GeV, looks lighter than the expected energy scale, about 1 TeV. 'In the new run, because of the highest-ever energies available at the LHC, we might finally create dark matter in the laboratory,' says Daniela. 'If dark matter is the lightest SUSY particle, then we might discover many other SUSY particles, since SUSY predicts that every Standard Model particle has a SUSY counterpart.' The problem is that there are several things the Standard Model is unable to explain, for example the dark matter that makes up a large part of the universe. Many particle physicists are therefore working on the development of new, more comprehensive models. They might seem quite different, but both the Higgs boson and dark matter particles may have some similarities. The Higgs boson is thought to be the particle that gives matter its mass. In the same vein, dark matter is thought to account for much of the 'missing mass' in galaxies in the universe. It may be that these mass-giving particles have more in common than was thought. The magnetic induction creates a negative electric field, causing an electromagnetic inertia responsible for the relativistic mass change; it is the mysterious Higgs Field giving mass to the particles. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass ratio by the diffraction patterns. The accelerating charges explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, wave-particle duality, and the electron's spin, building the bridge between the Classical and Relativistic Quantum Theories. The self-maintained electric potential of the accelerating charges is equivalent to the space-time curvature of General Relativity, and since this holds on the quantum level as well, it provides the basis of Quantum Gravity. Comments: 23 Pages.
[v1] 2017-07-31 13:02:33 Unique-IP document downloads: 28 times Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website. Add your own feedback and questions here: You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful.
<urn:uuid:d6183ffd-1e24-450c-bba9-79259beaaa68>
3.359375
867
Comment Section
Science & Tech.
33.019994
95,566,972
1. "7x50" binoculars magnify angles by a factor of 7.0, and their objective lenses have an aperture of 50 mm diameter. (a) According to Rayleigh's criterion, what is the intrinsic angular resolution of these binoculars? Assume that the light has a wavelength of 500 nm. (b) At best, the pupil of your eye has an aperture of 7.0 mm diameter. Compare the angular resolution of your eye divided by a factor of 7.0 with the intrinsic angular resolution of the binoculars. Which of the two numbers determines the actual angular resolution you can achieve while looking through the binoculars?

2. A Newton's ring apparatus consists of a lens with one flat surface and one convex surface resting on a flat plate of glass (Fig. 35-45a). A light ray normally incident on the lens will be partially reflected by the curved surface of the lens and partially reflected by the flat plate of glass. The interference between these reflected rays will be constructive or destructive, depending on the height of the air gap between the lens and the plate. The interference gives rise to the pattern of Newton's rings, shown in Fig. 35-45b. (a) Why is the center of the pattern dark? (b) Show that the radius of the mth dark ring is

r = √(mλR − m²λ²/4)

where λ is the wavelength of the light and R is the radius of curvature of the convex surface of the lens. (c) What is the radius of the first dark ring if λ = 500 nm and R = 3.0 m? Please see the attachment for the diagrams. © BrainMass Inc. brainmass.com, July 15, 2018, 9:19 pm. The solution discusses the answers to the given questions regarding light and interference.
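For a quick numerical check (our own worked note, not part of the posted solution, assuming the small-angle Rayleigh criterion θ = 1.22λ/D):

```latex
% (a) Rayleigh limit of the 50 mm objective at \lambda = 500\,\text{nm}:
\theta_{\min} = 1.22\,\frac{\lambda}{D}
  = 1.22 \times \frac{500\times10^{-9}\,\text{m}}{50\times10^{-3}\,\text{m}}
  \approx 1.2\times10^{-5}\,\text{rad}
% (c) First dark ring (m = 1); the m^{2}\lambda^{2}/4 term is negligible here:
r_{1} = \sqrt{m\lambda R - \tfrac{m^{2}\lambda^{2}}{4}}
  \approx \sqrt{500\times10^{-9}\,\text{m} \times 3.0\,\text{m}}
  \approx 1.2\,\text{mm}
```

For (b), the eye's own diffraction limit at a 7.0 mm pupil is about 8.7 × 10⁻⁵ rad; dividing by the magnification of 7.0 gives about 1.2 × 10⁻⁵ rad, essentially the same as the binoculars' intrinsic limit.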
<urn:uuid:febbfc70-f39f-44f9-b2d4-19c4556f8302>
3.75
398
Tutorial
Science & Tech.
74.649623
95,566,998
Coefficient of Conservatism: Coefficient of Wetness: SMALL-HEADED SUNFLOWER, SMALL WOODLAND SUNFLOWER Found mostly south of Michigan and barely west of the Mississippi. Like H. divaricatus, it is a forest and savanna species. The only Michigan collection (H. S. Pepoon in 1903, MSC) from “dry wood borders” gives no clue as to whether it should be considered indigenous here. Helianthus divaricatus may have disks as small as in this species, but the rays are longer and the leaves nearly or quite sessile. MICHIGAN FLORA ONLINE. A. A. Reznicek, E. G. Voss, & B. S. Walters. February 2011. University of Michigan. Web. July 15, 2018. https://michiganflora.net/species.aspx?id=353.
<urn:uuid:269c3463-8b39-4d73-9cda-7ef876c3129a>
2.71875
199
Knowledge Article
Science & Tech.
62.775198
95,567,000
Define the antiderivative of a function. An antiderivative is a function that has the given function as its derivative. For example, F(x) = x^3 ...

Related questions:
- What is the domain of f/g, given that functions f and g have domains Df and Dg, respectively? (Math)
- Find the parameters a and b included in the linear function f(x) = ax + b so that f^-1(2) = 3 a... (Math)
- Does the sign of the second derivative of a given function f inform you of the concavity of the gr... (Math)
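As a worked check of the definition (our own note, not the truncated original answer): if F(x) = x^3, then F'(x) = 3x^2, so x^3 (and more generally x^3 + C for any constant C) is an antiderivative of 3x^2:

```latex
\frac{d}{dx}\left(x^{3} + C\right) = 3x^{2}
```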
<urn:uuid:48138b27-6705-4235-b77f-36470fd86a8e>
3.296875
123
Q&A Forum
Science & Tech.
89.992477
95,567,001
Solar energy has made it possible for people in developing countries to turn on lights and power devices at their convenience, freeing them from relying on limited local resources, and it is now bringing the same benefits to homeless people. Camps in Berkeley, California, have installed donated solar panels to provide easy access to power for those in need.

Back on November 4th, Bay Area Rapid Transit evicted numerous homeless people from their camp along Adeline Street. It was another eviction in a long string of them between the city of Berkeley and the "First They Came for the Homeless" group. Most of the residents transferred over to Old City Hall, while a smaller group settled just north of the old site. A third camp is now located at the edge of Aquatic Park.

With aid from Sam Clune, a former mortgage broker, these homeless camps were hooked up with solar panels. Clune lost his recreational vehicle in late 2016 when its registration and smog test certifications had expired. He lost all of his belongings in the vehicle when it was towed away by police, and he has lived at Aquatic Park since. Before losing the RV, Clune had installed solar panels on it, which gave him the skills needed to install more panels at the homeless camps in Berkeley. He also praised the advantages this power source had over gas generators, which broke down constantly, made a lot of noise in operation, and were not safe for the environment.

A 915-watt system was first installed at Aquatic Park: three solar panels linked to four golf cart batteries, with extension cords branching out to the residents in their tents. Instead of needing to find a spot to charge devices or get some light, this new source provides consistent, reliable power. "The big difference is not what you can do with electricity. It's what you do not have to do," Clune told The Mercury News. "Instead of sitting in a coffee shop for three hours a day charging stuff, building your whole day around it, we can now accomplish something else with our day." Clune notes that this frees up other public locations, like libraries, where numerous homeless people attempt to charge their phones and computers.

Solar panel systems have been installed at the other camps as well, although the one at Old City Hall doesn't receive as much sunlight; more panels have been added for further generation, but they can only fill about two golf cart batteries. In all, the three camps combined have a capacity of 2,080 watts, which is enough for the 40 people who inhabit these areas. Clune says further energy will be produced in the summer, when there is more sunlight and the systems are converted to 24 volts instead of 12.
<urn:uuid:c6a35603-392b-44ac-b42a-9d7841940a3c>
2.671875
777
News Article
Science & Tech.
49.779537
95,567,003
Nitrogen is most often considered to be the limiting nutrient for plant growth in marine waters. As a result, knowledge of nitrogen loading and ambient water-column concentrations is considered critical to understanding the response of aquatic ecosystems to nutrient over-enrichment, a process known as eutrophication when it results in the excess production of organic matter. Plant production in many estuarine systems may also be limited by light availability, as a result of high turbidity caused by sediments, dissolved organic matter, and phytoplankton in the water column. Light limitation resulting from human-induced increases in turbidity is known to be particularly deleterious to seagrass production and distribution in some ecosystems, and it also plays an important role in determining how phytoplankton respond to nutrient enrichment. EPA is developing water quality criteria for estuaries that require knowledge of both total nitrogen and light availability (measured as photosynthetically active radiation, PAR). Through the National Estuarine Research Reserve (NERR) System-Wide Monitoring Program (SWMP), inorganic nutrient concentrations, chlorophyll-a concentration, and a number of hydrographic and water quality parameters are sampled on a monthly basis at 7 sites in the Great Bay system. In addition, these same parameters, as well as bacteria concentrations, are measured at a number of sites in Great Bay and Hampton Harbor through the National Coastal Assessment (NCA) funded through the EPA. This project takes advantage of these existing monitoring activities to collect and analyze particulate organic nitrogen (PON), dissolved organic nitrogen (DON), and photosynthetically active radiation (PAR) at up to 10 existing sample sites in the New Hampshire seacoast region. When combined with existing dissolved inorganic nitrogen measurements, PON and DON allow the entire Total Nitrogen (TN) pool to be quantified. PAR measurements provide, for the first time, an estimate of the light availability in the system. New Hampshire Estuaries Project. Pennock, Jonathan, "2004 Great Bay Organic Nitrogen (PON & DON) and Light Extinction (PAR) Monitoring Program" (2005). PREP Reports & Publications. 181.
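Stated as a formula (our own one-line summary of the standard partitioning of the nitrogen pool, with DIN denoting the dissolved inorganic nitrogen already measured by these programs):

```latex
\text{TN} = \text{DIN} + \text{DON} + \text{PON}
```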
<urn:uuid:13b6c1af-20bf-46fb-8380-b77360132a43>
3.453125
455
Academic Writing
Science & Tech.
9.746825
95,567,040
The galaxy just got MORE puzzling: 'Dark matter' which scientists hoped would 'fill in' our theory of the universe isn't there, after 400-star scan in Milky Way

'Dark matter' - the mysterious substance thought to 'glue' the universe together - might not exist, throwing current theories of the universe into chaos. 'Dark matter' is thought to make up around 83% of the universe by mass - and to 'hold together' galaxies - but a scan of 400 stars near our Sun found no trace of it. The study, using the La Silla telescope in Chile, is the biggest of its type ever conducted.

Dark matter is thought to make up 83% of the mass of the universe - but a recent survey found no trace of it, deepening the mystery of what our universe is actually made of. The La Silla telescope (a fork-mounted Ritchey-Chretien) was built by Zeiss and has been in use at La Silla since 1984.

Dozens of scientific projects on Earth are searching for dark matter, many using detectors buried deep underground in mines - but the Chilean scientists say they are unlikely to find it. The most accurate study so far of the motions of stars in the Milky Way has found no evidence for dark matter in a large volume around the Sun. A team using the MPG/ESO 2.2-meter telescope at ESO's La Silla Observatory, along with other telescopes, has mapped the motions of more than 400 stars up to 13,000 light-years from the Sun. From this new data they have calculated the mass of material in the vicinity of the Sun, in a volume four times larger than ever considered before.

'The amount of mass that we derive matches very well with what we see - stars, dust and gas - in the region around the Sun,' says team leader Christian Moni Bidin, of the Universidad de Concepcion, Chile. 'But this leaves no room for the extra material - dark matter - that we were expecting. Our calculations show that it should have shown up very clearly in our measurements. But it was just not there!'

Blue filaments weave in and out of a nebula: dark matter is thought to form a 'halo' around galaxies, and to 'structure' many objects in space.

'Despite the new results, the Milky Way certainly rotates much faster than the visible matter alone can account for. So, if dark matter is not present where we expected it, a new solution for the missing mass problem must be found. Our results contradict the currently accepted models. The mystery of dark matter has just become even more mysterious. Future surveys, such as the ESA Gaia mission, will be crucial to move beyond this point,' concludes Christian Moni Bidin.

Dark matter is a mysterious substance that cannot be seen, but shows itself by its gravitational attraction for the material around it. This extra ingredient in the cosmos was originally suggested to explain why the outer parts of galaxies, including our own Milky Way, rotated so quickly, but dark matter now also forms an essential component of theories of how galaxies formed and evolved. Today it is widely accepted that this dark component constitutes about 80% of the mass in the universe, despite the fact that it has resisted all attempts to clarify its nature, which remains obscure. All attempts so far to detect dark matter in laboratories on Earth have failed. By very carefully measuring the motions of many stars, particularly those away from the plane of the Milky Way, the team could work backwards to deduce how much matter is present.
The motions are a result of the mutual gravitational attraction of all the material, whether normal matter such as stars, or dark matter. Astronomers' existing models of how galaxies form and rotate suggest that the Milky Way is surrounded by a halo of dark matter. They are not able to precisely predict what shape this halo takes, but they do expect to find significant amounts in the region around the Sun. But only very unlikely shapes for the dark matter halo -- such as a highly elongated form -- can explain the lack of dark matter uncovered in the new study.
<urn:uuid:de3cc1ef-5a7e-45a5-a6dc-2a8666e00de6>
3.046875
1,048
Truncated
Science & Tech.
40.093028
95,567,046
Forest recovery since 1860 in a Mediterranean region: drivers and implications for land use and land cover spatial distribution

Land use and land cover (LULC) change is a major part of environmental change. Understanding its long-term causes is a major issue in landscape ecology. Our aim was to characterise LULC transitions since 1860 and assess the respective and changing effects of biophysical and socioeconomic drivers on forest, arable land and pasture in 1860, 1958 and 2010, and of biophysical and socioeconomic drivers and distance from pre-existing forest on forest recovery for the two time intervals. We assessed LULC transitions by superimposing the 1860, 1958 and 2010 LULCs using a regular grid of 1 × 1 km points in a French Mediterranean landscape (195,413 ha). We tested the effects of drivers using logistic regressions, and quantified pure and joint effects by deviance partitioning. Over the whole period, the three main LULCs were spatially structured according to land accessibility and soil productivity. LULC was driven more by socioeconomic than biophysical drivers in 1860, but the pattern was reversed in 2010. A widespread forest recovery occurred mainly on steeper slopes, far from houses and close to pre-existing forest, owing to the abandonment of traditional practices. Forest recovery was better explained by biophysical than by socioeconomic drivers and was more dependent on distance from pre-existing forest between 1958 and 2010. Our results show a shift in the drivers of LULC and forest recovery over the last 150 years. Contrary to temperate regions, the setting aside of agricultural practices on difficult land has strengthened the link between biophysical drivers and LULC distribution over the last 150 years.

Keywords: forest transition; historical maps; land cover change; land use change; logistic regression; long-term; northern Mediterranean; transition matrix

The 1860 LULC and lithology maps were provided by the PNRL, and the 1958 orthophotographs were provided by the National Institute of Geographic and Forestry Information. This work is part of a PhD thesis supported by the Région Provence-Alpes-Côte-d'Azur and IRSTEA. The UMR 1137 Forest Ecology and Ecophysiology is supported by a grant overseen by the French National Research Agency (ANR) as part of the "Investissements d'Avenir" program (ANR-11-LABX-0002-01, Lab of Excellence ARBRE). The authors thank J. Wu and two anonymous reviewers for their constructive and helpful comments. We also thank P. K. Roche and K. Verheyen for their advice.
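A minimal sketch of the transition-matrix tally this abstract describes, superimposing two dates' land-cover classes at the same grid points, might look like the following (our own illustration in Java, not the authors' code; the class codes and sample arrays are invented):

```java
/** Tallies LULC transitions between two dates sampled on the same grid points.
 *  Illustrative class codes: 0 = forest, 1 = arable land, 2 = pasture. */
public class TransitionMatrix {

    static int[][] count(int[] lulcT1, int[] lulcT2, int nClasses) {
        int[][] m = new int[nClasses][nClasses];
        for (int i = 0; i < lulcT1.length; i++) {
            m[lulcT1[i]][lulcT2[i]]++;  // row: class at t1; column: class at t2
        }
        return m;
    }

    public static void main(String[] args) {
        int[] y1860 = {0, 1, 1, 2, 0, 2};  // six sample grid points in 1860
        int[] y1958 = {0, 0, 1, 0, 0, 2};  // the same points in 1958
        int[][] m = count(y1860, y1958, 3);
        System.out.println("arable -> forest points: " + m[1][0]);  // prints 1
    }
}
```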
<urn:uuid:326ef6be-9b8d-42d7-9734-ee31b6bea3cb>
2.78125
1,457
Academic Writing
Science & Tech.
48.089609
95,567,085
The basic principle of the white light speckle method for strain analysis is reviewed, with a theoretical analysis of the recording and reconstruction processes of specklegrams, including the effect of defocus. Ways to increase the sensitivity are proposed. The method's applicability to measuring in-plane displacement, out-of-plane displacement, surface strain on a curved object, interior strain in a 3-D object, and large deformation is exemplified. F. P. Chiang, "Theory And Applications Of The White Light Speckle Method For Strain Analysis," Optical Engineering 21(4), 214570 (1 August 1982). https://doi.org/10.1117/12.7972953
<urn:uuid:35dd9acb-f1ae-4d70-a85c-02195cdb5ca5>
2.8125
148
Truncated
Science & Tech.
50.397273
95,567,094
- The discs for this game are kept in a flat square box with a square hole for each disc. Use the information to find out how many discs of each colour there are in the box.
- Complete the magic square using the numbers 1 to 25 once each. Each row, column and diagonal adds up to 65.
- Can you make square numbers by adding two prime numbers together?
- Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
- Find a number that is one short of a square number and, when you double it and add 1, the result is also a square number. Can you use this information to work out Charlie's house number?
- Mrs Morgan, the class's teacher, pinned numbers onto the backs of three children. Use the information to find out what the three numbers were.
- Cut differently-sized square corners from a square piece of paper to make boxes without lids. Do they all have the same volume?
- Are these statements always true, sometimes true or never true?
- Can you make a cycle of pairs that add to make a square number using all the numbers in the box below, once and once only?
- In 1871 a mathematician called Augustus De Morgan died. De Morgan made a puzzling statement about his age. Can you discover which year De Morgan was born in?
- One block is needed to make an up-and-down staircase, with one step up and one step down. How many blocks would be needed to build an up-and-down staircase with 5 steps up and 5 steps down?
- This activity creates an opportunity to explore all kinds of number-related patterns.
- Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?
- These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like?
- Does a graph of the triangular numbers cross a graph of the six times table? If so, where? Will a graph of the square numbers cross the times table too?
- What is the value of the digit A in the sum below: [3(230 + A)]^2 = 49280A
- Each light in this interactivity turns on according to a rule. What happens when you enter different numbers? Can you find the smallest number that lights up all four lights?
- Show that 8778, 10296 and 13530 are three triangular numbers and that they form a Pythagorean triple.
- Think of a number, square it and subtract your starting number. Is the number you're left with odd or even? How do the images help to explain this?
- Using your knowledge of the properties of numbers, can you fill all the squares on the board?
- Imagine a pyramid which is built in square layers of small cubes. If we number the cubes from the top, starting with 1, can you picture which cubes are directly below this first cube?
- How many four digit square numbers are composed of even numerals? What four digit square numbers can be reversed and become the square of another number?
- Use the interactivities to complete these Venn diagrams.
- A woman was born in a year that was a square number, lived a square number of years and died in a year that was also a square number. When was she born?
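As one worked illustration of the reasoning these puzzles invite (our own note, not part of the original listing), the "square it and subtract your starting number" puzzle has a one-line answer:

```latex
n^{2} - n = n(n - 1)
```

One of n and n - 1 is always even, so the result is always even.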
<urn:uuid:52f34d05-4c8d-4ae2-98b2-aee820e455b6>
3.421875
699
Tutorial
Science & Tech.
70.721398
95,567,139
An occluded front is drawn in purple. It is a cold front that overtakes a warm front. They commonly occur close to a maturing low pressure system. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Additionally, cold core funnel clouds are possible if shear is significant along the cold front. Small isolated occluded fronts often remain for a time after a low pressure system has decayed, and these create cloudy conditions with patchy rain or showers.
<urn:uuid:a6ebde64-5dbc-4e53-8fc7-4bbd51b89fe4>
3.171875
273
Knowledge Article
Science & Tech.
29.94466
95,567,147
Environmental metrics are designed to assess the environmental impact of a technology or activity. The impact is primarily related to using natural resources (renewable or non-renewable) and generating waste. The ultimate sustainability goal is to minimize the environmental impact by using fewer non-renewable resources and generating less waste and pollution. Since the complete elimination of these factors is hardly possible, it is also important to evaluate the rate at which the environment can absorb the impacts and recover.

Embodied energy / Emergy / Transformity

The concepts of embodied energy, emergy, and transformity are introduced as universal measures in environmental accounting theory. The basic approach in that theory is to use energy units for assessing the inputs and outputs of various natural and industrial systems. All types of energy and real-wealth products are related to the primordial source, solar energy, through transformity. The theory states that, going through multiple transformations via both technological and natural converters, available energy acquires new quality, while the load on the environment due to those transformations increases. For example, on a hot day we use an air conditioner. The energy to power the air conditioner comes from the electricity grid. Prior to that, the electric energy is converted by an electromechanical generator at a power plant from the kinetic or thermal energy of steam, which is in turn produced by combustion of fossil fuels. The energy stored in the fossil fuels (which were originally biomass) was solar energy transformed by plants into organic matter via photosynthesis. So solar energy is the original component of the final energy used by the air conditioner (Figure 3.1).

The transformations of the primary types of energy through an array of energy conversion technologies described in the example above demonstrate the idea of transformity applied to a particular type of usable energy, such as electricity. In other words, transformity indicates how many transformations are necessary to obtain a particular sort of energy in a usable form, and also how "costly" those transformations are for the environment. Quantification of the energy inputs and outputs in terms of primary solar energy may be a tricky task; however, a number of studies have provided such data and enabled energy flow analysis for environmental systems.

Emergy is defined as the available solar energy used up directly or indirectly to make a service, product, fuel, or another form of usable resource. This term essentially means the solar energy equivalent. Transformity, in this case, is the equivalence factor:

Emergy [seJ] = Energy stored or available [J] × Transformity [seJ/J]

Emergy is usually measured in solar energy joules [seJ], and transformity is therefore expressed as a ratio of solar energy joules to regular joules. Here we use tree logs as an example for expressing transformity. The key energy transformation involved in the production of tree logs is photosynthesis, the natural process that converts CO2 gas and solar radiation into biomass. The calculation is done for 1 ha of forest. Here, the solar emergy flow essentially indicates how much solar energy is supplied by the sun onto that 1 ha area.
This would depend on the solar irradiance, or insolation, which in turn depends on the geographic location of the forest, the local weather profile, and other factors. The energy flow, in this case, is the energy content of the wood produced by the 1 ha forest per year. The result can be read as: 3,846 joules of solar energy are used for each joule of energy stored in the logs. To be accurate, transformity has to be evaluated taking into account the larger surroundings of the system and its specific conditions. We also see from this example that only part of the available solar energy is captured and converted to usable stored energy (logs), while the rest is dissipated or redirected in this system.

Several quantitative environmental metrics have been defined based on the emergy theory (see the system diagram in Figure 3.2). Figure 3.2 is a system diagram showing the energy flows and transformations within a generic locality (surrounded by the system boundary). The Economic Use box can be seen as a "transformer" of the available energy and resources into some yield (Y), i.e., some product needed to sustain this system. The inputs to the system are classified as renewable resources, non-renewable resources, local resources, and non-local (purchased) resources. It is presumed that system sustainability is favored by using renewable and local energy resources; the sum of these categories is denoted by R on the diagram. On the contrary, non-renewable (N) and non-local (purchased, F) resources work against sustainable development. These presumptions set the background for devising certain sustainability metrics in this study.

One such metric, which characterizes the environmental impact of a transformational process, is the Environmental Loading Ratio (ELR):

ELR = (F + N) / R

From this relationship, we can see that the more non-renewable and outside resources are involved in the process, the higher the ELR index. An increase in renewable energy use translates into a lower ELR value. As you can guess, a lower ELR is beneficial for the environment.

Another useful index introduced here is the Energy Yield Ratio (EYR):

EYR = Y / F

This metric characterizes the system's capability to exploit local resources (renewable or not). The more the system depends on imported resources or services (increasing F), the lower the EYR and the higher the system's vulnerability.

Finally, the Sustainability Index (SI) combines ELR and EYR as follows:

SI = EYR / ELR

Obviously, for a higher sustainability "score", we are interested in having the highest EYR versus the lowest ELR. Within this approach, SI can be used as an aggregate measure to characterize the sustainability function of a given process, technology, or economy. Please see further explanation of this method and example calculations of metrics in the reading material referenced below.

Journal article: Brown, M.T., and Ulgiati, S., Ecological Engineering 9 (1997) 51-69. This paper explains the calculation of environmental and sustainability indices based on the available energy flows. It illustrates the process of devising sustainability metrics and applying them to a number of technologies and products. Please study this article. In this lesson activity, you will be asked to perform a simple calculation of the environmental metrics based on the approach described herein. The article is available as a PDF file in the Lesson 3 Module on Canvas or can be accessed through the databases of the PSU Library system.
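To make the three indices defined above concrete, here is a minimal sketch in Java (our own illustration, not from the lesson; the flow values are made up, and the yield is taken as Y = R + N + F, as in the Brown and Ulgiati paper cited above):

```java
/** Minimal sketch of the emergy-based indices ELR, EYR and SI.
 *  All flows are emergy values in solar emjoules (seJ); numbers are illustrative. */
public class EmergyIndices {

    /** Environmental Loading Ratio: ELR = (F + N) / R. */
    static double elr(double f, double n, double r) {
        return (f + n) / r;
    }

    /** Yield Ratio: EYR = Y / F, with the yield Y = R + N + F. */
    static double eyr(double f, double n, double r) {
        double y = r + n + f;  // total emergy driving the process
        return y / f;
    }

    /** Sustainability Index: SI = EYR / ELR. */
    static double si(double f, double n, double r) {
        return eyr(f, n, r) / elr(f, n, r);
    }

    public static void main(String[] args) {
        double f = 2.0e15;  // purchased, non-local inputs (F)
        double n = 1.0e15;  // local non-renewable inputs (N)
        double r = 3.0e15;  // renewable and local inputs (R)
        System.out.printf("ELR = %.2f%n", elr(f, n, r));  // 1.00
        System.out.printf("EYR = %.2f%n", eyr(f, n, r));  // 3.00
        System.out.printf("SI  = %.2f%n", si(f, n, r));   // 3.00
    }
}
```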
Book: Odum, H.T., Environmental Accounting, John Wiley & Sons, 1996, pp. 1-15. This book introduces the emergy theory and method for the evaluation of environmental and economic use. Chapters 1 and 2, especially, would help you understand the basics of this approach. This is an optional reading, and the book is not provided in electronic format. However, if you are interested in this topic and would like to use this approach in your own assessments, it is a proper resource to explore.

Note that the above-described approach to assessing a system's sustainability is just a single illustration of how sustainability metrics can be devised. The parameters chosen by the authors were specific to their objectives. The calculations they provide answer some of the questions, but may not answer other questions that a stakeholder may have. In that respect, setting the objectives for your assessment and stating clear definitions and assumptions is a very important step in any assessment study in order to make the results meaningful.

The Kaya Equation (introduced by the economist Yoichi Kaya) is another example of an environmental metric, which helps to recognize different contributions to the total CO2 emissions of a country. It incorporates population, level of economic activity, level of energy consumption, and carbon intensity:

CO2 = P × (GDP/P) × (E/GDP) × (CO2/E)

where P = population, GDP = gross domestic product, (GDP/P) = GDP per capita, (E/GDP) = energy intensity per unit of GDP, and (CO2/E) = carbon emissions per unit energy consumed. GDP is the market value of all officially recognized final goods and services produced within a country in a given period of time. GDP per capita is often considered an indicator of a country's standard of living. The last two terms on the right side of the equation are metrics that need to be lowered in order to benefit the environment.

The term (CO2/E) is technology dependent. The more fossil fuel burning is involved in the production of consumable energy (energy conversion), the higher the "carbon cost" of that energy, and this factor indicates that carbon cost. Cleaner technologies are characterized by a lower (CO2/E), or even zero in an ideal case. From the systems perspective, though, there may be no zero-emission technologies, since manufacturing, maintenance, and support-system operation of such energy conversion systems may still require a certain amount of energy from fossil fuels. For example, a "green bus" uses a hydrogen fuel cell stack as an engine and emits only water via the reaction H2 + ½O2 (air) → H2O. However, manufacturing such a bus requires equipment operated from the grid, which distributes electricity from a fossil fuel power plant. Furthermore, maintenance of this bus over its lifetime may require other non-renewable resources. Therefore, its carbon "footprint" may be quite low, but still non-zero (unless we decide on different system boundaries).

The Kaya model allows estimating how changing technological solutions for energy conversion can help the economy in terms of emission reduction. Determination of the CO2/E factor provides a quantitative scale for measuring environmental impact in terms of "carbon cost". The CO2/E metric is common in many assessment studies discussing alternative energy sources. We need to keep in mind that reported values usually reflect the lifecycle, "from cradle to grave" emissions, i.e., those related to technology manufacturing, operation, and decommissioning altogether (not only operational emissions).
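A toy calculation with the Kaya identity shows how the decomposition is used (all numbers below are invented for illustration; a real analysis would use national statistics):

```java
/** Toy Kaya-identity calculation: CO2 = P * (GDP/P) * (E/GDP) * (CO2/E). */
public class KayaIdentity {
    public static void main(String[] args) {
        double p = 5.0e7;                  // population (illustrative)
        double gdpPerCapita = 3.0e4;       // GDP per person, $/person
        double energyIntensity = 8.0e6;    // energy per unit of GDP, J/$
        double carbonIntensity = 6.0e-11;  // emissions per unit of energy, tCO2/J

        double co2 = p * gdpPerCapita * energyIntensity * carbonIntensity;
        System.out.printf("Total emissions: %.2e tCO2/year%n", co2);

        // Halving the carbon intensity (a cleaner energy mix) halves emissions,
        // which is the kind of what-if analysis the Kaya model supports:
        System.out.printf("With CO2/E halved: %.2e tCO2/year%n", co2 / 2.0);
    }
}
```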
Take a look at this example of a National Renewable Energy Laboratory (NREL) study whose goal was to compare the lifecycle greenhouse gas emissions of various energy technologies. The study took into account the total estimated emissions from more than 2,100 LCA publications and related those to the total amount of energy generated by those systems during their lifetime operation. The results are summarized on the bar graph (p. 2 of the fact sheet). Interpreting the graph, answer the following questions for yourself:
- What units are used to express the CO2/E metric?
- Energy from which technology (out of those studied) has the highest "carbon cost"? Which one has the lowest "carbon cost"?
- Which stage of the technology lifecycle results in the most CO2 emissions, in the case of renewable energy systems and in the case of fossil fuel energy systems?

Various Internet sites use combinations of environmental metrics to calculate the so-called ecological footprint. This is an illustration of how environmental metrics can be used to compare human lifestyles, which essentially comes down to a comparison of the technologies people use. These calculators are far from precise and use generalized information on environmental impacts. Here are a couple of calculators you can check just for fun:
<urn:uuid:58b0910e-a5d1-495e-a742-24eef5f39b65>
3.4375
2,408
Knowledge Article
Science & Tech.
27.367081
95,567,185
Java Performance Tuning
Tips October 2004

The ABCs of Synchronization, Part 1 (Page last updated August 2004, Added 2004-10-31, Author Jeff Friesen, Publisher java.net). Tips:
- Java 1.5 (5.0) introduces the java.util.concurrent.locks.Lock interface for implementing locking operations that are more extensive than those offered by synchronized methods and synchronized statements.
- You may need to synchronize access when iterating a collection if another thread could update that collection at the same time.
- Deadlock is the situation where locks are acquired by multiple threads, no thread holds its desired lock, but some threads hold the locks needed by other threads.

The ABCs of Synchronization, Part 2 (Page last updated September 2004, Added 2004-10-31, Author Jeff Friesen, Publisher java.net). Tips:
- J2SE 5.0's java.util.concurrent.locks package includes Condition, an interface that serves as a replacement for Object's wait and notify methods. You can use Condition implementations to create multiple condition variables that associate with one Lock object, so that a thread can wait for multiple conditions to occur.
- [Article includes a simple producer/consumer example].
- Java's memory model permits threads to store the values of variables in local memory for performance reasons. This means that different threads can see different values for a variable where access/update to the variable is not synchronized.
- Excessive synchronization can cause a program's performance to suffer.
- Each variable marked volatile is read from main memory and written to main memory; local (thread) memory is bypassed.
- J2SE 5.0 introduces java.util.concurrent.atomic, a package of classes that extends the notion of volatile variables to include an atomic conditional update operation, which permits lock-free, thread-safe programming on single variables.
- A countdown latch is a synchronizer that allows one or more threads to wait until some collection of operations being performed in other threads finishes. This synchronizer is implemented by the CountDownLatch class (see the sketch after this list).
- A cyclic barrier is a synchronizer that allows a collection of threads to wait for each other to reach a common barrier point. This synchronizer is implemented by the CyclicBarrier class and also makes use of the BrokenBarrierException class.
- An exchanger is a synchronizer that allows a pair of threads to exchange objects at a synchronization point. This synchronizer is implemented by the Exchanger<V> class, where V is a placeholder for the type of objects that may be exchanged.
- A semaphore is a synchronizer that uses a counter to limit the number of threads that can access limited resources.
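As a concrete companion to the CountDownLatch tip above, here is a minimal, self-contained sketch (our own example, in current Java syntax rather than the 2004-era idiom; names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

/** The main thread blocks until three worker threads have all counted down. */
public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        final int workers = 3;
        final CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                done.countDown();  // decrement the latch once per worker
            }).start();
        }

        done.await();  // returns only after the count reaches zero
        System.out.println("all workers done");
    }
}
```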
Java Threads, 3rd Edition, Chapter 5 (Page last updated September 2004, Added 2004-10-31, Author Scott Oaks, Henry Wong, Publisher O'Reilly). Tips:
- Programs can perform poorly because of excessive or incorrect synchronization.
- Acquiring a contended lock (one that is held by another thread) is more expensive than acquiring an uncontended lock and, more significantly, you have to wait for it to be unlocked, which can be a real drag on performance.
- You can use the volatile keyword for an instance variable (other than a double or long) to avoid synchronizing.
- Unsynchronized access to variables that are not volatile is not guaranteed to retrieve the latest value for that variable, since another thread could hold a more recent value.
- JVM, CPU, and compiler optimizations applied to statements can re-order the statements if the order does not matter within a method block. But the order may matter for multi-threaded access, causing corrupt data. In these situations synchronization is necessary [the chapter includes a nice example of that].
- Synchronization is required to prevent race conditions that can cause data to be found in either an inconsistent or intermediate state.
- Not all race conditions need to be avoided - only the race conditions within thread-unsafe sections of code are considered a problem.
- Shrink the synchronization scope to be as small as possible, and reorganize code so that threadsafe sections can be moved outside of the synchronized block.
- The ++ operator is not atomic. AtomicInteger has a method that allows the integer it holds to be incremented atomically without using synchronization (see the sketch after this list).
- Atomic classes (AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference) enable you to build complex code that requires no synchronization at all.
- The Atomic classes' getAndSet() method atomically sets the variable to a new value while returning the previous value, without acquiring any synchronization locks. Other methods allow atomic conditional updates (compareAndSet); atomic increments and decrements (incrementAndGet, decrementAndGet, getAndIncrement, and getAndDecrement); and atomic pre/post addition (addAndGet and getAndAdd).
- AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray allow you to atomically operate on individual array elements.
- AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, and AtomicReferenceFieldUpdater allow you to perform atomic operations on volatile variables.
- AtomicMarkableReference and AtomicStampedReference allow a mark or stamp to be attached to any object reference.
- The purpose of synchronization is not to prevent all race conditions; it is to prevent problematic race conditions.
- The 5.0 java.util.concurrent atomic classes are not a direct replacement for the synchronization tools: using them may require a complex redesign of the program, even in some simple cases.
- The purpose of atomic variables is to reduce the synchronization required, in order to improve performance.
- The purpose of atomic variables is not to remove race conditions; their purpose is to make the code threadsafe so that the race condition does not have to be prevented.
- It is necessary to balance the usage of synchronization and atomic variables. Synchronization blocks other threads to allow atomic operations; atomic variables allow threads to execute in parallel with threadsafe operations.
- Implementing condition variable functionality using atomic variables is possible but not necessarily efficient. You may be swapping a potentially blocking operation using synchronization for multiple spinning operations using atomic variables.
- The tradeoff between using atomic variables vs. synchronization is that of using pessimistic locking vs. optimistic locking, i.e. locking and waiting for locks to ensure an operation succeeds (synchronization), or not locking and retrying on failure to ensure an operation succeeds (atomic variables).
- You can implement atomic support for any new type by simply encapsulating the data type into a read-only data object - the data object can then be changed atomically by atomically changing the reference to a new data object.
- Atomically setting a group of variables can be done by creating an object that encapsulates the values that can be changed; the values can then be changed simultaneously by atomically changing the atomic reference to the values.
- Performing bulk data modification and other advanced atomic data manipulations may use a large number of objects, which can have a significant overhead. So using atomic variables has to be balanced against using synchronization, including the costs of different algorithms and extra objects.
- Thread local variables' most common use is to allow multiple threads to cache their own data rather than contend for synchronization locks around shared data.
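A minimal sketch of the AtomicInteger tip above (our own example; the thread and iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Lock-free counting: incrementAndGet() replaces a synchronized ++ on an int. */
public class AtomicCounterDemo {
    private static final AtomicInteger hits = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    hits.incrementAndGet();  // atomic; no synchronized block needed
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(hits.get());  // always 40000; with a plain int it could be less
    }
}
```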
Java Threads, 3rd Edition, Chapter 6 extract (Page last updated September 2004, Added 2004-10-31, Author Scott Oaks, Henry Wong, Publisher O'Reilly). Tips:
- The java.util.concurrent Semaphore class is essentially a lock with an attached counter. If a semaphore is constructed with its fair flag set to true, the semaphore tries to allocate the permits in the order that the requests are made, as close to first-come-first-served as possible. The downside to this option is speed: it takes more time for the virtual machine to order the acquisition of the permits than to allow an arbitrary thread to acquire a permit.
- The java.util.concurrent barrier (CyclicBarrier class) is simply a waiting point where all the threads can sync up, either to merge results or to safely move on to the next part of the task. In every exception condition, the barrier simply breaks, thus requiring that the individual threads resolve the matter.
- The java.util.concurrent CountDownLatch is like a barrier, but releases threads after a count down.
- The java.util.concurrent Exchanger class allows data to be passed between threads.
- The only time you need to lock data is when the data is being changed, that is, when it is being written. So it is more efficient to allow multiple concurrent read locks, whereas a write lock should prevent any read locks from being obtained. This behavior is supported by the java.util.concurrent ReadWriteLock and ReentrantReadWriteLock classes (see the sketch after this list). A further efficiency is that threads that own the write lock can also acquire the read lock, accomplished by acquiring the read lock before releasing the write lock.
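A minimal sketch of the read/write-lock tip above (our own example; the guarded field is arbitrary):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Many concurrent readers, exclusive writers, via ReentrantReadWriteLock. */
public class RwGuardedValue {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private double value;

    public double read() {
        lock.readLock().lock();  // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(double v) {
        lock.writeLock().lock();  // exclusive: blocks readers and other writers
        try {
            value = v;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```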
Improve J2EE Application Performance with Caching (Page last updated September 2004, Added 2004-10-31, Author Scott Robinson, Publisher developer.com). Tips:
- Caching is a tried-and-true technique for improving the efficiency of an application, but you should be thoughtful about where and when you use it.
- Store frequently-referenced data at an easily accessible location in the application - databases can be enabled to do this very easily.
- A J2EE server can passively do entity bean caching, if you're using entity beans to do data access.
- Try JSP cache tags, which can cache page fragments.
- Use a Servlet 2.4 caching filter.
- Turn repetitive data into Java objects (using JCache).
- A good general rule of thumb is to implement caching when it eliminates the need for a remote call (see the cache sketch at the end of these listings).
- A good J2EE-specific rule of thumb is to implement caching when it eliminates a need for one tier to make a call to an underlying tier.
- Consider whether caching at a particular point reduces network activity significantly.
- Factor in the volume of data that needs to be cached - it is not normally efficient to cache an entire database in the application server.
- Static data is easy to cache; dynamic data is not.
- Caching adds complexity in the way of extra failure points, multithreading, and cache coherency issues.

J2EE Design Patterns (Page last updated November 2003, Added 2004-10-31, Author Alan Baumgarten, Publisher BEA). Tips:
- Using the Session Facade pattern to wrap EJBs with a Session bean improves performance, as only the calls between the client and the session bean go across the network, while calls from the session bean to the entity beans are local to the EJB container.
- Performance can be enhanced through the use of local interfaces. Local interfaces provide support for "lightweight" access from within an EJB container, avoiding the overhead associated with a remote interface.
- The Session Facade pattern allows transaction optimization; for example, by enclosing the entity bean calls in the session bean method, the database operations are automatically grouped together as a transactional unit (to ensure that they participate in the same transaction, the entity bean methods should be assigned the "Required" transaction attribute). With container-managed transactions, the container begins a transaction when each EJB method starts to execute and ends the transaction when the method returns.
- Distributed applications often waste time looking for network resources.
- The Service Locator pattern caches the remote resource reference, reducing the number of remote lookups made.
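As a companion to the caching rules of thumb above, here is a minimal memoizing wrapper of the kind those tips describe (our own sketch; it has no eviction or coherency handling, which the caching article warns must be considered):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Caches results of an expensive lookup so repeated keys skip the remote call. */
public class RemoteCallCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> remoteCall;  // the expensive call being cached

    public RemoteCallCache(Function<K, V> remoteCall) {
        this.remoteCall = remoteCall;
    }

    public V get(K key) {
        // computeIfAbsent performs the remote call only on a cache miss
        return cache.computeIfAbsent(key, remoteCall);
    }
}
```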
University of Pittsburgh researchers have developed the first natural, nontoxic method for biodegrading carbon nanotubes, a finding that could help diminish the environmental and health concerns that mar the otherwise bright prospects of the super-strong materials commonly used in products from electronics to plastics.

A Pitt research team has found that carbon nanotubes deteriorate when exposed to the natural enzyme horseradish peroxidase (HRP), according to a report published recently in "Nano Letters" coauthored by Alexander Star, an assistant professor of chemistry in Pitt's School of Arts and Sciences, and Valerian Kagan, a professor and vice chair of the Department of Environmental and Occupational Health in Pitt's Graduate School of Public Health. These results open the door to further development of safe and natural methods, with HRP or other enzymes, of cleaning up carbon nanotube spills in the environment and in industrial or laboratory settings.

Carbon nanotubes are one-atom-thick rolls of graphite 100,000 times smaller than a human hair, yet stronger than steel and excellent conductors of electricity and heat. They reinforce plastics, ceramics, or concrete; conduct electricity in electronics or energy-conversion devices; and serve as sensitive chemical sensors, Star said. (Star created an early-detection device for asthma attacks wherein carbon nanotubes detect minute amounts of nitric oxide preceding an attack. See link below.)

"The many applications of nanotubes have resulted in greater production of them, but their toxicity remains controversial," Star said. "Accidental spills of nanotubes are inevitable during their production, and the massive use of nanotube-based materials could lead to increased environmental pollution. We have demonstrated a nontoxic approach to successfully degrade carbon nanotubes in environmentally relevant conditions."

The team's work focused on nanotubes in their raw form as a fine, graphite-like powder, Kagan explained. In this form, nanotubes have caused severe lung inflammation in lab tests. Although small, nanotubes contain thousands of atoms on their surface that could react with the human body in unknown ways, Kagan said. Both he and Star are associated with a three-year-old Pitt initiative to investigate nanotoxicology.

"Nanomaterials aren't completely understood. Industries use nanotubes because they're unique: they are strong, and they can be used as semiconductors. But do these features present unknown health risks? The field of nanotoxicology is developing to find out," Kagan said. "Studies have shown that they can be dangerous. We wanted to develop a method for safely neutralizing these very small materials should they contaminate the natural or working environment."

To break down the nanotubes, the team exposed them to a solution of HRP and a low concentration of hydrogen peroxide at 4 degrees Celsius (39 degrees Fahrenheit) for 12 weeks. Once fully developed, this method could be administered as easily as chemical clean-ups in today's labs, Kagan and Star said.

Morgan Kelly | EurekAlert!
Nowadays, the world is searching for cleaner energy sources to power our increasing industrial and economic needs. Solar energy is becoming an alternative to fossil fuels; however, given the accelerating pace at which we consume energy, we need to develop ubiquitous PV technologies that can be employed everywhere: on buildings, clothes, consumer electronics, and wearables. This necessitates ultra-thin-film, low-cost, and ideally flexible solar cells that do not compromise the environment during production, use, or disposal.

Now ICFO researchers Dr. Maria Bernechea, Dr. Nicky Miller, Guillem Xercavins, David So, and Dr. Alexandros Stavrinadis, led by ICREA Prof. at ICFO Gerasimos Konstantatos, have found a solution to this growing problem. They have fabricated a solution-processed, semi-transparent solar cell based on AgBiS2 nanocrystals, a material that consists of non-toxic, earth-abundant elements, produced in ambient conditions at low temperatures. These crystals have been shown to be very strong panchromatic absorbers of light and have been further engineered to act as an effective charge-transporting medium for solution-processed solar cells.

What is special about these cells? As researcher Dr. Maria Bernechea comments, "They contain AgBiS2 nanocrystals, a novel material based on non-toxic elements. The chemical synthesis of the nanocrystals allows exquisite control of their properties through engineering at the nanoscale and enables their dissolution in colloidal solutions. The material is synthesized at very low temperatures (100ºC), an order of magnitude lower than the ones required for silicon-based solar cells."

The team of researchers at ICFO developed these cells through a low-temperature hot-injection synthetic procedure. They first dispersed the nanocrystals into organic solvents, where the solutions proved stable for months without any loss in device performance. Then, the nanocrystals were deposited onto a thin film of ZnO and ITO, the most commonly used transparent conductive oxide, through a layer-by-layer deposition process until a thickness of approximately 35 nm was achieved.

"A very interesting feature of AgBiS2 solar cells is that they can be made in air at low temperatures using low-cost solution-processing techniques, without the need for the sophisticated and expensive equipment required to fabricate many other solar cells. These features give AgBiS2 solar cells significant potential as a low-cost alternative to traditional solar cells," as Dr. Nicky Miller states.

These cells, in this first report, have already achieved power conversion efficiencies of 6.3%, which is on par with the early reported efficiencies of today's high-performance thin-film PV technologies. This highlights the potential of AgBiS2 as a solar-cell material that in the near future could compete with current thin-film technologies that rely on vacuum-based, high-temperature manufacturing processes.

As ICREA Prof. at ICFO Gerasimos Konstantatos concludes, "This is the first efficient inorganic nanocrystal solid-state solar cell material that simultaneously meets demands for non-toxicity, abundance and low-temperature solution processing. These first results are very encouraging, yet this is still the beginning and we are currently working on our next milestone towards efficiencies > 12%".
The results of this study, which was financially supported by the European Commission within the NANOMATCELL project, signify a turning point in the concept and production of solar cells, moving from silicon cells to low-cost, environmentally friendly solar cells that promise a safer and more sustainable future.

"Solution-processed cells based on environmentally friendly AgBiS2 nanocrystals", M. Bernechea, N. Cates Miller, G. Xercavins, D. So, A. Stavrinadis, G. Konstantatos, Nature Photon. 2016, DOI: 10.1038/NPHOTON.2016.108
Caesium and its Mixtures: Their Chemical Reactions with Alloys of Transition Metals Used to Clad Reactor Fuels

The elements Cs, Te and O2 are considered to be the primary corrodants during burn-up of mixed oxide UO2/PuO2 fuel in Fast Breeder Nuclear Reactors, and the alloys PE16 and 12R72HV comprise advanced forms of fuel cladding to replace the originally employed M316 steel. The steels FV448 and DT2203Y05 are members of a ferritic class of alloys that might also play a role in the cladding under conditions of high burn-up. Most of the work [1-5] was devoted to determining the extent and nature of corrosion of the alloys PE16, 12R72HV, M316, FV448 and DT2203Y05 by Cs/Te/O2 mixtures in sealed capsules for an arbitrary time of 168 h at 948 K, a temperature probably typical of the clad temperature in an operating fuel pin.

Keywords: Intergranular Corrosion; Dark Layer; Fast Breeder Nuclear Reactor; Liquid Metal Embrittlement; M316 Steel
(A) the anode loses mass and the cathode gains mass.
(B) the mass of the anode remains the same but the cathode gains mass.
(C) the mass of the anode decreases but the mass of the cathode remains constant.
(D) the anode and the cathode neither gain nor lose mass.
(E) both electrodes gain in mass.

I was thinking, in the electrolysis of CuSO4(aq):

Anode: 2H2O → O2(g) + 4H+ + 4e-, E° = -1.23 V
Cathode: 2Cu2+ + 4e- → 2Cu, E° = +0.34 V
Overall: 2H2O + 2Cu2+ → O2(g) + 4H+ + 2Cu, E°cell = -0.89 V

so it would be (D) because the mass doesn't really change at either electrode.
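As a check on the arithmetic behind the overall potential (assuming the standard reduction potentials used above, +1.23 V for O2/H2O and +0.34 V for Cu2+/Cu):

```latex
E^{\circ}_{\text{cell}} = E^{\circ}_{\text{cathode}} - E^{\circ}_{\text{anode}}
                        = (+0.34\ \text{V}) - (+1.23\ \text{V})
                        = -0.89\ \text{V}
```

The negative sign indicates the reaction is not spontaneous and must be driven by an applied voltage, which is consistent with this being an electrolysis problem.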
The human genome contains a lot of DNA that does not code for protein. Much of this DNA is involved with regulating which genes are turned on or off. There are also several types of non-coding RNA, some of which aid in protein production and some of which inhibit it. Although non-coding DNA and RNA do not directly code for protein, in many cases they regulate which genes are made into protein.

A gene is a portion of the DNA within a chromosome that contains all the information necessary for making RNA and then protein. The region of a gene that codes for protein and will be made into RNA is called the open reading frame, or ORF. The ability of the ORF to make RNA and then protein is controlled by a section of DNA called the regulatory region. This region of the DNA is very important in controlling which genes are turned on and eventually made into protein, but it does not code for any protein itself.

There are several types of RNA, most of which do not code for protein. Ribosomal RNA does not code for protein; it is a structural component of the ribosome, the complex that turns RNA into protein. Transfer RNA is important for making protein from RNA, but does not code for protein itself. Micro RNA, or miRNA, prevents protein from being made by targeting the coding RNA for degradation. The miRNA serves to negatively regulate which genes are turned into protein, essentially turning the genes off. This process of turning off genes with miRNA is known as RNA interference.

When a gene is transcribed from DNA to RNA, the resulting coding RNA, or mRNA, requires further processing before it can be made into protein. The mRNA is composed of sequences known as introns and exons. The introns do not code for any protein and are removed from the mRNA before it is made into protein. The exons are the sequences that code for protein; however, some exons are removed from the mRNA as well and do not get made into protein. This process of removing introns and exons from RNA is known as splicing.

Some DNA has no known purpose and is therefore referred to as junk DNA. Junk DNA is commonly found in the telomeres, the ends of the chromosomes. The telomeres of chromosomes are slightly shortened with each cell division, and over time a significant amount of the DNA from the telomeres can be lost. It is thought that the telomeres are made of mostly junk DNA so that no important genetic information is lost when the telomeres are shortened.
Structure of Viruses. A virus has an outer capsid composed of protein subunits and an inner core of nucleic acid. An outer membranous envelope may be acquired when the virus buds from the cell. The virus particle may also include enzymes for nucleic acid replication. Viruses are classified by type of nucleic acid, by viral shape and size, and by the presence or absence of an outer envelope.
- Explore the possibilities for reaction rates versus concentrations with this non-linear differential equation.
- Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.
- To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling...
- Get further into power series using the fascinating Bessel's equation.
- Was it possible that this dangerous driving penalty was issued in error?
- Are these statistical statements sometimes, always or never true? Or is it impossible to say?
- Here are several equations from real life. Can you work out which measurements are possible from each equation?
- Which line graph, equations and physical processes go together?
- Andy wants to cycle from Land's End to John o'Groats. Will he be able to eat enough to keep him going?
- Work with numbers big and small to estimate and calculate various quantities in physical contexts.
- Work out the numerical values for these physical quantities.
- Looking at small values of functions. Motivating the existence of the Taylor expansion.
- Which dilutions can you make using only 10ml pipettes?
- Get some practice using big and small numbers in chemistry.
- This is our collection of tasks on the mathematical theme of 'Population Dynamics' for advanced students and those interested in mathematical modelling.
- Look at the advanced way of viewing sin and cos through their power series.
- See how enormously large quantities can cancel out to give a good approximation to the factorial function.
- What functions can you make using the function machines RECIPROCAL and PRODUCT and the operator machines DIFF and INT?
- How much energy has gone into warming the planet?
- By exploring the concept of scale invariance, find the probability that a random piece of real data begins with a 1.
- Work with numbers big and small to estimate and calculate various quantities in biological contexts.
- Build up the concept of the Taylor series.
- How do you choose your planting levels to minimise the total loss at harvest time?
- Use vectors and matrices to explore the symmetries of crystals.
- Could nanotechnology be used to see if an artery is blocked? Or is this just science fiction?
- Make an accurate diagram of the solar system and explore the concept of a grand conjunction.
- Estimate these curious quantities sufficiently accurately that you can rank them in order of size.
- Why MUST these statistical statements probably be at least a little bit wrong?
- Analyse these beautiful biological images and attempt to rank them in size order.
- The probability that a passenger books a flight and does not turn up is 0.05. For an aeroplane with 400 seats, how many tickets can be sold so that only 1% of flights are over-booked?
- Which pdfs match the curves?
- Starting with two basic vector steps, which destinations can you reach on a vector walk?
- Explore the properties of matrix transformations with these 10 stimulating questions.
- Explore the shape of a square after it is transformed by the action of a matrix.
- Invent scenarios which would give rise to these probability density functions.
- Use trigonometry to determine whether solar eclipses on earth can be perfect.
- Go on a vector walk and determine which points on the walk are closest to the origin.
- Can you make matrices which will fix one lucky vector and crush another to zero?
- Explore the meaning of the scalar and vector cross products and see how the two are related.
- Can you sketch these difficult curves, which have uses in mathematical modelling?
- In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book.
- Which of these infinitely deep vessels will eventually fill up?
- Explore the relationship between resistance and temperature.
- Can you find the volumes of the mathematical vessels?
- How would you go about estimating populations of dolphins?
- Match the descriptions of physical processes to these differential equations.
- Are these estimates of physical quantities accurate?
- In this short problem, try to find the location of the roots of some unusual functions by finding where they change sign.
- Which units would you choose best to fit these situations?
- When you change the units, do the numbers get bigger or smaller?
- This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes...
- In a square in which the houses are evenly spaced, numbers 3 and 10 are opposite each other. What is the smallest and what is the largest possible number of houses in the square?
- This 100 square jigsaw is written in code. It starts with 1 and ends with 100. Can you build it up?
- A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken?
- In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together?
- A magician took a suit of thirteen cards and held them in his hand face down. Every card he revealed had the same value as the one he had just finished spelling. How did this work?
- You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows, columns and diagonals have an even number of red counters?
- Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How can you make sure you win?
- Can you work out how many cubes were used to make this open box? What size of open box could you make if you had 112 cubes?
- Can you shunt the trucks so that the Cattle truck and the Sheep truck change places and the Engine is back on the main line?
- A tetromino is made up of four squares joined edge to edge. Can this tetromino, together with 15 copies of itself, be used to cover an eight by eight chessboard?
- What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it?
- 10 space travellers are waiting to board their spaceships. There are two rows of seats in the waiting room. Using the rules, where are they all sitting? Can you find all the possible ways?
- How many different triangles can you make on a circular pegboard that has nine pegs?
- Take a rectangle of paper and fold it in half, and half again, to make four smaller rectangles. How many different ways can you fold it up?
- How many DIFFERENT quadrilaterals can be made by joining the dots on the 8-point circle?
- Here you see the front and back views of a dodecahedron. Each vertex has been numbered so that the numbers around each pentagonal face add up to 65. Can you find all the missing numbers?
- Building up a simple Celtic knot. Try the interactivity or download the cards or have a go on squared paper.
- How can you arrange the 5 cubes so that you need the smallest number of Brush Loads of paint to cover them? Try with other numbers of cubes as well.
- How will you go about finding all the jigsaw pieces that have one peg and one hole?
- Design an arrangement of display boards in the school hall which fits the requirements of different people.
- Swap the stars with the moons, using only knights' moves (as on a chess board). What is the smallest number of moves possible?
- What is the best way to shunt these carriages so that each train can continue its journey?
- Cut four triangles from a square as shown in the picture. How many different shapes can you make by fitting the four triangles back together?
- How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways?
- What is the greatest number of counters you can place on the grid below without four of them lying at the corners of a square?
- Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
- Can you make a 3x3 cube with these shapes made from small cubes?
- Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
- How many different cuboids can you make when you use four CDs or DVDs? How about using five, then six?
- Can you predict when you'll be clapping and when you'll be clicking if you start this rhythm? How about when a friend begins a new rhythm at the same time?
- How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement?
- Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark?
- Is it possible to rearrange the numbers 1, 2, ..., 12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours?
- Looking at the picture of this Jomista Mat, can you describe what you see? Why not try and make one yourself?
- Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?
- Can you fit the tangram pieces into the outlines of the chairs?
- Can you fit the tangram pieces into the outline of this shape? How would you describe it?
- Can you fit the tangram pieces into the outline of this sports car?
- Anne completes a circuit around a circular track in 40 seconds. Brenda runs in the opposite direction and meets Anne every 15 seconds. How long does it take Brenda to run around the track?
- Can you cut up a square in the way shown and make the pieces into a triangle?
- Can you fit the tangram pieces into the outline of this goat and giraffe?
- How much of the square is coloured blue? How will the pattern continue?
- Here's a simple way to make a Tangram without any measuring or ruling lines.
- Make a cube out of straws and have a go at this practical challenge.
- Can you fit the tangram pieces into the outlines of the candle and sundial?
- Can you fit the tangram pieces into the outlines of these people?
- Have a go at this 3D extension to the Pebbles problem.
- Can you fit the tangram pieces into the outlines of Mai Ling and Chi Wing?
Cell division is a complex process that requires a lot of energy to pull off. Many proteins are required to move molecules, filaments, membranes, and DNA in ways that do not cause damage. Thus, internal factors that influence cell division include the availability of energy (in the form of ATP), the integrity of replicated DNA, and the integrity of the protein machinery that does the heavy lifting. Lastly, damaged, mutant, or old cells can enter a dormant state that prevents them from undergoing cell division.

Adenosine triphosphate (ATP) is an energy molecule that cells use to power the protein machines within them. Cells cannot function without ATP, much as a car cannot run without fuel. AMP-activated protein kinase (AMPK) is a protein that senses ATP levels within the cell. When ATP is used for energy, it is broken down to AMP (adenosine monophosphate). Having lots of AMP is a sign that the cell is low on energy. AMP binds to and helps activate AMPK. Once activated, AMPK causes the cell to stop growing and dividing, so that it has time to make more ATP.

Successful cell division requires that a cell's DNA content is duplicated and evenly divided between the two resulting daughter cells. If the DNA is damaged in some way, the cell will not continue the process of division. The cell will stall the process to give itself time to repair the damaged DNA. Exposure to chemical agents and certain types of radiation (like ultraviolet light and x-rays) can result in DNA damage by causing errors in DNA duplication.

A cell lives by making proteins inside it. These proteins do the day-to-day work of moving things, breaking things, and building things. However, each protein needs to be folded into a special shape in order to function. When there are too many unfolded proteins in a cell, the cell knows that there is a problem. The unfolded protein response (UPR) is the way in which the cell detects that too many proteins are unfolded. The UPR causes the cell to arrest, meaning it will not grow and divide until the problem is fixed.

Normal cells only grow and divide when they are healthy and have intact DNA. When cells undergo stress, due to environmental chemicals that harm them or mutations that naturally arise, they stop what they are doing and start a repair process. If the repair process is unsuccessful, the cell then kills itself or enters a dormant state called senescence. Senescence is also a form of cellular aging that happens naturally over time. Senescence is an internal state in which cells can no longer divide.
This slide presentation reviews NASA's role in the response to spacecraft accidents that involve human fatalities or injuries. Particular attention is given to the work of the Mishap Investigation Team (MIT), the first response to the accidents and the interface to the accident investigation board. The MIT does not investigate the accident; its objective is to gather, guard, preserve, and document the evidence. The primary medical objectives of the MIT are to receive, analyze, identify, and transport human remains, provide assistance in the recovery effort, and provide family Casualty Coordinators with the latest recovery information. While the MIT does not determine the cause of the accident, it acts as the fact-gathering arm of the Mishap Investigation Board (MIB), which, when activated, may choose to continue to use the MIT as its field investigation resource. The MIT membership and the specific responsibilities and tasks of the flight surgeon are reviewed. The current law establishing the process is also reviewed.

Department of Defense recovery personnel and spacecraft technicians from NASA and McDonnell Aircraft Corp. inspect Astronaut John Glenn's Mercury spacecraft, Friendship 7, following its return to Cape Canaveral after recovery in the Atlantic Ocean.

Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems.

Hayhurst, Marc R.; Bitten, Robert E.; Shinn, Stephen A.; Judnick, Daniel C.; Hallgrimson, Ingrid E.; Youngs, Megan A.
Although spacecraft developers have been moving towards standardized product lines as the aerospace industry has matured, NASA's continual need to push the cutting edge of science to accomplish unique, challenging missions can still lead to spacecraft resource growth over time. This paper assesses historical mass, power, cost, and schedule growth for multiple NASA spacecraft from the last twenty years and compares it to industry reserve guidelines to understand where the guidelines may fall short. Growth is assessed from project start to launch, from the time of the preliminary design review (PDR) to launch, and from the time of the critical design review (CDR) to launch. Data is also assessed not just at the spacecraft bus level, but also at the subsystem level wherever possible, to help obtain further insight into possible drivers of growth. Potential recommendations to minimize spacecraft mass, power, cost, and schedule growth for future missions are also discussed.

Hillard, G. B.; Ferguson, D. C.

Over the past decade, Low Earth Orbiting (LEO) spacecraft have gradually required ever-increasing power levels. As a rule, this has been accomplished through the use of high voltage systems. Recent failures and anomalies on such spacecraft have been traced to various design practices and materials choices related to the high voltage solar arrays. NASA Glenn has studied these anomalies, including plasma chamber testing on arrays similar to those that experienced difficulties on orbit. Many others in the community have been involved in a comprehensive effort to understand the problems and to develop practices to avoid them. The NASA Space Environments and Effects program, recognizing the timeliness of this effort, commissioned and funded a design guidelines document intended to capture the current state of understanding. This document, which was completed in the spring of 2003, has been submitted as a proposed NASA standard. We present here an overview of this document and discuss the effort to develop it as a NASA standard.

Appropriate design of fire detection systems requires knowledge of both the expected fire signature and the background aerosol levels. Terrestrial fire detection systems have been developed based on extensive study of terrestrial fires. Unfortunately there is no corresponding data set for spacecraft fires, and consequently the fire detectors in current spacecraft were developed based upon terrestrial designs. In low gravity, buoyant flow is negligible, which causes particles to concentrate at the smoke source, increasing their residence time and increasing the transport time to smoke detectors. Microgravity fires have significantly different structure than those in 1-g, which can change the formation history of the smoke particles. Finally, the materials used in spacecraft are different from typical terrestrial environments where smoke properties have been evaluated. It is critically important to detect a fire in its early phase, before a flame is established, given the fixed volume of air on any spacecraft. Consequently, the primary target for spacecraft fire detection is pyrolysis products rather than soot. Experimental investigations have been performed at three different NASA facilities to characterize smoke aerosols from overheating common spacecraft materials.
The earliest effort consists of aerosol measurements in low gravity, called the Smoke Aerosol Measurement Experiment (SAME), and subsequent ground-based testing of SAME smoke in 55-gallon drums with an aerosol reference instrument. Another set of experiments was performed at NASA's Johnson Space Center White Sands Test Facility (WSTF), with additional fuels and an alternate smoke production method. Measurements of these smoke products include mass and number concentration, and a thermal precipitator was designed for this investigation to capture particles for microscopic analysis. The final experiments presented are from NASA's Gases and Aerosols from Smoldering Polymers (GASP) Laboratory, with selected

Dennehy, Cornelius J.; Kunz, Nans

At the request of the Science Mission Directorate Chief Engineer, the NASA Technical Fellow for Guidance, Navigation & Control assembled and facilitated a workshop on Spacecraft Hybrid Attitude Control. This multi-Center, academic, and industry workshop, sponsored by the NASA Engineering and Safety Center (NESC), was held in April 2013 to unite nationwide experts to present and discuss the various innovative solutions, techniques, and lessons learned regarding the development and implementation of the various hybrid attitude control system solutions investigated or implemented. This report attempts to document these key lessons learned with the 16 findings and 9 NESC recommendations.

The NASA Engineering and Safety Center (NESC) has sponsored a Pathfinder Study to investigate how Model Based Systems Engineering (MBSE) and Model Based Engineering (MBE) techniques can be applied by NASA spacecraft development projects. The objectives of this Pathfinder Study included analyzing both the products of the modeling activity and the process and tool chain through which the spacecraft design activities are executed. Several aspects of MBSE methodology and process were explored. Adoption and consistent use of the MBSE methodology within an existing development environment can be difficult. The Pathfinder Team evaluated the possibility that an "MBSE Template" could be developed as both a teaching tool and a baseline from which future NASA projects could leverage. Elements of this template include spacecraft system component libraries, data dictionaries and ontology specifications, as well as software services that do work on the models themselves. The Pathfinder Study also evaluated the tool chain aspects of development. Two chains were considered: 1. the development tool chain, through which SysML model development was performed and controlled, and 2. the analysis tool chain, through which both static and dynamic system analysis is performed. Of particular interest was the ability to exchange data between SysML and other engineering tools, such as CAD and dynamic simulation tools. For this study, the team selected a Mars Lander vehicle as the element to be designed. The paper will discuss what system models were developed, how data was captured and exchanged, and what analyses were conducted.

Short, Kendra (Principal Investigator); Van Buren, David (Principal Investigator)

Atmospheric confetti. Inchworm crawlers. Blankets of ground penetrating radar. These are some of the unique mission concepts which could be enabled by a printable spacecraft. Printed electronics technology offers enormous potential to transform the way NASA builds spacecraft. A printed spacecraft's low mass, volume and cost offer dramatic potential impacts to many missions.
Network missions could increase from a few discrete measurements to tens of thousands of platforms, improving areal density and system reliability. Printed platforms could be added to any prime mission as a low-cost, minimum-resource secondary payload to augment the science return. For a small fraction of the mass and cost of a traditional lander, a Europa flagship mission might carry experimental printed surface platforms. An Enceladus Explorer could carry feather-light printed platforms to release into volcanic plumes to measure composition and impact energies. The ability to print circuits directly onto a variety of surfaces opens the possibility of multi-functional structures and membranes such as "smart" solar sails and balloons. The inherent flexibility of a printed platform allows for in-situ re-configurability for aerodynamic control or mobility. Engineering telemetry of wheel/soil interactions is possible with a conformal printed sensor tape fit around a rover wheel. Environmental time history within a sample return canister could be recorded with a printed sensor array that fits flush to the interior of the canister. Phase One of the NIAC task entitled "Printable Spacecraft" investigated the viability of printed electronics technologies for creating multi-functional spacecraft platforms. Mission concepts and architectures that could be enhanced or enabled with this technology were explored. This final report captures the results and conclusions of the Phase One study. First, the report presents the approach taken in conducting the study and a mapping of results against the proposed

Dennehy, Cornelius J.

There is a heightened interest within NASA for the design, development, and flight implementation of mixed-actuator hybrid attitude control systems for science spacecraft that have less than three functional reaction wheel actuators. This interest is driven by a number of recent reaction wheel failures on aging, but potentially still scientifically productive, NASA spacecraft, if a successful hybrid attitude control mode can be implemented. Over the years, hybrid (mixed-actuator) control has been employed for contingency attitude control purposes on several NASA science mission spacecraft. This paper provides a historical perspective of NASA's previous engineering work on spacecraft mixed-actuator hybrid control approaches. An update of the current situation will also be provided, emphasizing why NASA is now so interested in hybrid control. The results of the NASA Spacecraft Hybrid Attitude Control Workshop, held in April of 2013, will be highlighted. In particular, the lessons learned captured from that workshop will be shared in this paper. An update on the most recent experiences with hybrid control on the Kepler spacecraft will also be provided. This paper will close with some future considerations for hybrid spacecraft control.

Resch, G. M.; Jones, D. L.; Connally, M. J.; Weinreb, S.; Preston, R. A.

The international radio astronomy community is currently working on the design of an array of small radio antennas with a total collecting area of one square kilometer, more than a hundred times that of the largest existing (100-m) steerable antennas. An array of this size would provide obvious advantages for high data rate telemetry reception and for spacecraft navigation.
Among these advantages are a two-orders-of-magnitude increase in sensitivity for telemetry downlink, flexible sub-arraying to track multiple spacecraft simultaneously, increased reliability through the use of large numbers of identical array elements, very accurate real-time angular spacecraft tracking, and a dramatic reduction in cost per unit area. NASA missions in many disciplines, including planetary science, would benefit from this increased ground-based tracking capability. The science return from planned missions could be increased, and opportunities for less expensive or completely new kinds of missions would be created.

Heberlig, J. C.

The overall management approach to the NASA Phase B definition studies for space stations, which were initiated in September 1969 and completed in July 1972, is reviewed, with particular emphasis placed on the management approach used by the Manned Spacecraft Center. The internal working organizations of the Manned Spacecraft Center and its prime contractor, North American Rockwell, are delineated, along with the interfacing techniques used for the joint Government and industry study. Working interfaces with other NASA centers, industry, and Government agencies are briefly highlighted. The controlling documentation for the study (such as guidelines and constraints, bibliography, and key personnel) is reviewed. The historical background and content of the experiment program prepared for use in this Phase B study are outlined, and management concepts that may be considered for future programs are proposed.

Burnside, Christopher G.; Trinh, Huu P.; Pedersen, Kevin W.

The Robotic Lunar Lander (RLL) development project office at NASA Marshall Space Flight Center is currently studying several lunar surface science mission concepts. The focus is on spacecraft carrying multiple science instruments and power systems that will allow extended operations on the lunar surface or other airless bodies in the solar system. Initial trade studies of launch vehicle options indicate the spacecraft will be significantly mass and volume constrained. Because of the investment by the DOD in low-mass, highly volume-efficient components, NASA has investigated the potential integration of some of these technologies in space science applications. A 10,000 psig helium pressure regulator test activity has been conducted as part of the overall risk reduction testing for the RLL spacecraft. The regulator was subjected to typical NASA acceptance testing to assess its response to the expected RLL mission requirements. The test results show the regulator can supply helium at a stable outlet pressure of 740 psig within a +/- 5% tolerance band and maintain a lock-up pressure less than 5% above the nominal outlet pressure for all tests conducted. Numerous leak tests demonstrated leakage less than 10⁻³ standard cubic centimeters per second (SCCS) for the internal seat leakage at lock-up and less than 10⁻⁵ SCCS for external leakage through the regulator body. The successful test has shown the potential for 10,000 psig helium systems in NASA spacecraft and has reduced risk associated with hardware availability and hardware ability to meet RLL mission requirements.

Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens, Karen; Fesq, Lorraine M.

Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions.
Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the

Spry, James A.; Beaudet, Robert; Schubert, Wayne

Dry heat microbial reduction (DHMR) is the primary method currently used to reduce the microbial load of spacecraft and component parts to comply with planetary protection requirements. However, manufacturing processes often involve heating flight hardware to high temperatures for purposes other than planetary protection DHMR. At present, the specification in NASA document NPR8020.12, describing the process lethality on B. atrophaeus (ATCC 9372) bacterial spores, does not allow for additional planetary protection bioburden reduction credit for processing outside a narrow temperature, time, and humidity window. Our results from a comprehensive multi-year laboratory research effort have generated enhanced data sets on four aspects of the current specification: time and temperature effects in combination, the effect that humidity has on spore lethality, and the lethality for spores with exceptionally high thermal resistance (so called "hardies"). This paper describes potential modifications to the specification, based on the data set generated in the referenced studies. The proposed modifications are intended to broaden the scope of the current specification while still maintaining confidence in a conservative interpretation of the lethality of the DHMR process on microorganisms.

Vulnerability of a variety of candidate spacecraft electronics to total ionizing dose and displacement damage is studied. Devices tested include optoelectronics, digital, analog, linear, and hybrid devices.
Hoder, Douglas; Bergamo, Marcos

The advanced communication technology satellite (ACTS) gigabit satellite network provides long-haul point-to-point and point-to-multipoint full-duplex SONET services over NASA's ACTS, at rates up to 622 Mbit/s (SONET OC-12), with signal quality comparable to that obtained with terrestrial fiber networks. Data multiplexing over the satellite is accomplished using time-division multiple access (TDMA) techniques coordinated with the switching and beam-hopping facilities provided by ACTS. Transmissions through the satellite are protected with Reed-Solomon encoding, providing virtually error-free transmission under most weather conditions. Unique to the system are a TDMA frame structure and satellite synchronization mechanism that allow: (a) very efficient utilization of the satellite capacity; (b) over-the-satellite closed-loop synchronization of the network in configurations with up to 64 ground stations; and (c) ground station initial acquisition without collisions with existing signalling or data traffic. The user interfaces are compatible with SONET standards, performing the function of conventional SONET multiplexers, and, as such, can be readily integrated with standard SONET fiber-based terrestrial networks. Management of the network is based upon the simple network management protocol (SNMP), and includes an over-the-satellite signalling network and backup terrestrial internet (IP-based) connectivity. A description of the ground stations is also included.

OBryan, Martha V.; Seidleck, Christina M.; Carts, Martin A.; LaBel, Kenneth A.; Marshall, Cheryl J.; Reed, Robert A.; Sanders, Anthony B.; Hawkins, Donald K.; Cox, Stephen R.; Kniffin, Scott D.

We present data on the vulnerability of a variety of candidate spacecraft electronics to proton and heavy ion induced single event effects. Devices tested include digital, analog, linear bipolar, and hybrid devices, among others.

Getliffe, Gwendolyn V.; Inamdar, Niraj K.; Masterson, Rebecca; Miller, David W.

This report, concluding a one-year NIAC Phase I study, describes a new structural and mechanical technique aimed at reducing the mass and increasing the deployed-to-stowed length and volume ratios of spacecraft systems. This technique uses the magnetic fields generated by electrical current passing through coils of high-temperature superconductors (HTSs) to support spacecraft structures and deploy them to operational configurations from their stowed positions inside a launch vehicle fairing.

We present results and analysis investigating the effects of radiation on a variety of candidate spacecraft electronics, covering proton and heavy ion induced single event effects (SEE), proton-induced displacement damage (DD), and total ionizing dose (TID).

Introduction: This paper is a summary of test results. NASA spacecraft are subjected to a harsh space environment that includes exposure to various types of ionizing radiation. The performance of electronic devices in a space radiation environment is often limited by their susceptibility to single event effects (SEE), total ionizing dose (TID), and displacement damage (DD). Ground-based testing is used to evaluate candidate spacecraft electronics to determine risk to spaceflight applications. Interpreting the results of radiation testing of complex devices is quite difficult. Given the rapidly changing nature of technology, radiation test data are most often application-specific and adequate understanding of the test conditions is critical.
Studies discussed herein were undertaken to establish the application-specific sensitivities of candidate spacecraft and emerging electronic devices to single-event upset (SEU), single-event latchup (SEL), single-event gate rupture (SEGR), single-event burnout (SEB), single-event transient (SET), TID, enhanced low dose rate sensitivity (ELDRS), and DD effects.

Cochran, Donna J.; Buchner, Stephen P.; Irwin, Tim L.; LaBel, Kenneth A.; Marshall, Cheryl J.; Reed, Robert A.; Sanders, Anthony B.; Hawkins, Donald K.; Flanigan, Ryan J.; Cox, Stephen R.

We present data on the vulnerability of a variety of candidate spacecraft electronics to total ionizing dose and displacement damage. Devices tested include optoelectronics, digital, analog, linear bipolar devices, hybrid devices, Analog-to-Digital Converters (ADCs), and Digital-to-Analog Converters (DACs), among others.

Ticker, Ronald L.; Azzolini, John D.

The study investigates NASA's Earth Science Enterprise needs for Distributed Spacecraft Technologies in the 2010-2025 timeframe. In particular, the study focused on the Earth Science Vision Initiative and extrapolation of the measurement architecture from the 2002-2010 time period. Earth Science Enterprise documents were reviewed. Interviews were conducted with a number of Earth scientists and technologists. Fundamental principles of formation flying were also explored. The results led to the development of four notional distributed spacecraft architectures. These four notional architectures (global constellations, virtual platforms, precision formation flying, and sensorwebs) are presented. They broadly and generically cover the distributed spacecraft architectures needed by Earth Science in the post-2010 era. These notional architectures are used to identify technology needs and drivers. Technology needs are subsequently grouped into five categories: systems and architecture development tools; miniaturization, production, manufacture, test and calibration; data networks and information management; orbit control, planning and operations; and launch and deployment. The current state of the art and expected developments are explored. High-value technology areas are identified for possible future funding emphasis.

McCarthy, Kevin P.; Stocklin, Frank J.; Geldzahler, Barry J.; Friedman, Daniel E.; Celeste, Peter B.

Over the next several years, NASA plans to launch multiple earth-science missions which will send data from low-Earth orbits to ground stations at 1-3 Gbps, to achieve data throughputs of 5-40 terabits per day. These transmission rates exceed the capabilities of S-band and X-band frequency allocations used for science probe downlinks in the past. Accordingly, NASA is exploring enhancements to its space communication capabilities to provide the Agency's first Ka-band architecture solution for next generation missions in the near-earth regime. This paper describes the proposed Ka-band solution's drivers and concept, constraints and analyses which shaped that concept, and expansibility for future needs ...

FR 60622] inadvertently omits the responsibility of NASA's Freedom of Information Act (FOIA) Office..., this correction adds responsibility of the FOIA Office. This correction also corrects the title to Sec... appropriate system manager, or, if unknown, to the Center Privacy Manager or Freedom of Information Act (FOIA...

NASA announced the research opportunity Earth Venture Suborbital-2 (EVS-2) mission in support of NASA's science strategic goals and objectives in 2013.
Penn State University, NASA Langley Research Center (LaRC), and other academic institutions, government agencies, and industrial companies together formulated and proposed the Atmospheric Carbon and Transport - America (ACT-America) suborbital mission, which was subsequently selected for implementation. The airborne measurements that are part of ACT-America will provide a unique set of remote and in-situ measurements of CO2 over North America at spatial and temporal scales not previously available to the science community, and this will greatly enhance our understanding of the carbon cycle. ACT-America will consist of five airborne campaigns, covering all four seasons, to measure regional atmospheric carbon distributions and to evaluate the accuracy of atmospheric transport models used to assess carbon sinks and sources under fair and stormy weather conditions. This coordinated mission will measure atmospheric carbon in the three most important regions of the continental US carbon balance: Northeast, Midwest, and South. Data will be collected using 2 airborne platforms (NASA Wallops' C-130 and NASA Langley's B-200) with both in-situ and lidar instruments, along with instrumented ground towers and under flights of the Orbiting Carbon Observatory (OCO-2) satellite. This presentation provides an overview of the ACT-America instruments, with particular emphasis on the airborne CO2 and backscatter lidars, and the rationale, approach, and anticipated results from this mission.

Hooke, A. J.

A set of standard telemetry protocols for downlink data flow facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system is thus formatted on a message rather than a word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.

Golshan, Nassar (Editor)

The NASA Propagation Experimenters (NAPEX) Meeting and associated Advanced Communications Technology Satellite (ACTS) Propagation Studies Miniworkshop convene yearly to discuss studies supported by the NASA Propagation Program. Representatives from the satellite communications (satcom) industry, academia, and government with an interest in space-ground radio wave propagation have peer discussion of work in progress, disseminate propagation results, and interact with the satcom industry.
NAPEX XX, in Fairbanks, Alaska, June 4-5, 1996, had three sessions: (1) "ACTS Propagation Study: Background, Objectives, and Outcomes," covered results from thirteen station-years of Ka-band experiments; (2) "Propagation Studies for Mobile and Personal Satellite Applications," provided the latest developments in measurement, modeling, and dissemination of propagation phenomena of interest to the mobile, personal, and aeronautical satcom industry; and (3) "Propagation Research Topics," covered a range of topics including space/ground optical propagation experiments, propagation databases, the NASA Propagation Web Site, and revision plans for the NASA propagation effects handbooks. The ACTS Miniworkshop, June 6, 1996, covered ACTS status, engineering support for ACTS propagation terminals, and the ACTS Propagation Data Center. A plenary session made specific recommendations for the future direction of the program. Armstrong, R. L.; Brodzik, M. J. Snow cover is an important variable for climate and hydrologic models due to its effects on energy and moisture budgets. Over the past several decades both optical and passive microwave satellite data have been utilized for snow mapping at the regional to global scale. For the period 1978 to 2002, we have shown earlier that both passive microwave and visible data sets indicate a similar pattern of inter-annual variability, although the maximum snow extents derived from the microwave data are, depending on season, less than those provided by the visible satellite data, and the visible data typically show higher monthly variability. Snow mapping using optical data is based on the magnitude of the surface reflectance, while microwave data can be used to identify snow cover because the microwave energy emitted by the underlying soil is scattered by the snow grains, resulting in a sharp decrease in brightness temperature and a characteristic negative spectral gradient. Our previous work has defined the respective advantages and disadvantages of these two types of satellite data for snow cover mapping, and it is clear that a blended product is optimal. We present a multi-sensor approach to snow mapping based both on historical data as well as data from current NASA EOS sensors. For the period 1978 to 2002 we combine data from the NOAA weekly snow charts with passive microwave data from the SMMR and SSM/I brightness temperature record. For the current and future time period we blend MODIS and AMSR-E data sets. An example of validation at the brightness temperature level is provided through the comparison of AMSR-E with data from the well-calibrated heritage SSM/I sensor over a large homogeneous snow-covered surface (Dome C, Antarctica). Prototype snow cover maps from AMSR-E compare well with maps derived from SSM/I. Our current blended product is being developed in the 25 km EASE-Grid, while the MODIS data being used are in the Climate Modelers Grid (CMG) at approximately 5 km. We are currently developing a flight prototype Spacecraft Charge Monitor (SCM) with support from NASA's Small Business Innovation Research (SBIR) program. The device will use a recently proposed high energy-resolution electron spectroscopic technique to determine spacecraft floating potential. The inspiration for the technique came from data collected by the Atmosphere Explorer (AE) satellites in the 1970s. The data available from the AE satellites indicate that the SCM may be able to determine spacecraft floating potential to within 0.1 V under certain conditions.
Such accurate measurement of spacecraft charge could be used to correct biases in space plasma measurements. The device may also be able to measure spacecraft floating potential in the solar wind and in orbit around other planets. GREENBELT, Md. - Thanks to a fortuitous observation with NASA's Swift satellite, astronomers for the first time have caught a star in the act of exploding. Astronomers have previously observed thousands of stellar explosions, known as supernovae, but they have always seen them after the fireworks were well underway. "For years we have dreamed of seeing a star just as it was exploding, but actually finding one is a once in a lifetime event," says team leader Alicia Soderberg, a Hubble and Carnegie-Princeton Fellow at Princeton University in Princeton, N.J. "This newly born supernova is going to be the Rosetta stone of supernova studies for years to come." A typical supernova occurs when the core of a massive star runs out of nuclear fuel and collapses under its own gravity to form an ultradense object known as a neutron star. The newborn neutron star compresses and then rebounds, triggering a shock wave that plows through the star's gaseous outer layers and blows the star to smithereens. Astronomers thought for nearly four decades that this shock "break-out" would produce bright X-ray emission lasting a few minutes. But until this discovery, astronomers had never observed this signal. Instead, they have observed supernovae brightening days or weeks later, when the expanding shell of debris is energized by the decay of radioactive elements forged in the explosion. "Seeing the shock break-out in X-rays can give a direct view of the exploding star in the last minutes of its life and also provide a signpost to which astronomers can quickly point their telescopes to watch the explosion unfold," says Edo Berger, a Carnegie-Princeton Fellow at Princeton University. Soderberg's discovery of the first shock breakout can be attributed to luck and Swift's unique design. On January 9, 2008, Soderberg and Berger were using Swift to observe a supernova known as SN 2007uy in the spiral galaxy NGC 2770, located 90 million light-years from Earth in the Sydnor, George H. The National Aeronautics and Space Administration's (NASA) Aeronautics Test Program (ATP) is implementing five significant ground-based test facility projects across the nation with funding provided by the American Recovery and Reinvestment Act (ARRA). The projects were selected as the best candidates within the constraints of the ARRA and the strategic plan of ATP. They are a combination of much-needed large-scale maintenance, reliability, and system upgrades plus creating new test beds for upcoming research programs. The projects are: 1) re-activation of a large compressor to provide a second source for compressed air and vacuum to the Unitary Plan Wind Tunnel at the Ames Research Center (ARC); 2) addition of high-altitude ice crystal generation at the Glenn Research Center Propulsion Systems Laboratory Test Cell 3; 3) a new refrigeration system and tunnel heat exchanger for the Icing Research Tunnel at the Glenn Research Center; 4) technical viability improvements for the National Transonic Facility at the Langley Research Center; and 5) modifications to conduct Environmentally Responsible Aviation and Rotorcraft research at the 14 x 22 Subsonic Tunnel at Langley Research Center. The selection rationale, problem statement, and technical solution summary for each project are given here.
The benefits and challenges of the ARRA-funded projects are discussed. Indirectly, this opportunity provides the advantage of developing large-project experience in NASA's workforce and maintaining corporate knowledge in these unique capabilities. It is envisioned that improved facilities will attract a larger user base, and that capabilities needed for current and future research efforts will offer revenue growth and future operational stability. Several of the chosen projects will maximize wind tunnel reliability and maintainability by using newer, proven technologies in place of older and obsolete equipment and processes. The projects will meet NASA's goal of Atkinson, David J.; Doyle, Richard J.; James, Mark L.; Kaufman, Tim; Martin, R. Gaius A Spacecraft Health Automated Reasoning Prototype (SHARP) portability study is presented. Some specific progress is described on the portability studies, plans for technology transfer, and potential applications of SHARP and related artificial intelligence technology to telescience operations. The application of SHARP to Voyager telecommunications was a proof-of-capability demonstration of artificial intelligence as applied to the problem of real-time monitoring functions in planetary mission operations. An overview of the design and functional description of the SHARP system is also presented as it was applied to Voyager. Rash, James; Parise, Ron; Hogie, Keith; Criscuolo, Ed; Langston, Jim; Powers, Edward I. (Technical Monitor) The Operating Missions as Nodes on the Internet (OMNI) project has shown that Internet technology works in space missions through a demonstration using the UoSAT-12 spacecraft. An Internet Protocol (IP) stack was installed on the orbiting UoSAT-12 spacecraft and tests were run to demonstrate Internet connectivity and measure performance. This also forms the basis for demonstrating subsequent scenarios. This approach provides capabilities heretofore either too expensive or simply not feasible, such as reconfiguration on orbit. The OMNI project recognized the need to reduce the risk perceived by mission managers and did this with a multi-phase strategy. In the initial phase, the concepts were implemented in a prototype system that includes space-similar components communicating over the TDRS (space network) and the terrestrial Internet. The demonstration system includes a simulated spacecraft with sample instruments. Over 25 demonstrations have been given to mission and project managers, National Aeronautics and Space Administration (NASA), Department of Defense (DoD), and contractor technologists and other decision makers. This initial phase reached a high point with an OMNI demonstration given from a booth at the Johnson Space Center (JSC) Inspection Day 99 exhibition. The proof to mission managers is provided during this second phase with year 2000 accomplishments: testing the use of Internet technologies onboard an actual spacecraft. This was done with a series of tests performed using the UoSAT-12 spacecraft. This spacecraft was reconfigured on orbit at very low cost. The total period between concept and the first tests was only 6 months! Onboard software was modified to add an IP stack to support basic IP communications. Also added was support for ping, traceroute, and Network Time Protocol (NTP) tests. These tests show that basic Internet functionality can be used onboard spacecraft.
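A minimal sketch of the kind of round-trip timing behind the UoSAT-12 ping tests described above, assuming a UDP echo service is listening at a placeholder host and port (both hypothetical):

```python
import socket
import time

def measure_rtt(host: str, port: int, payload: bytes = b"ping",
                timeout: float = 5.0) -> float:
    """Return the round-trip time in seconds for one UDP echo exchange."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.monotonic()
        sock.sendto(payload, (host, port))
        sock.recvfrom(2048)  # block until the echo comes back
        return time.monotonic() - start

# Hypothetical ground-station echo endpoint (classic echo port 7):
# print(measure_rtt("groundstation.example.org", 7))
```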
The performance of data was measured to show no degradation from current ... Through a content analysis of 200 "tweets," this study was an exploration into the distinct features of text posted to NASA's "Twitter" site and the potential for these posts to serve as more engaging scientific text than traditional textbooks for adolescents. Results of the content analysis indicated the tweets and linked… Baker, John; Castillo-Rogez, Julie; Bousquet, Pierre-W.; Vane, Gregg; Komarek, Tomas; Klesh, Andrew As planetary science continues to explore new and remote regions of the Solar system with comprehensive and more sophisticated payloads, small spacecraft offer the possibility for focused and more affordable science investigations. The key requirements of these small or micro spacecraft (attitude control and determination, capable computer and data handling, and navigation) are being met by technologies currently under development, to be flown on CubeSats within the next five years. This paper will discuss how micro spacecraft offer an attractive alternative to accomplish specific science and technology goals and what relevant technologies are needed for these types of spacecraft. Acknowledgements: Part of this work is being carried out at the Jet Propulsion Laboratory, California Institute of Technology under contract to NASA. Government sponsorship acknowledged. Obrien, John E.; Fisk, Lennard A.; Aldrich, Arnold A.; Utsman, Thomas E.; Griffin, Michael D.; Cohen, Aaron Activities and National Aeronautics and Space Administration (NASA) programs, both ongoing and planned, are described by NASA administrative personnel from the offices of Space Science and Applications, Space Systems Development, Space Flight, Exploration, and from the Johnson Space Center. NASA's multi-year strategic plan, called Vision 21, is also discussed. It proposes to use the unique perspective of space to better understand Earth. Among the NASA programs mentioned are the Magellan to Venus and Galileo to Jupiter spacecraft, the Cosmic Background Explorer, Pegsat (the first Pegasus payload), Hubble, the Joint U.S./German ROSAT X-ray Mission, Ulysses to Jupiter and over the Sun, the Astro-Spacelab Mission, and the Gamma Ray Observatory. Copies of viewgraphs that illustrate some of these missions, and others, are provided. Also discussed were life science research plans, economic factors as they relate to space missions, and the outlook for international cooperation. Belvin, W. Keith Remote sensing from spacecraft requires precise pointing of measurement devices in order to achieve adequate spatial resolution. Unfortunately, various spacecraft disturbances induce vibrational jitter in the remote sensing instruments. The NASA Langley Research Center has performed analysis, simulations, and ground tests to identify the more promising technologies for minimizing spacecraft pointing jitter. These studies have shown that the use of smart materials to reduce spacecraft jitter is an excellent match between a maturing technology and an operational need. This paper describes the use of embedded piezoelectric actuators for vibration control and payload isolation. In addition, recent advances in modeling, simulation, and testing of spacecraft pointing jitter are discussed. National Aeronautics and Space Administration, Washington, DC. Educational Programs Div. Presented is one of a series of publications of National Aeronautics and Space Administration (NASA) facts about the exploration of Mars.
The Viking mission to Mars, consisting of two unmanned NASA spacecraft launched in August and September 1975, is described. A description of the spacecraft and their paths is given. A diagram identifying the… Riis, Troels; Thuesen, Gøsta; Kilsgaard, Søren The National Aeronautics and Space Administration (NASA) is working on formation flying capabilities for spacecraft (the GRACE project). IAU and JPL are developing the inter-spacecraft attitude link to be used on the two spacecraft. Veverka, J.; Langevin, Y.; Farquhar, R.; Fulchignoni, M. After two decades of spacecraft exploration, we still await the first direct investigation of an asteroid. This paper describes how a growing international interest in the solar system's more primitive bodies should remedy this. Plans are under way in Europe for a dedicated asteroid mission (Vesta) which will include multiple flybys with in situ penetrator studies. Possible targets include 4 Vesta, 8 Flora and 46 Hestia; launch is scheduled for 1994 or 1996. In the United States, NASA plans include flybys of asteroids en route to outer solar system targets Tobias, R. F. Topics considered include: NASA Small Spacecraft Technology Initiative (SSTI) objectives, SSTI-Lewis overview, battery requirement, two-cell Common Pressure Vessel (CPV) design summary, CPV electric performance, battery design summary, battery functional description, and battery performance. Gordon, C. K. A preliminary design study was conducted to evaluate the suitability of the NASA 515 airplane as a flight demonstration vehicle, and to develop plans, schedules, and budget costs for fly-by-wire/active controls technology flight validation in the NASA 515 airplane. The preliminary design and planning were accomplished for two phases of flight validation. Sellmaier, Florian; Schmidhuber, Michael The book describes the basic concepts of spaceflight operations, for both human and unmanned missions. The basic subsystems of a space vehicle are explained in dedicated chapters, and the relationship between spacecraft design and the unique space environment is laid out. Flight dynamics are taught as well as ground segment requirements. Mission operations are divided into preparation (including management aspects), execution, and planning. Deep space missions and space robotic operations are included as special cases. The book is based on a course held at the German Space Operations Center (GSOC). Gordon, Scott; Kern, Dennis L. NASA-HDBK-7008 Spacecraft Level Dynamic Environments Testing discusses the approaches, benefits, dangers, and recommended practices for spacecraft-level dynamic environments testing, including vibration testing. This paper discusses in additional detail the benefits and actual experiences of vibration testing spacecraft for NASA Goddard Space Flight Center (GSFC) and Jet Propulsion Laboratory (JPL) flight projects. JPL and GSFC have both similarities and differences in their spacecraft-level vibration test approach: JPL uses a random vibration input and a frequency range usually starting at 5 Hz and extending to as high as 250 Hz. GSFC uses a sine sweep vibration input and a frequency range usually starting at 5 Hz and extending only to the limits of the coupled loads analysis (typically 50 to 60 Hz).
However, both JPL and GSFC use force limiting to realistically notch spacecraft resonances, and response (acceleration) limiting as necessary to protect spacecraft structure and hardware from exceeding design strength capabilities. Despite GSFC and JPL differences in spacecraft-level vibration test approaches, both have uncovered a significant number of spacecraft design and workmanship anomalies in vibration tests. This paper will give an overview of JPL and GSFC spacecraft vibration testing approaches and provide a detailed description of spacecraft anomalies revealed. Hurlbert, Kathryn Miller In the 21st century, the National Aeronautics and Space Administration (NASA), the Russian Federal Space Agency, the National Space Agency of Ukraine, the China National Space Administration, and many other organizations representing spacefaring nations shall continue or newly implement robust space programs. Additionally, business corporations are pursuing commercialization of space for enabling space tourism and capital business ventures. Future space missions are likely to include orbiting satellites, orbiting platforms, space stations, interplanetary vehicles, planetary surface missions, and planetary research probes. Many of these missions will include humans to conduct research for scientific and terrestrial benefits and for space tourism, and this century will therefore establish a permanent human presence beyond Earth's confines. Other missions will not include humans, but will be autonomous (e.g., satellites, robotic exploration), and will also serve to support the goals of exploring space and providing benefits to Earth's populace. This section focuses on thermal management systems for human space exploration, although the guiding principles can be applied to unmanned space vehicles as well. All spacecraft require a thermal management system to maintain a tolerable thermal environment for the spacecraft crew and/or equipment. The requirements for human rating and the specified controlled temperature range (approximately 275 K - 310 K) for crewed spacecraft are unique, and key design criteria stem from overall vehicle and operational/programmatic considerations. These criteria include high reliability, low mass, minimal power requirements, low development and operational costs, and high confidence for mission success and safety. This section describes the four major subsystems for crewed spacecraft thermal management systems, and design considerations for each. Additionally, some examples of specialized or advanced thermal system technologies are presented. VanSant, Timothy J.; Neergaard, Linda F. The Microwave Anisotropy Probe (MAP), a MIDEX mission built in partnership between Princeton University and the NASA Goddard Space Flight Center (GSFC), will study the cosmic microwave background. It will be inserted into a highly elliptical Earth orbit for several weeks and then use a lunar gravity assist to orbit around the second Lagrangian point (L2), 1.5 million kilometers anti-sunward from the Earth. The charging environment for the phasing loops and at L2 was evaluated. There is a limited set of data for L2; the GEOTAIL spacecraft measured relatively low spacecraft potentials (approx. 50 V maximum) near L2. The main area of concern for charging on the MAP spacecraft is the well-established threat posed by the "geosynchronous region" between 6-10 Re.
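Stepping back to the force-limiting practice described at the start of this entry: a common semi-empirical form caps the interface force spectral density in proportion to the square of the load's apparent mass times the input acceleration spectrum, rolling off above a break frequency, in the general spirit of NASA-HDBK-7004. The constant, break frequency, and spectra below are made-up illustrations, not JPL or GSFC values.

```python
def force_limit_psd(f: float, accel_psd: float, m0: float, c: float = 1.4,
                    f0: float = 60.0, n: float = 2.0) -> float:
    """Semi-empirical force-limit spectral density in N^2/Hz.

    S_FF = (C * M0)^2 * S_AA below the break frequency f0, rolled off as
    (f0 / f)^n above it. All parameter values here are illustrative
    assumptions.
    """
    base = (c * m0) ** 2 * accel_psd
    return base if f <= f0 else base * (f0 / f) ** n

# Example: 0.01 g^2/Hz input on a 500 kg load, converted to (m/s^2)^2/Hz.
g = 9.81
print(force_limit_psd(40.0, 0.01 * g**2, 500.0))   # below the break frequency
print(force_limit_psd(120.0, 0.01 * g**2, 500.0))  # rolled off above it
```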
The launch in the autumn of 2000 will coincide with the declining phase of the solar maximum, a period when the likelihood of a substorm is higher than usual. The likelihood of a substorm at that time has been roughly estimated to be on the order of 20% for a typical MAP mission profile. Because of the possibility of spacecraft charging, a requirement for conductive spacecraft surfaces was established early in the program. Subsequent NASCAP/GEO analyses for the MAP spacecraft demonstrated that a significant portion of the sunlit surface (solar cell cover glass and sunshade) could have nonconductive surfaces without significantly raising differential charging. The need for conductive materials on surfaces continually in eclipse has also been reinforced by NASCAP analyses. A NASA engineer with the Commercial Remote Sensing Program (CRSP) at Stennis Space Center works with students from W.P. Daniels High School in New Albany, Miss., through NASA's Small Spacecraft Technology Initiative Program. CRSP is teaching students to use remote sensing to locate a potential site for a water reservoir to offset a predicted water shortage in the community's future. Golshan, Nasser (Editor) The NASA Propagation Experimenters (NAPEX) Meeting is convened each year to discuss studies supported by the NASA Propagation Program. Representatives from the satellite communications (satcom) industry, academia, and government who have an interest in space-ground radio wave propagation are invited to NAPEX meetings for discussions and exchange of information. The reports delivered at these meetings by program managers and investigators present recent activities and future plans. This forum provides an opportunity for peer discussion of work in progress, timely dissemination of propagation results, and close interaction with the satcom industry. The evolution of the national launch vehicle stable is presented, along with lists of launch vehicles used in NASA programs. A partial list of spacecraft used throughout the world is also given. Scientific spacecraft costs are presented along with an historical overview of project development and funding in NASA. Zheng, Yihua; Kuznetsova, Maria M.; Pulkkinen, Antti A.; Maddox, Marlo M.; Mays, Mona Leila The Space Weather Research Center (http://swrc.gsfc.nasa.gov) at NASA Goddard, part of the Community Coordinated Modeling Center (http://ccmc.gsfc.nasa.gov), is committed to providing research-based forecasts and notifications to address NASA's space weather needs, in addition to its critical role in space weather education. It provides a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, tailored space weather alerts and products, and weekly summaries and reports. In this paper, we focus on how (near) real-time data (both in space and on ground), in combination with modeling capabilities and an innovative dissemination system called the integrated Space Weather Analysis system (http://iswa.gsfc.nasa.gov), enable monitoring, analyzing, and predicting the spacecraft charging environment for spacecraft users. Relevant tools and resources are discussed. Golshan, Nasser (Editor) The NASA Propagation Experimenters (NAPEX) meeting is convened each year to discuss studies supported by the NASA Propagation Program.
Representatives from the satellite communications industry, academia, and government who have an interest in space-ground radio wave propagation are invited to NAPEX meetings for discussions and exchange of information. The reports delivered at this meeting by program managers and investigators present recent activities and future plans. This forum provides an opportunity for peer discussion of work in progress, timely dissemination of propagation results, and close interaction with the satellite communications industry. NAPEX XXI took place in El Segundo, California on June 11-12, 1997 and consisted of three sessions. Session I, entitled "ACTS Propagation Study Results & Outcome," covered the results of 20 station-years of Ka-band radio-wave propagation experiments. Session II, "Ka-band Propagation Studies and Models," provided the latest developments in modeling and analysis of experimental results on radio wave propagation phenomena for the design of Ka-band satellite communications systems. Session III, "Propagation Research Topics," covered a diverse range of propagation topics of interest to the space community, including overviews of handbooks and databases on radio wave propagation. The ACTS Propagation Studies miniworkshop was held on June 13, 1997 and consisted of a technical session in the morning and a plenary session in the afternoon. The morning session covered updates on the status of the ACTS Project & Propagation Program, engineering support for ACTS Propagation Terminals, and the Data Center. The plenary session made specific recommendations for the future direction of the program. National Aeronautics and Space Administration — Cassini spacecraft from SPACE rendering package, built by Michael Oberle under NASA contract at JPL. Includes orbiter only, Huygens probe detached. Accurate except... A short explanation of NASA's accomplishments and goals is given in this video. Space Station Freedom, lunar bases, a manned Mars mission, and robotic spacecraft to explore other worlds are briefly described. Biesiadecki, Jeffrey; Jain, Abhinandan A key goal of NASA's New Millennium Program is the development of technology for increased spacecraft on-board autonomy. Achievement of this objective requires the development of a new class of ground-based autonomy testbeds that can enable the low-cost and rapid design, test, and integration of the spacecraft autonomy software. This paper describes the development of an Autonomy Testbed Environment (ATBE) for the NMP Deep Space 1 comet/asteroid rendezvous mission. Anderson, Grant A. (Inventor) A spacecraft radiator system designed to provide structural support to the spacecraft. Structural support is provided by the geometric "crescent" form of the panels of the spacecraft radiator. This integration of radiator and structural support provides spacecraft with a semi-monocoque design. Love, Stanley G.; Morin, Lee M.; McCabe, Mary Fifty years ago, NASA decided that the cockpit controls in spacecraft should be like the ones in airplanes. But controls based on the stick and rudder may not be the best way to manually control a vehicle in space. A different method is based on submersible vehicles controlled with foot pedals. A new pilot can learn the sub's control scheme in minutes and drive it hands-free. We are building a pair of foot pedals for spacecraft control, and will test them in a spacecraft flight simulator.
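As an illustration of the foot-pedal scheme in the Love et al. abstract above, one plausible (entirely hypothetical) mapping sends common pedal deflection to translation and differential deflection to yaw; the actual pedal design is not specified in the abstract.

```python
def pedal_to_command(left: float, right: float) -> dict:
    """Map two pedal deflections in [-1, 1] to a simple rate command.

    Hypothetical mapping: pressing both pedals together translates the
    vehicle forward or back; pressing them differentially yaws it.
    """
    common = (left + right) / 2.0
    differential = (right - left) / 2.0
    return {"translate_x": common, "yaw_rate": differential}

print(pedal_to_command(0.5, 0.5))   # pure forward translation
print(pedal_to_command(-0.3, 0.3))  # pure yaw
```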
The JSC Flight Safety Office has developed this compilation of historical information on spacecraft crew hatches to assist the Safety Tech Authority in the evaluation and analysis of worldwide spacecraft crew hatch design and performance. The document was prepared by SAIC's Gary Johnson, former NASA JSC S&MA Associate Director for Technical. Mr. Johnson's previous experience brings expert knowledge to assess the relevancy of data presented. He has experience with six (6) of the NASA spacecraft programs that are covered in this document: Apollo, Skylab, the Apollo-Soyuz Test Project (ASTP), the Space Shuttle, the ISS, and the Shuttle/Mir Program. Mr. Johnson is also intimately familiar with the JSC Design and Procedures Standard, JPR 8080.5, having been one of its original developers. The observations and findings are presented first by country and organized within each country section by program in chronological order of emergence. A host of reference sources used to augment the personal observations and comments of the author are named within the text and/or listed in the reference section of this document. Careful attention to the selection and inclusion of photos, drawings, and diagrams is used to give visual association and clarity to the topic areas examined. NASA and McDonnell Aircraft Corp. spacecraft technicians assist Astronaut L. Gordon Cooper into his spacecraft prior to undergoing tests in the altitude chamber. These tests are used to determine the operating characteristics of the overall environmental control system. Schmidt, R.; Arends, H.; Pedersen, A. A low and actively controlled electrostatic potential on the outer surfaces of a scientific spacecraft is very important for accurate measurements of cold plasma electrons and ions and the DC to low-frequency electric field. The Japanese/NASA Geotail spacecraft carries as part of its scientific payload a novel ion emitter for active control of the electrostatic potential on the surface of the spacecraft. The aim of the ion emitter is to reduce the positive surface potential which is normally encountered in the outer magnetosphere when the spacecraft is sunlit. Ion emission clamps the surface potential to near the ambient plasma potential. Without emission control, Geotail has encountered plasma conditions in the lobes of the magnetotail which resulted in surface potentials of up to about +70 V. The ion emitter proves to be able to discharge the outer surfaces of the spacecraft and is capable of keeping the surface potential stable at about +2 V. This potential is measured with respect to one of the electric field probes, which are current-biased and thus kept at a potential slightly above the ambient plasma potential. The instrument uses the liquid metal field ion emission principle to emit indium ions. The ion beam energy is about 6 keV and the typical total emission current amounts to about 15 μA. Neither variations in the ambient plasma conditions nor operation of two electron emitters on Geotail produce significant variations of the controlled surface potential as long as the resulting electron emission currents remain much smaller than the ion emission current. Typical results of the active potential control are shown, demonstrating the surface potential reduction and its stability over time. National Aeronautics and Space Administration — Numerous spacecraft component databases have been developed to support NASA, DoD, and contractor design centers and design tools. Despite the clear utility of...
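The potential clamping reported for the Geotail ion emitter above can be caricatured with a toy current balance: the floating potential is the voltage at which escaping photoelectron current balances collected ambient electron current plus the emitter current. All plasma parameters below are hypothetical; this is a cartoon of the mechanism, not a Geotail model.

```python
import math

def floating_potential(i_emit_uA: float) -> float:
    """Solve for the surface potential (V) at which net current is zero.

    Toy model (all parameters hypothetical): escaping photoelectron current
    decays exponentially with potential, collected ambient electron current
    grows linearly, and the ion emitter removes a fixed additional current.
    """
    i_ph, t_ph = 200.0, 10.0   # photoemission magnitude (uA), e-folding (V)
    i_e0, t_e = 5.0, 100.0     # ambient electron collection (uA, V)

    def net(v: float) -> float:
        return i_ph * math.exp(-v / t_ph) - i_e0 * (1 + v / t_e) - i_emit_uA

    lo, hi = 0.0, 500.0
    for _ in range(60):                 # bisection: net() decreases with v
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if net(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(floating_potential(0.0))    # emitter off: settles tens of volts positive
print(floating_potential(15.0))   # emitter on: pulled to a lower potential
```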
National Aeronautics and Space Administration — Reliable monitoring of the microcrack formation in the complex composite structure components in NASA spacecraft and launch vehicles is critical for vehicle... Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C. The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which the tools operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique is to achieve phase coherence among the phased arrays so that the signals combine constructively at the target receiver. There are many technical challenges in spacecraft integration with a high-transmit-power communication system. The array combining technique can improve the communication system data rate and coverage performances without increasing the system transmit power requirements. Example simulation results indicate significant performance improvement can be achieved with phase coherence implementation. Liewer, P.C.; Ayon, J.A.; Wallace, R.A.; Mewaldt, R.A. NASA's Interstellar Probe will be the first spacecraft designed to explore the nearby interstellar medium and its interaction with our solar system. As envisioned by NASA's Interstellar Probe Science and Technology Definition Team, the spacecraft will be propelled by a solar sail to reach >200 AU in 15 years. Interstellar Probe will investigate how the Sun interacts with its environment and will directly measure the properties and composition of the dust, neutrals, and plasma of the local interstellar material which surrounds the solar system. In the mission concept developed in the spring of 1999, a 400-m diameter solar sail accelerates the spacecraft to ∼15 AU/year, roughly 5 times the speed of Voyager 1 and 2. The sail is used to first bring the spacecraft to ∼0.25 AU to increase the radiation pressure before heading out in the interstellar upwind direction. After jettisoning the sail at ∼5 AU, the spacecraft coasts to 200-400 AU, exploring the Kuiper Belt, the boundaries of the heliosphere, and the nearby interstellar medium. Federal Laboratory Consortium — FUNCTION: Provides the capability to correct unbalances of spacecraft by using dynamic measurement techniques and static/coupled measurements to provide products of... Asoka Mendis, D.; Tsurutani, Bruce T. The characteristics of the Comet Halley spacecraft 'fleet' (VEGA 1 and VEGA 2, Giotto, Suisei, and Sakigake) are presented. The major aims of these missions were (1) to discover and characterize the nucleus, (2) to characterize the atmosphere and ionosphere, (3) to characterize the dust, and (4) to characterize the nature of the large-scale comet-solar wind interaction. While the VEGA and Giotto missions were designed to study all four areas, Suisei addressed the second and fourth. Sakigake was designed to study the solar wind conditions upstream of the comet. It is noted that NASA's Deep Space Network played an important role in spacecraft tracking. An international collaborative program is underway to address open issues in spacecraft fire safety.
Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples on the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on a 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), nine smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests. Mullin, J. P.; Loria, J. C. The NASA program in photovoltaic energy conversion research is discussed. Solar cells, solar arrays, gallium arsenides, space station and spacecraft power supplies, and state-of-the-art devices are covered. Steinetz, Bruce M.; Hendricks, Robert C.; Delgado, Irebert The 2007 NASA Seal/Secondary Air System workshop covered the following topics: (i) overview of NASA's new Orion project aimed at developing a new spacecraft that will ferry astronauts to the International Space Station, the Moon, Mars, and beyond; (ii) overview of NASA's fundamental aeronautics technology project; (iii) overview of NASA Glenn's seal project aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (iv) reviews of NASA prime contractor, vendor, and university advanced sealing concepts, test results, experimental facilities, and numerical predictions; and (v) reviews of material development programs relevant to advanced seals development. Turbine engine studies have shown that reducing seal leakage as well as high-pressure turbine (HPT) blade tip clearances will reduce fuel burn, lower emissions, retain exhaust gas temperature margin, and increase range. Turbine seal development topics covered include a method for fast-acting HPT blade tip clearance control, noncontacting low-leakage seals, intershaft seals, and a review of engine seal performance requirements for current and future Army engine platforms.
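Looking back at the multiple-array combining abstract in an earlier entry (Hwu et al.), the payoff of phase coherence is easy to quantify: N phase-aligned, equal-power arrays deliver N^2 times the single-array power (voltages add), versus N for random phases (powers add). A sketch with a hypothetical four-array system:

```python
import cmath
import math
import random

def combined_power(n_arrays: int, coherent: bool, trials: int = 2000) -> float:
    """Average received power from n equal, unit-power array signals."""
    total = 0.0
    for _ in range(trials):
        phases = [0.0 if coherent else random.uniform(0.0, 2.0 * math.pi)
                  for _ in range(n_arrays)]
        field = sum(cmath.exp(1j * p) for p in phases)  # voltage phasor sum
        total += abs(field) ** 2
    return total / trials

print(combined_power(4, coherent=True))   # ~16x: coherent, power ~ N^2
print(combined_power(4, coherent=False))  # ~4x: incoherent, power ~ N
```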
Kulkarni, Neeraj; Lubin, Philip; Zhang, Qicheng Achieving relativistic flight to enable extrasolar exploration is one of the dreams of humanity and the long-term goal of our NASA Starlight program. We derive a relativistic solution for the motion of a spacecraft propelled by radiation pressure from a directed energy (DE) system. Depending on the system parameters, low-mass spacecraft can achieve relativistic speeds, thus enabling interstellar exploration. The diffraction of the DE system plays an important role and limits the maximum speed of the spacecraft. We consider "photon recycling" as a possible method to achieving higher speeds. We also discuss recent claims that our previous work on this topic is incorrect and show that these claims arise from an improper treatment of causality. National Aeronautics and Space Administration — The NASA Thesaurus contains the authorized NASA subject terms used to index and retrieve materials in the NASA Technical Reports Server (NTRS) and the NTRS... The Soyuz TMA-3 spacecraft and its booster rocket (rear view) is shown on a rail car for transport to the launch pad, where it was raised to a vertical launch position at the Baikonur Cosmodrome, Kazakhstan on October 16, 2003. Liftoff occurred on October 18th, transporting a three-man crew to the International Space Station (ISS). Aboard were Michael Foale, Expedition-8 Commander and NASA science officer; Alexander Kaleri, Soyuz Commander and flight engineer, both members of the Expedition-8 crew; and European Space Agency (ESA) Astronaut Pedro Duque of Spain. Photo Credit: 'NASA/Bill Ingalls' Dever, Timothy P.; May, Ryan D.; Morris, Paul H. Ground-based controllers can remain in continuous communication with spacecraft in low Earth orbit (LEO) with near-instantaneous communication speeds. This permits near real-time control of all of the core spacecraft systems by ground personnel. However, as NASA missions move beyond LEO, light-time communication delay issues, such as time lag and low bandwidth, will prohibit this type of operation. As missions become more distant, autonomous control of manned spacecraft will be required. The focus of this paper is the power subsystem. For present missions, controllers on the ground develop a complete schedule of power usage for all spacecraft components. This paper presents work currently underway at NASA to develop an architecture for an autonomous spacecraft, and focuses on the development of communication between the Mission Manager and the Autonomous Power Controller. These two systems must work together in order to plan future load use and respond to unanticipated plan deviations. Using a nominal spacecraft architecture and prototype versions of these two key components, a number of simulations are run under a variety of operational conditions, enabling development of the content and format of the messages necessary to achieve the desired goals. The goals include negotiation of a load schedule that meets the global requirements (contained in the Mission Manager) and local power system requirements (contained in the Autonomous Power Controller), and communication of off-plan disturbances that arise while executing a negotiated plan. The message content is developed in two steps: first, a set of rapid-prototyping "paper" simulations are performed; then the resultant optimized messages are codified for computer communication for use in automated testing. Johnston, R. S.; Pool, S. L.
A number of medically oriented research and hardware development programs in support of manned space flights have been sponsored by NASA. Blood pressure measuring systems for use in spacecraft are considered. In some cases, complete new bioinstrumentation systems were necessary to accomplish a specific physiological study. Plans for medical research during the Skylab program are discussed along with general questions regarding space-borne health service systems and details concerning the Health Services Support Control Center. Dugel-Whitehead, Norma R. This talk will present the work which has been done at NASA Marshall Space Flight Center involving the use of artificial intelligence to control the power system in a spacecraft. The presentation will include a brief history of power system automation, and some basic definitions of the types of artificial intelligence which have been investigated at MSFC for power system automation. A videotape of one of our autonomous power systems, using cooperating expert systems and advanced hardware, will be presented. This paper summarizes the advantages of space nuclear power and propulsion systems. It describes the current status of international power-level-dependent spacecraft nuclear propulsion missions, especially the high-power EU-Russian MEGAHIT study including the Russian Megawatt-Class Nuclear Power Propulsion System, the NASA GRC project, and the low- and medium-power EU DiPoP study. Mission scenarios based on space nuclear propulsion from these studies are sketched as well. Mathieu, Charlotte; Weigel, Annalisa .... Models were developed from a customer-centric perspective to assess different fractionated spacecraft architectures relative to traditional spacecraft architectures using multi-attribute analysis... Leve, Frederick A; Peck, Mason A The goal of this book is to serve both as a practical technical reference and a resource for gaining a fuller understanding of the state of the art of spacecraft momentum control systems, specifically looking at control moment gyroscopes (CMGs). As a result, the subject matter includes theory, technology, and systems engineering. The authors combine material on system-level architecture of spacecraft that feature momentum-control systems with material about the momentum-control hardware and software. This also encompasses material on the theoretical and algorithmic approaches to the control of space vehicles with CMGs. In essence, CMGs are the attitude-control actuators that make contemporary highly agile spacecraft possible. The rise of commercial Earth imaging, the advances in privately built spacecraft (including small satellites), and the growing popularity of the subject matter in academic circles over the past decade argues that now is the time for an in-depth treatment of the topic. CMGs are augmented ... National Aeronautics and Space Administration — This compilation of outgassing data of materials intended for spacecraft use were obtained at the Goddard Space Flight Center (GSFC), utilizing equipment developed... National Aeronautics and Space Administration — The objective of the Spacecraft Fire Safety Demonstration project is to develop and conduct large-scale fire safety experiments on an International Space Station... Larsen, Brian Arthur This is a presentation in PDF format which is a quick spacecraft charging primer, meant to be used for program training. It goes into detail about charging physics, RBSP examples, and how to identify charging. Rausch, J. R.; Maloney, J. W.
An aerodynamic shield that could be opened and closed is proposed. The report presents concepts for a deployable aerodynamic brake, used by a spacecraft returning from high orbit to low orbit around Earth. The spacecraft makes grazing passes through the atmosphere to slow down by the drag of the brake. The brake is a flexible shield made of woven metal or ceramic that withstands the high temperatures created by air friction. It is stored until needed, then deployed by a set of struts. In the spring of 1962, engineers from the Engineering Mechanics Division of the Jet Propulsion Laboratory gave a series of lectures on spacecraft design at the Engineering Design seminars conducted at the California Institute of Technology. Several of these lectures were subsequently given at Stanford University as part of the Space Technology seminar series sponsored by the Department of Aeronautics and Astronautics. Presented here are notes taken from these lectures. The lectures were conceived with the intent of providing the audience with a glimpse of the activities of a few mechanical engineers who are involved in designing, building, and testing spacecraft. Engineering courses generally consist of heavily idealized problems in order to allow the more efficient teaching of mathematical technique. Students, therefore, receive a somewhat limited exposure to actual engineering problems, which are typified by more unknowns than equations. For this reason it was considered valuable to demonstrate some of the problems faced by spacecraft designers, the processes used to arrive at solutions, and the interactions between the engineer and the remainder of the organization in which he is constrained to operate. These lecture notes are not so much a compilation of sophisticated techniques of analysis as they are a collection of examples of spacecraft hardware and associated problems. They will be of interest not so much to the experienced spacecraft designer as to those who wonder what part the mechanical engineer plays in an effort such as the exploration of space. Golshan, Nasser (Editor) The NASA Propagation Experimenters (NAPEX) meeting is convened each year to discuss studies supported by the NASA Propagation Program. Representatives from the satellite communications industry, academia, and government who have an interest in space-ground radio wave propagation are invited to NAPEX meetings for discussions and exchange of information. The reports delivered at this meeting by program managers and investigators present recent activities and future plans. This forum provides an opportunity for peer discussion of work in progress, timely dissemination of propagation results, and close interaction with the satellite communications industry. Pencil, Eric J.; Sarmiento, Charles J.; Lichtin, D. A.; Palchefsky, J. W.; Bogorad, A. L. Potential plume contamination of spacecraft surfaces was investigated by positioning spacecraft material samples relative to an arcjet thruster. Samples in the simulated solar array region were exposed to the cold gas arcjet plume for 40 hrs to address concerns about contamination by backstreaming diffusion pump oil. Except for one sample, no significant changes were measured in absorptance and emittance within experimental error. Concerns about surface property degradation due to electrostatic discharges led to the investigation of the discharge phenomenon of charged samples during arcjet ignition.
Short-duration exposure of charged samples demonstrated that potential differences are consistently and completely eliminated within the first second of exposure to a weakly ionized plume. Spark discharge was not the mechanism involved. The results suggest that the arcjet could act as a charge control device on spacecraft. This thesis describes the development of an attitude determination system for spacecraft based only on magnetic field measurements. The need for such a system is motivated by the increased demand for inexpensive, lightweight solutions for small spacecraft. These spacecraft demand full attitude...... determination based on simple, reliable sensors. Meeting these objectives with a single vector magnetometer is difficult and requires temporal fusion of data in order to avoid local observability problems. In order to guarantee globally nonsingular solutions, quaternions are generally the preferred attitude...... is a detailed study of the influence of approximations in the modeling of the system. The quantitative effects of errors in the process and noise statistics are discussed in detail. The third contribution is the introduction of these methods to the attitude determination on-board the Ørsted satellite... The EPOXI flight mission has been testing a new commercial system, Splunk, which employs data mining techniques to organize and present spacecraft telemetry data in a high-level manner. By abstracting away data-source-specific details, Splunk unifies arbitrary data formats into one uniform system. This not only reduces the time and effort for retrieving relevant data, but it also increases operational visibility by allowing a spacecraft team to correlate data across many different sources. Splunk's scalable architecture coupled with its graphing modules also provides a solid toolset for generating data visualizations and building real-time applications such as browser-based telemetry displays. Determan, W.R.; Harty, R.B. The Department of Energy, in cooperation with the Department of Defense, has recently initiated the dynamic isotope power system (DIPS) demonstration program. DIPS is designed to provide 1 to 10 kW of electrical power for future military spacecraft. One of the near-term missions considered as a potential application for DIPS was the boost surveillance and tracking system (BSTS). A brief review and summary of the reasons behind the selection of DIPS for BSTS-type missions is presented. Many of these are directly related to spacecraft integration issues; these issues are reviewed in the areas of system safety, operations, survivability, reliability, and autonomy. Dzielski, John Edward Recent developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces can be used with nonlinear feedback to transform certain nonlinear ordinary differential equations into equivalent linear equations. These feedback linearization techniques are applied to resolve two problems arising in the control of spacecraft equipped with control moment gyroscopes (CMGs). The first application involves the computation of rate commands for the gimbals that rotate the individual gyroscopes to produce commanded torques on the spacecraft. The second application is to the long-term management of stored momentum in the system of control moment gyroscopes using environmental torques acting on the vehicle.
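A standard concrete form of the first CMG application above (gimbal-rate commands that realize a desired torque) is pseudoinverse steering, which returns the minimum-norm gimbal rates through the CMG Jacobian; the feedback-linearization and singularity-avoidance machinery of the thesis goes beyond this. The Jacobian below is a made-up placeholder, not a geometry from the thesis.

```python
import numpy as np

def gimbal_rates(jacobian: np.ndarray, torque_cmd: np.ndarray) -> np.ndarray:
    """Minimum-norm gimbal rates solving A(delta) @ rates = torque command.

    Near singular gimbal configurations the pseudoinverse solution blows
    up, which is exactly why singularity-avoiding steering laws matter.
    """
    return np.linalg.pinv(jacobian) @ torque_cmd

# Placeholder 3x4 Jacobian for four CMGs at some gimbal state (illustrative):
A = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.5, 0.5, 0.5, 0.5]])
print(gimbal_rates(A, np.array([0.1, 0.0, 0.05])))
```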
An approach to distributing control effort among a group of redundant actuators is described that uses feedback linearization techniques to parameterize sets of controls which influence a specified subsystem in a desired way. The approach is adapted for use in spacecraft control with double-gimballed gyroscopes to produce an algorithm that avoids problematic gimbal configurations by approximating sets of gimbal rates that drive CMG rotors into desirable configurations. The momentum management problem is stated as a trajectory optimization problem with a nonlinear dynamical constraint. Feedback linearization and collocation are used to transform this problem into an unconstrained nonlinear program. The approach to trajectory optimization is fast and robust. A number of examples are presented showing applications to the proposed NASA space station. Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M. The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR target and the camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker. Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor) Spacecraft trend analysis is a vital mission operations function performed by satellite controllers and engineers, who perform detailed analyses of engineering telemetry data to diagnose subsystem faults and to detect trends that may potentially lead to degraded subsystem performance or failure in the future. It is this latter function that is of greatest importance, for careful trending can often predict or detect events that may lead to a spacecraft's entry into safe-hold. Early prediction and detection of such events could result in the avoidance of, or rapid return to service from, spacecraft safing, which not only results in reduced recovery costs but also in a higher overall level of service for the satellite system. Contemporary spacecraft trending activities are manually intensive and are primarily performed diagnostically after a fault occurs, rather than proactively to predict its occurrence. They also tend to rely on information systems and software that are outdated when compared to current technologies. When coupled with the fact that flight operations teams often have limited resources, proactive trending opportunities are limited, and detailed trend analysis is often reserved for critical responses to safe holds or other on-orbit events such as maneuvers.
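One minimal automated version of the trending function described above is a rolling least-squares slope with an alert threshold; the window, threshold, and telemetry values below are arbitrary illustrations, not values from the Davis et al. work.

```python
import numpy as np

def trend_alert(samples: np.ndarray, window: int = 50,
                slope_limit: float = 0.01) -> bool:
    """Flag a telemetry channel whose recent linear trend exceeds a limit.

    Fits y = a*t + b over the last `window` samples and compares |a|
    against slope_limit (units per sample); both are arbitrary choices.
    """
    recent = samples[-window:]
    t = np.arange(len(recent))
    slope, _ = np.polyfit(t, recent, 1)
    return abs(slope) > slope_limit

rng = np.random.default_rng(0)
healthy = rng.normal(28.0, 0.05, 200)            # stable bus voltage, say
drifting = healthy + np.linspace(0.0, 4.0, 200)  # slow upward drift
print(trend_alert(healthy), trend_alert(drifting))  # False True
```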
While the contemporary trend analysis approach has sufficed for current single-spacecraft operations, it will be unfeasible for NASA's planned and proposed space science constellations. Missions such as the Dynamics, Reconnection and Configuration Observatory (DRACO), for example, are planning to launch as many as 100 'nanospacecraft' to form a homogeneous constellation. A simple extrapolation of resources and manpower based on single-spacecraft operations suggests that trending for such a large spacecraft fleet will be unmanageable, unwieldy, and cost-prohibitive. It is therefore imperative that an approach to automating the spacecraft trend analysis function be studied, developed, and applied to ... Radioisotope Thermoelectric Generators (RTGs) are going to supply power for the NASA Galileo and Ulysses spacecraft, now scheduled to be launched in 1989 and 1990. The duration of the Galileo mission is expected to be over 8 years. This brings the total RTG lifetime to 13 years. In 13 years, the RTG power drops more than 20 percent, leaving a very small power margin over what is consumed by the spacecraft. Thus it is very important to accurately predict the RTG performance and be able to assess the magnitude of errors involved. The paper lists all the error sources involved in the RTG power predictions and describes a statistical method for calculating the tolerance. After arrival at the Shuttle Landing Facility in the early morning hours, the crated Stardust spacecraft waits to be unloaded from the aircraft. Built by Lockheed Martin Astronautics near Denver, Colo., for NASA's Jet Propulsion Laboratory (JPL), the spacecraft Stardust will use a unique medium called aerogel to capture comet particles flying off the nucleus of comet Wild 2 in January 2004, plus collect interstellar dust for later analysis. Stardust will be launched aboard a Boeing Delta 7426 rocket from Complex 17, Cape Canaveral Air Station, targeted for Feb. 6, 1999. The collected samples will return to Earth in a re-entry capsule to be jettisoned from Stardust as it swings by in January 2006. Pappa, Richard S. NASA is focusing renewed attention on the topic of large, ultra-lightweight space structures, also known as 'gossamer' spacecraft. Nearly all of the details of the giant spacecraft are still to be worked out. But it's already clear that one of the most challenging aspects will be developing techniques to align and control these systems after they are deployed in space. A critical part of this process is creating new ground test methods to measure gossamer structures under stationary, deploying, and vibrating conditions for validation of corresponding analytical predictions. In addressing this problem, I considered, first of all, the possibility of simply using conventional displacement or vibration sensors that could provide spatial measurements. Next, I turned my attention to photogrammetry, a method of determining the spatial coordinates of objects using photographs. The success of this research and development has convinced me that photogrammetry is the most suitable method to solve the gossamer measurement problem. Edwards, David L.; Burns, Howard D.; Miller, Sharon K.; Porter, Ron; Schneider, Todd A.; Spann, James F.; Xapsos, Michael The National Aeronautics and Space Administration (NASA) is embarking on a course to expand human presence beyond Low Earth Orbit (LEO) while also expanding its mission to explore the solar system.
Destinations such as Near Earth Asteroids (NEA), Mars and its moons, and the outer planets are but a few of the mission targets. Each new destination presents an opportunity to increase our knowledge of the solar system and the unique environments for each mission target. NASA has multiple technical and science discipline areas specializing in specific space environments that will help enable these missions. To complement these existing discipline areas, a concept is presented focusing on the development of a space environments and spacecraft effects (SENSE) organization. This SENSE organization includes disciplines such as space climate, space weather, natural and induced space environments, effects on spacecraft materials and systems, and the transition of research information into application. This space environment and spacecraft effects organization will be composed of Technical Working Groups (TWG). These technical working groups will survey customers and users, generate products, and provide knowledge supporting four functional areas: design environments, engineering effects, operational support, and programmatic support. The four functional areas align with phases in the program mission lifecycle and are briefly described below. Design environments are used primarily in the mission concept and design phases of a program. Engineering effects focuses on the material, component, sub-system and system-level selection and the testing to verify design and operational performance. Operational support provides products based on real-time or near-real-time space weather to mission operators to aid in real-time and near-term decision-making. The programmatic support function maintains an interface with the numerous programs within NASA and other federal agencies. National Aeronautics and Space Administration — NASA has identified silver ions as the best candidate biocide for use in the potable water system on next-generation spacecraft. Though significant work has been... National Aeronautics and Space Administration — One of NASA's primary goals for the next decade is the design, development and launch of a spacecraft aimed at the in-situ exploration of the deep atmosphere and... Roman, Juan A. This presentation provides an overview of the activities the National Aeronautics and Space Administration (NASA) is undertaking to encourage innovation across the agency. All information provided is available publicly. Kern, Dennis L.; Scharton, Terry D. The objective of the Mars Micromission program being managed by the Jet Propulsion Laboratory (JPL) for NASA is to develop a common spacecraft that can carry telecommunications equipment and a variety of science payloads for exploration of Mars. The spacecraft will be capable of carrying robot landers and rovers, cameras, probes, balloons, gliders or aircraft, and telecommunications equipment to Mars at much lower cost than recent NASA Mars missions. The lightweight spacecraft (about 220 kg mass) will be launched in a cooperative venture with CNES as a twin auxiliary payload on the Ariane 5 launch vehicle. Two or more Mars Micromission launches are planned for each Mars launch opportunity, which occurs every 26 months. The Mars launch window for the first mission is November 1, 2002 through April 2003, which is planned to be a Mars airplane technology demonstration mission to coincide with the 100-year anniversary of the Kitty Hawk flight.
Several subsequent launches will create a telecommunications network orbiting Mars, which will provide for continuous communication with landers and rovers on the Martian surface. Dedicated science payload flights to Mars are slated to start in 2005. This new, cheaper and faster approach to Mars exploration calls for innovative approaches to the qualification of the Mars Micromission spacecraft for the Ariane 5 launch vibration and acoustic environments. JPL has in recent years implemented new approaches to spacecraft testing that may be effectively applied to the Mars Micromission. These include 1) force limited vibration testing, 2) combined loads, vibration and modal testing, and 3) direct acoustic testing. JPL has performed nearly 200 force limited vibration tests in the past 9 years; several of the tests were on spacecraft and large instruments, including the Cassini and Deep Space One spacecraft. Force limiting, which measures and limits the spacecraft base reaction force using triaxial force gages sandwiched between the test item and the shaker... Jørgensen, John Leif The phenomena and problems encountered when a rendezvous manoeuvre, and possibly docking, of two spacecraft has to be performed have been the topic of numerous studies, and details of a variety of scenarios have been analysed. So far, all solutions that have been brought into realization have been based entirely on direct human supervision and control. This paper describes a vision-based system and methodology that autonomously generates accurate guidance information that may assist a human operator in performing the tasks associated with both the rendezvous and docking navigation... Fogel, L. J.; Calabrese, P. G.; Walsh, M. J.; Owens, A. J. Ways in which autonomous behavior of spacecraft can be extended to treat situations wherein closed-loop control by a human may not be appropriate or even possible are explored. Predictive models that minimize mean least squared error and arbitrary cost functions are discussed. A methodology for extracting cyclic components for an arbitrary environment with respect to usual and arbitrary criteria is developed. An approach to prediction and control based on evolutionary programming is outlined. A computer program capable of predicting time series is presented. A design of a control system for a robotic device with partially unknown physical properties is presented. Urban, David L.; Ruff, Gary A. A presentation of the Saffire Experiment goals and scientific objectives for the Joint CSA/ESA/JAXA/NASA Increments 47 and 48 Science Symposium. The purpose of the presentation is to inform the ISS Cadre and the other investigators of the Saffire goals and objectives to enable them to best support a successful Saffire outcome. Portela, Pedro; Preller, Fabian; Wittke, Henrik; Sinnema, Gerben; Camanho, Pedro; Turon, Albert Fortunately, only a few cases are known where failure of spacecraft structures due to undetected damage has resulted in the loss of a spacecraft or launcher mission. However, several problems related to damage tolerance and in particular delamination of composite materials have been encountered during structure development of various ESA projects and qualification testing. To avoid such costly failures during development, launch or service of spacecraft, launcher and reusable launch vehicle structures, a comprehensive damage tolerance verification approach is needed.
In 2009, the European Space Agency (ESA) initiated an activity called “Delamination Assessment Tool” which is led by the Portuguese company HPS Lda and includes academic and industrial partners. The goal of this study is the development of a comprehensive damage tolerance verification approach for launcher and reusable launch vehicle (RLV) structures, addressing analytical and numerical methodologies, material-, subcomponent- and component testing, as well as non-destructive inspection. The study includes a comprehensive review of current industrial damage tolerance practice resulting from ECSS and NASA standards, the development of new Best Practice Guidelines for analysis, test and inspection methods, and the validation of these with a real industrial case study. The paper describes the main findings of this activity so far and presents a first iteration of a Damage Tolerance Verification Approach, which includes the introduction of novel analytical and numerical tools at an industrial level. This new approach is being put to the test using real industrial case studies provided by the industrial partners, MT Aerospace, RUAG Space and INVENT GmbH. Esper, Jaime; Flatley, Thomas P.; Bull, James B.; Buckley, Steven J. The NASA Goddard Space Flight Center (GSFC) and the Department of Defense Operationally Responsive Space (ORS) Office are exercising a multi-year collaborative agreement focused on a redefinition of the way space missions are designed and implemented. A much faster, leaner and more effective approach to space flight requires the concerted effort of a multi-agency team tasked with developing the building blocks, both programmatically and technologically, to ultimately achieve flights within 7 days of mission call-up. For NASA, rapid mission implementations represent an opportunity to find creative ways of reducing mission life-cycle times, with the resulting savings in cost. This in turn enables a class of missions catering to a broader audience of science participants, from universities to private and national laboratory researchers. To that end, the SMART (Small Rocket/Spacecraft Technology) micro-spacecraft prototype demonstrates an advanced avionics system with integrated GPS capability, high-speed plug-and-playable interfaces, legacy interfaces, inertial navigation, a modular reconfigurable structure, tunable thermal technology, and a number of instruments for environmental and optical sensing. Although SMART was first launched inside a sounding rocket, it is designed as a free-flyer. Simpson, David G. In this case study, we learn how to compute the position of an Earth-orbiting spacecraft as a function of time. As an exercise, we compute the position of John Glenn's Mercury spacecraft Friendship 7 as it orbited the Earth during the third flight of NASA's Mercury program. The Small Spacecraft Technology (SST) Program within the NASA Space Technology Mission Directorate is chartered to develop and demonstrate the capabilities that enable small spacecraft to achieve science and exploration missions in "unique" and "more affordable" ways. Specifically, the SST program seeks to enable new mission architectures through the use of small spacecraft, to expand the reach of small spacecraft to new destinations, and to make possible the augmentation of existing assets and future missions with supporting small spacecraft.
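A minimal sketch of the orbit-position computation described in the Simpson case study above: solve Kepler's equation M = E - e sin E by Newton iteration, then convert the eccentric anomaly to in-plane position. The orbital elements below are illustrative placeholders, loosely sized like Friendship 7's low orbit, not the mission's actual values.

```python
import numpy as np

MU = 3.986004418e14  # Earth gravitational parameter, m^3/s^2

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = M if e < 0.8 else np.pi
    for _ in range(50):
        dE = (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position(t, a, e):
    """In-plane position (x, y) in meters at time t past perigee."""
    n = np.sqrt(MU / a**3)            # mean motion, rad/s
    M = (n * t) % (2*np.pi)           # mean anomaly
    E = kepler_E(M, e)
    x = a * (np.cos(E) - e)           # along the line of apsides
    y = a * np.sqrt(1 - e**2) * np.sin(E)
    return x, y

# Illustrative low-Earth orbit (period ~ 89 min)
a, e = 6_590_000.0, 0.008             # semi-major axis (m), eccentricity
for t in (0.0, 1500.0, 3000.0):
    x, y = position(t, a, e)
    print(f"t={t:6.0f} s  r={np.hypot(x, y)/1e3:8.1f} km")
```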
The SST program sponsors smallsat technology development partnerships between universities and NASA Centers in order to engage the unique talents and fresh perspectives of the university community and to share NASA experience and expertise in relevant university projects to develop new technologies and capabilities for small spacecraft. These partnerships also engage NASA personnel in the rapid, agile and cost-conscious small spacecraft approaches that have evolved in the university community, as well as increase support to university efforts and foster a new generation of innovators for NASA and the nation. Whitaker, A. F.; Dooling, D. To address the challenges of space environmental effects, NASA designed the Long Duration Exposure Facility (LDEF) for an 18-month mission to expose thousands of samples of candidate materials that might be used on a space station or other orbital spacecraft. LDEF was launched in April 1984 and was to have been returned to Earth in 1985. Changes in mission schedules postponed retrieval until January 1990, after 69 months in orbit. Analyses of the samples recovered from LDEF have provided spacecraft designers and managers with the most extensive database on space materials phenomena. Many LDEF samples were greatly changed by extended space exposure. Among even the most radically altered samples, NASA and its science teams are finding a wealth of surprising conclusions and tantalizing clues about the effects of space on materials. Many were discussed at the first two LDEF results conferences and in subsequent professional papers. The LDEF Materials Results for Spacecraft Applications Conference was convened in Huntsville to discuss implications for spacecraft design. Already, paint and thermal blanket selections for space station and other spacecraft have been affected by LDEF data. This volume synopsizes those results. Andrews, R.E.; Hayden, J.H.; Hedges, R.T.; Rehmann, D.W. A review of existing information pertaining to spacecraft power processing systems and equipment was accomplished with a view towards applicability to the modularization of multi-kilowatt power processors. Power requirements for future spacecraft were determined from the NASA mission model-shuttle systems payload data study, which provided the limits for modular power equipment capabilities. Three power processing systems were compared against evaluation criteria to select the system best suited for modularity. The shunt-regulated direct energy transfer system was selected by this analysis for a conceptual design effort which produced equipment specifications, schematics, envelope drawings, and power module configurations. This is Block 1, the first evolution of the world's most powerful and versatile rocket, the Space Launch System, built to return humans to the area around the moon. Eventually, larger and even more powerful and capable configurations will take astronauts and cargo to Mars. On the sides of the rocket are the twin solid rocket boosters that provide more than 75 percent of the thrust during liftoff and burn for about two minutes, after which they are jettisoned, lightening the load for the rest of the space flight. Four RS-25 main engines provide thrust for the first stage of the rocket. These are the world's most reliable rocket engines. The core stage is the main body of the rocket and houses the fuel for the RS-25 engines, liquid hydrogen and liquid oxygen, and the avionics, or "brain," of the rocket.
The core stage is all new and being manufactured at NASA's "rocket factory," Michoud Assembly Facility near New Orleans. The Launch Vehicle Stage Adapter, or LVSA, connects the core stage to the Interim Cryogenic Propulsion Stage. The Interim Cryogenic Propulsion Stage, or ICPS, uses one RL-10 rocket engine and will propel the Orion spacecraft on its deep-space journey after first-stage separation. Finally, the Orion human-rated spacecraft sits atop the massive Saturn V-sized launch vehicle. Managed out of Johnson Space Center in Houston, Orion is the first spacecraft in history capable of taking humans to multiple destinations within deep space. Each element of the SLS utilizes collaborative design processes to achieve the incredible goal of sending humans into deep space. Early phases are focused on feasibility and requirements development. Later phases are focused on detailed design, testing, and operations. There are four basic phases typically found in each element's development. Ramsay, Christopher M. NASA relies more and more on software to control, monitor, and verify its safety-critical systems, facilities and operations. Since the 1960's there has hardly been a spacecraft launched that does not have a computer on board that will provide command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Climate Orbiter, Mars Polar Lander, DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety-critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety-critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation of those requirements. Layman, Lucas; Zelkowitz, Marvin; Basili, Victor; Nikora, Allen P. In this fast abstract, we provide preliminary findings from an analysis of 14,500 spacecraft anomalies from unmanned NASA missions. We provide some baselines for the distributions of software vs. non-software anomalies in spaceflight systems, the risk ratings of software anomalies, and the corrective actions associated with software anomalies.
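A minimal sketch of the kind of tally the Layman fast abstract above reports: distributions of software vs. non-software anomalies and the risk ratings of the software subset. The record fields and values are hypothetical; the actual anomaly database schema is not described in the abstract.

```python
from collections import Counter

# Hypothetical anomaly records; field names and values are invented.
anomalies = [
    {"type": "software",   "risk": "red",    "action": "code patch"},
    {"type": "hardware",   "risk": "yellow", "action": "operational workaround"},
    {"type": "software",   "risk": "yellow", "action": "parameter update"},
    {"type": "procedural", "risk": "green",  "action": "document change"},
    {"type": "software",   "risk": "green",  "action": "operational workaround"},
]

by_type = Counter(a["type"] for a in anomalies)
sw_risk = Counter(a["risk"] for a in anomalies if a["type"] == "software")

n = len(anomalies)
print(f"software share: {by_type['software'] / n:.0%}")
print("software risk ratings:", dict(sw_risk))
```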
Dehoff, Ryan R. [ORNL]; Holmans, Walter [Planetary Systems Corporation] In this project Planetary Systems Corporation proposed utilizing additive manufacturing (3D printing) to manufacture a titanium spacecraft separation system for commercial and US government customers to realize a 90% reduction in cost and energy. These savings were demonstrated via “printing-in” many of the parts and sub-assemblies into one part, thus greatly reducing the labor associated with design, procurement, assembly and calibration of mechanisms. Planetary Systems Corporation redesigned several of the components of the separation system based on additive manufacturing principles, including geometric flexibility and the ability to fabricate complex designs, the ability to combine multiple parts of an assembly into a single component, and the ability to optimize the design for specific mechanical property targets. Shock absorption was specifically targeted, and requirements were established to attenuate damage to the Lightband system from the shock of initiation. Planetary Systems Corporation redesigned components based on these requirements and sent the designs to Oak Ridge National Laboratory to be printed. ORNL printed the parts using the Arcam electron beam melting technology, the parts being fabricated from Ti-6Al-4V given the weight and mechanical performance of the material. A second set of components was fabricated from stainless steel material on the Renishaw laser powder bed technology due to the improved geometric accuracy, surface finish, and wear resistance of the material. Planetary Systems Corporation evaluated these components and determined that 3D printing is potentially a viable method for achieving significant cost and energy savings. Moroz, V. I. In June 1999, Dr. Regis Courtin, Associate Editor of PSS, suggested that I write an article for the new section of this journal: "Planetary Pioneers". I hesitated, but decided to try. One of the reasons for my doubts was my primitive English, so I owe the reader an apology for this in advance. Writing took me much more time than I initially supposed; I have stopped and returned to the manuscript many times. My professional life may be divided into three main phases: pioneering work in ground-based IR astronomy with an emphasis on planetary spectroscopy (1955-1970), studies of the planets with spacecraft (1970-1989), and attempts to proceed with this work in difficult times. I moved ahead using the well-known method of trial and error, as most of us do. In fact, only a small percentage of efforts led to some important results, a sort of dry residue. I will try to describe below how it has been in my case: what may be considered the most important, how I came to it, what was around, etc. Bristow, John; Bauer, Frank; Hartman, Kate; How, Jonathan Formation Flying is revolutionizing the way the space community conducts science missions around the Earth and in deep space. This technological revolution will provide new, innovative ways for the community to gather scientific information, share that information between space vehicles and the ground, and expedite the human exploration of space. Once fully matured, formation flying will result in numerous sciencecraft acting as virtual platforms and sensor webs, gathering significantly more and better science data than can be collected today. To achieve this goal, key technologies must be developed, including those that address the following basic questions posed by the spacecraft: Where am I?
Where is the rest of the fleet? Where do I need to be? What do I have to do (and what am I able to do) to get there? The answers to these questions and the means to implement those answers will depend on the specific mission needs and formation configuration. However, certain critical technologies are common to most formations. These technologies include high-precision position and relative-position knowledge, including Global Positioning System (GPS) and celestial navigation; high degrees of spacecraft autonomy; inter-spacecraft communication capabilities; and targeting and control, including distributed control algorithms and high-precision control thrusters and actuators. This paper provides an overview of a selection of the current activities NASA/DoD/industry/academia are pursuing to develop Formation Flying technologies as quickly as possible, the hurdles that need to be overcome to achieve our formation flying vision, and the team's approach to transferring this technology to space. It will also describe several of the formation flying testbeds, such as Orion and University Nanosatellites, that are being developed to demonstrate and validate many of these innovative sensing and formation control technologies. Fang, Wai-Chi; Alkalai, Leon Recent changes within NASA's space exploration program favor the design, implementation, and operation of low-cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module. National Aeronautics and Space Administration — To reduce the accumulation of human-made "space junk", NASA has implemented a rule requiring the disposal of spacecraft below 2,000 km within 25 years. By deploying... In 1994, the Clinton Administration issued a report, 'Science in the National Interest', which identified new national science goals. Two of the five goals are related to science communications: produce the finest scientists and engineers for the 21st century, and raise the scientific and technological literacy of all Americans. In addition to the guidance and goals set forth by the Administration, NASA has been mandated by Congress under the 1958 Space Act to 'provide for the widest practicable and appropriate dissemination concerning its activities and the results thereof'. In addition to addressing eight Goals and Plans which resulted from a January 1994 meeting between NASA and members of the broader scientific, education, and communications community on the Public Communication of NASA's Science, the Science Communications Working Group (SCWG) took a comprehensive look at the way the Agency communicates its science to ensure that any changes the Agency made were long-term improvements. The SCWG developed a Science Communications Strategy for NASA and a plan to implement the Strategy. This report outlines a strategy from which effective science communications programs can be developed and implemented across the agency. Guiding principles and strategic themes for the strategy are provided, with numerous recommendations for improvement discussed within the respective themes of leadership, coordination, integration, participation, leveraging, and evaluation.
Greenburg, J. S.; Gaelick, C.; Kaplan, M.; Fishman, J.; Hopkins, C. Commercial organizations as well as government agencies invest in spacecraft (S/C) technology programs that are aimed at increasing the performance of communications satellites. The value of these programs must be measured in terms of their impacts on the financial performance of the business ventures that may ultimately utilize the communications satellites. An economic evaluation and planning capability was developed and used to assess the impact of NASA on-orbit propulsion and space power programs on typical fixed satellite service (FSS) and direct broadcast service (DBS) communications satellite business ventures. Typical FSS and DBS spin- and three-axis-stabilized spacecraft were configured in the absence of NASA technology programs. These spacecraft were reconfigured taking into account the anticipated results of NASA-specified on-orbit propulsion and space power programs. In general, the NASA technology programs resulted in spacecraft with increased capability. The developed methodology for assessing the value of spacecraft technology programs in terms of their impact on the financial performance of communication satellite business ventures is described. Results of the assessment of NASA-specified on-orbit propulsion and space power technology programs are presented for typical FSS and DBS business ventures. The state of the art of environmental interactions, covering low-Earth-orbit plasmas, high-voltage systems, spacecraft charging, materials effects, and the direction of future programs, is contained in over 50 papers. Bennett, Norman R; Burns, Kevin; Katz, Russell; Kirschenbaum, Jon; Mason, Gary; Shehata, Shawky The Gravity Probe B spacecraft, developed, integrated, and tested by Lockheed Missiles and Space Company and later Lockheed Martin Corporation, consisted of structures, mechanisms, command and data handling, attitude and translation control, electrical power, thermal control, flight software, and communications. When integrated with the payload elements, the integrated system became the space vehicle. Key requirements shaping the design of the spacecraft were: (1) the tight mission timeline (17 months, 9 days of on-orbit operation), (2) precise attitude and translational control, (3) thermal protection of science hardware, (4) minimizing aerodynamic, magnetic, and eddy current effects, and (5) the need to provide a robust, low-risk spacecraft. The spacecraft met all mission requirements, as demonstrated by dewar lifetime meeting specification, positive power and thermal margins, precision attitude control and drag-free performance, reliable communications, and the collection of more than 97% of the available science data. Oungrinis, Konstantinos-Alketas; Liapi, Marianthi; Kelesidi, Anna; Gargalis, Leonidas; Telo, Marinela; Ntzoufras, Sotiris; Paschidi, Mariana The paper presents the development of an ongoing research project that focuses on a human-centered design approach to habitable spacecraft modules. It focuses on the technical requirements and proposes approaches on how to achieve a spatial arrangement of the interior that addresses sufficiently the functional, physiological and psychosocial needs of the people living and working in such confined spaces that entail long-term environmental threats to human health and performance.
Since the research perspective examines the issue from a qualitative point of view, it is based on establishing specific relationships between the built environment and its users, targeting people's bodily and psychological comfort as a measure toward a successful mission. This research has two basic branches, one examining the context of the system's operation and behavior and the other in the direction of identifying, experimenting and formulating the environment that successfully performs according to the desired context. The latter aspect is investigated through the construction of a scale model on which we run a series of tests to identify the materiality, the geometry and the electronic infrastructure required. Guided by the principles of sensponsive architecture, the ISM research project explores the application of the necessary spatial arrangement and behavior for a user-centered, functional interior where the appropriate intelligent systems are based upon the existing mechanical and chemical support systems featured in space today, and especially on the ISS. The problem is set according to the characteristics presented in the Mars500 project, regarding the living quarters of six crew members, along with their hygiene, leisure and eating areas. Transformable design techniques introduce spatial economy, adjustable zoning and increased efficiency within the interior, securing at the same time precise spatial orientation and character at any given time. The sensponsive configuration is... Becker, Christopher; Merrill, Garrick To enable communication between spacecraft operating in a formation or small constellation, a mesh network architecture was developed and tested using a time division multiple access (TDMA) communication scheme. The network is designed to allow for the exchange of telemetry and other data between spacecraft to enable collaboration between small spacecraft. The system uses a peer-to-peer topology with no central router, so that it does not have a single point of failure. The mesh network is dynamically configurable to allow for the addition and subtraction of new spacecraft into the communication network. Flight testing was performed using an unmanned aerial system (UAS) formation acting as a spacecraft analogue and providing a stressing environment to prove mesh network performance. The mesh network was primarily devised to provide low-latency, high-frequency communication but is flexible and can also be configured to provide higher bandwidth for applications desiring high data throughput. The network includes a relay functionality that extends the maximum range between spacecraft in the network by relaying data from node to node. The mesh network control is implemented completely in software, making it hardware agnostic and thereby allowing it to function with a wide variety of existing radios and computing platforms. NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (11-091)] Privacy Act of 1974; Privacy Act...: Revisions of NASA Appendices to Privacy Act System of Records. SUMMARY: Notice is hereby given that NASA is... Privacy Act of 1974. This notice publishes those amendments as set forth below under the caption... Stahl, H. Philip In July 2010, the NASA Office of the Chief Technologist (OCT) initiated an activity to create and maintain a NASA integrated roadmap for 15 key technology areas which recommend an overall technology investment strategy and prioritize NASA's technology programs to meet NASA's strategic goals.
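A minimal sketch of the TDMA scheme described in the Becker mesh-network abstract above: each node owns a repeating slot in a fixed frame and re-broadcasts messages it has heard, so data relays node to node beyond the originator's radio range. The topology, slot ordering, and relay rule are invented for illustration; the flight system's actual framing and radio interfaces are not described in the abstract.

```python
# Toy TDMA mesh: N nodes share a frame of N slots; node i transmits in
# slot i, and every neighbor within radio range receives. Re-broadcasting
# unseen messages extends the effective range hop by hop.

RANGE = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # symmetric chain topology

def run_frames(n_frames, origin, payload="tlm"):
    inbox = {n: set() for n in RANGE}            # messages each node holds
    inbox[origin].add(payload)
    for frame in range(n_frames):
        for tx in RANGE:                          # slot `tx` belongs to node tx
            for msg in list(inbox[tx]):
                for rx in RANGE[tx]:
                    inbox[rx].add(msg)            # neighbors in range receive
        print(f"frame {frame}: nodes with data {sorted(n for n in RANGE if inbox[n])}")

# Starting from node 3, the message works toward node 0 roughly one hop
# per frame, since node 3 transmits in the last slot of each frame.
run_frames(3, origin=3)
```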
Science Instruments, Observatories and Sensor Systems (SIOSS) roadmap addresses the technology needs to achieve NASA's highest-priority objectives, not only for the Science Mission Directorate (SMD), but for all of NASA. NASA has stepped forward to face the environmental challenge to eliminate the use of Ozone-Layer Depleting Substances (OLDS) and to reduce our Hazardous Air Pollutants (HAP) by 50 percent in 1995. These requirements have been issued by the Clean Air Act, the Montreal Protocol, and various other legislative acts. A proactive group, the NASA Operational Environment Team or NOET, received its charter in April 1992 and was tasked with providing a network through which replacement activities and development experiences can be shared. This is a NASA-wide team which supports the research and development community by sharing information both in person and via a computerized network, assisting in specification and standard revisions, developing cleaner propulsion systems, and exploring environmentally compliant alternatives to current processes. Dennehy, Cornelius J.; Labbe, Steve; Lebsock, Kenneth L. Within the broad aerospace community the importance of identifying, documenting and widely sharing lessons learned during system development, flight test, operational or research programs/projects is broadly acknowledged. Documenting and sharing lessons learned helps managers and engineers to minimize project risk and improve the performance of their systems. Often significant lessons learned on a project fail to get captured even though they are well-known 'tribal knowledge' amongst the project team members. The physical act of actually writing down and documenting these lessons learned for the next generation of NASA GN&C engineers fails to happen on some projects for various reasons. In this paper we will first review the importance of capturing lessons learned and then discuss reasons why some lessons are not documented. A simple approach called 'Pause and Learn' will be highlighted as a proven low-impact method of organizational learning that could foster the timely capture of critical lessons learned. Lastly, some examples of 'lost' GN&C lessons learned from the aeronautics, spacecraft and launch vehicle domains are briefly highlighted. In the context of this paper, 'lost' refers to lessons that have not achieved broad visibility within the NASA-wide GN&C CoP because they are either undocumented, masked or poorly documented in the NASA Lessons Learned Information System (LLIS). The feasibility and cost benefits of nuclear-powered standardized spacecraft are investigated. The study indicates that two shuttle-launched nuclear-powered spacecraft should be able to serve the majority of unmanned NASA missions anticipated for the 1980's. The standard spacecraft include structure, thermal control, power, attitude control, some propulsion capability and tracking, telemetry, and command subsystems. One spacecraft design, powered by the radioisotope thermoelectric generator, can serve missions requiring up to 450 watts. The other spacecraft design, powered by similar nuclear heat sources in a Brayton-cycle generator, can serve missions requiring up to 21000 watts. Design concepts and trade-offs are discussed. The conceptual designs selected are presented and successfully tested against a variety of missions. The thermal design is such that both spacecraft are capable of operating in any earth orbit and any orientation without modification. Three-axis stabilization is included.
Several spacecraft can be stacked in the shuttle payload compartment for multi-mission launches. A reactor-powered thermoelectric generator system, operating at an electric power level of 5000 watts, is briefly studied for applicability to two test missions of diverse requirements. A cost analysis indicates that use of the two standardized spacecraft offers sizable savings in comparison with specially designed solar-powered spacecraft. Chen, Guangming; McLennan, Douglas D. With an eye to the future strategic needs of NASA, the New Millennium Program is funding the Space Technology 5 (ST-5) project to address the future needs in the area of small satellites in constellation missions. The ST-5 project, being developed at Goddard Space Flight Center, involves the development and simultaneous launch of three small, 20-kilogram-class spacecraft. ST-5 is only a test drive, and future NASA science missions may call for fleets of spacecraft containing tens of smart and capable satellites in an intelligent constellation. The objective of the ST-5 project is to develop three such pioneering small spacecraft for flight validation of several critical new technologies. The ST-5 project team at Goddard Space Flight Center has completed the spacecraft design and is now building and testing the three flight units. The launch readiness date (LRD) is in December 2005. A critical part of the ST-5 mission is to prove that it is possible to build these small but capable spacecraft with recurring cost low enough to make future NASA multi-spacecraft constellation missions viable from a cost standpoint. The Galileo spacecraft is illustrated in an artist's concept. Galileo, named for the Italian astronomer, physicist and mathematician who is credited with construction of the first complete, practical telescope in 1620, will make detailed studies of Jupiter. A cooperative program with the Federal Republic of Germany, the Galileo mission will amplify information acquired by two Voyager spacecraft in their brief flybys. Galileo is a two-element system that includes a Jupiter-orbiting observatory and an entry probe. Jet Propulsion Laboratory (JPL) is Galileo project manager and builder of the main spacecraft. Ames Research Center (ARC) has responsibility for the entry probe, which was built by Hughes Aircraft Company and General Electric. Galileo will be deployed from the payload bay (PLB) of Atlantis, Orbiter Vehicle (OV) 104, during mission STS-34. Holmes, W. M., Jr.; Beck, G. A. The multibeam communications package (MCP) for the Advanced Communications Technology Satellite (ACTS), to be STS-launched by NASA in 1988 for experimental demonstration of satellite-switched TDMA (at 220 Mbit/sec) and baseband-processor signal routing (at 110 or 27.5 Mbit/sec), is characterized. The developmental history of the ACTS, the program definition, and the spacecraft-bus and MCP parameters are reviewed and illustrated with drawings, block diagrams, and maps of the coverage plan. Advanced features of the MCP include 4.5-dB-noise-figure 30-GHz FET amplifiers and 20-GHz TWTA transmitters which provide either 40-W or 8-W RF output, depending on rain conditions. The technologies being tested in ACTS can give frequency-reuse factors as high as 20, thus greatly expanding the orbit/spectrum resources available for U.S. communications use.
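A small sketch of the operating idea behind the ACTS dual-mode TWTA described above: the downlink switches to the 40 W mode when predicted rain attenuation would otherwise erase the link margin. All link-budget numbers here are invented placeholders sized only to make the switching behavior visible, not actual ACTS values.

```python
# Illustrative 20 GHz downlink budget (all dB values are placeholders).
EIRP_LOW, EIRP_HIGH = 57.0, 64.0   # dBW for the 8 W vs 40 W TWTA modes (~7 dB apart)
PATH_LOSS = 210.0                  # free-space loss plus miscellaneous losses, dB
G_OVER_T  = 20.0                   # receive figure of merit, dB/K
REQ_CNR   = 10.0                   # carrier-to-noise needed for the TDMA rate, dB
CONST     = 145.0                  # lumped constant (Boltzmann, bandwidth, units), placeholder

def cnr(eirp_dbw, rain_db):
    return eirp_dbw - PATH_LOSS - rain_db + G_OVER_T + CONST

for rain in (0.0, 3.0, 8.0):       # clear sky, light rain, heavy rain (dB fade)
    low = cnr(EIRP_LOW, rain)
    mode = "8 W" if low >= REQ_CNR else "40 W"
    print(f"rain fade {rain:4.1f} dB -> 8 W margin {low - REQ_CNR:+5.1f} dB; select {mode} mode")
```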
Ayres, Thomas J.; Bryant, Larry Deep space missions such as Voyager rely upon a large team of expert analysts who monitor activity in the various engineering subsystems of the spacecraft and plan operations. Senior team members generally come from the spacecraft designers, and new analysts receive on-the-job training. Neither of these methods will suffice for the creation of a new team in the middle of a mission, which may be the situation during the Magellan mission. New approaches are recommended, including electronic documentation, explicit cognitive modeling, and coached practice with archived data. Mueller, Juergen; Alkalai, Leon; Lewis, Carol NASA is seeking to embark on a new set of human and robotic exploration missions back to the Moon, to Mars, and destinations beyond. Key strategic technical challenges will need to be addressed to realize this new vision for space exploration, including improvements in safety and reliability to improve the robustness of space operations. Under sponsorship by NASA's Exploration Systems Mission, the Jet Propulsion Laboratory (JPL), together with its partners in government (NASA Johnson Space Center) and industry (Boeing, Vacco Industries, Ashwin-Ushas Inc.), is developing an ultra-low-mass micro-inspector spacecraft for these missions. The micro-inspector will provide remote vehicle inspections to ensure safety and reliability, or to provide monitoring of in-space assembly. The micro-inspector spacecraft represents an inherently modular system addition that can improve safety and support multiple host vehicles in multiple applications. On human missions, it may help extend the reach of human explorers, decreasing human EVA time to reduce mission cost and risk. The micro-inspector development is the continuation of an effort begun under NASA's Office of Aerospace Technology Enabling Concepts and Technology (ECT) program. The micro-inspector uses miniaturized celestial sensors; relies on a combination of solar power and batteries (allowing for unlimited operation in the sun and up to 4 hours in the shade); utilizes a low-pressure, low-leakage liquid butane propellant system for added safety; and includes multi-functional structure for high system-level integration and miniaturization. Versions of this system to be designed and developed under the H&RT program will include additional capabilities for on-board, vision-based navigation, spacecraft inspection, and collision avoidance, and will be demonstrated in a ground-based, space-related environment. These features make the micro-inspector design unique in its ability to serve crewed as well as robotic spacecraft, well beyond Earth orbit and into arenas such as... Jones, Ross M. NASA's Space Communications & Navigation Program within the Space Operations Directorate is operating a program to develop and deploy Disruption Tolerant Networking (DTN) technology for a wide variety of mission types by the end of 2011. DTN is an enabling element of the Interplanetary Internet, where terrestrial networking protocols are generally unsuitable because they rely on timely and continuous end-to-end delivery of data and acknowledgments. In the fall of 2008, 2009, and 2011, the Jet Propulsion Laboratory installed and tested essential elements of DTN technology on the Deep Impact spacecraft. These experiments, called the Deep Impact Network Experiments (DINET), were performed in close cooperation with the EPOXI project, which has responsibility for the spacecraft.
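A minimal sketch of the store-and-forward behavior at the heart of DTN, as exercised in the DINET experiments described here: a bundle waits in persistent custody at a node until a contact window to the next hop opens, rather than being dropped while the link is down. The contact schedule, node names, and API below are invented for illustration; the flight experiments used the ION implementation of the Bundle Protocol, whose interfaces differ.

```python
# Toy DTN node: bundles are queued (custody) until a contact window to
# the next hop opens; nothing is discarded during link outages.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    contacts: list                     # (t_open, t_close, neighbor), assumed known
    queue: list = field(default_factory=list)

    def accept(self, bundle):
        self.queue.append(bundle)      # take custody of the bundle

    def step(self, t, nodes):
        for t0, t1, peer in self.contacts:
            if t0 <= t < t1 and self.queue:
                bundle = self.queue.pop(0)
                nodes[peer].accept(bundle)          # custody transfer
                print(f"t={t}: {self.name} -> {peer}: {bundle}")

nodes = {
    "ground": Node("ground", [(2, 4, "relay")]),    # link up only during [2, 4)
    "relay":  Node("relay",  [(6, 8, "probe")]),
    "probe":  Node("probe",  []),
}
nodes["ground"].accept("image-001")
for t in range(10):                                 # bundle waits out the outages
    for n in nodes.values():
        n.step(t, nodes)
print("delivered:", nodes["probe"].queue)
```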
The DINET 1 software was installed on the backup software partition of the backup flight computer. For DINET 1, the spacecraft was at a distance of about 15 million miles (24 million kilometers) from Earth. During DINET 1, 300 images were transmitted from the JPL nodes to the spacecraft. Then, they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. The first DINET 1 experiment successfully validated many of the essential elements of the DTN protocols. DINET 2 demonstrated: 1) additional DTN functionality, 2) automated certain tasks which were manually implemented in DINET 1, and 3) installed the ION software on nodes outside of JPL. DINET 3 plans to: 1) upgrade the LTP convergence-layer adapter to conform to the international LTP CL specification, 2) add convergence-layer "stewardship" procedures, and 3) add the BSP security elements (PIB & PCB). This paper describes the planning and execution of the flight experiment and the results obtained. Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor) It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems. This book describes Dragon V2, a futuristic vehicle that not only provides a means for NASA to transport its astronauts to the orbiting outpost but also advances SpaceX's core objective of reusability. A direct descendant of Dragon, Dragon V2 can be retrieved, refurbished and re-launched. It is a spacecraft with the potential to completely revolutionize the economics of an industry where equipment costing hundreds of millions of dollars is routinely discarded after a single use.
It was presented by SpaceX CEO Elon Musk in May 2014 as the spaceship that will carry NASA astronauts to the International Space Station as soon as 2016. SpaceX's Dragon – America's Next Generation Spacecraft describes the extraordinary feats of engineering and human achievement that have placed this revolutionary spacecraft at the forefront of the launch industry and positioned it as the precursor for ultimately transporting humans to Mars. It describes the design and development of Dragon, provides mission highlights of the f... Armstrong, J. W. This paper discusses spacecraft Doppler tracking, the current-generation detector technology used in the low-frequency (~millihertz) gravitational-wave band. In the Doppler method the earth and a distant spacecraft act as free test masses, with a ground-based precision Doppler tracking system continuously monitoring the earth-spacecraft relative dimensionless velocity $2\Delta v/c = \Delta\nu/\nu_0$, where $\Delta\nu$ is the Doppler shift and $\nu_0$ is the radio link carrier frequency. A gravitational wave having strain amplitude $h$ incident on the earth-spacecraft system causes perturbations of order $h$ in the time series of $\Delta\nu/\nu_0$. Unlike other detectors, the ~1-10 AU earth-spacecraft separation makes the detector large compared with millihertz-band gravitational wavelengths, and thus times-of-flight of signals and radio waves through the apparatus are important. A burst signal, for example, is time-resolved into a characteristic signature: three discrete events in the Doppler time series. I discuss here the principles of operation of this detector (emphasizing transfer functions of gravitational wave signals and of the principal noises to the Doppler time series), some data analysis techniques, experiments to date, and illustrations of sensitivity and current detector performance. I conclude with a discussion of how gravitational wave sensitivity can be improved in the low-frequency band. Bauer, Jeffrey Ervin; Mulac, Brenda Lynn Last year may prove to be a pivotal year for the National Aeronautics and Space Administration (NASA) in the Unmanned Aircraft Systems (UAS) arena, especially in relation to routine UAS access to airspace, as NASA accepted an invitation to join the UAS Executive Committee (UAS ExCom). The UAS ExCom is a multi-agency, Federal executive-level committee comprised of the Federal Aviation Administration (FAA), Department of Defense (DoD), Department of Homeland Security (DHS), and NASA, with the goals to: 1) Coordinate and align efforts between key Federal Government agencies to achieve routine safe federal public UAS operations in the National Airspace System (NAS); 2) Coordinate and prioritize technical, procedural, regulatory, and policy solutions needed to deliver incremental capabilities; 3) Develop a plan to accommodate the larger stakeholder community at the appropriate time; and 4) Resolve conflicts between Federal Government agencies (FAA, DoD, DHS, and NASA) related to the above goals. The committee was formed in recognition of the need of UAS operated by these agencies to access the National Airspace System (NAS) to support operational, training, development and research requirements. In order to meet that need, technical, procedural, regulatory, and policy solutions are required to deliver incremental capabilities leading to routine access. The formation of the UAS ExCom is significant in that it represents a tangible commitment by FAA senior leadership to address the UAS access challenge.
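A sketch of the three-event burst signature described in the Armstrong Doppler-tracking review above, using one common form of the Estabrook-Wahlquist two-way response (sign conventions vary across the literature): a burst h(t) appears in the two-way fractional Doppler as three scaled copies whose coefficients sum to zero, at delays set by the one-way light time L and mu = cos(theta). The burst shape and geometry below are invented for illustration.

```python
import numpy as np

def three_pulse(h, t, L, mu):
    """Two-way Doppler response y(t) to GW strain h(t); L is the one-way
    light time (s), mu = cos(theta) for the wave/spacecraft geometry.
    One common form of the Estabrook-Wahlquist response; signs vary."""
    h_0 = h                                          # h(t)
    h_1 = np.interp(t - (1 + mu) * L, t, h, left=0)  # h(t - (1+mu)L)
    h_2 = np.interp(t - 2 * L, t, h, left=0)         # h(t - 2L)
    return -(1 - mu) / 2 * h_0 - mu * h_1 + (1 + mu) / 2 * h_2

t = np.linspace(0.0, 4000.0, 4001)                   # seconds
h = 1e-15 * np.exp(-0.5 * ((t - 200.0) / 20.0) ** 2) # toy Gaussian burst
y = three_pulse(h, t, L=500.0, mu=0.5)               # L = 500 s ~ 1 AU
# the burst shows up three times: near t ~ 200, 950, and 1200 s
peaks = t[np.abs(y) > 0.3 * np.abs(y).max()]
print("events near t =", np.unique(np.round(peaks / 50.0) * 50.0))
```

The three coefficients, -(1-mu)/2, -mu, and (1+mu)/2, sum to zero, which is why a slow drift in h produces little net Doppler signal while a fast burst is time-resolved into three distinct events.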
While the focus of the ExCom is government owned and operated UAS, civil UAS operations are bound to benefit from the progress made in achieving routine access for government UAS. As the UAS ExCom was forming, NASA's Aeronautics Research Mission Directorate began to show renewed interest in UAS, particularly in relation to the future state of the air transportation system under the Next Generation Air Transportation System (NextGen). NASA made funding from the American... Dennison, J. R.; Thomson, C. D.; Kite, J.; Zavyalov, V.; Corbridge, Jodie In an effort to improve the reliability and versatility of spacecraft charging models designed to assist spacecraft designers in accommodating and mitigating the harmful effects of charging on spacecraft, the NASA Space Environments and Effects (SEE) Program has funded the development of facilities at Utah State University for the measurement of the electronic properties of both conducting and insulating spacecraft materials. We present here an overview of our instrumentation and capabilities, which are particularly well suited to study electron emission as related to spacecraft charging. These measurements include electron-induced secondary and backscattered yields, spectra, and angle-resolved measurements as a function of incident energy, species and angle, plus investigations of ion-induced electron yields, photoelectron yields, sample charging and dielectric breakdown. Extensive surface science characterization capabilities are also available to fully characterize the samples in situ. Our measurements for a wide array of conducting and insulating spacecraft materials have been incorporated into the SEE Charge Collector Knowledge-base as a Database of Electronic Properties of Materials Applicable to Spacecraft Charging. This Database provides an extensive compilation of electronic properties, together with parameterization of these properties in a format that can be easily used with existing spacecraft charging engineering tools and with next-generation plasma, charging, and radiation models. Tabulated properties in the Database include: electron-induced secondary electron yield, backscattered yield and emitted electron spectra; He, Ar and Xe ion-induced electron yields and emitted electron spectra; photoyield and solar emittance spectra; and materials characterization including reflectivity, dielectric constant, resistivity, arcing, optical microscopy images, scanning electron micrographs, scanning tunneling microscopy images, and Auger electron spectra. Further... This paper discusses some potential problems of spacecraft charging as a result of interactions between a large spacecraft, such as the Space Station, and its environment. Induced electric fields, due to the V×B effect, may be important for large spacecraft at low Earth orbits. Differential charging, due to different properties of surface materials, may be significant when the spacecraft is partly in sunshine and partly in shadow. A triple-root potential jump condition may occur because of differential charging. Sudden onset of severe differential charging may occur when an electron or ion beam is emitted from the spacecraft. The beam may partially return to the 'hot spots' on the spacecraft. Wake effects, due to the blocking of ambient ion trajectories, may result in an undesirable negative potential region in the vicinity of a large spacecraft. Outgassing and exhaust may form a significant spacecraft-induced environment; ionization may occur.
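A toy current-balance sketch of the surface-charging equilibrium discussed in the abstract above: the floating potential is the value at which collected electron and ion currents cancel. The Maxwellian and orbit-limited expressions and the plasma parameters are textbook simplifications chosen for illustration, not a model taken from the paper, and they omit the secondary-emission and photoemission terms that dominate in many real charging situations.

```python
import numpy as np
from scipy.optimize import brentq

# Floating potential of a surface in a Maxwellian plasma: find phi where
# the net current is zero. Retarded electrons ~ exp(e*phi/kTe) for phi < 0;
# attracted ions grow ~ (1 - e*phi/kTi) (orbit-limited approximation).

E_CHARGE, K_B = 1.602e-19, 1.381e-23
ME, MI = 9.109e-31, 1.673e-27          # electron, proton mass (kg)
N0, TE, TI = 1.0e10, 1.0e4, 1.0e4      # density (m^-3) and temperatures (K)

def thermal_current(n, T, m):          # one-sided thermal flux, A/m^2
    return E_CHARGE * n * np.sqrt(K_B * T / (2 * np.pi * m))

def net_current(phi):                  # phi in volts, negative surface assumed
    je = thermal_current(N0, TE, ME) * np.exp(E_CHARGE * phi / (K_B * TE))
    ji = thermal_current(N0, TI, MI) * (1 - E_CHARGE * phi / (K_B * TI))
    return je - ji                     # electrons drive the surface negative

phi_f = brentq(net_current, -20.0, -0.01)
print(f"floating potential ~ {phi_f:.2f} V (kTe = {K_B*TE/E_CHARGE:.2f} eV)")
```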
Spacecraft charging and discharging may affect the electronic components on board. Acceptability limits and sampling and monitoring strategies for airborne particles in spacecraft were considered. Based on instances of eye and respiratory tract irritation reported by Shuttle flight crews, the following acceptability limits for airborne particles were recommended: for flights of 1 week or less duration (1 mg/cu m for particles less than 10 microns in aerodynamic diameter (AD) plus 1 mg/cu m for particles 10 to 100 microns in AD); and for flights greater than 1 week and up to 6 months in duration (0.2 mg/cu m for particles less than 10 microns in AD plus 0.2 mg/cu m for particles 10 to 100 microns in AD). These numerical limits were recommended to aid in spacecraft atmosphere design, which should aim at particulate levels that are as low as reasonably achievable. Sampling of spacecraft atmospheres for particles should include size-fractionated samples of 0 to 10, 10 to 100, and greater than 100 micron particles for mass concentration measurement and elemental chemical analysis by nondestructive analysis techniques. Morphological and chemical analyses of single particles should also be made to aid in identifying airborne particulate sources. Air cleaning systems based on inertial collection principles and fine particle collection devices based on electrostatic precipitation and filtration should be considered for incorporation into spacecraft air circulation systems. It was also recommended that research be carried out in space in the areas of health effects and particle characterization. Jeletic, James F. The application of computer graphics techniques in NASA space missions is reviewed. Telemetric monitoring of the Space Shuttle and its components is discussed, noting the use of computer graphics for real-time visualization problems in the retrieval and repair of the Solar Maximum Mission. The use of the world map display for determining a spacecraft's location above the Earth and the problem of verifying the relative position and orientation of spacecraft to celestial bodies are examined. The Flight Dynamics/STS Three-dimensional Monitoring System and the Trajectory Computations and Orbital Products System world map display are described, emphasizing Space Shuttle applications. Also, consideration is given to the development of monitoring systems such as the Shuttle Payloads Mission Monitoring System and the Attitude Heads-Up Display, and to the use of the NASA-Goddard Two-dimensional Graphics Monitoring System during Shuttle missions and to support the Hubble Space Telescope. Upchurch, Paul R. GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems.
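A minimal sketch of the computation behind the world-map displays described in the Jeletic review above: the subsatellite point (latitude, longitude) for a circular orbit, accounting for Earth's rotation under the orbit plane. The orbital elements are illustrative; operational ground systems propagate full ephemerides rather than this idealized two-body model.

```python
import numpy as np

OMEGA_E = 7.2921159e-5       # Earth rotation rate, rad/s
MU = 3.986004418e14          # Earth gravitational parameter, m^3/s^2

def ground_track(t, a, inc_deg, raan_deg=0.0):
    """Subsatellite latitude/longitude (deg) for a circular orbit at time t."""
    inc, raan = np.radians(inc_deg), np.radians(raan_deg)
    n = np.sqrt(MU / a**3)                 # mean motion
    u = n * t                              # argument of latitude
    lat = np.degrees(np.arcsin(np.sin(inc) * np.sin(u)))
    # the node stays fixed in inertial space; Earth rotates beneath it
    lon_inertial = raan + np.arctan2(np.cos(inc) * np.sin(u), np.cos(u))
    lon = np.degrees(lon_inertial - OMEGA_E * t)
    return lat, (lon + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

a = 6_778_000.0                            # ~400 km altitude circular orbit
for t in range(0, 5401, 1800):             # one ~90-minute orbit
    lat, lon = ground_track(t, a, inc_deg=51.6)
    print(f"t={t:5d} s  lat={lat:+6.1f}  lon={lon:+7.1f}")
```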
GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training. Williams, Jacob; Falck, Robert D.; Beekman, Izaak B. In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects. Beach, Edward; Giancola, Peter; Gibson, Steven; Mahmot, Ronald The Transportable Payload Operations Control Center (TPOCC) project is applying the latest in graphical user interface technology to the spacecraft control center environment. This project of the Mission Operations Division's (MOD) Control Center Systems Branch (CCSB) at NASA Goddard Space Flight Center (GSFC) has developed an architecture for control centers which makes use of a distributed processing approach and the latest in Unix workstation technology. The TPOCC project is committed to following industry standards and using commercial off-the-shelf (COTS) hardware and software components wherever possible to reduce development costs and to improve operational support. TPOCC's most successful use of commercial software products and standards has been in the development of its graphical user interface. This paper describes TPOCC's successful use and customization of four separate layers of commercial software products to create a flexible and powerful user interface that is uniquely suited to spacecraft monitoring and control. Harman, Richard R. The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. 
Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spin rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters. Nakamura, Rumi; Jeszenszky, Harald; Torkar, Klaus; Andriopoulou, Maria; Fremuth, Gerhard; Taijmar, Martin; Scharlemann, Carsten; Svenes, Knut; Escoubet, Philippe; Prattes, Gustav; Laky, Gunter; Giner, Franz; Hoelzl, Bernhard NASA's Magnetospheric Multiscale (MMS) Mission is planned to be launched on March 12, 2015. The scientific objectives of the MMS mission are to explore and understand the fundamental plasma physics processes of magnetic reconnection, particle acceleration and turbulence in the Earth's magnetosphere. The region of scientific interest of MMS is in a tenuous plasma environment where the positive spacecraft potential reaches an equilibrium at several tens of volts. An Active Spacecraft Potential Control (ASPOC) instrument neutralizes the spacecraft potential by releasing positive charge produced by indium ion emitters. ASPOC thereby reduces the potential in order to improve the electric field and low-energy particle measurements. The method has been successfully applied on other spacecraft such as Cluster and Double Star. Two ASPOC units are present on each of the MMS spacecraft. Each unit contains four ion emitters, whereby one emitter per instrument is operated at a time. ASPOC for MMS includes new developments in the design of the emitters and the electronics, enabling lower spacecraft potentials, higher reliability, and a more uniform potential structure in the spacecraft's sheath compared to previous missions. Model calculations confirm the findings from previous applications that the plasma measurements will not be affected by the beam's space charge. A perfectly stable spacecraft potential precludes the utilization of the spacecraft as a plasma probe, which is a conventional technique used to estimate ambient plasma density from the spacecraft potential. The small residual variations of the potential controlled by ASPOC, however, still allow the ambient plasma density to be determined by comparing two closely separated spacecraft and thereby reconstructing the uncontrolled potential variation from the controlled potential. Regular intercalibration of controlled and uncontrolled potentials is expected to increase the reliability of this new method.
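To make the spinning-spacecraft estimation discussion above concrete, the following is a heavily reduced Python sketch of the pseudo-linear idea: the quaternion kinematics become linear in the quaternion once the body rate is fixed, so a standard Kalman filter can track the rapidly varying attitude of a spinner. The full filter described above also estimates the rate; here the known rate, the noise levels, and the direct quaternion measurement are simplifying assumptions for illustration.

```python
import numpy as np

# Reduced, illustrative sketch of the pseudo-linear attitude filter idea:
# q_dot = 0.5 * Omega(w) * q is linear in q for a fixed body rate w, so a
# plain Kalman filter tracks the fast-changing quaternion of a spinner.
# All numbers below are invented for the demo.

def omega_matrix(w):
    """4x4 quaternion-kinematics matrix for body rate w (scalar-last q)."""
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy,  wx],
                     [-wz, 0.0,  wx,  wy],
                     [ wy, -wx, 0.0,  wz],
                     [-wx, -wy, -wz, 0.0]])

dt = 0.01
w = np.array([0.0, 0.0, 2.0])            # 2 rad/s spin about body z (assumed known)
F = np.eye(4) + 0.5 * omega_matrix(w) * dt

rng = np.random.default_rng(1)
q_true = np.array([0.0, 0.0, 0.0, 1.0])
q_hat = np.array([0.1, 0.0, 0.0, 1.0])
q_hat /= np.linalg.norm(q_hat)           # deliberately offset initial estimate
P, Q, R = np.eye(4) * 0.01, np.eye(4) * 1e-8, np.eye(4) * 1e-4

for _ in range(1000):
    q_true = F @ q_true
    q_true /= np.linalg.norm(q_true)
    z = q_true + rng.normal(0.0, 1e-2, 4)    # noisy quaternion "measurement"

    q_hat = F @ q_hat                         # predict with the frozen-rate model
    P = F @ P @ F.T + Q
    K = P @ np.linalg.inv(P + R)              # update (H = identity)
    q_hat = q_hat + K @ (z - q_hat)
    q_hat /= np.linalg.norm(q_hat)            # brute-force renormalization
    P = (np.eye(4) - K) @ P

print("attitude agreement (quaternion dot):", abs(q_hat @ q_true))
```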
Zebulum, Ricardo S. NASA's scientists are enjoying unprecedented access to astronomy data from space, both from missions launched and operated only by NASA, as well as from missions led by other space agencies to which NASA contributed instruments or technology. This paper describes the NASA astrophysics program for the next decade, including NASA's response to the ASTRO2010 Decadal Survey. Miles, John D., II; Lunn, Griffin Electrostatic separation is a class of material processing technologies commonly used for the sorting of coarse mixtures by means of electrical forces acting on charged or polarized particles. Most if not all of the existing tribo-electrostatic separators were initially developed for mineral ore beneficiation. It is a well-known process that has been successfully used to separate coal from minerals. Potash (potassium) enrichment in underground salt mines containing large amounts of sodium is another use of this technology. Through modification, this technology can be used for spacecraft wastewater brine beneficiation. This will aid in closing the gap between traveling around Earth's gravity well and long-term space exploration. Food has been brought on all manned missions, which is why plant growth for food crops continues to be of interest to NASA. For long-term mission considerations, food production is one of the top priorities. Nutrient recovery is essential for surviving in or past low Earth orbit. In our advanced bio-regenerative process, soluble nitrate salts that can be recovered for plant fertilizer would be produced instead of nitrogen gas. The only part missing is the beneficiation of brine to separate the potassium from the sodium. The use of electrostatic beneficiation in this experiment utilizes the electrical charge differences between aluminum and dried brine created by surface contact. The helixes within the aluminum tribocharger allow for more surface contact when being agitated. When two materials are in contact, the material with the highest affinity for electrons becomes negatively charged, while the other becomes positively charged. This contact exchange of charge may cause the particles to agglomerate depending on their residence time within the tribocharger, compromising the efficiency of separation. The aim of this experiment is to further the development of electrostatic beneficiation by optimizing the separation of ersatz and... National Aeronautics and Space Administration — For spacecraft design and development teams concerned with cost and schedule, the Quick Spacecraft Thermal Analysis Tool (QuickSTAT) is an innovative software suite... Perry, Jay L.; LeVan, Douglas; Crumbley, Robert (Technical Monitor) The primary goals of a collective protection system and a spacecraft environmental control and life support system (ECLSS) are strikingly similar. Essentially both function to provide the occupants of a building or vehicle with a safe, habitable environment. The collective protection system shields military and civilian personnel from short-term exposure to external threats presented by toxic agents and industrial chemicals while an ECLSS sustains astronauts for extended periods within the hostile environment of space. Both have air quality control similarities with various aircraft and 'tight' buildings. This paper reviews basic similarities between air purification system requirements for collective protection and an ECLSS that define surprisingly common technological challenges and solutions.
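As a rough illustration of the tribocharging separation physics described in the brine-beneficiation abstract above, the sketch below deflects oppositely charged grains falling through a transverse electric field, the basic geometry of a free-fall electrostatic separator. The charge-to-mass ratios, field strength, and drop height are invented for the demonstration and are not measured brine properties.

```python
# Toy free-fall electrostatic separator: after contact charging, grains of
# opposite charge sign deflect to opposite collection bins under a
# transverse field. All numbers are illustrative assumptions.

g = 9.81          # m/s^2
E = 2.0e5         # V/m transverse field between separator plates
drop = 0.5        # m of free fall through the field region

def deflection(q_over_m):
    """Horizontal deflection (m) after falling 'drop' meters from rest."""
    t = (2.0 * drop / g) ** 0.5          # fall time
    return 0.5 * q_over_m * E * t * t    # a = qE/m, x = a t^2 / 2

# assumed charge-to-mass ratios (C/kg) after contact with the tribocharger
for name, qm in [("K-rich grain", +2.0e-6), ("Na-rich grain", -1.5e-6)]:
    print(f"{name}: deflects {deflection(qm)*1000:+.1f} mm")
```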
Systems developed for air revitalization on board spacecraft are discussed along with some history on their early development as well as a view of future needs. Emphasis is placed upon two systems implemented by the National Aeronautics and Space Administration (NASA) onboard the International Space Station (ISS): the trace contaminant control system (TCCS) and the molecular sieve-based carbon dioxide removal assembly (CDRA). Over its history, NASA has developed and implemented many life support systems for astronauts. As the duration, complexity, and crew size of manned missions increased from minutes or hours for a single astronaut during Project Mercury to days and ultimately months for crews of 3 or more during the Apollo, Skylab, Shuttle, and ISS programs, these systems have become more sophisticated. Systems aboard spacecraft such as the ISS have been designed to provide long-term environmental control and life support. Challenges facing NASA's efforts include minimizing mass, volume, and power for such systems, while maximizing their safety, reliability, and performance. This paper will highlight similarities and differences among air purification systems. Stachnik, R. V.; Arnold, D.; Melroy, P.; Mccormack, E. F.; Gezari, D. Y. Results of an orbital analysis and performance assessment of SAMSI (Spacecraft Array for Michelson Spatial Interferometry) are presented. The device considered includes two one-meter telescopes in orbits which are identical except for slightly different inclinations; the telescopes achieve separations as large as 10 km and relay starlight to a central station which has a one-meter optical delay line in one interferometer arm. It is shown that a 1000-km altitude, zero mean inclination orbit affords natural scanning of the 10-km baseline with departures from optical pathlength equality which are well within the corrective capacity of the optical delay line. Electric propulsion is completely adequate to provide the required spacecraft motions, principally those needed for repointing. Resolution of 0.00001 arcsec and magnitude limits of 15 to 20 are achievable. Anderson, John D. Current spacecraft tests of general relativity depend on coherent radio tracking referred to atomic frequency standards at the ground stations. This paper addresses the possibility of improved tests using essentially the current system, but with the added possibility of a space-borne atomic clock. Outside of the obvious measurement of the gravitational frequency shift of the spacecraft clock, a successor to the suborbital flight of a Scout D rocket in 1976 (GP-A Project), other metric tests would benefit most directly from a possible improved sensitivity for the reduced coherent data. For purposes of illustration, two possible missions are discussed. The first is a highly eccentric Earth orbiter, and the second a solar-conjunction experiment to measure the Shapiro time delay using coherent Doppler data instead of the conventional ranging modulation. Bjarnø, Jonas Bækby Spacecraft platform instability constitutes one of the most significant limiting factors in hyperacuity pointing and tracking applications, yet the demand for accurate, timely and reliable attitude information is ever increasing. The PhD research project described within this dissertation has...
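For the highly eccentric Earth orbiter mentioned in the relativity discussion above, the size of the clock effect is easy to estimate to first order by combining the gravitational potential and velocity terms. The sketch below does exactly that; the orbit altitudes are invented example values.

```python
# Back-of-envelope estimate of the fractional frequency shift a space-borne
# clock on a highly eccentric Earth orbiter would show between perigee and
# apogee (first-order GR plus special relativity). Orbit numbers are invented.

GM = 3.986004418e14     # Earth's GM, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
RE = 6.371e6            # Earth radius, m

r_peri = RE + 500e3     # assumed perigee altitude: 500 km
r_apo = RE + 40000e3    # assumed apogee altitude: 40,000 km
a = 0.5 * (r_peri + r_apo)

def frac_shift(r):
    """Fractional clock-rate offset vs. a distant observer (first order)."""
    v2 = GM * (2.0 / r - 1.0 / a)        # vis-viva: orbital speed squared
    return -GM / (r * c * c) - v2 / (2.0 * c * c)

dperi, dapo = frac_shift(r_peri), frac_shift(r_apo)
print(f"perigee: {dperi:.3e}  apogee: {dapo:.3e}")
print(f"perigee-to-apogee modulation: {dapo - dperi:.3e}")  # ~1e-9
```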
served to investigate the solution space for augmenting the DTU μASC stellar reference sensor with a miniature Inertial Reference Unit (IRU), thereby obtaining improved bandwidth, accuracy and overall operational robustness of the fused instrument. Present day attitude determination requirements are met... of the instrument, and affecting operations during agile and complex spacecraft attitude maneuvers. As such, there exists a theoretical foundation for augmenting the high frequency performance of the μASC instrument, by harnessing the complementary nature of optical stellar reference and inertial sensor technology... Meyer, Marit E. Fire safety in the indoor spacecraft environment is concerned with a unique set of fuels which are designed to not combust. Unlike terrestrial flaming fires, which often can consume an abundance of wood, paper and cloth, spacecraft fires are expected to be generated from overheating electronics consisting of flame-resistant materials. Therefore, NASA prioritizes fire characterization research for these fuels undergoing oxidative pyrolysis in order to improve spacecraft fire detector design. A thermal precipitator designed and built for spacecraft fire safety test campaigns at the NASA White Sands Test Facility (WSTF) successfully collected an abundance of smoke particles from oxidative pyrolysis. A thorough microscopic characterization has been performed for ten types of smoke from common spacecraft materials or mixed materials heated at multiple temperatures using the following techniques: SEM, TEM, high resolution TEM, high resolution STEM and EDS. The resulting smoke particle morphologies and elemental compositions are consistent with known thermal decomposition mechanisms in the literature and the chemical make-up of the spacecraft fuels. Some conclusions about particle formation mechanisms are explored based on images of the microstructure of Teflon smoke particles and tar ball-like particles from Nomex fabric smoke. Breed, Julie; Fox, Jeffrey A.; Powers, Edward I. (Technical Monitor) True autonomy is the Holy Grail of spacecraft mission operations. The goal of launching a satellite and letting it manage itself throughout its useful life is a worthy one. With true autonomy, the cost of mission operations would be reduced to a negligible amount. Under full autonomy, any problems (no matter the severity or type) that may arise with the spacecraft would be handled without any human intervention via some combination of smart sensors, on-board intelligence, and/or a smart automated ground system. Until the day that complete autonomy is practical and affordable to deploy, incremental steps of deploying ever-increasing levels of automation (computerization of once-manual tasks) on the ground and on the spacecraft are gradually decreasing the cost of mission operations. For example, NASA's Goddard Space Flight Center (NASA-GSFC) has been flying spacecraft with low cost operations for several years. NASA-GSFC's SMEX (Small Explorer) and MIDEX (Middle Explorer) missions have effectively deployed significant amounts of automation to enable the missions to fly predominantly in 'lights-out' mode. Under lights-out operations the ground system is run without human intervention. Various tools perform many of the tasks previously performed by the human operators.
One of the major issues in reducing human staff in favor of automation is the perceived increase in risk of losing data, or even losing a spacecraft, because of anomalous conditions that may occur when there is no one in the control center. When things go wrong, missions deploying advanced automation need to be sure that anomalous conditions are detected and that key personnel are notified in a timely manner so that on-call team members can react to those conditions. To ensure the health and safety of its lights-out missions, NASA-GSFC's Advanced Automation and Autonomy branch (Code 588) developed the Spacecraft Emergency Response System (SERS). The SERS is a Web-based collaborative environment that enables... Tietz, J. C.; Almand, B. J. A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results. An existing tumbling criterion for the dumbbell satellite in planar librations is reexamined and modified to reflect a recently identified tumbling mode associated with the horizontal attitude orientation. It is shown that for any initial attitude there exists a critical angular rate below which the motion is oscillatory and harmonic and beyond which a continuous tumbling will ensue. If the angular rate is at the critical value, the spacecraft drifts towards the horizontal attitude, from which a spontaneous periodic tumbling occurs. Somers, Jeffrey; Gernhardt, Michael; Lawrence, Charles Historically, spacecraft landing systems have been tested with human volunteers, because analytical methods for estimating injury risk were insufficient. These tests were conducted with flight-like suits and seats to verify the safety of the landing systems. Currently, NASA uses the Brinkley Dynamic Response Index to estimate injury risk, although applying it to the NASA environment has drawbacks: (1) it does not indicate the severity or anatomical location of injury, and (2) it is unclear if the model applies to NASA applications. Because of these limitations, a new validated, analytical approach was desired. Leveraging off of the current state of the art in automotive safety and racing, a new approach was developed. The approach has several aspects: (1) define the acceptable level of injury risk by injury severity; (2) determine the appropriate human surrogate for testing and modeling; (3) mine existing human injury data to determine appropriate Injury Assessment Reference Values (IARVs); (4) rigorously validate the IARVs with sub-injurious human testing; (5) use validated IARVs to update standards and vehicle requirements. Lightholder, Jack; Asphaug, Erik; Thangavelautham, Jekan Recent advances in small spacecraft allow for their use as orbiting microgravity laboratories (e.g. Asphaug and Thangavelautham LPSC 2014) that will produce substantial amounts of data. Power, bandwidth and processing constraints impose limitations on the number of operations which can be performed on this data as well as the data volume the spacecraft can downlink.
We show that instrument autonomy and machine learning techniques can intelligently conduct data reduction and downlink queueing to meet data storage and downlink limitations. As small spacecraft laboratory capabilities increase, we must find techniques to increase instrument autonomy and spacecraft scientific decision making. The Asteroid Origins Satellite (AOSAT) CubeSat centrifuge will act as a testbed for further proving these techniques. Lightweight algorithms, such as connected components analysis, centroid tracking, K-means clustering, edge detection, convex hull analysis and intelligent cropping routines, can be coupled with traditional packet compression routines to reduce data transfer per image as well as provide a first-order filtering of what data is most relevant to downlink. This intelligent queueing provides timelier downlink of scientifically relevant data while reducing the amount of irrelevant downlinked data. The resulting algorithms allow scientists to throttle the amount of data downlinked based on initial experimental results. The data downlink pipeline, prioritized for scientific relevance based on incorporated scientific objectives, can continue from the spacecraft until the data is no longer fruitful. Coupled with data compression and cropping strategies at the data packet level, bandwidth reductions exceeding 40% can be achieved while still downlinking the data deemed most relevant in a double-blind study between scientist and algorithm. Applications of this technology allow for the incorporation of instrumentation which produces significant data volumes on small spacecraft... The Cassini orbiter and Huygens probe, which were successfully launched on October 15, 1997, constitute NASA's last grand-scale interplanetary mission of this century. The mission, which consists of a four-year, close-up study of Saturn and its moons, begins in July 2004 with Cassini's 60 orbits of Saturn and about 33 fly-bys of the large moon Titan. The Huygens probe will descend and land on Titan. Investigations will include Saturn's atmosphere, its rings and its magnetosphere. The atmosphere and surface of Titan and other icy moons also will be characterized. Because of the great distance of Saturn from the sun, some of the instruments and equipment on both the orbiter and the probe require external heaters to maintain their temperature within normal operating ranges. These requirements are met by Light Weight Radioisotope Heater Units (LWRHUs) designed, fabricated and safety tested at Los Alamos National Laboratory, New Mexico. An improved gas tungsten arc welding procedure lowered costs and decreased processing time for the heat units for the Cassini spacecraft. Johnson, Gary B. Determination that equipment can operate in and survive exposure to the humidity environments unique to human-rated spacecraft presents widely varying challenges. Equipment may need to operate in habitable volumes where the atmosphere contains perspiration, exhalation, and residual moisture. Equipment located outside the pressurized volumes may be exposed to repetitive diurnal cycles that may result in moisture absorption and/or condensation. Equipment may be thermally affected by conduction to coldplate or structure, by forced or ambient air convection (hot/cold or wet/dry), or by radiation to space through windows or hatches. The equipment's on/off state also contributes to the equipment's susceptibility to humidity. Like-equipment is sometimes used in more than one location and under varying operational modes.
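Returning to the AOSAT data-triage discussion above, the core queueing idea can be illustrated in a few lines: score each data product with a cheap relevance proxy, then greedily fill the downlink budget with the best-scoring items first. The variance-based score and the budget figures below are stand-ins for the mission-specific scientific relevance criteria mentioned in the text.

```python
import numpy as np

# Minimal sketch of onboard downlink triage: rank synthetic "images" by a
# cheap relevance proxy and greedily fill a per-pass downlink budget.
# Score function, sizes, and budget are illustrative assumptions.

rng = np.random.default_rng(42)
images = [rng.normal(0, s, (64, 64)) for s in rng.uniform(0.1, 2.0, 20)]
sizes_kb = rng.uniform(50, 200, 20)          # compressed size per image

def relevance(img):
    """Toy science score: images with more structure (variance) rank higher."""
    return float(np.var(img))

budget_kb = 800.0                             # per-pass downlink allocation
order = sorted(range(len(images)),
               key=lambda i: relevance(images[i]), reverse=True)

queued, used = [], 0.0
for i in order:                               # greedy fill of the budget
    if used + sizes_kb[i] <= budget_kb:
        queued.append(i)
        used += sizes_kb[i]

print(f"queued {len(queued)} of {len(images)} images, {used:.0f} kB of budget")
```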
Due to these challenges, developing a test scenario that bounds all physical, environmental and operational modes for both pressurized and unpressurized volumes requires an integrated assessment to determine the "worst-case combined conditions." Such an assessment was performed for the Constellation program, considering all of the aforementioned variables, and a test profile was developed based on approximately 300 variable combinations. The test profile has been vetted by several subject matter experts and partially validated by testing. Final testing to determine the efficacy of the test profile on actual space hardware is in the planning stages. When validation is completed, the test profile will be formally incorporated into NASA document CxP 30036, "Constellation Environmental Qualification and Acceptance Testing Requirements (CEQATR)." Bundas, David J.; ONeill, Deborah; Field, Thomas; Meadows, Gary; Patterson, Peter The Global Precipitation Measurement (GPM) Mission is a collaboration between the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA), and other US and international partners, with the goal of monitoring the diurnal and seasonal variations in precipitation over the surface of the earth. These measurements will be used to improve current climate models and weather forecasting, and enable improved storm and flood warnings. This paper gives an overview of the mission architecture and addresses the status of some key trade studies, including the geolocation budgeting, design considerations for spacecraft charging, and design issues related to the mitigation of orbital debris. Two surface-potential probes were used for Jumbo; both probes are produced by Trek Inc. Trek probe model 370 is capable of -3 to 3 kV and has an extremely fast, 50 µs/kV response to changing surface potentials. Trek probe 341B is capable of -20 to 20 kV with a 200 µs/kV response time. During our charging experiments the probe sits... National Aeronautics and Space Administration — Spacecraft automation has the potential to assist crew members and spacecraft operators in managing spacecraft systems during extended space missions. Automation can... Feather, Martin S.; Cornford, Steven L.; Moran, Kelly A risk-based decision-making methodology conceived and developed at JPL and NASA has been used to aid in decision making for spacecraft technology assessment, adoption, development and operation. It takes a risk-centric perspective, through which risks are used as a reasoning step to interpose between mission objectives and risk mitigation measures. Webb, W. A. The Deep Space Network (DSN) is a network of tracking stations, located throughout the globe, used to track spacecraft for NASA's interplanetary missions. This paper describes a computer program, DSNTRAK, which provides an optimum daily tracking schedule for the DSN given the view periods at each station for a mission set of n spacecraft, where n is between 2 and 6. The objective function is specified in terms of relative total daily tracking time requirements between the n spacecraft. Linear programming is used to maximize the total daily tracking time and determine an optimal daily tracking schedule consistent with DSN station capabilities. DSNTRAK is used as part of a procedure to provide DSN load forecasting information for proposed future NASA mission sets.
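The DSNTRAK formulation just described maps naturally onto a small linear program. The toy version below maximizes total daily tracking hours subject to a station-capacity limit and fixed relative time requirements between spacecraft; the mission set, ratios, and view-period bounds are invented for illustration.

```python
from scipy.optimize import linprog

# Toy DSN scheduling LP: choose daily tracking hours t_i per spacecraft to
# maximize total tracking time, subject to station capacity and fixed
# relative time requirements. All numbers are invented.

ratios = [2.0, 1.0, 1.0]      # required relative daily tracking times
view = [10.0, 6.0, 8.0]       # max hours each spacecraft is visible
station_hours = 20.0          # total station time available per day

# maximize sum(t)  ==  minimize -sum(t)
c = [-1.0, -1.0, -1.0]

# equality constraints enforce the ratios: t0 = 2*t1 and t1 = t2
A_eq = [[1.0, -2.0, 0.0],
        [0.0, 1.0, -1.0]]
b_eq = [0.0, 0.0]

# station capacity: t0 + t1 + t2 <= station_hours
A_ub = [[1.0, 1.0, 1.0]]
b_ub = [station_hours]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, v) for v in view])
print("daily tracking hours per spacecraft:", res.x)   # -> [10, 5, 5]
```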
NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (11-092)] Privacy Act of 1974; Privacy Act... retirement of one Privacy Act system of records notice. SUMMARY: In accordance with the Privacy Act of 1974, NASA is giving notice that it proposes to cancel the following Privacy Act system of records notice... Farnham, Steven J., II; Garza, Joel, Jr.; Castillo, Theresa M.; Lutomski, Michael In 2007 NASA was preparing to send two new visiting vehicles carrying logistics and propellant to the International Space Station (ISS). These new vehicles were the European Space Agency's (ESA) Automated Transfer Vehicle (ATV), the Jules Verne, and the Japan Aerospace Exploration Agency's (JAXA) H-II Transfer Vehicle (HTV). The ISS Program wanted to quantify the increased risk to the ISS from these visiting vehicles. At the time, only the Shuttle, the Soyuz, and the Progress vehicles rendezvoused and docked to the ISS. The increased risk to the ISS came from an increase in vehicle traffic, which raises the potential for a catastrophic collision during the rendezvous and the docking or berthing of a spacecraft to the ISS. A universal method of evaluating the risk of rendezvous and docking or berthing was created by the ISS's Risk Team to accommodate the increasing number of rendezvous and docking or berthing operations due to the increasing number of different spacecraft, as well as the future arrival of commercial spacecraft. Before the first docking attempt of ESA's ATV and JAXA's HTV to the ISS, a probabilistic risk model was developed to quantitatively calculate the risk of collision of each spacecraft with the ISS. The 5 rendezvous and docking risk models (Soyuz, Progress, Shuttle, ATV, and HTV) have been used to build and refine the modeling methodology for rendezvous and docking of spacecraft. This risk modeling methodology will be NASA's basis for evaluating the hazards of future ISS visiting spacecraft, including SpaceX's Dragon, Orbital Sciences' Cygnus, and NASA's own Orion spacecraft. This paper will describe the methodology used for developing a visiting vehicle risk model. Backman, D. E.; Clark, C.; Harman, P. K. NASA's Airborne Astronomy Ambassadors (AAA) program is a three-part professional development (PD) experience for high school physics, astronomy, and earth science teachers. AAA PD consists of: (1) blended learning via webinars, asynchronous content learning, and in-person workshops, (2) a STEM immersion experience at NASA Armstrong's B703 science research aircraft facility in Palmdale, California, and (3) ongoing opportunities for connection with NASA astrophysics and planetary science Subject Matter Experts (SMEs). AAA implementation in 2016-18 involves partnerships between the SETI Institute and seven school districts in northern and southern California. AAAs in the current cohort were selected by the school districts based on criteria developed by AAA program staff working with WestEd evaluation consultants. The selected teachers were then randomly assigned by WestEd to a Group A or B to support controlled testing of student learning. Group A completed their PD during January - August 2017, then participated in NASA SOFIA science flights during fall 2017. Group B will act as a control during the 2017-18 school year, then will complete their professional development and SOFIA flights during 2018.
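The probabilistic rendezvous-and-docking risk models described in the Farnham et al. abstract above are far more detailed than anything reproducible here, but the basic Monte Carlo mechanics can be sketched in a few lines. The failure probabilities below are placeholders, not values from the ISS models.

```python
import numpy as np

# Deliberately simplified Monte Carlo docking-risk sketch: a collision
# requires both a critical failure during proximity operations and a failed
# abort. The two probabilities are placeholders for illustration.

rng = np.random.default_rng(7)
N = 1_000_000                    # simulated docking attempts

p_prox_failure = 1e-3            # critical failure during proximity ops
p_abort_fails = 5e-2             # abort system fails to separate safely

prox = rng.random(N) < p_prox_failure
abort = rng.random(N) < p_abort_fails
collisions = np.count_nonzero(prox & abort)

print(f"estimated collision probability per docking: {collisions / N:.2e}")
print(f"analytic value: {p_prox_failure * p_abort_fails:.2e}")
```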
A two-week AAA electromagnetic spectrum and multi-wavelength astronomy curriculum aligned with the Science Framework for California Public Schools and Next Generation Science Standards was developed by program staff for classroom delivery. The curriculum (as well as the AAA's pre-flight PD) capitalizes on NASA content by using "science snapshot" case studies regarding astronomy research conducted by SOFIA. AAAs also interact with NASA SMEs during flight weeks and will translate that interaction into classroom content. The AAA program will make controlled measurements of student gains in standards-based learning plus changes in student attitudes towards STEM, and observe and record the AAAs' implementation of curricular changes. Funded by NASA: NNX16AC51. Johnson, Lee; De Soria-Santacruz Pich, Maria; Conroy, David; Lobbia, Robert; Huang, Wensheng; Choi, Maria; Sekerak, Michael J. NASA's Asteroid Redirect Robotic Mission (ARRM) project plans included a set of plasma and space environment instruments, the Plasma Diagnostic Package (PDP), to fulfill ARRM requirements for technology extensibility to future missions. The PDP objectives were divided into the classes of 1) plasma thruster dynamics, 2) solar array-specific environmental effects, 3) plasma environmental spacecraft effects, and 4) energetic particle spacecraft environment. A reference design approach and interface requirements for ARRM's PDP were generated by the PDP team at JPL and GRC. The reference design consisted of redundant single-string avionics located on the ARRM spacecraft bus as well as the solar array, driving and processing signals from multiple copies of several types of plasma, effects, and environments sensors distributed over the spacecraft and array. The reference design sensor types were derived in part from sensors previously developed for USAF Research Laboratory (AFRL) plasma effects campaigns such as those aboard TacSat-2 in 2007 and AEHF-2 in 2012. Dakermanji, George; Lee, Leonine; Spitzer, Thomas The Global Precipitation Measurement (GPM) spacecraft was jointly developed by NASA and JAXA. It is a Low Earth Orbit (LEO) spacecraft launched on February 27, 2014. The power system is a Direct Energy Transfer (DET) system designed to support 1950 watts orbit average power. The batteries use SONY 18650HC cells and consist of three 8s by 84p batteries operated in parallel as a single battery. During instrument integration with the spacecraft, large current transients were observed in the battery. Investigation into the matter traced the cause to the Dual-Frequency Precipitation Radar (DPR) phased array radar, which generates cyclical high-rate current transients on the spacecraft power bus. The power system electronics' interaction with these transients resulted in the current transients in the battery. An accelerated test program was developed to bound the effect and to assess the impact to the mission. Muller, J.-P.; Grindrod, P. The NASA RPIF (Regional Planetary Imaging Facility) network of 9 US and 8 international centres was originally set up in 1977 to "maintain photographic and digital data as well as mission documentation and cartographic data. Each facility's general holding contains images and maps of planets and their satellites taken by solar system exploration spacecraft. These planetary image facilities are open to the public. The facilities are primarily reference centers for browsing, studying, and selecting lunar and planetary photographic and cartographic materials.
Experienced staff can assist scientists, educators, students, media, and the public in ordering materials for their own use." In parallel, the NASA Planetary Data System (PDS) and ESA Planetary Science Archive (PSA) were set up to distribute digital data, initially on media such as CD-ROM and DVD but now entirely online. The UK NASA RPIF was the first RPIF to be established outside of the US, in 1980. In , the 3D-RPIF is described. Some example products derived using this equipment are illustrated here. In parallel, at MSSL a large Linux cluster and associated RAID-based system has been created to act as a mirror PDS Imaging node so that huge numbers of rover images (from MER & MSL to begin with) and very high resolution (large size) data are available to users of the RPIF and a variety of EU-FP7 projects based at UCL. Meanwhile, the ESA/NASA investigation board concentrates its inquiry on three errors that appear to have led to the interruption of communications with SOHO on June 25. Officials remain hopeful that, based on ESA's successful recovery of the Olympus spacecraft after four weeks under similar conditions in 1991, recovery of SOHO may be possible. The SOHO Mission Interruption Joint ESA/NASA Investigation Board has determined that the first two errors were contained in preprogrammed command sequences executed on ground system computers, while the last error was a decision to send a command to the spacecraft in response to unexpected telemetry readings. The spacecraft is controlled by the Flight Operations Team, based at NASA's Goddard Space Flight Center, Greenbelt, MD. The first error was in a preprogrammed command sequence that lacked a command to enable an on-board software function designed to activate a gyro needed for control in Emergency Sun Reacquisition (ESR) mode. ESR mode is entered by the spacecraft in the event of anomalies. The second error, which was in a different preprogrammed command sequence, resulted in incorrect readings from one of the spacecraft's three gyroscopes, which in turn triggered an ESR. At the current stage of the investigation, the board believes that the two anomalous command sequences, in combination with a decision to send a command to SOHO to turn off a gyro in response to unexpected telemetry values, caused the spacecraft to enter a series of ESRs, and ultimately led to the loss of control. The efforts of the investigation board are now directed at identifying the circumstances that led to the errors, and at developing a recovery plan should efforts to regain contact with the spacecraft succeed. ESA and NASA engineers believe the spacecraft is currently spinning with its solar panels nearly edge-on towards the Sun, and thus not generating any power. Since the spacecraft is spinning around a fixed axis, as the spacecraft progresses... Miyake, Robert N. The Thermal Control Subsystem engineer's task is to maintain the temperature of all spacecraft components, subsystems, and the total flight system within specified limits for all flight modes from launch to end-of-mission. In some cases, specific stability and gradient temperature limits will be imposed on flight system elements. For the Thermal Control Subsystem of "normal" flight systems, the mass and power requirements of the control and sensing elements are below 10% of the total flight system resources. In general the thermal control subsystem engineer is involved in all other flight subsystem designs. Koch, L.
Danielle Transportation noise pollutes our world's cities, suburbs, parks, and wilderness areas. NASA's fundamental research in aviation acoustics is helping to find innovative solutions to this multifaceted problem. NASA is learning from nature to develop the next generation of quiet aircraft. The number of road vehicles and airplanes has roughly tripled since the 1960s. Transportation noise is audible in nearly all the counties across the US. Noise can damage your hearing, raise your heart rate and blood pressure, disrupt your sleep, and make communication difficult. Noise pollution threatens wildlife when it prevents animals from hearing prey, predators, and mates. Noise regulations help drive industry to develop quieter aircraft. Noise standards for aircraft have been developed by the International Civil Aviation Organization and adopted by the US Federal Aviation Administration. The US National Park Service is working with the Federal Aviation Administration to try to balance the demand for access to the parks and wilderness areas with preservation of the natural soundscape. NASA is helping by conceptualizing quieter, more efficient aircraft of the future and performing the fundamental research to make these concepts a reality someday. Recently, NASA has developed synthetic structures that can absorb sound well over a wide frequency range, particularly below 1000 Hz, and which mimic the acoustic performance of bundles of natural reeds. We are adapting these structures to control noise on aircraft and spacecraft. This technology might be used in many other industrial or architectural applications where acoustic absorbers have tight constraints on weight and thickness, and may be exposed to high temperatures or liquids. Information about this technology is being made available through reports and presentations on the NASA Technical Report Server, http://ntrs.nasa.gov. Organizations that would like to collaborate with NASA or commercialize NASA's technology can work with the recently acquired NASA field office within the Technology Center, which is staffed by Mr. Wayne Hudson. We take our guidance from Air Force... apogee of 4.6 R(sub E) geocentric and a perigee of 650 km altitude. The DE-1 High Altitude Plasma Instrument (HAPI) consists of five electrostatic analyzers. The U.S., Russia, and China have each addressed the question of human-rating spacecraft. NASA's operational experience with human-rating primarily resides with Mercury, Gemini, Apollo, Space Shuttle, and International Space Station. NASA's latest developmental experience includes Constellation, X-38, X-33, and the Orbital Space Plane. If domestic commercial crew vehicles are used to transport astronauts to and from space, Soyuz is another example of methods that could be used to human-rate a spacecraft and to work with commercial spacecraft providers. For Soyuz, NASA's normal assurance practices were adapted. Building on NASA's Soyuz experience, this report contends all past, present, and future vehicles rely on a range of methods and techniques for human-rating assurance, the components of which include: requirements, conceptual development, prototype evaluations, configuration management, formal development reviews (safety, design, operations), component/system ground-testing, integrated flight tests, independent assessments, and launch readiness reviews. When constraints (cost, schedule, international) limit the depth/breadth of one or more preferred assurance means, ways are found to bolster the remaining areas.
This report provides information exemplifying the above safety assurance model for consideration with commercial or foreign-government-designed spacecraft. Topics addressed include: U.S./Soviet-Russian government/agency agreements and engineering/safety assessments performed, with lessons learned in historic U.S./Russian joint space ventures. St. Cyr, O. C.; Guhathakurta, M.; Bell, H.; Niemeyer, L.; Allen, J. Measurements from many of NASA's scientific spacecraft are used routinely by space weather forecasters, both in the U.S. and internationally. ACE, SOHO (an ESA/NASA collaboration), STEREO, and SDO provide images and in situ measurements that are assimilated into models and cited in alerts and warnings. A number of years ago, the Space Weather Laboratory was established at NASA-Goddard, along with the Community Coordinated Modeling Center. Within that organization, a space weather service center has begun issuing alerts for NASA's operational users. NASA's operational user community includes flight operations for human and robotic explorers; atmospheric drag concerns for low-Earth orbit; interplanetary navigation and communication; and the fleet of unmanned aerial vehicles, high altitude aircraft, and launch vehicles. Over the past three years we have identified internal stakeholders within NASA and formed a Working Group to better coordinate their expertise and their needs. In this presentation we will describe this activity and some of the challenges in forming a diverse working group. There has recently been a push for adopting integrated modular avionics (IMA) principles in designing spacecraft architectures. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. Ethernet technology is attractive for inclusion in more integrated avionic systems due to its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components. Furthermore, Ethernet can be augmented with a variety of quality of service (QoS) enhancements that enable its use for transmitting critical data. TTEthernet introduces a decentralized clock synchronization paradigm enabling the use of time-triggered Ethernet messaging appropriate for hard real-time applications. TTEthernet can also provide two forms of event-driven communication, thereby accommodating the full spectrum of traffic criticality levels required in IMA architectures. This paper explores the application of TTEthernet technology to future IMA spacecraft architectures as part of the Avionics and Software (A&S) project chartered by NASA's Advanced Exploration Systems (AES) program. Cummings, A. C.; Stone, E. C.; Heikkila, B.; Lal, N.; Webber, W. R. The Voyager spacecraft have been exploring the heliosheath since their crossings of the solar wind termination shock in December 2004 (Voyager 1) and August 2007 (Voyager 2). Starting on 7 May 2012, dramatic short-term changes in the intensities of heliospheric particles and galactic cosmic rays have been occurring periodically at Voyager 1. In July, a series of encounters with a heliospheric depletion region occurred, culminating on 25 August 2012 with the durable entry into the region by Voyager 1 (durable at least through the time of this writing in early February 2013).
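The time-triggered messaging concept behind TTEthernet, discussed above, amounts to reserving conflict-free periodic transmission slots inside a repeating cluster cycle and leaving the gaps for event-driven traffic. The sketch below builds such a static slot table with a deliberately naive conflict-avoidance rule; real schedule-synthesis tools solve this as a constraint problem, and all periods and slot lengths here are invented.

```python
# Toy static slot table for time-triggered messaging: each critical message
# gets one conflict-free slot per period instance within a repeating cluster
# cycle. Slot sizes, periods, and the shift-on-conflict rule are illustrative.

cycle_us = 10_000                         # cluster cycle length (microseconds)
messages = [                              # (name, period_us, slot_len_us)
    ("imu_attitude", 1_000, 100),
    ("cmd_uplink",   5_000, 200),
    ("health_tm",    2_500, 150),
]

schedule, busy = [], []
for name, period, length in messages:
    for start in range(0, cycle_us, period):     # one slot per period instance
        # place the slot at the period boundary, shifting past any conflict
        t = start
        while any(t < e and t + length > s for s, e in busy):
            t += 50                               # naive shift; real tools optimize
        busy.append((t, t + length))
        schedule.append((t, name))

for t, name in sorted(schedule):
    print(f"{t:6d} us  {name}")
```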
This depletion region is characterized by the disappearance of particles accelerated in the heliosphere, the anomalous cosmic rays and termination shock particles, and the increased intensity of galactic cosmic ray nuclei and electrons. The result is that the low-energy part of the galactic cosmic ray spectra is being revealed for the first time. Data from the magnetometer experiment on Voyager 1 imply that the spacecraft is not yet in the interstellar medium, but it apparently has a good connection path to it. At Voyager 2, dramatic changes have not occurred, but there are longer-term trends in the intensities that are different from what was observed on Voyager 1. We will report on the recent observations of energetic particles from both spacecraft. This work was supported by NASA under contract NNN12AA012. Mudgett, Paul D. Fire is one of the most critical contingencies in spacecraft and any closed environment including submarines. Currently, NASA uses particle-based technology to detect fires and hand-held combustion product monitors to track the clean-up and restoration of a habitable cabin environment after the fire is extinguished. In the future, chemical detection could augment particle detection to eliminate frequent nuisance false alarms triggered by dust. In the interest of understanding combustion from both particulate and chemical generation, NASA Centers have been collaborating on combustion studies at White Sands Test Facility using modern spacecraft materials as fuels, and both old and new technology to measure the chemical and particulate products of combustion. The tests attempted to study smoldering pyrolysis at relatively low temperatures without ignition to flaming conditions. This paper will summarize the results of two 1-week long tests undertaken in 2012, focusing on the chemical products of combustion. The results confirm the key chemical products are carbon monoxide (CO), hydrogen cyanide (HCN), hydrogen fluoride (HF) and hydrogen chloride (HCl), whose concentrations depend on the particular material and test conditions. For example, modern aerospace wire insulation produces a significant concentration of HF, which persists in the test chamber longer than anticipated. These compounds are the analytical targets identified for the development of new tunable diode laser based hand-held monitors, to replace the aging electrochemical sensor based devices currently in use on the International Space Station. The hybrid subsystem design could be an attractive approach for future spacecraft to cope with their demands. The idea of combining the conventional Attitude Control System and the Electrical Power System is presented in this article. The Combined Energy and Attitude Control System (CEACS), consisting of a double counter-rotating flywheel assembly, is investigated for small satellites. Another hybrid system, incorporating the conventional Attitude Control System into the Thermal Control System to form the Combined Attitude and Thermal Control System (CATCS), consisting of a "fluid wheel" and permanent magnets, is also investigated for small satellites herein. The governing equations describing both these novel hybrid subsystems are presented and their onboard architectures are numerically tested. Both of the investigated novel hybrid spacecraft subsystems comply with the reference mission requirements.
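A small numerical illustration of the CEACS concept above: in a double counter-rotating flywheel pair, the squared spin rates set the stored energy, while the imbalance between the two signed rates sets the net angular momentum available for attitude control. The inertia and rates below are invented small-satellite-scale values, not figures from the cited study.

```python
# CEACS-style double counter-rotating flywheel, illustrative numbers only:
# energy storage comes from the spin rates; the small residual imbalance
# between the signed rates provides the net momentum for attitude control.

I = 4.0e-3                     # kg m^2, each wheel (small-satellite scale)
w1, w2 = 5200.0, -5000.0       # rad/s, counter-rotating pair (signed)

energy = 0.5 * I * (w1**2 + w2**2)       # stored kinetic energy, J
momentum = I * (w1 + w2)                 # net angular momentum, N m s

print(f"stored energy: {energy/1e3:.1f} kJ")
print(f"net momentum available for attitude control: {momentum:.2f} N m s")
```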
Andraschko, Mark; Antol, Jeffrey; Baize, Rosemary; Horan, Stephen; Neil, Doreen; Rinsland, Pamela; Zaiceva, Rita The 2010 National Space Policy encourages federal agencies to actively explore the use of inventive, nontraditional arrangements for acquiring commercial space goods and services to meet United States Government requirements, including...hosting government capabilities on commercial spacecraft. NASA's Science Mission Directorate has taken an important step towards this goal by adding an option for hosted payload responses to its recent Announcement of Opportunity (AO) for Earth Venture-2 missions. Since NASA selects a significant portion of its science missions through a competitive process, it is useful to understand the implications that this process has on the feasibility of successfully proposing a commercially hosted payload mission. This paper describes some of the impediments associated with proposing a hosted payload mission to NASA, and offers suggestions on how these impediments might be addressed. Commercially hosted payloads provide a novel way to serve the needs of the science and technology demonstration communities at a fraction of the cost of a traditional Geostationary Earth Orbit (GEO) mission. The commercial communications industry launches over 20 satellites to GEO each year. By exercising this repeatable commercial paradigm of privately financed access to space with proven vendors, NASA can achieve science goals at a significantly lower cost than the current dedicated spacecraft and launch vehicle approach affords. Commercial hosting could open up a new realm of opportunities for NASA science missions to make measurements from GEO. This paper also briefly describes two GEO missions recommended by the National Academies of Sciences Earth Science Decadal Survey, the Geostationary Coastal and Air Pollution Events (GEO-CAPE) mission and the Precipitation and All-weather Temperature and Humidity (PATH) mission. Hosted payload missions recently selected for implementation by the Office of the Chief Technologist are also discussed. Finally, there are... Lee, Allan Y.; Wang, Eric K.; Macala, Glenn A. There have been a number of missions with spacecraft flying by planetary moons with atmospheres; there will be future missions with similar flybys. When a spacecraft such as Cassini flies by a moon with an atmosphere, the spacecraft will experience an atmospheric torque. This torque could be used to determine the density of the atmosphere.
This is because the relation between the atmospheric torque vector and the atmospheric density can be established analytically using the mass properties of the spacecraft, the known drag coefficient of objects in free-molecular flow, and the spacecraft velocity relative to the moon. The density estimated in this way could be used to check results measured by science instruments. Since the proposed methodology can estimate disturbance torques as small as 0.02 N-m, it could also be used to estimate the disturbance torque imparted on the spacecraft during high-altitude flybys. Krupnikov, K.K.; Makletsov, A.A.; Mileev, V.N.; Novikov, L.S.; Sinolits, V.V. This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for an estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for a calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the Virtual Reality Modeling Language (VRML). Kraft, L. Alan To ensure the reliability of a 20 kHz, alternating current (AC) power system on spacecraft, it is essential to analyze its behavior under many adverse operating conditions. Some of these conditions include overloads, short circuits, switching surges, and harmonic distortions. Harmonic distortion can become a serious problem. It can cause malfunctions in equipment that the power system is supplying, and, during distortions such as voltage resonance, it can cause equipment and insulation failures due to the extreme peak voltages. To address the harmonic distortion issue, work was begun under the 1990 NASA-ASEE Summer Faculty Fellowship Program. Software, originally developed by EPRI, called HARMFLO, a power flow program capable of analyzing harmonic conditions on three phase, balanced, 60 Hz AC power systems, was modified to analyze single phase, 20 kHz, AC power systems. Since almost all of the equipment used on spacecraft power systems is electrically different from equipment used on terrestrial power systems, it was also necessary to develop mathematical models for the equipment to be used on the spacecraft. The modelling was also started under the same fellowship work period.
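The torque-to-density relation described in the flyby abstract above can be inverted directly once the spacecraft geometry is fixed. The sketch below does so for a single measured torque; the spacecraft numbers are generic stand-ins rather than Cassini values.

```python
# Rough torque-to-density inversion for a low flyby: in free-molecular flow
# the aerodynamic torque scales with dynamic pressure, T = 0.5*rho*v^2*Cd*A*L,
# so a measured disturbance torque yields the atmospheric density.
# All spacecraft parameters below are assumed placeholder values.

def density_from_torque(torque, v_rel, cd, area, lever_arm):
    """Invert T = 0.5 * rho * v^2 * Cd * A * L for the density rho (kg/m^3)."""
    return 2.0 * torque / (v_rel**2 * cd * area * lever_arm)

T = 0.15          # N m, estimated aerodynamic torque during the flyby
v = 6000.0        # m/s, spacecraft speed relative to the moon's atmosphere
Cd = 2.2          # free-molecular drag coefficient (assumed)
A = 20.0          # m^2 projected area (assumed)
L = 1.5           # m, offset of center of pressure from center of mass

print(f"inferred density: {density_from_torque(T, v, Cd, A, L):.2e} kg/m^3")
```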
Details of the modifications and models completed during the 1990 NASA-ASEE Summer Faculty Fellowship Program can be found in a project report. As a continuation of the work to develop a complete package necessary for the full analysis of spacecraft AC power system behavior, development work has continued through NASA Grant NAG3-1254. This report details the work covered by the above-mentioned grant. Farrell, W. M.; Hurley, D. M.; Poston, M. J.; Zimmerman, M. I.; Orlando, T. M.; Hibbitts, C. A.; Killen, R. M. NASA's asteroid redirect mission (ARM) will feature an encounter of the human-occupied Orion spacecraft with a portion of a near-Earth asteroid (NEA) previously placed in orbit about the Moon by a capture spacecraft. Applying a shuttle analog, we suggest that the Orion spacecraft should have a dominant local water exosphere, and that molecules from this exosphere can adsorb onto the NEA. The amount of adsorbed water is a function of the defect content of the NEA surface, with retention of shuttle-like water levels on the asteroid at 10^15 H2O molecules/m^2 for space-weathered regolith at T approximately 300 K. Didion, Jeffrey R. Thermal Fluids and Analysis Workshop, Silver Spring MD NCTS 21070-15. NASA, the Defense Department and commercial interests are actively engaged in developing miniaturized spacecraft systems and scientific instruments to leverage smaller, cheaper spacecraft form factors such as CubeSats. This paper outlines research and development efforts among Goddard Space Flight Center personnel and its several partners to develop innovative embedded thermal control subsystems. Embedded thermal control is a cross-cutting enabling technology integrating advanced manufacturing techniques to develop multifunctional intelligent structures that reduce the Size, Weight and Power (SWaP) consumption of both the thermal control subsystem and the overall spacecraft. Embedded thermal control subsystems permit heat acquisition and rejection at higher temperatures than state-of-the-art systems by employing both advanced heat transfer equipment (integrated heat exchangers) and high heat transfer phenomena. The Goddard Space Flight Center Thermal Engineering Branch has active investigations seeking to characterize advanced thermal control systems for near-term spacecraft missions. The embedded thermal control subsystem development effort consists of fundamental research as well as development of breadboard and prototype hardware and spaceflight validation efforts. This paper will outline relevant fundamental investigations of micro-scale heat transfer and electrically driven liquid film boiling. The hardware development efforts focus upon silicon-based high heat flux applications (electronic chips, power electronics etc.) and multifunctional structures. Flight validation efforts include variable gravity campaigns and a proposed CubeSat-based flight demonstration of a breadboard embedded thermal control system. The CubeSat investigation is a technology demonstration that will characterize, in long-term low Earth orbit, a breadboard embedded thermal subsystem and its individual components to develop... Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A. An accidental fire inside a spacecraft is an unlikely, but very real, emergency situation that can easily have dire consequences.
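The harmonic-distortion bookkeeping that motivates a tool like HARMFLO, discussed above, can be demonstrated on a synthetic waveform: build a distorted 20 kHz bus voltage, pick the harmonic amplitudes out of an FFT, and form the total harmonic distortion (THD). The harmonic content below is invented for the demonstration.

```python
import numpy as np

# Synthesize a distorted 20 kHz AC bus waveform, extract harmonic amplitudes
# with an FFT, and compute THD. Harmonic amplitudes are invented.

f0 = 20_000.0                       # fundamental, Hz (20 kHz AC bus)
fs = 2_000_000.0                    # sample rate, Hz
t = np.arange(0, 0.005, 1.0 / fs)   # 5 ms window = 100 fundamental cycles

# fundamental plus invented 3rd and 5th harmonics (e.g., converter loading)
v = (np.sin(2*np.pi*f0*t)
     + 0.08*np.sin(2*np.pi*3*f0*t)
     + 0.03*np.sin(2*np.pi*5*f0*t))

spec = np.abs(np.fft.rfft(v)) / (len(v) / 2)   # single-sided amplitude
freqs = np.fft.rfftfreq(len(v), 1.0 / fs)

def amp(harmonic):
    """Amplitude at an integer multiple of the fundamental."""
    return spec[np.argmin(np.abs(freqs - harmonic * f0))]

harmonics = [amp(n) for n in range(2, 10)]
thd = np.sqrt(sum(a*a for a in harmonics)) / amp(1)
print(f"THD = {100*thd:.2f} %")     # ~8.5 % for the amplitudes above
```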
While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the heat and mass transfer as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV). This work was accepted for publication by the American Institute of Aeronautics and Astronautics (AIAA) Journal of Spacecraft and Rockets in July 2014... publication in the AIAA Journal of Spacecraft and Rockets. Chapter 5 introduces an impulsive maneuvering strategy to deliver a spacecraft to its final... upon arrival r2 and v2, respectively. The variable T2 determines the time of flight needed to make the maneuver, and the variable θ2 determines the... Sternovsky, Zoltan; Collette, Andrew; Malaspina, David M.; Thayer, Frederick Electric field and plasma wave instruments act as dust detectors, picking up voltage pulses induced by impacts of particulates on the spacecraft body. These signals enable the characterization of cosmic dust environments even on missions without dedicated dust instruments. For example, the Voyager 1 and 2 spacecraft performed the first detection of dust particles near Uranus, Neptune, and in the outer solar system [Gurnett et al., 1987, 1991, 1997]. The two STEREO spacecraft observed distinct signals at a high rate that were interpreted as nano-sized particles originating from near the Sun and accelerated to high velocities by the solar wind [Meyer-Vernet et al., 2009a; Zaslavsky et al., 2012]. The MAVEN spacecraft is using its onboard antennas to characterize the dust environment of Mars [Andersson et al., 2014] and Solar Probe Plus will do the same in the inner heliosphere. The challenge, however, is the correct interpretation of the impact signals and calculating the mass of the dust particles.
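One common way the mass calculation just mentioned is approached in practice is through an empirical impact-charge scaling of the form Q = k·m·v^3.5, with the antenna registering V = Q/C. The 3.5 exponent is a frequently quoted value for impact ionization, but the coefficient is strongly material-dependent, so both the k and the capacitance in the sketch below are placeholders that would in reality come from accelerator calibration of the kind described.

```python
# Sketch of turning an impact-generated voltage pulse into a dust mass via
# the empirical impact-charge scaling Q = k * m * v**3.5 and V = Q/C.
# Both k and C below are placeholders; real values require calibration.

k = 0.7            # C per (kg * (km/s)**3.5) -- material-dependent placeholder
C = 60e-12         # F, effective spacecraft/antenna capacitance (assumed)

def mass_from_pulse(v_pulse, speed_km_s):
    """Invert V = k*m*v^3.5 / C for the impactor mass m (kg)."""
    return v_pulse * C / (k * speed_km_s**3.5)

# example: a 5 mV pulse from a grain arriving at 20 km/s
m = mass_from_pulse(5e-3, 20.0)
print(f"inferred mass: {m:.2e} kg")   # ~1e-17 kg, i.e., nano-dust scale
```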
Mcguire, Melissa L.; Hack, Kurt J.; Manzella, David H.; Herman, Daniel A.

Multiple Solar Electric Propulsion Technology Demonstration Mission concepts were developed to assess vehicle performance and estimated mission cost. Concepts ranged from a 10,000-kilogram spacecraft capable of delivering 4000 kilograms of payload to one of the Earth-Moon Lagrange points in support of future human-crewed outposts to a 180-kilogram spacecraft capable of performing an asteroid rendezvous mission after launch to a geostationary transfer orbit as a secondary payload. Low-cost and maximum Delta-V capability variants of a spacecraft concept based on utilizing a secondary payload adapter as the primary bus structure were developed, as were concepts designed to be co-manifested with another spacecraft on a single launch vehicle. Each of the Solar Electric Propulsion Technology Demonstration Mission concepts developed included an estimated spacecraft cost. These data suggest estimated spacecraft costs of $200 million to $300 million, independent of launch vehicle costs, if 30-kilowatt-class solar arrays and the corresponding electric propulsion system currently under development are used as the basis for sizing the mission concept. The most affordable mission concept developed, based on subscale variants of the advanced solar arrays and electric propulsion technology currently under development by the NASA Space Technology Mission Directorate, has an estimated cost of $50 million and could provide a Delta-V capability comparable to much larger spacecraft concepts.

Neeley, James R.; Jones, James V.; Watson, Michael D.; Bramon, Christopher J.; Inman, Sharon K.; Tuttle, Loraine

The Space Launch System (SLS) is the new NASA heavy-lift launch vehicle and is scheduled for its first mission in 2017. The goal of the first mission, which will be uncrewed, is to demonstrate the integrated system performance of the SLS rocket and spacecraft before a crewed flight in 2021. SLS has many of the same logistics challenges as any other large-scale program. Common logistics concerns for SLS include integration of discrete, geographically separated programs, multiple prime contractors with distinct and different goals, schedule pressures and funding constraints. However, SLS also faces unique challenges. The new program is a confluence of new and heritage hardware, with heritage hardware constituting seventy-five percent of the program. This unique approach to design makes logistics concerns such as commonality especially problematic.
Additionally, a very low manifest rate of one flight every four years makes logistics comparatively expensive. That, along with the SLS architecture being developed using a block-upgrade evolutionary approach, exacerbates long-range planning for supportability considerations. These common and unique logistics challenges must be clearly identified and tackled for the SLS program to be successful. This paper will address the common and unique challenges facing the SLS programs, along with the analysis and decisions the NASA logistics engineers are making to mitigate the threats posed by each.

Hamer, P. A.; Snowden, P. J.

The baseline Ulysses spacecraft control and monitoring system (SCMS) concepts and the converted SCMS, residing on DEC/VAX 8350 hardware, are considered. The main functions of the system include monitoring and displaying spacecraft telemetry, preparing spacecraft commands, producing hard copies of experimental data, and archiving spacecraft telemetry. The SCMS comprises over 20 subsystems ranging from low-level utility routines to the major monitoring and control software. These in total consist of approximately 55,000 lines of FORTRAN source code and 100 VMS command files. The SCMS major software facilities are described, including database files, telemetry processing, telecommanding, archiving of data, and display of telemetry.

National Aeronautics and Space Administration — Saber Astronautics proposes spacecraft subsystem control software which can autonomously reconfigure avionics for best performance during various mission conditions....

... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice 13-071] Privacy Act of 1974; Privacy Act System of Records AGENCY: National Aeronautics and Space Administration (NASA). ACTION: Notice of Privacy Act system of records. SUMMARY: Each Federal agency is required by the Privacy Act of 1974 to publish...

National Aeronautics and Space Administration — Spacecraft for NASA, DoD and commercial missions need higher power than ever before, with lower mass, compact stowage, and lower cost. While high efficiency,...

This viewgraph presentation is a review of the career paths for chemical engineers at NASA (specifically NASA Johnson Space Center). The author uses his personal experience and history as an example of the possible career options.

The NASA Strategic Plan is a living document. It provides far-reaching goals and objectives to create stability for NASA's efforts. The Plan presents NASA's top-level strategy: it articulates what NASA does and for whom; it differentiates between ends and means; it states where NASA is going and what NASA intends to do to get there. This Plan is not a budget document, nor does it present priorities for current or future programs. Rather, it establishes a framework for shaping NASA's activities and developing a balanced set of priorities across the Agency. Such priorities will then be reflected in the NASA budget. The document includes vision, mission, and goals; external environment; conceptual framework; strategic enterprises (Mission to Planet Earth, aeronautics, human exploration and development of space, scientific research, space technology, and synergy); strategic functions (transportation to space, space communications, human resources, and physical resources); values and operating principles; implementing strategy; and senior management team concurrence.
Federal Laboratory Consortium — The NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory is a NASA-funded facility, delivering heavy ion beams to a target area where scientists...

Shishko, Robert; Aster, Robert; Chamberlain, Robert G.; McDuffee, Patrick; Pieniazek, Les; Rowell, Tom; Bain, Beth; Cox, Renee I.; Mooz, Harold; Polaski, Lou

This handbook brings the fundamental concepts and techniques of systems engineering to NASA personnel in a way that recognizes the nature of NASA systems and environment. It is intended to accompany formal NASA training courses on systems engineering and project management when appropriate, and is designed to be a top-level overview. The concepts were drawn from NASA field center handbooks, NMIs/NHBs, the work of the NASA-wide Systems Engineering Working Group and the Systems Engineering Process Improvement Task team, several non-NASA textbooks and guides, and material from independent systems engineering courses taught to NASA personnel. Five core chapters cover systems engineering fundamentals, the NASA Project Cycle, management issues in systems engineering, systems analysis and modeling, and specialty engineering integration. It is not intended as a directive.

Castro, V.A.; Ott, C.M.; Garcia, V.M.; John, J.; Buttner, M.P.; Cruz, P.; Pierson, D.L.

The determination of risk from infectious disease during long-duration missions is composed of several factors, including the concentration and the characteristics of the infectious agent. Thus, a thorough knowledge of the microorganisms aboard spacecraft is essential in mitigating infectious disease risk to the crew. While stringent steps are taken to minimize the transfer of potential pathogens to spacecraft, several medically significant organisms have been isolated from both the Mir and the International Space Station (ISS). Historically, the method for isolation and identification of microorganisms from spacecraft environmental samples depended upon their growth on culture media. Unfortunately, only a fraction of the organisms may grow on a culture medium, potentially omitting those microorganisms whose nutritional and physical requirements for growth are not met. Thus, several pathogens may not have been detected, such as Legionella pneumophila, the etiological agent of Legionnaires' disease. We hypothesize that environmental analysis using non-culture-based technologies will reveal microorganisms, allergens, and microbial toxins not previously reported in spacecraft, allowing for a more complete health assessment. The development of techniques for this flight experiment, operationally named SWAB, has already provided advances in NASA laboratory processes and beneficial information toward human health risk assessment. The translation of 16S ribosomal DNA sequencing for the identification of bacteria from the SWAB experiment to nominal operations has increased bacterial speciation of environmental isolates from previous flights threefold compared to conventional methodology. The incorporation of molecular-based DNA fingerprinting using repetitive sequence-based polymerase chain reaction (rep-PCR) into the capabilities of the laboratory has provided a methodology to track microorganisms between crewmembers and their environment. Both 16S ribosomal DNA

Aerospace projects have traditionally employed federated avionics architectures, in which each computer system is designed to perform one specific function (e.g. navigation).
There are obvious downsides to this approach, including excessive weight (from so much computing hardware) and inefficient processor utilization (since modern processors are capable of performing multiple tasks). There has therefore been a push for integrated modular avionics (IMA), in which common computing platforms can be leveraged for different purposes. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. However, the application of IMA principles introduces significant challenges, as the data network must accommodate traffic of mixed criticality and performance levels - potentially all related to the same shared computer hardware. Because individual network technologies rarely satisfy all of these requirements, the development of a single, truly integrated network architecture often proves impractical. Instead, several different types of networks are utilized - each suited to support a specific vehicle function. Critical functions are typically driven by precise timing loops, requiring networks with strict guarantees regarding message latency (i.e. determinism) and fault tolerance. Alternatively, non-critical systems generally employ data networks prioritizing flexibility and high performance over reliable operation. Switched Ethernet has seen widespread success filling this role in terrestrial applications. Its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components make it desirable for inclusion in spacecraft platforms. Basic Ethernet configurations have been incorporated into several preexisting aerospace projects, including both the Space Shuttle and the International Space Station (ISS). However, classical switched Ethernet cannot provide the high level of network

Norcross, Scott; Grieser, William H.

This paper describes a product called the Intelligent Mission Toolkit (IMT), which was created to meet the changing demands of the spacecraft command and control market. IMT is a command and control system built upon an expert system. Its primary functions are to send commands to the spacecraft and process telemetry data received from the spacecraft. It also controls the ground equipment used to support the system, such as encryption gear and telemetry front-end equipment. Add-on modules allow IMT to control antennas and antenna interface equipment. The design philosophy for IMT is to utilize available commercial products wherever possible. IMT utilizes Gensym's G2 Real-time Expert System as the core of the system. G2 is responsible for overall system control, spacecraft commanding control, and spacecraft telemetry analysis and display. Other commercial products incorporated into IMT include the SYBASE relational database management system and Loral Test and Integration Systems' System 500 for telemetry front-end processing.

Learn about the Privacy Act of 1974, the Electronic Government Act of 2002, the Federal Information Security Management Act, and other information about how the Environmental Protection Agency maintains its records.

Bryan, C. G.; Williams, B. G.; Williams, K. E.; Taylor, A. H.; Carranza, E.; Page, B. R.; Stanbridge, D. R.; Mazarico, E.; Neumann, G. A.; O'Shaughnessy, D. J.; McAdams, J. V.; Calloway, A. B.
The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft orbited the planet Mercury from March 2011 until the end of April 2015, when it impacted the planetary surface after propellant reserves used to maintain the orbit were depleted. This highly successful mission was led by the principal investigator, Sean C. Solomon, of Columbia University. The Johns Hopkins University Applied Physics Laboratory (JHU/APL) designed and assembled the spacecraft and served as the home for spacecraft operations. Spacecraft navigation for the entirety of the mission was provided by the Space Navigation and Flight Dynamics Practice (SNAFD) of KinetX Aerospace. Orbit determination (OD) solutions were generated through processing of radiometric tracking data provided by NASA's Deep Space Network (DSN) using the MIRAGE suite of orbital analysis tools. The MESSENGER orbit was highly eccentric, with periapsis at a high northern latitude and periapsis altitude in the range 200-500 km for most of the orbital mission phase. In a low-altitude "hover campaign" during the final two months of the mission, periapsis altitudes were maintained within a narrow range between about 5 km and 35 km. Navigating a spacecraft so near a planetary surface presented special challenges. Tasks required to meet those challenges included the modeling and estimation of Mercury's gravity field and of solar and planetary radiation pressure, and the design of frequent orbit-correction maneuvers. Superior solar conjunction also presented observational modeling issues. One key to the overall success of the low-altitude hover campaign was a strategy to utilize data from an onboard laser altimeter as a cross-check on the navigation team's reconstructed and predicted estimates of periapsis altitude. Data obtained from the Mercury Laser Altimeter (MLA) on a daily basis provided near-real-time feedback that proved invaluable in evaluating alternative orbit estimation strategies, and

McAfee, Julie; Culver, George; Naderi, Mahmoud

NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs) that correlate historical costs with mission characteristics to predict new project costs. It is based on historical NASA and Air Force space projects. It is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels and estimates development and production costs. NAFCOM is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a government version that is restricted and a contractor-releasable version.
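For readers unfamiliar with parametric estimating, the sketch below shows the power-law form that weight-based CERs in tools of this kind typically take; the coefficients here are hypothetical placeholders, whereas real CERs are regression fits to the historical project data the abstract mentions, developed per subsystem.

```python
# Illustrative weight-based cost estimating relationship (CER) of the
# common power-law form: cost = a * mass**b, with b < 1 capturing the
# usual economy of scale. Coefficients are hypothetical, not NAFCOM's.

def cer_cost_musd(dry_mass_kg: float, a: float = 1.2, b: float = 0.65) -> float:
    """Estimated development cost in $M for a subsystem of given dry mass."""
    return a * dry_mass_kg**b

if __name__ == "__main__":
    for mass in (50.0, 200.0, 1000.0):
        print(f"{mass:7.0f} kg -> ${cer_cost_musd(mass):6.1f}M (illustrative)")
```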
Topics covered include: Calibration Test Set for a Phase-Comparison Digital Tracker; Wireless Acoustic Measurement System; Spiral Orbit Tribometer; Arrays of Miniature Microphones for Aeroacoustic Testing; Predicting Rocket or Jet Noise in Real Time; Computational Workbench for Multibody Dynamics; High-Power, High-Efficiency Ka-Band Space Traveling-Wave Tube; Gratings and Random Reflectors for Near-Infrared PIN Diodes; Optically Transparent Split-Ring Antennas for 1 to 10 GHz; Ice-Penetrating Robot for Scientific Exploration; Power-Amplifier Module for 145 to 165 GHz; Aerial Videography From Locally Launched Rockets; SiC Multi-Chip Power Modules as Power-System Building Blocks; Automated Design of Restraint Layer of an Inflatable Vessel; TMS for Instantiating a Knowledge Base With Incomplete Data; Simulating Flights of Future Launch Vehicles and Spacecraft; Control Code for Bearingless Switched- Reluctance Motor; Machine Aided Indexing and the NASA Thesaurus; Arbitrating Control of Control and Display Units; Web-Based Software for Managing Research; Driver Code for Adaptive Optics; Ceramic Paste for Patching High-Temperature Insulation; Fabrication of Polyimide-Matrix/Carbon and Boron-Fiber Tape; Protective Skins for Aerogel Monoliths; Code Assesses Risks Posed by Meteoroids and Orbital Debris; Asymmetric Bulkheads for Cylindrical Pressure Vessels; Self-Regulating Water-Separator System for Fuel Cells; Self-Advancing Step-Tap Drills; Array of Bolometers for Submillimeter- Wavelength Operation; Delta-Doped CCDs as Detector Arrays in Mass Spectrometers; Arrays of Bundles of Carbon Nanotubes as Field Emitters; Staggering Inflation To Stabilize Attitude of a Solar Sail; and Bare Conductive Tether for Decelerating a Spacecraft. The NASA Lewis Research Center's Advanced Communication Technology Satellite (ACTS) was launched in September 1993. ACTS introduced several new technologies, including a multibeam antenna (MBA) operating at extremely short wavelengths never before used in communications. This antenna, which has both fixed and rapidly reconfigurable high-energy spot beams (150 miles in diameter), serves users equipped with small antenna terminals. Extensive structural and thermal analyses have been performed for simulating the ACTS MBA on-orbit performance. The results show that the reflector surfaces (mainly the front subreflector), antenna support assembly, and metallic surfaces on the spacecraft body will be distorted because of the thermal effects of varying solar heating, which degrade the ACTS MBA performance. Since ACTS was launched, a number of evaluations have been performed to assess MBA performance in the space environment. For example, the on-orbit performance measurements found systematic environmental disturbances to the MBA beam pointing. These disturbances were found to be imposed by the attitude control system, antenna and spacecraft mechanical alignments, and on-orbit thermal effects. As a result, the MBA may not always exactly cover the intended service area. In addition, the on-orbit measurements showed that antenna pointing accuracy is the performance parameter most sensitive to thermal distortions on the front subreflector surface and antenna support assemblies. Several compensation approaches were tested and evaluated to restore on-orbit pointing stability. A combination of autotrack (75 percent of the time) and Earth sensor control (25 percent of the time) was found to be the best way to compensate for antenna pointing error during orbit. 
This approach greatly minimizes the effects of thermal distortions on antenna beam pointing.

Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakpczy, J. M.

To realize design concepts, predict dynamic behavior, and develop appropriate control strategies for high-performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents the dynamic behavior of spacecraft of various sizes. Since motion of the vehicle is dominated by the retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted for the 30-meter, lightweight, retractable lattice boom at NASA MSFC, which is structurally and dynamically similar to those of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.

Simons, Rainee N.

NASA's plan to launch several spacecraft into low Earth orbit (LEO) to support science missions in the next ten years and beyond requires downlink throughput on the order of several terabits per day. The ability to handle such a large volume of data far exceeds the capabilities of current systems. This paper proposes two solutions: first, a high-data-rate link between the LEO spacecraft and ground via relay satellites in geostationary orbit (GEO); second, a high-data-rate direct-to-ground link from LEO. Next, the paper presents results from computer simulations carried out for both types of links, taking into consideration spacecraft transmitter frequency, EIRP, and waveform; elevation-angle-dependent path loss through Earth's atmosphere; and ground station receiver G/T.
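The core arithmetic behind simulations of this kind is the standard link budget; the sketch below combines EIRP, receiver G/T, and free-space path loss into a carrier-to-noise-density ratio. The specific frequency, EIRP, G/T, and slant-range numbers are illustrative assumptions, not values from the Simons study.

```python
# Minimal downlink budget sketch: C/N0 = EIRP + G/T - FSPL - L_atm - k,
# all in decibel units. Input values are illustrative placeholders.
import math

K_DB = -228.6   # Boltzmann's constant, dBW/(K*Hz)
C = 2.998e8     # speed of light, m/s

def fspl_db(range_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    return 20.0 * math.log10(4.0 * math.pi * range_m * freq_hz / C)

def cn0_dbhz(eirp_dbw: float, gt_dbk: float, range_m: float,
             freq_hz: float, atm_loss_db: float = 0.5) -> float:
    """Carrier-to-noise-density ratio C/N0 in dB-Hz."""
    return eirp_dbw + gt_dbk - fspl_db(range_m, freq_hz) - atm_loss_db - K_DB

# Hypothetical Ka-band direct-to-ground pass at 1500 km slant range:
cn0 = cn0_dbhz(eirp_dbw=40.0, gt_dbk=35.0, range_m=1.5e6, freq_hz=26e9)
print(f"C/N0 = {cn0:.1f} dB-Hz")  # ~119 dB-Hz with these inputs
```

Elevation dependence enters through the slant range and the atmospheric loss term, which is why the abstract calls out elevation-angle-dependent path loss explicitly.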
Bates, David M.

NASA's Cassini spacecraft, launched on October 15, 1997, and arrived at Saturn on June 30, 2004, is the largest and most ambitious interplanetary spacecraft in history. In order to meet the challenging attitude control and navigation requirements of the orbit profile at Saturn, Cassini is equipped with a monopropellant-thruster-based Reaction Control System (RCS), a bipropellant Main Engine Assembly (MEA) and a Reaction Wheel Assembly (RWA). In 2008, after 11 years of reliable service, several RCS thrusters began to show signs of end-of-life degradation, which led the operations team to successfully perform the swap to the backup RCS system, the details and challenges of which are described in this paper. With some modifications, it is hoped that similar techniques and design strategies could be used to benefit other spacecraft.

Burnside, Christopher; Trinh, Huu; Pedersen, Kevin

The Robotic Lunar Lander Development (RLLD) Project Office at NASA Marshall Space Flight Center (MSFC) has studied several lunar surface science mission concepts. These missions focus on spacecraft carrying multiple science instruments and power systems that will allow extended operations on the lunar surface. Initial trade studies of launch vehicle options for these mission concepts indicate that the spacecraft design will be significantly mass-constrained. To minimize mass and facilitate efficient packaging, the notional propulsion system for these landers has as a baseline an ultra-high-pressure (10,000 psig) helium pressurization system that has been used on defense missiles. The qualified regulator is capable of short-duration use; however, the hardware has not previously been tested to NASA spacecraft requirements for longer-duration use. Hence, technical risks exist in using this missile-based propulsion component for spacecraft applications. A 10,000-psig helium pressure regulator test activity is being carried out as part of risk-reduction testing for the MSFC RLLD project. The goal of the test activity is to assess the feasibility of a commercial off-the-shelf ultra-high-pressure regulator by testing with a representative flight mission profile. Slam-start, gas blowdown, water expulsion, lock-up, and leak tests are also performed on the regulator to assess performance under various operating conditions. The preliminary test results indicated that the regulator can regulate helium to a stable outlet pressure of 740 psig within the +/-5% tolerance band and maintain a lock-up pressure less than +5% for all tests conducted. Numerous leak tests demonstrated leakage less than 10^-3 standard cubic centimeters per second (SCCS) for internal seat leakage at lock-up and less than 10^-5 SCCS for external leakage through the regulator ambient reference cavity. The successful tests have shown the potential for 10,000-psig helium systems in NASA spacecraft and have reduced risk

Ramapriyan, H. K.

NASA's Earth Observing System Data and Information System (EOSDIS) has been in operation since August 1994, managing most of NASA's Earth science data from satellites, airborne sensors, field campaigns and other activities. Having been designated by the Federal Government as a project responsible for production, archiving and distribution of these data through its Distributed Active Archive Centers (DAACs), the Earth Science Data and Information System (ESDIS) Project is responsible for EOSDIS, and is legally bound by the Office of Management and Budget's Circular A-130 and the Federal Records Act. It must follow the regulations of the National Institute of Standards and Technology (NIST) and the National Archives and Records Administration (NARA). It must also follow NASA Procedural Requirement 7120.5 (NASA Space Flight Program and Project Management). All these ensure that the data centers managed by ESDIS are trustworthy from the point of view of efficient and effective operations as well as preservation of valuable data from NASA's missions. Additional factors contributing to this trust are an extensive set of internal and external reviews throughout the history of EOSDIS starting in the early 1990s. Many of these reviews have involved external groups of scientific and technological experts. Also, independent annual surveys of user satisfaction that measure and publish the American Customer Satisfaction Index (ACSI), where EOSDIS has scored consistently high marks since 2004, provide an additional measure of trustworthiness. In addition, through an effort initiated in 2012 at the request of NASA HQ, the ESDIS Project and 10 of 12 DAACs have been certified by the International Council for Science (ICSU) World Data System (WDS) and are members of the ICSU-WDS. This presentation addresses questions such as pros and cons of the certification process, key outcomes and next steps regarding certification.
Recently, the ICSU-WDS and Data Seal of Approval (DSA) organizations

Cowardin, H.; Lederer, S.; Stansbery, G.; Seitzer, P.; Buckalew, B.; Abercromby, K.; Barker, E.

...modified Ritchey-Chrétien configuration on a double-horseshoe equatorial mount to allow tracking objects at LEO rates through the dome's keyhole at zenith. Through the data collection techniques employed at these unique facilities, NASA's ODPO has developed a multi-faceted approach to characterize the orbital debris risk to satellites at various altitudes and provide material characterization of debris via photometric and spectroscopic measurements. Ultimately, the data are used in conjunction with in-situ and radar measurements to provide accurate data for models of our space environment and service spacecraft risk assessment.

Coan, Mary R.; Hirshorn, Steven R.; Moreland, Robert

The NASA Protoflight Research Initiative is an internal NASA study conducted within the Office of the Chief Engineer to better understand the use of Protoflight within NASA. Extensive literature reviews and interviews with key NASA members with experience in both robotic and human spaceflight missions have resulted in three main conclusions and two observations. The first conclusion is that NASA's Protoflight method is not considered to be "prescriptive." The current policies and guidance allow each Program/Project to tailor the Protoflight approach to better meet their needs, goals and objectives. Second, Risk Management plays a key role in implementation of the Protoflight approach. Any deviations from full qualification will be based on the level of acceptable risk, with guidance found in NPR 8705.4. Finally, over the past decade (2004-2014) only 6% of NASA's Protoflight missions and 6% of NASA's full-qualification missions experienced a publicly disclosed mission failure. In other words, the data indicate that the Protoflight approach, in and of itself, does not increase the mission risk of in-flight failure. The first observation is that it would be beneficial to document the decision-making process on the implementation and use of Protoflight. The second observation is that if a Program/Project chooses to use the Protoflight approach with relevant heritage, it is extremely important that the Program/Project Manager ensure that the current project's requirements fall within the heritage design, component, instrument and/or subsystem's requirements for both the planned and operational use, and that the documentation of the relevant heritage is comprehensive and sufficient and the decision well documented. To further benefit/inform this study, a recommendation was made to perform a deep dive into 30 missions with accessible data on their testing/verification methodology and decision process to research the differences between Protoflight and Full Qualification.

Werka, Robert O.; Clark, Rodney; Sheldon, Rob; Percy, Thomas K.

The NASA Office of Chief Technologist has funded from FY11 through FY14 successive studies of the physics, design, and spacecraft integration of a Fission Fragment Rocket Engine (FFRE) that directly converts the momentum of fission fragments continuously into spacecraft momentum at a theoretical specific impulse above one million seconds. While others have promised future propulsion advances if only you have the patience, the FFRE requires no waiting, no advances in physics and no advances in manufacturing processes. Such an engine unequivocally can create a new era of space exploration that can change spacecraft operation. The NIAC (NASA Institute for Advanced Concepts) Program Phase 1 study of FY11 first investigated how the revolutionary FFRE technology could be integrated into an advanced spacecraft. The FFRE combines existent technologies of low-density fissioning dust trapped electrostatically and high-field-strength superconducting magnets for beam management. By organizing the nuclear core material to permit sufficient mean free path for escape of the fission fragments and by collimating the beam, this study showed the FFRE could convert nuclear power to thrust directly and efficiently at a delivered specific impulse of 527,000 seconds. The FY13 study showed that, without increasing the reactor power, adding a neutral gas to the fission fragment beam significantly increased the FFRE thrust in a manner analogous to a jet engine afterburner. This frictional interaction of gas and beam resulted in an engine that continuously produced 1000 pounds-force of thrust at a delivered impulse of 32,000 seconds, thereby reducing the currently studied DRM 5 round-trip mission to Mars from 3 years to 260 days. By decreasing the gas addition, this same engine can be tailored for much lower thrust at much higher impulse to match missions to more distant destinations. These studies created host spacecraft concepts configured for manned round-trip journeys. While the
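A quick check of what the quoted afterburning-mode performance implies can be made with the ideal rocket equation; the spacecraft masses below are hypothetical placeholders, while the 32,000 s specific impulse is the figure quoted in the abstract.

```python
# Ideal rocket equation, delta_v = Isp * g0 * ln(m0 / mf), applied to
# the quoted FFRE afterburning-mode Isp. Masses are hypothetical.
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Ideal delta-v in m/s for initial mass m0 and final mass mf."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Spending just 10% of a 100 t vehicle's mass as propellant at Isp = 32,000 s:
dv = delta_v(32_000.0, m0_kg=100_000.0, mf_kg=90_000.0)
print(f"delta-v = {dv / 1000:.0f} km/s")  # ~33 km/s
```

Tens of km/s from a 10% propellant fraction is far beyond what chemical stages can deliver, which is the quantitative basis for the fast-Mars-transit claim above.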
Pleil, Joachim D; Hansel, Armin

Foreword: The International Association of Breath Research (IABR) meetings are an eclectic gathering of researchers in the medical, environmental and instrumentation fields; our focus is on human health as assessed by the measurement and interpretation of trace chemicals in human exhaled breath. What may have escaped our notice is a complementary field of research that explores the creation and maintenance of artificial atmospheres practised by the submarine air monitoring and air purification (SAMAP) community. SAMAP comprises manufacturers, researchers and medical professionals dealing with the engineering and instrumentation to support human life in submarines and spacecraft (including shuttlecraft and manned rockets, high-altitude aircraft, and the International Space Station (ISS)). Here, the immediate concerns are short-term survival and long-term health in fairly confined environments where one cannot simply 'open the window' for fresh air. As such, one of the main concerns is air monitoring, and the main sources of contamination are CO2 and other constituents of human exhaled breath. Since the inaugural meeting in 1994 in Adelaide, Australia, SAMAP meetings have been held every two or three years, alternating between the North American and European continents. The meetings are organized by Dr Wally Mazurek (a member of IABR) of the Defence Science and Technology Organisation (DSTO) of Australia, and individual meetings are co-hosted by the navies of the countries in which they are held. An overriding focus at SAMAP is life support (oxygen availability and carbon dioxide removal). Certainly, other air constituents are also important; for example, the closed environment of a submarine or the ISS can build up contaminants from consumer products, cooking, refrigeration, accidental fires, propulsion and atmosphere maintenance. However, the most immediate concern is sustaining human metabolism: removing exhaled CO2 and replacing metabolized O2. Another

Linford, R. M. F.
A detector sensitive to only the ultraviolet radiation emitted by flames has been selected as the basic element of the NASA Skylab fire detection system. It is sensitive to approximately 10^-12 W of radiation and will detect small flames at distances in excess of 3 m. The performance of the detector was verified by experiments in an aircraft flying zero-gravity parabolas to simulate the characteristics of a fire which the detector must sense. Extensive investigation and exacting design were necessary to exclude all possible sources of false alarms. Optical measurements were made on all the spacecraft windows to determine the amount of solar radiation transmitted. The lighting systems and the onboard experiments also were appraised for ultraviolet emissions. Proton-accelerator tests were performed to determine the interaction of the Earth's trapped radiation belts with the detectors, and the design of the instrument was modified to negate these effects.

...(for deep space missions), also needs to orient its solar arrays toward the sun, none of which can be accomplished without the ability to control the... Spacecraft Thermal Control Handbook: Cryogenics. El Segundo, CA: The Aerospace Press. ESA and NASA. 2015. "Solar and Heliospheric Observatory Home Page"... Incorporating inexpensive low-impact targeted surface charging

Menietti, J. D.; Santolík, Ondřej; Abaci, P. C.

Vol. 57, No. 12 (2009), pp. 1412-1418. ISSN 0032-0633. R&D Projects: GA AV ČR IAA301120601. Grants - others: NSF(US) ATM-04-43531; NASA(US) NNG05GM52G; GA MŠk(CZ) ME 842. Institutional research plan: CEZ:AV0Z30420517. Keywords: chorus * mid-altitude cusp * Polar spacecraft. Subject RIV: BL - Plasma and Gas Discharge Physics. Impact factor: 2.067, year: 2009.

Farley, Rodger; Ngo, Son

The X-ray Timing Explorer (XTE) spacecraft is a NASA science low-Earth-orbit Explorer-class satellite to be launched in 1995, and is an in-house Goddard Space Flight Center (GSFC) project. It has two deployable aluminum honeycomb solar array wings, with each wing being articulated by a single-axis solar array drive assembly. This paper will address the design, the qualification testing, and the development problems, as they surfaced, of the Solar Array Deployment and Drive System.

Baggerman, Clint; McCabe, Mary; Verma, Dinesh

It has been 30 years since the National Aeronautics and Space Administration (NASA) last developed a crewed spacecraft capable of launch, on-orbit operations, and landing. During that time, aerospace avionics technologies have greatly advanced in capability, and these technologies have enabled integrated avionics architectures for aerospace applications. The inception of NASA's Orion Crew Exploration Vehicle (CEV) spacecraft offers the opportunity to leverage the latest integrated avionics technologies into crewed space vehicle architecture. The outstanding question is to what extent to implement these advances in avionics while still meeting the unique crewed spaceflight requirements for safety, reliability and maintainability. Historically, aircraft and spacecraft have very similar avionics requirements. Both aircraft and spacecraft must have high reliability. They also must have as much computing power as possible and provide low latency between user control and effecter response while minimizing weight, volume, and power. However, there are several key differences between aircraft and spacecraft avionics.
Typically, the overall spacecraft operational time is much shorter than aircraft operation time, but the typical mission time (and hence, the time between preventive maintenance) is longer for a spacecraft than an aircraft. Also, the radiation environment is typically more severe for spacecraft than aircraft. A "loss of mission" scenario (i.e., the mission is not a success, but there are no casualties) arguably has a greater impact on a multi-million-dollar spaceflight mission than on a typical commercial flight. Such differences need to be weighed when determining if an aircraft-like integrated modular avionics (IMA) system is suitable for a crewed spacecraft. This paper will explore the preliminary design process of the Orion vehicle avionics system by first identifying the Orion driving requirements and the difference between Orion requirements and those of

Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

Control of air contaminants is a crucial factor in the safety considerations of crewed space flight. Indoor air quality needs to be closely monitored during long-range missions such as a Mars mission, and also on large complex space structures such as the International Space Station. This work mainly pertains to the detection and simulation of air contaminants in the space station, though much of the work is easily extended to buildings and ventilation systems. Here we propose a method with which to track the presence of contaminants using an accurate physical model, and also develop a robust procedure that would raise alarms when certain tolerance levels are exceeded. A part of this research concerns the modeling of air flow inside a spacecraft and the consequent dispersal pattern of contaminants. Our objective is to also monitor the contaminants on-line, so we develop a state estimation procedure that makes use of the measurements from a sensor system and determines an optimal estimate of the contamination in the system as a function of time and space. The real-time optimal estimates in turn are used to detect faults in the system and also offer diagnoses as to their sources. This work is concerned with the monitoring of air contaminants aboard future-generation spacecraft and seeks to satisfy NASA's requirements as outlined in their Strategic Plan document (Technology Development Requirements, 1996).
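A minimal sketch of the kind of state estimator described above is given below: a scalar Kalman filter tracking one contaminant concentration from a noisy sensor, with an alarm threshold. The dynamics, noise variances, and threshold are hypothetical placeholders, and the real procedure estimates a spatially distributed field rather than a single scalar.

```python
# Scalar Kalman filter for a decaying contaminant concentration with
# noisy measurements and a simple alarm check. All parameters assumed.

A = 0.98      # per-step concentration decay from ventilation (assumed)
Q = 1e-4      # process noise variance (assumed)
R = 4e-2      # sensor noise variance (assumed)
ALARM = 5.0   # alarm threshold, arbitrary concentration units

def kalman_step(x: float, p: float, z: float) -> tuple[float, float]:
    """One predict/update cycle; returns updated estimate and covariance."""
    x_pred, p_pred = A * x, A * A * p + Q    # predict
    k = p_pred / (p_pred + R)                # Kalman gain
    x_new = x_pred + k * (z - x_pred)        # update with measurement z
    return x_new, (1.0 - k) * p_pred

x, p = 0.0, 1.0
for z in [0.1, 0.3, 2.2, 4.8, 5.6, 6.1]:     # simulated sensor readings
    x, p = kalman_step(x, p, z)
    if x > ALARM:
        print(f"ALARM: estimated concentration {x:.2f}")
```

Raising alarms on the filtered estimate rather than the raw reading is what makes the procedure robust: a single noisy spike is discounted, while a sustained rise crosses the threshold.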
Birmele, Michele; Caro, Janicce; Newsham, Gerard; Roberts, Michael; Morford, Megan; Wheeler, Ray

Microbial detection, identification, and control are essential for the maintenance and preservation of spacecraft water systems. Requirements set by NASA put limitations on the energy, mass, materials, noise, cost, and crew time that can be devoted to microbial control. Efforts are being made to attain real-time detection and identification of microbial contamination in microgravity environments. Research for evaluating technologies for capability enhancement on-orbit is currently focused on the use of adenosine triphosphate (ATP) analysis for detection purposes and polymerase chain reaction (PCR) for microbial identification. Additional research is being conducted on how to control for microbial contamination on a continual basis. Existing microbial control methods in spacecraft utilize iodine or ionic silver biocides, physical disinfection, and point-of-use sterilization filters. Although these methods are effective, they require re-dosing due to loss of efficacy, have low human toxicity thresholds, produce poor taste, and consume valuable mass and crew time. Thus, alternative methods for microbial control are needed. This project also explores ultraviolet light-emitting diodes (UV-LEDs), surface passivation methods for maintaining residual biocide levels, and several antimicrobial materials aimed at improving current microbial control techniques, as well as addressing other materials presently under analysis and future directions to be pursued.

Grubbs, Rodney; Lindblom, Walt; Bowerman, Deborah S. (Technical Monitor)

Since its creation in 1958, NASA has been making and documenting history, both on Earth and in space. To complete its missions NASA has long relied on still and motion imagery to document spacecraft performance, see what can't be seen by the naked eye, and enhance the safety of astronauts and expensive equipment. Today, NASA is working to take advantage of new digital imagery technologies and techniques to make its missions more safe and efficient. An HDTV camera was on board the International Space Station from early August to mid-December 2001. HDTV cameras previously flown have had degradation in the CCD during the short duration of a Space Shuttle flight. An initial performance assessment of the CCD during the first-ever long-duration spaceflight of an HDTV camera, compared with earlier flights, is discussed. Recent Space Shuttle launches have been documented with HDTV cameras and new long lenses, giving clarity never before seen with video. Examples and comparisons will be illustrated between HD, high-speed film, and analog video of these launches and other NASA tests. Other uses of HDTV where image quality is of crucial importance will also be featured.

... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice: (11-061)] NASA Advisory Council; Commercial...: In accordance with the Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces a meeting of the Commercial Space Committee of the NASA...

.... Greg Mann, Office of International and Interagency Relations, (202) 358-5140, NASA Headquarters... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice 13-091] NASA International Space Station... meeting. SUMMARY: In accordance with the Federal Advisory Committee Act, Public Law 92-463, as amended...

Piszczor, Michael; Benson, Scott; Scheiman, David; Finacannon, Homer; Oleson, Steve; Landis, Geoffrey

A recent study by the NASA Glenn Research Center assessed the feasibility of using photovoltaics (PV) to power spacecraft for outer-planetary, deep-space missions. While the majority of spacecraft have relied on photovoltaics for primary power, the drastic reduction in solar intensity as the spacecraft moves farther from the sun has either limited the power available (severely curtailing scientific operations) or necessitated the use of nuclear systems. A desire by NASA and the scientific community to explore various bodies in the outer solar system and conduct "long-term" operations using smaller, "lower-cost" spacecraft has renewed interest in the feasibility of using photovoltaics for missions to Jupiter, Saturn and beyond. With recent advances in solar cell performance and continuing development in lightweight, high-power solar array technology, the study determined that photovoltaics is indeed a viable option for many of these missions.
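The inverse-square falloff that drives this trade is easy to quantify; the sketch below estimates the array area required for a fixed power level at increasing heliocentric distance. The cell efficiency is an assumed placeholder, and low-intensity/low-temperature effects on real cells are ignored.

```python
# Array area needed for fixed power vs. heliocentric distance, using
# the inverse-square law. Efficiency is assumed; LILT effects ignored.

SOLAR_CONST = 1361.0   # solar flux at 1 AU, W/m^2
EFF = 0.28             # assumed cell efficiency (placeholder)

def array_area_m2(power_w: float, dist_au: float) -> float:
    """Array area producing power_w at dist_au from the Sun."""
    flux = SOLAR_CONST / dist_au**2
    return power_w / (flux * EFF)

for body, au in [("Earth", 1.0), ("Jupiter", 5.2), ("Saturn", 9.5)]:
    print(f"{body:8s}: {array_area_m2(500.0, au):7.1f} m^2 for 500 W")
```

The roughly 27x (Jupiter) and 90x (Saturn) growth in required area relative to 1 AU is why lightweight array structures, rather than cell efficiency alone, dominate the feasibility question.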
McCamish, Shawn B

This research contributes to multiple spacecraft control by developing an autonomous distributed control algorithm for close proximity operations of multiple spacecraft systems, including rendezvous...

National Aeronautics and Space Administration — Fractionated spacecraft architectures to distribute mission performance from a single, monolithic satellite across large number of smaller spacecraft, for missions...

National Aeronautics and Space Administration — We have built and tested an optical extinction monitor for the detection of spacecraft cabin particulates. This sensor sensitive to particle sizes ranging from a few...

National Aeronautics and Space Administration — We propose to design, build and test an optical extinction monitor for the detection of spacecraft cabin particulates. This monitor will be sensitive to particle...

Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Spitale, Joseph M.; Le, Dang

Robotic spacecraft are controlled by onboard sets of commands called "sequences." Determining that sequences will have the desired effect on the spacecraft can be expensive in terms of both labor and computer coding time, with different particular costs for different types of spacecraft. Specification languages and an appropriate user interface to the languages can be used to make the most effective use of engineering validation time. This paper describes one specification and verification environment ("SAVE") designed for validating that command sequences have not violated any flight rules. This SAVE system was subsequently adapted for flight use on the TOPEX/Poseidon spacecraft. The relationship of this work to rule-based artificial intelligence and to other specification techniques is discussed, as well as the issues that arise in the transfer of technology from a research prototype to a full flight system.

National Aeronautics and Space Administration — Please note that funding to Dr. Simon Hsiang, a critical co-investigator for the development of the Spacecraft Optimization Layout and Volume (SOLV) model, was...

Franck, R.; Graven, P.; Liptak, L.

This paper describes the methodologies and findings from an industry survey of awareness and utility of Spacecraft Plug-&-Play Avionics (SPA). The survey was conducted via interviews, in-person and teleconference, with spacecraft prime contractors and suppliers. It focuses primarily on AFRL's SPA technology development activities but also explores the broader applicability and utility of Plug-&-Play (PnP) architectures for spacecraft. Interviews include large and small suppliers as well as large and small spacecraft prime contractors. Through these "product marketing" interviews, awareness and attitudes can be assessed, key technical and market barriers can be identified, and opportunities for improvement can be uncovered. Although this effort focuses on a high-level assessment, similar processes can be used to develop business cases and economic models which may be necessary to support investment decisions.

Kurnosova, L.V.; Fradkin, M.I.; Razorenov, L.A.

Experiments performed on the spacecraft Salyut 1, Kosmos 410, and Kosmos 443 enable us to record the disintegration products of particles which are formed in the material of the detectors on board the spacecraft. The observations were made by means of a delayed coincidence method. We have detected a meson component and also a component which is apparently associated with the generation of radioactive isotopes in the detectors.

Billerbeck, W. J.
Historical data on commercial spacecraft power systems are presented, and their power requirements are related to the growth of satellite communications channel usage. Some approaches for estimating future power requirements of this class of spacecraft through the year 2000 are proposed. The key technology drivers in satellite power systems are addressed. Several technological trends in such systems are described, focusing on the most useful areas for research and development of major subsystems, including solar arrays, energy storage, and power electronics equipment.

Wang, Jy-An J.; Ellis, Ronald J.; Hunter, Hamilton T.; Singleterry, Robert C. Jr.

Research is being conducted to develop an integrated technology for the prediction of aging behavior for space structural materials during service. This research will utilize state-of-the-art radiation experimental apparatus and analysis, updated codes and databases, and integrated mechanical and radiation testing techniques to investigate the suitability of numerous current and potential spacecraft structural materials. Also included are the effects on structural materials in surface modules and planetary landing craft, with or without fission power supplies. Spacecraft structural materials would also be in hostile radiation environments on the surface of the moon and planets without appreciable atmospheres, and on moons around planets with large, intense magnetic and radiation fields (such as the Jovian moons). The effects of extreme temperature cycles in such locations compound the effects of radiation on structural materials. This paper describes the integrated methodology in detail and shows that it will provide a significant technological advance for designing advanced spacecraft. This methodology will also allow for the development of advanced spacecraft materials through the understanding of the underlying mechanisms of material degradation in the space radiation environment. Thus, this technology holds a promise for revolutionary advances in material damage prediction and protection of space structural components as, for example, in the development of guidelines for managing surveillance programs regarding the integrity of spacecraft components, and the safety of the aging spacecraft.

Easton, C. R.

This paper presents an information architecture developed for Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.

Hadaegh, Fred Y.; Blackmore, James C.
Attitude estimation was examined for fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can perform estimation of the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specifies the set of sensors necessary to ensure that the attitude of the entire cluster (the "cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.
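The abstract does not spell out the observability conditions, but a plausible necessary condition of the kind it describes is that every module can be linked, through the relative-attitude sensing graph, to at least one module carrying a star tracker. The sketch below checks exactly that with a breadth-first search; the module count, sensor placement, and link topology are hypothetical, and this is an illustrative condition rather than the paper's actual criterion.

```python
# Illustrative connectivity check for cluster-attitude observability:
# every module must reach a star-tracker-equipped module via
# relative-attitude sensing links (graph edges). Hypothetical layouts.
from collections import deque

def cluster_attitude_observable(n_modules, star_trackers, rel_links):
    """True if each module connects to some star tracker via rel_links."""
    adj = {i: set() for i in range(n_modules)}
    for a, b in rel_links:        # each relative sensor constrains a pair
        adj[a].add(b)
        adj[b].add(a)
    reached, queue = set(star_trackers), deque(star_trackers)
    while queue:                  # BFS outward from anchored modules
        for nxt in adj[queue.popleft()]:
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return len(reached) == n_modules

# 4 modules, star tracker on module 0, chain of relative sensors:
print(cluster_attitude_observable(4, [0], [(0, 1), (1, 2), (2, 3)]))  # True
print(cluster_attitude_observable(4, [0], [(0, 1), (2, 3)]))          # False
```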
Dennison, J. R.; Swaminathan, Prasanna; Jost, Randy; Brunson, Jerilyn; Green, Nelson; Frederickson, A. Robb

A key parameter in modeling differential spacecraft charging is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across the spacecraft, as well as the time scale for charge transport and dissipation. Existing spacecraft charging guidelines recommend use of tests and imported resistivity data from handbooks that are based principally upon ASTM methods that are more applicable to classical ground conditions and designed for problems associated with power loss through the dielectric than for how long charge can be stored on an insulator. These data have been found to underestimate charging effects by one to four orders of magnitude for spacecraft charging applications. A review is presented of methods to measure the resistivity of highly insulating materials, including the electrometer-resistance method, the electrometer-constant-voltage method, the voltage rate-of-change method and the charge storage method. This is based on joint experimental studies conducted at the NASA Jet Propulsion Laboratory and Utah State University to investigate the charge storage method and its relation to spacecraft charging. The different methods are found to be appropriate for different resistivity ranges and for different charging circumstances. A simple physics-based model of these methods allows separation of the polarization current and dark current components from long-duration measurements of resistivity over day- to month-long time scales. Model parameters are directly related to the magnitude of charge transfer and storage and the rate of charge transport. The model largely explains the observed differences in resistivity found using the different methods and provides a framework for recommendations for the appropriate test method for

Meyer, Marit E.

In a spacecraft cabin environment, the size range of indoor aerosols is much larger and they persist longer than on Earth because they are not removed by gravitational settling. A previous aerosol experiment in 1991 documented that over 90% of the mass concentration of particles in the NASA Space Shuttle air was between 10 µm and 100 µm, based on measurements with a multi-stage virtual impactor and a nephelometer (Liu et al. 1991). While the now-retired Space Shuttle had short-duration missions (less than two weeks), the International Space Station (ISS) has been continually inhabited by astronauts for over a decade. High concentrations of inhalable particles on ISS are potentially responsible for crew complaints of respiratory and eye irritation and comments about 'dusty' air. Air filtration is the current control strategy for airborne particles on the ISS, and filtration modeling, performed for engineering and design validation of the air revitalization system in ISS, predicted that PM requirements would be met. However, aerosol monitoring has never been performed on the ISS to verify PM levels. A flight experiment is in preparation which will provide data on particulate matter in ISS ambient air. Particles will be collected with a thermophoretic sampler as well as with passive samplers which will extend the particle size range of sampling. Samples will be returned to Earth for chemical and microscopic analyses, providing the first aerosol data for ISS ambient air.

Stavnes, Mark W.; Hammoud, Ahmad N.; Bercaw, Robert W.

Electrical wiring systems are used extensively on NASA space systems for power management and distribution, control and command, and data transmission. The reliability of these systems when exposed to the harsh environments of space is very critical to mission success and crew safety. Failures have been reported both on the ground and in flight due to arc tracking in the wiring harnesses, made possible by insulation degradation. This report was written as part of a NASA Office of Safety and Mission Assurance (Code Q) program to identify and characterize wiring systems in terms of their potential use in aerospace vehicles. The goal of the program is to provide the information and guidance needed to develop and qualify reliable, safe, lightweight wiring systems, which are resistant to arc tracking and suitable for use in space power applications. This report identifies the environments in which NASA spacecraft will operate, and determines the specific NASA testing requirements. A summary of related test programs is also given in this report. These data will be valuable to spacecraft designers in determining the best wiring constructions for the various NASA applications.

Minow, J. I.; Nicholas, A. C.; Parker, L. N.; Xapsos, M.; Walker, P. W.; Stauffer, C.

The Space Environment Technical Discipline Team (TDT) is a technical organization led by NASA's Technical Fellow for Space Environments that supports NASA's Office of the Chief Engineer through the NASA Engineering and Safety Center.
The Space Environments TDT conducts independent technical assessments related to the space environment and space weather impacts on spacecraft for NASA programs and provides technical expertise to NASA management and programs where required. This presentation will highlight the status of applied space weather activities within the Space Environment TDT that support development of operational space weather applications and a better understanding of the impacts of space weather on space systems. We will first discuss a tool that has been developed for evaluating space weather launch constraints that are used to protect launch vehicles from hazardous space weather. We then describe an effort to better characterize three-dimensional radiation transport for CubeSat spacecraft and processing of micro-dosimeter data from the International Space Station, which the team plans to make available to the space science community. Finally, we will conclude with a quick description of an effort to maintain access to the real-time solar wind data provided by the Advanced Composition Explorer satellite at the Sun-Earth L1 point.

Wiegmann, Bruce M.

There is great interest in both the Planetary and Heliophysics communities in examining the outer planets of our solar system, the Heliopause region at the edge of the Solar System, and the regions of interstellar space beyond. These needs are well documented in the recent National Academy of Sciences Decadal Surveys. There is significant interest in developing revolutionary propulsion techniques that will enable such Heliopause scientific missions to be completed within 10 to 15 years of the launch date. One such enabling propulsion technique, commonly known as Electric Sail (E-Sail) propulsion, employs positively charged bare-wire tethers that extend radially outward from a rotating spacecraft spinning at a rate of one revolution per hour. Around the positively charged bare-wire tethers, a Debye sheath is created once positive voltage is applied. This sheath stands off of the bare-wire tether at a diameter that is proportional to the voltage on the wire coupled with the flux density of solar wind ions (which depends on the spacecraft's location in the solar system). The protons expelled from the sun (the solar wind) at 400 to 800 km/sec are electrostatically repelled away from these positively charged Debye sheaths, and propulsive thrust is produced via the resulting momentum transfer. The amount of thrust produced is directly proportional to the total wire length. The Marshall Space Flight Center (MSFC) Electric Sail team is currently funded via a two-year Phase II NASA Innovative Advanced Concepts (NIAC) award made in July 2015. The team's current activities are: 1) developing a Particle-in-Cell (PIC) numeric engineering model, applicable to a variety of missions, from the experimental data collected at MSFC's Solar Wind Facility on the interaction between a simulated solar wind and a charged bare wire, and 2) developing the necessary tether deployers and tethers to enable successful deployment of multiple, multi-km-length bare tethers.

Scott, James R.; Martini, Michael C.

A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system.
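As an illustration of the variable-step Taylor approach (whose accuracy and direct step-size selection are described below), here is a minimal sketch on a toy oscillator problem. It illustrates only the recurrence-plus-Horner pattern and is in no way SNAP's implementation.

```python
# Minimal variable-step Taylor-series integrator for x'' = -x
# (harmonic oscillator). Coefficients come from the recurrence
#   x[k+1] = v[k]/(k+1),  v[k+1] = -x[k]/(k+1),
# and the step size is chosen directly from the highest-order
# coefficient, so no step is ever repeated (illustration only).
import math

def taylor_step(x0, v0, order=15, tol=1e-12):
    x = [x0] + [0.0] * order
    v = [v0] + [0.0] * order
    for k in range(order):
        x[k + 1] = v[k] / (k + 1)
        v[k + 1] = -x[k] / (k + 1)
    # Direct step-size selection from the highest-order coefficients.
    top = max(abs(x[order]), abs(v[order]), 1e-300)
    h = (tol / top) ** (1.0 / order)
    # Evaluate both truncated series at h by Horner's rule.
    xs = vs = 0.0
    for k in range(order, -1, -1):
        xs = xs * h + x[k]
        vs = vs * h + v[k]
    return xs, vs, h

x, v, t = 1.0, 0.0, 0.0
while t < 10.0:
    x, v, h = taylor_step(x, v)
    t += h
print(f"t={t:.3f}  x={x:+.9f}  exact={math.cos(t):+.9f}")
```

Because the error estimate comes straight from the last computed coefficient, the step size is obtained in closed form rather than by trial and rejection, which is the property highlighted below.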
The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central-body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (an order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and

This NASA Strategic Plan describes an ambitious, exciting vision for the Agency across all its Strategic Enterprises that addresses a series of fundamental questions of science and research. This vision is so challenging that it literally depends on the success of an aggressive, cutting-edge advanced technology development program. The objective of this plan is to describe the NASA-wide technology program in a manner that provides not only the content of ongoing and planned activities, but also the rationale and justification for these activities in the context of NASA's future needs. The scope of this plan is Agencywide, and it includes technology investments to support all major space and aeronautics program areas, but particular emphasis is placed on longer-term strategic technology efforts that will have broad impact across the spectrum of NASA activities and perhaps beyond. Our goal is to broaden the understanding of NASA technology programs and to encourage greater participation from outside the Agency. By relating technology goals to anticipated mission needs, we hope to stimulate additional innovative approaches to technology challenges and promote more cooperative programs with partners outside NASA who share common goals. We also believe that this will increase the transfer of NASA-sponsored technology into nonaerospace applications, resulting in an even greater return on the investment in NASA.

Brine drying systems may be used in spaceflight. There are several advantages to using brine processing technologies for long-duration human missions, including a reduction in resupply requirements and the achievement of high water recovery ratios. The objective of this project was to evaluate four technologies for the drying of spacecraft water recycling system brine byproducts.
The technologies tested were NASA's Forward Osmosis Brine Drying (FOBD), Paragon's Ionomer Water Processor (IWP), NASA's Brine Evaporation Bag (BEB) System, and UMPQUA's Ultrasonic Brine Dewatering System (UBDS). The purpose of this work was to evaluate the hardware using feed streams composed of brines similar to those generated on board the International Space Station (ISS) and future exploration missions. The brine formulations used for testing were the ISS Alternate Pretreatment and Solution 2 (Alt Pretreat). The brines were generated using the Wiped-film Rotating-disk (WFRD) evaporator, which is a vapor compression distillation system that is used to simulate the function of the ISS Urine Processor Assembly (UPA). Each system was evaluated based on the results from testing and Equivalent System Mass (ESM) calculations. A Quality Function Deployment (QFD) matrix was also developed as a method to compare the different technologies based on customer and engineering requirements.

Chambers, Katherine H.; Koschmeder, Louis A.; Hollansworth, James E.; ONeill, Jack; Jones, Robert E.; Gibbons, Richard C.

Emerging applications of commercial mobile satellite communications include satellite delivery of compact disc (CD) quality radio to car drivers who can select their favorite programming as they drive any distance; transmission of current air traffic data to aircraft; and handheld communication of data and images from any remote corner of the world. Experiments with the enabling technologies and tests and demonstrations of these concepts are being conducted before the first satellite is launched by utilizing an existing NASA spacecraft.

Shull, Sarah A.; Schneider, Walter F.

The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support system (ECLSS) capabilities critical to enabling long-duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer-duration human space exploration missions assessed by NASA's Habitability Architecture Team.

A. I. Altukhov

Full Text Available The paper deals with a method for forming quality requirements for images of emergency spacecraft. The images are obtained by remote sensing of spacecraft deployed in near-Earth orbit, in the visible range of electromagnetic radiation. The method is based on jointly taking into account the conditions of the space survey, the characteristics of the surveillance equipment, the main design features of the observed spacecraft, and the orbital inspection tasks. Method. The quality score is the predicted linear resolution of the image, which makes it possible to form a complete view of the pictorial properties of the space image obtained by the electro-optical system of the observing satellite. Requirements on the numerical value of this indicator are formulated based on the properties of the remote sensing system, which forms images under outer-space conditions, and the properties of the observed emergency spacecraft: its dimensions, platform construction, and on-board equipment placement. To implement the method, the authors have developed a predictive model of linear-resolution requirements for images of emergency spacecraft, making it possible to select the intervals of space imaging and obtain the satellite images required for quality interpretation.
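A drastically simplified stand-in for such a predictive model is the projected detector footprint. Everything in the sketch below (focal length, pixel pitch, viewing geometry) is a hypothetical placeholder, not the authors' model; it only illustrates how predicted linear resolution scales with survey conditions.

```python
# Sketch: predicted linear resolution of an inspection image as the
# projected pixel footprint. A stand-in for the paper's predictive
# model; focal length, pixel pitch, and geometry are hypothetical.
import math

def linear_resolution_m(slant_range_m, pixel_pitch_m, focal_length_m,
                        off_boresight_deg=0.0):
    """Target sample distance, growing with range and viewing obliquity."""
    gsd = slant_range_m * pixel_pitch_m / focal_length_m
    return gsd / math.cos(math.radians(off_boresight_deg))

# Hypothetical survey: 40 km slant range, 10 um pixels, 1.5 m focal length
res = linear_resolution_m(40e3, 10e-6, 1.5, off_boresight_deg=30.0)
print(f"predicted linear resolution: {res:.2f} m")  # ~0.31 m
```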
Main results. To verify the functionality of the proposed model, we have calculated the numerical values of the linear resolution that ensure successful determination of gross structural damage to spacecraft and identification of changes in their spatial orientation. As input data we used dimensions and geometric primitives corresponding to the shapes of the inspected spacecraft considered: "Resurs-P", "Canopus-B", and "Electro-L". Numerical values of the image linear resolution have been obtained that ensure the successful solution of the task of determining gross structural damage to spacecraft.

Chu, Shao-sheng R.; Allen, Christopher S.

carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, which has a more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.

Amzajerdian, Farzin; Roback, Vincent E.; Bulyshev, Alexander E.; Brewster, Paul F.; Carrion, William A.; Pierrottet, Diego F.; Hines, Glenn D.; Petway, Larry B.; Barnes, Bruce W.; Noe, Anna M.

NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of the landing, from about 1 kilometer to 500 meters above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of the terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16,000-pixel range images with 7-centimeter precision, at a 20 Hertz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.

Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik

Electronic systems capable of extreme temperature operation are required for many future NASA space exploration missions where it is desirable to have smaller, lighter, and less expensive spacecraft and probes.
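For a sense of scale on the flash lidar figures quoted above, the time-of-flight arithmetic behind a range measurement is simple; the sketch below shows the conversion and the timing precision implied by 7 cm range precision.

```python
# Flash-lidar time-of-flight arithmetic: range from round-trip time,
# and the timing precision implied by a 7 cm range precision.
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s):
    return C * round_trip_s / 2.0

def tof_from_range(range_m):
    return 2.0 * range_m / C

print(f"1800 m slant range -> {tof_from_range(1800.0)*1e6:.2f} us round trip")
print(f"7 cm precision    -> {tof_from_range(0.07)*1e12:.0f} ps timing")
```

A 7 cm precision thus corresponds to resolving round-trip times to roughly half a nanosecond, simultaneously in every one of the 16,000 detector pixels.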
Presently, spacecraft on-board electronics are maintained at about room temperature by use of thermal control systems. An Extreme Temperature Electronics Program at the NASA Glenn Research Center focuses on the development of electronics suitable for space exploration missions. The effects of exposure to extreme temperatures and thermal cycling are being investigated for commercial-off-the-shelf components as well as for components specially developed for harsh environments. An overview of this program, along with selected data, is presented.

Bazhenov, V. I.; Osin, M. I.; Zakharov, Y. V.

The fundamental aspects of modeling spacecraft characteristics by using computing means are considered. Particular attention is devoted to design studies, the description of the physical appearance of the spacecraft, and simulated modeling of spacecraft systems. The fundamental questions of organizing ground-based spacecraft testing and the methods of mathematical modeling are presented.

'Standards of Conduct' for employees (14 CFR Part 1207) is set forth in this handbook and is hereby incorporated in the NASA Directives System. This handbook incorporates, for the convenience of NASA employees, the regulations now in effect prescribing standards of conduct for NASA employees. These regulations set forth the high ethical standards of conduct required of NASA employees in carrying out their duties and responsibilities. These regulations have been approved by the Office of Government Ethics, Office of Personnel Management. The regulations incorporated in this handbook were first published in the Federal Register on October 21, 1967 (32 FR 14648-14659); Part B, concerning the acceptance of gifts, gratuities, or entertainment, was extensively revised on January 19, 1976 (41 FR 2631-2633) to clarify and generally to restrict the exceptions to the general rule against acceptance by a NASA employee from persons or firms doing or seeking business with NASA. Those regulations were updated on January 29, 1985 (50 FR 3887) to ensure conformity to the Ethics in Government Act of 1978 regarding the public financial disclosure statement. These regulations were published in the Federal Register on June 16, 1987 (52 FR 22755-764) and a correction was printed on Sept. 28, 1987 (52 FR 36234).

Co-curator of ACTS 2014 together with Rasmus Holmboe, Judith Schwarzbart and Sanne Kofoed. ACTS is the Museum of Contemporary Art's international bi-annual festival. ACTS was established in 2011 and, while the primary focus is on sound and performance art, it also looks toward socially oriented art. ... For the 2014 festival, the museum has entered into a collaboration with the Department for Performance Design at Roskilde University, with continued focus on sound and performance art, and social art in public spaces. With ACTS, art moves out of its usual exhibition space and instead utilizes the city, its various possibilities and public spaces as a stage. ACTS takes place in and around the museum and diverse locations in Roskilde city. ACTS is partly curated by the museum staff and partly by guest curators. ACTS 2014 is supported by Nordea-fonden and is a part of the project The Museum goes downtown. ...
Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid, and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches, and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

Curry, Robert E.

The National Aeronautics and Space Administration conducts a wide variety of remote sensing projects using several unique aircraft platforms. These vehicles have been selected and modified to provide capabilities that are particularly important for geophysical research; in particular, routine access to very high altitudes, long range, long endurance, precise trajectory control, and the payload capacity to operate multiple, diverse instruments concurrently. While the NASA program has been in operation for over 30 years, new aircraft and technological advances that will expand the capabilities for airborne observation are continually being assessed and implemented. This presentation will review the current state of NASA's science platforms, recent improvements and new mission concepts, as well as provide a survey of emerging technologies, such as unmanned aerial vehicles for long-duration observations (Global Hawk and Predator).
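As a toy illustration of fitting geometric primitives to a spacecraft point cloud (the component-detection scheme above uses a Hough transform for planes; the RANSAC-style detector below is a simpler stand-in, with arbitrary thresholds):

```python
# Minimal RANSAC-style plane detection in a 3D point cloud: a toy
# stand-in for the plane stage of the component-detection scheme
# above (the paper itself uses a Hough transform for planes).
import numpy as np

def detect_plane(points, n_iter=500, dist_thresh=0.02, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, p0)
    return best_model, best_inliers

# Synthetic cloud: a noisy z=0 "solar panel" plus background clutter
rng = np.random.default_rng(1)
panel = np.c_[rng.uniform(-1, 1, (400, 2)), rng.normal(0, 0.005, 400)]
clutter = rng.uniform(-1, 1, (100, 3))
model, inliers = detect_plane(np.vstack([panel, clutter]))
print(f"plane normal {model[0].round(3)}, {len(inliers)} inliers")
```

Bounding the inlier set with a minimum-area rectangle would then yield the bounded patch representation the scheme builds on.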
Applications of information technology that allow more efficient use of flight time and the ability to rapidly reconfigure systems for different mission objectives are addressed.

Johnson, Les; Young, Roy; Montgomery, Edward; Alhorn, Dean

In the early 2000s, NASA made substantial progress in the development of solar sail propulsion systems for use in robotic science and exploration of the solar system. Two different 20-m solar sail systems were produced, and both successfully completed functional vacuum testing in NASA Glenn Research Center's (GRC's) Space Power Facility at Plum Brook Station, Ohio. The sails were designed and developed by Alliant Techsystems (ATK) Space Systems and L'Garde, respectively. The sail systems consist of a central structure with four deployable booms that support the sails. These sail designs are robust enough for deployment in a one-atmosphere, one-gravity environment and are scalable to much larger solar sails, perhaps as large as 150 m on a side. Computational modeling and analytical simulations were performed in order to assess the scalability of the technology to the larger sizes required to implement the first generation of missions using solar sails. Life and space environmental effects testing of sail and component materials was also conducted. NASA terminated funding for solar sails and other advanced space propulsion technologies shortly after these ground demonstrations were completed. In order to capitalize on the $30M investment made in solar sail technology to that point, NASA Marshall Space Flight Center (MSFC) funded the NanoSail-D, a subscale solar sail system designed for possible small spacecraft applications.
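For context on why scaling from 20 m to 150 m matters, an ideal-sail estimate of the characteristic acceleration follows. The sail masses and the efficiency factor are hypothetical placeholders, not figures from the ground demonstration program.

```python
# Ideal-sail characteristic acceleration: a = 2*P*A*eta / m, where
# P is the solar radiation pressure on an absorber at 1 AU and the
# factor of 2 assumes near-perfect reflection. Masses are hypothetical;
# 'eta' lumps reflectivity and other non-idealities.
P_SRP = 4.563e-6  # N/m^2, solar radiation pressure at 1 AU

def char_accel(side_m, mass_kg, eta=0.85):
    area = side_m**2
    return 2.0 * P_SRP * area * eta / mass_kg

for side, mass in [(20.0, 40.0), (150.0, 250.0)]:  # hypothetical masses
    a = char_accel(side, mass)
    print(f"{side:5.0f} m sail, {mass:5.0f} kg -> {a*1e3:.3f} mm/s^2")
```

Because thrust grows with area while mass grows more slowly for gossamer structures, the large sail in this sketch is nearly an order of magnitude more capable, which is what motivates the scalability analyses described above.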
The NanoSail-D mission flew on board a Falcon-1 rocket launched August 2, 2008; as a result of the failure of that rocket, NanoSail-D never achieved orbit. The NanoSail-D flight spare was flown in the Fall of 2010. This review paper summarizes NASA's investment in solar sail technology to date and discusses future opportunities.

Full Text Available Communication delays are inherently present in information exchange between spacecraft and have an effect on the control performance of spacecraft formations. In this work, attitude coordination control of a spacecraft formation in the presence of multiple communication delays between spacecraft is addressed. A virtual-system-based approach is utilized for the case in which a constant reference attitude is available to only part of the spacecraft. Feedback from the virtual systems to the spacecraft formation is introduced to maintain the formation. Using the backstepping control method, the input torque of each spacecraft is designed such that the attitude of each spacecraft converges asymptotically to the states of its corresponding virtual system. Furthermore, the backstepping technique and the Lyapunov-Krasovskii method contribute to the control law design when the reference attitude is time-varying and can be obtained by each spacecraft. Finally, the effectiveness of the proposed methodology is illustrated by numerical simulations of a spacecraft formation.

Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve

In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called the Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project, which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. They were then automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.

Full Text Available A dedicated mission in low Earth orbit is proposed to test predictions of gravitational interaction theories and to directly measure the atmospheric density in a relevant altitude range, as well as to provide a metrological platform able to tie different space geodesy techniques. The concept foresees a small spacecraft to be placed in a dawn-dusk eccentric orbit between 450 and 1200 km of altitude. The spacecraft will be tracked from the ground with high precision, and a three-axis accelerometer package on board will measure the non-gravitational accelerations acting on its surface.
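The density retrieval alluded to here follows from inverting the standard drag equation; the sketch below uses hypothetical spacecraft properties and a hypothetical accelerometer reading, not the proposed mission's values.

```python
# Sketch: retrieving atmospheric density from a measured drag
# (non-gravitational) acceleration via the standard drag equation
#   a_drag = 0.5 * rho * Cd * (A/m) * v^2.
# Spacecraft properties below are hypothetical placeholders.
def density_from_drag(a_drag, v_rel, cd=2.2, area=0.5, mass=150.0):
    """Invert the drag equation for density (kg/m^3)."""
    return 2.0 * mass * a_drag / (cd * area * v_rel**2)

# Hypothetical along-track accelerometer reading near 450 km altitude
a_meas = 2.0e-7          # m/s^2
v_orbital = 7.64e3       # m/s, roughly circular-orbit speed at ~450 km
print(f"rho = {density_from_drag(a_meas, v_orbital):.2e} kg/m^3")
```

The eccentric 450-1200 km orbit then sweeps this measurement through a range of altitudes, which is what makes the concept useful for constraining density models rather than a single altitude bin.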
Estimates of parameters related to fundamental physics and geophysics should be obtained by precise orbit determination, while the accelerometer data will be instrumental in constraining the atmospheric density. Along with the mission's scientific objectives, a conceptual configuration is described together with an analysis of the dynamical environment experienced by the spacecraft and the accelerometer.

Morgan, Daniel James

There has been considerable interest in formation flying spacecraft due to their potential to perform certain tasks at lower cost than monolithic spacecraft. Formation flying enables the use of smaller, cheaper spacecraft that distribute the risk of the mission. Recently, the ideas of formation flying have been extended to spacecraft swarms made up of hundreds to thousands of 100-gram-class spacecraft known as femtosatellites. The large number of spacecraft and the limited capabilities of each individual spacecraft present a significant challenge in guidance, navigation, and control. This dissertation deals with the guidance and control algorithms required to enable the flight of spacecraft swarms. The algorithms developed in this dissertation are focused on achieving two main goals: swarm keeping and swarm reconfiguration. The objectives of swarm keeping are to maintain bounded relative distances between spacecraft, prevent collisions between spacecraft, and minimize the propellant used by each spacecraft. Swarm reconfiguration requires the transfer of the swarm to a specific shape. As with swarm keeping, minimizing the propellant used and preventing collisions are the main objectives. Additionally, the algorithms required for swarm keeping and swarm reconfiguration should be decentralized with respect to communication and computation so that they can be implemented on femtosats, which have limited hardware capabilities. The algorithms developed in this dissertation are concerned with swarms located in low Earth orbit. In these orbits, Earth oblateness and atmospheric drag have a significant effect on the relative motion of the swarm. The complicated dynamic environment of low Earth orbits further complicates the swarm-keeping and swarm-reconfiguration problems. To better develop and test these algorithms, a nonlinear relative dynamic model with J2 and drag perturbations is developed. This model is used throughout this dissertation to validate the algorithms

Thomas, Evan A.; Klaus, David M.

It is well recognized that water handling systems used in a spacecraft are prone to failure caused by biofouling and mineral scaling, which can clog mechanical systems and degrade the performance of capillary-based technologies. Long-duration spaceflight applications, such as extended stays at a lunar outpost or during a Mars transit mission, will increasingly benefit from hardware that is generally more robust and operationally sustainable over time. This paper presents potential design and testing considerations for improving the reliability of water handling technologies for exploration spacecraft. Our application of interest is to devise a spacecraft wastewater management system wherein fouling can be accommodated by design attributes of the management hardware, rather than implementing some means of preventing its occurrence.

Didion, Jeffrey R.

Optimization of spacecraft size, weight and power (SWaP) resources is an explicit technical priority at Goddard Space Flight Center.
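Stepping back to the swarm relative-motion models above: their unperturbed baseline is the Clohessy-Wiltshire (Hill) equations, whose closed-form propagation is sketched below. The J2 and drag perturbations that the dissertation adds on top of this baseline are deliberately omitted here.

```python
# Closed-form Clohessy-Wiltshire (Hill) propagation: the unperturbed
# baseline for LEO relative motion. x: radial, y: along-track,
# z: cross-track, all relative to a chief in a circular orbit.
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def cw_propagate(state, t, a_chief):
    """Propagate [x, y, z, vx, vy, vz] (m, m/s) forward by t seconds."""
    x0, y0, z0, vx0, vy0, vz0 = state
    n = math.sqrt(MU / a_chief**3)          # chief mean motion
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + s / n * vx0 + 2 / n * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + 2 / n * (c - 1) * vx0 + (4 * s - 3 * n * t) / n * vy0
    z = c * z0 + s / n * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return [x, y, z, vx, vy, vz]

# A deputy 100 m behind the chief (pure along-track offset) stays
# bounded over one orbit in this unperturbed model.
state = cw_propagate([0, -100.0, 0, 0, 0, 0], 5828.5, 7000e3)
print([round(v, 3) for v in state])
```

In the swarm-keeping problem, it is precisely the J2 and drag terms absent from this model that cause secular drift, which is why the dissertation develops a perturbed nonlinear model before designing the guidance algorithms.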
Embedded Thermal Control Subsystems are a promising technology with many cross-cutting NASA, DoD, and commercial applications: 1) CubeSat/SmallSat spacecraft architectures, 2) high-performance computing, 3) on-board spacecraft electronics, and 4) power electronics and RF arrays. The Embedded Thermal Control Subsystem technology development efforts focus on component-, board-, and enclosure-level devices that will ultimately include intelligent capabilities. The presentation will discuss electric, capillary, and hybrid-based hardware research and development efforts at Goddard Space Flight Center. The Embedded Thermal Control Subsystem development program consists of interrelated sub-initiatives, e.g., chip/component-level thermal control devices, self-sensing thermal management, and advanced manufactured structures. This presentation includes technical status and progress on each of these investigations. Future sub-initiatives, technical milestones, and program goals will be presented.

Sasaki, Daisuke; Yamakawa, Hiroshi; Usui, Hideyuki; Funaki, Ikkoh; Kojima, Hirotsugu

To capture the kinetic energy of the solar wind by creating a large magnetosphere around the spacecraft, a magneto-plasma sail injects a plasma jet into a strong magnetic field produced by an electromagnet onboard the spacecraft. The aim of this paper is to investigate the effect of the IMF (interplanetary magnetic field) on the magnetosphere of a magneto-plasma sail. First, using an axisymmetric two-dimensional MHD code, we numerically confirm the magnetic field inflation and the formation of a magnetosphere by the interaction between the solar wind and the magnetic field. The expansion of an artificial magnetosphere by the plasma injection is then simulated, and we show that the magnetosphere is formed by the interaction between the solar wind and the magnetic field expanded by the plasma jet from the spacecraft. This simulation indicates that the size of the artificial magnetosphere becomes smaller when the IMF is applied.

Data catalog series for space science and applications flight missions. Volume 5A: Descriptions of astronomy, astrophysics, and solar physics spacecraft and investigations. Volume 5B: Descriptions of data sets from astronomy, astrophysics, and solar physics spacecraft and investigations

Kim, Sang J. (Editor)

The main purpose of the data catalog series is to provide descriptive references to data generated by space science flight missions. The data sets described include all of the actual holdings of the National Space Science Data Center (NSSDC), all data sets for which direct contact information is available, and some data collections held and serviced by foreign investigators, NASA, and other U.S. government agencies. This volume contains narrative descriptions of data sets of astronomy, astrophysics, and solar physics spacecraft and investigations. The following spacecraft series are included: Mariner, Pioneer, Pioneer Venus, Venera, Viking, Voyager, and Helios. Separate indexes to the planetary and interplanetary missions are also provided.

Cramer, K. Elliott; Leckey, Cara A. C.; Howell, Patricia A.; Johnston, Patrick H.; Burke, Eric R.; Zalameda, Joseph N.; Winfree, William P.; Seebo, Jeffery P.

The use of composite materials continues to increase in the aerospace community due to the potential benefits of reduced weight, increased strength, and manufacturability. Ongoing work at NASA involves the use of large-scale composite structures for spacecraft (payload shrouds, cryotanks, crew modules, etc.).
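Returning to the magneto-plasma sail simulations above: a simple pressure-balance argument gives the scale of the artificial magnetosphere, equating dipole magnetic pressure with solar-wind dynamic pressure. The coil parameters below are hypothetical, and this estimate ignores the plasma-injection and IMF effects that the MHD simulations capture.

```python
# Pressure-balance estimate of an artificial magnetosphere's size:
# the standoff distance where dipole magnetic pressure equals the
# solar-wind dynamic pressure, B(r)^2/(2*mu0) = rho*v^2, with a
# dipole field B(r) ~ B0*(R/r)^3. Coil parameters are hypothetical.
import math

M_P = 1.67e-27           # proton mass, kg
MU0 = 4e-7 * math.pi     # vacuum permeability, H/m

def standoff_distance(b0_tesla, coil_radius_m, n_per_m3=7e6, v_ms=450e3):
    p_dyn = M_P * n_per_m3 * v_ms**2       # ~2 nPa at 1 AU
    return coil_radius_m * (b0_tesla**2 / (2.0 * MU0 * p_dyn)) ** (1.0 / 6.0)

# Hypothetical 2 m coil producing 0.1 T at its surface
print(f"standoff ~ {standoff_distance(0.1, 2.0):.0f} m")
```

The weak 1/6-power dependence is the core difficulty the plasma injection addresses: raising the coil field alone grows the magnetosphere very slowly, whereas inflating the field with injected plasma enlarges the effective dipole.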
NASA is also working to enable the use and certification of composites in aircraft structures through the Advanced Composites Project (ACP). The rapid, in situ characterization of a wide range of composite materials and structures has become a critical concern for the industry. In many applications it is necessary to monitor changes in these materials over a long time. The quantitative characterization of composite defects such as fiber waviness, reduced bond strength, delamination damage, and microcracking is of particular interest. The research approaches of NASA's Nondestructive Evaluation Sciences Branch include investigation of conventional, guided-wave, and phase-sensitive ultrasonic methods, infrared thermography, and x-ray computed tomography techniques. The use of simulation tools for optimizing and developing these methods is also an active area of research. This paper will focus on current research activities related to large-area NDE for rapidly characterizing aerospace composites.

Stebbins, Robin; Jennrich, Oliver; McNamara, Paul

With the conclusion of the NASA/ESA partnership on the Laser Interferometer Space Antenna (LISA) Project, NASA initiated a study to explore mission concepts that will accomplish some or all of the LISA science objectives at lower cost. The Gravitational-Wave Mission Concept Study consisted of a public Request for Information (RFI), a Core Team of NASA engineers and scientists, a Community Science Team, a Science Task Force, and an open workshop. The RFI yielded 12 mission concepts, 3 instrument concepts, and 2 technologies. The responses ranged from concepts that eliminated the drag-free test mass of LISA to concepts that replace the test mass with an atom interferometer. The Core Team reviewed the noise budgets and sensitivity curves, the payload and spacecraft designs and requirements, orbits and trajectories, and technical readiness and risk. The Science Task Force assessed the science performance by calculating the horizons, the detection rates, and the accuracy of astrophysical parameter estimation for massive black hole mergers, stellar-mass compact objects inspiraling into central engines, and close compact binary systems. Three mission concepts have been studied by Team-X, JPL's concurrent design facility, to define a conceptual design, evaluate key performance parameters, assess risk, and estimate cost and schedule. The study results are summarized.

NASA officials gather around a console in the Mission Operations Control Room (MOCR) in the Mission Control Center (MCC) prior to the making of a decision whether to land Apollo 16 on the moon or to abort the landing. Seated, left to right, are Dr. Christopher C. Kraft Jr., Director of the Manned Spacecraft Center (MSC), and Brig. Gen. James A. McDivitt (USAF), Manager, Apollo Spacecraft Program Office, MSC; and standing, left to right, are Dr. Rocco A. Petrone, Apollo Program Director, Office of Manned Space Flight (OMSF), NASA HQ.; Capt. John K. Holcolmb (U.S. Navy, Ret.), Director of Apollo Operations, OMSF; Sigurd A. Sjoberg, Deputy Director, MSC; Capt. Chester M. Lee (U.S. Navy, Ret.), Apollo Mission Director, OMSF; Dale D. Myers, NASA Associate Administrator for Manned Space Flight; and Dr. George M. Low, NASA Deputy Administrator.

Chanteur, G. M.; Le Contel, O.; Sahraoui, F.; Retino, A.; Mirioni, L.

Multi-spacecraft missions like the ESA mission CLUSTER and the NASA mission MMS are essential to improve our understanding of physical processes in space plasmas.
Several methods were designed in the '90s, during the preparation phase of the CLUSTER mission, to estimate gradients of physical fields from simultaneous multi-point measurements [1, 2]. Both CLUSTER and MMS involve four spacecraft with identical full scientific payloads, including various sensors of electromagnetic fields and different types of particle detectors. In the standard methods described in [1, 2], which are presently in use, data from the four spacecraft have identical weights, and the estimated gradients are most reliable when the tetrahedron formed by the four spacecraft is regular. There are three types of errors affecting the estimated gradients (see chapter 14 in ): i) truncation errors, due to local non-linearity of the spatial variations; ii) physical errors, due to the instruments; and iii) geometrical errors, due to uncertainties in the positions of the spacecraft. An assessment of truncation errors for a given observation requires a theoretical model of the measured field. Instrumental errors can easily be taken into account for a given geometry of the cluster but are usually less than the geometrical errors, which diverge quite fast when the tetrahedron flattens, a circumstance occurring twice per orbit of the cluster. Hence reliable gradients can be estimated only on part of the orbit. Reciprocal vectors of the tetrahedron were presented in chapter 4 of ; they have the advantage over other methods of treating the four spacecraft symmetrically and of allowing a theoretical analysis of the errors (see chapters 4 of and 4 of ). We will present Generalized Reciprocal Vectors for weighted data and an optimization procedure to improve the reliability of the estimated gradients when the tetrahedron is not regular. A brief example using CLUSTER or MMS data will be given. This approach

The last thirty years have seen the Space Shuttle as the prime United States spacecraft for manned spaceflight missions. Many lessons have been learned about spacecraft design and operation throughout these years. Over the next few decades, a large increase of manned spaceflight in the commercial sector is expected. This will result in the exposure of commercial crews and passengers to many of the same risks crews of the Space Shuttle have encountered. One of the more dire situations that can be encountered is the loss of pressure in the habitable volume of the spacecraft during on-orbit operations. This is referred to as a cabin leak. This paper seeks to establish a general cabin leak response philosophy with the intent of educating future spacecraft designers and operators. After establishing a relative definition for a cabin leak, the paper covers general descriptions of detection equipment, detection methods, and general operational methods for management of a cabin leak. Subsequently, all these items are addressed from the perspective of the Space Shuttle Program, as this will be of the most value to future spacecraft due to similar operating profiles. Emphasis here is placed upon why and how these methods and philosophies have evolved to meet the Space Shuttle's needs. This includes the core ideas of: considerations of maintaining higher cabin pressures vs. lower cabin pressures, the pros and cons of a system designed to feed the leak with gas from pressurized tanks vs.
using pressure suits to protect against lower cabin pressures, timeline and consumables constraints, re-entry considerations with leaks of unknown origin, and the impact the International Space Station (ISS) has had on the standard Space Shuttle cabin leak response philosophy. This last item in itself includes: procedural management differences, hardware considerations, additional capabilities due to the presence of the ISS and its resources, and ISS docking/undocking considerations with a

Greenwell, T. J.

The Multimission Modular Spacecraft (MMS) provides a standard spacecraft bus to a user for a variety of space missions ranging from near-Earth to synchronous orbits. The present paper describes the philosophy behind the MMS module test program and discusses the implementation of the test program. It is concluded that the MMS module test program provides an effective and comprehensive customer buy-off at the subsystem contractor's plant, is an optimum approach for checkout of the subsystems prior to use for on-orbit servicing in the Shuttle Cargo Bay, and is a cost-effective technique for environmental testing.

Full Text Available This paper proposes a method to design a robust parametric control for autonomous rendezvous of spacecraft with uncertain inertial information. We consider model uncertainty in the traditional Clohessy-Wiltshire (C-W) equations to formulate the dynamic model of the relative motion. Based on eigenstructure assignment and model reference theory, a concise control law for spacecraft rendezvous is proposed, whose parameters can be determined by solving an optimization problem. The cost function considers the stabilization of the system and other performance criteria. Simulation results illustrate the robustness and effectiveness of the proposed control.

Lai, Shu T.

This paper presents an overview of the roles played by incoming and outgoing electrons in spacecraft surface charging and stresses the importance of surface conditions for spacecraft charging. The balance between the incoming electron current from the ambient plasma and the outgoing currents of secondary electrons, backscattered electrons, and photoelectrons from the surfaces determines the surface potential. Since surface conditions significantly affect the outgoing currents, the critical temperature and the surface potential are also significantly affected. As a corollary, high-level differential charging of adjacent surfaces with very different surface conditions is a space hazard.

Wu, Baolin; Shen, Qiang; Cao, Xibin

The problem of spacecraft attitude stabilization with limited communication and external disturbances is investigated based on an event-triggered control scheme. In the proposed scheme, attitude and control torque information need be transmitted only at discrete triggering times, when a defined measurement error exceeds a state-dependent threshold. The proposed control scheme not only guarantees that spacecraft attitude control errors converge toward a small invariant set containing the origin, but also ensures that there is no accumulation of triggering instants. The performance of the proposed control scheme is demonstrated through numerical simulation.

The presentation highlights NASA's jet noise research for 2016. Jet-noise modeling efforts, jet-surface interaction results, acoustic characteristics of multi-stream jets, and N+2 Supersonic Aircraft system studies are presented.
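To make the event-triggered transmission idea above concrete, a minimal sketch follows. The threshold form and all gains are illustrative assumptions, not the paper's control law; the constant offset is what prevents an accumulation of triggering instants as the error decays.

```python
# Sketch of an event-triggered transmission rule: send a new
# measurement only when the error since the last sent value exceeds
# a state-dependent threshold. Threshold form and gains are
# hypothetical, not the paper's.
import numpy as np

def event_triggered_samples(signal, sigma=0.05, eps=1e-3):
    """Return indices where ||x - x_last_sent|| > sigma*||x|| + eps."""
    sent, last = [0], signal[0]
    for k in range(1, len(signal)):
        if np.linalg.norm(signal[k] - last) > sigma * np.linalg.norm(signal[k]) + eps:
            sent.append(k)
            last = signal[k]
    return sent

# Decaying attitude-error trajectory (stand-in for the closed loop)
t = np.linspace(0.0, 20.0, 2001)
x = np.exp(-0.3 * t)[:, None] * np.array([[0.2, -0.1, 0.15]])
events = event_triggered_samples(x)
print(f"{len(events)} transmissions instead of {len(t)} samples")
```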
National Aeronautics and Space Administration — NASA Technical Reports Server (NTRS) provides access to aerospace-related citations, full-text online documents, and images and videos. The types of information...

National Aeronautics and Space Administration — The NASA Earth Exchange (NEX) represents a new platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing....

National Aeronautics and Space Administration — MY NASA DATA (MND) is a tool that allows anyone to make use of satellite data that was previously unavailable. Through the use of MND's Live Access Server (LAS) a...

National Aeronautics and Space Administration — NASA has released a series of space sounds via SoundCloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

Toll, David L.

With increasing population pressure and water usage coupled with climate variability and change, water issues are being reported by numerous groups as the most critical environmental problems facing us in the 21st century. Competitive uses and the prevalence of river basins and aquifers that extend across boundaries engender political tensions between communities, stakeholders, and countries. In addition to the numerous water availability issues, water-quality-related problems are seriously affecting human health and our environment. Potential crises and conflicts especially arise when water is contested among multiple uses. For example, urban areas, environmental and recreational uses, agriculture, and energy production compete for scarce resources, not only in the western U.S. but throughout much of the U.S. and also in numerous parts of the world. Mitigating these conflicts and meeting water demands and needs requires using existing water resources more efficiently. The NASA Water Resources Program Element works to use NASA products and technology to address these critical water issues. The primary goal of the Water Resources Program Element is to facilitate the application of NASA Earth science products as a routine use in integrated water resources management for the sustainable use of water. This also includes the extreme events of droughts and floods and adaptation to the impacts of climate change. NASA satellite and Earth system observations of water and related data provide a huge volume of valuable data, in near-real time and extending back nearly 50 years, about the Earth's land surface conditions, such as precipitation, snow, soil moisture, water levels, land cover type, vegetation type, and vegetation health. The NASA Water Resources Program works closely with other U.S. government agencies, universities, and non-profit and private-sector organizations, both domestically and internationally, to apply NASA and Earth science data. The NASA Water Resources Program organizes its

Richman, Barbara T.

President Ronald Reagan recently said he intended to nominate James Montgomery Beggs as NASA Administrator and John V. Byrne as NOAA Administrator. These two positions are key scientific posts that have been vacant since the start of the Reagan administration on January 20. The President also said he intends to nominate Hans Mark as NASA Deputy Administrator. At press time, Reagan had not designated his nominee for the director of the Office of Science and Technology Policy.

Mitchell, Jason W.; Baldwin, Philip J.; Kurichh, Rishi; Naasz, Bo J.; Luquette, Richard J.
The Formation Flying Testbed (FFTB) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility is evolving as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in the loop, have expanded to include S-band Radio Frequency (RF) modems for inter-spacecraft communication and ranging. To enable realistic simulations that require RF ranging sensors for relative navigation, a mechanism is needed to buffer the RF signals exchanged between spacecraft that accurately emulates the dynamic environment through which the RF signals travel, including the effects of the medium, moving platforms, and radiated power. The Path Emulator for RF Signals (PERFS), currently under development at NASA GSFC, provides this capability. The function and performance of a prototype device are presented.

This slide presentation reviews current NASA Earth remote sensing observations with specific reference to improving public health information through pollen sensing. While instrumentation exists for pollen sampling, there are limitations, such as a lack of stations and reporting lag time. It is therefore desirable to use remote sensing as an early warning system for public health reasons. Juniper pollen was chosen to test the possibility of using MODIS data and a dust transport model, the Dust REgional Atmospheric Model (DREAM), as an early warning system.

LeBeau, Gerald J.

The Lyndon B. Johnson Space Center (JSC) has been a critical element of the United States' human space flight program for over 50 years. It is home to NASA's Mission Control Center, the astronaut corps, and many major programs and projects, including the Space Shuttle Program, the International Space Station Program, and the Orion Project. As part of JSC's Engineering Directorate, the Applied Aeroscience and Computational Fluid Dynamics Branch is chartered to provide aerosciences support to all human spacecraft designs and missions for all phases of flight, including ascent, exo-atmospheric flight, and entry. The presentation will review past and current aeroscience applications and how NASA works to apply a balanced philosophy that leverages ground testing, computational modeling and simulation, and flight testing to develop and validate related products. The speaker will address associated aspects of aerodynamics, aerothermodynamics, rarefied gas dynamics, and decelerator systems, involving both spacecraft vehicle design and analysis and operational mission support. From these examples some of NASA's leading aerosciences challenges will be identified. These challenges will be used to provide foundational motivation for the development of specific advanced modeling and simulation capabilities, and will also be used to highlight how development activities are increasingly becoming aligned with flight projects. NASA's efforts to apply principles of innovation and inclusion toward improving its ability to support the myriad of vehicle design and operational challenges will also be briefly reviewed.
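Returning to the RF path emulation capability described above (PERFS): at complex baseband, the dominant channel effects such a device must reproduce (propagation delay, free-space path loss, and Doppler) can be sketched as follows. All parameter values are hypothetical, and this toy model is not the PERFS design.

```python
# Baseband sketch of the channel effects an RF path emulator must
# reproduce: propagation delay, free-space path loss, and Doppler.
# A toy model with hypothetical parameters, not the PERFS design.
import numpy as np

C = 299_792_458.0

def emulate_path(samples, fs, range_m, range_rate_ms, carrier_hz):
    delay_samp = int(round(range_m / C * fs))            # propagation delay
    loss = (C / carrier_hz) / (4.0 * np.pi * range_m)    # free-space amplitude
    doppler_hz = -range_rate_ms / C * carrier_hz         # opening rate -> red shift
    t = np.arange(len(samples)) / fs
    shifted = samples * np.exp(2j * np.pi * doppler_hz * t)
    return loss * np.pad(shifted, (delay_samp, 0))[: len(samples)]

fs = 1e6                                                 # 1 MHz complex rate
tx = np.exp(2j * np.pi * 10e3 * np.arange(int(fs)) / fs) # 10 kHz tone, 1 s
rx = emulate_path(tx, fs, range_m=50e3, range_rate_ms=200.0, carrier_hz=2.2e9)
print(f"peak |rx| = {np.abs(rx).max():.2e} (free-space loss applied)")
```

For closed-loop ranging tests, the delay would additionally need to vary smoothly with the simulated trajectory rather than being fixed per run, which is part of what makes a hardware emulator nontrivial.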
Young, C.; Bowie, J.; Rust, R.; Lenius, J.; Anderson, M.; Connolly, J.

Future human exploration of the Moon will require an optimized spacecraft design, with each sub-system achieving the required minimum capability while maintaining high reliability. The objective of this study was to trade capability against reliability and minimize mass for the lunar lander spacecraft. The NASA parametric concept for a 3-person vehicle to the lunar surface, with a 30% mass margin, totaled considerably more than the Apollo 15 Lunar Module "as flown" mass of 16.4 metric tons. The additional mass was attributed to mission requirements and system design choices that were made to meet the realities of modern spaceflight. The parametric tool used to size the current concept, Envision, accounts for primary and secondary mass requirements. For example, adding an astronaut increases the mass requirements for suits, water, food, and oxygen, as well as the required volume. The environmental control sub-systems become heavier with the increased requirements, and more structure is needed to support the additional mass. There is also an increase in propellant usage. For comparison, an "Apollo-like" vehicle was created by removing these additional requirements. Utilizing the Envision parametric mass calculation tool and a quantitative reliability estimation tool designed by Valador Inc., it was determined that with today's technology a Lunar Module (LM) with Apollo capability could be built with less mass and similar reliability. The reliability of this new lander was compared to the Apollo Lunar Module utilizing the same methodology, adjusting for mission timeline changes as well as component differences. Interestingly, the parametric concept's overall estimated risk of loss of mission (LOM) and loss of crew (LOC) did not significantly improve when compared to Apollo.

Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space shuttle simulations: for this purpose, it incorporates submodels of the space shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space shuttle flight software.

About half a century ago a small satellite, Sputnik 1, was launched. The satellite did very little other than transmit a radio signal to announce its presence in orbit. However, this humble beginning heralded the dawn of the Space Age. Today literally thousands of robotic spacecraft have been launched, many of which have flown to far-flung regions of the Solar System, carrying with them the human spirit of scientific discovery and exploration.
Numerous other satellites have been launched in orbit around the Earth, providing services that support our technological society on the ground. How Spacecraft Fly: Spaceflight Without Formulae by Graham Swinerd focuses on how these spacecraft work. The book opens with a historical perspective of how we have come to understand our Solar System and the Universe. It then progresses through orbital flight, rocket science, the hostile environment within which spacecraft operate, and how they are designed. The concluding chapters give a glimpse of what the 21st century may ...

Legros, Guillaume; Minster, Olivier; Tóth, Balazs

As fire behaviour in manned spacecraft still remains poorly understood, an international topical team has been created to design a validation experiment that has an unprecedentedly large scale for a microgravity flammability experiment. While the validation experiment is being designed for a re-sup...

Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam, Charles

Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant (NTC)) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Purely analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between the nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs, and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allows for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

Butman, Stanley; Satorius, Edgar; Ilott, Peter

A semaphore scheme has been devised to satisfy a requirement to enable ultrahigh-frequency (UHF) radio communication between a spacecraft descending from orbit to a landing on Mars and a spacecraft, in orbit about Mars, that relays communications between Earth and the lander spacecraft. There are also two subsidiary requirements: (1) to use UHF transceivers, built and qualified for operation aboard the spacecraft, that operate with residual-carrier binary phase-shift-keying (BPSK) modulation at a selectable data rate of 8, 32, 128, or 256 kb/s; and (2) to enable low-rate signaling even when received signals become so weak as to prevent communication at the minimum BPSK rate of 8 kb/s. The scheme involves exploitation of Manchester encoding, which is used in conjunction with residual-carrier modulation to aid the carrier-tracking loop.
Butman, Stanley; Satorius, Edgar; Ilott, Peter A semaphore scheme has been devised to satisfy a requirement to enable ultrahigh-frequency (UHF) radio communication between a spacecraft descending from orbit to a landing on Mars and a spacecraft, in orbit about Mars, that relays communications between Earth and the lander spacecraft. There are also two subsidiary requirements: (1) to use UHF transceivers, built and qualified for operation aboard the spacecraft, that operate with residual-carrier binary phase-shift-keying (BPSK) modulation at a selectable data rate of 8, 32, 128, or 256 kb/s; and (2) to enable low-rate signaling even when received signals become so weak as to prevent communication at the minimum BPSK rate of 8 kb/s. The scheme involves exploitation of Manchester encoding, which is used in conjunction with residual-carrier modulation to aid the carrier-tracking loop. By choosing various sequences of 1s, 0s, or 1s alternating with 0s to be fed to the residual-carrier modulator, one would cause the modulator to generate sidebands at a fundamental frequency of 4 or 8 kHz and harmonics thereof. These sidebands would constitute the desired semaphores. In reception, the semaphores would be detected by a software demodulator.
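The sideband mechanism just described can be checked numerically. In the sketch below (illustrative only; a Manchester polarity convention is assumed, since the flight convention is not documented in the abstract), an all-1s pattern at 8 kb/s yields a square wave with an 8 kHz fundamental, while a 1-0 alternating pattern yields a 4 kHz fundamental.

```python
import numpy as np

BIT_RATE = 8_000             # 8 kb/s, the minimum BPSK rate quoted above
FS = 1_024_000               # simulation sample rate, Hz
SPH = FS // (2 * BIT_RATE)   # samples per half bit

def manchester(bits):
    # Convention assumed here: 1 -> (low, high), 0 -> (high, low).
    out = []
    for b in bits:
        for level in ((-1.0, 1.0) if b else (1.0, -1.0)):
            out.extend([level] * SPH)
    return np.array(out)

def fundamental(x):
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0                    # ignore the DC bin
    return np.fft.rfftfreq(x.size, 1.0 / FS)[spec.argmax()]

print(f"all 1s      -> {fundamental(manchester([1] * 64)):.0f} Hz")      # ~8000
print(f"alternating -> {fundamental(manchester([1, 0] * 32)):.0f} Hz")   # ~4000
```

An all-1s waveform repeats every bit (125 microseconds), hence the 8 kHz tone; alternating bits repeat every two bits, halving the fundamental to 4 kHz.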
Wiksten, D.; Swanson, J. The rationale and requirements for conducting accelerated life tests on electronic subsystems of spacecraft are presented. A method for applying data on the reliability and temperature sensitivity of the parts contained in a subsystem to the selection of accelerated life test parameters is described. Additional considerations affecting the formulation of test requirements are identified, and practical limitations of accelerated aging are described. Wisniewski, Rafal; Kulczycki, P. The paper adopts the energy shaping method to control of rotational motion. A global representation of the rigid body motion is given in the canonical form by a quaternion and its conjugate momenta. A general method for motion control on a cotangent bundle to the 3-sphere is suggested. The design ... algorithm is validated for three-axis spacecraft attitude control ... The objective of this paper is to give a design scheme for attitude control algorithms of a generic spacecraft. Along with the system model formulated in Hamilton's canonical form, the algorithm uses information about a required potential energy and a dissipative term. The control action ... Dakermanji, George; Burns, Michael; Lee, Leonine; Lyons, John; Kim, David; Spitzer, Thomas; Kercheval, Bradford The Global Precipitation Measurement (GPM) spacecraft was jointly developed by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). It is a Low Earth Orbit (LEO) spacecraft launched on February 27, 2014. The spacecraft is in a circular 400 km altitude, 65-degree inclination, nadir-pointing orbit with a three-year basic mission life. The solar array consists of two sun-tracking wings with cable wraps. The panels are populated with triple-junction cells of nominal 29.5% efficiency. One axis is canted by 52 degrees to provide power to the spacecraft at high beta angles. The power system is a Direct Energy Transfer (DET) system designed to support 1950 Watts orbit average power. The batteries use SONY 18650HC cells and consist of three 8s x 84p batteries operated in parallel as a single battery. The paper describes the power system design details, its performance to date, and the lithium-ion battery model that was developed for use in the energy balance analysis and is being used to predict the on-orbit health of the battery. Klem, B.; Swann, D. Anomalous behavior of on-orbit spacecraft can often be detected using passive, remote sensors which measure electro-optical signatures that vary in time and spectral content. Analysts responsible for assessing spacecraft operational status and detecting detrimental anomalies using non-resolved imaging sensors are often presented with various sensing and identification issues. Modeling and measuring spacecraft self-emission and reflected radiant intensity when the radiation patterns exhibit a time-varying reflective glint superimposed on an underlying diffuse signal contribute to assessment of spacecraft behavior in two ways: (1) providing information on body component orientation and attitude; and (2) detecting changes in surface material properties due to the space environment. Simple convex and cube-shaped spacecraft, designed to operate without protruding solar panel appendages, may require an enhanced level of preflight characterization to support interpretation of the various physical effects observed during on-orbit monitoring. This paper describes selected portions of the signature database generated using streamlined signature modeling and simulations of basic geometry shapes apparent to non-imaging sensors. With this database, summarization of key observable features for such shapes as spheres, cylinders, flat plates, cones, and cubes in specific spectral bands that include the visible, mid-wave, and long-wave infrared provides the analyst with input to the decision process algorithms contained in the overall sensing and identification architectures. The models typically utilize baseline materials such as Kapton, paints, aluminum surface end plates, and radiators, along with solar cell representations covering the cylindrical and side portions of the spacecraft. Multiple space- and ground-based sensors are assumed to be located at key locations to describe the comprehensive multi-viewing-aspect scenarios that can result in significant specular reflection.
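A toy version of the "time-varying reflective glint superimposed on an underlying diffuse signal" described above is sketched below: a slowly varying diffuse term plus a narrow specular flash repeating at the body spin period, with a crude robust-statistics detector. All numbers are invented for illustration and are not drawn from the signature database.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 6001)              # s, one minute of samples
spin_period = 10.0                            # s, hypothetical body spin
diffuse = 1.0 + 0.1 * np.sin(2 * np.pi * t / 300.0)   # slow diffuse drift

phase = (t % spin_period) - spin_period / 2.0
glint = 4.0 * np.exp(-(phase / 0.15) ** 2)    # narrow specular flash per spin

intensity = diffuse + glint

# Crude glint detector: flag samples far above the robust background level.
med = np.median(intensity)
mad = np.median(np.abs(intensity - med))
flags = intensity > med + 10.0 * mad
print(f"flagged {flags.sum()} of {t.size} samples as specular glint")
```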
In a self-critical inquiry into my own recent work of co-curating and the experience of seeing my video work being curated by others, this article examines acts of framing as performative acts that seek to transform visitors' preconceptions. This affective effect is pursued by means of immersion ... Topics include: Test Waveform Applications for JPL STRS Operating Environment; Pneumatic Proboscis Heat-Flow Probe; Method to Measure Total Noise Temperature of a Wireless Receiver During Operation; Cursor Control Device Test Battery; Functional Near-Infrared Spectroscopy Signals Measure Neuronal Activity in the Cortex; ESD Test Apparatus for Soldering Irons; FPGA-Based X-Ray Detection and Measurement for an X-Ray Polarimeter; Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions; Silicon/Carbon Nanotube Photocathode for Splitting Water; Advanced Materials and Fabrication Techniques for the Orion Attitude Control Motor; Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements; RF Reference Switch for Spaceflight Radiometer Calibration; An Offload NIC for NASA, NLR, and Grid Computing; Multi-Scale CNT-Based Reinforcing Polymer Matrix Composites for Lightweight Structures; Ceramic Adhesive and Methods for On-Orbit Repair of Re-Entry Vehicles; Self-Healing Nanocomposites for Reusable Composite Cryotanks; Pt-Ni and Pt-Co Catalyst Synthesis Route for Fuel Cell Applications; Aerogel-Based Multilayer Insulation with Micrometeoroid Protection; Manufacturing of Nanocomposite Carbon Fibers and Composite Cylinders; Optimized Radiator Geometries for Hot Lunar Thermal Environments; A Mission Concept: Re-Entry Hopper-Aero-Space-Craft System on-Mars (REARM-Mars); New Class of Flow Batteries for Terrestrial and Aerospace Energy Storage Applications; Reliability of CCGA 1152 and CCGA 1272 Interconnect Packages for Extreme Thermal Environments; Using a Blender to Assess the Microbial Density of Encapsulated Organisms; Mixed Integer Programming and Heuristic Scheduling for Space Communication; Video Altimeter and Obstruction Detector for an Aircraft; Control Software for Piezo Stepping Actuators; Galactic Cosmic Ray Event-Based Risk Model (GERM) Code; Sasquatch Footprint Tool; and Multi-User Space Link Extension (SLE) System. National Aeronautics and Space Administration — During a NASA Phase I SBIR program, ACT addressed the need for light-weight, non-venting PCM heat storage devices by successfully demonstrating proof-of-concept of a... National Aeronautics and Space Administration — In response to NASA SBIR solicitation H3.01 "Thermal Control for Future Human Exploration", Advanced Cooling Technologies, Inc. (ACT) is proposing a novel Phase... Messenger, Scott; Lauretta, Dante S.; Connolly, Harold C., Jr. The NASA New Frontiers OSIRIS-REx spacecraft executed a flawless launch on September 8, 2016 to begin its 23-month journey to near-Earth asteroid (101955) Bennu. The primary objective of the OSIRIS-REx mission is to collect and return to Earth a pristine sample of regolith from the asteroid surface. The sampling event will occur after a two-year period of remote sensing that will ensure a high probability of successful sampling of a region on the asteroid surface having high science value and within well-defined geological context. The OSIRIS-REx instrument payload includes three high-resolution cameras (OCAMS), a visible and near-infrared spectrometer (OVIRS), a thermal emission spectrometer (OTES), an X-ray imaging spectrometer (REXIS), and a laser altimeter (OLA). As the spacecraft follows its nominal outbound-cruise trajectory, the propulsion, power, communications, and science instruments have undergone basic functional tests, with no major issues.
Outbound cruise science investigations include a search for Earth Trojan asteroids as the spacecraft approaches the Sun-Earth L4 Lagrangian point in February 2017. Additional instrument checkouts and calibrations will be carried out during the Earth gravity assist maneuver in September 2017. During the Earth-Moon flyby, visual and spectral images will be acquired to validate instrument command sequences planned for Bennu remote sensing. The asteroid Bennu remote sensing campaign will yield high-resolution maps of the temperature and thermal inertia, and of the distributions of major minerals and concentrations of organic matter across the asteroid surface. A high-resolution 3D shape model, including local surface slopes, and a high-resolution gravity field will also be determined. Together, these data will be used to generate four separate maps that will be used to select the sampling site(s). The Safety map will identify hazardous and safe operational regions on the asteroid surface. The Deliverability map will quantify the accuracy ... Krisko, P. H. The NASA Orbital Debris Program Office (ODPO) has released its latest Orbital Debris Engineering Model, ORDEM 3.0. It supersedes ORDEM 2000, now referred to as ORDEM 2.0. This newer model encompasses the Earth satellite and debris flux environment from altitudes of low Earth orbit (LEO) through geosynchronous orbit (GEO). Debris sizes of 10 microns through larger than 1 m in non-GEO, and 10 cm through larger than 1 m in GEO, are available. The inclusive years are 2010 through 2035. The ORDEM model series has always been data driven. ORDEM 3.0 has the benefit of many more hours of data from existing sources and from new sources than past ORDEM versions. The object data range in size from 10 µm to larger than 1 m, and include in situ and remote measurements. The in situ data reveal material characteristics of small particles. Mass densities are grouped in ORDEM 3.0 in terms of 'high-density', represented by 7.9 g/cc, 'medium-density', represented by 2.8 g/cc, and 'low-density', represented by 1.4 g/cc. Supporting models have also advanced significantly. The LEO-to-GEO ENvironment Debris model (LEGEND) includes an historical and a future projection component with yearly populations that include launched and maneuvered intact spacecraft and rocket bodies, mission-related debris, and explosion and collision event fragments. LEGEND propagates objects with ephemerides and physical characteristics down to 1 mm in size. The full LEGEND yearly population acts as an a priori condition for a Bayesian statistical model. Specific populations are added from sodium potassium droplet releases, recent major accidental and deliberate collisions, and known anomalous debris events. This paper elaborates on the upgrades of this model over previous versions. Sample validation results with remote and in situ measurements are shown, and the consequences of including material density are discussed as they relate to heightened risks to crewed and robotic spacecraft. Bajpayee, Jaya; Durham, Darcie; Ichkawich, Thomas The Glory program is an Earth and Solar science mission designed to broaden science community knowledge of the environment. The causes and effects of global warming have become a concern in recent years, and Glory aims to contribute to the knowledge base of the science community. Glory is designed for two functions: one is solar viewing to monitor the total solar irradiance, and the other is observing the Earth's atmosphere for aerosol composition.
The former is done with an active cavity radiometer, while the latter is accomplished with an aerosol polarimeter sensor to discern atmospheric particles. The Glory program is managed by NASA Goddard Space Flight Center (GSFC), with Orbital Sciences in Dulles, VA as the prime contractor for the spacecraft bus, mission operations, and ground system. This paper will describe some of the more unique features of the Glory program, including the integration and testing of the satellite and instruments as well as the science data processing. The spacecraft integration and test approach requires extensive analysis and additional planning to ensure existing components function successfully with the new Glory components. The science mission data analysis requires development of mission-unique processing systems and algorithms. Science data analysis and distribution will utilize our national assets at the Goddard Institute for Space Studies (GISS) and the University of Colorado's Laboratory for Atmospheric and Space Physics (LASP). The satellite was originally designed and built for the Vegetation Canopy Lidar (VCL) mission, which was terminated in the middle of integration and testing due to payload development issues. The bus was then placed in secure storage in 2001 and removed from an environmentally controlled container in late 2003 to be refurbished to meet the Glory program requirements. Functional testing of all the components was done as a system at the start of the program, very different from a traditional program. Smith, Timothy D.; Kamhawi, Hani; Hickman, Tyler; Haag, Thomas; Dankanich, John; Polzin, Kurt; Byrne, Lawrence; Szabo, James NASA is continuing to invest in advancing Hall thruster technologies for implementation in commercial and government missions. The most recent focus has been on increasing the power level for large-scale exploration applications. However, there has also been a similar push to examine applications of electric propulsion for small spacecraft in the range of 300 kg or less. There have been several recent iodine Hall propulsion system development activities performed by the team of the NASA Glenn Research Center, the NASA Marshall Space Flight Center, and Busek Co. Inc. In particular, the work focused on qualification of the Busek 200-W BHT-200-I system and development of the 600-W BHT-600-I system. This paper discusses the current status of iodine Hall propulsion system developments along with supporting technology development efforts. Winter, J.; Dudenhoefer, J.; Juhasz, A.; Schwarze, G.; Patterson, R.; Ferguson, D.; Schmitz, P.; Vandersande, J. This paper describes the elements of NASA's CSTI High Capacity Power Project, which include Systems Analysis, Stirling Power Conversion, Thermoelectric Power Conversion, Thermal Management, Power Management, Systems Diagnostics, Environmental Interactions, and Material/Structural Development. Technology advancement in all elements is required to provide the growth capability, high reliability, and 7- to 10-year lifetime demanded for future space nuclear power systems. The overall project will develop and demonstrate the technology base required to provide a wide range of modular power systems compatible with the SP-100 reactor, which facilitates operation during lunar and planetary day/night cycles as well as allowing spacecraft operation at any attitude or distance from the sun.
Significant accomplishments in all of the project elements will be presented, along with revised goals and project timelines recently developed. Antipov, Kirill A. The paper deals with a spacecraft in a circular near-Earth orbit. The spacecraft interacts with the geomagnetic field through the moments of the Lorentz and magnetic forces. The octupole approximation of the Earth's magnetic field is adopted. The spacecraft electromagnetic parameters, namely the first-order electrostatic charge moment and the spacecraft's own magnetic moment, are the controlled quasiperiodic functions. Control algorithms for these electromagnetic parameters that stabilize the spacecraft attitude in the orbital frame are obtained. The stability of the stabilized orientation is proved both analytically and by numerical computation. Hirshorn, Steven R.; Voss, Linda D.; Bromley, Linda K. The update of this handbook continues the methodology of the previous revision: a top-down compatibility with higher-level Agency policy and a bottom-up infusion of guidance from the NASA practitioners in the field. This approach provides the opportunity to obtain best practices from across NASA, to bridge the information to the established NASA systems engineering processes, and to communicate principles of good practice as well as alternative approaches rather than specify a particular way to accomplish a task. The result embodied in this handbook is a top-level implementation approach on the practice of systems engineering unique to NASA. Material used for updating this handbook has been drawn from many sources, including NPRs, Center systems engineering handbooks and processes, other Agency best practices, and external systems engineering textbooks and guides. This handbook consists of six chapters: (1) an introduction, (2) a systems engineering fundamentals discussion, (3) the NASA program/project life cycles, (4) systems engineering processes to get from a concept to a design, (5) systems engineering processes to get from a design to a final product, and (6) crosscutting management processes in systems engineering. The chapters are supplemented by appendices that provide outlines, examples, and further information to illustrate topics in the chapters. The handbook makes extensive use of boxes and figures to define, refine, illustrate, and extend concepts in the chapters. NASA is piloting fiscal year (FY) 1997 Accountability Reports, which streamline and upgrade reporting to Congress and the public. The document presents statements by the NASA Administrator and the Chief Financial Officer, followed by an overview of NASA's organizational structure and the planning and budgeting process. The performance of NASA in four strategic enterprises is reviewed: (1) Space Science, (2) Mission to Planet Earth, (3) Human Exploration and Development of Space, and (4) Aeronautics and Space Transportation Technology. Those areas which support the strategic enterprises are also reviewed in a section called Crosscutting Processes. For each of the four enterprises, there is discussion about the long-term goals, the short-term objectives, and the accomplishments during FY 1997. The Crosscutting Processes section reviews issues and accomplishments relating to human resources, procurement, information technology, physical resources, financial management, small and disadvantaged businesses, and policy and plans.
The discussion of the individual areas is followed by Management's Discussion and Analysis of NASA's financial statements, a report by an independent commercial auditor, and the financial statements themselves. Barth, Janet L.; Xapsos, Michael This presentation focuses on the effects of the space environment on spacecraft systems and on applying this knowledge to spacecraft pre-launch engineering and operations. Particle radiation, neutral gas particles, ultraviolet and x-rays, as well as micrometeoroids and orbital debris in the space environment have various effects on spacecraft systems, including degradation of microelectronic and optical components, physical damage, orbital decay, biasing of instrument readings, and system shutdowns. Space climate and weather must be considered during the mission life cycle (mission concept, mission planning, systems design, and launch and operations) to minimize and manage risk to both the spacecraft and its systems. A space environment model for use in the mission life cycle is presented. Hart, R. C.; Long, A. C.; Lee, T. The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) is pursuing the application of Global Positioning System (GPS) technology to improve the accuracy and economy of spacecraft navigation. High-accuracy autonomous navigation algorithms are being flight qualified in conjunction with GSFC's GPS Attitude Determination Flyer (GADFLY) experiment on the Small Satellite Technology Initiative (SSTI) Lewis spacecraft, which is scheduled for launch in 1997. Preflight performance assessments indicate that these algorithms can provide a real-time total position accuracy of better than 10 meters (1 sigma) and velocity accuracy of better than 0.01 meter per second (1 sigma), with selective availability at typical levels. This accuracy is projected to improve to the 2-meter level if corrections to be provided by the GPS Wide Area Augmentation System (WAAS) are included. By now, the philosophies of Total Quality Management have had an impact on every aspect of American industrial life. The trail-blazing work of Deming, Juran, and Crosby, first implemented in Japan, has 're-migrated' across the Pacific and now plays a growing role in America's management culture. While initially considered suited only for a manufacturing environment, TQM has moved rapidly into the 'service' areas of offices, sales forces, and even fast-food restaurants. The next logical step has also been taken - TQM has found its way into virtually all departments of the Federal Government, including NASA. Because of this widespread success, it seems fair to ask whether this new discipline is directly applicable to the profession of spacecraft operations. The results of quality emphasis on OAO Corporation's contract at JPL provide strong support for Total Quality Management as a useful tool in spacecraft operations.
McComas, David; Wilmot, Jonathan; Cudmore, Alan In February 2015 the NASA Goddard Space Flight Center (GSFC) completed the open source release of the entire Core Flight Software (cFS) suite. After the open source release, a multi-NASA-center Configuration Control Board (CCB) was established that has managed multiple cFS product releases. The cFS was developed, and is being maintained, in compliance with the NASA Class B software development process requirements, and the open source release includes all Class B artifacts. The cFS is currently running on three operational science spacecraft and is being used on multiple spacecraft and instrument development efforts. While the cFS itself is a viable flight software (FSW) solution, we have discovered that the cFS community is a continuous source of innovation and growth that provides products and tools that serve the entire FSW lifecycle and future mission needs. This paper summarizes the current state of the cFS community, the key FSW technologies being pursued, the development/verification tools, and opportunities for the small satellite community to become engaged. The cFS is a proven, high-quality, and cost-effective solution for small satellites with constrained budgets. Budinger, James M.; Niederhaus, Charles; Reinhart, Richard; Downey, Joe; Roberts, Anthony As the scientific capabilities and number of small spacecraft missions in the near Earth region increase, standard yet configurable user spacecraft terminals operating in Ka-band are needed to lower mission cost and risk and enable significantly higher data return than current UHF or S-band terminals. These compact Ka-band terminals are intended to operate with both the current and next generation of Ka-band relay satellites and via direct data communications with near Earth tracking terminals. This presentation provides an overview of emerging NASA-sponsored and commercially provided technologies in software defined radios (SDRs), transceivers, and electronically steered antennas that will enable data rates from hundreds of kbps to over 1 Gbps, operate in multiple frequency bands (such as S- and X-bands), and expand the use of NASA's common Ka-band frequencies: 22.55-23.15 GHz for forward data or uplink, and 25.5-27.0 GHz for return data or downlink. Reductions in mass, power and volume come from integration of multiple radio functions, operations in Ka-band, high-efficiency amplifiers and receivers, and compact, flat and vibration-free electronically steered narrow-beam antennas for up to a ±60-degree field of regard. The software defined near Earth space transceiver (SD-NEST) described in the presentation is intended to be compliant with NASA's space telecommunications radio system (STRS) standard for communications waveforms and hardware interoperability.
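For a rough feel of what such Ka-band terminals can return, a textbook free-space link calculation at the 25.5-27.0 GHz return band quoted above is sketched below. Only the carrier frequency comes from the text; the EIRP, range, G/T, and margins are invented placeholders.

```python
import math

# Back-of-envelope Ka-band downlink sizing. Only the ~26 GHz carrier comes
# from the text above; all other values are invented placeholders.

f_hz = 26.0e9          # carrier in the 25.5-27.0 GHz return band
d_m = 2_000e3          # slant range, m (placeholder)
eirp_dbw = 20.0        # terminal EIRP, dBW (placeholder)
gt_dbk = 30.0          # receive G/T, dB/K (placeholder)
k_dbwhzk = -228.6      # Boltzmann's constant, dBW/(Hz*K)
ebn0_req_db = 4.0      # required Eb/N0 incl. coding gain (placeholder)
margin_db = 3.0        # link margin (placeholder)

fspl_db = 20 * math.log10(4 * math.pi * d_m * f_hz / 3.0e8)  # free-space loss
cn0_dbhz = eirp_dbw - fspl_db + gt_dbk - k_dbwhzk
rate_db = cn0_dbhz - ebn0_req_db - margin_db                 # 10*log10(bit/s)
print(f"FSPL {fspl_db:.1f} dB, C/N0 {cn0_dbhz:.1f} dB-Hz, "
      f"max rate ~ {10 ** (rate_db / 10) / 1e6:.0f} Mb/s")
```

With these placeholder values the supportable rate lands in the hundreds of Mb/s, consistent with the "hundreds of kbps to over 1 Gbps" range quoted above.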
Hayes, J.; Pierson, W. J.; Cardone, V. J. The instrument aboard Skylab designated S193 - a combined passive and active microwave radar system acting as a radiometer, scatterometer, and altimeter - is used to measure the surface vector wind speeds in the planetary boundary layer over the oceans. Preliminary results corroborate the hypothesis that sea surface winds in the planetary boundary layer can be determined from satellite data. Future spacecraft plans for measuring a geoid with an accuracy up to 10 cm are discussed. Heldenfels, R. R. The application of composites in aerospace vehicle structures is reviewed. Research and technology program results and specific applications to space vehicles, aircraft engines, and aircraft and helicopter structures are discussed in detail. Particular emphasis is given to flight service evaluation programs that are or will be accumulating substantial experience with secondary and primary structural components on military and commercial aircraft to increase confidence in their use. Lander Tech comprises three separate but synergistic efforts. Lunar CATALYST (Lunar Cargo Transportation and Landing by Soft Touchdown): support U.S. industry-led robotic lunar lander development via three public-private partnerships, and infuse or transfer landing technologies into these partnerships. Advanced Exploration Systems, Automated Propellant Loading (APL), Integrated Ground Operations: demonstrate LH2 zero-loss storage, loading, and transfer operations via testing on a large scale in a relevant launch vehicle servicing environment (KSC, GRC). Game Changing Technology, 20 Kelvin/20 Watt Cryocooler: development of a Reverse Turbo-Brayton Cryocooler operating at 20 Kelvin with 20 Watts of refrigeration lift. Pellis, Neal R. The challenge of human space exploration places demands on technology that push concepts and development to the leading edge. In biotechnology and biomedical equipment development, NASA science has been the seed for numerous innovations, many of which are in the commercial arena. The biotechnology effort has led to rational drug design, analytical equipment, and cell culture and tissue engineering strategies. Biomedical research and development has resulted in medical devices that enable diagnosis and treatment advances. NASA biomedical developments are exemplified in the new laser light scattering analysis for cataracts, the axial-flow left-ventricular-assist device, non-contact electrocardiography, and the guidance system for LASIK surgery. Many more developments are in progress. NASA will continue to advance technologies, incorporating new approaches from basic and applied research, nanotechnology, computational modeling, and database analyses. Mitchell, Horace G. Since 1988, the Scientific Visualization Studio (SVS) at NASA Goddard Space Flight Center has produced scientific visualizations of NASA's scientific research and remote sensing data for public outreach. These visualizations take the form of images, animations, and end-to-end systems and have been used in many venues: from the network news to science programs such as NOVA, from museum exhibits at the Smithsonian to White House briefings. This presentation will give an overview of the major activities and accomplishments of the SVS, and some of the most interesting projects and systems developed at the SVS will be described.
Particular emphasis will be given to the practices and procedures by which the SVS creates visualizations, from the hardware and software used to the structures and collaborations by which products are designed, developed, and delivered to customers. The web-based archival and delivery system for SVS visualizations at svs.gsfc.nasa.gov will also be described. The successful test launch of two three-quarter-ton satellites on the European Space Agency's (ESA) Ariane rocket last June firmly placed ESA in competition with NASA for the lucrative and growing satellite-launching market. Under the auspices of the private (but largely French-government financed) Arianespace company, ESA is already attracting customers to its three-stage rocket by offering low costs. According to recent reports [Nature, 292, pp. 785 and 788, 1981], Arianespace has been able to win several U.S. customers away from NASA, including Southern Pacific Communications, Western Union, RCA, Satellite Television Corporation, and GTE. Nature [292, 1981], in an article entitled 'More Trouble for the Hapless Shuttle', suggests that it will be possible for Ariane to charge lower prices for a launch than NASA, even with the space shuttle. NASA project managers attempt to manage risk by relying on mature, well-understood processes and technologies when designing spacecraft. In the case of crewed systems, the margin for error is even tighter and leads to risk aversion. But as we look to future missions to the Moon and Mars, the complexity of the systems will increase as the spacecraft and crew work together with less reliance on Earth-based support. NASA will be forced to look for new ways to do business. Formal methods technologies can help NASA develop complex but cost-effective spacecraft in many domains, including requirements and design, software development and inspection, and verification and validation of vehicle subsystems. To realize these gains, the technologies must be matured and field-tested so that they are proven when needed. During this discussion, current activities used to evaluate FM technologies for Orion spacecraft design will be reviewed. Also, suggestions will be made to demonstrate value to current designers, and mature the technology for eventual use in safety-critical NASA missions. Stewart, W.L.; Weber, R.J. Future advances in aircraft propulsion systems will be aided by the research performed by NASA and its contractors. This paper gives selected examples of recent accomplishments and current activities relevant to the principal classes of civil and military aircraft. Some instances of new emerging technologies with potential high impact on further progress are discussed. NASA research described includes noise abatement and fuel economy measures for commercial subsonic, supersonic, commuter, and general aviation aircraft; aircraft engines of the jet, turboprop, diesel and rotary types; VTOL; X-wing rotorcraft; helicopters; and 'stealth' aircraft. Applications to military aircraft are also discussed. National Aeronautics and Space Administration — The danger from fire aboard spacecraft is immediate with only moments for detection and suppression. Spacecraft are unique high-value systems where the cost of...
Edwards, David L.; Burns, Howard D.; Garrett, Henry B.; Miller, Sharon K.; Peddie, Darilyn; Porter, Ron; Spann, James F.; Xapsos, Michael A. The National Aeronautics and Space Administration (NASA) is embarking on a course to expand human presence beyond Low Earth Orbit (LEO) while also expanding its mission to explore our Earth and the solar system. Destinations such as Near Earth Asteroids (NEA), Mars and its moons, and the outer planets are but a few of the mission targets. Each new destination presents an opportunity to increase our knowledge of the solar system and of the unique environment for each mission target. NASA has multiple technical and science discipline areas specializing in specific space environment fields that will serve to enable these missions. To complement these existing discipline areas, a concept is presented focusing on the development of a space environment and spacecraft effects (SESE) organization. This SESE organization includes disciplines such as space climate, space weather, natural and induced space environments, effects on spacecraft materials and systems, and the transition of research information into application. This space environment and spacecraft effects organization will be composed of Technical Working Groups (TWGs). These technical working groups will survey customers and users, generate products, and provide knowledge supporting four functional areas: design environments, engineering effects, operational support, and programmatic support. The four functional areas align with phases in the program mission lifecycle and are briefly described below. Design environments are used primarily in the mission concept and design phases of a program. The engineering effects area focuses on the material, component, sub-system, and system-level response to the space environment, and includes the selection and testing needed to verify design and operational performance. Operational support provides products based on real-time or near-real-time space weather to mission operators to aid in real-time and near-term decision-making.
The programmatic support function maintains an interface with ... 33) Full Piezoelectric Multilayer-Stacked Hybrid Actuation/Transduction Systems; 34) Active Flow Effectors for Noise and Separation Control; 35) Method and System for Temporal Filtering in Video Compression Systems; 36) Apparatus for Measuring Total Emissivity of Small, Low-Emissivity Samples; 37) Multiple-Zone Diffractive Optic Element for Laser Ranging Applications; 38) Simplified Architecture for Precise Aiming of a Deep-Space Communication Laser Transceiver; 39) Two-Photon-Absorption Scheme for Optical Beam Tracking; 40) High-Sensitivity, Broad-Range Vacuum Gauge Using Nanotubes for Micromachined Cavities; 41) Wide-Field Optic for Autonomous Acquisition of Laser Link; 42) Extracting Zero-Gravity Surface Figure of a Mirror; 43) Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects; 44) Visual Object Recognition and Tracking of Tools; 45) Method for Implementing Optical Phase Adjustment; 46) Visual SLAM Using Variance Grid Maps; 47) Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration; 48) Efficient Kriging Algorithms; 49) Predicting Spacecraft Trajectories by the WeavEncke Method; 50) An Augmentation of G-Guidance Algorithms; 51) Comparison of Aircraft Icing Growth Assessment Software; 52) Silicon-Germanium Voltage-Controlled Oscillator at 105 GHz; 53) Estimation of Coriolis Force and Torque Acting on Ares-1; 54) Null Lens Assembly for X-Ray Mirror Segments; and 55) High-Precision Pulse Generator. Chobotov, V. A. Control elements such as sensors, momentum exchange devices, and thrusters are described which can be used to define space replaceable units (SRUs), in accordance with attitude control, guidance, and navigation performance requirements selected for NASA space-serviceable mission spacecraft. A number of SRUs are developed, and their reliability block diagrams are presented. An SRU assignment is given in order to define a set of feasible space-serviceable spacecraft for the missions of interest. Roberts, E. W. The subject of tribology encompasses the friction, wear and lubrication of mechanical components such as bearings and gears. Tribological practices are aimed at ensuring that such components operate with high efficiency (low friction) and achieve long lives. On spacecraft mechanisms the route to achieving these goals brings its own unique challenges. This review describes the problems posed by the space environment, the types of tribological component used on spacecraft and the approaches taken to their lubrication. It is shown that in many instances lubrication needs can be met by synthetic oils having exceedingly low volatilities, but that at temperature extremes the only means of reducing friction and wear is by solid lubrication. As the demands placed on space engineering increase, innovative approaches will be needed to solve future tribological problems. The direction that future developments might take is anticipated and discussed. Pisacane, V. L.; Ziegler, J. F.; Nelson, M. E.; Caylor, M.; Flake, D.; Heyen, L.; Youngborg, E.; Rosenfeld, A. B.; Cucinotta, F.; Zaider, M.; Dicello, J. F. MIDN (micro-dosimetry instrument) is a payload on the MidSTAR-I spacecraft (Midshipman Space Technology Applications Research) under development at the United States Naval Academy. MIDN is a solid-state system being designed and constructed to measure micro-dosimetric spectra to determine radiation quality factors for space environments.
Radiation is a critical threat to the health of astronauts and to the success of missions in low-Earth orbit and space exploration. The system will consist of three separate sensors: one external to the spacecraft, one internal, and one embedded in polyethylene. Design goals are mass <3 kg and power <2 W. The MidSTAR-I mission in 2006 will provide an opportunity to evaluate a preliminary version of this system. Its low power and mass make it useful for the International Space Station and manned and unmanned interplanetary missions as a real-time system to assess and alert astronauts to enhanced radiation environments. Detwiler, R.C.; Smith, R.L. It has been twelve years since two Voyager spacecraft began the direct route to the outer planets. In October 1989 a single Galileo spacecraft started the return to Jupiter. Conceived as a simple Voyager look-alike, the Galileo power management and distribution (PMAD) system has undergone many iterations in configuration. Major changes to the PMAD resulted from dual-spun slip-ring limitations, variations in launch vehicle thrust capabilities, and launch delays. Lack of an adequate launch vehicle for an interplanetary mission of Galileo's size has resulted in an extremely long flight duration. A Venus-Earth-Earth Gravity Assist (VEEGA) tour, vital to attain the required energy, results in a 6-year trip to Jupiter and its moons. This paper provides a description of the Galileo PMAD and documents the design drivers that established the final as-built hardware. Shaddock, Daniel A.; Tinto, Massimo; Estabrook, Frank B.; Armstrong, J.W. The Laser Interferometer Space Antenna (LISA) is an array of three spacecraft in an approximately equilateral triangle configuration which will be used as a low-frequency gravitational wave detector. We present here new generalizations of the Michelson- and Sagnac-type time-delay interferometry data combinations. These combinations cancel laser phase noise in the presence of different up and down propagation delays in each arm of the array, and slowly varying systematic motion of the spacecraft. The gravitational wave sensitivities of these generalized combinations are the same as previously computed for the stationary cases, although the combinations are now more complicated. We introduce a diagrammatic representation to illustrate that these combinations are actually synthesized equal-arm interferometers.
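The cancellation these combinations achieve can be illustrated with the simplest stationary case. The sketch below builds the first-generation Michelson-type X combination for a static, unequal-arm configuration and verifies numerically that laser phase noise cancels; the arm delays are invented and rounded to whole samples, and the time-varying generalizations of the paper are not modeled.

```python
import numpy as np

# Minimal numerical check of first-generation time-delay interferometry (TDI):
# for a static unequal-arm Michelson, the X combination cancels laser phase
# noise. Arm delays here are invented and rounded to integer samples.

rng = np.random.default_rng(0)
n = 100_000
p = np.cumsum(rng.normal(size=n))      # laser phase noise (random walk)

d1, d2 = 173, 241                      # round-trip delays of the two arms, samples

def delay(x, k):                       # x(t - k), zero-padded at the start
    out = np.zeros_like(x)
    out[k:] = x[:-k]
    return out

# Per-arm round-trip phase measurements: y_i(t) = p(t - d_i) - p(t)
y1 = delay(p, d1) - p
y2 = delay(p, d2) - p

# Naive Michelson (equal-arm assumption): laser noise does NOT cancel.
naive = y1 - y2
# TDI X: re-difference each arm's signal delayed by the OTHER arm's round trip.
X = (y1 - y2) - (delay(y1, d2) - delay(y2, d1))

print("naive Michelson residual RMS:", np.std(naive))
print("TDI X residual RMS          :", np.std(X))   # zero to machine precision
```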
Urban, David L.; Ruff, Gary A.; Minster, Olivier Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long-duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short-duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame ... Pickering, Karen D.; Wiesner, Mark R. Ultrafiltration is examined for use as the first stage of a primary treatment process for spacecraft wastewater. It is hypothesized that ultrafiltration can effectively serve as pretreatment for a reverse osmosis system, removing the majority of organic material in a spacecraft wastewater. However, it is believed that the interaction between the membrane material and the surfactant found in the wastewater will have a significant impact on the fouling of the ultrafiltration membrane. In this study, five different ultrafiltration membrane materials are examined for the filtration of wastewater typical of that expected to be produced onboard the International Space Station. Membranes are used in an unstirred batch cell. Flux, organic carbon rejection, and recovery from fouling are measured. The results of this evaluation will be used to select the most promising membranes for further study. This report documents work that was performed by CSA Engineering, Inc., for Los Alamos National Laboratory (LANL), to reduce vibrations of the FORTE spacecraft by retrofitting damped structural components into the spacecraft structure. The technical objective of the work was reduction of response at the location of payload components when the structure is subjected to the dynamic loading associated with launch and proto-qualification testing. FORTE is a small satellite that will be placed in orbit in 1996. The structure weighs approximately 425 lb, and is roughly 80 inches high and 40 inches in diameter. It was developed and built by LANL in conjunction with Sandia National Laboratories Albuquerque for the United States Department of Energy. The FORTE primary structure was fabricated primarily with graphite epoxy, using aluminum honeycomb core material for equipment decks and solar panel substrates. Equipment decks were bonded and bolted through aluminum mounting blocks to adjoining structure. Smith, Robert J.; Flew, Alastair R. The parts of electric motors which should be duplicated in order to provide maximum reliability in spacecraft application are identified. Various common types of redundancy are described. The advantages and disadvantages of each are noted. The principal types are illustrated by reference to specific examples. For each example, constructional details, basic performance data and failure modes are described, together with a discussion of the suitability of particular redundancy techniques to motor types. Wilson, T. G. The history of spacecraft electrical power conversion in literature, research and practice is reviewed. It is noted that the design techniques, analyses and understanding which were developed then underpin today's power conversion for computer and communications installations. New applications which require more power, improved dynamic response, greater reliability, and lower cost are outlined. The switching-mode approach in electronic power conditioning is discussed. Technical aspects of the research are summarized. Laubach, Sharon; Garcia, Celina; Maxwell, Scott; Wright, Jesse An Extensible Markup Language (XML) schema was developed as a means of defining and describing a structure for capturing spacecraft command-definition and tracking information in a single location, in a form readable by both engineers and the software used to generate software for flight and ground systems. A structure defined within this schema is then used as the basis for creating an XML file that contains command definitions.
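To make the single-source idea concrete, the sketch below parses a command-definition file of the general kind described; the element and attribute names are invented for illustration and are not taken from the schema in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical illustration only: the element and attribute names below are
# invented, not the schema described in the abstract above.
doc = """
<commandDictionary version="0.1">
  <command stem="PWR_HEATER_ON" opcode="0x1A2B">
    <arg name="heater_id" type="uint8" min="0" max="7"/>
    <arg name="duration_s" type="uint16" min="1" max="3600"/>
  </command>
</commandDictionary>
"""

root = ET.fromstring(doc)
for cmd in root.iter("command"):
    args = ", ".join(f'{a.get("name")}:{a.get("type")}' for a in cmd.iter("arg"))
    print(f'{cmd.get("stem")} (opcode {cmd.get("opcode")}) args: {args}')
```

The same file stays human-readable for engineers while a code generator can walk it mechanically, which is the dual readability the abstract emphasizes.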
Swanson, Theodore; Stephenson, Timothy Reliable manufacturing requires that material properties and fabrication processes be well defined in order to ensure that the manufactured parts meet specified requirements. While this issue is now relatively straightforward for traditional processes such as subtractive manufacturing and injection molding, this capability is still evolving for additive manufacturing (AM) products. Hence, one of the principal challenges within AM is in qualifying and verifying source material properties and process control. This issue is particularly critical for demanding applications in harsh environments, such as spacecraft. Goodzeit, Neil E. (Inventor); Linder, David M. (Inventor) A spacecraft attitude control system uses at least four reaction wheels. In order to minimize reaction wheel speed and therefore power, a wheel speed management system is provided. The management system monitors the wheel speeds and generates a wheel speed error vector. The error vector is integrated, and the error vector and its integral are combined to form a correction vector. The correction vector is summed with the attitude control torque command signals for driving the reaction wheels.
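The wheel-speed management idea above can be sketched for a standard four-wheel pyramid: the null space of the torque distribution matrix contains wheel-speed combinations that produce no net body torque, so a correction applied along that direction bleeds off wheel speed without disturbing attitude. The geometry, gains, and speeds below are invented placeholders, and the actual patented logic may differ in detail.

```python
import numpy as np

# Sketch of wheel-speed management for a four-wheel pyramid. Geometry, gains,
# and wheel speeds are invented placeholders, not taken from the patent.

# Columns: unit spin axes of the four wheels (3x4 torque distribution matrix).
A = np.array([[ 0.5774,  0.5774, -0.5774, -0.5774],
              [ 0.5774, -0.5774,  0.5774, -0.5774],
              [ 0.5774,  0.5774,  0.5774,  0.5774]])

# Null space of A: wheel-speed combinations that exert no net body torque.
_, _, vt = np.linalg.svd(A)
n = vt[-1]                       # 4-vector spanning the null space

kp, ki = 0.02, 0.001             # placeholder PI gains
integral = 0.0

def speed_management_torque(wheel_speeds):
    """Wheel torque command that bleeds off null-space speed without
    disturbing the body, since A @ (tau * n) = 0 by construction."""
    global integral
    err = n @ wheel_speeds       # scalar wheel-speed error along the null space
    integral += err
    return -(kp * err + ki * integral) * n  # add to attitude torque commands

tau = speed_management_torque(np.array([120.0, -80.0, 95.0, -60.0]))
print("null-space correction:", tau, " net body torque:", A @ tau)
```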
This slide presentation describes the career path and projects that the author worked on during her internship at NASA. As a Graduate Student Research Program (GSRP) participant, the assignments that were given include: Human Mesenchymal Stem Cell Research, spaceflight toxicology, the Lunar Airborne Dust Toxicity Advisory Group (LADTAG), and a special study at Devon Island. Label, Kenneth A.; Guertin, Steven M. NASA has a long history of using commercial-grade electronics in space. In this presentation we will provide a brief history of NASA's trends and approaches to commercial-grade electronics, focusing on processing and memory systems. This will include providing summary information on the space hazards to electronics as well as the NASA mission trade space. We will also discuss developing recommendations for risk management approaches to Electrical, Electronic and Electromechanical (EEE) parts usage in space. Two examples will be provided, focusing on a near-Earth polar-orbiting spacecraft as well as a mission to Mars. The final portion will discuss emerging trends impacting usage. Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed - with challenging requirements from the European space industry - a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system level of a satellite platform in development. Our efforts resulted in a significant advancement in validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested). Dietrich, Daniel L.; Ruff, Gary A.; Urban, David This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), and carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and a sink for oxygen, prescribed (as inputs to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to more realistically treat the pressure relief system(s) of the spacecraft, include more combustion products (e.g. HF) in the analysis, and attempt to predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribe them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect by eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.
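The "excessive pressure rise" hazard has a simple bounding estimate: treat the cabin atmosphere as an ideal gas in a fixed volume, with the fire's heat release raising the bulk temperature. The numbers below are invented placeholders, not values from the paper; and, as the abstract notes, a real pressure relief system largely eliminates the overpressure that this no-venting, no-heat-loss worst case produces.

```python
# Rough constant-volume estimate of cabin pressure rise from a fire's heat
# release, assuming an ideal gas, no venting, and no wall heat losses (all of
# which overstate the rise). Numbers are illustrative placeholders.

V = 100.0          # cabin free volume, m^3 (placeholder)
P1 = 101_325.0     # initial pressure, Pa
T1 = 294.0         # initial temperature, K
rho = 1.2          # air density, kg/m^3
cv = 718.0         # specific heat at constant volume, J/(kg*K)

Q = 5.0e6          # total heat released by the fire, J (placeholder)

m = rho * V
T2 = T1 + Q / (m * cv)
P2 = P1 * (T2 / T1)            # ideal gas at fixed volume and mass
print(f"T: {T1:.0f} -> {T2:.0f} K   P: {P1/1000:.1f} -> {P2/1000:.1f} kPa")
```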
Rodeghiero, G.; Gini, F.; Marchili, N.; Jain, P.; Ralston, J. P.; Dallacasa, D.; Naletto, G.; Possenti, A.; Barbieri, C.; Franceschini, A.; Zampieri, L. We describe an experimental scenario for testing a novel method to measure distance and proper motion of astronomical sources. The method is based on multi-epoch observations of amplitude or intensity correlations between separate receiving systems. This technique is called Interferometric Parallax, and efficiently exploits phase information that has traditionally been overlooked. The test case we discuss combines amplitude correlations of signals from deep-space interplanetary spacecraft with those from distant galactic and extragalactic radio sources, with the goal of estimating the interplanetary spacecraft distance. Interferometric parallax relies on the detection of wavefront curvature effects in signals collected by pairs of separate receiving systems. The method shows promising potential over current techniques when the target is unresolved from the background reference sources. Developments in this field might lead to the construction of an independent, geometrical cosmic distance ladder using a dedicated project and future-generation instruments. We present a conceptual overview supported by numerical estimates of its performance applied to a spacecraft orbiting within the Solar System. Simulations support the feasibility of measurements with a simple and time-saving observational scheme using current facilities. Vandervoort, Richard J. Spacecraft systems of the 1990's and beyond will be substantially more complex than their predecessors. They will have demanding performance requirements and will be expected to operate more autonomously. This underscores the need for innovative approaches to Fault Detection, Isolation and Recovery (FDIR). A hierarchical expert system is presented that provides on-orbit supervision using intelligent FDIR techniques. Each expert system in the hierarchy supervises the operation of a local set of spacecraft functions. Spacecraft operational goals flow top down while responses flow bottom up. The expert system supervisors have a fairly high degree of autonomy. Bureaucratic responsibilities are minimized to conserve bandwidth and maximize response time. Data for FDIR can be acquired locally to an expert and from other experts. By using a blackboard architecture for each supervisor, the system provides a great degree of flexibility in implementing the problem solvers for each problem domain. In addition, it provides for a clear separation between facts and knowledge, leading to an efficient system capable of real-time response. Littell, Justin D. The Landing and Impact Research Facility (LandIR) at NASA Langley Research Center is a 240-ft-high A-frame structure which is used for full-scale crash testing of aircraft and rotorcraft vehicles. Because the LandIR provides a unique capability to introduce impact velocities in the forward and vertical directions, it is also serving as the facility for landing tests on full-scale and sub-scale Orion spacecraft mass simulators. Recently, a three-dimensional photogrammetry system was acquired to assist with the gathering of vehicle flight data before, throughout and after the impact. These data provide the basis for the post-test analysis and data reduction. Experimental setups for pendulum swing tests on vehicles having both forward and vertical velocities can extend to 50 x 50 x 50 foot cubes, while weather, vehicle geometry, and other constraints make each experimental setup unique to each test. This paper will discuss the specific calibration techniques for large fields of view, camera and lens selection, data processing, as well as best-practice techniques learned from using large-field-of-view photogrammetry on a multitude of crash and landing test scenarios unique to the LandIR. Koch, L. Danielle; Brown, Clifford A.; Shook, Tony D.; Winkel, James; Kolacz, John S.; Podboy, Devin M.; Loew, Raymond A.; Mirecki, Julius H. Sound pressure measurements were recorded for a prototype of a spacecraft cabin ventilation fan in a test in the NASA Glenn Acoustical Testing Laboratory. The axial fan is approximately 0.089 m (3.50 in.) in diameter and 0.223 m (9.00 in.) long and has nine rotor blades and eleven stator vanes. At the design point of 12,000 rpm, the fan was predicted to produce a flow rate of 0.0709 cu m/s (150 cfm) and a total pressure rise of 925 Pa (3.72 in. of water). While the fan was designed to be part of a ducted atmospheric revitalization system, no attempt was made to throttle the flow or simulate the installed configuration during this test. The fan was operated at six speeds from 6,000 to 13,500 rpm. A 13-microphone traversing array was used to collect sound pressure measurements along two horizontal planes parallel to the flow direction, two vertical planes upstream of the fan inlet, and two vertical planes downstream of the fan exhaust. Measurements indicate that sound at blade-passing-frequency harmonics contributes significantly to the overall audible noise produced by the fan at free delivery conditions.
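The blade-passing tones noted in those measurements follow directly from the blade count and shaft speed: BPF equals the number of rotor blades times the shaft speed in revolutions per second. The sketch below evaluates this for the nine-bladed rotor; only the 6,000, 12,000, and 13,500 rpm speeds are quoted in the abstract, so the intermediate test speeds are not listed.

```python
# Blade-passing frequency (BPF) for the nine-bladed rotor described above.
# Only these three shaft speeds are quoted in the abstract.

N_BLADES = 9

for rpm in (6_000, 12_000, 13_500):
    bpf = N_BLADES * rpm / 60.0                 # Hz
    harmonics = ", ".join(f"{k * bpf:.0f}" for k in (1, 2, 3))
    print(f"{rpm:>6} rpm -> BPF {bpf:.0f} Hz (harmonics: {harmonics} Hz)")
```

At the 12,000 rpm design point the fundamental lands at 1,800 Hz, squarely in the audible band, which is consistent with the reported contribution to cabin noise.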
Mohammad, Atif F.; Straub, Jeremy The amount of stored data about space grows larger every day, and the utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time in which using Big Data can be the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established players and new entrants alike will use data-driven approaches to revolutionize operations and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. The economic value generated by such data processing is evident in every terrestrial sector, such as healthcare and retail; retailers, for example, perform Big Data research by utilizing sensor-driven embedded data in products within their stores and warehouses to determine how these products are actually used in the real world. Lohn, Jason D.; Hornby, Gregory S.; Linden, Derek S. A document discusses the use of computer-aided evolution in arriving at a design for X-band communication antennas for NASA's three Space Technology 5 (ST5) satellites, which were launched on March 22, 2006. Two evolutionary algorithms, incorporating different representations of the antenna design and different fitness functions, were used to automatically design and optimize an X-band antenna design. A set of antenna designs satisfying initial ST5 mission requirements was evolved by use of these algorithms. The two best antennas - one from each evolutionary algorithm - were built. During flight-qualification testing of these antennas, the mission requirements were changed. After minimal changes in the evolutionary algorithms - mostly in the fitness functions - new antenna designs satisfying the changed mission requirements were evolved; within one month of this change, two new antennas were designed, and prototypes of the antennas were built and tested. One of these newly evolved antennas was approved for deployment on the ST5 mission, and flight-qualified versions of this design were built and installed on the spacecraft. At the time of writing the document, these antennas were the first computer-evolved hardware in outer space.
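The evolve-test-re-evolve loop described above has the familiar structure of a truncation-selection evolutionary search. The toy sketch below optimizes a plain parameter vector against a stand-in fitness function; it is not the ST5 antenna representation or fitness, but it shows why a requirements change can be absorbed by swapping the fitness function and re-running.

```python
import random

# Toy evolutionary-design loop. The real ST5 work evolved antenna geometries
# against electromagnetic simulations; here the "design" is just a parameter
# vector and the fitness is a stand-in with an invented optimum.

random.seed(1)
TARGET = [0.3, -1.2, 0.8, 2.0]                 # pretend optimum (placeholder)

def fitness(design):                           # higher is better
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, sigma=0.1):
    return [d + random.gauss(0.0, sigma) for d in design]

population = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                   # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best design:", [round(x, 2) for x in best])
```

Changing the mission requirements corresponds to replacing fitness() and re-running the loop, which is essentially what the ST5 team did within a month.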
The use of heuristics frees the user from searching through large amounts of irrelevant information and allows the user to input partial information (varying degrees of confidence in an answer) or 'unknown' to any question. The modularity of the expert system allows for easy updates and modifications. It not only provides scientists with needed risk analysis and confidence not found in algorithmic programs, but is also an effective learning tool, and the window implementation makes it very easy to use. The system currently runs on a MicroVAX II at Goddard Space Flight Center (GSFC). The inference engine used is NASA's C Language Integrated Production System (CLIPS).

Reynolds, R.; White, R. L.; Hume, P.
The mission of a tracking station within the NASA/Jet Propulsion Laboratory Deep Space Network is characterized by a wide diversity of spacecraft types, communications ranges, and data accuracy requirements. In the present paper, the system architecture, communications techniques, and operator interfaces for a utility controller are described. The control equipment as designed and installed is meant to be a tool for studying applications of automated control in the dynamic environment of a tracking station. It allows continuous experimentation with new technology without disrupting tracking activities.

Fielhauer, Karl B.; Boone, Bradley G.; Raible, Daniel E.
This paper describes a system engineering approach to examining the potential for combining elements of a deep-space RF and optical communications payload, for the purpose of reducing the size, weight and power burden on the spacecraft and the mission. Figures of merit and analytical methodologies for conducting trade studies are discussed, and several potential technology integration strategies are presented. Finally, the NASA Integrated Radio and Optical Communications (iROC) project, which directly addresses the combined RF and optical approach, is described.

The NASA Augmented Virtual Reality (AVR) Lab at Kennedy Space Center is dedicated to the investigation of Augmented Reality (AR) and Virtual Reality (VR) technologies, with the goal of determining potential uses of these technologies as human-computer interaction (HCI) devices in an aerospace engineering context. Begun in 2012, the AVR Lab has concentrated on commercially available AR and VR devices that are gaining in popularity and use in a number of fields such as gaming, training, and telepresence. We are working with such devices as the Microsoft Kinect, the Oculus Rift, the Leap Motion, the HTC Vive, motion capture systems, and the Microsoft HoloLens. The focus of our work has been on human interaction with the virtual environment, which in turn acts as a communications bridge to remote physical devices and environments that the operator cannot or should not control or experience directly. Particularly for spacecraft and the often hazardous environments they inhabit, it is our hope that AR and VR technologies can increase human safety and mission success by physically removing humans from those hazardous environments while virtually putting them right in the middle of them.

Birmele, Michele N.; McCoy, LaShelle E.; Roberts, Michael S.
Microbial growth is common on wetted surfaces in spacecraft environmental control and life support systems despite the use of chemical and physical disinfection methods.
Advanced control technologies are needed to limit microorganisms and increase the reliability of life support systems required for long-duration human missions. Silver ions and compounds are widely used as antimicrobial agents for medical applications and continue to be used as a residual biocide in some spacecraft water systems. The National Aeronautics and Space Administration (NASA) has identified silver fluoride for use in the potable water system on the next generation spacecraft. Due to ionic interactions between silver fluoride in solution and wetted metallic surfaces, ionic silver is rapidly depleted from solution and loses its antimicrobial efficacy over time. This report describes research to prolong the antimicrobial efficacy of ionic silver by maintaining its solubility. Three types of metal coupons (Inconel 718, Stainless Steel 316, and Titanium 6Al-4V) used in spacecraft potable water systems were exposed either to a continuous flow of water amended with 0.4 mg/L ionic silver fluoride or to a static pre-treatment passivation in 50 mg/L ionic silver fluoride, with or without a surface oxidation pre-treatment. Coupons were then challenged in a high-shear CDC bioreactor (BioSurface Technologies) by exposure to six bacteria previously isolated from spacecraft potable water systems. Continuous exposure to 0.4 mg/L ionic silver over the course of 24 hours during the flow phase resulted in a >7-log reduction. The residual effect of a 24-hour passivation treatment in 50 mg/L of ionic silver resulted in a >3-log reduction, whereas a two-week treatment resulted in a >4-log reduction. Results indicate that 0.4 mg/L ionic silver is an effective biocide against many bacteria and that a pre-passivation of metal surfaces with silver can provide additional microbial control.

The purpose of schedule management is to provide the framework for time-phasing, resource planning, coordination, and communicating the necessary tasks within a work effort. The intent is to improve schedule management by providing recommended concepts, processes, and techniques used within the Agency and private industry. The intended function of this handbook is two-fold: first, to provide guidance for meeting the scheduling requirements contained in NPR 7120.5, NASA Space Flight Program and Project Management Requirements; NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Requirements; NPR 7120.8, NASA Research and Technology Program and Project Management Requirements; and NPD 1000.5, Policy for NASA Acquisition. The second function is to describe the schedule management approach and the recommended best practices for carrying out this project control function. With regard to the above project management requirements documents, it should be noted that space flight projects previously established and approved under prior versions of NPR 7120.5 will continue to comply with those requirements until project completion. This handbook will be updated as needed to enhance efficient and effective schedule management across the Agency. It is acknowledged that most, if not all, external organizations participating in NASA programs/projects will have their own internal schedule management documents. Issues that arise from conflicting schedule guidance will be resolved on a case-by-case basis as contracts and partnering relationships are established.
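Although the handbook itself is procedural, the schedule logic it governs is typically captured as a precedence network of tasks and dependencies. As a toy illustration only (the task names and durations here are invented), a forward-pass computation of earliest start and finish dates looks like this:

```python
# Toy forward pass over a precedence network (illustrative only).
# Each task lists its duration (in days) and its predecessors.
TASKS = {
    "requirements": (10, []),
    "design":       (20, ["requirements"]),
    "fabrication":  (30, ["design"]),
    "software":     (25, ["design"]),
    "integration":  (15, ["fabrication", "software"]),
}

def forward_pass(tasks):
    earliest = {}
    for name in tasks:                   # recursion handles any ordering
        _resolve(name, tasks, earliest)
    return earliest

def _resolve(name, tasks, earliest):
    if name in earliest:
        return earliest[name]
    duration, preds = tasks[name]
    # A task can start only after all of its predecessors finish.
    start = max((_resolve(p, tasks, earliest)[1] for p in preds), default=0)
    earliest[name] = (start, start + duration)
    return earliest[name]

for task, (es, ef) in forward_pass(TASKS).items():
    print(f"{task:12s} start day {es:3d}  finish day {ef:3d}")
```

The longest chain through such a network is the critical path, which is what schedule visibility and control practices ultimately track.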
It is also acknowledged and understood that all projects are not the same and may require different levels of schedule visibility, scrutiny, and control. Project type, value, and complexity are factors that typically dictate which schedule management practices should be employed.

Full-scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long-duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short-duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established, with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus, after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

Struk, Peter M.; Oeftering, Richard C.; Easton, John W.; Anderson, Eric E.
NASA's Constellation Program for Exploration of the Moon and Mars places human crews in extreme isolation in resource-scarce environments. Near Earth, the discontinuation of Space Shuttle flights after 2010 will alter the up- and down-mass capacity for the International Space Station (ISS). NASA is considering new options for logistics support strategies for future missions. Aerospace systems are often composed of replaceable modular blocks that minimize the need for complex service operations in the field. Such a strategy, however, implies a robust and responsive logistics infrastructure with relatively low transportation costs. The modular Orbital Replacement Units (ORUs) used for the ISS require relatively large blocks of replacement hardware even though the actual failed component may be three orders of magnitude smaller. The ability to perform in-situ repair of electronic circuits at the component level can dramatically reduce the scale of spares and the related logistics cost. This ability also reduces mission risk, increases crew independence, and improves the overall supportability of the program.
The Component-Level Electronics Assembly Repair (CLEAR) task under the NASA Supportability program was established to demonstrate the practicality of repair by first investigating widely used soldering materials and processes (M&P) performed by modest manual means. The work will result in program guidelines for performing manual repairs along with design guidance for circuit reparability. The next phase of CLEAR recognizes that manual repair has its limitations and that some highly integrated devices are extremely difficult to handle and demand semi-automated equipment. Further, electronics repairs require a broad range of diagnostic capability to isolate the faulty components. Finally, repairs must pass functional tests to determine that they are successful and that the circuit can be returned to service. To prevent equipment demands from exceeding spacecraft volume

Space geodesy measurement requirements have become more and more stringent as our understanding of the physical processes and our modeling techniques have improved. In addition, current and future spacecraft will have ever-increasing measurement capability and will lead to increasingly sophisticated models of changes in the Earth system. Ground-based space geodesy networks with enhanced measurement capability will be essential to meeting these oncoming requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. These requirements have been articulated by the Global Geodetic Observing System (GGOS). The NASA Space Geodesy Project (SGP) is developing a prototype core site as the basis for a next-generation Space Geodetic Network (SGN) that would be NASA's contribution to a global network designed to produce the higher-quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth-observing spacecraft. Each of the sites in the SGN would include co-located, state-of-the-art systems from all four space geodetic observing techniques (GNSS, SLR, VLBI, and DORIS). The prototype core site is being developed at NASA's Geophysical and Astronomical Observatory at Goddard Space Flight Center. The project commenced in 2011 and is scheduled for completion in late 2013. In January 2012, two multiconstellation GNSS receivers, GODS and GODN, were established at the prototype site as part of the local geodetic network. Development and testing are also underway on the next-generation SLR and VLBI systems, along with a modern DORIS station. An automated survey system is being developed to measure inter-technique vector ties, and network design studies are being

Novikov, L. S.
Various space environment effects on spacecraft materials and equipment, and the reverse effects of spacecraft and rockets on the space environment, are considered. The necessity of continually updating and perfecting our knowledge of spacecraft/environment interaction processes is noted. Requirements imposed on models of the space environment in theoretical and experimental research into various aspects of the spacecraft/environment interaction problem are formulated. The main problems in this field that need to be solved now and in the near future are specified.
The conclusion is made that the joint analysis of both aspects of the spacecraft/environment interaction problem promotes its most effective solution.

Spacecraft flight environments are characterized both by a wide range of space plasma conditions and by ionizing radiation (IR), solar ultraviolet and X-rays, magnetic fields, micrometeoroids, orbital debris, and other environmental factors, all of which can affect spacecraft performance. Dr. Steven Koontz's lecture will provide a solid foundation in the basic engineering physics of spacecraft charging and charging effects that can be applied to solving practical spacecraft and spacesuit engineering design, verification, and operations problems, with an emphasis on spacecraft operations in low-Earth orbit, Earth's magnetosphere, and cis-lunar space.

McElrath, T. P.; Cangahuala, L. A.; Miller, K. J.; Stravert, L. R.; Garcia-Perez, Raul
Ulysses is a spin-stabilized spacecraft that experienced significant nutation after its launch in October 1990. This was due to the Sun-spacecraft-Earth geometry, and a study of the phenomenon predicted that the nutation would again be a problem during 1994-95. The difficulty of obtaining nutation estimates in real time from the spacecraft telemetry forced the ESA/NASA Ulysses Team to explore alternative information sources. The work performed by the ESA Operations Team provided a model for a system that uses radio signal strength measurements to monitor the spacecraft dynamics. These measurements (referred to as AGC) are provided once per second by the tracking stations of the DSN. The system was named ARGOS (Attitude Reckoning from Ground Observable Signals) after the ever-vigilant, hundred-eyed giant of Greek mythology. The ARGOS design also included Doppler processing, because Doppler shifts indicate thruster firings commanded by the active nutation control carried out onboard the spacecraft. While there is some visibility into thruster activity from telemetry, careful processing of the high-sample-rate Doppler data provides an accurate means of detecting the presence and time of thruster firings. DSN Doppler measurements are available at a ten-per-second rate in the same tracking data block as the AGC data.

Samaan, Malak A.
The work presented in this paper concerns accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is accurate to the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use inputs from the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimate than the pure star tracker measurement. Therefore, the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least-squares algorithm to investigate how to obtain a more accurate OGA estimation.
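As a schematic of the batch least-squares idea — a one-axis toy problem with a polynomial attitude model and simulated star measurements, not the Gaia implementation — consider the following sketch:

```python
import numpy as np

# Toy batch least-squares fit of a one-axis attitude history
# theta(t) = a0 + a1*t + a2*t^2 from noisy star-position residuals.
# All coefficients and noise levels are invented for illustration.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 200)                   # observation times [s]
true_coeffs = np.array([1e-3, 2e-5, -1e-7])        # rad, rad/s, rad/s^2
theta_true = true_coeffs[0] + true_coeffs[1]*t + true_coeffs[2]*t**2
sigma = 5e-6                                       # ~1 arcsec noise [rad]
meas = theta_true + rng.normal(0.0, sigma, t.size)

H = np.column_stack([np.ones_like(t), t, t**2])    # design matrix
coeffs, *_ = np.linalg.lstsq(H, meas, rcond=None)  # batch solution
cov = np.linalg.inv(H.T @ H) * sigma**2            # formal covariance

print("estimated coefficients:", coeffs)
print("formal 1-sigma errors :", np.sqrt(np.diag(cov)))
```

Unlike a sequential Kalman filter, the batch approach processes all measurements at once, which is why ground processing can, in principle, produce a smoother and more accurate reconstruction than the on-board real-time estimate.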
Finally, these attitude determination techniques will be compared in terms of accuracy, robustness, speed, and memory required, in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.

GHRSST Level 2P Regional Subskin Sea Surface Temperature from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) on the NASA Aqua satellite for the Atlantic Ocean (GDS version 1)
National Oceanic and Atmospheric Administration, Department of Commerce — The Advanced Microwave Scanning Radiometer (AMSR-E) was launched on 4 May 2002, aboard NASA's Aqua spacecraft. The National Space Development Agency of Japan (NASDA)...

Hasan, H.; Hanisch, R.; Bredekamp, J.
The NASA Office of Space Science has established a series of archival centers where science data acquired through its space science missions are deposited. The availability of high-quality data to the general public through these open archives enables the maximization of the science return of the flight missions. The Astrophysics Data Centers Coordinating Council, an informal collaboration of archival centers, coordinates data from five archival centers distinguished primarily by the wavelength range of the data deposited there. Data are available in FITS format. An overview of NASA's data centers and services is presented in this paper. A standard front-end modifier called 'Astrobrowse' is described. Other catalog browsers and tools include WISARD and AMASE, supported by the National Space Science Data Center, as well as ISAIA, a follow-on to Astrobrowse.

Success in executing future NASA space missions will depend on advanced technology developments that should already be underway. It has been years since NASA has had a vigorous, broad-based program in advanced space technology development, and NASA's technology base is largely depleted. As noted in a recent National Research Council report on the U.S. civil space program: Future U.S. leadership in space requires a foundation of sustained technology advances that can enable the development of more capable, reliable, and lower-cost spacecraft and launch vehicles to achieve space program goals. A strong advanced technology development foundation is needed also to enhance technology readiness of new missions, mitigate their technological risks, improve the quality of cost estimates, and thereby contribute to better overall mission cost management. Yet financial support for this technology base has eroded over the years. The United States is now living on the innovation funded in the past and has an obligation to replenish this foundational element. NASA has developed a draft set of technology roadmaps to guide the development of space technologies under the leadership of the NASA Office of the Chief Technologist.
The NRC appointed the Steering Committee for NASA Technology Roadmaps and six panels to evaluate the draft roadmaps, recommend improvements, and prioritize the technologies within each and among all of the technology areas as NASA finalizes the roadmaps. The steering committee is encouraged by the initiative NASA has taken through the Office of the Chief Technologist (OCT) to develop technology roadmaps and to seek input from the aerospace technical community with this study.

Atkinson, David J.; James, Mark L.; Martin, R. G.
Briefly discussed here is the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory (JPL). Some of the difficulties associated with the existing technology used in mission operations are highlighted. A new automated system based on artificial intelligence technology is described which seeks to overcome many of these limitations. The system, called the Spacecraft Health Automated Reasoning Prototype (SHARP), is designed to automate health and status analysis for multi-mission spacecraft and ground data systems operations. The system has proved to be effective for detecting and analyzing potential spacecraft and ground systems problems by performing real-time analysis of spacecraft and ground data systems engineering telemetry. Telecommunications link analysis of the Voyager 2 spacecraft was the initial focus for evaluation of the system in real-time operations during the Voyager spacecraft encounter with Neptune in August 1989.

Ross, James C.
This is a photographic record of NASA Dryden flight research aircraft, spanning nearly 25 years. The author has served as a Dryden photographer, and now as its chief photographer and airborne photographer. The results are extraordinary images of in-flight aircraft never seen elsewhere, as well as pictures of aircraft from unusual angles on the ground. The collection is the result of the agency-required documentation process for its assets.

Balboni, John A.; Gokcen, Tahir; Hui, Frank C. L.; Graube, Peter; Morrissey, Patricia; Lewis, Ronald
The paper describes the consolidation of NASA's high-powered arc-jet testing at a single location. The existing plasma arc-jet wind tunnels located at the Johnson Space Center were relocated to Ames Research Center while maintaining NASA's technical capability to ground-test thermal protection system materials under simulated atmospheric-entry convective heating.
The testing conditions at JSC were reproduced and successfully demonstrated at ARC through close collaboration between the two centers. New equipment was installed at Ames to provide test gases of pure nitrogen mixed with pure oxygen, and, in the future, nitrogen-carbon dioxide mixtures. A new control system was custom designed, installed, and tested. Tests demonstrated that the 10 MW constricted-segmented arc heater at Ames meets the requirements of its major customer, NASA's Orion program. Solutions from an advanced computational fluid dynamics code were used to aid in characterizing the properties of the plasma stream and the surface environment on the calorimeters in the supersonic flow produced by the arc heater.

Des Marais, David J.; Nuth, Joseph A.; Allamandola, Louis J.; Boss, Alan P.; Farmer, Jack D.; Hoehler, Tori M.; Jakosky, Bruce M.; Meadows, Victoria S.; Pohorille, Andrew; Runnegar, Bruce; Spormann, Alfred M.
The NASA Astrobiology Roadmap provides guidance for research and technology development across the NASA enterprises that encompass the space, Earth, and biological sciences. The ongoing development of astrobiology roadmaps embodies the contributions of diverse scientists and technologists from government, universities, and private institutions. The Roadmap addresses three basic questions: How does life begin and evolve? Does life exist elsewhere in the universe? What is the future of life on Earth and beyond? Seven Science Goals outline the following key domains of investigation: understanding the nature and distribution of habitable environments in the universe, exploring for habitable environments and life in our own Solar System, understanding the emergence of life, determining how early life on Earth interacted and evolved with its changing environment, understanding the evolutionary mechanisms and environmental limits of life, determining the principles that will shape life in the future, and recognizing signatures of life on other worlds and on early Earth. For each of these goals, Science Objectives outline more specific high-priority efforts for the next three to five years. These eighteen objectives are being integrated with NASA strategic planning.
Ruf, Chris; Atlas, Robert; Majumdar, Sharan; Ettammal, Suhas; Waliser, Duane
The NASA Cyclone Global Navigation Satellite System (CYGNSS) mission consists of a constellation of eight microsatellites that were launched into low-Earth orbit on 15 December 2016. Each observatory carries a four-channel bistatic scatterometer receiver to measure near-surface wind speed over the ocean. The transmitter half of the scatterometer is the constellation of GPS satellites. CYGNSS is designed to address the inadequacy in observations of the inner core of tropical cyclones (TCs) that results from two causes: 1) much of the TC inner core is obscured from conventional remote sensing instruments by intense precipitation in the eye wall and inner rain bands; and 2) the rapidly evolving (genesis and intensification) stages of the TC life cycle are poorly sampled in time by conventional polar-orbiting, wide-swath surface wind imagers. The retrieval of wind speed by CYGNSS in the presence of heavy precipitation is possible due to the long operating wavelength used by GPS (19 cm), at which scattering and attenuation by rain are negligible. Improved temporal sampling by CYGNSS is possible due to the use of eight spacecraft with four scatterometer channels on each one. Median and mean revisit times everywhere in the tropics are 3 and 7 hours, respectively. Wind speed referenced to 10 m height above the ocean surface is retrieved from CYGNSS measurements of bistatic radar cross section in a manner roughly analogous to that of conventional ocean wind scatterometers. The technique has been demonstrated previously from space by the UK-DMC and UK-TDS missions. Wind speed is retrieved with 25 km spatial resolution and an uncertainty of 2 m/s at low wind speeds and 10% at wind speeds above 20 m/s. Extensive simulation studies conducted prior to launch indicate that there will be a significant positive impact on TC forecast skill for both track and intensity with CYGNSS measurements assimilated into HWRF numerical forecasts. Simulations of CYGNSS spatial and temporal sampling

The two PEACE (Plasma Electron And Current Experiment) sensors on board each Cluster spacecraft sample the electron velocity distribution across the full 4π solid angle.

Norbury, John W.
When protons or heavy ions from galactic cosmic rays (GCR) or solar particle events (SPE) interact with target nuclei in spacecraft, there can be two different types of interactions. The more familiar strong nuclear interaction often dominates and is responsible for nuclear fragmentation in either the GCR or SPE projectile nucleus or the spacecraft target nucleus. (Of course, the proton does not break up, except possibly to produce pions or other hadrons.) The less familiar, second type of interaction is due to the very strong electromagnetic fields that exist when two charged nuclei pass very close to each other. This process is called electromagnetic dissociation (EMD) and primarily results in the emission of neutrons, protons and light ions (isotopes of hydrogen and helium). The cross section for particle production is approximately defined as the number of particles produced in nucleus-nucleus collisions or other types of reactions. (There are various kinematic and other factors which multiply the particle number to arrive at the cross section.) Strong nuclear interactions usually dominate the nuclear reactions of most interest that occur between GCR and target nuclei.
However, for heavy nuclei (near Fe and beyond) at high energy, the EMD cross section can be much larger than the strong nuclear interaction cross section. This paper poses a question: are there projectile or target nucleus combinations in the interaction of GCR or SPE where the EMD reaction cross section plays a dominant role? If the answer is affirmative, then EMD mechanisms should be an integral part of the codes that are used to predict damage to spacecraft electronics. The question can be made more fine-grained by asking about total reaction cross sections as compared to double-differential cross sections. These issues are addressed in the present paper.

This report covers the period from October 1992 through the close of the project. FY 92 closed out with the successful briefing to industry and with many potential and important initiatives in the spacecraft arena. Due to funding uncertainties, we were directed to proceed as if our funding would be approximately the same as in FY 92 ($2M), but not to make any major new commitments. However, the MODIL's FY 93 funding was reduced to $810K, and we were directed to concentrate on the cryocooler area. The cryocooler effort completed its demonstration project. The final meetings with the cryocooler fabricators were very encouraging, as we witnessed the enthusiastic reception of technology to help them reduce fabrication uncertainties. Support of the USAF Phillips Laboratory cryocooler program was continued, including kick-off meetings for the Prototype Spacecraft Cryocooler (PSC). Under Phillips Laboratory support, Gill Cruz visited British Aerospace and Lucas Aerospace in the United Kingdom to assess their manufacturing capabilities. In the Automated Spacecraft & Assembly Project (ASAP), contracts were pursued for analysis by four Brilliant Eyes prime contractors to provide a proprietary snapshot of their current status of Integrated Product Development. In the materials and structures thrust, the final analysis was completed of the samples made under the contract "Partial Automation of Matched Metal Net Shape Molding of Continuous Fiber Composites" with SPARTA. The Precision Technologies thrust funded the Jet Propulsion Laboratory to prepare a plan to develop a Computer-Aided Alignment capability to significantly reduce the time for alignment and possibly even provide real-time and remote alignment of systems in flight.

Shirley, D. J.
Southwest Research Institute (SwRI) has developed and delivered spacecraft computers for a number of different near-Earth-orbit spacecraft, including shuttle experiments and SDIO free-flyer experiments. We describe the evolution of the basic SwRI spacecraft computer design from units weighing 20 to 25 lb and using 20 to 30 W to newer models weighing less than 5 lb and using only about 5 W, yet delivering twice the processing throughput. Because of their reduced size, weight, and power, these newer designs are especially applicable to planetary instrument requirements. The basis of our design evolution has been the availability of more powerful processor chip sets and the development of higher-density packaging technology, coupled with more aggressive design strategies in incorporating high-density FPGA technology and high-density memory chips. In addition to reductions in size, weight, and power, the newer designs also address the necessity of survival in the harsh radiation environment of space.
Spurred by participation in such programs as MSTI, LACE, RME, Delta 181, Delta Star, and RADARSAT, our designs have evolved in response to program demands for small, low-powered units that are radiation tolerant enough to be suitable both for Earth-orbit microsats and for planetary instruments. Present designs already include MIL-STD-1750 and Multi-Chip Module (MCM) technology, with near-term plans to include RISC processors and higher-density MCMs. Long-term plans include development of whole-core processors on one or two MCMs.

Metzger, Philip T.; Lane, John E.
The rocket exhaust of spacecraft landing on the Moon causes a number of observable effects that need to be quantified, including: disturbance of the regolith and volatiles at the landing site; damage to surrounding hardware, such as the historic Apollo sites, through the impingement of high-velocity ejecta; and levitation of dust after engine cutoff through as-yet unconfirmed mechanisms. While often harmful, these effects also beneficially provide insight into lunar geology and physics. Some of the research results from the past 10 years are summarized and reviewed here.

Jensen, Hans-Christian Becker; Wisniewski, Rafal
This article realizes nonlinear Fault Detection and Isolation for actuators, given that there is no measurement of the states in the actuators. The Fault Detection and Isolation of the actuators is instead based on angular velocity measurement of the spacecraft and knowledge about the dynamics of the satellite. The algorithms presented in this paper are based on a geometric approach to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study, and the pros and cons of the algorithms are discussed.

A few quality assurance programs outside the purview of the Nuclear Regulatory Commission were studied to identify features or practices which the NRC could use to enhance its program for assuring quality in the design and construction of nuclear power plants. The programs selected were: the manufacture of large commercial transport aircraft, regulated by the Federal Aviation Administration; US Navy shipbuilding; commercial shipbuilding regulated by the Maritime Administration and the US Coast Guard; Government-owned nuclear plants under the Department of Energy; spacecraft under the National Aeronautics and Space Administration; and the construction of nuclear power plants in Canada, West Germany, France, Japan, Sweden, and the United Kingdom.

Longanecker, G. W.; Hoffman, R. A.
The scientific objectives of the Explorer-45 mission are discussed. The primary objective is the study of the ring current responsible for the main phase of magnetic storms. Closely associated with this objective is the determination of the relationship between magnetic storms, substorms, and the acceleration of charged particles in the magnetosphere. Further objectives are the measurement of a wide range of proton, electron and alpha-particle energies, and studies of wave-particle interactions responsible for particle transport and loss in the inner magnetosphere. The orbital parameters, the spacecraft itself, and some of its unique features, such as the data handling system, which is programmable from the ground, are described.

This viewgraph presentation describes NASA's product peer review process. The contents include: 1) Inspection/Peer Review at NASA; 2) Reasons for product peer reviews; 3) Different types of peer reviews; and 4) NASA requirements for peer reviews.
This presentation also includes a demonstration of an actual product peer review.

Manzo, Michelle A.; Reid, Concha M.
NASA's return to the Moon will require advanced battery, fuel cell, and regenerative fuel cell energy storage systems. This paper provides an overview of the planned energy storage systems for the Orion spacecraft and the Ares rockets that will be used in the return journey to the Moon. Technology development goals and approaches to provide batteries and fuel cells for the Altair lunar lander, the new space suit under development for extravehicular activities (EVA) on the lunar surface, and Lunar Surface Systems operations are also discussed.

Jensen, W. N.
The responsibilities and structural organization of the Operations Planning Group of NASA Deep Space Network (DSN) Operations are outlined. The Operations Planning Group establishes an early interface with a user's planning organization to educate the user on DSN capabilities and limitations for deep space tracking support. A team of one or two individuals works through all phases of the spacecraft launch and also provides planning and preparation for specific events such as planetary encounters. A coordinating interface is also provided for nonflight projects such as radio astronomy and VLBI experiments. The group is divided into a Long Range Support Planning element and a Near Term Operations Coordination element.

A new method of intercontinental clock synchronization has been developed and proposed for possible use by NASA's Deep Space Network (DSN), using a two-way/three-way radio link with a spacecraft. Analysis of preliminary data indicates that the real-time method has an uncertainty of 0.6 microseconds, and it is very likely that further work will decrease the uncertainty. The method is also compatible with a variety of non-real-time analysis techniques, which may reduce the uncertainty into the tens-of-nanoseconds range.

To better explore the solar system, NASA will use new propulsion modes, in particular nuclear energy. These articles present the research programs in this domain and the particular features of nuclear energy in these projects.

Rieber, Richard R.; LaBorde, Gregory R.
NASA's Deep Impact mission ended successfully in 2005 after an impact and close flyby of the comet 9P/Tempel-1. The Flyby spacecraft was placed in hibernation and left to orbit the Sun. In 2007, engineers at the Jet Propulsion Laboratory brought the spacecraft out of hibernation and successfully performed two additional missions: EPOCh, Extrasolar Planetary Observation and Characterization, a photometric investigation of transiting exoplanets, and DIXI, Deep Impact eXtended Investigation, which maneuvered the Flyby spacecraft towards a close encounter with the comet 103P/Hartley-2 on 4 November 2010. The names of these two scientific investigations combine to form the overarching mission's name, EPOXI.
The encounter with 103P/Hartley-2 was vastly different from the prime mission's encounter with 9P/Tempel-1. The encounter geometry was nearly 180° different, and 103P/Hartley-2 was approximately one-quarter the size of 9P/Tempel-1. Mission operations for the comet flyby were broken into three phases: a) approach, b) encounter, and c) departure. This paper focuses on the approach phase of the comet encounter. It discusses the strategies used to decrease both cost and risk while maximizing science return, and some of the challenges experienced during operations.

Hojnicki, Jeffrey S.; Kerslake, Thomas W.; Ayres, Mark; Han, Augustina H.; Adamson, Adrian M.
NASA's Constellation Program is embarking on a new era of space exploration, returning to the Moon and beyond. The Constellation architecture will consist of a number of new spacecraft elements, including the Orion crew exploration vehicle, the Altair lunar lander, and the Ares family of launch vehicles. Each of these new spacecraft elements will need an electric power system, and those power systems will need to be designed to fulfill unique mission objectives and to survive the unique environments encountered on a lunar exploration mission. As with any new spacecraft power system development, preliminary design work will rely heavily on analysis to select the proper power technologies, size the power system components, and predict the system performance throughout the required mission profile. Constellation projects have the advantage of leveraging power system modeling developments from other recent programs such as the International Space Station (ISS) and the Mars Exploration Program. These programs have developed mature power system modeling tools which can be quickly modified to meet the unique needs of Constellation, and thus provide a rapid capability for detailed power system modeling that otherwise would not exist.

Gerberich, Matthew W.; Oleson, Steven R.
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model against past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data from completed NASA missions.

Ruff, Gary A.; Urban, David
Our understanding of the fire safety risk in manned spacecraft has been limited by the small scale of the testing we have been able to conduct in low gravity. Fire growth and spread cannot be expected to scale linearly with sample size, so we cannot make accurate predictions of the behavior of realistic-scale fires in spacecraft based on the limited low-g testing to date.
As a result, spacecraft fire safety protocols are necessarily very conservative and costly. Future crewed missions are expected to be longer in duration than previous exploration missions outside of low-Earth orbit and, accordingly, more complex in terms of operations, logistics, and safety. This will increase the challenge of ensuring a fire-safe environment for the crew throughout the mission. Given our fundamental uncertainty about the behavior of fires in low gravity, the need for realistic-scale testing at reduced gravity has been demonstrated. To address this concern, a spacecraft fire safety research project is underway to reduce the uncertainty and risk in the design of spacecraft fire safety systems by testing at nearly full scale in low gravity. This project is supported by the NASA Advanced Exploration Systems Program Office in the Human Exploration and Operations Mission Directorate. The activity of this project is supported by an international topical team of fire experts from other space agencies to maximize the utility of the data and to ensure the widest possible scrutiny of the concept. The large-scale space flight experiment will be conducted on three missions, each in an Orbital Sciences Corporation Cygnus vehicle after it has deberthed from the ISS. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew allows the fire products to be released into the cabin. The tests will be fully automated, with the data downlinked at the conclusion of the test before the Cygnus vehicle reenters the atmosphere.

Mitchell, Jason W.; Baldwin, Philip J.; Kurichh, Rishi; Naasz, Bo J.; Luquette, Richard J.
The Formation Flying Testbed (FFTB) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility is evolving as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in the loop, have expanded to include S-band Radio Frequency (RF) modems for inter-spacecraft communication and ranging. To enable realistic simulations that require RF ranging sensors for relative navigation, a mechanism is needed to buffer the RF signals exchanged between spacecraft that accurately emulates the dynamic environment through which the RF signals travel, including the effects of the medium, moving platforms, and radiated power. The Path Emulator for Radio Frequency Signals (PERFS), currently under development at NASA GSFC, provides this capability. The function and performance of a prototype device are presented.

Tinto, Massimo; Estabrook, F. B.; Armstrong, J. W.
Space-borne interferometric gravitational wave detectors, sensitive in the low-frequency (millihertz) band, will fly in the next decade. In these detectors, the spacecraft-to-spacecraft light-travel times will necessarily be unequal, time-varying, and (due to aberration) different on the up and down links. The reduction of data from moving interferometric laser arrays in solar orbit will in fact encounter nonsymmetric up- and down-link light time differences that are about 100 times larger than has previously been recognized.
The time-delay interferometry (TDI) technique uses knowledge of these delays to cancel the otherwise dominant laser phase noise and yields a variety of data combinations sensitive to gravitational waves. Under the assumption that the (different) up- and down-link time delays are constant, we derive the TDI expressions for those combinations that rely only on four interspacecraft phase measurements. We then turn to the general problem that encompasses time dependence of the light-travel times along the laser links. By introducing a set of noncommuting time-delay operators, we show that there exists a quite general procedure for deriving generalized TDI combinations that account for the effects of time dependence of the arms. By applying our approach we are able to re-derive the 'flex-free' expression for the unequal-arm Michelson combination X_1, and we obtain the generalized expressions for the TDI combinations called relay, beacon, monitor, and symmetric Sagnac.

Shanklin, Nathaniel; West, Joseph
A variation of the recently introduced Trolley Paradox, itself a variation of the Ehrenfest Paradox, is presented. In the Trolley Paradox, a "stationary" set of observers tracking a wheel rolling with a constant velocity find that the wheel travels further than its rest-length circumference during one revolution of the wheel, despite the fact that the Lorentz-contracted circumference is less than its rest value. In the variation presented, a rectangular spacecraft with onboard observers moves with constant velocity and is circumnavigated by several small "sloops" forming teams of inertial observers. This whole procession moves relative to a set of "stationary" Earth observers. Two cases are presented: one in which the sloops are evenly spaced according to the spacecraft observers, and one in which the sloops are evenly spaced according to the Earth observers. These two cases, combined with the rectangular geometry and an emphasis on what is seen by, and what is measured by, each set of observers, are very helpful in sorting out the apparent contradictions. To aid in visualization, stationary representations in Excel, along with animations in Visual Python and Unity, are presented. The analysis is suitable for undergraduate physics majors.

Scott, John H.
The fuel cell uses a catalyzed reaction between a fuel and an oxidizer to directly produce electricity. Its high theoretical efficiency and low-temperature operation made it a subject of much study upon its invention ca. 1900, but its relatively high life-cycle costs kept it a "solution in search of a problem" for its first half century. The first problem for which fuel cells presented a cost-effective solution was, starting in the 1960s, that of a power source for NASA's manned spacecraft. NASA thus invested, and continues to invest, in the development of fuel cell power plants for this application. However, starting in the mid-1990s, prospective environmental regulations have driven increased governmental and industrial interest in "green power" and the "Hydrogen Economy." This has in turn stimulated greatly increased investment in fuel cell development for a variety of terrestrial applications. This investment is bringing about notable advances in fuel cell technology, but these advances are often in directions quite different from those needed for NASA spacecraft applications. This environment thus presents both opportunities and challenges for NASA's manned space program.

Bretagne, J.-M.; Fragnito, M.; Massier, S.
In recent years, the significant increase in satellite broadcasting demand, together with the dawn of wideband communication, has given great impetus to the telecommunication satellite market. This demand has translated into an increase in telecom satellite orders from operators (such as SES/Astra, Eutelsat, Intelsat, Inmarsat, EuroSkyWay, etc.) to manufacturers worldwide. The largest part of these orders consists of geostationary platforms, which grow more and more in mass (over 5 tons) due to ever longer required lifetimes (up to 20 years) and become more complex due to the need to implement an ever larger number of repeaters, antenna reflectors, and feeds. In this frame, the mechanical design and verification of these large spacecraft become difficult and ambitious at the same time, driven by the dry-mass limitation objective. With the Finite Element Method (FEM), and on the basis of the telecom satellite heritage of a world-leading manufacturer such as Alcatel Space Industries, it is nowadays possible to model these spacecraft in a realistic and confident way in order to identify the main global dynamic aspects such as mode shapes, mass participation, and dynamic responses. On the other hand, one of the main aims is to identify early in a program the most critical aspects of the system behavior in the launch dynamic environment, such as possible dynamic coupling between the different subsystems and secondary structures of the spacecraft (large deployable reflectors, thrusters, etc.). To this aim, a numerical method has been developed in the frame of the Alcatel SPACEBUS family program using MSC/Nastran capabilities, and it is presented in this paper. The method is based on spacecraft sub-structuring and strain energy calculation, and it mainly consists of two steps: 1) calculation of the subsystem modal strain energy ratio (with respect to the global strain energy); and 2) subsystem strain energy calculation for each mode according to the base-driven

Cataldo, Robert L.
The NASA Glenn Research Center (GRC) Radioisotope Power System Program Office (RPSPO) sponsored two studies led by its mission analysis team. The studies were performed by NASA GRC's Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team. Typically, a complete top-level design reference mission (DRM) study is performed, assessing conceptual spacecraft design, launch mass, trajectory, science strategy, and subsystem design such as power, propulsion, structures, and thermal.

Suggs, Robert M.; Moser, D. E.
The MSFC lunar impact monitoring program began in 2006 in support of environment definition for the Constellation (return to the Moon) program. Work was continued by the Meteoroid Environment Office after Constellation's cancellation. Over 330 impacts have been recorded. A paper published in Icarus reported on the first 5 years of observations and 126 calibrated flashes. Icarus: http://www.sciencedirect.com/science/article/pii/S0019103514002243; ArXiv: http://arxiv.org/abs/1404.6458. A NASA Technical Memorandum on flash locations is in press.

Tran, Peter B.; Okimura, Takeshi
NTTS is the IT infrastructure for the Agency's Technology Transfer (T2) program, containing a portfolio of 60,000+ technologies and supporting all ten NASA field centers and HQ.
It is the enterprise IT system for facilitating the Agency's technology transfer process, which includes reporting of new technologies (e.g., technology invention disclosures, NF1679), protecting intellectual property (e.g., patents), and commercializing technologies through various technology licenses, software releases, spinoffs, and success stories, using custom-built workflow, reporting, data consolidation, integration, and search engines.

Jannazo, Mary Ann
The services of NASA's Technology Utilization Program are detailed, and highlights of spinoff products in various stages of completion are described. Areas discussed include: Stirling engines for automotive applications, klystron tubes used to reduce power costs at UHF television stations, sports applications of riblet film (e.g., boat racing), reinforced plastic for high-temperature applications, coating technology appropriate for applications such as the renovation of the Statue of Liberty, and medical uses of fuel pump technology (e.g., heart pumps).

Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)
Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy, and uniform program operating plans.

Bhasin, Kul; Hayden, Jeffrey L.
NASA's future communications services will be supplied through a space communications network that mirrors the terrestrial Internet in its capabilities and flexibility. The notional requirements for future data gathering and distribution by this Space Internet have been gathered from NASA's Earth Science Enterprise (ESE), Human Exploration and Development of Space (HEDS), and the Space Science Enterprise (SSE). This paper describes a communications infrastructure for the Space Internet, the architectures within the infrastructure, and the elements that make up the architectures. The architectures meet the requirements of the enterprises beyond 2010 with Internet-compatible technologies and functionality. The elements of an architecture include the backbone, access, inter-spacecraft, and proximity communication parts. From the architectures, the technologies that have the most impact and are critical for implementation have been identified.

Stebbins, R. T.
Over the last year, the NASA half of the joint LISA project has focused its efforts on responding to a major review and on advancing the formulation and technology development of the mission. The NAS/NRC Beyond Einstein program assessment review will be described, including the outcome. The basis of the LISA science requirements has changed from detection determined by integrated signal-to-noise ratio to observation determined by uncertainty in the estimation of astrophysical source parameters.
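The distinction between the two requirement bases can be made concrete with a standard one-parameter Fisher-information estimate: for a template of known shape with unknown amplitude A in white noise, the 1-sigma amplitude uncertainty is sigma_A = A/SNR. A toy numerical check (the waveform and noise values below are invented, not LISA numbers):

```python
import numpy as np

# One-parameter Fisher-information toy: amplitude A of a known waveform
# s(t) in white Gaussian noise of standard deviation sigma_n per sample.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)
s = np.sin(2 * np.pi * 50 * t)        # known template shape (invented)
A_true, sigma_n = 2.0, 1.0

snr = A_true * np.sqrt(np.sum(s**2)) / sigma_n
sigma_A = A_true / snr                # Fisher 1-sigma on the amplitude
print(f"SNR = {snr:.1f}, predicted sigma_A = {sigma_A:.4f}")

# Empirical check: maximum-likelihood amplitude over noise realizations.
estimates = []
for _ in range(500):
    d = A_true * s + rng.normal(0, sigma_n, t.size)
    estimates.append(np.dot(d, s) / np.dot(s, s))
print(f"empirical scatter      = {np.std(estimates):.4f}")
```

A detection criterion cares only that the SNR clears a threshold; a parameter-estimation criterion, as in the revised requirements, constrains sigma_A (and its multi-parameter generalizations) directly.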
The NASA team has further defined the spacecraft bus design, participated in many design trade studies, and advanced the requirements flow-down and the associated current best estimates of performance. Recent progress in technology development is also summarized. Roberta Veloso Garcia An analytical approach for the attitude propagation of spin-stabilized satellites is presented, considering the influence of the residual magnetic torque and the eddy currents torque. Two approaches are assumed to examine the influence of external torques acting during the motion of the satellite, with the Earth's magnetic field described by the quadrupole model. In the first approach, only the residual magnetic torque is included in the equations of motion, with the satellite in a circular or elliptical orbit. In the second approach, only the eddy currents torque is analyzed, with the satellite in a circular orbit. The inclusion of these torques in the dynamic equations of spin-stabilized satellites yields the conditions to derive an analytical solution. The solutions show that the residual torque does not affect the magnitude of the spin velocity, contributing only to the precession and drift of the spacecraft's spin axis, while the eddy currents torque causes an exponential decay of the angular velocity magnitude. Numerical simulations performed with data from the Brazilian satellites SCD1 and SCD2 show the period over which the analytical solution can be used for attitude propagation, within the dispersion range of the attitude determination system performance of the Satellite Control Center of Brazil's National Research Institute. Yang, Yaguang; Zhou, Zhiqiang Kalman filter based spacecraft attitude estimation has been used in some high-profile missions and has been widely discussed in the literature. While some models in spacecraft attitude estimation include spacecraft dynamics, most do not. To the best of our knowledge, there is no comparison of which model is the better choice. In this paper, we discuss the reasons why spacecraft dynamics should be considered in the Kalman filter based spacecraft attitude estimation problem. We also propose a reduced quaternion spacecraft dynamics model which admits additive noise. The geometry of the reduced quaternion model and the additive noise are discussed. This treatment is more elegant mathematically and easier to compute. We use simulation examples to verify our claims. Hoang, Thiem (Korea Astronomy and Space Science Institute, Daejeon, Republic of Korea); Loeb, Abraham (Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA, United States) A relativistic spacecraft of the type envisioned by the Breakthrough Starshot initiative will inevitably become charged through collisions with interstellar particles and UV photons. Interstellar magnetic fields would therefore deflect the trajectory of the spacecraft. We calculate the expected deflection for typical interstellar conditions. We also find that the charge distribution of the spacecraft is asymmetric, producing an electric dipole moment. The interaction between the moving electric dipole and the interstellar magnetic field is found to produce a large torque, which can result in fast oscillation of the spacecraft around the axis perpendicular to the direction of motion, with a period of ∼0.5 hr. We then study the spacecraft rotation arising from impulsive torques delivered by dust bombardment. Finally, we discuss the effect of the spacecraft rotation and suggest several methods to mitigate it.
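The closed-form behavior reported in the Garcia entry above (precession and drift under the residual magnetic torque, exponential spin-rate decay under the eddy currents torque) lends itself to a quick numerical check. The sketch below is illustrative only; the decay time constant, precession rate, and initial spin are made-up values, not SCD1/SCD2 parameters.

```python
import numpy as np

# Assumed, illustrative parameters (not SCD1/SCD2 values)
omega0 = 2.0 * np.pi           # initial spin rate, rad/s (1 rev/s)
tau = 120.0 * 86400.0          # eddy-current decay time constant, s (~120 days)
prec_rate = 0.5 * np.pi / 180  # spin-axis precession rate from residual torque, rad/day

t = np.linspace(0.0, 365.0, 366)  # days

# Eddy currents torque: spin magnitude decays exponentially, |w|(t) = w0 * exp(-t/tau)
spin = omega0 * np.exp(-t * 86400.0 / tau)

# Residual magnetic torque: spin magnitude unchanged, spin axis precesses;
# reduced here to a steadily advancing phase angle about the mean field direction
precession_angle = prec_rate * t

print(f"Spin after 1 year: {spin[-1]:.3f} rad/s ({spin[-1] / omega0:.1%} of initial)")
print(f"Accumulated precession after 1 year: {np.degrees(precession_angle[-1]):.1f} deg")
```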
... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (12-006)] NASA Advisory Council; Commercial... meeting. SUMMARY: In accordance with the Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces a meeting of the Commercial Space... ... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (12-027)] NASA Advisory Council; Commercial... Meeting. SUMMARY: In accordance with the Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces a meeting of the Commercial Space... ... Donald Miller, Office of International and Interagency Relations, (202) 358-1527, National Aeronautics... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (10-090)] NASA International Space Station... meeting. SUMMARY: In accordance with the Federal Advisory Committee Act, Public Law 92-463, as amended... The grand opening of NASA's new, world-class laboratory for research into future space transportation technologies, located at the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, took place in July 2004. The state-of-the-art Propulsion Research Laboratory (PRL) serves as a leading national resource for advanced space propulsion research. Its purpose is to conduct research that will lead to the creation and development of innovative propulsion technologies for space exploration. The facility is the epicenter of the effort to move the U.S. space program beyond the confines of conventional chemical propulsion into an era of greatly improved access to space and rapid transit throughout the solar system. The laboratory is designed to accommodate researchers from across the United States, including scientists and engineers from NASA, the Department of Defense, the Department of Energy, universities, and industry. The facility, with 66,000 square feet of usable laboratory space, features a high degree of experimental capability. Its flexibility allows it to address a broad range of propulsion technologies and concepts, such as plasma, electromagnetic, thermodynamic, and propellant propulsion. An important area of emphasis is the development and utilization of advanced energy sources, including highly energetic chemical reactions, solar energy, and processes based on fission, fusion, and antimatter. The Propulsion Research Laboratory is vital for developing the advanced propulsion technologies needed to open up the space frontier, and it sets the stage for research that could revolutionize space transportation for a broad range of applications. Billingham, J.; Brocker, D. H. In 1959, it was proposed that a sensible way to conduct interstellar communication would be to use radio at or near the frequency of hydrogen. In 1960, the first Search for Extraterrestrial Intelligence (SETI) was conducted using a radiotelescope at Green Bank in West Virginia. Since 1970, NASA has systematically developed a definitive program to conduct a sophisticated search for evidence of extraterrestrial intelligent life. The basic hypothesis is that life may be widespread in the universe, and that in many instances extraterrestrial life may have evolved into technological civilizations. The underlying scientific arguments are based on continuously improving knowledge of astronomy and astrophysics, especially star system formation, and of planetary science, chemical evolution, and biological evolution. If only one in a million sun-like stars in our galaxy harbors species with cognitive intelligence, then there are 100,000 civilizations in the Milky Way alone. The fields of radio astronomy, digital electronic engineering, spectrum analysis, and signal detection have advanced rapidly in the last twenty years and now allow sophisticated systems to be built in order to attempt the detection of extraterrestrial intelligence signals. In concert with the scientific and engineering communities, NASA has developed, over the last several years, a Microwave Observing Project whose goal is to design, build, and operate SETI systems during the decade of the nineties in pursuit of the goal of signal detection. The Microwave Observing Project is now approved and underway. There are two major components in the project: the Target Search Element and the Sky Survey Element.
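The population scaling quoted in the SETI abstract above is easy to make explicit. The sketch below assumes roughly 10^11 sun-like stars in the galaxy, which is the figure implied by the abstract's arithmetic rather than a number stated in it.

```python
# Implied arithmetic behind "one in a million -> 100,000 civilizations".
# The star count is an assumption inferred from the abstract, not a quoted figure.
sunlike_stars_in_galaxy = 1e11     # assumed order of magnitude
fraction_with_intelligence = 1e-6  # "one in a million"

civilizations = sunlike_stars_in_galaxy * fraction_with_intelligence
print(f"Estimated civilizations in the Milky Way: {civilizations:,.0f}")  # 100,000
```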
Holley, Daniel C.; Haight, Kyle G.; Lindstrom, Ted The purpose of this study was to expose a range of naive individuals to the NASA Data Archive and to obtain feedback from them, with the goal of learning how useful people with varied backgrounds would find the Archive for research and other purposes. We processed 36 subjects in four experimental categories, designated in this report as C+R+, C+R-, C-R+ and C-R-, for computer-experienced researchers, computer-experienced non-researchers, non-computer-experienced researchers, and non-computer-experienced non-researchers, respectively. This report includes an assessment of general patterns of subject responses to the various aspects of the NASA Data Archive. Some of the aspects examined were interface-oriented, addressing such issues as whether the subject was able to locate information, figure out how to perform desired information retrieval tasks, and so on. Other aspects were content-related. In making these assessments, answers given to different questions were sometimes combined. This practice reflects the tendency of the subjects to provide answers expressing their experiences across question boundaries. Patterns of response are cross-examined by subject category in order to bring out deeper understandings of why subjects reacted the way they did to the archive. After the general assessment, there is a more extensive summary of the replies received from the test subjects. Bao, Han P. Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase, when different design alternatives are evaluated not only for their performance characteristics but also for their methods of fabrication, which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs.
The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyetheretherketone composites, graphite-bismaleimide composites, graphite-polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baselines against which the other materials are compared. These two conventional materials are aircraft aluminum alloys of the 2000 and 7000 series, and graphite-epoxy composite T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next, the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not exhaustive. Its primary use is to make the vehicle designer Bonner, J. K.; Tudryn, Carissa D.; Choi, Sun J.; Eulogio, Sebastian E.; Roberts, Timothy J. Legitimate concern exists regarding sending spacecraft and their associated hardware to solar system bodies where they could possibly contaminate the body's surface with terrestrial microorganisms. The NASA-approved guidelines for sterilization set forth in NPG 8020.12C, which is consistent with the biological contamination control objectives of the Committee on Space Research (COSPAR), recommend subjecting the spacecraft and its associated hardware to dry heat: a regimen that could potentially employ a temperature of 110°C for up to 200 hours. Such a temperature exposure could prove detrimental to the spacecraft electronics. The stimulated growth of intermetallic compounds (IMCs) in metallic interconnects and/or thermal degradation of the organic materials composing much of the hardware could take place over a prolonged temperature regimen. Such detrimental phenomena would almost certainly compromise the integrity and reliability of the electronics. Investigation of sterilization procedures in the medical field suggests that hydrogen peroxide (H2O2) gas plasma (HPGP) technology can effectively function as an alternative to heat sterilization, especially for heat-sensitive items. Treatment with isopropyl alcohol (IPA) in liquid form prior to exposure of the hardware to HPGP should also prove beneficial. Although IPA is not a sterilant, it is frequently used as a disinfectant because of its bactericidal properties. The use of IPA in electronics cleaning is widely recognized, and it has been utilized for many years with no adverse effects reported. In addition, IPA is the principal ingredient of the test fluid used in ionic contamination testers to assess the amount of ionic contamination found on the surfaces of printed wiring assemblies. This paper will set forth experimental data confirming the feasibility of the IPA/H2O2 approach to reach acceptable microbial reduction (MR) levels of spacecraft electronic hardware. In addition, a proposed process flow in Boyer, Jeffrey S. Since Mariner, NASA-JPL planetary missions have been supported by ground software to plan and design remote sensing science observations. The software used by the science and sequence designers to plan and design observations has evolved with mission and technological advances.
The original program, PEGASIS (Mariners 4, 6, and 7), was re-engineered as POGASIS (Mariner 9, Viking, and Mariner 10), and again later as POINTER (Voyager and Galileo). Each of these programs was developed under technological, political, and fiscal constraints which limited their adaptability to other missions and spacecraft designs. Implementation of a multi-mission tool, SEQ POINTER, under the auspices of the JPL Multimission Operations Systems Office (MOSO), is in progress. This version has been designed to address the limitations experienced on previous versions as they were being adapted to new missions and spacecraft. The tool has been modularly designed with subroutine interface structures to support interchangeable celestial body and spacecraft definition models. The computational and graphics modules have also been designed to interface with data collected from previous spacecraft, or on-going observations, which describe the surface of each target body. These enhancements make SEQ POINTER a candidate for low-cost mission usage, when a remote sensing science observation design capability is required. The current and planned capabilities of the tool will be discussed. The presentation will also include a 5-10 minute video demonstrating the capabilities of a proto-Cassini Project version that was adapted to test the tool. The work described in this abstract was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
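Observation-design tools of the POINTER/SEQ POINTER lineage are built around pointing geometry of this kind. The helper below is purely illustrative (hypothetical names, not SEQ POINTER code) and assumes spacecraft, target, and Sun positions expressed in a single inertial frame such as J2000.

```python
import numpy as np

def observation_geometry(sc_pos_km, target_pos_km, sun_pos_km):
    """Boresight unit vector and solar phase angle for a remote-sensing
    observation. All positions are assumed to be in one inertial frame, in km;
    the function and its parameter names are illustrative only."""
    boresight = target_pos_km - sc_pos_km
    boresight = boresight / np.linalg.norm(boresight)

    target_to_sun = sun_pos_km - target_pos_km
    target_to_sun = target_to_sun / np.linalg.norm(target_to_sun)

    # Phase angle: the Sun-target-spacecraft angle, a standard observation-design
    # quantity (0 deg = target fully lit as seen from the spacecraft)
    cos_phase = np.clip(np.dot(-boresight, target_to_sun), -1.0, 1.0)
    phase_deg = np.degrees(np.arccos(cos_phase))
    return boresight, phase_deg
```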
Requirements for the ITRF have increased dramatically since the 1980s. The most stringent requirement comes from critical sea level monitoring programs: a global accuracy of 1.0 mm and 0.1 mm/yr stability, a factor of 10 to 20 beyond current capability. Other requirements for the ITRF, coming from ice mass change, ground motion, and mass transport studies, are similar. Current and future satellite missions will have ever-increasing measurement capability and will lead to increasingly sophisticated models of these and other changes in the Earth system. Ground space geodesy networks with enhanced measurement capability will be essential to meeting the ITRF requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. NASA has embarked on a Space Geodesy Program with a long-range goal to build, deploy, and operate a next-generation NASA Space Geodetic Network (SGN). The plan is to build integrated, multi-technique next-generation space geodetic observing systems as the core contribution to a global network designed to produce the higher quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth Observing spacecraft. Phase 1 of this project has been funded to (1) establish and demonstrate a next-generation prototype integrated Space Geodetic Station at Goddard's Geophysical and Astronomical Observatory (GGAO), including next-generation SLR and VLBI systems along with modern GNSS and DORIS; (2) complete ongoing network design studies that describe the appropriate number and distribution of next-generation Space Geodetic Stations for an improved global network; (3) upgrade analysis capability to handle the next-generation data; (4) implement a modern Christian, John A.; Cryan, Scott P. This paper provides a survey of modern LIght Detection And Ranging (LIDAR) sensors from the perspective of how they can be used for spacecraft relative navigation. In addition to LIDAR technology commonly used in space applications today (e.g., scanning, flash), this paper reviews emerging LIDAR technologies gaining traction in other, non-aerospace fields. The discussion includes an overview of sensor operating principles and specific pros and cons for each type of LIDAR. This paper provides a comprehensive review of LIDAR technology as applied specifically to spacecraft relative navigation.
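As a reminder of the basic principle behind the ranging sensors surveyed above, a pulsed time-of-flight LIDAR converts a round-trip light time into range. A minimal sketch (the example timing value is invented):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_time_s):
    """Range from a pulsed-LIDAR round-trip time: the pulse travels out and
    back, so the one-way range is half the light travel distance."""
    return 0.5 * C * round_trip_time_s

# Invented example: a ~6.67 microsecond round trip is roughly 1 km of range.
print(f"{tof_range_m(6.67e-6):.1f} m")
```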
The problem of orbital rendezvous and docking has been a consistent challenge for complex space missions since before the Gemini 8 spacecraft performed the first successful on-orbit docking of two spacecraft in 1966. Over the years, a great deal of effort has been devoted to advancing technology associated with all aspects of the rendezvous, proximity operations, and docking (RPOD) flight phase. After years of perfecting the art of crewed rendezvous with the Gemini, Apollo, and Space Shuttle programs, NASA began investigating the problem of autonomous rendezvous and docking (AR&D) to support a host of different mission applications. Some of these applications include autonomous resupply of the International Space Station (ISS), robotic servicing/refueling of existing orbital assets, and on-orbit assembly. The push towards a robust AR&D capability has led to an intensified interest in a number of different sensors capable of providing insight into the relative state of two spacecraft. The present work focuses on exploring the state of the art in one of these sensors: LIght Detection And Ranging (LIDAR) sensors. It should be noted that the military community frequently uses the acronym LADAR (LAser Detection And Ranging) to refer to what this paper calls LIDARs. A LIDAR is an active remote sensing device that is typically used in space applications to obtain the range to one or more Sheikh, Suneel; Hanson, John The current primary method of deep-space navigation is the NASA Deep Space Network (DSN). High-performance navigation is achieved using Delta Differential One-Way Range techniques that utilize simultaneous observations from multiple DSN sites and incorporate observations of quasars near the line of sight to a spacecraft in order to improve the range and angle measurement accuracies. Over the past four decades, x-ray astronomers have identified a number of x-ray pulsars with pulsed emissions having stabilities comparable to atomic clocks. The x-ray pulsar-based navigation and time determination (XNAV) system uses phase measurements from these sources to establish autonomously the position of the detector, and thus the spacecraft, relative to a known reference frame, much as the Global Positioning System (GPS) uses phase measurements from radio signals from several satellites to establish the position of the user relative to an Earth-centered fixed frame of reference. While a GPS receiver uses an antenna to detect the radio signals, XNAV uses a detector array to capture the individual x-ray photons from the x-ray pulsars. The navigation solution relies on detailed x-ray source models, signal processing, navigation and timing algorithms, and analytical tools that form the basis of an autonomous XNAV system. Through previous XNAV development efforts, some techniques have been established to utilize a pulsar pulse time-of-arrival (TOA) measurement to correct a position estimate. One well-studied approach, based upon Kalman filter methods, optimally adjusts a dynamic orbit propagation solution based upon the offset between measured and predicted pulse TOAs. In this delta-position estimator scheme, previously estimated values of spacecraft position and velocity are utilized from an onboard orbit propagator. Using these estimated values, the detected arrival times at the spacecraft of pulses from a pulsar are compared to the predicted arrival times defined by the pulsar's pulse Backman, D. E.; Harman, P. K.; Clark, C. NASA's Airborne Astronomy Ambassadors (AAA) is a three-part professional development (PD) program for high school physics and astronomy teachers. The AAA experience consists of: (1) blended-learning professional development composed of webinars, asynchronous content learning, and a series of hands-on workshops; (2) a STEM immersion experience at NASA Armstrong Flight Research Center's B703 science research aircraft facility in Palmdale, California; and (3) ongoing participation in the AAA community of practice (CoP) connecting participants with astrophysics and planetary science Subject Matter Experts (SMEs). The SETI Institute (SI) is partnering with school districts in Santa Clara and Los Angeles Counties during the AAA program's "incubation" period, calendar years 2016 through 2018. AAAs will be selected by the school districts, based on criteria developed during spring 2016 focus group meetings led by the program's external evaluator, WestEd. Teachers with 3+ years of teaching experience who are assigned to teach at least 2 sections in any combination of the high school courses Physics (non-AP), Physics of the Universe (California integrated model), Astronomy, or Earth & Space Sciences are eligible. Partner districts will select at least 48 eligible applicants with SI oversight. WestEd will randomly assign selected AAAs to group A or group B. Group A will complete PD in January-June of 2017 and then participate in SOFIA science flights during fall 2017 (SOFIA Cycle 5). Group B will act as a control during the 2017-18 school year. Group B will then complete PD in January-June of 2018 and participate in SOFIA science flights in fall 2018 (Cycle 6). Under the current plan, opportunities for additional districts to seek AAA partnerships with SI will be offered in 2018 or 2019.
A nominal two-week AAA curriculum component will be developed by SI for classroom delivery, aligned with selected California Draft Science Framework Disciplinary Core Ideas. 3D display of spacecraft motion using telemetry data received from a satellite in real time is described. Telemetry data are converted to the appropriate form for 3-D display by the real-time preprocessor. Stored playback telemetry data can also be processed for the display. 3D display of spacecraft motion using real telemetry data provides an intuitive comprehension of spacecraft dynamics. Carney, P. C. Advancements in hardware and software technology are summarized with specific emphasis on spacecraft computer capabilities. Available state-of-the-art technology is reviewed and candidate architectures are defined. Zakrzwski, C. M.; Davis, Mitch; Sarmiento, Charles; Bauer, Frank H. (Technical Monitor) The Pulsed Plasma Thruster (PPT) Experiment on the Earth Observing One (EO-1) spacecraft has been designed to demonstrate the capability of a new generation PPT to perform spacecraft attitude control. Results from PPT unit-level radiated electromagnetic interference (EMI) tests led to concerns about potential interference problems with other spacecraft subsystems. Initial plans to address these concerns included firing the PPT at the spacecraft level both in atmosphere, with special ground support equipment, and in vacuum. During the spacecraft-level tests, additional concerns were raised about potential harm to the Advanced Land Imager (ALI). The inadequacy of standard radiated emission test protocols to address pulsed electromagnetic discharges and the lack of resources required to perform compatibility tests between the PPT and an ALI test unit led to changes in the spacecraft-level validation plan. An EMI shield box for the PPT was constructed and validated for spacecraft-level ambient testing. Spacecraft-level vacuum tests of the PPT were deleted. Implementation of the shield box allowed for successful spacecraft-level testing of the PPT while eliminating any risk to the ALI. The ALI demonstration will precede the PPT demonstration to eliminate any possible risk of damage to the ALI from PPT operation. This report concerns an experimental facility, designed and built to allow experiments validating advanced attitude control algorithms for spacecraft in a weightless environment... Brophy, John R.; Larson, Tim The Solar Array System contracts awarded by NASA's Space Technology Mission Directorate are developing solar arrays in the 30 kW to 50 kW power range (beginning of life at 1 AU) that have significantly higher specific powers (W/kg) and much smaller stowed volumes than conventional rigid-panel arrays. The successful development of these solar array technologies has the potential to enable new types of solar electric propulsion (SEP) vehicles and missions. This paper describes a 30-kW electric propulsion vehicle built into an EELV Secondary Payload Adapter (ESPA) ring. The system uses an ESPA ring as the primary structure and packages two 15-kW Megaflex solar array wings, two 14-kW Hall thrusters, a hydrazine Reaction Control Subsystem (RCS), 220 kg of xenon, 26 kg of hydrazine, and an avionics module that contains all of the rest of the spacecraft bus functions and the instrument suite. Direct-drive is used to maximize the propulsion subsystem efficiency and minimize the resulting waste heat and required radiator area.
This is critical for packaging a high-power spacecraft into a very small volume. The fully-margined system dry mass would be approximately 1120 kg. This is not a small dry mass for a Discovery-class spacecraft; for example, the Dawn spacecraft dry mass was only about 750 kg. But the Dawn electric propulsion subsystem could process a maximum input power of 2.5 kW, whereas this spacecraft would process 28 kW, an increase of more than a factor of ten. With direct-drive, the specific impulse would be limited to about 2,000 s, assuming a nominal solar array output voltage of 300 V. The resulting spacecraft would have a beginning-of-life acceleration more than an order of magnitude greater than that of the Dawn spacecraft. Since the spacecraft would be built into an ESPA ring, it could be launched as a secondary payload to a geosynchronous transfer orbit, significantly reducing the launch costs for a planetary spacecraft. The SEP system would perform the escape Software was developed to characterize the drag in each of the Cassini spacecraft's Reaction Wheel Assemblies (RWAs) to determine the RWA friction parameters. This tool measures the drag torque of RWAs not only at high spin rates (greater than 250 RPM), but also at low spin rates (less than 250 RPM), where the elastohydrodynamic boundary layer in the bearings is absent. RWA rate and drag torque profiles as functions of time are collected via telemetry once every 4 seconds and once every 8 seconds, respectively. Intermediate processing steps single out the coast-down regions. A nonlinear model for the drag torque as a function of RWA spin rate is incorporated in order to characterize the low spin rate regime. The tool then uses a nonlinear parameter optimization algorithm based on the Nelder-Mead simplex method to determine the viscous coefficient, the Dahl friction, and the two parameters that account for the low spin-rate behavior. Fimmel, R. O.; Baker, T. E. The MULTIPAC is a central data system developed for deep-space probes with the distinctive feature that it may be repaired during flight via command and telemetry links by reprogramming around the failed unit. The computer organization uses pools of identical modules which the program organizes into one or more computers called processors. The interaction of these modules is dynamically controlled by the program rather than by hardware. In the event of a failure, new programs are entered which reorganize the central data system with a somewhat reduced total processing capability aboard the spacecraft. Emphasis is placed on the evolution of the system architecture and the final overall system design rather than the specific logic design. Prior to the Halley flybys in 1986, the distributions of cometary dust grains with particle size were approximated using models which provided reasonable fits to the dynamics of dust tails, anti-tails, and infrared spectra. These distributions have since been improved using fluence data (i.e., particle fluxes integrated over time along the flyby trajectory) from three spacecraft. The fluence-derived distributions are appropriate for comparison with simultaneous infrared photometry (from Earth) because they sample the particles in the same way as the IR data do (along the line of sight) and because they are directly proportional to the concentration distribution in that region of the coma which dominates the IR emission.
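Returning to the Cassini reaction-wheel entry above: the drag model and Nelder-Mead fit it describes can be sketched compactly. The model form below (viscous plus Dahl friction plus an exponential low-rate term) is an assumed stand-in for the tool's actual low-spin-rate parameterization, and the initial guesses and synthetic data are invented.

```python
import numpy as np
from scipy.optimize import minimize

def drag_torque(omega, p):
    """Illustrative RWA drag model: viscous term + Dahl (Coulomb-like) friction
    + a low-rate term standing in for the loss of the EHD boundary layer.
    p = [c_viscous, t_dahl, a_low, w_low] -- a hypothetical parameterization."""
    c, t_dahl, a_low, w_low = p
    return (c * omega
            + t_dahl * np.sign(omega)
            + a_low * np.sign(omega) * np.exp(-np.abs(omega) / w_low))

def fit_drag(omega_meas, torque_meas):
    """Fit the four parameters to coast-down telemetry with Nelder-Mead."""
    def cost(p):
        return np.sum((drag_torque(omega_meas, p) - torque_meas) ** 2)
    x0 = np.array([1e-6, 1e-3, 1e-3, 10.0])  # rough initial guess
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x

# Synthetic coast-down data as a smoke test (invented truth parameters)
w = np.linspace(300.0, 1.0, 200) * 2 * np.pi / 60  # rad/s, 300 RPM down to 1 RPM
tq = drag_torque(w, [2e-6, 5e-4, 8e-4, 5.0])
print(fit_drag(w, tq))
```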
We present the application of a numerical method to correct electron moments calculated on board spacecraft for the effects of potential broadening and energy range truncation. Assuming a shape for the natural distribution of the ambient plasma and employing the scalar approximation, the on-board moments can be represented as non-linear integral functions of the underlying distribution. We have implemented an algorithm which inverts this system successfully over a wide range of parameters for an assumed underlying drifting Maxwellian distribution. The outputs of the solver are the corrected electron plasma temperature Te, density Ne, and velocity vector Ve. We also make an estimate of the temperature anisotropy A of the distribution. We present corrected moment data from Cluster's PEACE experiment for a range of plasma environments and make comparisons with electron and ion data from other Cluster instruments, as well as the equivalent ground-based calculations using full 3-D distribution PEACE telemetry. Candey, Robert M.; Chimiak, Reine A.; Harris, Bernard T. Tool for Interactive Plotting, Sonification, and 3D Orbit Display (TIPSOD) is a computer program for generating interactive, animated, four-dimensional (space and time) displays of spacecraft orbits. TIPSOD utilizes the programming interface of the Satellite Situation Center Web (SSCWeb) services to communicate with the SSC logic and database via the open protocols of the Internet. TIPSOD is implemented in Java 3D and effects an extension of the preexisting SSCWeb two-dimensional static graphical displays of orbits. Orbits can be displayed in any or all of the following seven reference systems: true-of-date (an inertial system), J2000 (another inertial system), geographic, geomagnetic, geocentric solar ecliptic, geocentric solar magnetospheric, and solar magnetic. In addition to orbits, TIPSOD computes and displays Sibeck's magnetopause and Fairfield's bow-shock surfaces. TIPSOD can be used by the scientific community as a means of projection or interpretation. It also has potential as an educational tool. Williams, W. P. The next generation of studies of the Inmarsat service are outlined, covering traffic forecasting studies, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting will require future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian Ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak-hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc.), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.
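The Erlang formula mentioned in the Inmarsat study above maps offered traffic to a required channel count. A minimal sketch of the classic Erlang B blocking computation, using the standard numerically stable recurrence (the traffic figure in the example is invented):

```python
def erlang_b(traffic_erlangs, channels):
    """Blocking probability for 'channels' servers offered 'traffic_erlangs'.
    Uses the stable recurrence B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def channels_required(traffic_erlangs, grade_of_service=0.01):
    """Smallest channel count meeting the blocking target."""
    n = 1
    while erlang_b(traffic_erlangs, n) > grade_of_service:
        n += 1
    return n

print(channels_required(20.0))  # roughly 30 channels for 20 E at 1% blocking
```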
Ryan, Robert E. A plan developed by the Jet Propulsion Laboratory for mission control of unmanned spacecraft is outlined. The new plan replaces a technical matrix organization from which, in the past, project teams were formed to uniquely support a mission. A cost-effective approach was needed to make the best use of limited resources. Mission control is a focal point of operations and a good place to start a multimission concept. Co-location and the sharing of common functions are the keys to obtaining efficiencies at minimum additional risk. For the projects, the major changes are sharing a common operations area and having indirect control of personnel. The plan identifies the still-direct link for the mission control functions. Training is a major element in this plan. Personnel are qualified for a position and certified for a mission. This concept is more easily accepted by new missions than by ongoing missions. Hashmall, Joseph A. This paper describes the alignment calibration of spacecraft High Gain Antennas (HGAs) for three missions. For two of the missions (the Lunar Reconnaissance Orbiter and the Solar Dynamics Observatory) the calibration was performed on orbit. For the third mission (the Global Precipitation Measurement core satellite) a ground simulation of the calibration was performed in a calibration feasibility study. These three satellites provide a range of calibration situations: lunar orbit transmitting to a ground antenna for LRO, geosynchronous orbit transmitting to a ground antenna for SDO, and low Earth orbit transmitting to TDRS satellites for GPM. The calibration results depend strongly on the quality and quantity of calibration data. With insufficient data the calibration function may give erroneous solutions. Manual intervention in the calibration allowed reliable parameters to be generated for all three missions. Cohen, Marc M.; Brody, Adam R. Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during interstellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality. Perry, J. L. Storing hydrogen on board the Space Station presents both safety and logistics problems. Conventional storage using pressurized bottles requires large masses, pressures, and volumes to handle the hydrogen to be used in experiments in the U.S. Laboratory Module and the residual hydrogen generated by the ECLSS. Rechargeable metal hydrides may be competitive with conventional storage techniques. The basic theory of hydride behavior is presented and the engineering properties of LaNi5 are discussed to gain a clear understanding of the potential of metal hydrides for handling spacecraft hydrogen resources.
Applications to Space Station and the safety of metal hydrides are presented and compared to conventional storage. This comparison indicates that metal hydrides may be safer and require lower pressures, less volume, and less mass to store an equivalent mass of hydrogen. The national space programs of the 21st century will require abundant and relatively low-cost power and energy produced by high-reliability, low-mass systems. Advancement of current power-system-related technologies will enable the U.S. to realize increased scientific payload for government missions or increased revenue-producing payload for commercial space endeavors. Autonomous, unattended operation will be a highly desirable characteristic of these advanced power systems. Those space power and energy related technologies which will comprise the spacecraft of the late 1990s and early 2000s will evolve from today's state-of-the-art systems and the long-term technology development programs presently in place. However, to foster accelerated development of the more critical technologies which have the potential for high payoffs, additional programs will be proposed and put in place between now and the end of the century. Such a program is "Spacecraft 2000", which is described in this paper. Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant-pitch trajectories and maximum-angle-of-attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful Sahin, Suemer; Sahin, Haci Mehmet; Acir, Adem The VISTA spacecraft design concept has been proposed for manned or heavy-cargo deep space missions beyond Earth orbit with inertial fusion energy propulsion.
Rocket propulsion is provided by the fusion power deposited in the inertially confined fuel pellet debris, with the help of a magnetic nozzle. The calculations for the radiation shielding have been revised in light of the fact that the highest jet efficiency of the vehicle can be attained only if the propelling plasma has a narrow temperature distribution. The shield mass could be reduced from 600 tons in the original design to 62 tons. Natural and enriched lithium were the principal shielding materials. The allowable nuclear heating in the superconducting magnet coils (up to 5 mW/cm³) is taken as the crucial criterion for dimensioning the radiation shielding structure of the spacecraft. The spacecraft mass is 6000 tons. The total peak nuclear power density in the coils is calculated as ∼5.0 mW/cm³ for a fusion power output of 17,500 MW. The peak neutron heating density is ∼2.0 mW/cm³, and the peak γ-ray heating density is ∼3.0 mW/cm³ (at different points) using natural lithium in the shielding. However, the volume-averaged heat generation in the coils is much lower, namely 0.21, 0.71 and 0.92 mW/cm³ for the neutron, γ-ray, and total nuclear heating, respectively. The coil heating will be slightly lower if highly enriched ⁶Li (90%) is used instead of natural lithium. Peak values are then calculated as 2.05, 2.15 and 4.2 mW/cm³ for the neutron, γ-ray, and total nuclear heating, respectively. The corresponding volume-averaged heat generation in the coils becomes 0.19, 0.58 and 0.77 mW/cm³. Jackson, John E. (Editor); Horowitz, Richard (Editor) The main purpose of the data catalog series is to provide descriptive references to data generated by space science flight missions. The data sets described include all of the actual holdings of the National Space Science Data Center (NSSDC), all data sets for which direct contact information is available, and some data collections held and serviced by foreign investigators, NASA, and other U.S. government agencies. This volume contains narrative descriptions of data sets from low and medium altitude scientific spacecraft and investigations. The following spacecraft series are included: Mariner, Pioneer, Pioneer Venus, Venera, Viking, Voyager, and Helios. Separate indexes to the planetary and interplanetary missions are also provided. Horowitz, Richard (Compiler); Jackson, John E. (Compiler); Cameron, Winifred S. (Compiler) The main purpose of the data catalog series is to provide descriptive references to data generated by space science flight missions. The data sets described include all of the actual holdings of the National Space Science Data Center (NSSDC), all data sets for which direct contact information is available, and some data collections held and serviced by foreign investigators, NASA, and other U.S. government agencies. This volume contains narrative descriptions of planetary and heliocentric spacecraft and associated experiments. The following spacecraft series are included: Mariner, Pioneer, Pioneer Venus, Venera, Viking, Voyager, and Helios. Separate indexes to the planetary and interplanetary missions are also provided. Schofield, Norman J. (Editor); Parthasarathy, R. (Editor); Hills, H. Kent (Editor) The main purpose of the data catalog series is to provide descriptive references to data generated by space science flight missions.
The data sets described include all of the actual holdings of the National Space Science Data Center (NSSDC), all data sets for which direct contact information is available, and some data collections held and serviced by foreign investigators, NASA, and other U.S. government agencies. This volume contains narrative descriptions of data sets from geostationary and high altitude scientific spacecraft and investigations. The following spacecraft series are included: Mariner, Pioneer, Pioneer Venus, Venera, Viking, Voyager, and Helios. Separate indexes to the planetary and interplanetary missions are also provided. Omidyar, Guy C.; Butler, Thomas E.; Laios, Straton C. The NASA Communications (Nascom) Division of the Mission Operations and Data Systems Directorate (MO&DSD) is to undertake a major initiative to develop the Nascom Augmentation (NAUG) network to achieve its long-range service objectives for operational data transport to support the Space Station Freedom Program, the Earth Observing System (EOS), and other projects. The NAUG is the Nascom ground communications network being developed to accommodate the operational traffic of the mid-1990s and beyond. The NAUG network development will be based on the Open Systems Interconnection Reference Model (OSI-RM). This paper describes the NAUG network architecture, subsystems, topology, and services; addresses issues of internetworking the Nascom network with other elements of the Space Station Information System (SSIS); and discusses the operations environment. This paper also notes areas of related research and presents the current conception of how the network will provide broadband services in 1998. Highlights of NASA-sponsored and assisted commercial space activities of 1989 are presented. Industrial R&D in space, centers for the commercial development of space, and new cooperative agreements are addressed in the section on the U.S. private sector in space. In the section on building U.S. competitiveness through technology, the following topics are presented: (1) technology utilization as a national priority; (2) an exploration of benefits; and (3) honoring Apollo-era spinoffs. International and domestic R&D trends and the space sector are discussed in the section on selected economic indicators. Other subjects included in this report are: (1) small business innovation; (2) budget highlights and trends; (3) commercial programs management; and (4) the commercial programs advisory committee. Adair, Jerry R. This paper is a consolidated report on ten major planning and scheduling systems that have been developed by the National Aeronautics and Space Administration (NASA). A description of each system, its components, and how it could potentially be used in private industry is provided. The planning and scheduling technology represented by these systems ranges from activity-based scheduling employing artificial intelligence (AI) techniques to constraint-based, iterative repair scheduling. The space-related application domains in which the systems have been deployed vary from Space Shuttle monitoring during launch countdown to long-term Hubble Space Telescope (HST) scheduling. This paper also describes correlations that exist between the work done on the different planning and scheduling systems. Finally, it documents the lessons learned from the work and research performed in planning and scheduling technology and describes the areas where future work will be conducted.
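As a flavor of the constraint-based, iterative repair scheduling named in the survey above, here is a deliberately tiny sketch. The task set, the single no-overlap constraint, and the random repair move are all invented for illustration; production systems of the kind surveyed use rich constraint networks and informed repair heuristics.

```python
import random

# Each task is (name, start, duration); the single constraint here is "no overlap".
def conflicts(schedule):
    """Return pairs of tasks that overlap in time."""
    bad = []
    for i, (n1, s1, d1) in enumerate(schedule):
        for n2, s2, d2 in schedule[i + 1:]:
            if s1 < s2 + d2 and s2 < s1 + d1:
                bad.append((n1, n2))
    return bad

def iterative_repair(schedule, horizon=100, max_iters=1000, seed=0):
    """Start from a (possibly infeasible) schedule and repeatedly move one
    task involved in a conflict to a random new start time, keeping the move
    only if it does not increase the number of conflicts."""
    rng = random.Random(seed)
    sched = list(schedule)
    for _ in range(max_iters):
        bad = conflicts(sched)
        if not bad:
            return sched                      # feasible schedule found
        name = rng.choice(rng.choice(bad))    # pick one task from one conflict
        i = next(k for k, t in enumerate(sched) if t[0] == name)
        n, s, d = sched[i]
        candidate = sched[:i] + [(n, rng.randrange(0, horizon - d), d)] + sched[i + 1:]
        if len(conflicts(candidate)) <= len(bad):
            sched = candidate
    return sched                              # best effort after max_iters

tasks = [("obs_A", 0, 30), ("obs_B", 10, 30), ("obs_C", 20, 30)]
print(iterative_repair(tasks))
```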
This slide presentation reviews the requirements that NASA has for the medical service of a crew returning to Earth after long duration space flight. The scenarios presume a water landing. Two scenarios are reviewed that outline the ship-board medical operations team and the ship-board science research team. A schedule for the crew upon landing is posited for each scenario. The requirement for a heliport on board the ship is reviewed, as is the requirement for a helicopter to return the astronauts to the Baseline Data Collection Facility (BDCF). The ideal is to integrate the medical and science requirements, to minimize the risks and inconveniences to the returning astronauts. The medical support that is required for all astronauts returning from long duration space flight (30 days or more) is reviewed. The personnel required to support the team are outlined. The recommendations for medical operations and science research for crew support are stated. We, as NASA, continue to Dare Mighty Things. Here we are in October. In my country, the United States of America, we celebrate the anniversary of Christopher Columbus's arrival in the Americas, which occurred on October 12, 1492. His story, although it happened over 500 years ago, is still very relevant today. It is a part of the American spirit; part of the international human spirit. Columbus is famous for discovering the new world we now call America, but he probably never envisioned what great discoveries would be revealed many generations later. But in order for Columbus to begin his great adventure, he needed a business plan. How would he go about obtaining the funds and support necessary to build, supply, and man the ships required for his travels? He had a lot of obstacles and distractions. He needed a strong internal drive to achieve his plans and recruit a willing crew of explorers also ready to risk their all for the unknown journey ahead. As Columbus set sail, he said, "By prevailing over all obstacles and distractions, one may unfailingly arrive at his chosen goal or destination." Columbus may not have known he was on a journey for all human exploration. Recently, Charlie Bolden, the NASA Administrator, said, "Human exploration is and has always been about making life better for humans on Earth." Today, NASA and the U.S. human spaceflight program hold many of the same attributes as did Columbus and his contemporaries: a willing, can-do spirit. We are on the threshold of exciting new times in space exploration. Like Columbus, we need a business plan to take us into the future. We need to design the best ships and utilize the best designers, with their past knowledge and experience, to build those ships. We need funding and support from governments to achieve these goals of space exploration into the unknown. NASA does have that business plan, and it is an ambitious plan for human spaceflight and exploration. Today, we have a magnificent spaceflight NASA programs are characterized by complexity, harsh environments, and the fact that we usually have one chance to get it right. Programs last decades and need to accept new hardware and technology as it is developed. We have multiple suppliers and international partners. Our challenges are many, our costs are high, and our failures are highly visible. CM systems need to be scalable, adaptable to new technology, and able to span the life cycle of the program (30+ years).
Multiple systems, contractors, and countries added major levels of complexity to the ISS program and its CM/DM and requirements management systems. CM systems need to be designed for a long design life: Space Station design started in 1984, and assembly was completed in 2012. Systems were developed on a task basis without an overall system perspective. Technology moves faster than a large project office, so try to make sure you have a system that can adapt. Buxbaum, Karen; Conley, Catharine; Lin, Ying; Hayati, Samad NASA continues to invest in capabilities that will enable or enhance planetary protection planning and implementation for future missions. These investments are critical to the Mars Exploration Program and will be increasingly important as missions are planned for exploration of the outer planets and their icy moons. Since the last COSPAR Congress, there has been an opportunity to respond to the advice of the NRC-PREVCOM and the analysis of the MEPAG Special Regions Science Analysis Group. This stimulated research into such things as expanded bioburden reduction options, modern molecular assays and genetic inventory capability, and approaches to understanding or avoiding recontamination of spacecraft parts and samples. Within NASA, a portfolio of PP research efforts has been supported through the NASA Office of Planetary Protection, the Mars Technology Program, and the Mars Program Office. The investment strategy focuses on technology investments designed to enable future missions and reduce their costs. In this presentation we will provide an update on research and development supported by NASA to enhance planetary protection capability. Copyright 2008 California Institute of Technology. Government sponsorship acknowledged. Thieme, Lanny G.; Schreiber, Jeffrey G. The Department of Energy, NASA Glenn Research Center (GRC), and Stirling Technology Company (STC) are developing a free-piston Stirling convertor for a Stirling radioisotope power system (SRPS) to provide spacecraft on-board electric power for NASA deep space missions. The SRPS has recently been identified for potential use on the Europa Orbiter and Solar Probe space science missions. Stirling is also now being considered for unmanned Mars rovers. NASA GRC is conducting an in-house project to assist in developing the Stirling convertor for readiness for space qualification and mission implementation. As part of this continuing effort, the Stirling convertor will be further characterized under launch-environment random vibration testing, methods to reduce convertor electromagnetic interference (EMI) will be developed, and an independent performance verification will be completed. Convertor life assessment and permanent magnet aging characterization tasks are also underway. Substitute organic materials for the linear alternator and piston bearing coatings for use in a high radiation environment have been identified and have now been incorporated in Stirling convertors built by STC for GRC. Electromagnetic and thermal finite element analyses for the alternator are also being conducted. This paper discusses the recent results and status of this NASA GRC in-house project. McMonigal, K. A.; Pietrzyk, R. A.; Sams, C. F.; Johnson, M. A. The NASA Biological Specimen Repository (NBSR) was established in 2006 to collect, process, preserve and distribute spaceflight-related biological specimens from long duration ISS astronauts.
This repository provides unique opportunities to study longitudinal changes in human physiology spanning many missions. The NBSR collects blood and urine samples from all participating ISS crewmembers who have provided informed consent. These biological samples are collected once before flight; during flight on flight days 15, 30, 60, and 120; and within 2 weeks of landing. Postflight sessions are conducted 3 and 30 days after landing. The number of in-flight sessions depends on the duration of the mission. Specimens are maintained under optimal storage conditions in a manner that will maximize their integrity and viability for future research. The repository operates under the authority of the NASA/JSC Committee for the Protection of Human Subjects to support scientific discovery that contributes to our fundamental knowledge of human physiological changes and adaptation to a microgravity environment. The NBSR will institute guidelines for the solicitation, review, and sample distribution process through establishment of the NBSR Advisory Board. The Advisory Board will be composed of representatives of all participating space agencies to evaluate each request from investigators for use of the samples. This process will be consistent with ethical principles, protection of crewmember confidentiality, prevailing laws and regulations, intellectual property policies, and consent form language. Operations supporting the NBSR are scheduled to continue until the end of the U.S. presence on the ISS. Sample distribution is proposed to begin with investigation selections in 2017. The availability of the NBSR will contribute to the body of knowledge about the diverse effects of spaceflight on human physiology. Anderson, Michael L.; Wright, Nathaniel; Tai, Wallace Natural disasters, terrorist attacks, civil unrest, and other events have the potential to disrupt mission-essential operations in any space communications network. NASA's Space Communications and Navigation office (SCaN) is in the process of studying options for integrating the three existing NASA network elements, the Deep Space Network, the Near Earth Network, and the Space Network, into a single integrated network with common services and interfaces. The need to maintain Continuity of Operations (COOP) after a disastrous event has a direct impact on the future network design and operations concepts. The SCaN Integrated Network will provide support to a variety of user missions. The missions have diverse requirements and include anything from Earth-based platforms to planetary missions and rovers. It is presumed that an integrated network, with common interfaces and processes, provides an inherent advantage for COOP in that multiple elements and networks can provide cross-support in a seamless manner. The results of trade studies support this assumption but also show that centralization as a means of achieving integration can result in single points of failure that must be mitigated. The cost to provide this mitigation can be substantial. In support of this effort, the team evaluated the current approaches to COOP, developed multiple potential approaches to COOP in a future integrated network, evaluated the interdependencies of the various approaches with the various network control and operations options, and performed a best-value assessment of the options. The paper will describe the trade space, the study methods, and the results of the study. Miller, Robert D.
NASA has been interested in wireless communications for many years, especially since the crew size of the International Space Station (ISS) was reduced to two members. NASA began a study to find ways to improve crew efficiency to make sure the ISS could be maintained with limited crew capacity and still be a valuable research testbed in Low-Earth Orbit (LEO). Currently the ISS audio system requires astronauts to be tethered to it, specifically to a device called the Audio Terminal Unit (ATU). Wireless communications would remove the tether and allow astronauts to float freely from experiment to experiment without having to worry about moving and reconnecting the associated cabling or finding the space equivalent of an extension cord. A wireless communication system would also improve safety and reduce system susceptibility to Electromagnetic Interference (EMI). Safety would be improved because a crewmember could quickly escape a fire while maintaining communications with the ground and other crewmembers at any location. In addition, it would allow the crew to overcome the volume limitations of the ISS ATU. This is especially important when using the Portable Breathing Apparatus (PBA). The next generation of space vehicles and habitats also demands wireless attention. Orion will carry up to six crewmembers in a relatively small cabin, so wireless could become a driving factor to reduce launch weight and increase habitable volume: six crewmembers, each tethered to a panel, could result in a wiring mess even in nominal operations. In addition to Orion, research is being conducted to determine if Bluetooth is appropriate for lunar habitat applications. Perry, J. L. Contamination of a crewed spacecraft's cabin environment leading to degradation or loss of environmental control and life support system (ECLSS) functional capability and operational margin can have an adverse effect on NASA's space exploration mission figures of merit: safety, mission success, effectiveness, and affordability. The role of evaluating ECLSS compatibility and cabin environmental impact as a key component of passive trace contaminant control is presented, and the technical approach is described in the context of implementing NASA's safety and mission success objectives. Assessment examples are presented for a variety of chemicals used in vehicle systems and experiment hardware for the International Space Station program. The ECLSS compatibility and cabin environmental impact assessment approach, which can be applied to any crewed spacecraft development and operational effort, can provide guidance to crewed spacecraft system and payload developers relative to design criteria. Assigned ECLSS compatibility and cabin environmental impact ratings can be used by payload and system developers as criteria for ensuring adequate physical and operational containment. In addition to serving as an aid for guiding containment design, the assessments can guide flight rule and procedure development toward protecting the ECLSS, as well as approaches for contamination event remediation. Leonard, Regis F. Over the last ten years, NASA has undertaken an extensive program aimed at development of solid-state power amplifiers for space applications. Historically, the program may be divided into three phases. The first efforts were carried out in support of the Advanced Communications Technology Satellite (ACTS) program, which is developing an experimental version of a Ka-band commercial communications system.
These first amplifiers attempted to use hybrid technology. The second phase was still targeted at ACTS frequencies, but concentrated on monolithic implementations, while the current, third phase is a monolithic effort that focuses on frequencies appropriate for other NASA programs and stresses amplifier efficiency. The topics covered include: (1) 20 GHz hybrid amplifiers; (2) 20 GHz monolithic MESFET power amplifiers; (3) Texas Instruments' (TI) 20 GHz variable power amplifier; (4) TI 20 GHz high power amplifier; (5) high efficiency monolithic power amplifiers; (6) GHz high efficiency variable power amplifier; (7) TI 32 GHz monolithic power amplifier performance; (8) design goals for Hughes' 32 GHz variable power amplifier; and (9) performance goals for Hughes' pseudomorphic 60 GHz power amplifier. Russell, C. T.; Mellott, M. M.; Smith, E. J.; King, J. H. ISEE 1, 2, 3, IMP 8, and Prognoz 7 observations of interplanetary shocks in 1978 and 1979 provide five instances where a single shock is observed by four spacecraft. These observations are used to determine best-fit normals for these five shocks. In addition to providing well-documented shocks for future investigations, these data allow the evaluation of the accuracy of several shock normal determination techniques. When the angle between the upstream and downstream magnetic field is greater than 20 deg, magnetic coplanarity can be an accurate single-spacecraft method. However, no technique based solely on the magnetic measurements at one or multiple sites was universally accurate. Thus, the use of overdetermined shock normal solutions, utilizing plasma measurements, separation vectors, and time delays together with magnetic constraints, is recommended whenever possible. Africano, J. L.; Stansbery, E. G. Since the launch of Sputnik in 1957, the number of manmade objects in orbit around the Earth has dramatically increased. The United States Space Surveillance Network (SSN) tracks and maintains orbits on over nine thousand objects down to a limiting diameter of about ten centimeters. Unfortunately, active spacecraft are only a small percentage (~7%) of this population. The rest of the population is orbital debris or "space junk" consisting of expended rocket bodies, dead payloads, bits and pieces from satellite launches, and fragments from satellite breakups. The number of these smaller orbital debris objects increases rapidly with decreasing size. It is estimated that there are at least 130,000 orbital debris objects between one and ten centimeters in diameter. Most objects smaller than 10 centimeters go untracked!
As the orbital debris population grows, the risk to other orbiting objects, most importantly manned space vehicles, of a collision with a piece of debris also grows. The kinetic energy of a solid 1 cm aluminum sphere traveling at an orbital velocity of 10 km/sec is equivalent to that of a 400 lb safe traveling at 60 mph. Fortunately, the volume of space in which the orbiting population resides is large, so collisions are infrequent, but they do occur. The Space Shuttle often returns to Earth with its windshield pocked with small pits or craters caused by collisions with very small, sub-millimeter-size pieces of debris (paint flakes, particles from solid rocket exhaust, etc.) and micrometeoroids. To get a more complete picture of the orbital debris environment, NASA has been using both radar and optical techniques to monitor it. This paper gives an overview of the orbital debris environment and NASA's measurement program.
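The energy comparison above is easy to check with a back-of-the-envelope script (a sketch; the aluminum density and the unit conversions are standard values, not taken from the abstract):

import math

# 1 cm solid aluminum sphere at orbital velocity
rho_al = 2700.0          # kg/m^3, density of aluminum
radius = 0.005           # m (1 cm diameter sphere)
v_debris = 10_000.0      # m/s (10 km/sec)
m_sphere = rho_al * (4.0 / 3.0) * math.pi * radius**3
ke_debris = 0.5 * m_sphere * v_debris**2

# 400 lb safe at 60 mph
m_safe = 400 * 0.4536    # kg
v_safe = 60 * 0.44704    # m/s
ke_safe = 0.5 * m_safe * v_safe**2

print(f"debris: {ke_debris / 1e3:.0f} kJ, safe: {ke_safe / 1e3:.0f} kJ")

Both energies come out in the 65-71 kJ range, so the comparison holds to within about ten percent.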
We provide an overview of several ongoing NASA endeavors based on concepts, systems, and technology from the Semantic Web arena. Indeed, NASA has been one of the early adopters of Semantic Web technology, and we describe ongoing and completed R&D efforts for several applications at NASA, ranging from collaborative systems to airspace information management to enterprise search to scientific information gathering and discovery systems. Petersen, Walter A.; Wolff, David B. Characteristics of the NASA NPOL S-band dual-polarimetric radar are presented, including its operating characteristics, field configuration, scanning capabilities and calibration approaches. Examples of precipitation science data collections conducted using various scan types, and associated products, are presented for different convective system types and previous field campaign deployments. Finally, the NASA NPOL radar location is depicted in its home-base configuration within the greater Wallops Flight Facility precipitation research array supporting NASA Global Precipitation Measurement Mission ground validation. NASA's Information Technology (IT) resources and IT support continue to be a growing and integral part of all NASA missions. Furthermore, the growing IT support requirements are becoming more complex and diverse. The following are a few examples of the growing complexity and diversity of NASA's IT environment. NASA is conducting basic IT research in the Intelligent Synthesis Environment (ISE) and Intelligent Systems (IS) initiatives. IT security, infrastructure protection, and privacy of data are requiring more and more management attention and an increasing share of the NASA IT budget. Outsourcing of IT support is becoming a key element of NASA's IT strategy, as exemplified by the Outsourcing Desktop Initiative for NASA (ODIN) and the outsourcing of NASA Integrated Services Network (NISN) support. Finally, technology refresh is helping to provide improved support at lower cost. Recently the NASA Automated Data Processing (ADP) Consolidation Center (NACC) upgraded its bipolar technology computer systems with Complementary Metal Oxide Semiconductor (CMOS) technology systems. This NACC upgrade substantially reduced the hardware maintenance and software licensing costs, significantly increased system speed and capacity, and reduced customer processing costs by 11 percent. Krisko, Paula H. This paper describes the functionality and use of ORDEM2010, which replaces ORDEM2000 as the NASA Orbital Debris Program Office (ODPO) debris engineering model. Like its predecessor, ORDEM2010 serves the ODPO mission of providing spacecraft designers/operators and debris observers with a publicly available model to calculate orbital debris flux by current-state-of-knowledge methods. The key advance in ORDEM2010 is the input file structure of the yearly debris populations from 1995 to 2035, covering sizes from 10 microns to 1 m. These files include debris from low-Earth orbit (LEO) through geosynchronous orbit (GEO). Stable orbital elements (i.e., those that do not randomize on a sub-year timescale) are included in the files, as are debris size, debris number, material density, random error and population error. Material density is implemented from ground-test data into the NASA breakup model and assigned to debris fragments accordingly. The random and population errors are due to machine error and uncertainties in debris sizes. These high-fidelity population files call for a much higher-level model analysis than was possible with the populations of ORDEM2000. Population analysis in the ORDEM2010 model consists of mapping matrices that convert the debris population elements to debris fluxes. One output mode results in a spacecraft-encompassing 3-D igloo of debris flux, compartmentalized by debris size, velocity, pitch, and yaw with respect to the spacecraft ram direction. The second output mode provides debris flux through an Earth-based telescope/radar beam from LEO through GEO. This paper compares the new ORDEM2010 with ORDEM2000 in terms of processes and results, with examples for specific orbits. Response Damage Prediction Tool (IMPACT2); ISSM: Ice Sheet System Model; Automated Loads Analysis System (ATLAS); Integrated Main Propulsion System Performance Reconstruction Process/Models; Phoenix Telemetry Processor; Contact Graph Routing Enhancements Developed in ION for DTN; GFEChutes Lo-Fi; Advanced Strategic and Tactical Relay Request Management for the Mars Relay Operations Service; Software for Generating Troposphere Corrections for InSAR Using GPS and Weather Model Data; Ionospheric Specifications for SAR Interferometry (ISSI); Implementation of a Wavefront-Sensing Algorithm; Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In; Trade Space Specification Tool (TSST) for Rapid Mission Architecture (Version 1.2); Acoustic Emission Analysis Applet (AEAA) Software; Memory-Efficient Onboard Rock Segmentation; Advanced Multimission Operations System (ATMO); Robot Sequencing and Visualization Program (RSVP); Automating Hyperspectral Data for Rapid Response in Volcanic Emergencies; Raster-Based Approach to Solar Pressure Modeling; Space Images for NASA JPL Android Version; Kinect Engineering with Learning (KEWL); Spacecraft 3D Augmented Reality Mobile App; MPST Software: grl_pef_check; Real-Time Multimission Event Notification System for Mars Relay; SIM_EXPLORE: Software for Directed Exploration of Complex Systems; Mobile Timekeeping Application Built on Reverse-Engineered JPL Infrastructure; Advanced Query and Data Mining Capabilities for MaROS; Jettison Engineering Trajectory Tool; MPST Software: grl_suppdoc; PredGuid+A: Orion Entry Guidance Modified for Aerocapture; Planning Coverage Campaigns for Mission Design and Analysis: CLASP for DESDynl; and Space Place Prime. Davis, Jeffrey R.; Richard, Elizabeth E. On October 18, 2010, the NASA Human Health and Performance Center (NHHPC) was opened to enable collaboration among government, academic and industry members.
Membership rapidly grew to 60 members (http://nhhpc.nasa.gov) and members began identifying collaborative projects as detailed below. In addition, a first workshop in open collaboration and innovation was conducted on January 19, 2011 by the NHHPC, resulting in additional challenges and projects for further development. This first workshop was a result of the Space Life Sciences Directorate (SLSD) successes in running open innovation challenges over the past two years. In 2008, the NASA Johnson Space Center SLSD began pilot projects in open innovation (crowdsourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical problems. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external challenges were conducted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive platform, customized to NASA use, and promoted as NASA@Work. The results from the 34 challenges involved not only technical solutions, which were reported previously at the 61st IAC, but also the formation of new collaborative relationships. For example, the TopCoder pilot was expanded by the NASA Space Operations Mission Directorate to the NASA Tournament Lab in collaboration with Harvard Business School and TopCoder. Building on these initial successes, the NHHPC workshop in January of 2011, and ongoing NHHPC member discussions, several important collaborations are in development: a Space Act Agreement between NASA and GE for collaborative projects, NASA and academia for a Visual Impairment / Intracranial Hypertension summit (February 2011), NASA and the DoD through the Defense Venture Catalyst Initiative (DeVenCI) for a technical needs workshop (June 2011), NASA and the San Diego Zoo Dugal-Whitehead, Norma R.; Johnson, Yvette B. NASA Marshall Space Flight Center is creating a large high-voltage electrical power system testbed called LASEPS. This testbed is being developed to simulate an end-to-end power system from power generation and source to loads. When the system is completed it will have several power configurations, including several battery configurations: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. This breadboard encompasses varying levels of autonomy, from remote power converters to conventional software control to expert system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed. Schmidt, R.; Domingo, V.; Shawhan, S.D.; Bohlin, D. The NASA/ESA Solar-Terrestrial Science Program, which consists of the four-spacecraft Cluster mission and the Solar and Heliospheric Observatory (SOHO), is examined. It is expected that the SOHO spacecraft will be launched in 1995 to study solar interior structure and the physical processes associated with the solar corona. The SOHO design, operation, data, and ground segment are discussed. The Cluster mission is designed to study small-scale structures in the earth's plasma environment. The Soviet Union is expected to contribute two additional spacecraft, which will be similar to Cluster in instrumentation and design. The capabilities, mission strategy, spacecraft design, payload, and ground segment of Cluster are discussed. In this paper, a nonlinear trajectory control algorithm of rendezvous with a maneuvering target spacecraft is presented.
The disturbance forces on the chaser and target spacecraft and the thrust forces on the chaser spacecraft are considered in the analysis. The control algorithm developed in this paper uses the relative distance and relative velocity between the target and chaser spacecraft as the inputs. A general formula for the reference relative trajectory of the chaser spacecraft with respect to the target spacecraft is developed and applied to four different proximity maneuvers: in-track circling, cross-track circling, in-track spiral rendezvous and cross-track spiral rendezvous. The closed-loop differential equations of the proximity relative motion with the control algorithm are derived. It is proven in the paper that the tracking errors between the commanded relative trajectory and the actual relative trajectory are bounded within a constant region determined by the control gains. A prediction of the tracking errors is obtained. Design examples are provided to show the implementation of the control algorithm. The simulation results show that the actual relative trajectory tracks the commanded relative trajectory tightly. The predicted tracking errors match those calculated in the simulation results. The control algorithm developed in this paper can also be applied to interception of a maneuvering target spacecraft and to relative trajectory control of spacecraft formation flying. Miele, A.; Mancuso, S. This paper deals with the optimization of the ascent trajectories for single-stage-sub-orbit (SSSO), single-stage-to-orbit (SSTO), and two-stage-to-orbit (TSTO) rocket-powered spacecraft. The maximum payload weight problem is studied for different values of the engine specific impulse and spacecraft structural factor. The main conclusions are that feasibility of SSSO spacecraft is guaranteed for all the parameter combinations considered; feasibility of SSTO spacecraft depends strongly on the parameter combination chosen; and not only is feasibility of TSTO spacecraft guaranteed for all the parameter combinations considered, but the TSTO payload is several times the SSTO payload. Improvements in engine specific impulse and spacecraft structural factor are desirable and crucial for SSTO feasibility; aerodynamic improvements, by contrast, do not yield significant improvements in payload. For SSSO, SSTO, and TSTO spacecraft, simple engineering approximations are developed connecting the maximum payload weight to the engine specific impulse and spacecraft structural factor. With reference to the specific impulse/structural factor domain, these engineering approximations lead to the construction of zero-payload lines separating the feasibility region (positive payload) from the unfeasibility region (negative payload). appears to work similarly in Internet Explorer, Firefox, and Opera, but fails in Safari and Chrome. Note that the SEE Spacecraft Charging Handbook is... Characteristics of Spacecraft Charging in Low Earth Orbit, J. Geophys. Res., 117, doi:10.1029/2011JA016875, 2012. 2 M. Cho, K. Saito, T. Hamanaga, Data Spacecraft formation flying is considered a key technology for advanced space missions. Compared to large individual spacecraft, the distribution of sensor systems amongst multiple platforms offers improved flexibility, shorter times to mission, and the prospect of being more cost effective. Simpson, David G. A method is presented by which the attitude of a low-Earth orbiting spacecraft may be determined using a vector magnetometer, a digital Sun sensor, and a mathematical model of the Earth's magnetic field.
The method is currently being implemented for the Solar Maximum Mission spacecraft (as a backup for the failing star trackers) as a way to determine roll gyro drift. The development of software for spacecraft represents a particular challenge and is, in many ways, a worst-case scenario from a design perspective. Spacecraft software must be "bulletproof" and operate for extended periods of time without user intervention. If the software fails, it cannot be manually serviced. Software failure may… Moleski, Walt; Luczak, Ed; Morris, Kim; Clayton, Bill; Scherf, Patricia; Obenschain, Arthur F. (Technical Monitor) This paper describes how intelligent agent technology was successfully prototyped and then deployed in a smart eCommerce application for NASA. An intelligent software agent called the Intelligent Service Validation Agent (ISVA) was added to an existing web-based ordering application to validate complex orders for spacecraft mission services. This integration of intelligent agent technology with conventional web technology satisfies an immediate NASA need to reduce manual order-processing costs. The ISVA agent checks orders for completeness, consistency, and correctness, and notifies users of detected problems. ISVA uses NASA business rules and a knowledge base of NASA services, and is implemented using the Java Expert System Shell (Jess), a fast rule-based inference engine. The paper discusses the design of the agent and knowledge base, and the prototyping and deployment approach. It also discusses future directions and other applications, and lessons learned that may help other projects make their aerospace eCommerce applications smarter. Manning, Robert M. Over the last few decades, application of current terrestrial computer technology in embedded spacecraft control systems has been expensive and fraught with many technical challenges. These challenges have centered on overcoming the extreme environmental constraints (protons, neutrons, gamma radiation, cosmic rays, temperature, vibration, etc.) that often preclude direct use of commercial off-the-shelf computer technology. Reliability, fault tolerance and power have also greatly constrained the selection of spacecraft control system computers. More recently, new constraints are being felt, cost and mass in particular, that have again narrowed the degrees of freedom spacecraft designers once enjoyed. This paper discusses these challenges, how they were previously overcome, how future trends in commercial computer technology will simplify (or hinder) selection of computer technology for spacecraft control applications, and what spacecraft electronic system designers can do now to circumvent them. Rabideau, G.; Knight, R.; Chien, S.; Fukunaga, A.; Govindjee, A. This paper describes the Automated Scheduling and Planning Environment (ASPEN). ASPEN encodes complex spacecraft knowledge of operability constraints, flight rules, spacecraft hardware, science experiments and operations procedures to allow for automated generation of low-level spacecraft sequences. Using a technique called iterative repair, ASPEN classifies constraint violations (i.e., conflicts) and attempts to repair each by performing a planning or scheduling operation. It must reason about which conflict to resolve first and what repair method to try for the given conflict.
ASPEN is currently being utilized in the development of automated planner/scheduler systems for several spacecraft, including the UFO-1 naval communications satellite and the Citizen Explorer (CX1) satellite, as well as for planetary rover operations and antenna ground systems automation. This paper focuses on the algorithm and search strategies employed by ASPEN to resolve spacecraft operations constraints, as well as the data structures for representing these constraints.
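The iterative-repair loop described in this abstract can be sketched in a few lines. This is a generic illustration, not ASPEN's actual code; the conflict-detection and repair functions, and the kind attribute on conflicts, are placeholders:

import random

def iterative_repair(schedule, detect_conflicts, repair_methods, max_iters=1000):
    # Generic iterative repair: while conflicts remain, pick one and try a repair.
    for _ in range(max_iters):
        conflicts = detect_conflicts(schedule)
        if not conflicts:
            return schedule  # all constraints and flight rules satisfied
        conflict = random.choice(conflicts)  # heuristic: which conflict to resolve first
        repair = random.choice(repair_methods[conflict.kind])  # which repair to try
        schedule = repair(schedule, conflict)  # e.g., move, add, or delete an activity
    raise RuntimeError("no conflict-free schedule found within the iteration budget")

Real planners replace the two random choices with informed heuristics, which is exactly the "which conflict first, which repair method" reasoning the abstract mentions.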
Moroney, Dave; Lashbrook, Dave; Mckibben, Barry; Gardener, Nigel; Rivers, Thane; Nottingham, Greg; Golden, Bill; Barfield, Bill; Bruening, Joe; Wood, Dave This is the final product of the spacecraft design project completed to fulfill the academic requirements of the Spacecraft Design and Integration 2 course (AE-4871) taught at the U.S. Naval Postgraduate School. The Spacecraft Design and Integration 2 course is intended to provide students detailed design experience in the selection and design of both satellite system and subsystem components, and their location and integration into a final spacecraft configuration. The design team pursued a design to support a Low Earth Orbiting (LEO) communications system (GLOBALSTAR) currently under development by the Loral Cellular Systems Corporation. Each of the 14 team members was assigned both primary and secondary duties in program management or system design. Hardware selection, spacecraft component design, analysis, and integration were accomplished within the constraints imposed by the 11-week academic schedule and the available design facilities. Righter, Kevin; Pace, Lisa F.; Messenger, Keiko The OSIRIS-REx asteroid sample return mission launched to asteroid Bennu September 8, 2016. The spacecraft will arrive at Bennu in late 2019, orbit and map the asteroid, and perform a touch-and-go (TAG) sampling maneuver in July 2020. After confirmation of successful sample stowage, the spacecraft will return to Earth, and the sample return capsule (SRC) will land in Utah in September 2023. Samples will be recovered from Utah and then transported and stored in a new sample cleanroom at NASA Johnson Space Center in Houston. All curation-specific examination and documentation activities related to Bennu samples will be conducted in the dedicated OSIRIS-REx sample cleanroom to be built at NASA-JSC. Sun, Liang; Huo, Wei; Jiao, Zongxia This paper studies relative pose control for a rigid spacecraft with parametric uncertainties approaching an unknown tumbling target in a disturbed space environment. State feedback controllers for relative translation and relative rotation are designed in an adaptive nonlinear robust control framework. Element-wise and norm-wise adaptive laws are utilized to compensate the parametric uncertainties of the chaser and target spacecraft, respectively. External disturbances acting on the two spacecraft are treated as a lumped and bounded perturbation input for the system. To achieve the prescribed disturbance attenuation performance index, feedback gains of the controllers are designed by solving linear matrix inequality problems, so that lumped disturbance attenuation with respect to the controlled output is ensured in the L2-gain sense. Moreover, in the absence of the lumped disturbance input, asymptotic convergence of the relative pose is proved using the Lyapunov method. Numerical simulations are performed to show that position tracking and attitude synchronization are accomplished in spite of the presence of couplings and uncertainties. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved. Theiss, Harold L. NASA was quick to realize the potential that the Global Positioning System (GPS) had to offer for its many diverse vehicles, experiments and platforms. Soon after the first Block 1 GPS satellites were launched, NASA began to use the tremendous capabilities that they had to offer. Even with a partial GPS constellation in place, important results have been obtained about the shape, orientation and rotation of the earth and calibration of the ionosphere and troposphere. These calibrations enhance geophysical science and facilitate the navigation of interplanetary spacecraft. Some very important results have been obtained in the continuing NASA program for aircraft terminal area operations. Currently, a large amount of activity is being concentrated on real-time kinematic carrier phase tracking, which has the potential to revolutionize aircraft navigation. This year marks the launch of the first GPS-receiver-equipped Earth-orbiting NASA spacecraft: the Extreme Ultraviolet Explorer and the Ocean Topography Experiment (TOPEX/Poseidon). This paper describes a cross section of GPS-based research at NASA. The Low Cost Rapid Response Spacecraft (LCRRS) is an ongoing research development project at NASA Ames Research Center (ARC), Moffett Field, California. The prototype spacecraft, called Cost Optimized Test for Spacecraft Avionics and Technologies (COTSAT), is the first of what could potentially be a series of rapidly produced low-cost satellites. COTSAT has a target launch date of March 2009 on a SpaceX Falcon 9 launch vehicle. The LCRRS research system design incorporates use of COTS (Commercial Off The Shelf), MOTS (Modified Off The Shelf), and GOTS (Government Off The Shelf) hardware for a remote sensing satellite. The design concept was baselined to support a 0.5 meter Ritchey-Chretien telescope payload. This telescope and camera system is expected to achieve 1.5 meter/pixel resolution. The COTSAT team is investigating the possibility of building a fully functional spacecraft for $500,000 in parts and $2,000,000 in labor. Cost is dramatically reduced by using a sealed container housing the bus and payload subsystems. Some electrical and RF designs were improved/upgraded from GeneSat-1 heritage systems. The project began in January 2007 and has yielded two functional test platforms. It is expected that a flight-qualified unit will be finished in December 2008. Flight quality controls are in place on the parts and materials used in this development, with the aim of using them to finish a proto-flight satellite. For LEO missions the team is targeting a mission class requiring a minimum lifetime of six months or more. The system architecture incorporates several design features required by high-reliability missions. This allows for a true skunk-works environment to rapidly progress toward a flight design. Engineering and fabrication are primarily done in-house at NASA Ames with flight certifications on materials. The team currently employs seven full-time equivalent employees.
The success of COTSAT's small team in this effort can be attributed to highly cross-trained Genova, Anthony L.; Loucks, Michael; Carrico, John The purpose of this extended abstract is to present results from a failed lunar-orbit insertion (LOI) maneuver contingency analysis for the Lunar Atmosphere Dust Environment Explorer (LADEE) mission, managed and operated by NASA Ames Research Center in Moffett Field, CA. The LADEE spacecraft's nominal trajectory implemented multiple sub-lunar phasing orbits centered at Earth before eventually reaching the Moon (Fig. 1), where a critical LOI maneuver was to be performed [1,2,3]. If this LOI was missed, the LADEE spacecraft would be on an Earth-escape trajectory, bound for heliocentric space. Although a partial mission recovery is possible from a heliocentric orbit (to be discussed in the full paper), it was found that an escape-prevention maneuver could be performed several days after a hypothetical LOI miss, allowing a return to the desired science orbit around the Moon without leaving the Earth's sphere of influence (SOI). This slide presentation reviews NASA's use of systems engineering for the complete life cycle of a project. Systems engineering is a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system. Each phase of a NASA project is terminated with a Key Decision Point (KDP), which is supported by major reviews. Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J. Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session. WASHINGTON -- NASA has selected fellows in three areas of astronomy and astrophysics for its Einstein, Hubble, and Sagan Fellowships. The recipients of this year's post-doctoral fellowships will conduct independent research at institutions around the country. "The new fellows are among the best and brightest young astronomers in the world," said Jon Morse, director of the Astrophysics Division in NASA's Science Mission Directorate in Washington. "They already have contributed significantly to studies of how the universe works, the origin of our cosmos and whether we are alone in the cosmos. The fellowships will serve as a springboard for scientific leadership in the years to come, and as an inspiration for the next generation of students and early career researchers." Each fellowship provides support to the awardees for three years.
The fellows may pursue their research at any host university or research center of their choosing in the United States. The new fellows will begin their programs in the fall of 2009. "I cannot tell you how much I am looking forward to spending the next few years conducting research in the U.S., thanks to the fellowships," said Karin Oberg, a graduate student in Leiden, The Netherlands. Oberg will study the evolution of water and ices during star formation when she starts her fellowship at the Smithsonian Astrophysical Observatory in Cambridge, Mass. A diverse group of 32 young scientists will work on a wide variety of projects, such as understanding supernova hydrodynamics, radio transients, neutron stars, galaxy clusters and the intracluster medium, supermassive black holes, their mergers and the associated gravitational waves, dark energy, dark matter and the reionization process. Other research topics include Deutsch, Leslie J.; Lesh, J. R. There has never been a long-duration deep space mission that did not have unexpected problems during operations. JPL's Interplanetary Network Directorate (IND) Technology Program was created to develop new and improved methods of communication, navigation, and operations. A side benefit of the program is that it maintains a cadre of human talent and experimental systems that can be brought to bear on unexpected problems that may occur during mission operations. Solutions fall into four categories: applying new technology during operations to enhance science performance, developing new operational strategies, providing domain experts to help find solutions, and providing special facilities to troubleshoot problems. These are illustrated here using five specific examples of spacecraft anomalies that have been solved using, at least in part, expertise or facilities from the IND Technology Program: Mariner 10, Voyager, Galileo, SOHO, and Cassini/Huygens. In this era of careful cost management, and emphasis on returns on investment, it is important to recognize this crucial additional benefit from such technology program investments. Deredempt, Marie-Helene; Kollias, Vangelis; Sun, Zhili; Canamares, Ernest; Ricco, Philippe In the aeronautical domain, the ARINC-664 Part 7 specification (AFDX) provides the enabling technology for interfacing equipment in Integrated Modular Avionics (IMA) architectures. The complementary part of AFDX for complete interoperability - Time and Space Partitioning (ARINC 653) concepts - was already studied as part of the space domain ESA roadmap (i.e., the IMA4Space project). Standardized IMA-based architecture is already considered in the aeronautical domain to be more flexible, reliable and secure. Integration and validation become simpler, using a common set of tools and databases, and can be carried out in parts on different facilities with the same definition (hardware and software test benches, flight control or alarm test benches, simulator and flight test installation). In some areas, requirements in terms of data processing are quite similar in the space domain, and the concept could be applied to take advantage of the technology itself and of the range of hardware and software solutions and tools available on the market.
The Mission project (Methodology and assessment for the applicability of ARINC-664 (AFDX) in Satellite/Spacecraft on-board communicatION networks), an FP7 initiative for bringing terrestrial SME research into the space domain, started to evaluate the applicability of the standard in the space domain. Paluszek, Michael A. (Inventor); Piper, Jr., George E. (Inventor) A spacecraft attitude and/or velocity control system includes a controller which responds to at least attitude errors to produce command signals representing a force vector F and a torque vector T, each having three orthogonal components, which represent the forces and torques which are to be generated by the thrusters. The thrusters may include magnetic torquers or reaction wheels. Six difference equations are generated, three having the form [equation omitted], where a_j is the maximum torque which the j-th thruster can produce, b_j is the maximum force which the j-th thruster can produce, and α_j is a variable representing the throttling factor of the j-th thruster, which may range from zero to unity. The six equations are summed to produce a single scalar equation relating the variables α_j to a performance index Z: [equation omitted]. Those values of α which maximize the value of Z are determined by a method for solving linear equations, such as a linear programming method. The Simplex method may be used. The values of α_j are applied to control the corresponding thrusters.
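This kind of thruster allocation maps naturally onto an off-the-shelf linear-programming solver. The sketch below is illustrative only: the patent's exact performance index Z is not recoverable from the abstract, so it minimizes total throttle subject to meeting a commanded force/torque, and the thruster geometry is invented:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 8  # number of thrusters
# Rows: [Fx, Fy, Fz, Tx, Ty, Tz] produced by each thruster at full throttle
# (an invented geometry, not from the patent).
B = rng.uniform(-1.0, 1.0, size=(6, n))

# A commanded force/torque chosen to be achievable with throttles in [0, 1].
alpha_true = rng.uniform(0.0, 1.0, size=n)
wrench_cmd = B @ alpha_true

# Minimize total throttle (a fuel proxy) subject to B @ alpha = command, 0 <= alpha <= 1.
res = linprog(c=np.ones(n), A_eq=B, b_eq=wrench_cmd,
              bounds=[(0.0, 1.0)] * n, method="highs")
print("throttle settings:", np.round(res.x, 3))

The simplex method named in the patent solves exactly this class of problem; modern solvers such as HiGHS use dual-simplex or interior-point variants of the same idea.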
Markley, F. Landis; Sedlak, Joseph E. This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented. Pierson, Duane L.; Ott, C. Mark Microorganisms can spoil food supplies, contaminate drinking water, release noxious volatile compounds, initiate allergic responses, contaminate the environment, and cause infectious diseases. International acceptability limits have been established for bacterial and fungal contaminants in air and on surfaces, and environmental monitoring is conducted to ensure compliance. Allowable levels of microorganisms in water and food have also been established. Environmental monitoring of the Space Shuttle, Mir, and the ISS has allowed for some general conclusions. Generally, the bacteria found in air and on interior surfaces are largely of human origin, such as Staphylococcus spp. and Micrococcus spp. Common environmental genera such as Bacillus spp. are the most commonly isolated bacteria from all spacecraft. Yeast species associated with humans, such as Candida spp., are commonly found. Aspergillus spp., Penicillium spp., and Cladosporium spp. are the most commonly isolated filamentous fungi. Microbial levels in the environment differ significantly depending upon humidity levels, condensate accumulation, and availability of carbon sources. However, the human "normal flora" of bacteria and fungi can result in serious, life-threatening diseases if human immunity is compromised. Disease incidence is expected to increase as mission duration increases. Brauer, G. L.; Petersen, F. M.; Cornick, D. E.; Stevenson, R.; Olson, D. W. POST/6D POST is a set of two computer programs providing the ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near a rotating planet. POST treats the point-mass, three-degree-of-freedom case. 6D POST treats the more general rigid-body, six-degree-of-freedom (with point masses) case. The programs are used to solve a variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of a vehicle in ascent, in orbit, and during entry into the atmosphere; simulation and analysis of guidance and flight-control systems; dispersion-type analyses and analyses of loads; general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles; and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C. Two machine versions are available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running the IRIX(TM) operating system (LAR-14869). Ribak, Erez N.; Gurfil, Pini; Moreno, Coral Interferometry in space has marked advantages: long integration times and observation in spectral bands where the atmosphere is opaque. When installed on separate spacecraft, it also has extended and flexible baselines for better filling of the uv plane. Intensity interferometry has an additional advantage, being insensitive to telescope and path errors, but is unfortunately much less light-sensitive. In planning towards such a mission, we are experimenting with some fundamental research issues. Towards this end, we constructed a system of three vehicles floating on an air table in formation flight, with autonomous orbit control. Each such device holds its own light collector, detector, and transmitter, to broadcast its intensity signal towards a central receiving station. At this station we implement parallel radio receivers, analogue-to-digital converters, and a digital three-way correlator. Current technology limits us to ~1 GHz transmission frequency, which corresponds to a comfortable 0.3 m accuracy in light-bucket shape and in its relative position. Naïve calculations place our limiting magnitude at ~7 in the blue and ultraviolet, where amplitude interferometers are limited. The correlation signal rides on top of this huge signal with its own Poisson noise, requiring a very large dynamic range, which needs to be transmitted in full. We are looking at open questions such as deployable optical collectors and radio antennae of a similar size of a few meters, and how they might influence our data transmission and thus set our flux limit. Merhav, Tamir R.
(Inventor); Festa, Michael T. (Inventor); Stetson, Jr., John B. (Inventor) A spacecraft (8) includes a movable appendage such as solar panels (12) operated by a stepping motor (28) driven by pulses (311). In order to reduce vibration and/or attitude error, the drive pulses are generated by a clock down-counter (312) with a variable count ratio. Predetermined desired clock ratios are stored in selectable memories (314a-d), and the selected ratio (R) is coupled to a comparator (330) together with the current ratio (C). An up-down counter (340) establishes the current count-down ratio by counting toward the desired ratio under the control of the comparator; thus, a step change of solar panel speed never occurs. When a direction change is commanded, a flag signal generator (350) disables the selectable memories and enables a further store (360), which generates a count ratio representing a very slow solar panel rotational rate, so that the rotational rate always slows to a low value before direction is changed. The principles of the invention are applicable to any movable appendage. Delzanno, G. L.; Lucco Castello, F.; Borovsky, J.; Miars, G.; Leon, O.; Gilchrist, B. E. The idea of using a high-power electron beam to actively probe magnetic-field-line connectivity in space has been discussed since the 1970s. It could solve longstanding questions in magnetospheric/ionospheric physics by establishing causality between phenomena occurring in the magnetosphere and their image in the ionosphere. However, this idea has never been realized onboard a magnetospheric spacecraft because the tenuous magnetospheric plasma cannot provide the return current necessary to keep the charging of the spacecraft under control. Recently, Delzanno et al. have proposed a spacecraft-charging mitigation scheme to enable the emission of a high-power electron beam from a magnetospheric spacecraft. It is based on the plasma contactor, i.e., a high-density neutral plasma emitted prior to and with the electron beam. The contactor acts as an ion emitter (not as an electron collector, as previously thought): a high ion current can be emitted off the quasi-spherical contactor surface, without the strong space-charge limitations typical of planar ion beams, and the electron-beam current can be successfully compensated. In this work, we will discuss our theoretical/simulation effort to improve the understanding of contactor-based ion emission. First, we will present a simple mathematical model useful for the interpretation of the results. The model is in spherical geometry and the contactor dynamics is described by only two surfaces (its quasi-neutral surface and the front of the outermost ions). It captures the results of self-consistent Particle-In-Cell (PIC) simulations with good accuracy and highlights the physics behind the charge-mitigation scheme clearly. PIC simulations connecting the 1D model to the actual geometry of the problem will be presented to obtain the scaling of the spacecraft potential with varying contactor emission area. Finally, results for conditions relevant to an actual mission will also be discussed. Thigpen, William W. The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community.
We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is modeling and simulation to support NASA's real-world engineering applications and to make fundamental advances in modeling and simulation methods. Clark, John S. NASA-Lewis has undertaken the conceptual development of spacecraft nuclear propulsion systems with DOE support, in order to establish the bases for Space Exploration Initiative lunar and Mars missions. This conceptual evolution project encompasses nuclear thermal propulsion (NTP) and nuclear electric propulsion (NEP) systems. A technology base exists for NTP in the NERVA program files; more fundamental development efforts are entailed in the case of NEP, but this option is noted to offer greater advantages in the long term.
The intent attribute specifies what a dummy argument's intended use is inside of a function or subroutine. It gives the compiler information about how the dummy arguments of a subprogram are meant to be used, and it is specified with one of three intents so that the compiler knows how you plan to treat each dummy argument. The three intent specifications and their meanings are listed below.

intent(in) - This attribute tells the compiler that the argument will only be used to pass information into the subprogram. As a consequence, variables marked intent(in) cannot have their values changed inside the subprogram: a Fortran error results if a variable declared intent(in) appears on the left side of an assignment statement.

intent(out) - This attribute tells the compiler that the dummy argument can only be used to return information to the calling procedure. The value of any dummy argument given this attribute is undefined upon entry to the subprogram, so the subprogram should assign it a value before it is used; most compilers will warn if a variable declared intent(out) never appears on the left side of an assignment statement.

intent(inout) - This attribute tells the compiler that the dummy argument can be used both to receive information from the calling procedure and to return information to it.

The following block of code gives a brief example of how the intent attributes can be used inside both the main program and a subroutine.

program add_it
   implicit none
   real a, b, c
   interface
      subroutine addition(x, y, z)
         implicit none
         real x, y, z
         intent (in) x, y
         intent (out) z
      end subroutine addition
   end interface
   data a, b / 12.1, 56.2 /
   call addition(a, b, c)
   print *, c
   stop
end

subroutine addition(x, y, z)
   implicit none
   real x, y, z
   intent(in) x, y
   intent(out) z
   z = x + y
   return
end

Please note two things about this example before you take it to be the only way to use the intent attribute. First, the dummy arguments can be declared inside the subroutine and interface block like this:

real, intent(in) :: x, y
real, intent(out) :: z

instead of in the manner shown above. Also, the interface block is not a necessary item; the program would work fine if the intent attributes were used inside the subroutine only. The interface block is included here to illustrate how an extra safety check can be added to your program to ensure the proper arguments get used in the correct manner.

lecture thirty six examples: interface.f and dual-interface.f

Written by Jason Wehr: firstname.lastname@example.org and Maintained by John Mahaffy: email@example.com
|Time limit||Memory limit||Submissions||Accepted||Solvers||Acceptance rate|
|1 s||64 MB||91||47||32||52.459%|

Output all the ways in which a given positive integer N can be obtained as the sum of several (two or more) consecutive positive integers.

-The author apologizes if reading the task took too long, and promises that, in the future, he will try to be more concise, i.e., that he will try to explain the task using as few words and apologies in the footnotes as possible-

The first line of input contains the positive integer N (3 ≤ N ≤ 10^10).

For each sum of consecutive positive integers that is equal to N, output in one line the first and the last addend. The order of lines in the output is not important. In each test case, at least one corresponding sum will exist.

For example, for N = 27 the output is:
13 14
8 10
2 7

(since 27 = 13 + 14 = 8 + 9 + 10 = 2 + 3 + 4 + 5 + 6 + 7; similarly, 10 = 1 + 2 + 3 + 4).
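A minimal solution sketch (mine, not part of the original problem statement): if N is the sum of k consecutive integers starting at a, then N = k(2a + k - 1)/2, so it suffices to try every k ≥ 2 with k(k + 1)/2 ≤ N and keep those k for which a works out to a positive integer. For N ≤ 10^10 that is at most about 141,000 candidates.

def consecutive_sums(n):
    # Yield (first, last) for every n = first + (first + 1) + ... + last, last > first.
    k = 2  # number of terms
    while k * (k + 1) // 2 <= n:
        # n = k * (2a + k - 1) / 2  =>  2a = 2n/k - k + 1
        if (2 * n) % k == 0:
            twice_a = (2 * n) // k - k + 1
            if twice_a > 0 and twice_a % 2 == 0:
                a = twice_a // 2
                yield a, a + k - 1
        k += 1

if __name__ == "__main__":
    n = int(input())
    for first, last in consecutive_sums(n):
        print(first, last)

For the N = 27 example this prints "13 14", "8 10" and "2 7", matching the sample above.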
Ammonia volatilization was measured at three sites in the Chihuahuan Desert of southern New Mexico, U.S.A. In dry soils, ammonia volatilization ranged from 9 to 11 micrograms of nitrogen per square meter per day, but rates increased to 95 micrograms of nitrogen per square meter per day in a shrubland site after an experimental addition of water. Ammonia volatilization also increased with experimental additions of NH4Cl and decreased with additions of sucrose. Competition by nitrifiers for available NH4+ had little effect on NH3 volatilization: N-Serve, added to inhibit nitrification, decreased NH3 volatilization in a grassland site and had little effect at other sites. We suggest that NH3 volatilization is controlled by the rate of mineralization of NH4+ from soil organic matter, and mineralization is stimulated by rainfall. Overall rates of NH3 volatilization from undisturbed desert ecosystems appear to be much lower than those reported for rangeland and agricultural soils. The data set shows ammonia volatilization from grassland, creosotebush, and playa habitats in response to a variety of experimental treatments chosen to elucidate the processes controlling the volatilization under dry and post-rainfall conditions. Ammonia is collected in weak acid in scintillation vials placed inside PVC chambers in the field. The rate of ammonia volatilized per unit area (µg N/m²/day) is found by multiplying the concentration in the acid by 1250 to account for volume and area corrections. Data file information for the following Jornada data set: Ammonia volatilization from Chihuahuan Desert habitats - 1988. Data for rabbits, birds, and lizards recorded from the LTER II animal transects. Data consist of species names, numbers of individuals, and distances observed from transects. Data are collected from each transect once every two weeks. See history file for exceptions. Data information for the following Jornada data set: Animal Transects
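The stated conversion is a single multiplication; a small helper makes the units explicit (a sketch: the function name is mine, and the 1250 volume/area factor is taken from the data-set description and assumed constant for all chambers):

def nh3_flux_ug_n_per_m2_day(acid_conc):
    # Convert NH3-N captured in the acid trap to a volatilization rate.
    # The factor 1250 combines the trap-volume and chamber-area corrections
    # quoted in the data set description.
    return acid_conc * 1250.0

# Example: a trap concentration of 0.076 corresponds to 95 ug N/m2/day,
# the post-watering shrubland rate quoted in the abstract.
print(nh3_flux_ug_n_per_m2_day(0.076))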
Fundamental discovery triggers paradigm shift in crystallography

A scientific breakthrough gives researchers access to the blueprint of thousands of molecules of great relevance to medicine and biology. The novel technique, pioneered by a team led by DESY scientist Professor Henry Chapman from the Center for Free-Electron Laser Science CFEL and reported this week in the scientific journal Nature, opens up an easy way to determine the spatial structures of proteins and other molecules, many of which are practically inaccessible by existing methods.

Slightly disordered crystals of complex biomolecules like that of the photosystem II molecule shown here produce a complex continuous diffraction pattern (right) under X-ray light that contains far more information than the so-called Bragg peaks of a strongly ordered crystal alone (left). The degree of disorder is greatly exaggerated in the crystal on the right. Credit: Eberhard Reimann/DESY

The structures of biomolecules reveal their modes of action and give insights into the workings of the machinery of life. Obtaining the molecular structure of particular proteins, for example, can provide the basis for the development of tailor-made drugs against many diseases. "Our discovery will allow us to directly view large protein complexes in atomic detail," says Chapman, who is also a professor at the University of Hamburg and a member of the Hamburg Centre for Ultrafast Imaging CUI.

To determine the spatial structure of a biomolecule, scientists mainly rely on a technique called crystallography. The new work offers a direct route to "read" the atomic structure of complex biomolecules by crystallography without the usual need for prior knowledge and chemical insight. "This discovery has the potential to become a true revolution for the crystallography of complex matter," says the chairman of DESY's board of directors, Professor Helmut Dosch.

In crystallography, the structure of a crystal and of its constituents can be investigated by shining X-rays on it. The X-rays scatter from the crystal in many different directions, producing an intricate and characteristic pattern of numerous bright spots, called Bragg peaks (named after the British crystallography pioneers William Henry and William Lawrence Bragg). The positions and strengths of these spots contain information about the structure of the crystal and of its constituents. Using this approach, researchers have already determined the atomic structures of tens of thousands of proteins and other biomolecules.

But the method suffers from two significant barriers, which make structure determination extremely difficult or sometimes impossible. The first is that the molecules must be formed into very high quality crystals. Most biomolecules do not naturally form crystals. However, without the necessary perfect, regular arrangement of the molecules in the crystal, only a limited number of Bragg peaks are visible. This means the structure cannot be determined, or at best only a fuzzy "low resolution" facsimile of the molecule can be found. This barrier is most severe for large protein complexes such as membrane proteins. These systems participate in a range of biological processes and many are the targets of today's drugs. Great skill and quite some luck are needed to obtain high-quality crystals of them.

"Extreme Sudoku in three dimensions"

The second barrier is that the structure of a complex molecule is still extremely difficult to determine, even when good diffraction is available.
"This task is like extreme Sudoku in three dimensions and a million boxes, but with only half the necessary clues," explains Chapman. In crystallography, this puzzle is referred to as the phase problem. Without knowing the phase - the lag of the crests of one diffracted wave to another - it is not possible to compute an image of the molecule from the measured diffraction pattern. But phases can't be measured. To solve the tricky phase puzzle, more information must be known than just the measured Bragg peaks. This additional information can sometimes be obtained by X-raying crystals of chemically modified molecules, or by already knowing the structure of a closely-related molecule. When thinking about why protein crystals do not always "diffract", Chapman realised that imperfect crystals and the phase problem are linked. The key lies in a weak "continuous" scattering that arises when crystals become disordered. Usually, this non-Bragg, continuous diffraction is thought of as a nuisance, although it can be useful for providing insights into vibrations and dynamics of molecules. But when the disorder consists only of displacements of the individual molecules from their ideal positions in the crystal then the "background" takes on a much more complex character - and its rich structure is anything but diffuse. It then offers a much bigger prize than the analysis of the Bragg peaks: the continuously-modulated "background" fully encodes the diffracted waves from individual "single" molecules. "If you would shoot X-rays on a single molecule, it would produce a continuous diffraction pattern free of any Bragg spots," explains lead author Dr. Kartik Ayyer from Chapman's CFEL group at DESY. "The pattern would be extremely weak, however, and very difficult to measure. But the 'background' in our crystal analysis is like accumulating many shots from individually-aligned single molecules. We essentially just use the crystal as a way to get a lot of single molecules, aligned in common orientations, into the beam." With imperfect, disordered crystals, the continuous diffraction fills in the gaps and beyond the Bragg peaks, giving vastly more information than in normal crystallography. With this additional gain in information, the phase problem can be uniquely solved without having to resort to other measurements or assumptions. In the analogy of the Sudoku puzzle, the measurements provide enough clues to always arrive at the right answer. "The best crystals are imperfect crystals" This novel concept leads to a paradigm shift in crystallography -- the most ordered crystals are no longer the best to analyse with the novel method. "For the first time we have access to single molecule diffraction - we have never had this in crystallography before," he explains. "But we have long known how to solve single-molecule diffraction if we could measure it." The field of coherent diffractive imaging, spurred by the availability of laser-like beams from X-ray free-electron lasers, has developed powerful algorithms to directly solve the phase problem in this case, without having to know anything at all about the molecule. "You don't even have to know chemistry," says Chapman, "but you can learn it by looking at the three-dimensional image you get." 
To demonstrate their novel analysis method, the Chapman group teamed up with the group of Professor Petra Fromme from Arizona State University (ASU), and other colleagues from ASU, the University of Wisconsin, the Greek Foundation for Research and Technology – Hellas (FORTH), and SLAC National Accelerator Laboratory in the U.S. They used the world's most powerful X-ray laser, LCLS at SLAC, to X-ray imperfect microcrystals of a membrane protein complex called Photosystem II that is part of the photosynthesis machinery in plants. Including the continuous diffraction pattern in the analysis immediately improved the spatial resolution by about a quarter, from 4.5 Ångström to 3.5 Ångström (an Ångström is 0.1 nanometres). The obtained image gave fine definition of molecular features that usually require fitting a chemical model to see. "That is a pretty big deal for biomolecules," explains co-author Dr. Anton Barty from DESY. "And we can further improve the resolution if we take more patterns." The team had only a few hours of measuring time for these experiments, while full-scale measuring campaigns usually last a couple of days. The scientists hope to obtain even clearer and higher-resolution images of Photosystem II and many other macromolecules with their new technique. "This kind of continuous diffraction has actually been seen for a long time from many different poorly-diffracting crystals," says Chapman. "It wasn't understood that you can get structural information from it and so analysis techniques suppressed it. We're going to be busy to see if we can solve structures of molecules from old discarded data." Deutsches Elektronen-Synchrotron DESY is the leading German accelerator centre and one of the leading accelerator centres in the world. DESY is a member of the Helmholtz Association and receives its funding from the German Federal Ministry of Education and Research (BMBF) (90 per cent) and the German federal states of Hamburg and Brandenburg (10 per cent). At its locations in Hamburg and Zeuthen near Berlin, DESY develops, builds and operates large particle accelerators, and uses them to investigate the structure of matter. DESY's combination of photon science and particle physics is unique in Europe.

Macromolecular diffractive imaging using imperfect crystals; Kartik Ayyer et al.; Nature, 2016; DOI: 10.1038/nature16949

Thomas Zoufal | EurekAlert!
Scientists have designed the first large DNA crystals with precisely prescribed depths and complex 3D features, which could create revolutionary nanodevices

DNA has garnered attention for its potential as a programmable material platform that could spawn entire new and revolutionary nanodevices in computer science, microscopy, biology, and more. Researchers have been working to master the ability to coax DNA molecules to self-assemble into the precise shapes and sizes needed in order to fully realize these nanotechnology dreams. For the last 20 years, scientists have tried to design large DNA crystals with precisely prescribed depth and complex features – a design quest just fulfilled by a team at Harvard's Wyss Institute for Biologically Inspired Engineering. The team built 32 DNA crystals with precisely defined depth and an assortment of sophisticated three-dimensional (3D) features, an advance reported in Nature Chemistry. The team used their "DNA-brick self-assembly" method, which was first unveiled in a 2012 Science publication when they created more than 100 3D complex nanostructures about the size of viruses. The newly achieved periodic crystal structures are more than 1000 times larger than those discrete DNA-brick structures, sizing up closer to a speck of dust, which is actually quite large in the world of DNA nanotechnology. "We are very pleased that our DNA brick approach has solved this challenge," said senior author and Wyss Institute Core Faculty member Peng Yin, Ph.D., who is also an Associate Professor of Systems Biology at Harvard Medical School, "and we were actually surprised by how well it works." Scientists have struggled to crystallize complex 3D DNA nanostructures using more conventional self-assembly methods. The risk of error tends to increase with the complexity of the structural repeating units and the size of the DNA crystal to be assembled. The DNA brick method uses short, synthetic strands of DNA that work like interlocking Lego® bricks to build complex structures. Structures are first designed using a computer model of a molecular cube, which becomes a master canvas. Each brick is added or removed independently from the 3D master canvas to arrive at the desired shape – and then the design is put into action: the DNA strands that would match up to achieve the desired structure are mixed together and self-assemble to achieve the designed crystal structures. "Therein lies the key distinguishing feature of our design strategy—its modularity," said co-lead author Yonggang Ke, Ph.D., formerly a Wyss Institute Postdoctoral Fellow and now an assistant professor at the Georgia Institute of Technology and Emory University. "The ability to simply add or remove pieces from the master canvas makes it easy to create virtually any design." The modularity also makes it relatively easy to precisely define the crystal depth. "This is the first time anyone has demonstrated the ability to rationally design crystal depth with nanometer precision, up to 80 nm in this study," Ke said. In contrast, previous two-dimensional DNA lattices are typically single-layer structures with only 2 nm depth. "DNA crystals are attractive for nanotechnology applications because they are comprised of repeating structural units that provide an ideal template for scalable design features," said co-lead author graduate student Luvena Ong.
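The add-or-remove-bricks workflow can be pictured with a toy voxel model (a conceptual sketch only, not the Wyss team's actual design software; all names here are made up):

```python
import numpy as np

# 3D "master canvas": every True voxel is a candidate DNA brick
canvas = np.ones((10, 10, 10), dtype=bool)

# sculpt a shape by removing bricks independently, e.g. carve a
# square channel through the middle of the canvas
canvas[3:7, 3:7, :] = False

# each remaining voxel stands for one brick whose strands would be
# included in the self-assembly mix
print(f"bricks kept: {canvas.sum()} of {canvas.size}")
```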
Furthermore, as part of this study the team demonstrated the ability to position gold nanoparticles into prescribed 2D architectures less than two nanometers apart from each other along the crystal structure – a critical feature for future quantum devices and a significant technical advance for their scalable production, said co-lead author Wei Sun, Ph.D., Wyss Institute Postdoctoral Fellow. "My preconceived notions of the limitations of DNA have been consistently shattered by our new advances in DNA nanotechnology," said William Shih, Ph.D., who is co-author of the study and a Wyss Institute Founding Core Faculty member, as well as Associate Professor in the Department of Biological Chemistry and Molecular Pharmacology at Harvard Medical School and the Department of Cancer Biology at the Dana-Farber Cancer Institute. "DNA nanotechnology now makes it possible for us to assemble, in a programmable way, prescribed structures rivaling the complexity of many molecular machines we see in Nature." "Peng's team is using the DNA-brick self-assembly method to build the foundation for the new landscape of DNA nanotechnology at an impressive pace," said Wyss Institute Founding Director Don Ingber, M.D., Ph.D. "What have been mere visions of how the DNA molecule could be used to advance everything from the semiconductor industry to biophysics are fast becoming realities." The work involved collaborators from Aarhus University in Denmark and was supported by the Office of Naval Research (ONR), the Army Research Office (ARO), the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Wyss Institute for Biologically Inspired Engineering at Harvard University.

About the Wyss Institute for Biologically Inspired Engineering at Harvard University

The Wyss Institute for Biologically Inspired Engineering at Harvard University uses Nature's design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Working as an alliance among all of Harvard's Schools, and in partnership with Beth Israel Deaconess Medical Center, Brigham and Women's Hospital, Boston Children's Hospital, Dana-Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Boston University, Tufts University, Charité – Universitätsmedizin Berlin, and the University of Zurich, the Institute crosses disciplinary and institutional barriers to engage in high-risk research that leads to transformative technological breakthroughs. By emulating Nature's principles of self-organization and self-regulation, Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing. These technologies are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and new start-ups.

Kat J. McAlpine | EurekAlert!
Scientists are interested in how the shape of this hidden terrain affects how ice moves – a key factor in making predictions about the future of these massive ice reservoirs and their contribution to sea level rise in a changing climate.

Image showing ice surface, internal layering and bedrock (jagged line at bottom of image) in thick ice between Dome C and Vostok. Data gathered by the Multichannel Coherent Radar Depth Sounder during IceBridge's Nov. 27, 2013 survey flight. Image Credit: CReSIS / Theresa Stumpf

NASA has been monitoring Antarctic and Arctic ice since 2009 with the Operation IceBridge airborne mission. Although the primary objective is to continue the data record of ice sheet surface elevation changes from NASA's Ice, Cloud and Land Elevation Satellite, or ICESat, which stopped functioning in 2009, IceBridge is also gathering data on other aspects of polar ice, from the snow on top to the bedrock below. One radar instrument on these flights that is currently headed to Antarctica for another year of observations is revealing insights about the bedrock hidden beneath the ice sheet. IceBridge carries a suite of radar instruments designed, built and operated by scientists, engineers and university students with the Center for Remote Sensing of Ice Sheets (CReSIS), a National Science Foundation-funded center based at the University of Kansas. This bedrock-mapping radar is known as the Multichannel Coherent Radar Depth Sounder, or MCoRDS. MCoRDS measures ice thickness and maps sub-glacial rock by sending radar waves down through thick polar ice. This ice-penetrating radar is the result of efforts that started with a collaboration between NASA and the National Science Foundation 20 years ago. In the early 1980s, researchers started showing interest in using radar to measure ice thickness and map sub-ice rock. Among those interested was NSF's Office of Polar Programs, which provided the initial funding for a thickness-measuring instrument. "We were given one year to prove it would work," said Prasad Gogineni, a scientist at CReSIS. CReSIS researchers used that funding to build their first radar depth sounder, which started flying aboard NASA aircraft in 1993. Over the years, CReSIS has built a number of instruments – each more advanced than the last – leading to the radar IceBridge relies on today.

Through the Ice

One of the biggest obstacles faced when building an ice-penetrating instrument like MCoRDS is the nature of radar itself. Radar works by sending out radio waves and timing how long it takes for them to reflect back. Radio waves travel through air virtually unimpeded, but materials like metal, rock and water act almost as mirrors. Ice, on the other hand, reacts differently depending on the radar's frequency. It reflects high-frequency radio waves, but despite being solid, lower-frequency radar can pass through ice to some degree. This is why MCoRDS uses a relatively low frequency – between 120 and 240 MHz. This allows the instrument to detect the ice surface, internal layers of the ice and the bedrock below. "To sound the bottom of ice you have to use a lower frequency," said John Paden, CReSIS scientist. "Too high a frequency and signal will be lost in the ice." These radio waves are sent out in rapid pulses through an array of downward-pointing antennas mounted beneath the aircraft. This array of multiple antennas, up to 15 on NASA's P-3, allows researchers to survey a larger area and record several signals at once to get a clearer picture.
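The basic timing relation behind radar depth sounding is simple enough to sketch (an illustrative toy, not MCoRDS processing code; the ~3.17 relative permittivity of glacial ice is a standard textbook value rather than a number from this article):

```python
C = 299_792_458.0    # speed of light in vacuum, m/s
EPS_ICE = 3.17       # relative permittivity of glacial ice (textbook value)

def ice_thickness(t_surface_s, t_bed_s):
    """Ice thickness from two-way travel times of surface and bed echoes."""
    v_ice = C / EPS_ICE**0.5              # wave speed in ice, ~1.68e8 m/s
    return v_ice * (t_bed_s - t_surface_s) / 2.0   # halve the round trip

# example: bed echo arriving 30 microseconds after the surface echo
print(f"{ice_thickness(0.0, 30e-6):.0f} m")        # about 2500 m of ice
```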
The 15-element array is the largest ever flown on the P-3 and was built as part of a joint effort between NASA, the University of Kansas and private industry. The design and construction of this array, much of which was done by University of Kansas undergraduate and graduate students, took about six months. Radar pulses travel down to the surface, through the ice to the bedrock below and back up through the ice to MCoRDS's array, where they are routed to the instrument's receiver and recorded on solid-state drives aboard the aircraft. Each survey flight yields a great deal of data, often as much as two terabytes, that then needs to be downloaded, archived and backed up. The computing infrastructure needed to handle this data is managed by people from Indiana University, which is also a partnering organization in CReSIS. During each campaign, Indiana University personnel support the mission by staying up through the night to ensure that the data collected each day is successfully stored and backed up. After returning from an IceBridge campaign, CReSIS researchers spend months processing the archived data to build a detailed view of ice sheets and bedrock. First, researchers tease out the return signals from the ice surface and bed. Because thick ice attenuates, or weakens, radar signals, researchers need to filter the data to pick out the weak return from the bed, which would otherwise be drowned out by the much stronger surface return and any noise in the data. After finding the ice surface and bedrock, researchers use something called synthetic aperture radar processing. By combining many readings from a radar antenna as it moves over the surface, researchers can create a large simulated array. "You can make an aperture one kilometer long by moving the radar one kilometer," said Paden. As with camera lenses, bigger is better, and a larger array lets researchers see more detail. This sort of processing yields a detailed, but narrow, swath of the ice and sub-ice terrain for each antenna. Building a wider view is more complicated than just combining these separate signals. Although MCoRDS records signals coming back from the left and right of the plane's flight path, it cannot determine which side the signals are coming from. To overcome this, CReSIS researchers use something known as tomography, a technique that uses specialized computer software to calculate the position and distance of signals returned from the bedrock. Once researchers can tell where terrain features are relative to the array, they can combine the several channels to build a swath of terrain data useful for creating three-dimensional representations of the bedrock.

The Road Ahead

These terrain data are helping scientists better understand what's under ice sheets. In the past year, researchers have produced new maps of Greenland's and Antarctica's bedrock and discovered a large and previously undetected canyon under Greenland's ice sheet. Better information on sub-ice terrain will help researchers develop the next-generation ice sheet models needed to project future changes to glaciers, and better understand the flow of water at ice sheet bases. As IceBridge continues to add to the record of sub-ice terrain measurements through its surveys over Greenland and Antarctica, scientists, engineers and students at CReSIS will keep making more advances. Improvements such as larger antenna arrays and improved data-processing techniques promise to make radar depth sounding even more effective.
And in the future, uninhabited aerial vehicles like NASA's Global Hawk could greatly increase the amount of terrain that can be covered from the air. Exactly what the future holds remains to be seen, but researchers have made great strides in probing polar ice thanks to a decision made by one person, former NASA program manager Bob Thomas, who provided Gogineni and his team the opportunity to prove that their instrument worked, and funding from NASA's PARCA initiative. "Without that, we would not have a depth sounder and imager program at KU," said Gogineni. "It took both agencies to make it to the stage we are today." For more information about Operation IceBridge, visit:

George Hale | EurekAlert!
posted by Kensey

Two Earth satellites, A and B, each of mass m = 960 kg, are launched into circular orbits around the Earth's center. Satellite A orbits at an altitude of 4500 km, and satellite B orbits at an altitude of 13600 km.
a) What are the potential energies of the two satellites?
b) What are the kinetic energies of the two satellites?
c) How much work would it require to change the orbit of satellite A to match that of satellite B?
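A sketch of the standard approach (not an official answer key): for a circular orbit of radius r, U = -GMm/r and K = GMm/(2r), so the total energy is E = -GMm/(2r), and the work to change orbits is E_B - E_A. The Earth constants below are the usual textbook values.

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of Earth, kg
R_E = 6.371e6    # mean radius of Earth, m
m = 960.0        # satellite mass, kg

def orbit_energies(altitude_m):
    """Potential and kinetic energy in a circular orbit at this altitude."""
    r = R_E + altitude_m
    U = -G * M * m / r        # gravitational potential energy
    K = G * M * m / (2 * r)   # from v^2 = GM/r for a circular orbit
    return U, K

U_A, K_A = orbit_energies(4500e3)
U_B, K_B = orbit_energies(13600e3)
work = (U_B + K_B) - (U_A + K_A)  # work equals the change in total energy
print(f"A: U = {U_A:.3e} J, K = {K_A:.3e} J")
print(f"B: U = {U_B:.3e} J, K = {K_B:.3e} J")
print(f"Work to move A into B's orbit: {work:.3e} J")
```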
Following the previous posts on small n correlations [post 1][post 2][post 3], in this post we're going to consider power estimation (if you do not care about power, but you'd rather focus on estimation, this post is for you). To get started, let's look at examples of n=1000 samples from bivariate populations with known correlations (rho), with rho increasing from 0.1 to 0.9 in steps of 0.1. For each rho, we draw a random sample and plot Y as a function of X. The variance of Y does not depend on X – there is homoscedasticity. Later we will look at heteroscedasticity, when the variance of Y varies with X. For the same distributions illustrated in the previous figure, we compute the proportion of positive Pearson's correlation tests for different sample sizes. This gives us power curves (here based on simulations with 50,000 samples). We also include rho = 0 to determine the proportion of false positives. Power increases with sample size and with rho. When rho = 0, the proportion of positive tests is the proportion of false positives. It should be around 0.05 for a test with alpha = 0.05. This is the case here, as Pearson's correlation is well behaved for bivariate normal data. For a given expected population correlation and a desired long-run power value, we can use interpolation to find the matching sample size. To achieve at least 80% power given an expected population rho of 0.4, the minimum sample size is 46 observations. To achieve at least 90% power given an expected population rho of 0.3, the minimum sample size is 118 observations. Alternatively, for a given sample size and a desired power, we can determine the minimum effect size we can hope to detect. For instance, given n = 40 and a desired power of at least 90%, the minimum effect size we can detect is 0.49. So far, we have only considered situations where we sample from bivariate normal distributions. However, Wilcox (2012, p. 444-445) describes 6 aspects of data that affect Pearson's r, including:
- the magnitude of the slope around which points are clustered
- the magnitude of the residuals
- restriction of range
The effect of outliers on Pearson's and Spearman's correlations is described in detail in Pernet et al. (2012) and Rousselet et al. (2012). Next we focus on heteroscedasticity. Let's look at Wilcox's heteroscedasticity example (2012, p. 445). If we correlate variable X with variable Y, heteroscedasticity means that the variance of Y depends on X. Wilcox considers this example: "X and Y have normal distributions with both means equal to zero. […] X and Y have variance 1 unless |X|>0.5, in which case Y has standard deviation |X|." Here is an example of such data. Next, Wilcox (2012) considers the effect of this heteroscedastic situation on false positives. We superimpose results for the homoscedastic case for comparison. In the homoscedastic case, as expected for a test with alpha = 0.05, the proportion of false positives is very close to 0.05 at every sample size. In the heteroscedastic case, instead of 5%, the proportion of false positives is between 12% and 19%. The number of false positives actually increases with sample size! That's because the standard T statistic associated with Pearson's correlation assumes homoscedasticity, so the formula is incorrect when there is heteroscedasticity. As a consequence, when Pearson's test is positive, it doesn't always imply the existence of a correlation. There could be dependence due to heteroscedasticity, in the absence of a correlation.
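The power curves described above are easy to reproduce; here is a minimal sketch (the original simulations use R and 50,000 samples per condition; this Python version uses fewer iterations for speed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power_pearson(rho, n, nsim=5000, alpha=0.05):
    """Proportion of significant Pearson tests for bivariate normal data."""
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(nsim):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        if stats.pearsonr(x, y)[1] < alpha:
            hits += 1
    return hits / nsim

# rho = 0 rows estimate the false positive rate (should be near 0.05)
for rho in (0.0, 0.3, 0.4):
    for n in (40, 80, 120):
        print(f"rho={rho:.1f}, n={n}: power ~ {power_pearson(rho, n):.3f}")
```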
Let's consider another heteroscedastic situation, in which the variance of Y increases linearly with X. This could correspond, for instance, to situations in which cognitive performance or income are correlated with age – we might expect the variance amongst participants to increase with age. We keep rho constant at 0.4 and increase the maximum variance from 1 (homoscedastic case) to 9. That is, the variance of Y increases linearly from 1 to the maximum variance as a function of X. For rho = 0, we can compute the proportion of false positives as a function of both sample size and heteroscedasticity. In the next figure, variance refers to the maximum variance. From 0.05 for the homoscedastic case (max variance = 1), the proportion of false positives increases to 0.07-0.08 for a max variance of 9. This relatively small increase in the number of false positives could have important consequences if hundreds of labs are engaged in fishing expeditions and they publish everything with p<0.05. However, it seems we shouldn't worry much about linear heteroscedasticity as long as sample sizes are sufficiently large and we report estimates with appropriate confidence intervals. An easy way to build confidence intervals when there is heteroscedasticity is to use the percentile bootstrap (see Pernet et al. 2012 for illustrations and Matlab code; a minimal sketch follows the references below). Finally, we can run the same simulation for rho = 0.4. Power progressively decreases with increasing heteroscedasticity. Put another way, with larger heteroscedasticity, larger sample sizes are needed to achieve the same power. We can zoom in: the vertical bars mark approximately a 13-observation increase to keep power at 0.8 between a max variance of 0 and 9. This decrease in power can be avoided by using the percentile bootstrap or robust correlation techniques, or both (Wilcox, 2012). The results presented in this post are based on simulations. You could also use a sample size calculator for correlation analyses – for instance this one. But running simulations has huge advantages. For instance, you can compare multiple estimators of association in various situations. In a simulation, you can also include as much information as you have about your target populations. For instance, if you want to correlate brain measurements with response times, there might be large datasets you could use to perform data-driven simulations (e.g. UK biobank), or you could estimate the shape of the sampling distributions to draw samples from appropriate theoretical distributions (maybe a gamma distribution for brain measurements and an exGaussian distribution for response times). Simulations also put you in charge, instead of relying on a black box, which most likely will only cover Pearson's correlation in ideal conditions, and not robust alternatives when there are outliers or heteroscedasticity or other potential issues. The R code to reproduce the simulations and the figures is on GitHub.

Pernet, C.R., Wilcox, R. & Rousselet, G.A. (2012) Robust correlation analyses: false positive and power validation using a new open source matlab toolbox. Front Psychol, 3, 606.
Rousselet, G.A. & Pernet, C.R. (2012) Improving standards in brain-behavior correlation analyses. Frontiers in Human Neuroscience, 6, 119.
Wilcox, R.R. (2012) Introduction to robust estimation and hypothesis testing. Academic Press, San Diego, CA.
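As promised above, a minimal sketch of a percentile-bootstrap confidence interval for a correlation (Python rather than the Matlab toolbox of Pernet et al.; note that Wilcox (2012) recommends slightly adjusted quantile bounds for Pearson's r with small samples, which this plain version omits):

```python
import numpy as np
from scipy import stats

def bootstrap_ci(x, y, nboot=2000, alpha=0.05, seed=1):
    """Plain percentile-bootstrap CI for Pearson's r, resampling pairs."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rs = np.empty(nboot)
    for b in range(nboot):
        idx = rng.integers(0, n, n)   # resample (x, y) pairs with replacement
        rs[b] = stats.pearsonr(x[idx], y[idx])[0]
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])

# example with heteroscedastic data: the sd of y grows with x
rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 0.4 * x + rng.normal(size=100) * (1 + np.abs(x))
print(bootstrap_ci(x, y))
```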
Machine learning is an important subfield of artificial intelligence. While it is very difficult to even define what intelligence is (there are even more definitions than for quantum computers), one thing that is pretty much universally recognized is that anything we'd call intelligent must be able to learn. Trying to understand how learning from experience works has driven a lot of progress in understanding how human perception and cognition might work. Google is a big player in search engines, and as search engines evolve, one possible direction of development is greater intelligence. The Quantum Artificial Intelligence Lab's mandate is to bring the world's best machine learning experts together with the world's most advanced quantum computers, and perform thousands of experiments to explore to what extent machine intelligence and cognition can be advanced by using these new types of computers.
- Frontier letter - Open Access

Slip-partitioned surface ruptures for the Mw 7.0 16 April 2016 Kumamoto, Japan, earthquake

© The Author(s) 2016. Received: 15 July 2016. Accepted: 29 October 2016. Published: 22 November 2016.

The sequence of the 2016 Kumamoto earthquakes occurred in the center of Kyushu Island, where N–S stretching has been ongoing since 6 Ma (Kamata and Kodama 1994). Subduction-related volcanism started at ~1.5 Ma in Kyushu (Kamata et al. 1988) and has been significantly promoted by the rift-zone activity along the Beppu-Shimabara Rift Zone (BSRZ, Matsumoto 1979). The BSRZ is composed of numerous EW-trending normal faults synchronous with extensive volcanism. Right-lateral motion along the southern margin of the BSRZ, also known as the Oita-Kumamoto Tectonic Line (Yabe 1925), is thought to have been active since 0.5 Ma, following the onset at 2 Ma of dextral motion of the Median Tectonic Line in Shikoku Island (Fig. 1a) (Tsukuda 1990), and has involved several caldera-forming eruptions. The Aso Caldera, part of the 2016 rupture zone, was formed by four major catastrophic eruptions from ~270 to 90 ka (Ono and Watanabe 1985). Present-day N–S extension occurs at a rate of 1–2 cm/year across the BSRZ, detected by both decadal triangulation networks (Tada 1984) and modern GPS networks (e.g., Nishimura and Hashimoto 2006), together with frequent small-to-moderate magnitude normal-faulting earthquakes. The Futagawa fault, striking N60°E, forms part of the Oita-Kumamoto Tectonic Line and was a predominant part of the 2016 seismic source. The fault has been accommodating N–S stretching on the southern margin of the BSRZ. This fault is hypothesized to be a right-lateral strike-slip fault with significant southeast-side-up vertical movement, making an E–W-trending graben and contributing to the subsidence of the Kumamoto Plain (e.g., Ishizaka et al. 1995). In contrast, the ~80-km-long, N40°E-trending Hinagu fault, located outside the BSRZ, is more favorably oriented for right-lateral strike-slip faulting, except for its central and southern sections, which are estimated to have modest east-side-up vertical movement (Research Group for Active Faults of Japan 1991). Such differences in geometry and stress field between the Hinagu fault and the Futagawa fault are manifested in the different surface rupture traces and slip senses associated with the Mw = 7.0 16 April Kumamoto earthquake. Our field observations revealed that a ~10-km stretch of the 20-km-long Futagawa fault rupture was accompanied by a series of normal fault scarps along the previously mapped Idenokuchi fault (Fig. 1b). Together with a NW-dipping source fault inferred from the aftershock distribution and geodetic inversion, we found that slip-partitioned fault breaks occurred during the Kumamoto earthquake. This could be the second significant case of coseismic slip partitioning after the 2001 Kokoxili, China, earthquake (King et al. 2005), and it gives us a great opportunity to understand the mechanism of slip partitioning and to gain clues about rupture dynamics and long-term fault development in a complex stress environment. In this paper, we describe the significant features of the coexistence of strike-slip and normal-faulting ruptures associated with the 2016 Kumamoto earthquake and then illustrate a possible mechanism for such slip partitioning and its implications for seismic hazard evaluation.
The 2016 earthquake surface rupture

Detailed mapping of the surface break and measurements of fault offsets were undertaken from 16 April to 16 June by ~20 research scientists from numerous Japanese universities. The mapping observations, coordinated by Associate Prof. Kumahara (Kumahara et al. 2016), conclude that the surface rupture associated with the 16 April 2016 event is ~30 km long, involving 6 km of the northernmost part of the Hinagu fault with up to ~0.6 m of right-lateral slip, and a 20-km-long break on the Futagawa fault with up to ~2.0 m of right-lateral slip. The rupture progressed easterly for another 5 km on a previously unknown right-lateral strike-slip fault into the Aso Caldera, with up to 1 m of right-lateral slip. A series of left-stepping en-echelon fractures, corresponding to right-lateral faulting, from meter to kilometer scale, are visible in the field and along mapped surface rupture traces. On a large scale, these ground breaks mostly follow the previously mapped active fault traces of the Hinagu and Futagawa fault zones identified as significant faulted landforms (e.g., Research Group for Active Faults of Japan 1991; Nakata and Imaizumi 2002) (Fig. 1). However, on a small scale, there are numerous small fault branches, parallel-running rupture traces, and short conjugate NW-trending left-lateral faults occurring off the mapped faults, demonstrating complex surface phenomena.

Slip partitioning at the 2016 earthquake

During the surface rupture mapping project, the authors of this paper were mostly in charge of the eastern portion of the Futagawa fault, where we found a 10-km stretch of coexisting coseismic normal and right-lateral strike-slip ground breaks separated by up to 2 km.

Right-lateral strike-slip fault zone

Normal faulting rupture zone

A normal surface rupture zone ~10 km in length predominantly emerged along the previously mapped 7-km-long Idenokuchi fault (Research Group for Active Tectonics in Kyushu 1989, Fig. 1). The overall vertical slip sense is southeast-up, which is consistent with the large-scale topography shown in Fig. 2. The trend of the normal fault zone is N55°E, parallel to the trend of the strike-slip fault zone. The separation distance between the two zones is 1.2–2.0 km. The normal rupture zone comprises several left-stepping sections and a significant fault bend where the strike differs by ~50° from the overall trend of the fault zone. A predominant morphological style of the surface rupture was a near-vertical free face of thick unconsolidated sediments and/or soil units involving a tensile (opening) component of up to ~1 m, occasionally with a colluvial wedge present. Tilted trees and meter-scale local landslides triggered by fault slip were commonly observed along the rupture zone. Since most of the surface breaks are located on steep slopes in deep forest and ranches (Fig. 4), we measured vertical offsets using a measuring tape, folding ruler, leveling staff and/or hand level, which involves approximately ±10 cm of measurement error. At key locations, we measured vertical separation by taking a surface profile using a laser distance meter (Laser Technology TruPulse 200X) and a target prism, which gives a measurement accuracy of ±4 cm. There were few man-made features available as piercing points, and hence we could not measure the amount of slip more accurately, in particular the lateral slip. The maximum vertical slip of ~2.0 m was measured at the 6.7-km mark (Fig. 3), gradually tapering toward the west and east (Fig. 3b).
But it comprises several triangular slip distributions corresponding to several subsections. The throw averages 0.7 m, approximately equivalent to the average right-lateral slip in the strike-slip zone. We note two minor features in the coseismic surface ruptures along the normal fault zone. One is left-lateral slip of up to 1.2 m accompanying the normal faulting (Fig. 3b). These left-lateral offsets were discernible from man-made features and by jigsaw matching and restoration of detailed rupture shapes between the up-thrown and down-thrown sides. The other observed feature is an antithetic fault that locally uplifts the downslope side of the main fault zone, making an uphill-facing fresh scarp (Fig. 4e). A significant antithetic (south-dipping) rupture scarp at the 6.2- to 7.3-km mark emerged exactly along the northern foothill of tectonic geomorphic bulges (pale blue lines in Fig. 2b). These two minor but important features enable us to prove that these 10-km-long continuous scarps are purely tectonic, not a product of a massive landslide or a combination of local landslides.

Fault locations on InSAR image

Synthetic aperture radar (SAR) images were obtained by ALOS-2/PALSAR-2 and analyzed by the Geospatial Information Authority of Japan (2016a). These images reveal that there is an E–W elongated crustal block sandwiched between the strike-slip and normal faulting traces, and that it moved separately from the surrounding crust. The locations of the interferogram fringe offsets are consistent with the positions of the strike-slip and normal faulting zones we found in the field (Fig. 2a). A set of concentric semicircular fringes north of the strike-slip zone indicates a broad zone of subsidence centered 1.5 km along the studied rupture zone (X in Fig. 2a). Another concentric semicircle, marked Y in Fig. 2a, appears in the western part of the normal fault zone, also suggesting subsidence in this region. Since the fringe patterns and fringe intervals of point Y differ from those south of the normal fault zone, we hypothesize that there is a continuous subsurface normal fault in this area with intermittent short surface breaks (mark 0–3 km in Fig. 2a). Dense E–W-striking fringes in the central portion (mark 3–6 km, Z in Fig. 2a) indicate significant subsidence bounded by the normal fault zone. There are 12 fringes present, suggesting an approximately 1.5-m increase in range (further away from the satellite), corresponding to subsidence. There are several discrete fringes south of the normal fault zone, which might be related to other short tectonic breaks, but we have not yet confirmed this in the field. Despite poor coherence, the other third of the slip-partitioned block at the 6–9 km mark (Fig. 2a) also shows densely distributed fringes clearly separated from the surrounding crust. These remarkable several-kilometer-scale features are omitted in the geodetic inversion model that assigns slip and rake on subfault patches of a single rectangular fault plane (Geospatial Information Authority of Japan 2016a). We thus conclude that slip partitioning occurred during the 2016 Kumamoto earthquake. The term "slip partitioning" is used to describe oblique motion along a fault system that is accommodated on two or more faults with different mechanisms (Fitch 1972; McCaffrey 1992; Jones and Wesnousky 1992; Bowman et al. 2003). Slip partitioning is observed at a variety of scales.
For example, Fitch (1972) observed that larger-scale (over 100 km) slip partitioning occurs along subduction systems. As shown in Fig. 1a, oblique subduction of the Philippine Sea plate underneath southwest Japan is accommodated not only by megathrust earthquakes along the Nankai Trough but also by right-lateral slip along the Median Tectonic Line (MTL) active fault system. The separation distance between the Nankai Trough and the MTL is over 100 km, which is comparable with the depth of the upper mantle and the thickness of the plate. Partitioned slip separated by distances of several tens of kilometers was explained by deeper oblique driving motion in the viscous lower crust (Bowman et al. 2003). Slip partitioning could thus be explained as a result of upward propagation of oblique shear at depth. The authors validated the model by applying it to parts of the San Andreas and Haiyuan faults. Nevertheless, neither plate-scale nor whole-crust-scale movement would cause slip partitioning during a single earthquake, even though slip partitioning at the several-kilometer scale has been documented elsewhere. Simultaneous movement of strike-slip and dip-slip faults in a single earthquake was first observed at the Mw 7.8 2001 Kokoxili earthquake along the Kunlun fault (King et al. 2005). These authors applied Bowman's idea to the Kokoxili earthquake and concluded that slip partitioning is a consequence of rupture propagation within the brittle upper crust, and thus that it would not occur if all parts of a fault moved simultaneously. The 2016 Kumamoto earthquake would be the second significant case among well-observed recent earthquakes. We here demonstrate a simple mechanical model of faulting in an elastic half-space from Okada (1992) with several constraints from field observations and the estimated fault dip at depth. We followed the fundamental hypothesis of the mechanics of slip partitioning proposed by Bowman et al. (2003) and King et al. (2005), in which movement on a buried oblique fault is modeled and the deformation field near the surface is analyzed. An additional minor feature of the Kumamoto slip partitioning is a small amount of left-lateral slip on the normal fault (Fig. 3b). Here we also examined the shear stress change for a left-lateral fault parallel to the NE-striking source fault in the dislocation model (Fig. 8c). We found that the driving stress from the buried oblique fault inhibits left-lateral slip on the NE-trending normal fault. However, if right-lateral faulting had occurred immediately before the normal faulting, it would have added moderate left-lateral shear stress on the normal fault. Our simple calculation may suggest that right-lateral movement occurred first, followed by normal faulting during the coseismic process, in addition to the upward propagation of the rupture. It should be noted that the effects of dynamic rupture propagation differ from those in a static model.

Perpetual features (topographic expression)

It is unclear whether corupture of the strike-slip fault and normal fault occurred solely during the 2016 event, or whether slip partitioning would always be a feature of the rupture of the source fault at depth. Normal faulting has occurred along the previously mapped ~7-km-long Idenokuchi fault, which displaces the 90-ka pyroclastic deposits by up to 60 m (Research Group for Active Tectonics in Kyushu 1989), yielding a slip rate of 0.7 mm/year. In the field, we observed cumulative vertical separation of the NW-facing slope (Fig. 4c).
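For concreteness, the quoted Idenokuchi rate follows directly from the measured offset and the deposit age:

\[
\dot{u} \;\approx\; \frac{60\ \mathrm{m}}{90\ \mathrm{ka}} \;\approx\; 0.67\ \mathrm{mm/yr} \;\approx\; 0.7\ \mathrm{mm/yr}.
\]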
The long-term right-lateral strike-slip rate along the Futagawa fault section is poorly known owing to the difficulty of estimating ages of piercing points. At one location, a paleoseismic trench excavation (Fig. 1b) immediately southwest of our study area in Mashiki Town provides a right-lateral strike-slip rate of 0.25 mm/year (Kumamoto Prefecture 1996), which is one-third of the normal faulting rate on the Idenokuchi fault. However, the right-lateral strike-slip around this trench site was only ~0.5 m in 2016 (Kumahara et al. 2016), as compared to the maximum right-lateral strike-slip of ~2 m to the northeast. This implies that the maximum slip rate of the strike-slip fault is substantially larger than the estimate at this trench. We therefore suggest that the slip rates of strike-slip and normal faulting are roughly equivalent, which would be consistent with the amounts of slip observed in the 2016 event. We thus believe that these near-surface tectonic structures are long-lived features and that coseismic slip-partitioning behavior has frequently occurred in the past, in particular since the last caldera-forming eruption.

Location of asperity, aftershock productivity and postseismic deformation

It is interesting to note that the location of a large asperity with up to 5 m of net slip in the seismic waveform inversion (Kubo et al. 2016) corresponds to the section of slip partitioning. The variable slip model may also explain the paucity of aftershocks on this slip-partitioned section (Figs. 1, 6), if the large slip released accumulated strain and generated a stress shadow. But we propose another reason for the lack of aftershocks. The significant normal fault slip, relative to the western part of the 2016 rupture zone (outside our study area), might have strengthened the fault zone, according to a model suggested by Sibson (1992). Figure 2 in Sibson (1992) illustrates that both normal stress and frictional strength decrease through the interseismic period on an optimally oriented normal fault due to a reduction of horizontal stress, and frictional strength suddenly increases immediately after failure. Another intriguing feature that might be related to the slip partitioning is that minor postseismic subsidence of a few centimeters occurred in the crustal block bounded by the Futagawa and Idenokuchi faults. It was clearly detected by interferograms of ALOS-2/PALSAR-2 data using a pair of SAR images obtained on 17 April 2016 and 2 May 2016 (Geospatial Information Authority of Japan 2016b). We hypothesize that this faint subsidence occurred in the block partially detached from the surrounding crust (Fig. 7).

Implication for the stress field estimate and seismic hazard analyses

Coseismic slip partitioning during the Kumamoto earthquake provides us with a caveat against stress tensor inversion (e.g., Michael 1984) using local geological data. It could be argued that the coordinates of the principal stress axes near the Idenokuchi fault, which are favorable for normal faulting, differ from those along the Futagawa fault. But it could also be deduced that a large group of samples from the 2016 rupture zone would reproduce a representative stress field along the southern BSRZ with a mix of two types of fault slip. This demonstrates a good example of the scale and depth dependency of stress heterogeneity (e.g., Smith and Heaton 2011) and of sample size dependency in space and time in stress tensor inversion.
On the other hand, the Kumamoto earthquake furnishes an important clue for properly evaluating the grouping of multiple fault strands. The coexistence of faults of different types within several kilometers, narrower than seismogenic source dimensions, does not exclude them from being individual seismic sources. Instead, multiple fault strands, even with different slip senses, may be joined together as a single seismogenic source at depth. For example, the Median Tectonic Line active fault system in Shikoku Island comprises a dipping strike-slip fault system involving secondary parallel thrust fault traces. This could be a candidate for investigating the hypothesis further. The Kumamoto earthquake, for which the Futagawa and Idenokuchi faults were previously identified, would have been such a case. This case fits within the limiting dimension of fault steps of 3–4 km, beyond which earthquake ruptures would not propagate (Wesnousky 2006), if the difference in slip sense were ignored. However, broadening of a fault zone due to such a bifurcation toward the surface will probably lead to widening of the areas affected by strong ground motion. Geologic observations of the surface rupture associated with the Mw = 7.0 16 April Kumamoto earthquake revealed a ~10-km stretch of two parallel rupture strands of strike-slip and normal faulting reoccupying older fault scarps along the Futagawa and Idenokuchi faults, respectively. The locations and slip motions of the rupture zones were also manifested as interferogram fringe offsets in InSAR images. Coupled with the aftershock distribution and the seismic and geodetic inversions of other studies, we found that slip-partitioned fault breaks occurred during the Kumamoto earthquake under a complex stress field, leading to oblique fault motion mixing right-lateral shear and extension on the southern margin of the Beppu-Shimabara Rift Zone (BSRZ). The Kumamoto rupture is the second significant case of coseismic slip partitioning. Our simple dislocation model with a subsurface oblique-slip fault demonstrates that such a bifurcation into pure strike-slip and normal faults likely occurs near the surface for optimally oriented failure. This provides insight into scale- and depth-dependent stress heterogeneity. For seismic hazard estimates, the Kumamoto case also implies that multiple fault strands, regardless of slip sense, may need to be evaluated as a single seismic source fault, which may influence the estimated size of the strongly shaken areas.

ST organized the field survey in the study area, conducted the modeling and drafted the manuscript. ST, HK, SO, DI, and ZM mapped the surface rupture and measured fault slip in the field. ZM contributed to revising the English. All authors have read and approved the final manuscript.

We acknowledge the helpful discussions and interactions in the field with Yasuhiro Kumahara and the other members of the 2016 Kumamoto earthquake surface rupture mapping group. We also thank Naoya Takahashi, Tomoki Tanaka and Shintaro Kashihara for their support during our field mapping. Our field work was supported by Tohoku University and The Ministry of Education, Culture, Sports, Science and Technology (MEXT) KAKENHI Grant Numbers 16H06298 and 16H03112. Travel expenses for ZM were provided by the Short-Term Fellowship program (No. PE15776) of the Japan Society for the Promotion of Science (JSPS). The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Bowman D, King G, Tapponnier P (2003) Slip partitioning by elastoplastic propagation of oblique slip at depth. Science 300:1121–1123
- Fitch T (1972) Plate convergence, transcurrent faults, and internal deformation adjacent to southeast Asia and the western Pacific. J Geophys Res 77:4432–4462
- Geospatial Information Authority of Japan (2016a) The 2016 Kumamoto Earthquake: Crustal deformation around the faults (in Japanese). http://www.gsi.go.jp/BOUSAI/H27-kumamoto-earthquake-index.html#3. Accessed 25 June 2016
- Geospatial Information Authority of Japan (2016b) The 2016 Kumamoto Earthquake: Crustal deformation around the faults (in Japanese), postseismic deformation. http://www.gsi.go.jp/common/000140323.png. Accessed 25 June 2016
- Ishizaka S, Iwasaki Y, Hase Y, Watanabe K, Iwauchi A, Taziri M (1995) Subsidence rate and sediments of the last interglacial epoch in the Kumamoto Plain, Japan. Quat Res 34:335–344 (in Japanese with English abstract)
- Jones CH, Wesnousky SG (1992) Variations in strength and slip rate along the San Andreas fault system. Science 256:83–86
- Kamata H, Kodama K (1994) Tectonics of an arc-arc junction: an example from Kyushu Island at the junction of the Southwest Japan Arc and the Ryukyu Arc. Tectonophysics 233:69–81
- Kamata H, Uto K, Uchiumi S (1988) Geochronology and evolution of the post-Shishimuta caldera activity around the Waitasan area in the Hohi volcanic zone, Kyushu, Japan. Bull Volcanol Soc Jpn 33:305–320
- King G, Klinger Y, Bowman D, Tapponnier P (2005) Slip-partitioned surface breaks for the Mw 7.8 2001 Kokoxili earthquake, China. Bull Seismol Soc Am 95:731–738
- Kubo H, Suzuki W, Aoi S, Sekiguchi H (2016) Source rupture processes of the 2016 Kumamoto, Japan, earthquakes estimated from strong-motion waveforms. Earth Planets Space 68:161–173. doi:10.1186/s40623-016-0536-8
- Kumahara Y, Goto H, Nakata T, Ishiguro S, Ishimura D, Ishiyama T, Okada S, Kagohara K, Kashihara S, Kaneda H, Sugito N, Suzuki Y, Takenami T, Tanaka K, Tanaka T, Tsutsumi H, Toda S, Hirouchi D, Matsuta N, Mita T, Moriki H, Yoshida H, Watanabe M (2016) Distribution of surface rupture associated with the 2016 Kumamoto earthquake and its significance. Abstract (MIS34-05) of the Japan Geoscience Union Meeting 2016. http://www2.jpgu.org/meeting/2016/PDF2016/M-IS34_O.pdf. Accessed 25 June 2016
- Kumamoto Prefecture (1996) Survey report on the Futagawa and Tatsutayama faults (in Japanese). http://www.hp1039.jishin.go.jp/danso/Kumamoto2Afrm.htm. Accessed 12 July 2016
- Matsumoto Y (1979) Some problems on volcanic activities and depression structures in Kyushu, Japan. Mem Geol Soc Jpn 16:127–139
- McCaffrey R (1992) Oblique plate convergence, slip vectors, and forearc deformation. J Geophys Res 97:8905–8915
- Michael A (1984) Determination of stress from slip data: faults and folds. J Geophys Res 89:11517–11526
- Nakata T, Imaizumi T (eds) (2002) Digital active fault map of Japan. University of Tokyo Press, Tokyo (DVD)
- Nishimura S, Hashimoto M (2006) A model with rigid rotations and slip deficits for the GPS-derived velocity field in southwest Japan. Tectonophysics 421:187–202
- Okada Y (1992) Internal deformation due to shear and tensile faults in a half-space. Bull Seismol Soc Am 82:1018–1040
- Ono K, Watanabe K (1985) Geological map of Aso Volcano. In: Geological Map of Volcanoes, No. 4. Geological Survey of Japan
- Research Group for Active Faults of Japan (1991) Active faults in Japan, sheet maps and inventories, rev edn. University of Tokyo Press, Tokyo
- Research Group for Active Tectonics in Kyushu (1989) Active tectonics in Kyushu. University of Tokyo Press, Tokyo
- Sibson RH (1992) Implications of fault-valve behavior for rupture nucleation and recurrence. Tectonophysics 211:283–293
- Smith DE, Heaton TH (2011) Models of stochastic, spatially varying stress in the crust compatible with focal-mechanism data, and how stress inversions can be biased toward the stress rate. Bull Seismol Soc Am 101:1396–1421
- Tada T (1984) Spreading of the Okinawa Trough and its relation to the crustal deformation in Kyushu. J Seismol Soc Jpn 37:407–415
- Tsukuda E (1990) Active tectonics of the median tectonic line. Bull Geol Surv Jpn 41:405–406 (in Japanese)
- Watanabe K, Momikura Y, Tsuruta K (1979) Active faults and parasitic eruption centers on the western flank of Aso Caldera, Japan. Quat Res 18:89–101 (in Japanese with English abstract)
- Wesnousky SG (2006) Predicting the endpoints of earthquake ruptures. Nature 444:358–360. doi:10.1038/nature05275
- Yabe H (1925) The "Nagasaki Dreiecke" proposed by Prof. Richthofen ("Richthofen-shi no Nagasaki-sankaku-chiiki"). J Geol Soc Jpn 32:201–209 (in Japanese)
When it comes to statistical modeling, few things are as tried and tested as linear regression. It's simple, it's (fairly) easy to conceptualize, and it's fast. Unfortunately, most of the articles I've read about it feel closer to math textbooks than to layman's definitions. In this post I'll give a fairly informal definition of linear regression, overview the goals of linear regression, and talk about a few things you can use it for. Caveat lector: this post intentionally avoids rigorous mathematical definitions of linear regression!

Try Googling It

In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables denoted X. The case of one explanatory variable is called simple linear regression. Oh, well now it's all so obvious. There are some scary words in there: scalar, dependent variable, explanatory variable, and even this thing called simple linear regression. What-the-what? I thought I was already looking for the simplest definition!

This all started in the 1800s with a guy named Francis Galton. Galton was studying the relationship between parents and their children. In particular, he investigated the relationship between the heights of fathers and their sons. What he discovered (as you might expect) was that a man's son tended to be roughly as tall as his father. However, Galton's breakthrough was that the son's height tended to be closer to the overall average height of all people. Let's take Shaquille O'Neal as an example. Shaq is really tall, 7 ft 1 in to be exact (for you metric fans that's about 2.2 meters). If Shaq has a son, chances are he'll be pretty tall too. However, Shaq is such an anomaly that there is also a very good chance that his son will not be as tall as Shaq.

Let's take the simplest possible example: calculating a regression with only 2 data points. Now while the statistician in the room might be quaking in fear at the thought of this, I think it'll help get my point across. All we're trying to do when we calculate our regression line is draw a line that's as close to every dot as possible. For classic linear regression, or the "Least Squares Method", you only measure the closeness in the "up and down" direction (there are plenty of other ways to do this, but to be honest it usually doesn't matter). So if you draw a straight line that is as close as possible to each of our 2 points, you get something like this: This is great! Our line crosses through both data points (this is also the definition of a line). If we want to calculate the equation of this line, we can use the slope formula, m = (y2 − y1) / (x2 − x1). Plugging in one of our points, we can then solve for the intercept b in y = mx + b. Now hopefully you aren't too impressed by this, but this is in some sense the basis of what a linear regression is!

Scaling Up from There

Now wouldn't it be great if we could apply this same concept to a graph with more than just two data points? By doing this, we could take multiple men and their sons' heights and do things like tell a man how tall we expect his son to be...before he even has a son! Below, we see 1000 father/son height combos. To roughly estimate a regression line, it's pretty simple: Just draw a line that is as close as possible to every point on your graph. Now this might be a little tedious to do by hand, but you'd be surprised at how close you can come just by eyeballing things.
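Before scaling up, here is the two-point calculation from above as runnable code (a sketch of mine; the points are made up, since the post's actual points appeared only in a figure):

```python
# Fit the unique line through two points: slope from the slope formula,
# then intercept by solving y = m*x + b for b.
x1, y1 = 1.0, 2.0   # illustrative point 1
x2, y2 = 3.0, 6.0   # illustrative point 2

m = (y2 - y1) / (x2 - x1)   # "rise over run"
b = y1 - m * x1             # plug one point back in

print(f"y = {m}x + {b}")    # y = 2.0x + 0.0
```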
A Little More Complex

We can use the same approach we used with 2 points, but now that we have 1,000 data points this is a bit more complex. Below is the result of the linear regression, with the fitted line in red. I've used R to create the regression, but there are tons of ways to do this (see below). A critical question to ask at this point is: why this line? Why not a line with a greater slope, or, even more extreme, a vertical line? Furthermore, how can we claim that this line is the best, and what does it mean to be the best line? Let's compare the red line to two other lines below: Clearly these two lines don't fit our data very well. But what does that mean mathematically? Without getting too deep into the math, recall from earlier in the post that our goal with linear regression is to minimize the vertical distance between all the data points and our line. So in determining the best line, we are attempting to minimize the total distance between the points and the line. There are lots of different ways to minimize this (sum of squared errors, sum of absolute errors, etc.), but all these methods have the general goal of minimizing this distance. In our example, we can see that if we were to take the total vertical distance between the points and the red line, DR, the total vertical distance between the points and the green line, DG, and the total vertical distance between the points and the blue line, DB, the total distance between the points and the red line is smaller. In pseudo-mathematical terms: DR < DG and DR < DB. There are more robust mathematical proofs to show this, but we won't get into that here. If you're interested in reading more about this, I recommend Khan Academy's Tutorial.

Linear regression is a powerful tool that you can do some really cool stuff with. From just this example you could estimate how tall a man's son will be before he has one, determine which of your friends is freakishly tall with respect to their dad, or even compare different groups of men and their sons over time to analyze trends. One day, Simba, you will be 4' 10". While this post intentionally breezes over the math aspects of linear regression, it's undeniable that to use linear regression in practice, you need a thorough understanding of both the qualitative and quantitative characteristics of the regression. As the number of inputs to a model increases, so does its complexity, and it is important to understand the ramifications of this to be able to make sense of your model. That is nearly impossible without understanding the math behind these statistical techniques. That said, linear regression is a great place to start learning statistical modeling techniques, and the links below should help you get going!
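The post fit its line in R; for anyone who wants to reproduce the idea end to end, here is a rough Python equivalent (the simulated heights, coefficients, and the deliberately "worse" line are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated father/son heights in inches -- stand-ins for the 1,000
# real pairs in the post, with regression toward the mean built in.
father = rng.normal(69, 3, 1000)
son = 0.5 * father + 34.5 + rng.normal(0, 2.5, 1000)

# Least-squares fit: minimizes the sum of squared vertical distances.
slope, intercept = np.polyfit(father, son, 1)

def sse(m, b):
    """Sum of squared vertical distances from the points to y = m*x + b."""
    return np.sum((son - (m * father + b)) ** 2)

print(f"fit: son ~= {slope:.2f} * father + {intercept:.1f}")
print(f"SSE of fitted line:  {sse(slope, intercept):.0f}")
print(f"SSE of a worse line: {sse(1.5, -30):.0f}")  # always larger
```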
Be it in sports or comedy, they say that timing is everything. In evolution, it's no different. Many of the innovations that have separated us from other apes may have arisen not through creating new genetic material, but by subtly shifting how the existing lot is used. Take our brains, for example. In the brains of humans, chimps and many other mammals, the genes that are switched on in the brain change dramatically in the first few years of life. But Mehmet Somel from the Max Planck Institute for Evolutionary Anthropology has found that a small but select squad of genes, involved in the development of nerve cells, are activated much later in our brains than in those of other primates.

This genetic delay mirrors other physical shifts in timing that separate humans from other apes. Chimpanzees, for example, become sexually mature by the age of 8 or 9; we take five more years to reach the same point of development. These delays are signs of an evolutionary process called "neoteny", where a species' growth slows down to the point where adults retain many of the features previously seen in juveniles. You can see neoteny at work in some domestic dog breeds, which are remarkably similar to baby wolves, or the axolotl salamander, which keeps the gills of a larva even as it becomes a sexually mature adult. And some scientists, like the late Stephen Jay Gould, have suggested that neoteny has played a major role in human evolution too. As adults, we share many of the physical features of immature chimps. Our bone structures, including flat faces and small jaws, are similar to those of juvenile chimps, as is our patchy distribution of hair. A slower rate of development may even have shaped our vaunted intelligence, by stretching out the time when we are most receptive to new skills and knowledge. Somel's research supports this idea by showing that since our evolutionary split from chimpanzees, the activation of some important brain genes has been delayed to the very start of adolescence.

Somel collected samples of brain matter from 39 humans, 14 chimps and 9 macaques, all of whom had recently passed away and who represented a wide range of ages. He focused on almost 8,000 genes and analysed when they became activated in a part of the brain called the dorsolateral prefrontal cortex (DLPFC). On a broad level, all three species showed similar patterns. The majority of these 8,000 genes were expressed in different ways as the animals aged, and most of these changes happened very early on. In both chimps and humans, half of these shifts take place within the first year of life. The same thing happens in the brains of mice, which suggests that this is a general pattern of brain development that is common to most, if not all, mammals.

But these superficial similarities mask subtler differences. Somel looked at each gene individually and focused on those that are activated differently as the brain matures; he found that half of these follow different journeys in humans compared to chimps. Some were switched on at an earlier point in time, but a much higher proportion were delayed in their timing. These genes were neotenic – in the brains of humans, their use matches that of much younger chimps. To work out the proportion of neotenic genes in the human DLPFC, Somel compared brain samples taken from 14 humans and 14 chimps, who were matched as closely as possible for both age and sex.
He considered 300 genes whose timings had shifted in humans or chimps, and worked out whether each was activated earlier (accelerated) or later (neotenic) in either species. He found that the neotenic human genes were in the majority and outnumbered the other groups by a factor of two. This isn't just a property of the DLPFC; Somel found the same pattern by looking at another part of the brain – the superior frontal gyrus – in 9 other humans, chimps and macaques. Here, the dominance of neotenic human genes was even more pronounced, and they comprised half of the total sample.

So clearly, some genes in the human brain have come to be activated at a later point in time than their counterparts in chimps. These are still in the minority – it's not the case that the development of the human brain as a whole has been drawn out. This mirrors the use of neoteny in explaining the evolution of human anatomy too – those who first proposed the idea of humans as neotenic apes applied the idea with broad brushstrokes to our entire bodies; now it's restricted to specific parts.

Finding out exactly what these genes do is the next big step. For starters, Somel has found that many of the neotenic genes are activated in our grey matter (the bodies of nerve cells, as opposed to white matter, which is the cabling that connects them). The differences in timing between humans and chimps are particularly pronounced when we reach the start of adolescence. At this point, our brains become dramatically reorganised and our grey matter actually starts to shrink, as unused connections between neurons are trimmed in a bid for efficiency. Somel thinks that delaying the point at which this happens may have given us extra invaluable time with which to pick up knowledge and skills.

[PS I'm not sure I like the headline. For some reason, I had a total mental block about it; better suggestions welcome – Ed]

Reference: PNAS doi:10.1073/pnas.0900544106, to be published this week
International System of Units

The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units (the ampere, kelvin, second, metre, kilogram, candela and mole) and a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system also specifies names for 22 derived units, such as the lumen and watt, for other common physical quantities. The base units are derived from invariant constants of nature, such as the speed of light and the triple point of water, which can be observed and measured with great accuracy, and from one physical artefact. The artefact is the international prototype kilogram, certified in 1889, consisting of a cylinder of platinum-iridium that nominally has the same mass as one litre of water at the freezing point. Its stability has been a matter of significant concern, culminating in a proposed revision of the definitions of the base units entirely in terms of constants of nature, expected to be put into effect in May 2019.

Derived units may be defined in terms of base units or other derived units. They are adopted to facilitate measurement of diverse quantities. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves. The most recent derived unit, the katal, was defined in 1999. The reliability of the SI depends not only on the precise measurement of standards for the base units in terms of various physical constants of nature, but also on the precise definition of those constants. The set of underlying constants is modified as more stable constants are found, or as other constants become more precisely measurable. For example, in 1983 the metre was redefined as the distance light propagates in vacuum in an exact fraction of a second, so that the speed of light now has an exact value in terms of the defined units.

The motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second (MKS) system of units rather than any variant of the CGS. Since then, the SI has been adopted by all countries except the United States, Liberia and Myanmar.
Units and prefixes

The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes. The units, excluding prefixed units,[Note 1] form a coherent system of units, which is based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have exactly the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s2 says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a.

Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[Note 2] Other useful derived quantities can be specified in terms of the SI base and derived units but have no named units in the SI system, such as acceleration, which is defined in SI units as m/s2.

The SI base units are the building blocks of the system, and all the other units are derived from them. When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: mass, length and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units (for temperature, amount of substance and luminous intensity) were added later.

|Name|Symbol|Dimension symbol|Quantity|
|metre|m|L|length|
|kilogram|kg|M|mass|
|second|s|T|time|
|ampere|A|I|electric current|
|kelvin|K|Θ|thermodynamic temperature|
|mole|mol|N|amount of substance|
|candela|cd|J|luminous intensity|

The original definitions of the various base units in the above table were made by a number of earlier authorities; all later definitions result from resolutions by either the CGPM or the CIPM and are catalogued in the SI Brochure.

The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are mostly interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter. It relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. In SI units, if you apply a force of 1 N (newton) to a mass of 1 kg, it will accelerate at 1 m/s2. This is true whether the object is floating in space or in a gravity field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence its weight depends on the strength of that field. The weight of a 1 kg mass at the Earth's surface is m × g, mass times the acceleration due to gravity, which comes to 9.81 newtons at the Earth's surface and about 3.5 newtons at the surface of Mars. Weight is not a suitable basis for precision measurement because the acceleration due to gravity is local and varies over the surface of the earth, since the earth does not have uniform density or radius in all directions. It also varies with altitude or depth (distance from the earth's centre).
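As a quick numeric illustration of the coherence principle and the mass/weight distinction described above (a sketch, using the rounded g values quoted in the text):

```python
# Coherence: applying 1 N to 1 kg yields exactly 1 m/s^2 (F = m * a).
mass_kg = 1.0
force_n = 1.0
print(force_n / mass_kg)    # 1.0 m/s^2

# Weight = m * g depends on the local gravitational acceleration,
# so the same mass weighs differently on Earth and on Mars.
g_earth = 9.81              # m/s^2, as quoted above
g_mars = 3.5                # m/s^2, implied by the ~3.5 N figure above
print(mass_kg * g_earth)    # 9.81 N on Earth
print(mass_kg * g_mars)     # ~3.5 N on Mars
```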
The derived units in the SI are formed by powers, products or quotients of the base units and are unlimited in number. Derived units are associated with derived quantities; for example, velocity is a quantity that is derived from the base quantities of time and length, and thus the SI derived unit is the metre per second (symbol m/s). The dimensions of derived units can be expressed in terms of the dimensions of the base units. Combinations of base and derived units may be used to express other derived units. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa)—and the pascal can be defined as one newton per square metre (N/m2).

|Name [note 1]|Symbol|Quantity|In other SI units|In SI base units|
|radian [note 2]|rad|plane angle||(m⋅m−1)|
|steradian [note 2]|sr|solid angle||(m2⋅m−2)|
|joule|J|energy, work, heat|N⋅m = Pa⋅m3|kg⋅m2⋅s−2|
|watt|W|power, radiant flux|J/s|kg⋅m2⋅s−3|
|coulomb|C|electric charge or quantity of electricity||s⋅A|
|volt|V|voltage (electrical potential), emf|W/A|kg⋅m2⋅s−3⋅A−1|
|ohm|Ω|resistance, impedance, reactance|V/A|kg⋅m2⋅s−3⋅A−2|
|tesla|T|magnetic flux density|Wb/m2|kg⋅s−2⋅A−1|
|degree Celsius|°C|temperature relative to 273.15 K||K|
|becquerel|Bq|radioactivity (decays per unit time)||s−1|
|gray|Gy|absorbed dose (of ionizing radiation)|J/kg|m2⋅s−2|
|sievert|Sv|equivalent dose (of ionizing radiation)|J/kg|m2⋅s−2|

1. The table is ordered so that a derived unit is listed after the units upon which its definition depends.
2. The radian and steradian are defined as dimensionless derived units.

Prefixes are added to unit names to produce multiples and sub-multiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram. When prefixes are used to form multiples and submultiples of SI base and derived units, the resulting units are no longer coherent.

The BIPM specifies twenty prefixes for the International System of Units (SI): deca (da), hecto (h), kilo (k), mega (M), giga (G), tera (T), peta (P), exa (E), zetta (Z) and yotta (Y) for multiples, and deci (d), centi (c), milli (m), micro (μ), nano (n), pico (p), femto (f), atto (a), zepto (z) and yocto (y) for submultiples.
- Prefixes adopted before 1960 already existed before SI. 1873 was the introduction of the CGS system.

Non-SI units accepted for use with SI

Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI, which are grouped as follows:[Note 4]
- Non-SI units accepted for use with the SI:
- Certain units of time, angle, and legacy non-SI metric units have a long history of consistent use. Most societies have used the solar day and its non-decimal subdivisions as a basis of time and, unlike the foot or the pound, these were the same regardless of where they were being measured. The radian, being 1/2π of a revolution, has mathematical advantages but is cumbersome for navigation, and, as with time, the units used in navigation are largely consistent around the world.
The tonne, litre, and hectare were adopted by the CGPM in 1879 and have been retained as units that may be used alongside SI units, having been given unique symbols.
- Non-SI units whose values in SI units must be obtained experimentally (Table 7).
- Physicists often use units of measure that are based on natural phenomena, particularly when the quantities associated with these phenomena are many orders of magnitude greater than or less than the equivalent SI unit. The most common ones have been catalogued in the SI Brochure together with consistent symbols and accepted values, but with the caveat that their values in SI units need to be measured.
- Other non-SI units (Table 8):
- A number of non-SI units that had never been formally sanctioned by the CGPM have continued to be used across the globe in many spheres, including health care and navigation. As with the units of measure in Tables 6 and 7, these have been catalogued by the CIPM in the SI Brochure to ensure consistent usage, but with the recommendation that authors who use them should define them wherever they are used.
- In the interests of standardising health-related units of measure used in the nuclear industry, the 12th CGPM (1964) accepted the continued use of the curie (symbol Ci) as a non-SI unit of activity for radionuclides; the SI derived units becquerel, sievert and gray were adopted in later years. Similarly, the millimetre of mercury (symbol mmHg) was retained for measuring blood pressure.
- Non-SI units associated with the CGS and the CGS-Gaussian system of units (Table 9)
- The SI manual also catalogues a number of legacy units of measure that are used in specific fields such as geodesy and geophysics or are found in the literature, particularly in classical and relativistic electrodynamics where they have certain advantages. The units that are catalogued are the erg, dyne, poise, stokes, stilb, phot, gal, maxwell, gauss and œrsted.

Common notions of the metric units

The basic units of the metric system, as originally defined, represented common quantities or relationships in nature. They still do – the modern precisely defined quantities are refinements of definition and methodology, but still with the same magnitudes. In cases where laboratory precision may not be required or available, or where approximations are good enough, the original definitions may suffice.[Note 5]
- A second is 1/60 of a minute, which is 1/60 of an hour, which is 1/24 of a day, so a second is 1/86400 of a day; a second is the time it takes a dense object to freely fall 4.9 metres from rest.
- The metre is close to the length of a pendulum that has a period of 2 seconds; most dining tabletops are about 0.75 metre high; a very tall human (basketball forward) is about 2 metres tall.
- The kilogram is the mass of a litre of cold water; a cubic centimetre or millilitre of water has a mass of one gram; a 1-euro coin weighs 7.5 g; a Sacagawea US 1-dollar coin, 8.1 g; a UK 50-pence coin, 8.0 g.
- A candela is about the luminous intensity of a moderately bright candle, or 1 candle power; a 60 W tungsten-filament incandescent light bulb has a luminous intensity of about 64 candela.
- A mole of a substance has a mass that is its molecular mass expressed in units of grams; the mass of a mole of table salt is 58.4 g.
- A temperature difference of one kelvin is the same as one degree Celsius: 1/100 of the temperature differential between the freezing and boiling points of water at sea level; the absolute temperature in kelvins is the temperature in degrees Celsius plus about 273; human body temperature is about 37 °C or 310 K.
- A 60 W incandescent light bulb consumes 0.5 amperes at 120 V (US mains voltage) and about 0.26 amperes at 230 V (European mains voltage).

Names of units follow the grammatical rules associated with common nouns: in English and in French they start with a lowercase letter (e.g., newton, hertz, pascal), even when the symbol for the unit begins with a capital letter. This also applies to "degrees Celsius", since "degree" is the unit. The official British and American spellings for certain SI units differ – British English, as well as Australian, Canadian and New Zealand English, uses the spellings deca-, metre, and litre, whereas American English uses the spellings deka-, meter, and liter, respectively.

Unit symbols and the values of quantities

Although the writing of unit names is language-specific, the writing of unit symbols and the values of quantities is consistent across all languages, and therefore the SI Brochure has specific rules for writing them. The guideline produced by the National Institute of Standards and Technology (NIST) clarifies language-specific areas in respect of American English that were left open by the SI Brochure, but is otherwise identical to the SI Brochure. General rules[Note 6] for writing SI units and quantities apply to text that is either handwritten or produced using an automated process:
- The value of a quantity is written as a number followed by a space (representing a multiplication sign) and a unit symbol; e.g., 2.21 kg, 7.3 × 10² m², 22 K. This rule explicitly includes the percent sign (%) and the symbol for degrees of temperature (°C). Exceptions are the symbols for plane angular degrees, minutes, and seconds (°, ′, and ″), which are placed immediately after the number with no intervening space.
- Symbols are mathematical entities, not abbreviations, and as such do not have an appended period/full stop (.), unless the rules of grammar demand one for another reason, such as denoting the end of a sentence.
- A prefix is part of the unit, and its symbol is prepended to a unit symbol without a separator (e.g., k in km, M in MPa, G in GHz, μ in μg). Compound prefixes are not allowed. A prefixed unit is atomic in expressions (e.g., km2 is equivalent to (km)2).
- Symbols for derived units formed by multiplication are joined with a centre dot (⋅) or a non-breaking space; e.g., N⋅m or N m.
- Symbols for derived units formed by division are joined with a solidus (/), or given as a negative exponent; e.g., the "metre per second" can be written m/s, m s−1, or m⋅s−1. A solidus must not be used more than once in a given expression without parentheses to remove ambiguities; e.g., kg/(m⋅s2) and kg⋅m−1⋅s−2 are acceptable, but kg/m/s2 is ambiguous and unacceptable.
- The first letter of symbols for units derived from the name of a person is written in upper case; otherwise, they are written in lower case. E.g., the unit of pressure is named after Blaise Pascal, so its symbol is written "Pa", but the symbol for mole is written "mol". Thus, "T" is the symbol for tesla, a measure of magnetic field strength, and "t" the symbol for tonne, a measure of mass.
- Since 1979, the litre may exceptionally be written using either an uppercase "L" or a lowercase "l", a decision prompted by the similarity of the lowercase letter "l" to the numeral "1", especially with certain typefaces or English-style handwriting. The American NIST recommends that within the United States "L" be used rather than "l".
- A plural of a symbol must not be used; e.g., 25 kg, not 25 kgs.
- Uppercase and lowercase prefixes are not interchangeable. E.g., the quantities 1 mW and 1 MW represent two different quantities (milliwatt and megawatt).
- The symbol for the decimal marker is either a point or comma on the line. In practice, the decimal point is used in most English-speaking countries and most of Asia, and the comma in most of Latin America and in continental European countries.
- Spaces should be used as a thousands separator (1 000 000) in contrast to commas or periods (1,000,000 or 1.000.000), to reduce confusion resulting from the variation between these forms in different countries.
- Any line-break inside a number, inside a compound unit, or between number and unit should be avoided. Where this is not possible, line breaks should coincide with thousands separators.
- Since the value of "billion" and "trillion" can vary from language to language, the dimensionless terms "ppb" (parts per billion) and "ppt" (parts per trillion) should be avoided. No alternative is suggested in the SI Brochure.

Printing SI symbols

The rules covering printing of quantities and units are part of ISO 80000-1:2009.

International System of Quantities

The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ). The system is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear, non-contradictory equations. The ISQ defines the quantities that are measured with the SI units. The ISQ is defined in the international standard ISO/IEC 80000, and was finalised in 2009 with the publication of ISO 80000-1.

Realisation of units

Metrologists carefully distinguish between the definition of a unit and its realisation. The definition of each base unit of the SI is drawn up so that it is unique and provides a sound theoretical basis on which the most accurate and reproducible measurements can be made. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit. A description of the mise en pratique[Note 8] of the base units is given in an electronic appendix to the SI Brochure. The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit." In the current (2016) exercise to overhaul the definitions of the base units, various consultative committees of the CIPM have required that more than one mise en pratique shall be developed for determining the value of each unit. In particular:
- At least three separate experiments be carried out yielding values having a relative standard uncertainty in the determination of the kilogram of no more than 5 × 10⁻⁸, and at least one of these values should be better than 2 × 10⁻⁸.
Both the Kibble balance and the Avogadro project should be included in the experiments, and any differences between these be reconciled.
- When the kelvin is being determined, the relative uncertainty of the Boltzmann constant derived from two fundamentally different methods, such as acoustic gas thermometry and dielectric constant gas thermometry, be better than one part in 10⁶, and these values be corroborated by other measurements.

Evolution of the SI

Changes to the SI

The International Bureau of Weights and Measures (BIPM) has described the SI as "the modern metric system". Changing technology has led to an evolution of the definitions and standards that has followed two principal strands – changes to the SI itself, and clarification of how to use units of measure that are not part of the SI but are still used on a worldwide basis.

Since 1960 the CGPM has made a number of changes to the SI to meet the needs of specific fields, notably chemistry and radiometry. These are mostly additions to the list of named derived units, and include the mole (symbol mol) for an amount of substance, the pascal (symbol Pa) for pressure, the siemens (symbol S) for electrical conductance, the becquerel (symbol Bq) for "activity referred to a radionuclide", the gray (symbol Gy) for ionizing radiation, the sievert (symbol Sv) as the unit of dose equivalent radiation, and the katal (symbol kat) for catalytic activity. The 1960 definition of the standard metre in terms of wavelengths of a specific emission of the krypton-86 atom was replaced with the distance that light travels in a vacuum in exactly 1/299 792 458 second, so that the speed of light is now an exactly specified constant of nature. A few changes to notation conventions were also made to alleviate lexicographic ambiguities.

After the metre was redefined in 1960, the kilogram remained the only SI base unit that relied on a specific physical artefact, the international prototype of the kilogram (IPK), for its definition, and thus the only unit that was still subject to periodic comparisons of national standard kilograms with the IPK. During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to the redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermines the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales. The existing proposal is:
- In addition to the speed of light, four constants of nature – the Planck constant, the elementary charge, the Boltzmann constant and the Avogadro number – be defined to have exact values
- The International Prototype Kilogram be retired
- The current definitions of the kilogram, ampere, kelvin and mole be revised
- The wording of base unit definitions should change emphasis from explicit-unit to explicit-constant definitions.

The redefinitions are expected to be adopted at the 26th CGPM in November 2018. The CODATA task group on fundamental constants has announced special submission deadlines for data to compute the values that will be announced at this event.
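As a small numeric aside (ours, not part of the source text): the 1983 redefinition described above makes the metre an exact consequence of the defined speed of light and the second, which is easy to verify with exact rational arithmetic:

```python
from fractions import Fraction

C = 299_792_458               # m/s, exact by definition since 1983
travel_time = Fraction(1, C)  # exact fraction of a second per metre

distance_m = C * travel_time  # recovers exactly 1 metre
print(float(travel_time))     # ~3.336e-09 s
print(distance_m)             # 1
```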
The improvisation of units

The units and unit magnitudes of the metric system which became the SI were improvised piecemeal from everyday physical quantities starting in the mid-18th century. Only later were they moulded into an orthogonal coherent decimal system of measurement.

The degree centigrade as a unit of temperature resulted from the scale devised by Swedish astronomer Anders Celsius in 1742. His scale counter-intuitively designated 100 as the freezing point of water and 0 as the boiling point. Independently, in 1743, the French physicist Jean-Pierre Christin described a scale with 0 as the freezing point of water and 100 the boiling point. The scale became known as the centi-grade, or 100 gradations of temperature, scale.

The metric system was developed from 1791 onwards by a committee of the French Academy of Sciences, commissioned to create a unified and rational system of measures. The group, which included preeminent French men of science, used the same principles for relating length, volume, and mass that had been proposed by the English clergyman John Wilkins in 1668, and the concept of using the Earth's meridian as the basis of the definition of length, originally proposed in 1670 by the French abbot Mouton.

In March 1791, the Assembly adopted the committee's proposed principles for the new decimal system of measure, including the metre, defined to be 1/10,000,000 of the length of the quadrant of the Earth's meridian passing through Paris, and authorised a survey to precisely establish the length of the meridian. In July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth and kilo for a thousand. Later, during the process of adoption of the metric system, the Latin gramme and kilogramme replaced the former provincial terms gravet (1/1000 grave) and grave. In June 1799, based on the results of the meridian survey, the standard mètre des Archives and kilogramme des Archives were deposited in the French National Archives. Subsequently, that year, the metric system was adopted by law in France.

The French system was short-lived due to its unpopularity. Napoleon ridiculed it, and in 1812 introduced a replacement system, the mesures usuelles or "customary measures", which restored many of the old units but redefined them in terms of the metric system. During the first half of the 19th century there was little consistency in the choice of preferred multiples of the base units: typically the myriametre (10 000 metres) was in widespread use in both France and parts of Germany, while the kilogram (1000 grams) rather than the myriagram was used for mass.

In 1832, the German mathematician Carl Friedrich Gauss, assisted by Wilhelm Weber, implicitly defined the second as a base unit when he quoted the Earth's magnetic field in terms of millimetres, grams, and seconds. Prior to this, the strength of the Earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity.
The resultant calculations enabled him to assign dimensions based on mass, length and time to the magnetic field.[Note 9]

The candlepower as a unit of illuminance was originally defined by an 1860 English law as the light produced by a pure spermaceti candle weighing 1⁄6 pound (76 grams) and burning at a specified rate. Spermaceti, a waxy substance found in the heads of sperm whales, was once used to make high-quality candles. At this time the French standard of light was based upon the illumination from a Carcel oil lamp. The unit was defined as that illumination emanating from a lamp burning pure rapeseed oil at a defined rate. It was accepted that ten standard candles were about equal to one Carcel lamp.

Some French terms used in the SI literature and their English equivalents:
- étalons: [technical] standard
- noms spéciaux: special names [given to some derived units]
- mise en pratique: practical realisation [Note 10]

A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called the Treaty of the Metre, by 17 nations.[Note 11] Initially the convention only covered standards for the metre and the kilogram. In 1921, the Metre Convention was extended to include all physical units, including the ampere and others, thereby enabling the CGPM to address inconsistencies in the way that the metric system had been used. A set of 30 prototypes of the metre and 40 prototypes of the kilogram,[Note 12] in each case made of a 90% platinum–10% iridium alloy, were manufactured by a British metallurgy specialty firm and accepted by the CGPM in 1889. One of each was selected at random to become the international prototype metre and international prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the remaining prototypes to serve as the national prototype for that country.

The CGS and MKS systems

In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin) and others working under the auspices of the British Association for the Advancement of Science built on Gauss's work and formalised the concept of a coherent system of units with base units and derived units, christened the centimetre–gram–second system of units in 1874. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity. In 1879, the CIPM published recommendations for writing the symbols for length, area, volume and mass, but it was outside its domain to publish recommendations for other quantities. Beginning in about 1900, physicists who had been using the symbol "μ" (mu) for "micrometre" or "micron", "λ" (lambda) for "microlitre", and "γ" (gamma) for "microgram" started to use the symbols "μm", "μL" and "μg".

At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system; a CGS-based system for electromechanical units (EMU); and an International system, based on units defined by the Metre Convention, for electrical distribution systems.
Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties—the dimensions depended on whether one used the ESU or EMU system. This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three. The fourth unit could be chosen to be electric current, voltage, or electrical resistance. Electric current, with named unit "ampere", was chosen as the base unit, and the other electrical quantities were derived from it according to the laws of physics. This became the foundation of the MKS system of units.

In the late 19th and early 20th centuries, a number of non-coherent units of measure based on the gram/kilogram, centimetre/metre and second, such as the Pferdestärke (metric horsepower) for power,[Note 13] the darcy for permeability and "millimetres of mercury" for barometric and blood pressure, were developed or propagated, some of which incorporated standard gravity in their definitions.[Note 14]

At the end of the Second World War, a number of different systems of measurement were in use throughout the world. Some of these systems were metric system variations; others were based on customary systems of measure, like the U.S. customary system and the Imperial system of the UK and British Empire.

The Practical system of units

In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention". This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units, including units of temperature and optical radiation in addition to the MKS system's mass, length, and time units and Giorgi's current unit. Six base units were recommended: the metre, kilogram, second, ampere, degree Kelvin, and candela. The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system, when the basis of the rules as they are now known was laid down. These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.

Birth of the SI

In 1960, the 11th CGPM synthesized the results of the 12-year study into a set of 16 resolutions. The system was named the International System of Units, abbreviated SI from the French name, Le Système International d'Unités.

See also
- Introduction to the metric system
- Outline of the metric system
- List of international common standards
- Metre–tonne–second system of units
- Standards and conventions

Notes
- For historical reasons, the kilogram rather than the gram is treated as the coherent unit, making an exception to this characterization.
- Ohm's law: 1 Ω = 1 V/A from the relationship E = I × R, where E is electromotive force or voltage (unit: volt), I is current (unit: ampere), and R is resistance (unit: ohm).
- This object is the International Prototype Kilogram, or IPK, rather poetically called Le Grand K.
- This grouping reflects the 2014 revision of the 8th Edition of the SI Brochure (2006).
- While the second is readily determined from the Earth's rotation period, the metre, originally defined in terms of the Earth's size and shape, is less amenable; however, the fact that the Earth's circumference is very close to 40,000 km may be a useful mnemonic.
- Except where specifically noted, these rules are common to both the SI Brochure and the NIST brochure.
- For example, the United States' National Institute of Standards and Technology (NIST) has produced a version of the CGPM document (NIST SP 330) which clarifies local interpretation for English-language publications that use American English.
- This term is a translation of the official [French] text of the SI Brochure.
- The strength of the Earth's magnetic field was designated 1 G (gauss) at the surface (1 G = 1 cm−1/2⋅g1/2⋅s−1).
- The 8th edition of the SI Brochure (2008) notes that [at that time of publication] the term "mise en pratique" had not been fully defined.
- Argentina, Austria-Hungary, Belgium, Brazil, Denmark, France, German Empire, Italy, Peru, Portugal, Russia, Spain, Sweden and Norway, Switzerland, Ottoman Empire, United States and Venezuela.
- The text "Des comparaisons périodiques des étalons nationaux avec les prototypes internationaux" (English: the periodic comparisons of national standards with the international prototypes) in article 6.3 of the Metre Convention distinguishes between the words "standard" (OED: "The legal magnitude of a unit of measure or weight") and "prototype" (OED: "an original on which something is modelled").
- Pferd is German for "horse" and Stärke is German for "strength" or "power". The Pferdestärke is the power needed to raise 75 kg against gravity at the rate of one metre per second (1 PS = 0.985 HP).
- This constant is unreliable, because it varies over the surface of the earth.

References
- "Convocation of the General Conference on Weights and Measures (25th meeting)" (PDF). International Bureau of Weights and Measures. p. 32. Retrieved 2014-05-27.
- "The World Factbook Appendix G". CIA. Retrieved 2017-10-26.
- International Bureau of Weights and Measures (2006), The International System of Units (SI) (PDF) (8th ed.), ISBN 92-822-2213-6, archived (PDF) from the original on 2017-08-14.
- Taylor, Barry N.; Thompson, Ambler (2008). The International System of Units (SI) (Special publication 330) (PDF). Gaithersburg, MD: National Institute of Standards and Technology. Retrieved 2017-08-04.
- Quantities, Units and Symbols in Physical Chemistry, IUPAC.
- Page, Chester H.; Vigoureux, Paul, eds. (1975-05-20). The International Bureau of Weights and Measures 1875–1975: NBS Special Publication 420. Washington, D.C.: National Bureau of Standards. pp. 238–244.
- Secula, Erik M. (7 October 2014). "Redefining the Kilogram, The Past". Nist.gov. Retrieved 22 August 2017.
- McKenzie, A. E. E. (1961). Magnetism and Electricity. Cambridge University Press. p. 322.
- "Units & Symbols for Electrical & Electronic Engineers". Institution of Engineering and Technology. 1996. pp. 8–11. Retrieved 2013-08-19.
- Thompson, Ambler; Taylor, Barry N. (2008). Guide for the Use of the International System of Units (SI) (Special publication 811) (PDF). Gaithersburg, MD: National Institute of Standards and Technology.
- Rowlett, Russ (2004-07-14). "Using Abbreviations or Symbols". University of North Carolina. Retrieved 2013-12-11.
- "SI Conventions". National Physical Laboratory. Retrieved 2013-12-11.
"NIST Guide to SI Units – Rules and Style Conventions". National Institute of Standards and Technology. Retrieved 2009-12-29. - "Interpretation of the International System of Units (the Metric System of Measurement) for the United States" (PDF). Federal Register. National Archives and Records Administration. 73 (96): 28432–28433. 2008-05-09. FR Doc number E8-11058. Retrieved 2009-10-28. - Williamson, Amelia A. (March–April 2008). "Period or Comma? Decimal Styles over Time and Place" (PDF). Science Editor. Council of Science Editors. 31 (2): 42. Archived from the original (PDF) on 2013-02-28. Retrieved 2012-05-19. - "ISO 80000-1:2009(en) Quantities and Units—Past 1:General". International Organization for Standardization. 2009. Retrieved 2013-08-22. - "The International Vocabulary of Metrology (VIM)". - "1.16". International vocabulary of metrology – Basic and general concepts and associated terms (VIM) (PDF) (3rd ed.). International Bureau of Weights and Measures (BIPM): Joint Committee for Guides in Metrology. 2012. Retrieved 2015-03-28. - S. V. Gupta, Units of Measurement: Past, Present and Future. International System of Units, p. 16, Springer, 2009. ISBN 3642007384. - "Avogadro Project". National Physical Laboratory. Retrieved 2010-08-19. - "What is a mise en pratique?". International Bureau of Weights and Measures. Retrieved 2012-11-10. - "Recommendations of the Consultative Committee for Mass and Related Quantities to the International Committee for Weights and Measures" (PDF). 12th Meeting of the CCM. Sèvres: Bureau International des Poids et Mesures. 2010-03-26. Retrieved 2012-06-27. - "Recommendations of the Consultative Committee for Amount of Substance – Metrology in Chemistry to the International Committee for Weights and Measures" (PDF). 16th Meeting of the CCQM. Sèvres: Bureau International des Poids et Mesures. 15–16 April 2010. Retrieved 2012-06-27. - "Recommendations of the Consultative Committee for Thermometry to the International Committee for Weights and Measures" (PDF). 25th Meeting of the CCT. Sèvres: Bureau International des Poids et Mesures. 6–7 May 2010. Retrieved 2012-06-27. - p. 221 – McGreevy - "Redefining the kilogram". UK National Physical Laboratory. Retrieved 2014-11-30. - Wood, B. (3–4 November 2014). "Report on the Meeting of the CODATA Task Group on Fundamental Constants" (PDF). BIPM. p. 7. [BIPM director Martin] Milton responded to a question about what would happen if ... the CIPM or the CGPM voted not to move forward with the redefinition of the SI. He responded that he felt that by that time the decision to move forward should be seen as a foregone conclusion. - Mohr, Peter J.; Newell, David B.; Taylor, Barry N. (2015). "CODATA recommended values of the fundamental physical constants: 2014 – Summary". Zenodo. doi:10.5281/zenodo.22827 (inactive 2017-06-08). Because of the good progress made in both experiment and theory since the 31 December 2010 closing date of the 2010 CODATA adjustment, the uncertainties of the 2014 recommended values of h, e, k and NA are already at the level required for the adoption of the revised SI by the 26th CGPM in the fall of 2018. The formal road map to redefinition includes a special CODATA adjustment of the fundamental constants with a closing date for new data of 1 July 2017 in order to determine the exact numerical values of h, e, k and NA that will be used to define the New SI. 
A second CODATA adjustment with a closing date of 1 July 2018 will be carried out so that a complete set of recommended values consistent with the New SI will be available when it is formally adopted by the 26th CGPM. - "Amtliche Maßeinheiten in Europa 1842" [Official units of measure in Europe 1842] (in German). Retrieved 2011-03-26. Text version of Malaisé's book: Malaisé, Ferdinand (1842). Theoretisch-practischer Unterricht im Rechnen [Theoretical and practical instruction in arithmetic] (in German). München. pp. 307–322. Retrieved 2013-01-07. - "The name 'kilogram'". International Bureau of Weights and Measures. Archived from the original on 14 May 2011. Retrieved 25 July 2006. - Alder, Ken (2002). The Measure of all Things—The Seven-Year-Odyssey that Transformed the World. London: Abacus. ISBN 0-349-11507-9. - Quinn, Terry (2012). From artefacts to atoms: the BIPM and the search for ultimate measurement standards. Oxford University Press. p. xxvii. ISBN 978-0-19-530786-3. he [Wilkins] proposed essentially what became ... the French decimal metric system - Wilkins, John (1668). "VII". An Essay towards a Real Character and a Philosophical Language. The Royal Society. pp. 190–194. "Reproduction (33 MB)" (PDF). Retrieved 2011-03-06.; "Transcription" (PDF). Retrieved 2011-03-06. - "Mouton, Gabriel". Complete Dictionary of Scientific Biography. encyclopedia.com. 2008. Retrieved 2012-12-30. - O'Connor, John J.; Robertson, Edmund F. (January 2004), "Gabriel Mouton", MacTutor History of Mathematics archive, University of St Andrews. - Tavernor, Robert (2007). Smoot's Ear: The Measure of Humanity. Yale University Press. ISBN 978-0-300-12492-7. - "Brief history of the SI". International Bureau of Weights and Measures. Retrieved 2012-11-12. - Tunbridge, Paul (1992). Lord Kelvin, His Influence on Electrical Measurements and Units. Peter Peregrinus Ltd. pp. 42–46. ISBN 0-86341-237-8. - Everett, ed. (1874). "First Report of the Committee for the Selection and Nomenclature of Dynamical and Electrical Units". Report on the Forty-third Meeting of the British Association for the Advancement of Science held at Bradford in September 1873. British Association for the Advancement of Science: 222–225. Retrieved 2013-08-28. Special names, if short and suitable, would ... be better than the provisional designation 'C.G.S. unit of ...'. - Page, Chester H.; Vigoureux, Paul, eds. (1975-05-20). The International Bureau of Weights and Measures 1875–1975: NBS Special Publication 420. Washington, D.C.: National Bureau of Standards. p. 12. - Maxwell, J. C. (1873). A treatise on electricity and magnetism. 2. Oxford: Clarendon Press. pp. 242–245. Retrieved 2011-05-12. - Bigourdan, Guillaume (2012). Le Système Métrique Des Poids Et Mesures: Son Établissement Et Sa Propagation Graduelle, Avec L'histoire Des Opérations Qui Ont Servi À Déterminer Le Mètre Et Le Kilogramme (facsimile edition) [The Metric System of Weights and Measures: Its Establishment and its Successive Introduction, with the History of the Operations Used to Determine the Metre and the Kilogram] (in French). Ulan Press. p. 176. ASIN B009JT8UZU. - Smeaton, William A. (2000). "The Foundation of the Metric System in France in the 1790s: The importance of Etienne Lenoir's platinum measuring instruments". Platinum Metals Rev. Ely. 44 (3): 125–134. Retrieved 2013-06-18. - "The intensity of the Earth's magnetic force reduced to absolute measurement" (PDF). - Nelson, Robert A. (1981). "Foundations of the international system of units (SI)" (PDF). 
The Physics Teacher: 597. - "The Metre Convention". Bureau International des Poids et Mesures. Retrieved 2012-10-01. - General Conference on Weights and Measures (Conférence générale des poids et mesures or CGPM) - International Committee for Weights and Measures (Comité international des poids et mesures or CIPM) - International Bureau of Weights and Measures (Bureau international des poids et mesures or BIPM) – an international metrology centre at Sèvres in France that has custody of the International prototype kilogram and provides metrology services for the CGPM and CIPM. - McGreevy, Thomas (1997). Cunningham, Peter, ed. The Basis of Measurement: Volume 2 – Metrication and Current Practice. Pitcon Publishing (Chippenham) Ltd. pp. 222–224. ISBN 0-948251-84-0. - Fenna, Donald (2002). Weights, Measures and Units. Oxford University Press. International unit. ISBN 0-19-860522-6. - "Historical figures: Giovanni Giorgi". International Electrotechnical Commission. 2011. Retrieved 2011-04-05. - "Die gesetzlichen Einheiten in Deutschland" [List of units of measure in Germany] (PDF) (in German). Physikalisch-Technische Bundesanstalt (PTB). p. 6. Retrieved 2012-11-13. - "Porous materials: Permeability" (PDF). Module Descriptor, Material Science, Materials 3. Materials Science and Engineering, Division of Engineering, The University of Edinburgh. 2001. p. 3. Archived from the original (PDF) on 2 June 2013. Retrieved 13 November 2012. - "BIPM - Resolution 6 of the 9th CGPM". Bipm.org. 1948. Retrieved 22 August 2017. - "Resolution 7 of the 9th meeting of the CGPM (1948): Writing and printing of unit symbols and of numbers". International Bureau of Weights and Measures. Retrieved 2012-11-06. - "BIPM - Resolution 12 of the 11th CGPM". Bipm.org. Retrieved 22 August 2017. - International Union of Pure and Applied Chemistry (1993). Quantities, Units and Symbols in Physical Chemistry, 2nd edition, Oxford: Blackwell Science. ISBN 0-632-03583-8. Electronic version. - Unit Systems in Electromagnetism - MW Keller et al. Metrology Triangle Using a Watt Balance, a Calculable Capacitor, and a Single-Electron Tunneling Device - "The Current SI Seen From the Perspective of the Proposed New SI". Barry N. Taylor. Journal of Research of the National Institute of Standards and Technology, Vol. 116, No. 6, Pgs. 797–807, Nov–Dec 2011. - B. N. Taylor, Ambler Thompson, International System of Units (SI), National Institute of Standards and Technology 2008 edition, ISBN 1437915582. - BIPM – About the BIPM (home page) - ISO 80000-1:2009 Quantities and units – Part 1: General - NIST On-line official publications on the SI - Rules for SAE Use of SI (Metric) Units - International System of Units at Curlie (based on DMOZ) - EngNet Metric Conversion Chart – Online Categorised Metric Conversion Calculator - U.S. Metric Association. 2008. A Practical Guide to the International System of Units - LaTeX SIunits package manual gives a historical background to the SI system.
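The Pferdestärke note and the 40,000 km mnemonic above can both be checked from first principles. Here is a minimal Python sketch; the standard-gravity and mechanical-horsepower constants are the usual defined values, not figures taken from the brochure text:

```python
# Check two figures from the notes above: the Pferdestärke definition
# and the "Earth's circumference is close to 40,000 km" mnemonic.

G_N = 9.80665            # standard gravity, m/s^2 (defined value)
HP = 745.6998715822702   # one mechanical horsepower in watts (defined via 550 ft*lbf/s)

# 1 PS = power needed to raise 75 kg against gravity at 1 m/s.
ps_in_watts = 75 * G_N * 1.0
print(f"1 PS = {ps_in_watts} W")            # 735.49875 W
print(f"1 PS = {ps_in_watts / HP:.4f} HP")  # ~0.9863, i.e. the note's 0.985 to three figures

# The metre was originally 1/10,000,000 of the quarter meridian, so the
# full meridian circumference is 40,000 km by construction of the unit.
quarter_meridian_m = 10_000_000
print(f"Implied circumference: {4 * quarter_meridian_m / 1000} km")
```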
<urn:uuid:957323bb-f828-44e4-81ca-1813fdeee9eb>
3.875
11,386
Knowledge Article
Science & Tech.
51.276018
95,567,649
Chlamydomonas reinhardii, A Potential Model System for Chloroplast Gene Manipulation Studies on the structure, function and regulation of genes coding for chloroplast proteins are important for understanding the biosynthesis of the photosynthetic apparatus and the integration of chloroplasts within plant cells. Chlamydomonas reinhardii is particularly well suited for solving these problems because this green unicellular alga can be manipulated with ease both at the biochemical and genetic level. Several genes have been identified on the physical map of the chloroplast genome. They include genes coding for ribosomal RNA, tRNA and several proteins including the large subunit of ribulose 1,5 bisphosphate carboxylase (RubisCo) and several thylakoid polypeptides. The nuclear gene for the small subunit of RubisCo has also been cloned. Because chloroplast DNA recombination occurs in C. reinhardii, a rare property among plants, chloroplast genes can be analyzed by genetic means. Numerous chloroplast photosynthetic mutations have been isolated and several of them have been shown to be part of a single linkage group (Gillham, 1978). We have reached the stage where the genetic and biochemical approaches can be coupled efficiently in C. reinhardii; in particular, it has been possible to correlate the physical and genetic chloroplast DNA maps at a few sites. A nuclear transformation system has been developed for C. reinhardii by using a cell wall deficient arginine auxotroph which can be complemented with a plasmid containing the yeast ARG4 locus. Transformation vectors have been constructed by inserting random nuclear and chloroplast DNA fragments into a plasmid containing the yeast ARG4 locus and by testing the recombinant plasmids for their ability to promote autonomous replication in yeast (ARS sites) and C. reinhardii (ARC sites). Several plasmids have been recovered that act as shuttle vectors between E. coli, C. reinhardii and yeast. Four ARS sites and four ARC sites have been mapped on the chloroplast genome of C. reinhardii. One plasmid replicates both in C. reinhardii and yeast. Because C. reinhardii cells contain a single large chloroplast, they offer interesting possibilities for attempts at chloroplast transformation by microinjection. Since appropriate selective markers and transformation vectors are available, this approach can now be explored. Keywords: Chloroplast Genome; Chloroplast Gene; Bisphosphate Carboxylase; Chloroplast Transformation; Uniparental Inheritance - Blanc, H., and Dujon, B., 1981, in: “Mitochondrial Genes,” P. Slonimski et al., eds., Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, pp. 279–294. - Broach, J. R., Li, Y. Y., Feldman, J., Jayaram, M., Abraham, J., Nasmyth, K. A., and Hicks, J. B., 1982, Localization and sequence analysis of yeast origins of DNA replication, Cold Spring Harbor Symp. Quant. Biol., 47:1165–1173. - Erickson, J. M., Shneider, M., Vallet, J. M., Dron, M., Bennoun, P., and Rochaix, J. D., 1983, Chloroplast gene function: combined genetic and molecular approach in Chlamydomonas reinhardii, in: “Proceedings of 6th Intl. Congress on Photosynthesis,” Sybesma, C., ed., M. Nijhoff and W. Junk Publ., in press. - Gillham, N. W., 1978, “Organelle heredity,” Raven Press, New York. - Goldschmidt-Clermont, M., 1983, Regulation of ribulose bisphosphate carboxylase gene expression in Chlamydomonas reinhardii, in: “Proceedings of 6th Intl. 
Congress on Photosynthesis,” Sybesma, C., ed., M. Nijhoff and W. Junk Publ., in press. - Hirschberg, J., and McIntosh, L., 1983, Molecular basis of herbicide resistance, Science, in press. - Rochaix, J. D., 1978, Restriction endonuclease map of the chloroplast DNA of Chlamydomonas reinhardii, J. Mol. Biol., 126:567–617. - Rochaix, J. D., Dron, M., Schneider, M., Vallet, J. M., and Erickson, J. M., 1983, Chlamydomonas reinhardii, a model system for studying the biosynthesis of the photosynthetic apparatus, in: “15th Miami Winter Symposium, Advances in Gene Technology: Molecular Genetics of Plants and Animals,” Ahmad, F., Downey, K., Schultz, S. and Voellmy, R. W., eds., Academic Press, in press. - Stinchcomb, D. T., Mann, C., Selker, E., and Davis, R. W., 1981, DNA sequences that allow the replication and segregation of yeast chromosomes, ICN-UCLA Symp., Mol. Cell. Biol., 22:473–488. - Vallet, J. M., Rahire, M., and Rochaix, J. D., 1984, Localization and sequence analysis of chloroplast DNA sequences of Chlamydomonas reinhardii that promote autonomous replication in yeast, EMBO J., in press. - Wu, M., and Waddell, J. M., 1983, The replicative origins of chloroplast DNA in Chlamydomonas reinhardii, J. Cell Biochem. Suppl., 7B:286.
<urn:uuid:f07b8b74-1f7a-4fef-9736-2ec6f20ca016>
2.96875
1,303
Academic Writing
Science & Tech.
51.222798
95,567,651
The Breathtaking Technologies We'll Use To Save Our Planet Lots of people are focused on how we're going to escape our planet. They want to colonize another world because it's only a matter of time before earth becomes uninhabitable. Maybe they're getting too far ahead of themselves and should be focusing their efforts on something else. Nobody is saying the planet isn't currently in a bad state, but think about how fast technology is evolving. If we work hard enough, new technologies will have the power to save the world. Here are just a few things we have already come up with that you'll want to know about. Using Organic Flow Batteries We're all excited about the advancements in renewable energy, which will one day replace the fossil fuels we use at the moment. One of the biggest issues we face is how all the energy is going to be stored. Harvard researchers think organic flow batteries will be the ultimate solution. You're unlikely to know all of the technology stored inside a battery anyway, so when you picture flow batteries, envisage external tanks with some of the components inside them. Combining them with organic molecules in place of the metals usually required makes the technology cost-effective. Introduction Of The Smart Grid Everyone has heard about the cool smart gadgets being released, but we'll need to start using a smart grid too. There is no way the grid we use at the moment will be able to handle renewable energy as it's collected right now. It's not designed to handle unplanned breakages or sudden shifts in energy demand throughout the day. Smart grids will take care of the problem and allow everything to run smoothly. Energy will be able to flow from a house back into the grid when it's needed. People will even be able to see when energy demand is high, which will tempt them into waiting to turn on their appliances. We'll Start Recycling Carbon Look at the way we recycle things like ink cartridges and materials at the moment. It's been wonderful for the planet over the years, but it's simply not enough. We need to improve our recycling abilities, and we'll start doing it with carbon to prevent it from destroying our atmosphere. You'll probably be wondering how on earth it's going to be done, but it's rather simple when you think about it. We'll build CO2 capture plants to collect it straight out of the sky. Instead of burying it deep underground, it will be used to make certain products until the whole cycle repeats itself again. The Growth Of Vertical Farming We've seen or heard about vertical farming in the past, but I don't think anyone could have imagined what it would look like on an industrial scale. It will be possible to grow a tremendous amount of crops in the middle of a large city, and they'll be ready to eat much more quickly too. It will spare the atmosphere a lot of exhaust emissions, because trucks won't need to drive our food across the country. Less food will be wasted because we'll be able to control the growing conditions carefully. It will even need around 95 percent less water, which is going to be a game changer. GPS Tracking Of Animals It's not only humans inhabiting the earth, so we need to look out for our animal friends. One of the ways we'll be doing this is by using the power of GPS. There are lots of ways the technology can be used, but the biggest one is probably protecting endangered species before they go the way of the woolly mammoth. Leatherback turtles are in the endangered species category and their numbers are dwindling. 
Now we can use GPS to track where they go when they're swimming around in the ocean. If we have a good idea of where they'll be, it will reduce the chances of them being killed by commercial fishing. There Is So Much More To Come First of all, we've only talked about a few technologies. There are plenty more we could have covered, but it would have taken all day. Everything we've come up with so far will go a long way towards saving the planet. The interesting thing will be what happens in the next decade or two. It's impossible to guess what we're going to come up with next. Maybe we should think about staying here a bit longer before leaving earth.
<urn:uuid:453b7f75-958c-4d9c-b67b-0135424dd696>
3.21875
876
Listicle
Science & Tech.
59.501459
95,567,662
In General > s.a. chaos; critical * Complex system: An aggregate of individual "agents" each of which acts independently, which displays collective behavior that does not follow trivially from the behaviors of the individual parts, for example reacting as a whole to changes in the system's environment; Examples include condensed-matter systems, ecosystems, stock markets and economies, biological evolution, and the whole of human society; Progress has been made in the quantitative understanding of complex systems since the 1980s, using a combination of basic theory and computer simulation. * Goal: Complexity theory studies questions like, How do simple laws give rise to intricate structure? (Sensitive dependence on initial conditions); Why are intricate structures so ubiquitous? Why do they often embody their own kind of simple law? * Features: Many complex phenomena have a power-law behavior, such that the probability for an event goes like some power of the event's "size," arising from interactions of components at many scales (a short numerical illustration follows this listing); Complexity is related to information, but is not adequately quantified by the information needed to describe something, or shortest algorithm size; Logical depth (Bennett) is impractical, breadth (Lloyd & Pagels) seems better. * Approaches: Self-organized criticality (SOC); Highly optimized tolerance (HOT). * Remark: Have to distinguish complexity from randomness, they are not the same! Computational Complexity Classes > s.a. computation and quantum computing. * In general: The study of how the computation time scales with the size of the input; There are many different classes, including BQP and NP. * NP problems: There is no known algorithm that will solve them in polynomial time; Non-deterministic methods (evolutionary search and quantum computers) have been proposed; > s.a. mathematics [Millennium Problems]. @ General references: Rodríguez-Laguna & Santalla JSM(14)-a1010 [physical consequences of P ≠ NP]; Manin JPCS(14)-a1302 [and physics]. @ Algorithmic complexity: Zurek Nat(89)sep [and thermodynamic cost of computations]; Batterman & White FP(96); Dimitrijevs & Yakaryılmaz a1608 [uncountably many classical and quantum complexity classes]; > s.a. quantum cosmology; realism. @ NP completeness: Garey & Johnson 79; Greenwood qp/00-conf [methods]; Nussinov & Nussinov cm/02 [approach to graph problems]; Mao PRA(05)qp/04 [polynomial time in quantum computers]; Aaronson qp/05 [and physical reality]; Zak & Fijany PLA(05) [using quantum resonances]; Rojas qp/06 [interpolating problem with BQP]; > s.a. statistical mechanics. @ Examples: Gottesman & Irani a0905 [NEXP-complete classical tiling problem and QMA_EXP-complete quantum Hamiltonian problem]. Other Applications and Examples > s.a. classical systems; fractal; partial * Dynamical systems: In classical mechanics complexity is characterized by the rate of local exponential instability, which effaces the memory of initial conditions and leads to practical irreversibility; Quantum mechanics instead appears to exhibit strong memory of the initial state, and the notion of complexity of a system needs to be based on other notions, such as its stability and reversibility properties. 
* Remark: Complexity is giving insight into the effectiveness of math in describing nature, and stimulating a shift in paradigms in physics, driven by the computer simulations now possible (e.g., cellular automata); It has been proposed as a foundation for the formulation of various concepts in physics, such as the Lagrangian formulation of particle mechanics (Soklakov). * Hamiltonian complexity: A discipline that studies the question, How hard is it to simulate a physical system? * Types of phenomena: - Simple phenomena; Chaotic systems with few degrees of freedom, or possibly equilibrium states of many degrees of freedom. - Complex phenomena; Characterized by contingency and variability. - Critical phenomena; (a) Boundaries between phases in thermodynamics (fragile, need fine tuning); (b) Self-organized systems (robust, exhibit punctuated equilibrium, small changes also lead to large variations). * Applications: In physics, hydrodynamic flow, astrophysics, solid state, lasers; In other fields, marketing, investment, industrial design. @ Physics: Ford in(93); Batterman & White FP(96); Kreinovich & Longpre IJTP(98) [relevance]; Yurtsever qp/02 [simple rules?]; Allegrini et al CSF(04) [and randomness]; Nicolis CSF(05) [mechanisms for reducing complexity]; Kwapień & Drożdż PRP(11). @ Quantum physics: Kirilyuk AFLB(96)qp/98; Cleve qp/99-in [review]; Segre IJTP(04) [classical and quantum objects]; Mora et al qp/06 [quantum Kolmogorov complexity]; Benenti & Casati PRE(09)-a0808; Balachandran et al PRE(10)-a1009 [many-body dynamics, phase-space approach]; Anders & Wiesner Chaos(11)-a1110 [and quantum correlations]; Brown & Susskind a1701 [thermodynamics of quantum complexity]; Chapman et al a1707 [continuous quantum many-body systems, field theory]. @ Cosmology: comments to Vilenkin PRD(88); Rosu NCB(95)ap/94 [sub-horizon scales]; Ćirković FP(02)qp/01; Sylos Labini & Gabrielli PhyA(04)ap-in [structures]; Vazza MNRAS-a1611 [information content of cosmic structures]. @ Hierarchical structures: Drossel PRL(99); Olemskoi et al PhyA(09). @ Networks: Meyer-Ortmanns PhyA(04)cm/03; Felice et al JMP(14); Franzosi et al PRE(16)-a1410 [entropic measure]. @ Other examples: Parisi cm/94 [biology]; Boffetta et al PRP(02) [and predictability]; Sant'Anna mp/04 [and entropy]; Prosen JPA(07)-a0704 [chains of quantum particles]; > s.a. black holes; causality; posets; quantum gravity; spin models [spin glasses]. References > s.a. emergence; Scaling. @ I, books: Bonner 88; Levy 92 [artificial life]; Lewin 92; Waldrop 92; Gell-Mann 94; Coveney & Highfield 95; Holland 95; Auyang 98. @ I, articles: Zabusky PT(87)oct; Lloyd ThSc(90)sep; Maddox Nat(90)apr [no single measure], Nat(90)oct [order within chaos]; Horgan SA(95)jun; Chaisson SWJ(14)-a1406 [global history for a wide spectrum of systems]. @ II, books: Nicolis & Prigogine 89; Mainzer 97; Mitchell 09 [r PT(10)feb]. @ General: Bennett FP(86); Grassberger IJTP(86); Lloyd & Pagels AP(88); Lloyd Compl(99); Osborne RPP(12)-a1106 [Hamiltonian complexity, rev]; Newman AJP(11)aug; Martín-Delgado ARBOR(13)-a1110-in [Turing's contribution]; Manin a1301-talk [Kolmogorov complexity in scientific discourse]. @ Self-organized criticality: Bak & Paczuski PW(93)dec. @ Highly optimized tolerance: Carlson & Doyle PRE(99), PRL(00); Doyle & Carlson PRL(00). 
@ Measures of complexity: Feldman & Crutchfield PLA(98), Martin et al PLA(03) [statistical]; Stoop et al JSP(04) [obstruction to predictability]; Martin et al PhyA(06) [bounds]; López-Ruiz et al JMP(09)-a0905 [quantum]; Cafaro et al AMC(10)-a1011 [information geometric construction of entropic indicators]; Manzano PhyA(12)-a1105 [Fisher-Shannon statistical measure, continuous variables]; Campbell & Castilho Piqueira a1110 [for quantum states]; Tan et al EPJP(14)-a1404 [new quantum measure of complexity]; Aaronson et al a1405 [and mixing of two liquids, rise and fall of complexity]. @ Techniques: Marwan et al PRP(07) [recurrence plots]; Friedrich et al PRP(11) [stochastic methods]; Felice et al Chaos(18)-a1804 [information geometry]. @ And information: Parlett BAMS(92); Traub & Werschulz 99; Gell-Mann & Lloyd Compl(96) [information measures]; McAllister PhSc(03)apr [Gell-Mann's effective complexity]; > s.a. Brudno's Theorem; quantum information. @ Related topics: Maldonado a1611 [and quantum theory, and time].
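As a toy illustration of the power-law feature noted above, where the probability of an event scales as a power of its size, the following sketch draws samples from a Pareto distribution and recovers the exponent with a log-log fit. The exponent 2.0 and the sample size are arbitrary choices for the demonstration, not values from any of the works cited:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0                                 # assumed tail exponent for the demo
samples = rng.pareto(alpha, 100_000) + 1.0  # classical Pareto on [1, inf): pdf ~ s^-(alpha+1)

# Histogram on logarithmic bins, then fit a straight line in log-log space.
bins = np.logspace(0, 3, 30)
counts, edges = np.histogram(samples, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
mask = counts > 0
slope, intercept = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print(f"fitted slope ~ {slope:.2f} (expected ~ {-(alpha + 1):.1f})")
```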
<urn:uuid:e06e46de-90f0-4368-bc8b-e07f1bf12987>
2.828125
2,095
Content Listing
Science & Tech.
36.50135
95,567,680
How safe are 'eye-safe' lasers? "Very low-energy radiation also damages DNA: how safe are "eye-safe" lasers?" Damage to DNA by high-energy radiation constitutes the most lethal damage occurring at the cellular level. Surprisingly, very low-energy interactions - with OH radicals, for instance - can also induce DNA damage, including double-strand breaks. It is known that single-strand breaks in the DNA backbone are amenable to repair, but most double-strand breaks are irreparable. The propensity with which slow OH radicals damage DNA depends on their rotational energy: rotationally "hot" OH is more proficient in causing double-strand breaks. These novel findings are from experiments conducted on DNA in a physiological environment. Intense femtosecond laser pulses are propagated through water (in which DNA plasmids are suspended), creating plasma channels within the water and resulting in generation, in situ, of electrons and OH radicals. It is shown that use of long-wavelength laser light (1350 nm and 2200 nm) ensures that only OH-induced damage to DNA is accessed. It is noteworthy that industry presently characterizes as "eye-safe" lasers that emit at wavelengths longer than 1300 nm. But it is such wavelengths that are proficient at inducing damage to DNA: how safe is "eye-safe" when DNA in the eye can be readily damaged? Deepak Mathur | EurekAlert! 
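A quick back-of-the-envelope calculation puts these wavelengths in perspective: single photons at 1350 nm and 2200 nm carry far less energy than a typical molecular bond, which is consistent with the report's point that the damage arises from intense-pulse effects (plasma channels and OH radicals) rather than direct single-photon absorption. A minimal sketch using standard physical constants:

```python
# Photon energy E = h*c/lambda for the two "eye-safe" wavelengths in the study.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

for wavelength_nm in (1350, 2200):
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    print(f"{wavelength_nm} nm -> {energy_ev:.2f} eV per photon")
# 1350 nm -> ~0.92 eV; 2200 nm -> ~0.56 eV: well below the several eV
# needed to break typical molecular bonds with a single photon.
```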
<urn:uuid:4c000813-910a-45a6-9f58-c13fca817a5f>
3.03125
870
Content Listing
Science & Tech.
35.786756
95,567,692
Year-to-Year Fluctuation of the Spring Phytoplankton Bloom in South San Francisco Bay: An Example of Ecological Variability at the Land-Sea Interface Estuaries are transitional ecosystems at the interface of the terrestrial and marine realms. Their unique physiographic position gives rise to large spatial variability, and to dynamic temporal variability resulting, in part, from a variety of forces and fluxes at the oceanic and terrestrial boundaries. River flow, in particular, is an important mechanism for delivering watershed-derived materials such as fresh water, sediments, and nutrients; each of these quantities in turn directly influences the physical structure and biological communities of estuaries. With this setting in mind, we consider here the general proposition that estuarine variability at the yearly time scale can be caused by annual fluctuations in river flow. We use a “long-term” (15-year) time series of phytoplankton biomass variability in South San Francisco Bay (SSFB), a lagoon-type estuary in which phytoplankton primary production is the largest source of organic carbon (Jassby et al. 1993). Keywords: River Flow; Phytoplankton Biomass; Spring Bloom; Marine Ecology Progress Series; Phytoplankton Dynamics - Afifi, A. A., and V. Clark. 1990. Computer-Aided Multivariate Analysis. Van Nostrand Reinhold, New York. - Chatfield, C., and A. J. Collins. 1980. Introduction to Multivariate Analysis. Chapman & Hall, New York. - Cloern, J. E. 1991a. “Annual Variations in River Flow and Primary Production in the South San Francisco Bay Estuary.” In M. Elliott and D. Ducrotoy (eds.), Estuaries and Coasts: Spatial and Temporal Intercomparisons. Olsen and Olsen, Denmark. - Cloern, J. E., T. M. Powell, and L. M. Huzzey. 1989. Spatial and temporal variability in South San Francisco Bay. II. Temporal changes in salinity, suspended sediments, and phytoplankton biomass and productivity over tidal time scales. Estuarine, Coastal and Shelf Science 9:599–619. - Jolliffe, I. T. 1986. Principal Component Analysis. Springer-Verlag, New York. - Peterson, D. H., D. R. Cayan, J. F. Festa, F. H. Nichols, R. A. Walters, J. V. Slack, S. W. Hager, and L. E. Schemel. 1989. “Climate Variability in an Estuary: Effects of Riverflow on San Francisco Bay.” In D. H. Peterson (ed.), Aspects of Climate Variability in the Pacific and the Western Americas. American Geophysical Union, Geophysical Monograph 55, Washington, D.C. - Powell, T. M., J. E. Cloern, and R. A. Walters. 1986. “Phytoplankton Spatial Distribution in South San Francisco Bay—Mesoscale and Small-scale Variability.” In D. A. Wolfe (ed.), Estuarine Variability. Academic Press, New York. - Preisendorfer, R. W. 1988. Principal component analysis in meteorology and oceanography. Developments in Atmospheric Science 17. Elsevier, New York. - Wienke, S. M., B. E. Cole, J. E. Cloern, and A. E. Alpine. 1992. Plankton studies in San Francisco Bay. XIII. Chlorophyll distributions and hydrographic properties in San Francisco Bay, 1991. U.S. Geological Survey Open-File Report 92–158.
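The proposition examined in this chapter, that year-to-year bloom variability tracks annual river flow, reduces statistically to relating two short annual series. The sketch below shows the shape of such an analysis; the flow and chlorophyll numbers are invented for illustration and are not the SSFB measurements:

```python
import numpy as np

# Hypothetical 15-year annual series (illustrative values only, not SSFB data).
river_flow = np.array([12, 35, 8, 22, 41, 15, 9, 30, 27, 6, 18, 38, 11, 25, 33], float)
spring_chl = np.array([4.1, 8.0, 3.2, 6.0, 9.1, 4.8, 3.5, 7.2, 6.8, 2.9, 5.5, 8.6, 3.9, 6.3, 7.9])

# Correlation and a simple linear fit of bloom magnitude against annual flow.
r = np.corrcoef(river_flow, spring_chl)[0, 1]
slope, intercept = np.polyfit(river_flow, spring_chl, 1)
print(f"Pearson r = {r:.2f}; bloom ~ {slope:.2f} * flow + {intercept:.2f}")
```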
<urn:uuid:11bbe62f-986e-4a50-a667-f5b7ed9666aa>
2.640625
841
Academic Writing
Science & Tech.
50.600206
95,567,705
Thermal analysis, as the name suggests, is the analysis of how the properties of materials change when they are subjected to different temperatures. Different techniques are used, and the response of the material is then plotted as a function of temperature and time. This is an indispensable branch of materials science. It is crucial to determine the way a substance reacts to a change in temperature. Who does this impact? Food companies use it to know the optimum temperature to store their products without spoiling. Oceanographers want to know if the temperature at the bottom of the sea will affect the equipment they use. Software companies need to know the exact range of temperatures that will keep both their machines and their employees happy and working. Engineers will also want to check whether the bridges they build will survive the expansion and contraction due to the temperatures they are subjected to. Due to such a wide area of application, instruments used in the various methods of thermal analysis are carefully calibrated and made highly sensitive to aid researchers in the best possible way. Does vibration affect thermal analysis instruments? As an example of how vibrations can affect your thermal analysis instruments, let us consider a test in one method of thermal analysis called thermogravimetric analysis (TGA). The test we will consider is the 'blank test'. This is a test of the apparatus without the sample material in it. The blank test gives us an idea of the health of the instrument we are about to use and is thus a necessity in thermal analysis. When we run such a blank test, we plot a curve of the changes in physical and chemical properties against an increasing temperature. This curve is also called the thermogravimetric analysis curve (TGA curve) and is affected by noise present in the equipment. Various factors may contribute to the noise, such as the touching together of the minute components within the apparatus, which can occur due to external vibrations. Hence, an inaccurate TGA curve is obtained. How we can help you: Are external vibration sources in your lab rendering your measurements inaccurate, and thus quite useless, today? We at www.antivibrationtable.com can provide you with tables for your equipment that reduce the effect of these vibrations that spoil the perfection of your work. We offer you a customized solution to your problems. Make a small move today and get in touch with us, as you continue your pursuit of excellence in your lab. To know more about anti-vibration techniques suitable for your lab, contact us now through our web page or give us a ring at +91 9393728474.
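To see why vibration matters for a blank test, the following sketch simulates an ideal flat blank-test baseline alongside the same baseline corrupted by a vibration-like oscillation. All amplitudes and frequencies are invented for illustration; they are not measured instrument values:

```python
import numpy as np

# Simulated blank-test baseline: apparent mass (micrograms) vs temperature (deg C).
temperature = np.linspace(25, 900, 500)
ideal_baseline = np.zeros_like(temperature)  # a perfect instrument shows no apparent mass change

# Vibration-induced artifact: a slow oscillation plus random jitter (illustrative values).
rng = np.random.default_rng(1)
vibration = 2.0 * np.sin(temperature / 15.0) + rng.normal(0, 0.5, temperature.size)
noisy_baseline = ideal_baseline + vibration

drift = noisy_baseline.max() - noisy_baseline.min()
print(f"Peak-to-peak baseline artifact: {drift:.1f} micrograms")
# Any such drift in the blank run would be misread as mass change in a real sample run.
```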
<urn:uuid:804051e5-745e-4ef9-b5d1-7d387261cef0>
2.890625
529
Product Page
Science & Tech.
41.123692
95,567,707
A Yale-led team has produced one of the highest-resolution maps of dark matter ever created, offering a detailed case for the existence of cold dark matter -- sluggish particles that comprise the bulk of matter in the universe. The dark matter map is derived from Hubble Space Telescope Frontier Fields data of a trio of galaxy clusters that act as cosmic magnifying glasses to peer into older, more distant parts of the universe, a phenomenon known as gravitational lensing. Yale astrophysicist Priyamvada Natarajan led an international team of researchers that analyzed the Hubble images. "With the data of these three lensing clusters we have successfully mapped the granularity of dark matter within the clusters in exquisite detail," Natarajan said. "We have mapped all of the clumps of dark matter that the data permit us to detect, and have produced the most detailed topological map of the dark matter landscape to date." Scientists believe dark matter -- theorized, unseen particles that neither reflect nor absorb light, but are able to exert gravity -- may comprise 80% of the matter in the universe. Dark matter may explain the very nature of how galaxies form and how the universe is structured. Experiments at Yale and elsewhere are attempting to identify the dark matter particle; the leading candidates include axions and neutralinos. "While we now have a precise cosmic inventory for the amount of dark matter and how it is distributed in the universe, the particle itself remains elusive," Natarajan said. Dark matter particles are thought to provide the unseen mass that is responsible for gravitational lensing, by bending light from distant galaxies. This light bending produces systematic distortions in the shapes of galaxies viewed through the lens. Natarajan's group decoded the distortions to create the new dark matter map. Significantly, the map closely matches computer simulations of dark matter theoretically predicted by the cold dark matter model; cold dark matter moves slowly compared to the speed of light, while hot dark matter moves faster. This agreement with the standard model is notable given that all of the evidence for dark matter thus far is indirect, said the researchers. The high-resolution simulations used in the study, known as the Illustris suite, mimic structure formation in the universe in the context of current accepted theory. A study detailing the findings appeared Feb. 28 in the journal Monthly Notices of the Royal Astronomical Society. Other Yale researchers involved in the study were graduate students Urmila Chadayammuri and Fangzhou Jiang, faculty member Frank van den Bosch, and former postdoctoral fellow Hakim Atek. Additional co-authors came from institutions worldwide: Mathilde Jauzac from the United Kingdom and South Africa; Johan Richard, Eric Jullo, and Marceau Limousin from France; Jean-Paul Kneib from Switzerland; Massimo Meneghetti from Italy; and Illustris simulators Annalisa Pillepich, Ana Coppa, Lars Hernquist, and Mark Vogelsberger from the United States. The research was supported in part by grants from the National Science Foundation, the Science and Technology Facilities Council, and NASA via the Space Telescope Institute HST Frontier Fields initiative. Jim Shelton | idw - Informationsdienst Wissenschaft 
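For a sense of scale, the cluster lensing behind these maps can be characterized by the Einstein radius, theta_E = sqrt((4GM/c^2) * D_ls/(D_l * D_s)). The sketch below evaluates it for a round-number cluster mass and distances chosen purely for illustration; it is not a calculation from the study itself:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m/s
MSUN = 1.989e30  # solar mass, kg
MPC = 3.086e22   # megaparsec, m

# Illustrative numbers: a 10^15 solar-mass cluster lensing a background galaxy.
M = 1e15 * MSUN
D_l, D_s = 1000 * MPC, 2000 * MPC  # assumed angular-diameter distances
D_ls = D_s - D_l                   # crude flat-geometry shortcut, acceptable for a sketch

theta_e = math.sqrt(4 * G * M / C**2 * D_ls / (D_l * D_s))  # radians
arcsec = math.degrees(theta_e) * 3600
print(f"Einstein radius ~ {arcsec:.0f} arcseconds")  # tens of arcseconds, typical of rich clusters
```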
<urn:uuid:971ef460-8e2a-44e5-825e-c0a7183fa03d>
3.609375
1,242
Content Listing
Science & Tech.
32.963136
95,567,708
West Ashley High School, W.A.T.E.R. Wildcats Preserve Pond Using STEM The W.A.T.E.R. (West Ashley Team of Environmental Restoration) Wildcats at West Ashley High School will be studying the impact of polluted rainwater runoff into wetlands. Students will work to improve the ecology of the school's pond and surrounding wetlands by installing a floating wetland and underwater reef. They will work in teams on different aspects of the project including publicity, research and design, technology, and education. This group approach mimics the Science, Technology, Engineering, and Math (STEM) Initiative developed by The Citadel and Chamber of Commerce. Native plants will be attached to floating island pallets in the middle of the pond in order to decrease pollutants and increase biodiversity. Water quality will be tested throughout the year and a biodiversity survey will be conducted before and after the project. Students will share their findings on the school website and information about flora and fauna will be posted on kiosks placed around the pond. U.S. Fish and Wildlife, Clemson Agricultural Extension Service, South Carolina Aquarium, Department of Natural Resources and master gardeners will be partnering with students on the project. Barnwell County Career Center, Barnwell "Green" Living Project Barnwell County Career Center will establish a "Go Green" Demonstration Facility to educate 8th–12th grade students on how "green" technology can make energy and agriculture more efficient and increase sustainability. Students in drafting, building construction, electricity, and agriculture programs will incorporate solar heat, wind energy, rainwater collection, drought-tolerant landscaping, and composting into the "green" building. Business education students will help demonstrate to the community and to local industries how they can use "green" technology at home. Barnwell County Career Center plans to make the demonstration facility a free teaching tool available to surrounding feeder schools and other educational groups such as 4H and scout troops. Seasonal produce will also be sold at the Future Farmers of America Produce Stand to raise awareness about conservative agricultural practices. Heathwood Hall Episcopal School, Richland Highlander School Environmental Education Programs Students at Heathwood Hall Episcopal School will implement three environmental projects. First, a rain barrel and drip irrigation system will be constructed to collect and store rainwater. Twelve rain barrels will be elevated on a wooden platform to generate hydraulic pressure and irrigate the school's Certified South Carolina Grown vegetable garden (a worked pressure estimate appears after this section). Second, students will expand upon the existing plastic concession cup collection program by purchasing collection containers. Concession cups will be collected and repurposed into small pots for use in the school's greenhouse. Finally, a green roof demonstration table will be constructed to study the benefits of alternative roofing materials. A digital thermometer will be installed to show how efficient each type of roofing material is in reflecting or absorbing solar energy. By participating in these Environmental Education programs, students will learn how to use water more efficiently, how to reduce waste, and how to increase energy savings. Students will develop educational materials to share this knowledge with the community. 
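The physics behind elevating the rain barrels is simple hydrostatics: the available pressure is P = ρgh, roughly 9.8 kPa per metre of elevation. A short check with an assumed platform height (the project description does not specify one):

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G_N = 9.81          # gravitational acceleration, m/s^2

platform_height_m = 1.5  # assumed height of the wooden platform, not given in the project text
pressure_pa = RHO_WATER * G_N * platform_height_m
print(f"{platform_height_m} m of elevation gives ~{pressure_pa / 1000:.1f} kPa "
      f"(~{pressure_pa / 6894.76:.1f} psi) at the drip line")
# ~14.7 kPa (~2.1 psi): low, but workable for gravity-fed drip irrigation.
```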
Pickens Middle School, Pickens Pickens Middle School students will learn about watersheds and the effect of runoff pollution on water quality. They will do water quality testing of a local creek with help from the Clemson Extension Service. They will also educate the community about the importance of clean water by GPSing and labeling nearby storm drains, participating in creek clean ups, partnering with the city of Pickens on the "Turtles on the Town" project, placing rain barrels throughout the Pickens area, and decorating grocery bags with environmental messages. The school has an active recycling program, with recycling bins for paper in every classroom and collection bins for plastic and aluminum in the halls. Garbage collected from the creek clean ups will be separated into the appropriate recycling container. Gaffney High School, Cherokee County Green Teens: Sustainability is a Reality! At Gaffney High School, a student organization called the Green Teens has started a recycling program that accepts paper, plastics, batteries, aluminum, printer cartridges, and cell phones. The Cherokee County recycling center picks up the collected items on a weekly basis. Now the students are expanding the program to include the collection of cafeteria waste and landscape trimmings which will be composted and used to fertilize a student maintained herb garden. Rainwater will also be collected for irrigation. Students will educate the community about sustainable gardening practices by placing educational signage around the garden, creating a brochure that will be posted on the school website, and by selling the harvested herbs at the local farmer's market. Proceeds will be reinvested into the Green Teen account, designated for garden maintenance. Wren Middle School, Anderson Rainy Day Garden Sixth and seventh grade students at Wren Middle School are studying the impact of storm water on the local watershed. Through their "Rainy Day Garden" project, students will learn how rain gardens treat storm water to improve water quality, reduce water quantity, and reduce erosion. Students will lay out a design plan using scaled drawings and square foot templates. They will learn about plants as they maintain the garden and will share the information through student developed outreach materials on Edmodo.com, a social network site for educators and students. Signs will also be developed and placed near the garden to educate school visitors about the environmental benefits of rain gardens. Earth Design, Anderson Regional Joint Water System, Anderson County Storm Water, and the South Carolina Rural Water Association will be working with students on the project. Moore Intermediate School, Florence County Better With Butterflies Moore Intermediate School has been working toward becoming more environmentally friendly. A school environmental club was formed last year and the school has been participating in South Carolina's Green Steps program. Now, students and teachers are working to establish a butterfly garden where students can learn about environmental issues in a natural setting. Students and their families will be invited to prepare the area and do the initial planting. After that, student representatives will be appointed each term, and throughout the summer, to maintain the garden. Butterfly grow gardens will be provided to each science classroom so that students can observe the life cycle of a butterfly. When the garden is completed, students will release the butterflies into the garden. 
The butterfly garden will be a focal point for science classes on ecosystems, insects, and plant and animal adaptations. However, the garden will also be used by the whole school for other lessons such as art and creative writing. Students will share their garden experiences through videos, podcasts and surveys on the school website. Roebuck Elementary School, Roebuck Eagles Are Waste Free Roebuck Elementary School will educate students and the community about the impact of waste-free lunches on the amount of trash that goes to the landfill. In order to receive the lunch kit, students and parents must agree to pack food in the reusable food containers, wash and reuse the silverware and cloth napkins, store drinks in the reusable Bisphenol A (BPA) free bottles, control food portions so that there will be no leftovers, and pack it all in the reusable lunch bag. Students will weigh the amount of waste before and during the project, and graph the data to show how much waste is being kept out of landfills. Teachers estimate that in the first year of the project, 10,050 pounds of trash will be kept out of the landfill.
<urn:uuid:3181feb3-2bea-4333-8c45-0b4fa204dc29>
2.75
1,519
Content Listing
Science & Tech.
35.031221
95,567,713
Is That A Yeti? Science Says It's Probably A Bear Steph Spencer October 21 Camping, Sidelines © Katy Kristin Next time you're on a camping trip or hiking expedition and think you've encountered the elusive Yeti, take comfort in the knowledge that it's probably a bear. You'll feel so much safer knowing that it's not some mysterious hominid unknown to science, but in all likelihood a hybrid between a brown bear and a polar bear. Though the legendary creatures have long been described as apes, a new study by an Oxford University genetics professor claims that sightings of them may well be attributable to an ancient species of polar bear. Bryan Sykes studied the DNA extracted from a hair alleged to have come from a mummified Yeti in India decades ago, as well as a hair found in Bhutan much more recently. Both samples had DNA sequences that corresponded to the DNA found in the 40,000-year-old jawbone of a species of ancient polar bear found in Norway's arctic. Sykes' viewpoint is that because the animals are part polar bear, they might stand and walk on two feet more often, as polar bears are known to do, which would explain eyewitness accounts of yetis as bipedal creatures. Find out more about this from CBC News
<urn:uuid:b20e225b-93ee-4b9a-8b3d-e86d40b87c88>
2.5625
370
Truncated
Science & Tech.
57.092902
95,567,731
The Third Law of Thermodynamics is on the minds of John Cumings, assistant professor of materials science and engineering at the University of Maryland's A. James Clark School of Engineering, and his research group as they examine the crystal lattice structure of ice and seek to define exactly what happens when it freezes. "Developing an accurate model of ice would help architects, civil engineers, and environmental engineers understand what happens to structures and systems exposed to freezing conditions," Cumings said. "It could also help us understand and better predict the movement of glaciers." Understanding the freezing process is not as straightforward as it may seem. The team had to develop a type of pseudo-ice, rather than using real ice, in order to do it. Despite being one of the most abundant materials on Earth, water, particularly how it freezes, is not completely understood. Most people learn that as temperatures fall, water molecules move more slowly, and that at temperatures below 32 °F/0 °C, they lock into position, creating a solid—ice. What's going on at a molecular level, says Cumings, is far more complicated and problematic. For one thing, it seems to be in conflict with a fundamental law of physics. The Third Law of Thermodynamics states that as the temperature of a pure substance moves toward absolute zero (the mathematically lowest temperature possible) its entropy, or the disorderly behavior of its molecules, also approaches zero. The molecules should line up in an orderly fashion. Ice seems to be the exception to that rule. While the oxygen atoms in ice freeze into an ordered crystalline structure, its hydrogen atoms do not. "The hydrogen atoms stop moving," Cumings explains, "but they just stop where they happen to lie, in different configurations throughout the crystal with no correlation between them, and no single one lowers the energy enough to take over and reduce the entropy to zero." (A worked estimate of the resulting residual entropy appears below.) So is the Third Law truly a law, or more of a guideline? "It's a big fundamental question," says Cumings. "If there's an exception, it's a rule of thumb." Materials that violated the Third Law as originally written were found in the 1930s, mainly non-crystalline substances such as glasses and polymers. The Third Law was rewritten to say that all pure crystalline materials' entropy moves toward zero as their temperatures move toward absolute zero. Ice is crystalline—but it seems only its oxygen atoms obey the Law. Over extremely long periods of time and at extremely low temperatures, however, ice may fully order itself, but this is something scientists have yet to prove. Creating an accurate model of ice to study has been difficult. The study of ice's crystal lattice requires precise maintenance of temperatures below that of liquid nitrogen (-321 °F/-196 °C), and also a lot of time: no one knows how long it takes for ice to ultimately reach an ordered state—or if it does at all. Experiments have shown that if potassium hydroxide is added to water, it will crystallize in an ordered way—but researchers don't know why, and the addition shouldn't be necessary due to the Third Law's assertion that pure substances should be ordered as they freeze. To overcome these problems, scientists have designed meta-materials, which attempt to mimic the behavior of ice, but are created out of completely different substances. 
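The frozen-in hydrogen disorder described above has a famous quantitative consequence not mentioned in the press release: Pauling's 1935 estimate of the residual entropy of ice, S0 = R ln(3/2) per mole, which agrees with calorimetric measurements. A worked check of that classic figure:

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

# Pauling (1935): the "ice rules" (two protons near each oxygen) leave
# effectively 3/2 allowed proton configurations per molecule, so W = (3/2)^N
# and the residual molar entropy is S0 = R * ln(3/2).
s_pauling = R * math.log(3 / 2)
print(f"Residual entropy ~ {s_pauling:.2f} J/(mol*K)")  # ~3.37
# Calorimetry (Giauque & Stout, 1936) found ~3.4 J/(mol*K): the hydrogen
# disorder really is frozen in, exactly as the article describes.
```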
A previous material, spin ice, was designed from rare earth elements and had a molecular structure resembling ice, with magnetic atoms (spins) representing the position of hydrogen atoms. However, it did not always behave like ice. The Cumings group is refining a successor to spin ice called artificial spin ice, which was originally pioneered by researchers at Penn State. The newer meta-material takes the idea a step further. "The original spin ice research went from one part of the periodic table to a more flexible one," said Cumings. "But artificial spin ice goes off the periodic table altogether." Artificial spin ice is a collection of "pseudo-atoms" made of a nickel-iron alloy. Each pseudo-atom is a large-scale model made out of millions of atoms whose collective behavior mimics that of a single one. As with the original spin ice, magnetic fields are stand-ins for hydrogen atoms. Working at this "large" scale—each pseudo-atom is 100 × 30 nanometers in size (100 nanometers is 1000 times smaller than the width of a human hair)—gives the researchers control over the material and freedom to explore how real atoms behave. "It mimics the behavior of real ice but is completely designable with specific properties," Cumings said. "We can change the strength of the spin or reformulate the alloy to change the magnetic properties, which creates new bulk properties that we either couldn't get from normal materials, or couldn't control at the atomic level." The team is also able to image the behavior of the pseudo hydrogen atoms using an electron microscope—such direct observation is not possible with the original spin ice or real ice. "This is the first time the rules of ice behavior have ever been rigorously confirmed by directly counting pseudo hydrogen atoms," explained group member and postdoctoral research associate Todd Brintlinger. "We can track the position and movement of each pseudo atom in our model, see where defects occur in the lattice, and simulate what happens over much longer periods of time." The ultimate impact of the research may go beyond civil engineering and the environment. "Although we're mimicking the behavior of ice," Cumings explained, "our meta-material is very similar to patterned hard-disk media. Magnetic 'bits' used in hard drives are usually placed at random, but memory density could be increased if they were in a tight, regular pattern instead. "We've found that both hydrogen in ice and the pseudo-hydrogen in our artificial spin ice also behave as bits, can carry information, and interact with each other. Perhaps in the future, engineers will be inspired by this in their hard drive designs. The formal patterning and bit interactions may actually help to stabilize information, ultimately leading to drives with much higher capacities." About the A. James Clark School of Engineering: The Clark School's graduate programs are collectively the fastest rising in the nation. In U.S. News & World Report's annual rating of graduate programs, the school is 17th among public and private programs nationally, 11th among public programs nationally and first among public programs in the mid-Atlantic region. The School offers 13 graduate programs and 12 undergraduate programs, including degree and certification programs tailored for working professionals. The school is home to one of the most vibrant research programs in the country. 
Missy Corley | EurekAlert!
Author: George McGinn

Even after decades of observations and a visit by NASA's Voyager 2 spacecraft, Uranus held on to one critical secret: the composition of its clouds. Now, one of the key components of the planet's clouds has finally been verified. A global research team that includes Glenn Orton of NASA's Jet Propulsion Laboratory in Pasadena, California, has spectroscopically dissected the infrared light from Uranus captured by the 26.25-foot (8-meter) Gemini North telescope on Hawaii's Mauna Kea. They found hydrogen sulfide, the odoriferous gas that most people avoid, in Uranus' cloud tops. The long-sought evidence was published in the April 23rd issue of the journal Nature Astronomy. The detection of hydrogen sulfide high in Uranus' cloud deck (and presumably Neptune's) is a striking difference from the gas giant planets located closer to the Sun, Jupiter and Saturn, where ammonia is observed above the clouds but no hydrogen sulfide. These differences in atmospheric composition shed light on questions about the planets' formation and history.

Many of NASA's most iconic spacecraft towered over the engineers who built them: think Voyagers 1 and 2, Cassini or Galileo, all large machines that could measure up to the size of a school bus. But in the past two decades, mini-satellites called CubeSats have made space accessible to a new generation. These briefcase-sized boxes are more focused in their abilities and have a fraction of the mass, and cost, of some past titans of space. In May, engineers will be watching closely as NASA launches its first pair of CubeSats designed for deep space. The twin spacecraft are called Mars Cube One, or MarCO, and were built at NASA's Jet Propulsion Laboratory in Pasadena, California.

NASA's Webb Observatory Requires More Time for Testing and Evaluation; New Launch Window Under Review (NASA release by Jen Rae Wang / Steve Cole)

By George McGinn, Cosmology and Space Research Institute: I don't believe in Dark Matter or Dark Energy. Even the new Dark Flow.

Published on Oct 25, 2017 – For years, astronomers have been unable to find up to half of the baryonic matter in the universe. We may just have solved this problem. We've known for some time that around 95% of the energy content of the universe is in dark matter and dark energy. This dark sector doesn't interact with light in any way and so is invisible to us. The remaining 5%, the light sector, represents all of the regular matter in the universe. Yet what if I told you that all of the stars and galaxies and galaxy clusters comprise only 10% of the light sector? The rest has proved as elusive as the dark sector. We think it must exist as extremely diffuse gas in between the galaxies, yet our intense searches miss up to half of it. At least until now.

POST TO SPACE-TIME: What about matter that, due to the faster-than-light expansion of the universe, has passed beyond our view? Do we not count it? Ignore it? At the current rate of expansion, which I believe (not verified) is about 2.4, less mass would lie within the visible range every year, and over 100 or 1,000+ years. In the region where light will never reach us there is still matter and star creation, which must be counted to get an accurate, exact answer for the ratio of total mass to dark matter to dark energy (if dark energy really is just another name for the faster-than-light expansion of the universe). Until then, this is no more than guesswork. To make this less confusing, what I am referring to is the speed of causality, or speed of light.
In several episodes, you represented this on a graph, say X = time, Y = speed, where the speed of light "c" cuts the graph at 45 degrees. Everything to the left of "c" is the visible universe, but due to the faster-than-"c" expansion of the universe, galaxies cross over the line into the region from which light is no longer fast enough to reach us. The same goes for matter. If dark energy is a myth, and only names the rapid expansion of the universe set in motion by the Big Bang, the missing mass is in the part we can't see. And since we can't see into it, we have no idea how big it is, nor how old it is. Ninety-five percent of our missing mass may reside there.

Harvard-Smithsonian Center for Astrophysics: Transiting rocky super-Earth found in habitable zone of quiet red dwarf star

Jet Propulsion Laboratory, Pasadena, Calif.: A relatively large near-Earth asteroid discovered nearly three years ago will fly safely past Earth on April 19 at a distance of about 1.1 million miles (1.8 million kilometers), or about 4.6 times the distance from Earth to the moon. Although there is no possibility for the asteroid to collide with our planet, this will be a very close approach for an asteroid of this size.
Large-scale spills of hazardous materials often produce gas clouds which are denser than air. The dominant physical processes which occur during dense-gas dispersion are very different from those recognized for trace-gas releases in the atmosphere. Most important among these processes are stable stratification and gravity flow. Dense-gas flows displace the ambient atmospheric flow and modify ambient turbulent mixing. Thermodynamic and chemical reactions can also contribute to dense-gas effects. Some materials flash to aerosol and vapor when released, and the aerosol can remain airborne, evaporating as it moves downwind and causing the cloud to remain cold and dense for long distances downwind. Dense-gas dispersion models which include phase change and terrain effects have been developed and are capable of simulating many possible accidental releases. A number of large-scale field tests with hazardous materials such as liquefied natural gas (LNG), ammonia (NH3), hydrofluoric acid (HF) and nitrogen tetroxide (N2O4) have been performed and used to evaluate models. The tests have shown that gas concentrations up to ten times higher than those predicted by trace-gas models can occur due to aerosols and other dense-gas effects. A methodology for model evaluation has been developed which is based on the important physical characteristics of dense-gas releases. © 1989.
12 July 2018

A chemist's guide to catalysis

Published online 27 June 2018

Researchers lay out the principles that should guide the design of ideal catalysts for propylene synthesis. A team of researchers delineated the most important characteristics of ideal catalysts for the conversion of methanol to propylene, a process needed to make plastics, fibers and important chemicals. Zeolites, porous aluminosilicate compounds, have long been used as catalysts for this reaction. In particular, a certain topology called ZSM-5 showed great promise. The team, led by researchers at KAUST in Saudi Arabia and TU Delft in the Netherlands, set out to characterize the parameters that most affect catalytic activity, to provide a rational and systematic basis for catalyst design. To do that, they synthesized an array of modified ZSM-5 catalysts, either by altering the aluminum-to-silicon ratio during synthesis, by removing metal ions post-synthesis, or by incorporating alkaline earth metals. The rationale was to alter the distribution and composition of acid sites in the catalysts. Two types of acid sites are present in zeolites: Brønsted acid sites (proton donors) and Lewis acid sites (electron-pair acceptors). Reducing aluminum content or incorporating alkaline earth metals typically neutralizes the acidity of Brønsted acid sites. The team demonstrated the efficiency of Brønsted acid site isolation, which prevents the secondary reactions that lead to undesired products, publishing their results in Nature Chemistry [1]. More importantly, they showed that the presence of Lewis acid sites increased the lifetime of the catalysts by making them more resistant to coking, the buildup of carbonaceous deposits that deactivates the catalyst. "Our results on the synergistic effects of Brønsted and Lewis acid sites will help design more efficient zeolite catalysts for the transformation of fossil fuels or even CO2 into valuable chemicals," says Jorge Gascon, a professor at KAUST's catalysis center and the principal investigator on the project.
- [1] Yarulina, I. et al. Structure–performance descriptors and the role of Lewis acidity in the methanol-to-propylene process. Nat. Chem. https://doi.org/10.1038/s41557-018-0081-0 (2018).
Soil seed banks are important for vegetation management because they contain propagules of species that may be considered desirable or undesirable for site colonization after management and disturbance events. Knowledge of seed bank size and composition before planning management activities facilitates proactive management by providing early alerts of exotic species presence and of the ability of seed banks to promote colonization by desirable species. We developed models in ponderosa pine (Pinus ponderosa) forests in northern Arizona to estimate the size and richness of mineral soil seed banks using readily observable vegetation and forest-floor characteristics. Regression models using three or fewer predictors explained 41 to 59 percent of the variance in 0- to 2-inch (0- to 5-cm) seed densities of total and native perennial seed banks. Key predictors included aboveground plant species richness per 10.8 ft² (1 m²), litter weight and thickness, and tree canopy type (open or closed). Both total and native perennial seed banks were larger and richer in plots containing: (1) species-rich understories, (2) sparse litter, and (3) tree canopy openings. A regression tree model estimated that seed bank density of native perennials is 14-fold greater if aboveground plant richness exceeds eight species per 10.8 ft², forest-floor leaf litter is < 1 inch (2.5 cm) thick, and tree canopies are open (a code sketch of this decision rule appears below).

Keywords: Arizona; Endemic plants; Forest litter; Forest management; Forests and forestry; Gambel oak; Invasive plants; Perennials; Pinus ponderosa; Ponderosa pine; Quercus gambelii; Soil seed banks; Southwest; New; Vegetation management

Subject areas: Environmental Indicators and Impact Assessment | Forest Management | Other Earth Sciences | Other Forestry and Forest Sciences | Plant Sciences | Soil Science | Weed Science

Abella, S. R., and J. D. Springer. Estimating soil seed bank characteristics in ponderosa pine forests using vegetation and forest-floor data. USDA Forest Service Research Note RMRS-RN-35.
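The published decision rule lends itself to being written out directly. A minimal sketch follows; the three thresholds and the 14-fold factor come from the abstract, but the baseline density is a made-up placeholder, since the note's absolute values are not quoted here:

```python
# Minimal sketch of the regression-tree rule reported above. The thresholds
# and the 14x factor are from the abstract; `baseline` is an illustrative
# placeholder, not a measured value.
def native_perennial_seed_density(richness_per_m2, litter_thickness_in,
                                  canopy_open, baseline=100.0):
    """Estimate relative native-perennial seed bank density."""
    if richness_per_m2 > 8 and litter_thickness_in < 1.0 and canopy_open:
        return 14 * baseline  # species-rich, lightly littered, open-canopy plots
    return baseline

print(native_perennial_seed_density(12, 0.5, True))   # 1400.0
print(native_perennial_seed_density(5, 2.0, False))   # 100.0
```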
This movie illustrates methane's connection to global warming. Methane is a simple compound made of carbon and hydrogen. This gas comes from ordinary sources, like cattle herds and garbage dumps. On a planetary scale it also has a significant impact on climate. As it builds up in the atmosphere, it traps energy from the sun like a layer of insulation. Carbon dioxide does much the same thing: it causes global warming by trapping heat. But as experts struggle to curtail global climate change, a decrease in atmospheric methane might be easier to achieve than proportional drops in carbon dioxide, affording an alternate scenario to policy makers. Methane is second only to carbon dioxide in contributing to global warming. It is a naturally occurring gas, a product of a variety of biological processes. But in terms of climate change, it is the unnatural concentration of the gas from human-induced factors that has researchers concerned. In the case of garbage disposal, methane enters the atmosphere as a byproduct of decomposition. As anaerobic bacteria break down polymers and other carbon-based garbage, like the banana peel shown here, methane gets produced as a waste gas. As it enters the atmosphere, it reduces the Earth's ability to cool by absorbing more reflected heat from the planet than would otherwise occur. Other sources of methane production include rice cultivation, industrial production, and cattle herds.

GCMD keywords can be found on the Internet with the following citation: Olsen, L.M., G. Major, K. Shein, J. Scialdone, S. Ritz, T. Stevens, M. Morahan, A. Aleman, R. Vogel, S. Leicester, H. Weir, M. Meaux, S. Grebas, C. Solomon, M. Holland, T. Northcutt, R. A. Restrepo, R. Bilodeau, 2013. NASA/Global Change Master Directory (GCMD) Earth Science Keywords. Version 184.108.40.206.0
Arctic sea ice acts like a heat shield, reflecting most of the sun's energy. As the ice has melted back, it reflects less, and the planet is absorbing more heat. News, Blogs & Features - Jul 14th, 2018 - As Seas Rise, Americans Use Nature to Fight Worsening Erosion - Jul 11th, 2018 - Air Conditioning Costs Rise With Arizona’s Heat - Jul 11th, 2018 - Report: The High Cost of Hot - Jun 10th, 2018 - Antarctic Ocean Discovery Warns of Faster Global Warming - Jun 7th, 2018 - Rising Seas Could Swell Arizona’s Population
Ion Implantation of Graphite Fibers and Filaments

Ion implantation is an important technique for modifying material properties through the introduction of impurity atoms or the creation of lattice defects in a controlled way. The technique is important in the semiconductor industry for making p-n junctions by, for example, implanting n-type impurities into p-type host materials. From a materials science point of view, ion implantation allows essentially any element of the periodic table to be introduced into the near-surface region of essentially any host material, with quantitative control over the depth and composition profile of the impurity by proper choice of ion energy and fluence (i.e., the total number of implanted ions per unit area of sample). Furthermore, an important application of ion implantation is in the synthesis of metastable alloys which could not be produced by other means.

Keywords: Host Material; Dark Field Image; Lattice Image; Rutherford Backscattering Spectroscopy; Graphite Fiber
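The roles of energy and fluence in setting the profile can be made concrete with the usual first-order picture. A hedged sketch follows (the standard textbook Gaussian-range approximation, not taken from this chapter; the numbers are illustrative):

```python
# Hedged sketch: the depth profile of an implanted dose is commonly
# approximated as a Gaussian centered on the projected range Rp with
# straggle dRp. The peak concentration follows from requiring the profile
# to integrate to the fluence phi.
import math

def implant_profile(x_cm, phi_cm2, Rp_cm, dRp_cm):
    """Implanted impurity concentration (atoms/cm^3) at depth x."""
    peak = phi_cm2 / (math.sqrt(2 * math.pi) * dRp_cm)
    return peak * math.exp(-((x_cm - Rp_cm) ** 2) / (2 * dRp_cm ** 2))

# Illustrative numbers: 1e15 ions/cm^2, Rp = 100 nm, straggle = 30 nm
print(f"{implant_profile(100e-7, 1e15, 100e-7, 30e-7):.3e} atoms/cm^3 at the peak")
# ~1.3e20 atoms/cm^3
```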
A typical adult damselfly. Note the spiked legs, which are held in a basket shape to help catch prey while flying.

Dragonflies are charismatic insects, and most of us can probably remember chasing them or watching their acrobatic flights when we were children. But what most of us didn't realize when we were kids is that dragonflies spend the majority of their lives as toothy, alien-looking predators living underwater before they become adults. Depending on the species, they can live in the water for several weeks up to several years.

A typical larval dragonfly, which feeds on other aquatic animals – and even other dragonflies!

By living part of their lives in water, and part on land and in the air, dragonflies represent an interesting conservation challenge. Historically, conservation science has focused on single habitats, such as lakes, streams, forests or grasslands. Little attention has focused on incorporating multiple habitat types, such as those required by dragonflies, into conservation, potentially leaving species like dragonflies in danger.

In the Waubaushene area of Georgian Bay (Lake Huron), recreational boating is very common. These boats create waves that can dislodge both adult and larval dragonflies, affecting their ability to find food and avoid predators. The overall number of boats, the speed of these boats, and how close they are to coastal wetlands are the most important factors that determine how much impact boat-generated waves have on dragonflies. My colleagues and I at the University of Toronto investigated how much influence these recreational boats have, relative to more natural processes, on dragonfly communities in Georgian Bay.

A Google Earth image of an area in Georgian Bay. Note the many waves created by boats as they travel through this region.

Taking the lead on this project, I counted dragonflies at 17 islands in Waubaushene. The coastal wetlands around these islands are inhabited by dragonflies. The islands studied in this project were selected to represent a range of influence from boats in the area, determined by their distance and orientation to marked boating channels and area marinas.

Aaron Hall counting adult dragonflies at one of the islands in Waubaushene.

The results show that boats do have an influence on dragonfly communities, providing a link between recreational boating and dragonfly communities. This research provides important insights that can be applied to the protection and conservation of dragonflies, and suggests that some very simple changes in boater behaviour could have big implications. For example, if boats travel slower or further away from dragonfly habitats, they have less impact, and these two factors might be simple to change. In areas where boats mostly stay within marked boating channels, moving or adjusting these channels so they are as far away from dragonfly habitats as possible would minimize impacts. Additionally, speed limits could be set in these channels to reduce the size of waves created by boats. These simple measures could have a positive impact on dragonflies, which are a critical component of the aquatic and terrestrial food webs of this region.
Here are the lab steps:
1. Take a volumetric flask from the Glassware shelf and place it on the workbench.
2. Prepare a standard solution of potassium dichromate by adding 4 g of dry potassium dichromate to the volumetric flask and filling the volumetric flask to the 100 mL mark with water.
3. Take a second volumetric flask from the Glassware shelf and place it on the workbench.
4. Prepare a standard solution of iron(II) ammonium sulfate hexahydrate by adding 4 g of iron(II) ammonium sulfate hexahydrate to the second volumetric flask, adding 30 mL of H2O to dissolve the compound and release the water of hydration, and then filling with water to the mark to make a 100 mL solution.
5. Take a third volumetric flask from the Glassware shelf and place it on the workbench.
6. Add 2 mL from one of the Vodka bottles on the Chemicals shelf (Smirnoff or Absolut) to the volumetric flask, and fill with water to the mark to make a 100 mL solution.

Procedure 2
1. Take a 150 mL Erlenmeyer flask from the Glassware shelf and place it on the workbench.
2. Add 5 mL of diluted vodka solution from the volumetric flask into the Erlenmeyer flask.
3. Add 10 mL of water from the Chemicals shelf to the Erlenmeyer flask.
4. Acidify the solution by adding 3 mL of sulfuric acid solution (H2SO4) from the Chemicals shelf to the Erlenmeyer flask.
5. Add 5 mL of the standard potassium dichromate solution from the volumetric flask into the Erlenmeyer flask. This is enough to oxidize all of the ethanol in the vodka and leave an excess of dichromate ions.
6. Take a burette from the Glassware shelf and place it on the workbench. Fill the burette with 50 mL of the standard iron(II) solution.
7. Drag the Erlenmeyer flask and drop it on the lower half of the burette to connect them.
8. Perform a rough titration by adding the iron(II) solution in one- to two-mL increments until the endpoint, indicated by a loss of the dark purple color, is reached.
9. Refill the burette with iron(II) solution.
10. Prepare a second flask of vodka, sulfuric acid and potassium dichromate using the same volumes of each as in steps 2-5.
11. Perform the titration again, using smaller increments of iron(II) solution as you get closer to the expected endpoint. You should be able to reach the endpoint within an accuracy of 1-2 drops. Record the volume of the titrant used for this titration.
12. Repeat the accurate titration with a third flask of vodka, potassium dichromate and sulfuric acid, again to an accuracy of 1-2 drops.

Here are the assignments:
1. Calculate the concentration of dichromate ion in the first volumetric flask.
2. Calculate the concentration of the iron(II) ion in the second volumetric flask.

Assignment 1 of Procedure 2
1. Record and calculate the following:
(a) volume of dichromate solution added to the flask (mL):
(b) moles of dichromate ion:
(c) volume of iron(II) solution delivered from the burette (mL):
(d) moles of iron(II) ions delivered from the burette:
(e) moles of excess dichromate ions reacted with the iron(II) ions (remember the ratio in which they react!):
(f) moles of dichromate that originally oxidized the ethanol in the vodka, by subtraction of (e) from (b):
(g) moles of ethanol in the vodka (remember the ratio in which the dichromate ions react with the ethanol molecules):
2. The amount of alcohol in a drink is typically reported as the percent alcohol by volume. To find this:
(a) calculate the mass of alcohol (ethanol) in the sample solution (g):
(b) record the volume of vodka used in the test samples (mL):
(c) given the density of ethanol as 0.7893 g/mL, calculate the percent alcohol by volume as:

% alcohol by volume = [(mass of ethanol / density of ethanol) / volume of vodka] × 100%

Calculations for determining the alcohol content in a given sample of vodka.
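A worked version of this arithmetic, with hedges: the stoichiometry below (1 dichromate : 6 Fe²⁺ in the back-titration, and 2 dichromate : 3 ethanol for oxidation to acetic acid) is standard, but the burette reading is a made-up placeholder, so treat the output as an illustration rather than the graded answer.

```python
# Hedged sketch of Assignment 1. Molar masses and ratios are standard
# (Cr2O7^2- + 6 Fe^2+ + 14 H+ -> 2 Cr^3+ + 6 Fe^3+ + 7 H2O, and
# 2 Cr2O7^2- : 3 CH3CH2OH); the titrant volume is a hypothetical reading.
M_K2CR2O7 = 294.18   # g/mol, potassium dichromate
M_FAS     = 392.14   # g/mol, Fe(NH4)2(SO4)2*6H2O
M_ETOH    = 46.07    # g/mol, ethanol
RHO_ETOH  = 0.7893   # g/mL

c_dichromate = (4.0 / M_K2CR2O7) / 0.100     # mol/L in the first 100 mL flask
c_iron       = (4.0 / M_FAS) / 0.100         # mol/L in the second flask

v_titrant_mL = 13.1                          # hypothetical burette reading
n_dichromate = c_dichromate * 5.0e-3         # (b): 5 mL added to the flask
n_iron       = c_iron * v_titrant_mL * 1e-3  # (d)
n_excess     = n_iron / 6.0                  # (e): 6 Fe^2+ per dichromate
n_reacted    = n_dichromate - n_excess       # (f)
n_ethanol    = n_reacted * 3.0 / 2.0         # (g): 3 ethanol per 2 dichromate

mass_ethanol = n_ethanol * M_ETOH            # grams in the 5 mL test sample
v_vodka_mL   = 2.0 * (5.0 / 100.0)           # raw vodka in that sample (1:50 dilution)
abv = (mass_ethanol / RHO_ETOH) / v_vodka_mL * 100.0
print(f"ethanol: {n_ethanol * 1e3:.3f} mmol, alcohol by volume: {abv:.1f}%")
```

With this placeholder reading the script returns roughly 40% alcohol by volume, the nominal strength of commercial vodka; your own titrant volumes will shift the result.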
September 2015 lunar eclipse

Total lunar eclipse of September 28, 2015, photographed from Murrieta, California, at 2:52 UTC (ecliptic north at top). The Moon passes right to left (west to east) through Earth's shadow. Saros (and member): 137 (26 of 78).

The Moon crosses Earth's shadow in Pisces, passing west to east (right to left) as shown here in hourly movements. Uranus, at magnitude 5.7, can be seen in binoculars 16 degrees east of the totally eclipsed Moon.

A total lunar eclipse took place between September 27 and 28, 2015. It was seen on Sunday evening, September 27, in the Americas, while in Europe, Africa, and the Middle East it was seen in the early hours of Monday morning, September 28. It was the latter of two total lunar eclipses in 2015, and the final in a tetrad (four total lunar eclipses in series). The other eclipses in the tetrad are those of April 15, 2014, October 8, 2014, and April 4, 2015.

The Moon appeared larger than normal because it was just 59 minutes past its closest approach to Earth in 2015 at mid-eclipse, a coincidence sometimes called a supermoon. The Moon's apparent diameter was larger than 34' viewed straight overhead, just off the coast of northeast Brazil. The eclipse was visible over Europe, the Middle East, Africa, and America.

Image captions: view of Earth from the Moon at greatest eclipse; simulated appearance of Earth and its atmospheric ring of sunlight; the stages of the lunar eclipse from Staffordshire, UK; time-lapse images from Oslo, Norway, and Bregenz, Austria; and photographs from Warsaw, Poland (2:01-2:16 UTC); Denver, Colorado (2:15 UTC); Fray Bentos, Uruguay (2:28 UTC); Tampa, Florida (2:30 UTC); New York City, New York (2:36 UTC); Wrocław, Poland (2:36 UTC); Zürich, Switzerland (2:36 UTC); Coralville, Iowa (2:52 UTC); Munich, Germany (2:55 UTC); Sitia, Greece (3:01 UTC); Berlin, Germany (3:05 UTC); Mill Valley, California (3:07 UTC); Boston, Massachusetts (3:24 UTC); Germany (3:37 UTC); Cosne-Cours-sur-Loire, France (4:02 UTC); and California (4:07 UTC).

This eclipsed Moon appeared 12.9% larger in diameter than during the April 2015 lunar eclipse, measured as 29.66' and 33.47' in diameter from Earth's center, as compared in simulated images. A supermoon is the coincidence of a full moon or a new moon with the closest approach the Moon makes to the Earth on its elliptical orbit, resulting in the largest apparent size of the lunar disk as seen from Earth. This was the last supermoon lunar eclipse until 31 January 2018.

A lunar eclipse occurs when the Moon passes within Earth's umbra (shadow). As the eclipse begins, Earth's shadow first darkens the Moon slightly. Then the shadow begins to "cover" part of the Moon, turning it a dark red-brown color (typically; the color can vary based on atmospheric conditions). The Moon appears reddish because of Rayleigh scattering (the same effect that causes sunsets to appear reddish) and the refraction of that light by Earth's atmosphere into its umbra. A simulation shows the approximate appearance of the Moon passing through Earth's shadow, with the Moon's brightness exaggerated within the umbral shadow. The northern portion of the Moon was closest to the center of the shadow, making it darkest and most red in appearance.
Contact times (UTC, September 28; local times ranged from the evening of September 27 in the Americas to the morning of September 28 in Europe and Africa):
- P1: Penumbral eclipse begins* - 0:12 am †
- U1: Partial eclipse begins - 1:07 am †
- U2: Total eclipse begins - 2:11 am
- Mid-eclipse - 2:47 am
- U3: Total eclipse ends - 3:23 am
- U4: Partial eclipse ends - 4:27 am
- P4: Penumbral eclipse ends - 5:22 am

† In the westernmost time zones the Moon had not yet risen during the earliest phases; in the easternmost zones it set before the final phases ended.
* The penumbral phase of the eclipse changes the appearance of the Moon only slightly and is generally not noticeable.

The timing of total lunar eclipses is determined by its contacts:
- P1 (First contact): Beginning of the penumbral eclipse. Earth's penumbra touches the Moon's outer limb.
- U1 (Second contact): Beginning of the partial eclipse. Earth's umbra touches the Moon's outer limb.
- U2 (Third contact): Beginning of the total eclipse. The Moon's surface is entirely within Earth's umbra.
- Greatest eclipse: The peak stage of the total eclipse. The Moon is at its closest to the center of Earth's umbra.
- U3 (Fourth contact): End of the total eclipse. The Moon's outer limb exits Earth's umbra.
- U4 (Fifth contact): End of the partial eclipse. Earth's umbra leaves the Moon's surface.
- P4 (Sixth contact): End of the penumbral eclipse. Earth's penumbra no longer makes contact with the Moon.

The eclipse was one of four lunar eclipses in a short-lived series at the descending node of the Moon's orbit. The lunar year series repeats after 12 lunations, or 354 days (shifting back about 10 days in sequential years). Because of the date shift, Earth's shadow will be about 11 degrees west in sequential events.

Lunar eclipse series sets from 2013-2016 (ascending node / descending node):
- 2013 Apr 25 / 2013 Oct 18
- 2014 Apr 15 / 2014 Oct 08
- 2015 Apr 04 / 2015 Sep 28
- (142) 2016 Mar 23 / 2016 Sep 16
Last sets: 2013 May 25 (ascending) and 2012 Nov 28 (descending). Next sets: 2017 Feb 11 (ascending) and 2016 Aug 18 (descending).

Half-saros: preceded by the eclipse of September 22, 2006, and followed by the eclipse of October 2, 2024.

References:
- Sky and Telescope: "Here's the Scoop on Sunday's Supermoon Eclipse," Bob King.
- "Why Was September's Lunar Eclipse So Dark?" Universe Today, 2015-10-05. Retrieved 2017-08-08.
- Fred Espenak & Jean Meeus. "Visual Appearance of Lunar Eclipses." NASA. Retrieved April 13, 2014.
- Espenak, Fred. "Lunar Eclipses for Beginners." MrEclipse. Retrieved April 7, 2014.
- Clarke, Kevin. "On the nature of eclipses." Inconstant Moon. Cyclopedia Selenica. Retrieved 19 December 2010.
- Jean Meeus, Mathematical Astronomy Morsels, p. 110, Chapter 18, "The half-saros."
The study of life and its existence in the universe, known as astrobiology, is now one of the hottest areas of both popular science and serious academic research, fusing biology, chemistry, astrophysics, and geology. Lewis Dartnell tours its latest findings, and explores some of the most fascinating questions in science. Could life emerge on other planets or moons? Could alien cells be based on silicon rather than carbon, or need ammonia instead of water? Dartnell takes us on a tour of our solar system and beyond to reveal how deeply linked we are to our cosmic environment, and what we might hope to find out there.
Some fairly major genetic switches are scrambled in drosophila raised in artificial zero-G. This is bad news for any form of space travel involving long-term exposure to microgravity, especially those where children are born and raised from scratch. It makes sense, since until a little over a half-century ago, every single one of our ancestors had been in a gravity field of right around 1.0 g for 3.7 billion years; zero gravity is a huge warranty-violation for biochemistry. Sex has already occurred in space, but that's easy compared to building a whole new human being molecule-by-molecule.
Heat and salt balances in the Seto Inland Sea

Seasonal variations of heat and salt balances are estimated in the Seto Inland Sea with the use of a numerical experiment. The surface effect is dominant with respect to the heat balance. In spring, however, the effect of the horizontal heat transport is the same as or greater than that of the surface heating (or cooling). Annual mean heat transport is 85 cal cm−2 day−1 (356 J cm−2 day−1), which is supplied from the open ocean and lost through the sea surface in the Inland Sea as a whole. Because of the shallow water depth, heat is supplied through the surface and carried out by the horizontal heat transport in Hiuchi- and Bingo-nada in the annual mean. This heat transport has the opposite sense to that in the whole Seto Inland Sea, and the annual mean transport is negative (−10 cal cm−2 day−1, i.e., −42 J cm−2 day−1). The salt balance is primarily controlled by the river discharge and the surface effect (precipitation) in June and July. In the other months, the effects of horizontal salt transport, of river inflow and of sea surface exchange (especially of the evaporation in autumn) are comparable to each other. In the Bungo Channel the river effect is relatively small. Osaka Bay and the Kii Channel are characterized by a smaller surface effect.

Keywords: Water Depth; Heat Transport; River Discharge; Surface Heating; Salt Balance
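For readers more used to W m⁻², the paper's flux units convert directly; a quick sketch of the arithmetic (mine, not the paper's):

```python
# Converting the quoted annual mean heat transport from cal cm^-2 day^-1
# to J cm^-2 day^-1 and to W m^-2.
CAL_TO_J = 4.184
flux_cal = 85.0                      # cal cm^-2 day^-1
flux_J   = flux_cal * CAL_TO_J       # J cm^-2 day^-1
flux_W   = flux_J * 1e4 / 86400      # W m^-2 (1 m^2 = 1e4 cm^2; 1 day = 86400 s)
print(f"{flux_J:.0f} J cm^-2 day^-1 = {flux_W:.1f} W m^-2")
# 356 J cm^-2 day^-1, matching the paper; about 41 W m^-2
```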
Adenoviruses cause numerous diseases, such as eye or respiratory infections, and they are widely used in gene therapy. Researchers from the University of Zurich have now discovered how these viruses penetrate cells, a key step for infection and gene delivery. The cell unwillingly supports virus entry and infection by providing lipids that are normally used to repair damaged membranes.

An intact cell membrane is essential for any cell to function. The external cell membrane can be damaged by mechanical stress, for example in muscle cells, or by pathogens, such as viruses and bacteria. Membrane damage can result in small pores, which lead to loss of valuable substances from the cell. The cell can quickly repair such injuries to its membrane. Human adenoviruses also cause small pores in the cell membrane, as a team of cell biologists headed by Urs Greber, a professor at the Institute of Molecular Life Sciences at the University of Zurich, has now discovered. These pores are too small for the virus to get directly into the cell but are large enough for the cell to recognize them as a danger signal and repair them in a matter of seconds. The adenovirus uses this very repair mechanism to trigger an infection.

Certain lipids help the virus to enter the cell. During this repair process, lipids, in particular ceramide lipids, are formed, which enable the virus to enter the cell more rapidly. The ceramide lipids cause the membrane to bend and endosomes to form. Endosomes are small bubbles of lipids and proteins, and they engulf extracellular material, such as nutrients, but also viruses. With the aid of the ceramide lipids, the virus increases the size of the membrane lesions and can leave the endosome before the endosome becomes a lysosome and degrades the virus. The virus then multiplies in the nucleus and subsequently infects other cells. "We have identified particular cellular lipids as key components for the virus to enter into cells, which is surprising as lipids have important roles in biology, but these roles are difficult to identify," explains Stefania Luisoni, the first author on the study and a doctoral student at the Institute of Molecular Life Sciences.

The scientists identified a connection between the formation of a membrane pore by the virus and a cellular repair mechanism. These events form a positive feedback loop, which is part of the explanation for the high infection efficiency that scientists have long observed for adenoviruses. The work also identified a new inhibitor against the adenoviruses, which inhibits the cellular protein "lysosomal acid sphingomyelinase" and blocks the formation of ceramide lipids in the plasma membrane. "Our results are potentially interesting for the development of new anti-viral agents, and they increase our understanding of how the adenovirus works in vaccination and gene therapy," concludes Greber.

Stefania Luisoni, Maarit Suomalainen, Karin Boucke, Lukas B. Tanner, Markus R. Wenk, Xue Li Guan, Michal Grzybek, Ünal Coskun, Urs F. Greber. Co-option of Membrane Wounding Enables Virus Penetration into Cells. Cell Host & Microbe, July 8, 2015.
http://www.sciencedirect.com/science/article/pii/S1931312815002541

Nathalie Huber | Universität Zürich
Identification of Important Sea Turtle Areas (ITAs) for hawksbill turtles in the Arabian Region

We present the first data on hawksbill turtle post-nesting migrations and behaviour in the Arabian region. Tracks from 90 post-nesting turtles (65 in the Gulf and 25 from Oman) revealed that hawksbills in the Arabian region may nest up to 6 times in a season, with an average of 3 nests per turtle. Turtles from Qatar, Iran and the UAE generally migrated south and southwest to waters shared by the UAE and Qatar. A smaller number of turtles migrated northward towards Bahrain and Saudi Arabia, and one reached Kuwait. Omani turtles migrated south towards Masirah Island and to Quwayrah, staying close to the mainland and over the continental shelf. The widespread dispersal of hawksbill foraging grounds across the SW Gulf may limit the habitat protection options available to managers, and we suggest these be linked to preservation of shallow-water habitats and fishery management. In contrast, the two main foraging areas in Oman were small and could be candidates for protected-area consideration. Critical migration bottlenecks were identified at the easternmost point of the Arabian Peninsula, as turtles from the Daymaniyat Islands migrate southward, and between Qatar and Bahrain. Overall, Gulf turtles spent 68% of their time in foraging grounds, with home ranges of 40–60 km² and small core areas of 6 km². Adult female turtles from Oman were significantly larger than Gulf turtles by ~11 cm (mean 81.4 cm curved carapace length, CCL) and spent 83% of their time foraging in smaller home ranges with even smaller core areas (~3 km²), likely due to better habitat quality and food availability. Gulf turtles were among the smallest in the world (mean 70.3 cm CCL) and spent an average of 20% of their time undertaking summer migration loops, a thermoregulatory response to avoid elevated sea surface temperatures, as the Gulf regularly experiences sustained sea surface temperatures above 30 °C. Fishery bycatch was determined for two of the 90 turtles. These spatio-temporal findings on habitat use will enable risk assessments for turtles in the face of multiple threats, including oil and gas industries, urban and industrial development, fishery pressure, and shipping. They also improve our overall understanding of hawksbill habitat use and behaviour in the Arabian region, and will support sea-turtle conservation-related policy decision-making at national and regional levels.
Time Series (Authors: Michael Hauser and Wolfgang Hörmann)

In this chapter we are concerned with time series, i.e. with the generation of sample paths of stochastic non-deterministic processes in discrete time, (X_t, t ∈ ℤ), where the X_t are continuous random variates. In the first part we will focus our presentation on stationary Gaussian processes. These are most widely used in the analysis of, e.g., economic series or in signal processing.

Keywords: Time Series; Autocorrelation Function; Multivariate Distribution; Toeplitz Matrices; Stationary Time Series
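As a concrete instance of generating such a sample path, here is a minimal sketch (a generic AR(1) construction of my choosing; the chapter's own algorithms, such as Toeplitz-based methods for arbitrary autocorrelation functions, are not reproduced here):

```python
# Hedged sketch: a sample path of a stationary Gaussian AR(1) process
# X_t = phi*X_{t-1} + eps_t. Drawing X_0 from the stationary distribution
# N(0, sigma^2 / (1 - phi^2)) makes the whole path stationary from t = 0.
import numpy as np

def ar1_path(n, phi=0.8, sigma=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

path = ar1_path(500)
print(path[:5])
```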
Watching the sun come and go sounds like a peaceful process, but Johns Hopkins scientists have discovered that behind the scenes, millions of specialized cells in our eyes are fighting for their lives to help the retina set the stage to keep our internal clocks ticking. In a study that appeared in a recent issue of Neuron, a team led by biologist Samer Hattar has found that there is a kind of turf war going on behind our eyeballs, where intrinsically photosensitive retinal ganglion cells (ipRGCs) are jockeying for the best position to receive information from rod and cone cells about light levels. By studying these specialized cells in mice, Hattar and his team found that the cells actually kill each other to seize more space and find the best position to do their job.

Understanding this fight could one day lead to victories against several conditions, including autism and some psychiatric disorders, where neural circuits influence our behavior. The results could help scientists better understand how the circuits behind our eyes assemble to influence our physiological functions, said Hattar, an associate professor of biology in the Krieger School of Arts and Sciences. "In a nutshell, death in our retina plays a vital role in assembling the retinal circuits that influence crucial physiological functions such as circadian rhythms and sleep-wake cycles," Hattar said. "Once we have a greater understanding of the circuit formation underlying all of our neuronal abilities, this could be applied to any neurological function."

Hattar and his team determined that the killing among rival ipRGCs is justifiable homicide: without this cell death, circadian blindness overcame the mice, which could no longer distinguish day from night. Hattar's team studied mice that were genetically modified to prevent cell death by removing the Bax protein, an essential factor for cell death to occur. They discovered that if cell death is prevented, ipRGC distribution is strongly affected, leading the surplus cells to bunch up and form ineffectual, ugly clumps incapable of receiving light information from rods and cones for the alignment of circadian rhythms. To detect this, the researchers used wheel-running activity measurements in mice lacking both the Bax protein and the melanopsin protein (so that the ipRGCs could respond to light only through rod and cone input) and compared them to animals in which only the Bax gene was deleted. What the authors uncovered was exciting: when death is prevented, the ability of rods and cones to signal light to our internal clocks is highly impaired. This shows that cell death plays an essential role in setting up the circuitry that allows the retinal rods and cones to influence our circadian rhythms and sleep.

Hattar's study was funded by the National Institute of General Medical Sciences and the National Institute of Neurological Disorders and Stroke and was carried out in close collaboration with Rejji Kuruvilla, an associate professor who is another member of the mouse tri-lab community in the Department of Biology at Johns Hopkins. Copies of the study are available. Contact Amy Lunday at firstname.lastname@example.org or 443-287-9960.

Hattar's webpage: http://www.bio.jhu.edu/Faculty/Hattar/Default.html

Johns Hopkins University news releases can be found on the World Wide Web at http://releases.jhu.edu/ Information on automatic E-mail delivery of science and medical news releases is available at the same address.
Amy Lunday | Newswise
Geologists from the University of Innsbruck search for ancient traces of our climate history. Diving in the cavern: beneath the surface of the Amargosa Desert in the southwestern USA lies a hidden 'gem' for climatologists that harbors a complete record of climate evolution spanning a million years: Devils Hole. Geologists from Innsbruck are studying this climate record, which is found both above and below the present-day water table. Apart from ice in the polar regions, caves are one of the most important climate archives in the world. The Earth's surface is exposed to weathering and erosion and constantly changes. In caves, however, the footprints of the past are well preserved, sometimes over many hundreds of thousands of years. In February 2017, a group of researchers from Innsbruck descended into a part of Devils Hole to get a glimpse into historic climate changes. They were accompanied by Robbie Shone, one of the most accomplished cave photographers in the world. In our multimedia story we take a closer look at their cave adventure both above and below the water table.

Melanie Bartos | Universität Innsbruck
<urn:uuid:596f0312-2a94-45b6-b51d-5d660917be1e>
3.484375
875
Content Listing
Science & Tech.
37.890128
95,567,988
The purity of a 0.248 gram sample containing Zn is determined by measuring the amount of hydrogen gas formed when the sample reacts with hydrochloric acid. The sample is found to be 91.7% Zn. What volume of hydrogen gas in liters was collected at STP?
There are multiple steps in this problem:
1) Set up the balanced chemical reaction: Zn + 2HCl → ZnCl₂ + H₂.
2) Convert the 0.248 g sample into moles of Zn (1 mole of Zn = 65.409 g), taking the 91.7% purity into account.
3) From the number of moles of Zn, find the number of moles of H₂; the balanced equation gives a 1:1 ratio.
4) With the number of moles of H₂, find the volume of gas using the formula V = nRT/P.
This solution takes you through all four steps: setting up the chemical reaction, converting 0.248 g into moles, finding the moles of H₂ from the moles of Zn, and computing the gas volume from V = nRT/P.
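For readers who want to verify the arithmetic, here is a minimal Python sketch of the four steps; the molar volume of 22.414 L/mol at STP is the standard value implied by the V = nRT/P step.

```python
# Stoichiometry check for the Zn purity problem above.
M_ZN = 65.409          # g/mol, molar mass of zinc
V_MOLAR_STP = 22.414   # L/mol, molar volume of an ideal gas at STP

sample_mass = 0.248    # g, total sample mass
purity = 0.917         # mass fraction of Zn in the sample

moles_zn = sample_mass * purity / M_ZN   # step 2: grams of Zn -> moles of Zn
moles_h2 = moles_zn                      # step 3: Zn + 2HCl -> ZnCl2 + H2 (1:1 ratio)
volume_h2 = moles_h2 * V_MOLAR_STP       # step 4: V = nRT/P at STP

print(f"H2 collected: {volume_h2:.4f} L")  # about 0.078 L
```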
<urn:uuid:716370ef-bbe4-4d25-9762-3cbca8d2e90c>
3.46875
219
Tutorial
Science & Tech.
95.643306
95,567,996
In several biologically relevant situations, cell locomotion occurs in polymeric fluids with Weissenberg number larger than 1. Here we present results of three-dimensional numerical simulations for the steady locomotion of a self-propelled body in a model polymeric (Giesekus) fluid at low Reynolds number. Locomotion is driven by steady tangential deformation at the surface of the body (the so-called squirming motion). In the case of a spherical squirmer, we show that the swimming velocity is systematically less than that in a Newtonian fluid, with a minimum occurring for Weissenberg numbers of order 1. The rate of work done by the swimmer always goes up compared to that in the Newtonian solvent alone, but is always lower than the power necessary to swim in a Newtonian fluid with the same viscosity. The swimming efficiency, defined as the ratio between the rate of work necessary to pull the body at the swimming speed through the same fluid and the rate of work done by swimming, is found to always increase in a polymeric fluid. Further analysis reveals that polymeric stresses break the Newtonian front-back symmetry in the flow profile around the body. In particular, a strong negative elastic wake is present behind the swimmer, which correlates with strong polymer stretching, and its intensity increases with Weissenberg number and viscosity contrast. The velocity induced by the squirmer is found to decay in space faster than in a Newtonian flow, with a strong dependence on the polymer relaxation time and viscosity. Our computational results are also extended to prolate spheroidal swimmers; less polymer stretching is obtained for slender shapes than for bluff swimmers. The swimmer with an aspect ratio of two is found to be the most hydrodynamically efficient.
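The efficiency criterion used in the abstract can be stated compactly. A minimal LaTeX rendering, with symbol names of our choosing rather than the paper's notation:

```latex
% Swimming efficiency as defined in the abstract: the rate of work needed to
% pull the body through the same fluid at its swimming speed, divided by the
% rate of work the swimmer actually expends. Symbols are ours, for illustration.
\eta \;=\; \frac{P_{\mathrm{pull}}}{P_{\mathrm{swim}}}
```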
<urn:uuid:f1280cdf-51ec-4836-b730-9a7068e420da>
2.65625
383
Academic Writing
Science & Tech.
23.636914
95,568,012
Mouse mothers-to-be have a remarkable way to protect their unborn pups. Because the smell of a strange male's urine can cause miscarriage and reactivate the ovulatory cycle, pregnant mice prevent the action of such olfactory stimuli by blocking their smell. Researchers from the European Molecular Biology Laboratory (EMBL) in Monterotondo, Italy, have now revealed the nature of this ability. A surge of the chemical signal dopamine in the main olfactory bulb – one of the key brain areas for olfactory perception – creates a barrier for male odours, they report in the current issue of Nature Neuroscience. Social odours, such as pheromones, influence many aspects of human and animal behaviour – perhaps most famously, reproductive behaviour. For example, exposing a newly pregnant mouse to the smell of an alien male's urine prevents the implantation of her embryos into the uterus and brings her back into the ovulatory cycle. The scent affects pregnancy by inhibiting the release of the pregnancy hormone prolactin. This phenomenon is often called the Bruce effect and creates a mating opportunity for the alien male. It is also beneficial for the female because it avoids infanticide by the strange male after birth. After day 3 of pregnancy, however, the smell of an alien male's urine no longer affects pregnancy. At this stage the embryos have already been implanted into the uterus and losing them would bear a high cost for the female. Liliana Minichiello and her team at the EMBL Mouse Biology Unit have now discovered the molecular mechanism that underpins this change in sensitivity to male odours. "At day 3 of the pregnancy a chemical change occurs in the brain of the expectant mother that makes her unable to perceive male odours. This seems to mark a point of no return for the pregnancy," explains Minichiello. Following coitus, a progressive surge of the chemical signal dopamine takes place in the main olfactory bulb, the most anterior part of the mouse brain, which is dedicated to the processing of odours. The dopamine flood is triggered by the physical stimulation during mating and progressively impairs the perception and discrimination of social odours contained in male urine. Treating pregnant mice with chemicals that block the dopamine receptor D2 abolished the barrier effect, restored odour sensing and favoured pregnancy disruption. The findings unexpectedly reveal the main olfactory bulb as a key control centre of social and reproductive behaviour. Previous research in this area had focussed almost exclusively on other brain circuits. The main olfactory bulb likely achieves its control through projections to the amygdala and the hypothalamus, the regions of the brain that regulate emotional and reproductive behaviour through the release of hormones. Dopamine is also found in humans, where it is mostly known for its role as the brain's 'reward chemical', playing crucial roles in addiction and in neurological disorders like Parkinson's disease; it is also present in the human olfactory bulb. It is unknown whether a process similar to the phenomenon observed in mice takes place in pregnant women. "As far as we know, human pregnancy is not affected by strange male odours, but it could help explain why many women report changes in olfaction during pregnancy," says Che Serguera, who carried out the research in Minichiello's lab. Published online in Nature Neuroscience on 20 July 2008.
Anna-Lynn Wegener | EMBL
<urn:uuid:e55ef724-6cfd-4abe-90a1-e054d3aff819>
3.140625
1,298
Content Listing
Science & Tech.
33.813654
95,568,030
The 'ghost particle' from another galaxy that could transform our understanding of the universe after being detected in the Antarctic
- The IceCube laboratory detected the high-energy neutrino in September 2017
- Astronomers traced the subatomic particle to its origin 4 billion light years away
- It originated from a 'blazar' galaxy with a supermassive black hole at its heart
- Being able to detect high-energy neutrinos will provide yet another window on the observable universe, scientists claim
A single ghost-like subatomic particle captured on Earth could finally help solve a cosmic mystery that has left scientists baffled for more than a century. The high-energy neutrino – the first of its type ever detected – was traced four billion light years to its source, a distant elliptical galaxy with a giant black hole at its heart emitting jets of light and radiation aimed directly at Earth. Known as a 'blazar', this galaxy was the smoking gun that led astronomers to finally unravel the 100-year-old riddle of the origin of high-energy cosmic rays. An artist's impression of the active galactic nucleus where the ghost-like subatomic particle captured at the IceCube laboratory likely originated. These rays, which consist of fast-moving elementary particles, pepper Earth from space and pose a threat to astronauts, as well as to the crews and passengers of commercial flights. Discovering the ghost-like particle, which burst from the blazar before the Earth formed, could provide an entirely new way of looking at the cosmos, scientists claim. The neutrino discovery, published in the journal Science, points towards one likely origin – powerful jets of accelerated particles fired from the poles of rapidly rotating supermassive black holes. Until now, the origin of high-energy cosmic rays was a mystery to scientists. Beyond cosmic rays, the latest finding could provide a new way of peering into the depths of the universe. Like the discovery of gravitational waves in 2016, neutrinos could be a new 'messenger' carrying energy across the cosmos; neutrinos are the so-called third messenger, following photons (light) and gravitational waves. The discovery of a high-energy neutrino on September 22, 2017, sent astronomers on a chase to locate its source—a supermassive black hole in a distant galaxy. The high-energy neutrino was first detected on September 22, 2017 by the IceCube observatory, a huge facility sunk a mile beneath the South Pole. Here, a grid of more than 5,000 super-sensitive sensors picked up the characteristic blue 'Cherenkov' light emitted as the neutrino interacted with the ice. Having almost no mass and passing right through planets, stars and anything else in its way, the particle travelled in a straight line from its point of origin to Earth. As a result, astronomers were able to track its trajectory back across billions of light years to its probable source. News of the detection sent astronomers into a frenzy of activity as telescopes were quickly pointed in the suggested direction. The search led to the discovery of a 'blazar', a special class of galaxy containing a supermassive black hole, four billion light years away, to the left of the Orion constellation. A key feature of blazars is twin jets of light and elementary particles shooting from the poles of the swirling mass of material surrounding the black hole.
NASA's Fermi (top left) has achieved a new first—identifying a monster black hole in a far-off galaxy as the source of a high-energy neutrino seen by the IceCube Neutrino Observatory (sensor strings, bottom). The IceCube laboratory at the South Pole – the largest neutrino observatory in the world – is where scientists made the first ever detection of a high-energy neutrino. The neutrino detected by IceCube is thought to have been created by high-energy cosmic rays from the jets interacting with nearby material. Professor Paul O'Brien, a member of the international team of astronomers from the University of Leicester, said: 'Neutrinos rarely interact with matter. To detect them at all from the cosmos is amazing, but to have a possible source identified is a triumph. This result will allow us to study the most distant, powerful energy sources in the universe in a completely new way.'
WHAT IS A HIGH-ENERGY NEUTRINO?
High-energy neutrinos are chargeless, nearly massless subatomic particles. Neutrinos are one of the fundamental particles that make up the universe, but they are among the least understood because they interact very weakly with everything around them. This makes them ideal astronomical messengers, since they can pass through the universe without scattering, absorption or deflection. However, these weak interactions also make the particles notoriously tough to detect, so neutrino observatories require large-scale detectors. The only time they interact with other particles is when they collide head on. Most neutrino detectors use vast underground tanks brimming with water and fitted with extremely sensitive sensors to capture the brief flashes of light emitted when a neutrino smashes into a particle within the fluid. However, the largest neutrino observatory in the world, IceCube, instead uses a kilometre-sized section of ice some 1.55 miles (2.5 kilometres) beneath the surface of Antarctica, close to the South Pole. Sensors are embedded deep in the ice to capture the brief flashes that occur when neutrinos collide with particles in the ice. Capturing evidence of these collisions does not happen often, but when it does, it sets off a chain of events at the observatory to try to determine where the neutrino originated. Most neutrinos come from the sun or from cosmic rays striking our atmosphere. Unlike high-energy neutrinos, most cosmic rays carry an electric charge that causes their trajectories to be warped by magnetic fields, making it impossible to trace their origins. In contrast, neutrinos are unaffected by even the most powerful magnetic fields. The blazar believed to have generated the neutrino, code-named TXS 0506+056, was located in less than a minute after the IceCube team relayed co-ordinates for follow-up observations to telescopes worldwide. Being able to detect high-energy neutrinos will provide yet another window on the universe, said the scientists. The IceCube array uses strings of sensors which are lowered down boreholes in the ice. The IceTop has two layers of detectors beneath the surface. The Eiffel Tower is depicted, bottom right, to show the scale of the detector. The sensational discovery of the second 'messenger' – gravitational waves, or ripples in space-time – was announced in February 2016. France Cordova, director of the US National Science Foundation (NSF), which manages the IceCube laboratory, said: 'The era of multi-messenger astrophysics is here.
'Each messenger, from electromagnetic radiation, gravitational waves and now neutrinos, gives us a more complete understanding of the universe and important new insights into the most powerful objects and events in the sky.' Cosmic rays were discovered in 1912 by physicist Victor Hess using instruments on a balloon flight. Later research showed them to be made up of protons, electrons or atomic nuclei accelerated to speeds approaching that of light.
HOW DOES THE ICECUBE WORK?
IceCube is the world's most sensitive neutrino telescope: a detector composed of 5,160 optical modules embedded in a gigaton of crystal-clear ice a mile beneath the geographic South Pole. Supported by the National Science Foundation, IceCube is capable of capturing the fleeting signatures of high-energy neutrinos — nearly massless particles generated, presumably, by dense, violent objects such as supermassive black holes, galaxy clusters, and the energetic cores of star-forming galaxies. The size of the observatory - a cubic kilometre of ice - is important because it increases the number of potential collisions that can be observed. In addition, the type of ice at the South Pole is perfect for detecting the rare collisions. Most ice contains air bubbles and other pockets that would distort measurements. But at the South Pole, the site is basically a giant glacier consisting almost entirely of water ice, meaning there are more atoms and so more chance of a neutrino collision. Each of the round detectors is placed on a long string and lowered into holes in the ice that were drilled using a powerful hot-water drill that melted up to 200,000 gallons of ice per hole. The final module, signed by all the team, is readied for deployment. Each cable string has 60 sensors at depth, with 86 strings making up the main IceCube detector. The giant telescope was built at depths of up to 8,000 feet beneath the Antarctic plateau at the South Pole. The entire project cost $279 million, of which the National Science Foundation contributed $242 million. The final stretch of construction ended with the drilling of the last of 86 holes for the 5,160 optical sensors that now form the main detector. The collision between a neutrino and an atom produces particles known as 'muons' in a flash of blue light called 'Cherenkov radiation'. In the ultra-transparent Antarctic ice, IceCube's optical sensors detect this blue light. The trail left in the wake of the subatomic collision allows scientists to trace the direction of the incoming neutrino back to its point of origin, be it a black hole or a colliding galaxy.
<urn:uuid:4e846470-9b3e-4de9-aace-ba9d2686e0d7>
2.90625
2,175
News Article
Science & Tech.
28.283962
95,568,050
Those are some of the conclusions contained in the Midwest chapter of a draft report released last week by the federal government that assesses the key impacts of climate change on every region in the country and analyzes its likely effects on human health, water, energy, transportation, agriculture, forests, ecosystems and biodiversity. Three University of Michigan researchers were lead convening authors of chapters in the 1,100-plus-page National Climate Assessment, which was written by a team of more than 240 scientists. University of Michigan aquatic ecologist Donald Scavia was a lead convening author of the Midwest chapter. Dan Brown of the School of Natural Resources and Environment was a lead convening author of the chapter on changes in land use and land cover. Rosina Bierbaum of SNRE and the School of Public Health was a lead convening author of the chapter on climate change adaptation. Missy Stults, a research assistant with Bierbaum and a doctoral student at the A. Alfred Taubman College of Architecture and Urban Planning, was a contributing author on the adaptation chapter. In addition, Bierbaum and Marie O'Neill of the School of Public Health serve on the 60-person advisory committee that oversaw development of the draft report, which is the third federal climate assessment report since 2000. The report stresses that climate change is already affecting Americans, that many of its impacts are expected to intensify in coming decades, and that the changes are primarily driven by human activity. "Climate change impacts in the Midwest are expected to be as diverse as the landscape itself. Impacts are already being felt in the forests, in agriculture, in the Great Lakes and in our urban centers," said Scavia, director of the Graham Sustainability Institute and special counsel to the U-M president on sustainability issues. In the Midwest, extreme rainfall events and floods have become more common over the last century, and those trends are expected to continue, causing erosion, declining water quality and negative impacts on transportation, agriculture, human health and infrastructure, according to the report. Climate change will likely worsen a host of existing problems in the Great Lakes, including changes in the range and distribution of important commercial and recreational fish species, increases in invasive species, declining beach health, and more frequent harmful algae blooms. However, declines in ice cover on the Great Lakes may lengthen the commercial shipping season. In agriculture, longer growing seasons and rising carbon dioxide levels are likely to increase the yields of some Midwest crops over the next few decades, according to the report, though those gains will be increasingly offset by the more frequent occurrence of heat waves, droughts and floods. In the long term, combined stresses associated with climate change are expected to decrease agricultural productivity in the Midwest. The composition of the region's forests is expected to change as rising temperatures drive habitats for many tree species northward. Many iconic tree species such as paper birch, quaking aspen, balsam fir and black spruce are projected to shift out of the United States into Canada. The rate of warming in the Midwest has accelerated over the past few decades, according to the report. Between 1900 and 2010, the average Midwest air temperature increased by more than 1 degree Fahrenheit. 
However, between 1950 and 2010, the average temperature increased twice as quickly, and between 1980 and 2010 it increased three times as quickly. The warming has been more rapid at night and during the winter. The trends are consistent with the projected effects of increased concentrations of heat-trapping greenhouse gases, such as carbon dioxide released by the burning of fossil fuels. Projections for regionally averaged temperature increases by the middle of the century, relative to 1979-2000, are approximately 3.8 degrees Fahrenheit for a scenario with substantial emissions reductions and 4.9 degrees for the current high-emissions scenario. Projections for the end of the century in the Midwest are about 5.6 degrees for the low-emissions scenario and 8.5 degrees for the high-emissions scenario, according to the report. The draft National Climate Assessment report is available at http://ncadac.globalchange.gov. A summary of associated technical input papers is available at www.glisa.umich.edu. Public comment on the draft report will be accepted through April 12. Jim Erickson | Newswise
<urn:uuid:8599d51a-3089-4b62-9215-2b5b2bc36765>
3.015625
1,450
Content Listing
Science & Tech.
36.568418
95,568,060
In a cosmic first, scientists detect 'ghost particles' from a distant galaxy - Washington Post
In a cosmic first, scientists detect 'ghost particles' from a distant galaxy
When the sun was young and faint and the Earth was barely formed, a gigantic black hole in a distant, brilliant galaxy spat out a powerful jet of radiation. That jet contained neutrinos — subatomic particles so tiny and difficult to detect they are ...
'Ghost particle' found in Antarctica provides astronomy breakthrough
Astronomers Follow Ghost Particles Down the Barrel of a Black Hole
After years of searching, scientists finally trace high-energy neutrinos to a distant blazar
Published on 12 Jul 2018 at 03:01PM
<urn:uuid:cea94e1b-232d-44e7-9b63-85091800c595>
3.1875
145
Content Listing
Science & Tech.
36.906588
95,568,068
Advancing Basic Science for Humanity
Nanoscale structures based on ones found in nature make eye implants more effective and last longer.
In a project funded by DARPA, researchers from NIST, UCSB and Caltech have made a remarkable breakthrough in optical frequency synthesizers.
Findings may be relevant to other disorders, from autism to PTSD.
Worn during sleep, the lens interrupts the process that destroys cells of the retina.
A team of researchers has found a relationship between quantum physics and thermodynamics.
Paul Asimow and his research group examine the properties of exotic solids known as quasicrystals via high-impact shock recovery experiments.
Cornell researchers have become the first to control atomically thin magnets with an electric field.
A Yale research team has identified a gene that, when eliminated, can spur regeneration of axons in nerve cells severed by spinal cord injury.
Satellite developed by MIT aims to discover thousands of nearby exoplanets, including at least 50 Earth-sized ones.
Black holes in these environments could combine repeatedly to form objects bigger than anything a single star could produce.
<urn:uuid:a0a28580-f44e-4c80-bec9-8eae88863a37>
2.828125
238
Content Listing
Science & Tech.
23.02587
95,568,069
Authors: George Rajna
The ATLAS experiment reported a preliminary result establishing the observation of the Higgs boson decaying into pairs of b quarks, at a rate consistent with the Standard Model prediction. Usha Mallik and her team used a grant from the U.S. Department of Energy to help build a sub-detector at the Large Hadron Collider, the world's largest and most powerful particle accelerator, located in Switzerland. They're running experiments on the sub-detector to search for a pair of bottom quarks—subatomic yin-and-yang particles that should be produced about 60 percent of the time a Higgs boson decays. A new way of measuring how the Higgs boson couples to other fundamental particles has been proposed by physicists in France, Israel and the US. Their technique would involve comparing the spectra of several different isotopes of the same atom to see how the Higgs force between the atom's electrons and its nucleus affects the atomic energy levels. The magnetic induction creates a negative electric field, causing an electromagnetic inertia responsible for the relativistic mass change; this is the mysterious Higgs Field giving mass to the particles. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass ratio by the diffraction patterns. The accelerating charges explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, wave-particle duality and the electron's spin, building a bridge between the Classical and Relativistic Quantum Theories. The self-maintained electric potential of the accelerating charges is equivalent to the General Relativity space-time curvature, and since this holds on the quantum level as well, it provides the basis of Quantum Gravity. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the relativistic quantum theory.
<urn:uuid:8b18051a-f19a-49ce-b009-4a81f8bd8972>
2.8125
568
Knowledge Article
Science & Tech.
31.799324
95,568,070
Open Access: This article is freely available.
Sustainability 2018, 10(6), 1908; doi:10.3390/su10061908
Prevention of Catastrophic Volcanic Eruptions, Large Earthquakes underneath Big Cities, and Giant Earthquakes at Subduction Zones
Division of Sustainable Resources Engineering, Hokkaido University, Sapporo 060-8628, Japan
Department of Geotechnical Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Department of Earth Resources Engineering, University of Moratuwa, Moratuwa 10400, Sri Lanka
Correspondence: email@example.com; Tel.: +81-11-706-6299
Received: 13 May 2018 / Accepted: 4 June 2018 / Published: 7 June 2018
Catastrophic volcanic eruptions, large earthquakes beneath big cities, and giant earthquakes at subduction zones are apparently the biggest problems facing the sustainability of human society. However, imminent prediction methods for these events have never been established, except that volcanic eruptions can occasionally be predicted through exceptional efforts by dedicated researchers. Even if a prediction method were established, it could not significantly reduce infrastructure damage, although it could slightly reduce the number of fatalities. On the other hand, prevention of eruptions or earthquakes could significantly reduce not only the number of fatalities but also infrastructure damage. Therefore, the authors propose (1) gradual energy release by supercritical power generation to prevent catastrophic eruptions; (2) gradual seismic energy release by injecting water into seismic sources to prevent large earthquakes beneath big cities; and (3) exploding existing nuclear warheads underground to prevent giant earthquakes at subduction zones. Necessary technical developments, costs, risks, and problems will also be explained.
Keywords: prevention; catastrophic volcanic eruptions; large earthquakes underneath big cities; giant earthquakes at subduction zones
Catastrophic volcanic eruptions, large earthquakes beneath big cities, and giant earthquakes at subduction zones are apparently the biggest problems facing the sustainability of human society. However, imminent prediction methods for these events have not been established so far, except that volcanic eruptions can be predicted only through exceptional efforts by dedicated researchers. For example, the eruption of Mt. Usu, Japan, in 2000 [1,2] was predicted by Professor Hiromu Okada at Hokkaido University. Even if a prediction method were established, it may not significantly reduce infrastructure damage. Deliberate management of natural disaster risk based on precise predictions may contribute to further decreases in economic losses, but predictions can only slightly reduce the number of fatalities. On the other hand, prevention of eruptions or earthquakes, which has never been attempted, could, if developed, significantly reduce not only the number of fatalities but also infrastructure damage. This paper considers how to prevent these devastating disasters and tries to clarify the necessary technical developments, costs, risks, and problems. Euros and Japanese yen are converted at rates of 1 EUR = 1.2 USD and 1 JPY = 0.0091 USD where necessary.

2. Prevention of Catastrophic Volcanic Eruptions

The Yellowstone supervolcano, for example, erupted ca. 2.1 Ma (million years ago) with 2,450 km³ of ejecta, and at 1.3 Ma and 0.63 Ma with 1,000 km³ of ejecta. The log-normal distribution is a distribution function that can be used to represent eruption intervals.
The data are scarce, but assuming a log-normal distribution, the average interval and its standard deviation are calculated as 10^(5.856±0.0544) years. Let us assume that the next catastrophic eruption occurs 10^(5.856±0.0544) years after the previous one and calculate the probability of an eruption from the present onward. The cumulative distribution Φ at x for the log-normal distribution with average value μ and standard deviation σ can be represented as

Φ(x) = (1/2)[1 + erf((log₁₀ x − μ)/(σ√2))]  (1)

where erf is the error function (Figure 1). The probability p at t₁ that a catastrophic eruption occurs before t₂ can be calculated as

p = [Φ(t₂) − Φ(t₁)] / [1 − Φ(t₁)]  (2)

The probability of an imminent eruption is not zero, and it reaches almost 100% within several hundred thousand years (Figure 2). If Yellowstone erupts, volcanic ash will be distributed over a vast area, and 90% of people within 1,000 km of the volcano will die of suffocation. In addition, the air temperature in the Northern Hemisphere will drop by 10 K for several years to decades due to sulfate aerosols from the volcano, which will obstruct sunlight from reaching the ground surface. No crops could be raised and most people would starve to death. In short, people in the Northern Hemisphere will certainly be made extinct within several hundred thousand years if we do nothing. Civilization may survive, because people in the Southern Hemisphere could endure and move to the Northern Hemisphere. However, let us propose to prevent such catastrophic eruptions by gradually relieving the eruption energy using a supercritical geothermal system [8,9]. A supercritical geothermal system places drill holes in the vicinity of a magma chamber; heat is exchanged at the borehole end and power is generated at the surface. The problems in this system are corrosion of drilling rods and the durability of drill bits due to the high temperature and low pH near magma chambers. These matters can be solved using a silicon carbide composite and an electro-pulse drilling technique. The eruption energy of Krakatoa in 1883 was estimated to be 200 MT (megatons of TNT (trinitrotoluene) equivalent), namely 840 × 10^15 J. The volcanic explosivity index (VEI) is defined as

VEI = log₁₀ VE − 4  (3)

where VE is the volume of the ejecta in m³. The VEI of Krakatoa (1883) is estimated to be 6. Assuming that the eruption energy is proportional to ejecta volume, the VEI can be converted to eruption energy ER (J) by

ER = 840 × 10^15 × 10^(VEI − 6)  (4)

The VEI values of the Yellowstone eruptions of 2.1 Ma, 1.3 Ma, and 0.63 Ma are estimated to be 8.4, unknown, and 8.0, respectively. Dividing the eruption energy by the average interval of 0.735 My (million years), the average power can be calculated as 8.9 MW and 3.6 MW for the first and third known eruptions, respectively. Adopting a safe-side estimate, it can be said that only 10 MW of supercritical power generation is enough to prevent catastrophic eruptions of Yellowstone. The output of a moderate-scale nuclear power plant is 1 GW, and several GW are expected for supercritical power plants. A 10 MW plant is much smaller than these, and no serious technical problems related to output scale are expected. The most cost-consuming part of a geothermal power plant is the drilling. For example, the magma chamber of the Yellowstone supervolcano is ca. 8 km deep, and drilling costs are expected to be high. However, the cost of electro-pulse drilling is 800,000 EUR, based on 100 EUR/m. Assuming that a drill hole can be used for 30 years, the drilling cost is just 0.0041 EUR/kWh (0.0049 USD/kWh).
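A short Python sketch of Equations (1)-(4) may help readers reproduce these numbers; the elapsed time of 630,000 years and the 100,000-year window are illustrative inputs of ours, not values quoted in the text.

```python
import math

MU, SIGMA = 5.856, 0.0544   # log10(interval in years), from the text

def lognormal_cdf(x):
    """Equation (1): P(interval <= x years) for a base-10 log-normal distribution."""
    return 0.5 * (1.0 + math.erf((math.log10(x) - MU) / (SIGMA * math.sqrt(2.0))))

def eruption_probability(t1, t2):
    """Equation (2): P(eruption before t2 | quiet through t1)."""
    c1 = lognormal_cdf(t1)
    return (lognormal_cdf(t2) - c1) / (1.0 - c1)

def eruption_energy(vei):
    """Equation (4): energy in joules, scaled from Krakatoa (VEI 6, 840e15 J)."""
    return 840e15 * 10.0 ** (vei - 6.0)

# Probability of a catastrophic eruption within the next 100,000 years,
# given ~630,000 quiet years since the last one (illustrative inputs):
print(eruption_probability(630_000, 730_000))

# Average power of the 2.1 Ma eruption (VEI 8.4) over the 0.735 My interval:
interval_s = 0.735e6 * 365.25 * 24 * 3600
print(eruption_energy(8.4) / interval_s / 1e6, "MW")  # ~9 MW, cf. 8.9 MW in the text
```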
This drilling cost is cheap enough that the power plant can be profitable even after other costs are considered, so there would be no problems with respect to cost. On the other hand, humans in the Northern Hemisphere would be made extinct if a catastrophic eruption occurred at Yellowstone. The economic loss would be almost equal to the world GDP (Gross Domestic Product, 74 trillion USD in 2015) in the eruption year. Assuming that the world economy gradually recovered to the previous level over the following 100 years, the total economic loss would be 74 × 100/2 = 3,700 trillion USD. Dividing the total economic loss by the average eruption interval, the annual economic loss would be ca. 5 billion USD/y. Drilling unexpectedly encountered magma in the Iceland Deep Drilling Project, and supercritical power generation was carried out without any hazards. However, there would still be a risk that such drilling stimulates the volcano and induces an unpredicted catastrophic eruption. This issue should, of course, be deliberately investigated before the method is adopted in practice.

3. Prevention of Large Earthquakes beneath Big Cities

Twenty-two M ≥ 6 earthquakes, excluding aftershocks, occurred beneath big cities in Japan between 2000 and 2016, inducing severe human and property damage. In Kumamoto in 2016 (M6.5), for example, 110 people died, 184,643 buildings, including Kumamoto Castle, collapsed, and the economic losses were 2.4–4.6 trillion JPY (22–42 billion USD). In Tottori in 2016 (M6.6), as another example, 30 people died, 14,748 buildings collapsed, and the agricultural damage was 1.6 billion JPY (15 million USD). Gradual release of seismic energy by injecting water through a drill hole into the seismic fault is proposed to prevent such large earthquakes. An M6 occurs at an average interval of 100 years in Kumamoto. One thousand M4 earthquakes (an M4 causes almost no damage in Kumamoto) in 100 years would release the seismic energy of an M6, based on the following relationship between seismic energy ES (J) and magnitude M:

log₁₀ ES = 4.8 + 1.5M  (5)

Namely, 10 M4 events per year, or roughly one M4 per month, would prevent M6 earthquakes. Fujii et al. (2014) evaluated the relationship between the amount of injected water V (m³) and the maximum magnitude Mmax of the induced seismicity, based on cases of enhanced geothermal systems (EGS), water injection into seismic faults, and so on (Equation (6)). According to this relationship, 9.4 × 10⁵ m³ of water should be injected to induce an M4. This amount of water can be injected, for example, over a week at 1.5 m³/s. The scheduled injections would be carried out very carefully under dense microseismic and seismic monitoring to ensure safety. The focal depths of typical large earthquakes under big cities are 23 km for Tokyo (1923), 24 km for Nankai (1946), 16 km for Hyogo (1995), 12 km for Kumamoto (2016), 11 km for Tottori (2016), and so forth. On the other hand, even the world's deepest drillings, such as 12.3 km at the Kola Superdeep Borehole (2011) or 9.1 km at the KTB (German Continental Deep Drilling Program (in German: Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland), 1994), are not deep enough. The problems are the same as for the supercritical geothermal system, that is, high temperatures and corrosion, and would be solved by the silicon carbide composite and electro-pulse drilling. The cost of the 9.4 × 10⁵ m³ of water is 136 million JPY/year, based on the water price for the public baths in Sapporo, Japan. The cost for 100 years is 13.6 billion JPY.
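A minimal sketch of the energy bookkeeping in Equation (5) and of the injection schedule described above; the 1.5 m³/s rate and the 9.4 × 10⁵ m³ volume are the figures quoted in the text.

```python
def seismic_energy(m):
    """Equation (5): Es in joules, log10(Es) = 4.8 + 1.5 * M."""
    return 10.0 ** (4.8 + 1.5 * m)

# How many M4 events release the energy of one M6?
print(seismic_energy(6.0) / seismic_energy(4.0))  # 1000.0

# Duration of one scheduled injection: 9.4e5 m^3 of water at 1.5 m^3/s.
days = 9.4e5 / (1.5 * 86400)
print(f"{days:.1f} days per M4-inducing injection")  # about a week
```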
The cost of 12 km of drilling would be 56 billion JPY, based on the total cost of 42 billion JPY for the KTB. Assuming that the drill hole can be used for 100 years, the total cost becomes ca. 70 billion JPY (640 million USD). This is already much cheaper than the economic damage of 2.4–4.6 trillion JPY (22–42 billion USD). Moreover, gray water can be prepared at a much lower price, and the drilling costs would be as low as 1.2 million EUR (1.4 million USD) if electro-pulse drilling were adopted. Equation (6) was obtained from various in-situ test results. However, there is still a possibility that an M6 is unexpectedly induced while water is being injected to induce an M4. This risk can be minimized by gradual injection under careful monitoring; the injection can be cancelled when the released energy exceeds that of an M4. Further study will also be required to determine the best location for injection on the seismic fault to induce small earthquakes.

4. Prevention of Giant Earthquakes at Subduction Zones

A Russian study claimed that no M ≥ 8.3 earthquakes occurred during the period in which underground nuclear tests were frequently carried out. Unfortunately, the details of the study are not known because we did not succeed in accessing the original article. However, lists of underground nuclear tests and giant earthquakes can be obtained from References [21,22], respectively. We examined the relationships between them ourselves, expanding the lower limit to M ≥ 8, and found that few giant earthquakes occurred during the period in which underground nuclear tests were frequently carried out (Figure 3). The probability of the null hypothesis, assuming no Granger causality from the annual yield or number of underground nuclear tests to the annual seismic energy or number of giant earthquakes, was calculated with the statistical package R. Granger causality does not mean causality in the usual sense. For example, there is Granger causality from time series A to time series B if B can be predicted better using A than from the autocorrelation of B alone. If the probability is less than 0.05, the null hypothesis is rejected and it can be said statistically that the necessary condition for the existence of Granger causality from underground nuclear explosions to the occurrence of giant earthquakes is satisfied. The probability values were greater than 0.05, which means that the null hypothesis was not rejected, although some of the values were not large (Table 1). Therefore, it can be said statistically that no Granger causality was found from underground nuclear tests to occurrences of giant earthquakes. However, it is apparent that seismicity was restrained when the annual yield of underground nuclear explosions was more than 1 MT/year (Figure 4). The mechanism of this restraint should be investigated further in the future, but it could be that small earthquakes induced by vibrations from underground nuclear explosions relieved the strain energy available for giant earthquakes. Manga et al. (2012) explained that one cause of the induction of smaller earthquakes in far fields by large earthquakes would be the change in permeability due to transient stress disturbances. Electromagnetic waves from nuclear explosions could be another mechanism. However, electromagnetic waves were also emitted by the atmospheric nuclear tests, and the absence of giant earthquakes after the period of atmospheric nuclear tests cannot be explained.
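The Granger test itself is straightforward to reproduce. The paper used R; the sketch below uses Python's statsmodels instead, with randomly generated placeholder series standing in for the annual yield and seismic-energy data (the real series would come from the catalogs in References [21,22]).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
years = 117  # annual data, 1900-2016

# Placeholder series; substitute the real annual data before drawing conclusions.
log_seismic_energy = rng.normal(loc=16.0, scale=1.0, size=years)  # effect candidate
nuclear_yield_mt = rng.exponential(scale=0.5, size=years)          # cause candidate

# Column 0 is the series being predicted; column 1 is the candidate cause.
data = np.column_stack([log_seismic_energy, nuclear_yield_mt])
results = grangercausalitytests(data, maxlag=3)

# The null hypothesis ("no Granger causality from yield to seismic energy")
# is rejected only if the reported p-values fall below 0.05.
```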
Comparing the locations of nuclear tests and giant earthquakes from 1900 to 2016, nuclear tests and giant earthquakes are found in the same area only for Alaska. It is remarkable that five giant earthquakes occurred during the 70 years before the nuclear tests in Alaska, but only a rather small one occurred during the 50 years after the tests (Figure 5). This also implies the prevention of giant earthquakes by underground nuclear explosions. Let us assume that the underground nuclear explosions prevented the occurrence of giant earthquakes, and propose exploding the existing nuclear warheads of the United States, Russia, the United Kingdom, and so forth, underground to prevent giant earthquakes. The total yield of the existing 25,900 warheads is 7,000 MT. The minimum yield to prevent a giant earthquake is 1 MT/y, as stated before; however, let us assume that 2 MT/y is used to ensure the reduction of giant earthquake occurrences. The existing nuclear warheads would therefore suffice for 3,500 years, although, considering their deterioration, they may only be usable for 100 years. The cost of the recent underground nuclear test by North Korea was estimated by South Korea to be ca. 5 million USD/test. The cost of the proposed method, if the 2 MT/y is divided into 10 tests, can be very roughly estimated as 50 million USD/y. On the other hand, the property damage by the giant earthquakes listed in Wikipedia between 1906 and 2012 includes 235 billion USD for Tohoku, Japan, in 2011, 86 billion USD for Sichuan, China, in 2008, and 15–30 billion USD for Chile in 2010. This means that the total property damage was at least 336 billion USD in 107 years, or ca. 3 billion USD/y. The cost of prevention is thus less than 1/60 of the earthquake damage. The relationship between underground nuclear tests and giant earthquakes was obtained from actual observations. However, the biggest possible risk of the proposed method would be the unexpected induction of giant earthquakes instead of their prevention. Deliberate investigation should, of course, be made before the method is put into practice. The biggest problem with this method would be obtaining social consensus.

5. Concluding Remarks

Catastrophic volcanic eruptions, large earthquakes beneath big cities, and giant earthquakes at subduction zones are apparently the biggest problems facing the sustainability of human society. However, imminent prediction methods for them have never been established, except that volcanic eruptions can occasionally be predicted through exceptional efforts by dedicated researchers. Even if a prediction method were established, it could not significantly reduce infrastructure damage, although it could slightly reduce the number of fatalities. On the other hand, prevention of eruptions or earthquakes could significantly reduce not only the number of fatalities but also infrastructure damage. Therefore, the authors propose preventing these devastating disasters as follows. Gradual energy release by supercritical power generation was proposed to prevent catastrophic eruptions. The necessary technical innovation is drilling to great depths; after this innovation, the power generation itself would be profitable. The risk is the unpredicted induction of unwanted catastrophic eruptions. Gradual seismic energy release by injecting water into seismic sources was proposed to prevent large earthquakes beneath big cities. The necessary technical innovation is the same as above.
After the innovation, the costs would be much less than the average damage from earthquakes. The risk is the unpredicted induction of unwanted large earthquakes. Prevention of giant earthquakes at subduction zones by exploding the existing nuclear warheads underground was also proposed. The cost is less than 1/60 of the average giant-earthquake damage. The risk is the possible unexpected induction of giant earthquakes, and the biggest problem would be obtaining social consensus. However, it is worth considering this method further, because it would contribute significantly to world peace, not only by preventing giant earthquakes but also by disarming nuclear weapons. The authors admit that the cost and risk estimates in this study are very rough at this stage; more precise estimates should be made in further studies.
Conceptualization: Y.F. and M.S.; writing and original draft preparation: Y.F. and A.B.D.; writing, review, and editing: J.-i.K. and D.F. This research received no external funding. The authors thank Masato Yamada, an undergraduate student at Hokkaido University, for collecting data.
Conflicts of Interest: The authors declare no conflict of interest.
- Jones, T.E. Evolving approaches to volcanic tourism crisis management: An investigation of long-term recovery models at Toya-Usu Geopark. J. Hosp. Tour. Manag. 2016, 28, 31–40. [Google Scholar] [CrossRef]
- Yamagishi, H.; Watanabe, T.; Yamazaki, F. Sequence of faulting and deformation during the 2000 eruptions of the Usu volcano, Hokkaido, Japan–Interpretation and image analyses of aerial photographs. Geomorphology 2004, 57, 353–365. [Google Scholar] [CrossRef]
- Kesete, Y.; Peng, J.; Gao, Y.; Shan, X.; Davidson, R.A.; Nozick, L.K.; Kruse, J. Modeling insurer-homeowner interactions in managing natural disaster risk. Risk Anal. 2014, 34, 1040–1055. [Google Scholar] [CrossRef] [PubMed]
- Yellowstone Caldera, Wikipedia. Available online: https://en.wikipedia.org/wiki/Yellowstone_Caldera (accessed on 13 May 2018).
- Udagawa, S.; Imanaka, H.; Koyaguchi, T.; Takayasu, H. Statistical analysis of occurrence of eruptions at Sakurajima volcano. Proc. Volcanol. Soc. Jpn. 1999, 32. (In Japanese) [Google Scholar]
- Mastin, L.G.; Eaton, A.R.; Lowenstern, J.B. Modeling ash fall distribution from a Yellowstone supereruption. Geochem. Geophys. Geosyst. 2014, 15, 3459–3475. [Google Scholar] [CrossRef]
- The Epoch Times. Available online: http://www.epochtimes.jp/jp/2010/08/html/d23986.html (accessed on 13 May 2018). (In Japanese)
- Elders, W.A.; Friðleifsson, G.Ó.; Albertsson, A. Drilling into magma and the implications of the Iceland Deep Drilling Project (IDDP) for high-temperature geothermal systems worldwide. Geothermics 2014, 49, 111–118. [Google Scholar] [CrossRef]
- Watanabe, N.; Numakura, T.; Sakaguchi, K.; Saishu, H.; Okamoto, A.; Ingerbritsen, S.E.; Tsuchiya, N. Potentially exploitable supercritical geothermal resources in the ductile crust. Nat. Geosci. 2017, 10, 140–144. [Google Scholar] [CrossRef]
- Nakazato, N.; Kohyama, A.; Kohno, Y. Effects of pressure during preform densification on SiC/SiC composites. Open J. Non-Metall. Mater. 2013, 3, 10–13. [Google Scholar] [CrossRef]
- Schiegg, H.O.; Rodland, A.; Zhu, G.; Yuen, D.A. Electro-Pulse-Boring (EPB): Novel super-deep drilling technology for low cost electricity. J. Earth Sci. 2015, 26, 37–46. [Google Scholar] [CrossRef]
- Alexander, R.M. Dynamics of Dinosaurs and Other Extinct Giants; Columbia University Press: New York, NY, USA, 1989.
[Google Scholar]
- Glossary—VEI, Volcanic Hazards Program, USGS (United States Geological Survey) Web Site. Available online: https://volcanoes.usgs.gov/vsc/glossary/vei.html (accessed on 29 May 2018).
- Breining, G. The Deadliest Volcanoes. In Super Volcano: The Ticking Time Bomb Beneath Yellowstone National Park; Voyageur Press: Minneapolis, MN, USA, 2007. [Google Scholar]
- Japan Meteorological Agency. Available online: http://www.data.jma.go.jp/svd/eqdb/data/shindo/index.php (accessed on 13 May 2018). (In Japanese)
- Cabinet Office, Government of Japan. Available online: http://www.bousai.go.jp/updates/h280414jishin/pdf/h280414jishin_35.pdf (accessed on 13 May 2018). (In Japanese)
- Cabinet Office, Government of Japan. Available online: http://www.bousai.go.jp/updates/h281021jishin/pdf/h281021jishin_09.pdf (accessed on 13 May 2018). (In Japanese)
- Fujii, Y.; Takahashi, K.; Fukuda, D.; Kodama, J. Shale gas extraction and CCS may induce serious seismicity. In Proceedings of the Workshop on Rock Engineering and Environment at ARMS8 (2014 ISRM International Symposium—8th Asian Rock Mechanics Symposium), Sapporo, Japan, 14–16 October 2014; pp. 41–46. [Google Scholar]
- The Japanese Assoc. Petr. Tech. Available online: https://www.japt.org/html/iinkai/drilling/seikabutu/fukaboriiin/fukabori.html (accessed on 13 May 2018). (In Japanese)
- Sputnik. Available online: https://jp.sputniknews.com/science/20150725634763/ (accessed on 13 May 2018). (In Japanese)
- Nuclear Tests—Databases and Other Material, Johnston's Archive. Available online: http://www.johnstonsarchive.net/nuclear/tests/ (accessed on 13 May 2018).
- Search Earthquake Catalog, Earthquake Hazards Program, U.S. Geological Survey. Available online: http://earthquake.usgs.gov/earthquakes/search/ (accessed on 13 May 2018).
- The R Project for Statistical Computing. Available online: https://www.r-project.org/ (accessed on 13 May 2018).
- Manga, M.; Beresnev, I.; Brodsky, E.E.; Elkhoury, J.E.; Elsworth, D.; Ingebritsen, S.E.; Mays, D.C.; Wang, C.-Y. Changes in permeability caused by transient stresses: Field observations, experiments, and mechanisms. Rev. Geophys. 2012, 50, RG2004. [Google Scholar] [CrossRef]
- Sheshpari, M. Magnetosphere anomaly, hexagonal crystal resonance and created consequences for triggering earthquakes. Electron. J. Geotech. Eng. 2016, 21, 4301–4304. [Google Scholar]
- Sankei News. Available online: http://www.sankei.com/world/news/161005/wor1610050062-n1.html (accessed on 13 May 2018). (In Japanese)
- Lists of Earthquakes, Wikipedia. Available online: https://en.wikipedia.org/wiki/Lists_of_earthquakes (accessed on 13 May 2018).
Figure 1. Cumulative distribution for the normal distribution with an average of 0 and standard deviation of σ.
Figure 2. Probability of a catastrophic eruption at the Yellowstone supervolcano.
Figure 3. Magnitude of giant earthquakes and annual yield of underground nuclear tests from 1900 to 2016.
Figure 4. Relationship between annual nuclear yield and annual seismic energy from 1900 to 2016.
Figure 5. Annual yield of underground nuclear explosions and magnitude of giant earthquakes in Alaska.
Table 1. Probability of the null hypothesis.
| Underground Nuclear Explosions | Giant Earthquakes |
| Annual Number | Annual Seismic Energy |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
<urn:uuid:89cc2e88-48ae-4922-a46a-838ffecfebaf>
3.140625
5,641
Academic Writing
Science & Tech.
51.228154
95,568,072
The Nature of Ecology
Ecology is the study of the connections among organisms and their living and nonliving environment. The cell is the basic unit of life in organisms. An organism is either prokaryotic or eukaryotic. Organisms may reproduce by asexual reproduction or sexual reproduction. Organisms that reproduce sexually are classified as members of the same species if they can interbreed. Members of a species that reside in the same area at the same time constitute a population. A population normally lives in a particular habitat and shows genetic diversity. Populations of many species make up a community. An ecosystem is a community and its nonliving environment.
4-2 The Earth's Life-Support Systems
The lower portion of the earth's atmosphere is the troposphere. The next layer is the stratosphere. The portions of the earth's atmosphere, hydrosphere, and lithosphere in which living organisms exist constitute the biosphere. Life on the earth is sustained by three interconnected factors: the one-way flow of energy from the sun through the biosphere and back into space, the cycling of matter that living organisms need as nutrients for their survival, and gravity.
4-3 Ecosystem Concepts and Components
Biologists have classified the terrestrial portion of the biosphere into biomes. Each has a distinct climate and specific life forms. Marine and freshwater portions of the biosphere are divided into aquatic life zones. Abiotic components of an ecosystem are physical and chemical factors that influence living organisms. Each population has a range of tolerance to various abiotic factors, and its tolerance limits determine its abundance and distribution. The number of organisms in a population can be affected by a single limiting factor. Most producers capture sunlight energy and make carbohydrates by way of photosynthesis. Some producers carry out chemosynthesis. All other organisms in an ecosystem are consumers, or heterotrophs. Most organisms release energy by aerobic respiration, which requires oxygen. Some get energy by anaerobic respiration. Biological diversity is an important renewable resource.
4-4 Connections: Food Webs and Energy Flow in Ecosystems
Organisms get the food and nutrients they need by participating in a food chain. Various food chains can link together to form a food web. Each organism in an ecosystem can be assigned to a trophic level in its food chain or food web. Each trophic level contains a certain amount of biomass. The transfer of energy between these levels has a certain ecological efficiency (a numerical sketch follows section 4-5 below). Food chains can be represented as a pyramid of energy flow. Biomass storage in various trophic levels of a food chain or web can be represented by a pyramid of biomass. The number of organisms at each trophic level in a food chain or web can be represented by a pyramid of numbers.
4-5 Primary Productivity of Ecosystems
Gross primary productivity is the rate at which producers use photosynthesis to make more biomass. It varies across the earth. Net primary productivity (gross primary productivity minus the energy producers spend on their own respiration) affects the number of consumers in an ecosystem. The planet's net primary productivity (NPP) ultimately limits the number of consumer organisms (including humans) that can survive on the earth. Humans now use, waste, or destroy about 27 percent of the earth's total NPP and 40 percent of the NPP of the planet's terrestrial ecosystems. This share is expected to increase and thus threaten the habitats and food supplies of other species.
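To illustrate the pyramid of energy flow described in section 4-4, the sketch below assumes the classic rule of thumb of roughly 10 percent ecological efficiency per trophic transfer (an illustrative assumption -- this summary gives no figure, and real efficiencies vary widely):

def energy_pyramid(producer_kcal, levels, efficiency=0.10):
    # Energy reaching each trophic level if only a fixed fraction
    # survives each transfer (the assumed ~10% rule of thumb).
    energy = producer_kcal
    for level in range(1, levels + 1):
        yield level, energy
        energy *= efficiency

for level, kcal in energy_pyramid(10_000, 4):
    print(f"Trophic level {level}: {kcal:,.0f} kcal")
# Prints 10,000 / 1,000 / 100 / 10 kcal -- which is why food chains
# are short and top consumers are few.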
4-6 Connections: Matter Cycling in Ecosystems
Nutrients, atoms, ions, and molecules are continuously cycled in nutrient cycles, or biogeochemical cycles. The hydrologic cycle collects, purifies, and distributes the earth's water. Other examples are the carbon cycle, the nitrogen cycle, the phosphorus cycle, and the sulfur cycle. Human activities are altering these cycles.
4-7 How Do Ecologists Learn about Ecosystems?
Ecologists use field research and laboratory research to gather data. They use systems analysis to...
<urn:uuid:b13cce42-2112-4c66-819a-d8b03b03138d>
3.875
834
Truncated
Science & Tech.
32.342233
95,568,076
Authors: S.Y. Kim, S.J. Jun, J. Min
Affiliation: KyungWon University, Korea
Pages: 28 - 31
Keywords: Silica nanotube, quantum dots, gold nanoparticle, beacon, DNA sensor
A DNA sensor using silica nanotubes (SNTs) was developed to detect bacteria via barcoded SNTs. The SNTs were fabricated in anodic aluminum oxide (AAO) templates by the surface sol-gel method. To make each DNA sensor identifiable, quantum dots of different colors, in different orders, were embedded in each SNT (QD SNT); this barcoding makes multiplex detection possible. The length and morphology of the completed QD SNTs were examined by TEM and confocal microscopy. The QD SNTs were then conjugated with beacons -- nucleotides with a hairpin structure that serve as both capture probe and signal probe. Colloidal gold nanoparticles were used to link each QD SNT to its beacon through sulfur-gold bonds; the gold also acts as a quencher, extinguishing the beacon's fluorescence signal while the beacon remains in its closed, unhybridized state. The gold-coated QD SNTs immobilized with DNA (bio-SNTs) were fabricated, and their operation was verified by confocal microscopy. This work demonstrates the potential of gold-coated, DNA-immobilized QD SNTs for multiplex detection.
<urn:uuid:7ba0314c-01b4-4a21-8585-3c00b26ec7ef>
2.75
335
Academic Writing
Science & Tech.
41.285973
95,568,079
Simula 67 introduced objects, classes, inheritance and subclasses, virtual procedures, coroutines, and discrete event simulation, and features garbage collection. Other forms of subtyping (besides inheriting subclasses) were later introduced in Simula derivatives. Simula is considered the first object-oriented programming language. As its name suggests, Simula was designed for doing simulations, and the needs of that domain provided the framework for many of the features of object-oriented languages today. Simula has been used in a wide range of applications such as simulating VLSI designs, process modeling, protocols, and algorithms, and in other applications such as typesetting, computer graphics, and education. The influence of Simula is often understated, and Simula-type objects are reimplemented in C++, Object Pascal, Java, C# and several other languages. Computer scientists such as Bjarne Stroustrup, creator of C++, and James Gosling, creator of Java, have acknowledged Simula as a major influence. Kristen Nygaard started writing computer simulation programs in 1957. Nygaard saw a need for a better way to describe the heterogeneity and the operation of a system. To go further with his ideas on a formal computer language for describing a system, Nygaard realized that he needed someone with more computer programming skills than he had. Ole-Johan Dahl joined him in this work in January 1962. The decision to link the language to ALGOL 60 was made shortly afterward. By May 1962 the main concepts for a simulation language were in place: "SIMULA I" was born, a special-purpose programming language for simulating discrete event systems. Kristen Nygaard was invited to visit the Eckert–Mauchly Computer Corporation in late May 1962 in connection with the marketing of their new UNIVAC 1107 computer. At that visit Nygaard presented the ideas of Simula to Robert Bemer, the director of systems programming at Univac. Bemer was a sworn ALGOL fan and found the Simula project compelling. Bemer was also chairing a session at the second international conference on information processing hosted by IFIP. He invited Nygaard, who presented the paper "SIMULA -- An Extension of ALGOL to the Description of Discrete-Event Networks". The Norwegian Computing Center received a UNIVAC 1107 in August 1963 at a considerable discount, on which Dahl implemented SIMULA I under contract with UNIVAC. The implementation was based on the UNIVAC ALGOL 60 compiler. SIMULA I was fully operational on the UNIVAC 1107 by January 1965. In the following couple of years Dahl and Nygaard spent a lot of time teaching Simula. Simula spread to several countries around the world, and SIMULA I was later implemented on Burroughs B5500 computers and the Russian URAL-16 computer.
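To make the feature list above concrete, here is a sketch of the concepts Simula 67 pioneered -- classes, subclasses, and virtual procedures dispatched inside a discrete event loop -- rendered in Python rather than Simula's own ALGOL-like syntax (an illustrative translation, not Simula code):

class Event:
    # A "class" in Simula's sense: data bundled with procedures.
    def __init__(self, time):
        self.time = time

    def execute(self):
        # A "virtual procedure": each subclass supplies its own body.
        raise NotImplementedError

class Arrival(Event):
    # A "subclass" inheriting state and behaviour from Event.
    def execute(self):
        print(f"{self.time:5.1f}: customer arrives")

class Departure(Event):
    def execute(self):
        print(f"{self.time:5.1f}: customer departs")

# A toy discrete event simulation: run events in time order, letting
# virtual dispatch pick the right behaviour -- the pattern Simula
# was designed around.
events = [Departure(4.0), Arrival(1.5), Arrival(3.2)]
for event in sorted(events, key=lambda e: e.time):
    event.execute()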
<urn:uuid:94cc9708-2664-4bc7-ac02-b4fc9dd19d71>
2.625
629
Knowledge Article
Software Dev.
41.256418
95,568,088
Common Interpretation of Heisenberg's Uncertainty Principle Is Proved False
A new experiment shows that measuring a quantum system does not necessarily introduce uncertainty
Contrary to what many students are taught, quantum uncertainty may not always be in the eye of the beholder. A new experiment shows that measuring a quantum system does not necessarily introduce uncertainty. The study overthrows a common classroom explanation of why the quantum world appears so fuzzy, but the fundamental limit to what is knowable at the smallest scales remains unchanged.
At the foundation of quantum mechanics is the Heisenberg uncertainty principle. Simply put, the principle states that there is a fundamental limit to what one can know about a quantum system. For example, the more precisely one knows a particle's position, the less one can know about its momentum, and vice versa. The limit is expressed as a simple equation that is straightforward to prove mathematically.
Heisenberg sometimes explained the uncertainty principle as a problem of making measurements. His best-known thought experiment involved photographing an electron. To take the picture, a scientist might bounce a light particle off the electron. That would reveal its position, but it would also impart energy to the electron, causing it to move. Learning about the electron's position would create uncertainty in its velocity, and the act of measurement would produce the uncertainty needed to satisfy the principle.
Physics students are still taught this measurement-disturbance version of the uncertainty principle in introductory classes, but it turns out that it's not always true. Aephraim Steinberg of the University of Toronto in Canada and his team have performed measurements on photons (particles of light) and shown that the act of measuring can introduce less uncertainty than is required by Heisenberg's principle. The total uncertainty of what can be known about the photon's properties, however, remains above Heisenberg's limit.
Steinberg's group does not measure position and momentum, but rather two different interrelated properties of a photon: its polarization states. In this case, the polarization along one plane is intrinsically tied to the polarization along the other, and by Heisenberg's principle, there is a limit to the certainty with which both states can be known.
The researchers made a 'weak' measurement of the photon's polarization in one plane -- not enough to disturb it, but enough to produce a rough sense of its orientation. Next, they measured the polarization in the second plane. Then they made an exact, or 'strong', measurement of the first polarization to see whether it had been disturbed by the second measurement.
When the researchers did the experiment multiple times, they found that measurement of one polarization did not always disturb the other state as much as the uncertainty principle predicted. In the strongest case, the induced fuzziness was as little as half of what would be predicted by the uncertainty principle.
Don't get too excited: the uncertainty principle still stands, says Steinberg: "In the end, there's no way you can know [both quantum states] accurately at the same time." But the experiment shows that the act of measurement isn't always what causes the uncertainty. "If there's already a lot of uncertainty in the system, then there doesn't need to be any noise from the measurement at all," he says.
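For reference, the "simple equation" referred to above is the standard textbook statement of the position–momentum uncertainty relation (quoted here as background; the article itself does not spell it out):

\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}

where \sigma_x and \sigma_p are the standard deviations of position and momentum and \hbar is the reduced Planck constant. The measurement–disturbance reading that the experiment tests replaces these intrinsic spreads with measurement error and disturbance -- a logically distinct claim, which is why it can fail while the relation above still holds.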
The latest experiment is the second to make a measurement below the uncertainty noise limit. Earlier this year, Yuji Hasegawa, a physicist at the Vienna University of Technology in Austria, measured groups of neutron spins and derived results well below what would be predicted if measurements were inserting all the uncertainty into the system.
Source: Scientific American
<urn:uuid:9ad7602e-04ab-41c4-b124-b03f4d72c858>
3.703125
768
Nonfiction Writing
Science & Tech.
30.844038
95,568,096
University of Georgia Skidaway Institute of Oceanography scientist Aron Stubbins joined a team of researchers to determine how hydrothermal vents influence ocean carbon storage. The results of their study were recently published in the journal Nature Geoscience.
Hydrothermal vents are hotspots of activity on the otherwise dark, cold ocean floor. Since their discovery, scientists have been intrigued by these deep ocean ecosystems, studying their potential role in the evolution of life and their influence upon today's ocean. Stubbins and his colleagues were most interested in the way the vents' extremely high temperatures and pressure affect dissolved organic carbon. Oceanic dissolved organic carbon is a massive carbon store that helps regulate the level of carbon dioxide in the atmosphere -- and the global climate.
Originally, the researchers thought the vents might be a source of the dissolved organic carbon. Their research showed just the opposite. Lead scientist Jeffrey Hawkes, currently a postdoctoral fellow at Uppsala University in Sweden, directed an experiment in which the researchers heated water in a laboratory to 380 degrees Celsius (716 degrees Fahrenheit) in a scientific pressure cooker to mimic the effect of ocean water passing through hydrothermal vents.
The results revealed that dissolved organic carbon is efficiently removed from ocean water when heated. The organic molecules are broken down and the carbon converted to carbon dioxide.
The entire ocean volume circulates through hydrothermal vents about every 40 million years. This is a very long time, much longer than the timeframes over which current climate change is occurring, Stubbins explained. It is also much longer than the average lifetime of dissolved organic molecules in the ocean, which generally circulate for thousands of years, not millions.
"However, there may be extreme survivor molecules that persist and store carbon in the oceans for millions of years," Stubbins said. "Eventually, even these hardiest of survivor molecules will meet a fiery end as they circulate through vent systems."
Hawkes conducted the work while at the Research Group for Marine Geochemistry, University of Oldenburg, Germany. The study's co-authors also included Pamela Rossel and Thorsten Dittmar, University of Oldenburg; David Butterfield, University of Washington; Douglas Connelly and Eric Achterberg, University of Southampton, United Kingdom; Andrea Koschinsky, Jacobs University, Germany; Valerie Chavagnac, Université de Toulouse, France; and Christian Hansen and Wolfgang Bach, University of Bremen, Germany.
The study on "Efficient removal of recalcitrant deep-ocean dissolved organic matter during hydrothermal circulation" is available at http://www.
Mike Sullivan | EurekAlert!
3.953125
1,170
Content Listing
Science & Tech.
28.808727
95,568,102
An international team of scientists, including from the University of Adelaide and Curtin University, has found the first evidence of a source of high-energy particles called neutrinos: an energetic galaxy about 4 billion light years from Earth. The observations were made by the IceCube Neutrino Observatory at the Amundsen-Scott South Pole Station, and confirmed by telescopes around the globe and in Earth's orbit. The announcement will be made from the National Science Foundation in the US today. This discovery points to a source of cosmic rays, another type of high-energy particle which has posed an enduring mystery since first detected over 100 years ago. Neutrinos are uncharged subatomic particles that normally pass by the trillion through our bodies and every part of the Earth every second, but they rarely interact with matter - a fact that makes them difficult to detect. "Neutrinos at these very high energies are formed after cosmic ray particles are accelerated (boosted to very high energy) and interact with other particles," says Associate Professor Gary Hill, from the University of Adelaide's School of Physical Sciences and member of the IceCube Collaboration. "So what we've found is not only the first evidence of a neutrino source, but also evidence that this galaxy is a cosmic ray accelerator." IceCube researchers announced the first solid evidence for high-energy neutrinos coming from beyond our galaxy in 2013. "Now we have found the first evidence for a specific source object, a blazar, which is a very high energy type of galaxy," says Associate Professor Hill. "This blazar, designated TXS 0506+056, is about four billion light years from Earth. It's a giant elliptical galaxy with a massive spinning black hole at its core and twin jets of light and high-velocity particles, one of which is aligned towards Earth. "I have been working in this field for almost 30 years and to find an actual neutrino source is an incredibly exciting moment. Now that we've identified a real source, we'll be able to focus in on other objects like this one, to understand more about these extreme events billions of years ago which set these particles racing towards our planet." Two papers published today in the journal Science describe the first evidence for this known blazar as a source of high-energy neutrinos. The IceCube Observatory at the South Pole is equipped with a nearly real-time alert system which is triggered when a very high-energy neutrino collides with an atomic nucleus in the Antarctic ice in or near the IceCube detector. On September 22 last year, the observatory broadcast the coordinates of a neutrino detection to telescopes around the world, calling for follow-up observations of the event. Around 20 observatories on Earth and in space responded to IceCube's alert including NASA's orbiting Fermi Gamma-ray Space Telescope, the High Energy Stereoscopic System (H.E.S.S.) in Namibia and the Major Atmospheric Gamma Imaging Cherenkov Telescope, or MAGIC, in the Canary Islands--which detected a flare of high-energy gamma rays associated with TXS 0506+056. Associate Professor James Miller-Jones from the Curtin University node of the International Centre for Radio Astronomy Research was involved in the team following up the event at radio wavelengths with the Karl G. Jansky Very Large Array in New Mexico, USA. "It's really exciting for Australian-based astronomers to be involved in uncovering these new insights into the high-energy Universe," he said. 
University of Adelaide's Associate Professor Gavin Rowell is a member of the H.E.S.S. team. He says: "This result heralds a new era for neutrino astronomy, and opens up the long-anticipated linkages with observations using photons or light, such as gamma-rays and radio waves." Dr Sabrina Einecke worked on MAGIC at the Technical University of Dortmund in Germany, and is now at the University of Adelaide. She says: "Seeing gamma-rays with MAGIC at the same time as the neutrino is an important piece of evidence suggesting that these were both made by processes in the blazar jet."
IceCube is operated by the IceCube Collaboration of 300 physicists and engineers from 48 institutions in 12 countries, and is led by the University of Wisconsin-Madison, with major funding from the US National Science Foundation. The University of Adelaide research was supported by the Australian Research Council.
The two Science research papers are: "Multimessenger observations of a flaring blazar coincident with high-energy neutrino IceCube-170922A" and "Neutrino emission from the direction of the blazar TXS 0506+056 prior to the IceCube-170922A alert".
<urn:uuid:ac28da6b-6f9e-4754-b7c2-d705e51fab32>
3.34375
1,142
News (Org.)
Science & Tech.
34.501002
95,568,120
Astronomy: The Celestial Sphere and Constellations
Thank you for visiting this website. Out of the many sites offered by search engines, you chose this one. We aim to provide complete information, and on this occasion we offer a set of photos related to the celestial sphere and constellations. We think some of them are among the most relevant to the topic, and we hope you agree; please leave feedback in the comments so the site can improve. There are many photos on the web about the celestial sphere and constellations; we have gathered the best of them to display here. If you are not satisfied with a photo, you can browse the other photos in the gallery below this article.
Disclaimer: These photos are sourced from trusted websites. Copyright belongs to the owners of the photos; we do not claim these pictures as our own property or work.
<urn:uuid:957479e2-9d13-4a12-87b3-d71624bdfa0e>
2.578125
587
Truncated
Science & Tech.
37.200181
95,568,122
Environmental resource management is the management of the interaction and impact of human societies on the environment. It is not, as the phrase might suggest, the management of the environment itself. Environmental resource management aims to ensure that ecosystem services are protected and maintained for future human generations, and also to maintain ecosystem integrity by weighing ethical, economic, and scientific (ecological) variables. Environmental resource management tries to identify the factors at play in conflicts that arise between meeting needs and protecting resources. It is thus linked to environmental protection, sustainability and integrated landscape management.
Environmental resource management is an issue of increasing concern, as reflected in its prevalence in seminal texts influencing global sociopolitical frameworks such as the Brundtland Commission's Our Common Future, which highlighted the integrated nature of environment and international development, and the Worldwatch Institute's annual State of the World reports.
Environmental resource management can be viewed from a variety of perspectives. It involves the management of all components of the biophysical environment, both living (biotic) and non-living (abiotic), and the relationships among all living species and their habitats. It also involves the relationships of the human environment, such as the social, cultural and economic environment, with the biophysical environment. The essential aspects of environmental resource management are ethical, economic, social, and technological; these underlie its principles and guide decision-making. The concepts of environmental determinism, probabilism and possibilism are significant in environmental resource management.
Environmental resource management covers many areas of science, including geography, biology, social sciences, political sciences, public policy, ecology, physics, chemistry, sociology, psychology, and physiology. Environmental resource management strategies are intrinsically driven by conceptions of human–nature relationships. Ethical aspects involve the cultural and social issues relating to the environment, and dealing with changes to it. "All human activities take place in the context of certain types of relationships between society and the bio-physical world (the rest of nature)," and so there is great significance in understanding the ethical values of different groups around the world. Broadly speaking, two schools of thought exist in environmental ethics: anthropocentrism and ecocentrism, each influencing a broad spectrum of environmental resource management styles along a continuum. These styles perceive "...different evidence, imperatives, and problems, and prescribe different solutions, strategies, technologies, roles for economic sectors, culture, governments, and ethics, etc."
<urn:uuid:1f95d23a-74ab-4d64-883a-bcacad26b817>
3.65625
490
Knowledge Article
Science & Tech.
-4.911304
95,568,127
TB126: Vertical Trends in the Chemistry of Forest Soil Microcosms Following Experimental Acidification
Technical Bulletins
Description: A soil microcosm experiment was conducted (a) to compare dilute H2SO4, NH4NO3 fertilizer, and prilled S as possible experimental soil-acidifying treatments and (b) to observe soil chemical response to simulated throughfall and acidifying treatments. Simulated throughfall had a significant effect on soil chemistry, resulting in increased exchangeable bases and pH in the mineral soil horizons but little effect on the O horizon. Of the acidification treatments, only simulated acid rain had significant effects on soil chemistry when compared to the control and the dry treatments. This reflected the relatively slow dissolution rate of the dry treatments coupled with the short duration of the experiment. Simulated acid rain decreased exchangeable base cations and pH while increasing exchangeable Al. The 2.5-cm layer of Bs horizon material immediately below the abrupt E horizon boundary proved to be the soil layer most responsive to chemical alteration.
Rights and Access Note: Rights assessment remains the responsibility of the researcher. No known restrictions on publication.
Publisher: Maine Agricultural Experiment Station
Keywords: soil-acidifying treatments
Citation Information: Fernandez, I.J. 1987. Vertical trends in the chemistry of forest soil microcosms following experimental acidification. Maine Agricultural Experiment Station Technical Bulletin 126.
<urn:uuid:1f0ae715-d24a-4e1c-a5cf-92c06b751cce>
2.515625
280
Academic Writing
Science & Tech.
12.843689
95,568,129
Time limit: 1 second | Memory limit: 128 MB | Submissions: 29 | Accepted: 19 | Users solved: 17 | Acceptance ratio: 65.385%
Every day, Farmer John's N cows (1 <= N <= 100,000) cross a road in the middle of his farm. Considering the map of FJ's farm in the 2D plane, the road runs horizontally, with one side of the road described by the line y=0 and the other described by y=1. Cow i crosses the road following a straight path from position (a_i, 0) on one side to position (b_i, 1) on the other side. All the a_i's are distinct, as are all the b_i's, and all of these numbers are integers in the range -1,000,000...1,000,000.
Despite the relative agility of his cows, FJ often worries that pairs of cows whose paths intersect might injure each other if they collide during crossing. FJ considers a cow to be "safe" if no other cow's path intersects her path. Please help FJ compute the number of safe cows.
Sample input:
4
-3 4
7 8
10 16
3 9
Sample output:
2
There are 4 cows. Cow 1 follows a straight path from (-3,0) to (4,1), and so on. The first and third cows each do not intersect any other cows. The second and fourth cows intersect each other.
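A sketch of one standard O(n log n) approach (my construction for illustration, not an official solution): two paths cross exactly when their endpoint orders disagree, so after sorting the cows by a_i, cow i is safe precisely when b_i is larger than every earlier b and smaller than every later b. A prefix-maximum and suffix-minimum pass then counts the safe cows.

def count_safe_cows(paths):
    # Sort by a_i (all a_i are distinct per the problem statement).
    paths = sorted(paths)
    b = [bi for _, bi in paths]
    n = len(b)
    # suffix_min[i] = smallest b among cows i..n-1 in sorted order.
    suffix_min = [float("inf")] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_min[i] = min(suffix_min[i + 1], b[i])
    safe, prefix_max = 0, float("-inf")
    for i in range(n):
        if prefix_max < b[i] < suffix_min[i + 1]:
            safe += 1
        prefix_max = max(prefix_max, b[i])
    return safe

print(count_safe_cows([(-3, 4), (7, 8), (10, 16), (3, 9)]))  # -> 2, matching the sample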
<urn:uuid:72ca3cf1-b597-464b-bc2b-f2203764c93e>
2.90625
355
Tutorial
Science & Tech.
97.635702
95,568,156
Actias luna, the luna moth, is a lime-green Nearctic moth in the family Saturniidae, subfamily Saturniinae. It has a wingspan of up to 114 mm (4.5 in), making it one of the largest moths in North America. This moth is found in North America, from east of the Great Plains in the United States to northern Mexico, and from Saskatchewan eastward through central Quebec to Nova Scotia in Canada. Luna moths are common as far south as Central Florida.
Based on the climate in which they live, luna moths produce differing numbers of generations. In Canada and northern regions, adults can live up to seven days and will produce only one generation per year, reaching adulthood from early June to early July. In the northeastern United States, around New Jersey or New York, the moths produce two generations each year. The first of these appear in April and May, and the second group can be seen approximately nine to eleven weeks later. In the southern United States, there can be as many as three generations, spaced every eight to ten weeks beginning in February.
Each instar generally takes about five days to a week to complete. After hatching, the caterpillars tend to wander around before finally settling on eating the particular plant they are on. These caterpillars tend to be gregarious for the first two to three instars, but separate and live independently after that. The caterpillars go through five instars before cocooning. At the end of each instar, a small amount of silk is placed on the major vein of a leaf, and the larva undergoes apolysis and then ecdysis -- molting from that position and leaving the old exoskeleton behind. Sometimes the shed exoskeleton is eaten. Each instar is green, though the first two instars show some variation, with some caterpillars bearing black underlying splotches on their dorsal side. Variation after the second instar is still noticeable, but slight. The dots that run along the dorsal side of the caterpillars vary from light yellow to dark magenta. The final instar grows to approximately nine centimeters in length.
The luna moth pupates after spinning a cocoon. The cocoon is thin and single-layered. Shortly before pupation, the final, fifth-instar caterpillar engages in a "gut dump" in which any excess water, food, feces, and fluids are expelled. The caterpillar also takes on an underlying golden reddish-brown color and becomes less active. As a pupa, this species is particularly active: if disturbed, it will wiggle within its pupal case, producing a noise. Pupation takes approximately two weeks unless the individual is diapausing. The mechanisms for diapause are generally a mixture of genetic triggers, the duration of sunlight or direct light during the day, and temperature. Adults will stay in their cocoons, even if their metamorphosis is complete, until they receive certain biological signals, such as light changes, temperature changes, or hormonal signals.
When the adult luna moth ecloses, or emerges from its cocoon, its abdomen is swollen and its wings are shriveled. The first few hours of the moth's adult life are spent under a leaf, pumping hemolymph (the invertebrate equivalent of blood) from the abdomen into the wings. During this time, the moth is much more vulnerable to predators: its wings are soft and wet, and it must wait for them to dry and harden before it can fly away. This process can take two to three hours to complete.
The luna moth typically has a wingspan of 8–11.5 cm (3.1–4.5 in), rarely exceeding 17.78 cm (7.00 in), with long, tapering hind wings that bear eyespots to confuse potential predators. Luna moths are common, although rarely seen, due to their very brief (one-week) adult lives. As with all Saturniidae, the adults do not eat; their mouthparts are vestigial. They emerge as adults solely to mate, and are more commonly seen at night. Males can be distinguished by their larger and wider antennae. The wing "tails" are expandable decoys that trick hungry bats -- the moth's anti-predator deflection strategy. As the echolocating hunter comes in for the kill, the moth's moving tails distract and fool the bat, knocking the attacker off target and allowing the moth the split second it needs to get away, alive.
<urn:uuid:3b8ae762-e86b-419b-a9e5-255906d5509f>
3.78125
977
Knowledge Article
Science & Tech.
52.390891
95,568,173
Wednesday, 18 July, 2018
In this article I will show you how to display the date and time on your web page with CLASP!
There are various ways to display dates and times. You can display the date like at the top of this article, or you can display it in the standard MM/DD/YYYY format. You can display the date with the time, or the time all by itself. You can also abbreviate the name of the day, or display the time in either 24-hour or 12-hour format.
The simplest way to display the date and time is thus:
<%= now %>
Which gives us this: 7/18/2018 1:14:50 AM
If we just want to show the standard date by itself we do this:
<%= date %>
Which gives us this: 7/18/2018
Or the time by itself:
<%= time %>
Will display this: 1:14:50 AM
There is a function we can use to change the way the date/time is displayed, and it's called FormatDateTime. The syntax for the FormatDateTime function looks like this:
FormatDateTime(date, format)
The date can be any date or date function and is required. The format is a value that says what format to use and is optional. The values and descriptions are listed below:
|vbGeneralDate||0||Display a date in format mm/dd/yy. If the date parameter is Now(), it will also return the time, after the date|
|vbLongDate||1||Display a date using the long date format: weekday, month day, year|
|vbShortDate||2||Display a date using the short date format: like the default (mm/dd/yyyy)|
|vbLongTime||3||Display a time using the time format: hh:mm:ss PM/AM|
|vbShortTime||4||Display a time using the 24-hour format: hh:mm|
Here are a few examples. To display the name of the day and the name of the month with the day and year, we do this:
<%= FormatDateTime(date,1) %>
<%= FormatDateTime(date,vbLongDate) %>
Which gives us this: Wednesday, July 18, 2018
If we want to show the time in the 24-hour format we can use this:
<%= FormatDateTime(time,4) %>
<%= FormatDateTime(time,vbShortTime) %>
So we get this: 01:14
We can also display the name of the weekday and month separately, and even abbreviate them. For the weekday we first have to get the number of the weekday, and this is how we do it:
<%= Weekday(date) %>
Once we have the number of the weekday we can get the name:
<%= WeekdayName(Weekday(date)) %>
To abbreviate it we add "true" to the function like this:
<%= WeekdayName(Weekday(date),true) %>
We do the month name in the same way: first we get the number of the month, then we can get the name of the month and abbreviate it:
<%= MonthName(Month(date),true) %>
We can also replace Weekday(date) and Month(date) with the number of the weekday or month and get the same result.
<%= WeekdayName(4,true) %>
<%= MonthName(6,true) %>
We can get the year from the date by doing this:
<%= Year(date) %>
The copyright notice at the bottom of the page is in an include file and, using Year(date) as above, it changes with the year, so I never have to change it.
The date at the top of this article is written like this:
<%= weekdayname(weekday(date)) &", "& day(date) &" "& monthname(month(date)) &", " & year(date) %>
= Wednesday, 18 July, 2018
You can do a lot with these ASP functions, like getting the date of a holiday from a database and displaying it. We'll use US Independence Day as an example:
We will use the CDate() function to convert a string variable into a Date variable.
<%
Dim datHoliday
datHoliday = CDate("7/4/1776")
Response.Write MonthName(Month(datHoliday)) &" "& Day(datHoliday) &"th "& Year(datHoliday)
%>
July 4th 1776
This article only covers the basics of working with dates and times.
If you would like to learn more, I suggest you visit w3schools.com -- they are a great resource!
Steve Frazier has been a classic ASP developer since 2003. He has developed CLASP applications for Fortune 500 companies and popular websites. He has also developed many ASP scripts of his own! He is webmaster of HTMLJunction as well as its sister site, ASP Junction. He is currently working on a web portal that has the functionality of all the most popular forums and portals.
<urn:uuid:a9a646ce-aa48-4116-9c37-9cd00483c42c>
3.09375
1,134
Tutorial
Software Dev.
51.001591
95,568,201
Liquid Crystal Thermography and Image Processing in Heat and Fluid Flow Experiments
This paper describes new methods which can determine quantitatively two-dimensional temperature distributions on a surface and in a fluid from colour records obtained using a thermosensitive liquid crystal material combined with image processing. Application-type experiments have been carried out both to visualise the complex temperature distribution over a cooled surface disturbed by different solid obstacles, and also to investigate temperature and flow patterns in a rectangular cavity for natural convection.
Keywords: Heat Transfer; Liquid Crystal; Cholesteric Liquid Crystal; Local Heat Transfer Coefficient; Nusselt Number Distribution
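A minimal sketch of the image-processing step such methods rely on (my illustration of the general hue-based approach, not the authors' actual procedure): the liquid crystal's colour play is first calibrated against known temperatures, after which each pixel's hue is converted to temperature by interpolation. The calibration points below are hypothetical round numbers.

import colorsys

# Hypothetical calibration points: (hue in [0, 1], temperature in deg C).
# Real calibrations are measured for the specific crystal formulation.
CALIBRATION = [(0.0, 30.0), (0.15, 32.0), (0.35, 34.0), (0.60, 36.0)]

def hue_to_temperature(hue):
    # Piecewise-linear interpolation between calibration points.
    for (h0, t0), (h1, t1) in zip(CALIBRATION, CALIBRATION[1:]):
        if h0 <= hue <= h1:
            return t0 + (t1 - t0) * (hue - h0) / (h1 - h0)
    raise ValueError("hue outside calibrated colour-play range")

def pixel_temperature(r, g, b):
    # Convert an 8-bit RGB pixel to hue, then to temperature.
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_to_temperature(hue)

print(f"{pixel_temperature(40, 200, 80):.1f} deg C")  # a greenish pixel -> ~34.2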
<urn:uuid:96bd6d9b-3141-4c1d-b312-9276fe31a007>
2.609375
649
Academic Writing
Science & Tech.
53.543867
95,568,216
By analyzing microscopic pits and scratches on hominid teeth, as well as stable isotopes of carbon found in teeth, researchers are getting a very different picture of the dietary habits of early hominids than that painted by the physical structure of the skull, jawbones and teeth. While some early hominids sported powerful jaws and large molars -- including Paranthropus boisei, dubbed "Nutcracker Man" -- they may have cracked nuts rarely if at all, said CU-Boulder anthropology Professor Matt Sponheimer, study co-author. Such findings are forcing anthropologists to rethink long-held assumptions about early hominids, aided by technological tools that were unknown just a few years ago. A paper on the subject by Sponheimer and co-author Peter Ungar, a distinguished professor at the University of Arkansas, was published in the Oct. 14 issue of Science. Earlier this year, Sponheimer and his colleagues showed Paranthropus boisei was essentially feeding on grasses and sedges rather than the soft fruits preferred by chimpanzees. "We can now be sure that Paranthropus boisei ate foods that no self-respecting chimpanzee would stomach in quantity," said Sponheimer. "It is also clear that our previous notions of this group's diet were grossly oversimplified at best, and absolutely backward at worst." "The morphology tells you what a hominid may have eaten," said Ungar. But it does not necessarily reveal what the animal was actually dining on, he said. While Ungar studies dental micro-wear -- the telltale microscopic pits and scratches that food leaves behind on teeth -- Sponheimer studies stable isotopes of carbon in teeth. By analyzing stable carbon isotopes obtained from tiny portions of animal teeth, researchers can determine whether the animals were eating foods that use different photosynthetic pathways to convert sunlight to energy. The results for teeth from Paranthropus boisei, published earlier this year, indicated they were eating foods from the so-called C4 photosynthetic pathway, which points to consumption of grasses and sedges. The analysis stands in contrast to our closest living relatives, such as chimpanzees and gorillas, which eat foods from the so-called C3 photosynthetic pathway, pointing to a diet drawn from trees, shrubs and bushes. Dental micro-wear and stable isotope studies also point to potentially large differences in diet between southern and eastern African hominids, said Sponheimer, a finding that was not anticipated given their strong anatomical similarities. "Frankly, I don't believe anyone would have predicted such strong regional differences," said Sponheimer. "But this is one of the things that is fun about science -- nature frequently reminds us that there is much that we don't yet understand. "The bottom line is that our old answers about hominid diets are no longer sufficient, and we really need to start looking in directions that would have been considered crazy even a decade ago," Sponheimer said. "We also see much more evidence of dietary variability among our hominid kin than was previously appreciated. Consequently, the whole notion of hominid diet is really problematic, as different species may have consumed fundamentally different things." While the new techniques have prompted new findings in the field of biological anthropology, they are not limited to use in human ancestors, according to the researchers. Current animals under study using the new tooth-testing techniques range from rodents and ancient marsupials to dinosaurs, said Sponheimer.
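For context, stable carbon isotope results of this kind are conventionally reported as δ13C values; the definition below is the standard geochemical convention, not a formula given in the article:

\delta^{13}\mathrm{C} \;=\; \left( \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000

expressed in parts per thousand (‰) relative to a standard reference ratio. C4 grasses and sedges typically yield values far less negative than C3 trees and shrubs, which is what lets tooth enamel distinguish the two diets.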
Much of Sponheimer's research on ancient hominids has been funded by the National Science Foundation.
Matt Sponheimer | EurekAlert!
3.796875
1,398
Content Listing
Science & Tech.
35.629109
95,568,217
Reduced rainfall increases the risk of forest dieback, while in return forest loss might intensify regional droughts. The consequences of this vegetation–atmosphere feedback for the stability of the Amazon forest are still unclear. Here we show that the risk of self-amplified Amazon forest loss increases nonlinearly with dry-season intensification. We apply a novel complex-network approach, in which Amazon forest patches are linked by observation-based atmospheric water fluxes. Our results suggest that the risk of self-amplified forest loss is reduced with increasing heterogeneity in the response of forest patches to reduced rainfall. Under dry-season Amazonian rainfall reductions, comparable to Last Glacial Maximum conditions, additional forest loss due to self-amplified effects occurs in 10–13% of the Amazon basin. Although our findings do not indicate that the projected rainfall changes for the end of the twenty-first century will lead to complete Amazon dieback, they suggest that frequent extreme drought events have the potential to destabilize large parts of the Amazon forest.
<urn:uuid:6facd33b-99aa-49ea-92fb-ebd991b84d36>
2.640625
434
Content Listing
Science & Tech.
21.370801
95,568,229
Observations with NASA's Chandra X-ray Observatory have provided the first X-ray evidence of a supernova shock wave breaking through a cocoon of gas surrounding the star that exploded. This discovery may help astronomers understand why some supernovas are much more powerful than others.
A supernova shock wave breaking through a cocoon of gas surrounding the star that exploded. Credit: X-ray: NASA/CXC/Royal Military College of Canada/P. Chandra et al.; Optical: NASA/STScI
On November 3, 2010, a supernova was discovered in the galaxy UGC 5189A, located about 160 million light years away. Using data from the All Sky Automated Survey telescope in Hawaii taken earlier, astronomers determined this supernova exploded in early October 2010 (in Earth's time-frame). This composite image of UGC 5189A shows X-ray data from Chandra in purple and optical data from the Hubble Space Telescope in red, green and blue. SN 2010jl is the very bright X-ray source near the top of the galaxy.
A team of researchers used Chandra to observe this supernova in December 2010 and again in October 2011. The supernova was one of the most luminous that has ever been detected in X-rays. In optical light, SN 2010jl was about ten times more luminous than a typical supernova resulting from the collapse of a massive star, adding to the class of very luminous supernovas that have been discovered recently with optical surveys.
Different explanations have been proposed for these energetic supernovas, including (1) the interaction of the supernova's blast wave with a dense shell of matter around the pre-supernova star, (2) radioactivity resulting from a pair-instability supernova (triggered by the conversion of gamma rays into particle and anti-particle pairs), and (3) emission powered by a neutron star with an unusually powerful magnetic field.
In the first Chandra observation of SN 2010jl, the X-rays from the explosion's blast wave were strongly absorbed by a cocoon of dense gas around the supernova. This cocoon was formed by gas blown away from the massive star before it exploded. In the second observation, taken almost a year later, there is much less absorption of X-ray emission, indicating that the blast wave from the explosion has broken out of the surrounding cocoon. The Chandra data show that the gas emitting the X-rays has a very high temperature -- greater than 100 million kelvin -- strong evidence that it has been heated by the supernova blast wave.
The energy distribution, or spectrum, of SN 2010jl in optical light reveals features that the researchers think are explained by the following scenario: matter around the supernova has been heated and ionized (electrons stripped from atoms) by X-rays generated when the blast wave plows through this material. While this type of interaction has been proposed before, the new observations directly show, for the first time, that this is happening. This discovery therefore supports the idea that some of the unusually luminous supernovas are caused by the blast wave from their explosion ramming into the material around it.
In a rare example of a cosmic coincidence, analysis of the X-rays from the supernova shows that there is a second, unrelated source at almost the same location as the supernova. These two sources strongly overlap one another as seen on the sky. This second source is likely to be an ultraluminous X-ray source, possibly containing an unusually heavy stellar-mass black hole or an intermediate-mass black hole.
These results were published in a paper appearing in the May 1st, 2012 issue of The Astrophysical Journal Letters. The authors were Poonam Chandra (Royal Military College of Canada, Kingston, Canada), Roger Chevalier and Christopher Irwin (University of Virginia, Charlottesville, VA), Nikolai Chugai (Institute of Astronomy of Russian Academy of Sciences, Moscow, Russia), Claes Fransson (Stockholm University, Sweden), and Alicia Soderberg (Harvard-Smithsonian Center for Astrophysics, Cambridge, MA).
Credit: Megan Watzke | EurekAlert!
<urn:uuid:646cbab6-0c83-4d15-822e-d0f2c338666c>
3.953125
1,518
Content Listing
Science & Tech.
38.827
95,568,233
by John Burgeson

WHAT WILL YOU DO WHEN THE WELL RUNS DRY?

Make no mistake -- the well IS running dry. This "well" contains the primary energy sources, oil, natural gas, coal and uranium, that now power our world. They are limited. Oil will run out first, then the others. Twenty years? Fifty years? Experts disagree only on the time frame. Something else is needed: energy sources that can take over entirely by the year 2100.

There ARE "renewable" energy sources. These will NEVER run out. Two thousand years from now they will still be providing the energy needs of the 41st century. There are several of these. Which one (or possibly two or three) will prevail? Here is how it looks to me, assuming continuing engineering improvements but no great technological inventions:

1. Biofuels. No, not made from corn, but from algae and weed-based stock. The energy content of a biofuel is comparable to that of gasoline, one of the most flexible energy sources we know. Biofuels, supplemented by hybrid battery technologies, will run our autos and light trucks. A less costly source, however, will be used for electricity, heating, cooling and lighting our homes and businesses, running our factories, and so on.

2. Solar power can potentially provide all the energy the world needs. It's not an energy source for your car; far too little power is generated by a solar panel on a car roof, even at 100% efficiency. There will be solar roof panels on many buildings, solar "farms" in the Nevada desert, and an improved electric grid to distribute the energy. By 2100 AD, this will be the world's primary energy source.

3. Wind power, currently growing 25% yearly, is already significant. The wind blows for free, and the potential energy available is about ten times current world energy consumption. While many engineering problems remain to be solved, mostly involving power storage and the electric grid, it could be an ultimate winner. But wind farms take up land, and most locations have too little wind to be productive. Look for this energy source to increase over the next twenty to fifty years, but not to dominate in the long term.

4. Geothermal power plants alone could supply our planet's needs for well over 1,000 years. Each plant, however, is a unique engineering challenge, and the economics are not clear. I look for them to be a minor part of the mix for a few decades, but not a significant long-term solution.

5. Wave and tidal energies from the ocean might be harnessed. But this source does not seem to be large enough to be a winner, although there are several apparently successful pilot plants.

6. A very long shot -- nuclear plants built on the fusion principle. Research on these has been ongoing for 50 years, and the "solution" is always 40 years away. I don't see a breakthrough for this technology; if one happens, this energy source will be the big winner. In the short term, through 2050, conventional nuclear plants will play a larger part than today, but solar is going to win out.

7. What about hydrogen for auto fuel? Hydrogen is a fuel -- not a fuel source. Gallon for gallon, gasoline packs roughly three times the energy of liquid hydrogen. A car with a 20-gallon gas tank would need a 60-gallon tank if it were converted to liquid hydrogen, and a 120-gallon tank if pressurized hydrogen were substituted. Basic physics "kills" the idea except for specialized applications.

8. What about all-electric cars? Again, physics intervenes. A 20-gallon tank carries about 120 pounds of fuel.
To get the same range with present-day battery technology, you'd need around 3,600 pounds of battery, taking up roughly ten times the storage space of your present tank (see the rough arithmetic sketched below). Yes, this is likely to improve over time. Keep watching. Several brands of limited-range all-electric autos are expected to be announced in the next three years.

In the meantime, invest in the future. Buy the new energy-efficient light bulbs. Consider a hybrid car. Keep studying the issues. And for heaven's sake, don't fall for the "drill, baby, drill" yahoos! They offer, at best, oil-based stopgap solutions, and do nothing to ensure our grandkids' future!
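A back-of-the-envelope check of the tank and battery figures in points 7 and 8. All of the energy densities and efficiencies below are round values I have assumed for illustration; they are not taken from the essay, so treat this as an order-of-magnitude sketch rather than a definitive calculation:

```python
# Rough check of the hydrogen-tank and battery-weight claims above.
# Every density and efficiency figure here is an assumed round number,
# not data from the essay; the output is an order-of-magnitude sketch.

GALLON_L = 3.785                 # liters per US gallon

gasoline_mj_per_l = 34.0         # assumed volumetric energy density of gasoline
liquid_h2_mj_per_l = 8.5         # assumed density of liquid hydrogen (~1/4 of gasoline)
battery_mj_per_kg = 0.5          # assumed ~140 Wh/kg for 2009-era lithium-ion packs

engine_efficiency = 0.25         # assumed tank-to-wheels efficiency of a gasoline engine
ev_efficiency = 0.85             # assumed battery-to-wheels efficiency of an electric drive

tank_gal = 20
tank_energy_mj = tank_gal * GALLON_L * gasoline_mj_per_l
useful_energy_mj = tank_energy_mj * engine_efficiency

# Point 7: how big a liquid-hydrogen tank holds the same raw energy?
h2_tank_gal = tank_energy_mj / (liquid_h2_mj_per_l * GALLON_L)
print(f"Liquid-H2 tank for equal energy: ~{h2_tank_gal:.0f} gallons")

# Point 8: how heavy a battery delivers the same energy at the wheels?
battery_kg = useful_energy_mj / ev_efficiency / battery_mj_per_kg
print(f"Battery for equal range: ~{battery_kg * 2.2:.0f} pounds")
```

With these assumptions the script gives roughly an 80-gallon hydrogen tank (the essay's 60-gallon figure corresponds to a slightly more optimistic 3x ratio) and about 3,300 pounds of battery -- the same ballpark as the essay's 3,600 pounds.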
<urn:uuid:ec7dbd67-cbee-4e03-9bf6-95172cc57fab>
2.609375
909
Personal Blog
Science & Tech.
63.22875
95,568,238
If you could travel back to Ice Age Europe, you might be forgiven for thinking you had crash-landed somewhere in the African savannah. But the cold temperatures, and the six-tonne furry beasts with extremely long tusks, would confirm that you were indeed in the Pleistocene epoch, the Ice Age. You would be standing on the mammoth steppe, which spanned from Spain across Eurasia and the Bering Strait to Canada. It was covered with grass, largely without trees, and inhabited by bison, reindeer, big cats, and the woolly mammoth that gives the steppe its name.

Unfortunately, this huge grassland ecosystem, once among the largest on Earth, disappeared long ago. But a group of geneticists at Harvard University hopes to change this by engineering living elephant cells that contain a small fraction of synthetic mammoth DNA. They claim that reintroducing a mammoth-like animal to the Arctic tundra may help prevent the release of greenhouse gases from the ground and reduce future emissions as the climate warms. Although this may sound like a far-fetched idea, scientists have in fact been trialling something similar for more than 20 years.

The Arctic is covered by areas known as permafrost, ground that has been frozen since the Pleistocene. Permafrost contains a vast amount of carbon from dead plant life, locked in by extreme cold. It is estimated that these frozen stores hold approximately twice as much carbon as is currently in the atmosphere. If the permafrost thaws, microorganisms will break down the organic matter in the soil and release carbon dioxide and methane into the atmosphere. As a result, permafrost and its associated carbon pools have been likened to "sleeping giants" in our climate system. If they wake up, the resulting greenhouse gas emissions would raise global temperatures even further than currently projected, causing even greater global climate change (a process known as positive feedback).

This is where our shaggy friends may come in. Mammoths and other large herbivores of the Pleistocene continually trampled mosses and shrubs, uprooting trees and disturbing the landscape. In this way, they inadvertently acted as natural geo-engineers, maintaining highly productive steppe landscapes full of grasses and herbs and free of trees.

Bringing mammoth-like creatures back to the tundra could, in theory, help recreate the steppe ecosystem more widely. Because grass absorbs less sunlight than trees, this would cause the ground to absorb less heat and in turn keep the carbon pools and their greenhouse gases on ice for longer. Large numbers of the animals would also trample snow cover, stopping it from acting as insulation for the ground and allowing the permafrost to feel the effects of the bitter Arctic winters. Again, this would, in theory, keep the ground colder for longer. This form of mammoth de-extinction and reintroduction could therefore promote grasslands and simultaneously slow the thawing of these frozen soils. So surely it's worth it?

Pleistocene Park is an epic experiment in the Siberian Arctic that has been underway since 1996, focused on investigating exactly these processes. It is to this park that the Harvard team hope to deliver the first resurrected mammoth hybrid within the next decade. The park is designed to determine whether the animals can disturb and fertilise the current ecosystem, where little grows, into highly productive pastures, as well as slow or even reverse permafrost thaw.
I've been privileged to visit the park a number of times, and have been amazed at the effort required to undertake such "big science" in this wilderness. We travelled for many hours along the massive Kolyma River to collect reindeer from the Arctic coast, and transported them by small boat to the park -- no mean feat in these regions. Adding just a few more animals to the experiment was exhausting. But it was totally exhilarating, and it made me question whether this was such a crazy idea after all.

The limited funds and personnel available to the park have made building and monitoring the project difficult. Early evidence with extant species such as musk ox, reindeer and horses suggests that the animals' presence is changing the park's landscape structure and cooling the ground. Recently, the park's grasslands have been shown to reflect more sunlight than the surrounding larch forest, which will reduce the heat penetrating the ground (a toy calculation at the end of this piece illustrates the size of this albedo effect). Scientists have also taken 300-metre-long ground samples from across the landscape to measure the carbon storage in the park, and to work out whether it differs from that of the surrounding, undisturbed landscape.

Is it worth it? Much of the work relies on public crowdfunding, and the park is now seeking money to equip itself with temperature and light sensors. Collecting convincing evidence to back up the theory clearly takes time and huge effort, but we should know soon whether this bold plan could offer a realistic solution to climate change.

Some scientists and conservationists have questioned whether resurrecting the mammoth is really worth it, comparing the high costs with the relative lack of funding for saving the world's elephants. A key question is whether we need mammoths specifically to make these projects work. Could we not simply knock down trees manually, and then use existing animals? I suspect the answer may depend on whether we decide to expand such an approach across far greater swathes of the Arctic, where human intervention will be costly or even near impossible in places.

Yet tackling global climate change needs ambitious, novel and often epic solutions, both to reduce emissions and to minimise the chance of positive feedback from the Arctic that could cause untold damage to our climate system. I don't know if bringing the mammoth back is the right approach, but at the moment we lack a decent solution for keeping the giant Arctic carbon deposits in the ground.
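The albedo argument above can be made concrete with a toy surface energy budget. The albedo and insolation values below are illustrative assumptions of mine, not measurements from Pleistocene Park or the article:

```python
# Toy albedo comparison behind the "grass reflects more sunlight" argument.
# The albedo and insolation figures are illustrative assumptions, not
# measurements from Pleistocene Park.

insolation_w_m2 = 160.0          # assumed mean incoming solar radiation at the surface

albedo = {
    "larch forest": 0.12,        # assumed: a dark canopy absorbs most sunlight
    "grassland": 0.20,           # assumed: grass (and the snow it holds) reflects more
}

for surface, a in albedo.items():
    absorbed = (1.0 - a) * insolation_w_m2   # energy absorbed = (1 - albedo) * insolation
    print(f"{surface:>12}: absorbs ~{absorbed:.0f} W/m^2")

# Even a modest albedo difference shifts the absorbed energy by several W/m^2,
# which, sustained over a season, changes how deeply the ground warms.
```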
<urn:uuid:e498104d-b5f1-45b0-a233-ecfdfab358a2>
3.875
1,206
News Article
Science & Tech.
39.067824
95,568,253
Earth Gauge is a free information service designed to make it easy to talk about the links between weather and the environment. Originally developed for weathercasters, the information is also available to the general public, educators, parents and students.

Location: Washington, D.C.

Background: Earth Gauge is a partnership between the National Environmental Education Foundation (NEEF) and the American Meteorological Society (AMS). Through a free, weekly e-newsletter, Earth Gauge provides environmental content – quick facts, in-depth fact sheets and visuals – to television weathercasters for use on-air, online and in community outreach. Currently, Earth Gauge is distributed to more than 200 local television weathercasters, radio broadcasters and newspaper journalists in 117 cities. Earth Gauge is also distributed to National Weather Service Warning Coordination Meteorologists, educators, non-profit organizations and other interested subscribers across the country.

History: The first Earth Gauge e-newsletter was distributed in 2005 to seven television stations. Today, Earth Gauge content reaches weathercasters at 170 television stations in 117 cities nationwide.

National Environmental Education Foundation: www.neefusa.org
<urn:uuid:09853f19-2370-4929-b6fb-13b1ce65b9b1>
2.8125
235
About (Org.)
Science & Tech.
16.244934
95,568,255
By: Michael F Braby (Editor). 1434 pages, colour photos, colour illustrations.

Nearly 400 species - all those currently recognised from Australia, plus those from surrounding islands - are represented, with all adults and some immature stages displayed in photographs. Introductory chapters cover the history of publications, classification, morphology, distribution, conservation and collection, together with a checklist of the butterfly fauna. A magnificent reference.
<urn:uuid:3ebcbcf2-7289-40d6-b6b6-494c82ce556a>
2.9375
174
Product Page
Science & Tech.
30.701398
95,568,256
Microorganisms tend to have a relatively fast rate of evolution. Most microorganisms can reproduce rapidly, and bacteria are also able to freely exchange genes through conjugation, transformation and transduction, even between widely divergent species. This horizontal gene transfer, coupled with a high mutation rate and other means of transformation, allows microorganisms to swiftly evolve (via natural selection) to survive in new environments and respond to environmental stresses. This rapid evolution is important in medicine, as it has led to the development of multidrug-resistant pathogenic bacteria, superbugs, that are resistant to antibiotics.

Most sulfate-reducing microorganisms (SRM) present in subsurface marine sediments belong to uncultured groups only distantly related to known SRM, and it remains unclear how changing geochemical zones and sediment depth influence their community structure. We mapped the community composition and abundance of SRM by amplicon-sequencing and quantifying dsrB, which encodes dissimilatory sulfite reductase subunit beta, in sediment samples covering different vertical geochemical zones ranging from the surface sediment to the deep sulfate-depleted subsurface at four locations in Aarhus Bay, Denmark. SRM were present in all geochemical zones, including sulfate-depleted methanogenic sediment. The biggest shift in SRM community composition and abundance occurred across the transition from bioturbated surface sediments into the non-bioturbated sediments below, where redox fluctuations and the input of fresh organic matter due to macrofaunal activity are absent. SRM abundance correlated with sulfate reduction rates determined for the same sediments. Sulfate availability showed a weaker correlation with SRM abundances and no significant correlation with the composition of the SRM community. Overall SRM species diversity decreased with depth, yet we identified a subset of highly abundant community members that persists across all vertical geochemical zones at all stations. We conclude that subsurface SRM communities assemble through the persistence of members of the surface community, and that the transition from the bioturbated surface sediment to the unmixed sediment below is a main site of assembly of the subsurface SRM community.
<urn:uuid:e77e7b58-2d01-4c46-9169-a36915477846>
2.84375
442
Academic Writing
Science & Tech.
-2.781392
95,568,258
A solar flare is a large ejection of plasma from the Sun. A flare can be ejected towards or away from the Earth. They are not a new phenomenon; they have been happening for millennia. It is only now that we look at the Sun and study it that we see and notice these flares.

You should never look at the Sun to check whether you can see a flare, because your eyes will become damaged long before you spot one. To look for a solar flare, you'd need special telescopes. The picture below, taken by NASA, shows a Coronal Mass Ejection (CME) and how big one is compared to the Earth. The ejected material eventually breaks free from the Sun and heads off into space, where it may or may not come into contact with the Earth. For the record, the Earth is not that close to the Sun.

Fortunately, we have a magnetic field (picture courtesy of NASA) that surrounds and protects us from harmful solar rays. If we didn't have a magnetic field, our atmosphere would have been stripped away long ago. It is said that life never started on Mars because it didn't have a magnetic field to protect it from the solar rays. In Knowing, starring Nicolas Cage, the Earth is hit by a solar flare which destroys life on the planet. There is a theory that, long ago, the atmosphere of Mars was destroyed by a solar flare.5

Even though we have a magnetic field, we are not immune from solar flares. The Aurora Australis and Aurora Borealis are light shows in the Southern and Northern hemispheres, respectively, where the Sun's particles have hit our atmosphere and produced a light show.

It is not just NASA and other space agencies that study the Sun, but also energy and communication companies. Communication companies want to know when a solar flare will hit so that they can manoeuvre their satellites out of the harmful plasma field. Energy companies on Earth need to prepare themselves for energy spikes and damage to their electricity pylons.

On August 28th, 1859, Earth experienced the largest solar storm ever known to have hit the planet. Back in those days, electricity was only in its infancy and there were certainly no mobile phone communications, so little real damage was caused. A solar flare of that size today has been estimated to be able to cost somewhere in the region of 0.6 to 2 trillion dollars.1 The Sun goes through an 11-year cycle of activity, during which it can appear more active in some years and more docile in others.

In modern times, there have been two notable incidents of solar flares affecting the power grid. The Quebec storm of March 1989 shut down the Hydro-Québec power grid, causing the loss of power to an estimated six million people for nine hours and a financial loss put at $13.2 billion. The second was the Halloween 2003 incident, when a power shortage was caused by a solar flare.

When a solar flare is launched, the astronauts on the International Space Station have to move to a sheltered location within the station; failure to do so can be fatal for them. Below is a NASA video of a solar flare which they've put on YouTube to share.

The Auroras are better known by their English names, the Northern and Southern Lights. The simplest explanation of what causes the Auroras is charged particles hitting the atmosphere and giving off a light show. Most Auroras occur in the far north or far south, but the Aurora Borealis has been known to be seen as far south as the Channel Islands near France.2
Whilst most people associate the Auroras with the colour green, they can appear in different colours. The colour of the lights depends on which gas the incoming particles hit, and on the altitude at which the collision occurs.3

Auroras are not limited to the Earth; any planet that has a magnetosphere can experience an aurora show. The reason why auroras appear at the north and south poles rather than at the equator lies in the shape of the magnetic field that protects the planet. At the heart of the Earth is a molten iron core which generates the magnetic field that protects us. When particles from the Sun hit our planet, they are pushed along the magnetosphere, and as you can see from the picture above, the field lines funnel down to the polar regions, so the interaction between particles and atmosphere required for an aurora mainly occurs there. It is not impossible for an aurora to appear at the Equator, but it is extremely rare.4

The video below is something that I found on YouTube showing a time-lapse of an Aurora Borealis. For other pictures of auroras, visit Christopher Tandy Photography. If you watched the northern lights in person, they would not move as fast as they do here; they'd be quite static.
<urn:uuid:66857f1e-6924-4e5b-a916-467569116fd5>
3.71875
1,012
Knowledge Article
Science & Tech.
54.38113
95,568,273
All of the trigonometric functions of an angle θ can be constructed geometrically in terms of a unit circle centered at O. The field of trigonometry emerged in the Hellenistic world during the 3rd century BC from applications of geometry to astronomical studies. Astronomers of that period first noted that the lengths of the sides of a right-angle triangle and the angles between those sides have fixed relationships: that is, if at least the length of one side and the value of one angle are known, then all other angles and lengths can be determined algorithmically. These calculations soon came to be defined as the trigonometric functions, and today they are pervasive in both pure and applied mathematics: fundamental methods of analysis such as the Fourier transform, for example, or the wave equation, use trigonometric functions to understand cyclical phenomena across many applications in fields as diverse as physics, mechanical and electrical engineering, music and acoustics, astronomy, ecology, and biology. Trigonometry is also the foundation of surveying. Thus the majority of applications relate to right-angle triangles. Trigonometry on surfaces of negative curvature is part of hyperbolic geometry. Trigonometry basics are often taught in schools, either as a separate course or as part of a precalculus course.

Hipparchus, credited with compiling the first trigonometric table, is known as "the father of trigonometry". A thick ring-like shell object found at the Indus Valley Civilization site of Lothal, with four slits each in two margins, served as a compass to measure angles on plane surfaces or on the horizon in multiples of 40 degrees, up to 360 degrees. The slits would have allowed a 12-fold division of the horizon and sky, and the object may also have served as an instrument for measuring angles and perhaps the positions of stars, and for navigation. Sumerian astronomers studied angle measure, using a division of circles into 360 degrees. They, and later the Babylonians, studied the ratios of the sides of similar triangles and discovered some properties of these ratios, but did not turn that into a systematic method for finding the sides and angles of triangles. The ancient Nubians used a similar method.

In the 3rd century BC, Hellenistic mathematicians such as Euclid and Archimedes studied the properties of chords and inscribed angles in circles, and they proved theorems that are equivalent to modern trigonometric formulae, although they presented them geometrically rather than algebraically. Ptolemy compiled a detailed table of chord lengths in Book 1, chapter 11 of his Almagest, using chord length to define his trigonometric functions, a minor difference from the sine convention we use today. A sine value can be recovered by looking up the chord length for twice the angle in Ptolemy's table and then dividing that value by two. Centuries passed before more detailed tables were produced, and Ptolemy's treatise remained in use for performing trigonometric calculations in astronomy throughout the next 1200 years in the medieval Byzantine, Islamic, and, later, Western European worlds. The modern sine convention was documented by the Indian mathematician and astronomer Aryabhata. These Greek and Indian works were translated and expanded by medieval Islamic mathematicians. By the 10th century, Islamic mathematicians were using all six trigonometric functions, had tabulated their values, and were applying them to problems in spherical geometry. At about the same time, Chinese mathematicians developed trigonometry independently, although it was not a major field of study for them.
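The relationship between Ptolemy's chords and the modern sine is worth making explicit. The identities below follow directly from the geometry of the circle; the radius R = 60 is Ptolemy's own convention, and the last line is exactly the lookup rule described above:

```latex
% Chord of an angle subtended at the centre of a circle of radius R:
\[
  \operatorname{crd}(\theta) \;=\; 2R\,\sin\!\left(\tfrac{\theta}{2}\right),
  \qquad R = 60 \ \text{(Ptolemy's convention)}
\]
% Setting R = 1 and replacing theta by 2*theta gives the lookup rule:
\[
  \sin\theta \;=\; \tfrac{1}{2}\,\operatorname{crd}(2\theta)
\]
```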
Knowledge of trigonometric functions and methods reached Western Europe via Latin translations of Ptolemy's Greek Almagest, as well as the works of Persian and Arabic astronomers such as Al Battani and Nasir al-Din al-Tusi. One of the earliest works on trigonometry by a northern European mathematician is De Triangulis by the 15th-century German mathematician Regiomontanus, who was encouraged to write it, and provided with a copy of the Almagest, by the Byzantine Greek scholar cardinal Basilios Bessarion, with whom he lived for several years. At the same time, another translation of the Almagest from Greek into Latin was completed by the Cretan George of Trebizond. Trigonometry was still so little known in 16th-century northern Europe that Nicolaus Copernicus devoted two chapters of De revolutionibus orbium coelestium to explaining its basic concepts.

Driven by the demands of navigation and the growing need for accurate maps of large geographic areas, trigonometry grew into a major branch of mathematics. Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595. Gemma Frisius described for the first time the method of triangulation still used today in surveying.
<urn:uuid:1a412244-5c3a-4df9-9428-925c7914bf1c>
4.46875
998
Knowledge Article
Science & Tech.
23.720476
95,568,275
Testing and Documentation

In this brief chapter, we will take a look at the tools available and the conventions employed in the Ruby world for keeping one's code in order. We'll kick off with a section devoted to rake: understanding the mechanics of this task-oriented, make-like tool can really save you time in the long run, especially where common housekeeping tasks are concerned. From here we'll move on to a section on unit testing, where we'll look at some step-by-step examples and make the case that such testing is about as simple as it can be in Ruby. Finally, we'll cover the documentation idioms and the magic of rdoc.

Keywords: Test Suite, Code Block, Unit Test, Automatic Documentation, Object File
<urn:uuid:9939345e-f242-492a-8464-36b18fa86d56>
2.5625
169
Truncated
Software Dev.
49.518421
95,568,280
310: Long Water Record

Clip: Season 3, Episode 10 | 6m 54s

Most elementary school students learn about the hydrologic cycle, the circulation of water from the atmosphere to the earth and back again. How is this cycle affected by change in climate over time? To find out, scientists at the Coweeta Hydrologic Laboratory have been recording data from a western North Carolina watershed for more than 80 years.

Aired: 08/17/17. Video has closed captioning.
<urn:uuid:1355823b-59ac-43c5-924b-b3e8b5a125f2>
2.96875
102
Truncated
Science & Tech.
46.80375
95,568,281
Providing reliable, sustainable and environmentally friendly energy is a significant global challenge today. The International Energy Agency (IEA) estimates that energy consumption by consumer electronics will double by 2022. Since these electrical devices are predominantly battery driven, they create a large environmental burden. In addition, the renewable energy solutions currently proposed (such as solar panels and PZT materials) are not environmentally benign. This project seeks to reduce this environmental burden by developing a bacteriophage-based piezoelectric generator to convert the human body's daily activities (such as walking) into electricity. Since bacteriophage is a natural material and biotechnology techniques enable large-scale fabrication of gene-modified phages, it potentially offers an environmentally friendly and simple approach to green-energy generation. The project hopes to develop such a phage-based electrical generator to power electrical devices by harvesting people's daily movements.

Responsive City Lights uses interactive light installations to enhance the perception of streets as engaging public spaces. The project implements Crime Prevention through Environmental Design
<urn:uuid:dccfef96-dae3-4df4-b084-808d6c397b82>
2.984375
213
Academic Writing
Science & Tech.
-9.927455
95,568,318