Scanning or desorption isotherms? Characterising sorption hysteresis of wood

Sorption isotherms describe the relation between the equilibrium moisture content of a material and the ambient relative humidity. Most materials exhibit sorption hysteresis, that is, desorption gives higher equilibrium moisture contents than absorption at equal ambient climate conditions. Sorption hysteresis is commonly evaluated by determination of an absorption isotherm followed by desorption starting from the highest relative humidity used in the absorption measurement (typically 95%). The latter is often interpreted as the desorption isotherm but is in fact a scanning isotherm, i.e. an isotherm obtained from neither the dry nor the water-saturated state. In the present study, we investigated the difference between desorption isotherms and scanning isotherms determined by desorption from different high relative humidity levels reached by absorption, and how this difference influenced the evaluation of sorption hysteresis. The measurements were performed on Norway spruce (Picea abies (L.) Karst.) using automated sorption balances. Hysteresis evaluated from desorption isotherms gave linear absolute sorption hysteresis over the studied relative humidity range (0–96%), whereas hysteresis evaluated from scanning isotherms gave non-linear curves with a peak between 50 and 80% relative humidity. The position of this peak depended on the relative humidity from which desorption was initiated. Consequently, understanding and evaluating sorption hysteresis may be challenging if scanning isotherms are used instead of desorption isotherms, increasing the risk of misinterpreting the results.
Keywords: Sorption isotherm; Scanning isotherm; Hysteresis; Automated sorption balance; Dynamic vapour sorption (DVS); Moisture content

Water influences most physical wood properties, such as mechanical properties and dimensional stability, and additionally plays an important role in wood degradation processes. The interaction between wood and water has therefore been widely studied, often by measuring sorption isotherms. These depict the equilibrium moisture content of a material as a function of ambient relative humidity (RH) at constant temperature. Typically, the path to equilibrium, i.e. whether equilibrium is obtained through an increase (absorption) or a decrease (desorption) in moisture content, influences the moisture content at given climatic conditions; desorption to equilibrium results in a higher moisture content than absorption to equilibrium under the same ambient climate conditions (Masson and Richards 1906). This phenomenon is termed sorption hysteresis and can be observed in many chemically and structurally different materials, e.g. polymers (Watt 1980), cement-based materials (Espinosa and Franke 2006a), food (Wolf et al. 1972) and wood (Pidgeon and Maass 1930). Several mechanisms have been proposed to explain sorption hysteresis, such as the ink-bottle or pore-blocking effect (McBain 1935), differences between absorption and desorption in the principal radius of condensation (Cohan 1938), or free volume in swelling polymeric materials (Vrentas and Vrentas 1996). Sorption isotherms initiated from non-extreme conditions (neither the dry nor the saturated state) are generally referred to as scanning isotherms (Espinosa and Franke 2006a, b; Peralta and Bangi 1998; Velasco et al. 2016). Scanning isotherms connect the desorption and absorption isotherms and describe the moisture content of a material when it is exposed to alternating absorption and desorption.
Sorption hysteresis in wood and cellulosic fibres is commonly evaluated by determination of the absorption isotherm after initially drying the material, followed by determination of an isotherm in desorption starting from the highest RH used in the absorption measurement (Ammer 1963; Higgins 1957; Jeffries 1960a; Kelsey 1957; Peralta 1995; Pidgeon and Maass 1930; Seifert 1972; Sheppard and Newsome 1929; Urquhart and Eckersall 1930; Wahba and Nashed 1957; Wangaard and Granados 1967; Weichert 1963). In recent years, as the use of automated sorption balances has become more widespread, the high RH from which desorption is initiated is typically between 90 and 96% (Ceylan et al. 2014, 2012; Cordin et al. 2017; Hill et al. 2009, 2012b; Himmel and Mai 2015; Hosseinpourpia et al. 2016; Jalaludin et al. 2010; Kymäläinen et al. 2015; Okubayashi et al. 2005a, b; Popescu et al. 2014; Popescu and Hill 2013; Shi and Avramidis 2017; Simón et al. 2017; Xie et al. 2011). With this measurement procedure, however, it is not hysteresis between desorption and absorption isotherms that is evaluated, but hysteresis between a scanning isotherm and the absorption isotherm. This difference is important to consider when interpreting a phenomenon like sorption hysteresis. In this study, we show how the use of scanning isotherms instead of desorption isotherms for characterising sorption hysteresis yields complex results which may lead to misinterpretations of the mechanisms behind sorption hysteresis.

Materials and methods

Norway spruce (Picea abies (L.) Karst.) from an experimental forest in Southern Sweden was used; see Fredriksson et al. (2016) for further information.
Samples of earlywood tissue from mature sapwood were cut and vacuum saturated with deionised water using the following procedure: The samples were subjected to vacuum (1–2 mbar) in a glass desiccator for 1 h, deionised water was then added while running the vacuum pump (20 mbar), and finally atmospheric pressure was re-established. Specimens corresponding to a dry mass of 3–6 mg were then cut using a razor blade. In addition, some measurements were performed on specimens of latewood from mature sapwood and on early- and latewood specimens from mature heartwood, see Supplementary Information (SI). Before the specimens were placed in the sorption balance, each piece was wiped with a moist cloth to remove excess surface water.

Sorption measurements and hysteresis evaluation

Sorption isotherms were measured at 20 °C using automated sorption balances (DVS Advantage, Surface Measurement Systems Ltd., London) which monitor the mass of a specimen (balance resolution 0.1 µg) while the RH is incrementally changed in pre-programmed steps, see e.g. Williams (1995). The time necessary to reach equilibrium at each RH level is specified either as a fixed period of time or by a mass stability (dm/dt) criterion where the mass change in a given time window is below a threshold value defined by the user. Specific levels of RH are generated by mixing dry and water-saturated streams of nitrogen gas. Here, the accuracy of the generated RH was validated using the method described by Wadsö et al. (2009). All measurements started with water-saturated specimens. Desorption isotherms were determined by conditioning water-saturated specimens to the following RH levels: 97-95-90-85-80-70-60-50-40-20-10-5-0%. Subsequently, absorption isotherms were determined by increasing the RH to 97% followed by a scanning desorption isotherm from 97% RH, both using the same RH levels as for the initial desorption.
These measurements were part of the study reported by Fredriksson and Thygesen (2017) and used a dm/dt criterion of 0.001% min−1 over 10 min for defining equilibrium at each step. It should be noted that the dm/dt in the sorption balance used is calculated based on a reference mass. If a measurement starts with absorption, this reference mass is generally the dry mass. However, since desorption isotherms were determined in the present study, the reference sample mass was the mass after the first step at 97% RH. The same experimental protocol was also used on latewood specimens from mature sapwood and on early- and latewood specimens from mature heartwood, see SI. Four additional desorption isotherms were determined after conditioning water-saturated specimens to the following RH levels: 95-80-65-50-35-0%. For two specimens, the absorption isotherms up to 95% RH were then determined, followed by a scanning desorption isotherm from 95% RH, both using the same RH levels as for desorption. For two other specimens, the absorption isotherms were determined up to 80% RH before scanning desorption was initiated. Also here, the same RH levels were used. Due to uncertainties related to the use of dm/dt criteria (Glass et al. 2018, 2017), and because starting with desorption makes the reference mass used to calculate dm/dt more uncertain, fixed periods of time at each RH level were used: 60 h at 95% RH, 24 h at 80% RH and 0% RH, and 12 h at all other RH levels. For all measurements, the specimen was finally dried for 8 h by using the pre-heater to locally increase the temperature while purging with dry nitrogen gas. The temperature was slowly ramped to 60 °C over 1 h and kept constant at this level for 6 h before it was slowly ramped back to 20 °C over 1 h. Finally, this drying protocol was followed by a 2 h thermal stabilisation period at 20 °C before the dry mass was taken.
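As an illustration of the mass-stability criterion described above, the following sketch shows how a dm/dt check over a trailing time window might be implemented. The function name, the data points and the reference mass are invented for illustration; only the threshold (0.001% min−1 over 10 min) and the use of a reference mass are taken from the text.

```python
# Hypothetical sketch of a dm/dt mass-stability criterion as applied by an
# automated sorption balance: equilibrium at an RH step is declared when the
# mass change rate over a trailing window falls below a threshold expressed
# as percent of a reference mass per minute.

def is_equilibrated(times_min, masses_mg, reference_mass_mg,
                    window_min=10.0, threshold_pct_per_min=0.001):
    """Return True if |dm/dt| over the last `window_min` minutes is below
    `threshold_pct_per_min` (percent of the reference mass per minute)."""
    t_end = times_min[-1]
    # keep only the points inside the trailing window
    window = [(t, m) for t, m in zip(times_min, masses_mg)
              if t >= t_end - window_min]
    if len(window) < 2:
        return False
    (t0, m0), (t1, m1) = window[0], window[-1]
    rate_pct_per_min = abs(m1 - m0) / (t1 - t0) / reference_mass_mg * 100.0
    return rate_pct_per_min < threshold_pct_per_min

# Invented example: the mass is nearly constant over the last 10 minutes,
# so the step would be accepted as equilibrated.
times = [0, 5, 10, 15, 20, 25, 30]                     # min
masses = [5.000, 4.950, 4.920, 4.905, 4.9001, 4.9001, 4.9000]  # mg
print(is_equilibrated(times, masses, reference_mass_mg=5.0))
```

Note that, as the text points out, the choice of reference mass matters: when a run starts in desorption the dry mass is not yet known, so the first-step mass must serve as the reference.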
The equilibrium moisture content at each RH level was evaluated as the mass of water divided by the dry mass of the specimen. After the measurements had been performed, the dm/dt criteria based on dry mass were calculated for all RH steps in order to estimate the error in moisture content. In absorption, all steps had in the end a dm/dt of less than 3 µg g−1 min−1 with a 2 h regression window, meaning that the moisture contents reported for absorption are less than 0.004 kg kg−1 lower than the true equilibrium value, based upon the errors indicated by Glass et al. (2018). The exception was the absorption step to 95% RH, which had a dm/dt of around 6 µg g−1 min−1 with a 2 h regression window, giving an error in moisture content within the range 0.004–0.008 kg kg−1. In desorption, all steps had in the end a dm/dt on average in the range 3–4 µg g−1 min−1 with a 2 h regression window, which indicates that the reported moisture contents are around 0.002–0.008 kg kg−1 higher than the true equilibrium value. Given that the moisture content error for a given dm/dt increases with increasing RH, the calculated absolute sorption hysteresis is estimated to be up to 0.004 kg kg−1 too high at 35% RH and up to 0.012 kg kg−1 too high at 95% RH. Sorption hysteresis was evaluated between absorption and desorption isotherms as well as between absorption and scanning isotherms. The evaluation was made both as the absolute difference in moisture content at each RH level and as the relative difference. The latter was determined by dividing the absolute moisture content difference by the absorption equilibrium moisture content at each RH level. For the measurements performed using a dm/dt criterion, the moisture content at 97% RH was not included when evaluating hysteresis due to a lack of equilibrium.

Results and discussion

Figure 2c–f show absolute and relative sorption hysteresis, respectively, evaluated from the sorption isotherm data presented in Fig. 2a, b.
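The hysteresis evaluation described above (absolute difference, and relative difference normalised by the absorption moisture content) can be sketched as follows. The isotherm values below are invented round numbers for illustration, not measured data from the study.

```python
# Sketch of the hysteresis evaluation described in the text: absolute
# hysteresis is the moisture-content difference (desorption minus absorption)
# at each RH level; relative hysteresis divides that difference by the
# absorption equilibrium moisture content. All numbers are invented.

rh_levels = [20, 40, 60, 80, 95]                   # % RH
mc_absorption = [0.04, 0.07, 0.10, 0.15, 0.22]     # kg/kg, invented
mc_desorption = [0.05, 0.09, 0.13, 0.18, 0.24]     # kg/kg, invented

absolute_hysteresis = [d - a for a, d in zip(mc_absorption, mc_desorption)]
relative_hysteresis = [(d - a) / a for a, d in zip(mc_absorption, mc_desorption)]

for rh, ah, rel in zip(rh_levels, absolute_hysteresis, relative_hysteresis):
    print(f"{rh:3d}% RH: absolute {ah:.3f} kg/kg, relative {rel:.2f}")
```

Whether the second curve represents true desorption or a scanning isotherm is exactly the distinction the study makes: the arithmetic is identical, but the interpretation of the resulting hysteresis curve differs.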
The sorption hysteresis calculated from desorption isotherms was markedly different from the sorption hysteresis calculated from scanning isotherms. While desorption isotherms in the hygroscopic range gave a linear absolute sorption hysteresis and a fairly constant relative sorption hysteresis, the same quantities were complex, non-linear curves when calculated based on scanning isotherms. The latter had a peak in absolute hysteresis around 75% RH when desorption was initiated from 95% RH, but the position of this peak depended on the RH from which the scanning isotherm was initiated (Fig. 2d). That the hysteresis pattern depends on the RH from which desorption is initiated has also previously been observed by Peralta (1995), who evaluated the ratio between the absorption isotherm and isotherms determined in desorption initiated from different RH levels. Evaluating sorption hysteresis based on scanning isotherms is not necessarily less correct than by use of desorption isotherms. However, it is important to consider how the sorption isotherms were determined when interpreting the hysteresis curves. The currently favoured theory for sorption hysteresis in polymeric materials (Hill and Beck 2017; Hill et al. 2009, 2012a, b; Vrentas and Vrentas 1996) explains the phenomenon as a result of hysteresis in volumetric swelling due to kinetic retardation during shrinkage/swelling. According to this theory, sorption hysteresis is expected to decrease with increasing temperature and to be seen only when the constituent polymers are below their softening point. Several studies on water sorption in various cellulosic materials have shown a decrease in hysteresis with increasing temperature (Hill et al. 2009, 2010; Jeffries 1960b; Kelsey 1957; Salmén and Larsson 2018; Weichert 1963), which supports this theory. However, a few studies on wood report insignificant changes in relative hysteresis between 35 and 50 °C (Esteban et al. 2008a, b, 2009).
For wood, the cell wall polymers that undergo softening at normal temperature are the hemicelluloses. This occurs around 65–75% RH at room temperature (Engelund et al. 2013; Irvine 1984; Kelley et al. 1987; Olsson and Salmén 2004). For instance, Keating et al. (2013) reported sorption hysteresis to vanish in man-made hemicellulose (galactomannan) films above 75% RH at 25 °C, and this RH level corresponded with the softening point characterised by dynamic mechanical analysis. It can therefore be tempting to associate the decrease in hysteresis for wood above 75% RH, as observed in several studies when desorption was initiated from 90 to 95% RH, with softening of the hemicelluloses. However, as clearly seen in Fig. 2d, the peak in sorption hysteresis for wood is a result of scanning isotherms being used to calculate hysteresis, and the peak position changes with the RH from which desorption is initiated. Whether this is also the case for other biopolymeric materials remains to be investigated.

Determination of desorption isotherms requires that the measurement starts at water saturation. Desorption from an initial moisture content reached by absorption at high RH will generate a scanning isotherm. Evaluating sorption hysteresis for Norway spruce wood based on scanning isotherms instead of desorption isotherms gave a non-linear behaviour and a peak in hysteresis whose position depended on the RH from which desorption was initiated. No peak in hysteresis was, however, seen in the studied RH range (0–96%) when the sorption hysteresis evaluation was based on desorption isotherms, i.e. sorption isotherms initiated from water saturation. Consequently, understanding and evaluating the mechanisms behind sorption hysteresis is further challenged if scanning isotherms are used instead of desorption isotherms, increasing the risk of misinterpreting the results.

Bengt Nilsson is gratefully acknowledged for running the sorption balance measurements.
Funding from the Swedish Research Council FORMAS (Grant No. 2013-1024) and the VILLUM FONDEN postdoc programme is gratefully acknowledged.

- Ahlgren L (1972) Fuktfixering i porösa byggnadsmaterial (Moisture fixation in porous building materials). Dissertation, Lund University
- Esteban LG, de Palacios P, Fernández FG, Guindeo A, Cano NN (2008a) Sorption and thermodynamic properties of old and new Pinus sylvestris wood. Wood Fiber Sci 40:111–121
- Fortin Y (1979) Moisture content—matric potential relationship and water flow properties of wood at high moisture contents. Dissertation, University of British Columbia
- Fredriksson M, Thygesen LG (2017) The states of water in Norway spruce (Picea abies (L.) Karst.) studied by low-field nuclear magnetic resonance (LFNMR) relaxometry: assignment of free-water populations based on quantitative wood anatomy. Holzforschung 71:77–90. https://doi.org/10.1515/hf-2016-0044
- Higgins NC (1957) The equilibrium moisture content-relative humidity relationships of selected native and foreign woods. 7:371–377
- Hill CAS, Keating BA, Jalaludin Z, Mahrdt E (2012a) A rheological description of the water vapour sorption kinetics behaviour of wood invoking a model using a canonical assembly of Kelvin–Voigt elements and a possible link with sorption hysteresis. Holzforschung 66:35–47. https://doi.org/10.1515/HF.2011.115
- Irvine GM (1984) The glass transitions of lignin and hemicellulose and their measurement by differential thermal-analysis. Tappi J 67:118–121
- Keating BA, Hill CAS, Sun D, English R, Davies P, McCue C (2013) The water vapor sorption behavior of a galactomannan cellulose nanocomposite film analyzed using parallel exponential kinetics and the Kelvin–Voigt viscoelastic model. J Appl Polym Sci 129:2352–2359. https://doi.org/10.1002/app.39132
- Kelsey KE (1957) The sorption of water vapour by wood. Aust J Appl Sci 8:42–54
- Peralta PN (1995) Sorption of moisture by wood within a limited range of relative humidities. Wood Fiber Sci 27:13–21
- Peralta PN, Bangi AP (1998) Modeling wood moisture sorption hysteresis based on similarity hypothesis. Part 1. Direct approach. Wood Fiber Sci 30:48–55
- Popescu CM, Hill CAS (2013) The water vapour adsorption–desorption behaviour of naturally aged Tilia cordata Mill. wood. Polym Degrad Stabil 98:1804–1813. https://doi.org/10.1016/j.polymdegradstab.2013.05.021
- Simón C, Esteban LG, Palacios Pd, Fernández FG, García-Iruela A (2017) Sorption/desorption hysteresis revisited. Sorption properties of Pinus pinea L. analysed by the parallel exponential kinetics and Kelvin–Voigt models. Holzforschung 71:171. https://doi.org/10.1515/hf-2016-0097
- Sing KSW, Rouquerol F, Rouquerol J, Llewellyn P (2014) 8—Assessment of mesoporosity. In: Adsorption by powders and porous solids, 2nd edn. Academic Press, Oxford, pp 269–302. https://doi.org/10.1016/B978-0-08-097035-6.00008-5
- Williams DR (1995) The characterisation of powders by gravimetric water vapour sorption. Int LABMATE 20:40–42
- Zillig W (2009) Moisture transport in wood using a multiscale approach. Dissertation, Katholieke Universiteit Leuven

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Defines the AutoSpaceDE class. When the object is serialized out as XML, its qualified name is w:autoSpaceDE.

Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll)

Declaration (Visual Basic):
Public Class AutoSpaceDE _
    Inherits OnOffType

Usage (Visual Basic):
Dim instance As AutoSpaceDE

Declaration (C#):
public class AutoSpaceDE : OnOffType

[ISO/IEC 29500-1 1st Edition] autoSpaceDE (Automatically Adjust Spacing of Latin and East Asian Text)

This element specifies whether inter-character spacing shall automatically be adjusted between regions of Latin text and regions of East Asian text in the current paragraph. These regions shall be determined by the Unicode character values of the text content within the paragraph. [Note: This property is used to ensure that the spacing between regions of Latin text and adjoining East Asian text is sufficient on each side such that the Latin text can be easily read within the East Asian text. end note]

If this element is omitted on a given paragraph, its value is determined by the setting previously set at any level of the style hierarchy (i.e. that previous setting remains unchanged). If this setting is never specified in the style hierarchy, its value is assumed to be true.

[Example: Consider a paragraph in which the spacing should not be automatically adjusted based on the presence of Latin and East Asian text. This setting would be specified using the following WordprocessingML:

<w:p>
  <w:pPr>
    …
    <w:autoSpaceDE w:val="false" />
  </w:pPr>
  …
</w:p>

By explicitly setting val to false, this paragraph must not automatically adjust the spacing of adjoining Latin and East Asian text. end example]

Parent elements: pPr (§…); pPr (§…); pPr (§…); pPr (§…); pPr (§17.9.23); pPr (§…)

This element's content model is defined by the common boolean property definition in §17.17.4. © ISO/IEC 29500: 2008.

Any public static (Shared in Visual Basic) members of this type are thread safe.
Any instance members are not guaranteed to be thread safe.
I can understand that we cannot send any object or wave faster than the speed of light. Why wouldn't this work? Make a solid rod from a material that cannot compress (or compresses extremely little) and make it, for example, 1 light year long. Then pull or push one end forward or back more than the distance the material can compress or extend over its length. If we were to use a 1 light year long carbon nanotube (I know it's not possible yet), its total mass would be around 15000 kg, so its mass wouldn't be completely unmanageable. It completely avoids relativistic effects. Other than initial set-up time and costs, why wouldn't this work?

Edit: What about using the energy level of the nanotube? Put the nanotube up to a charge state where the next electron requires a different amount of energy to add. Since you cannot put electrons into an already occupied level by the Pauli exclusion principle, you could measure at the far end the change in the charge state, or break the Pauli exclusion principle using temporal delay.
In the Midwest, people have a fear of encountering snapping turtles while swimming in local ponds, lakes and rivers. While snapping turtles are not aggressive animals, researchers warn not to approach the animals if they are spotted nearby. Credit: Bill Peterman/University of Missouri

Now in a new study, a University of Missouri researcher has found that snapping turtles are surviving in urban areas as their natural habitats are being polluted or developed for construction projects. One solution is for people to stop using so many chemicals that are eventually dumped into the waterways, the scientist said. "Snapping turtles are animals that can live in almost any aquatic habitat as long as their basic needs for survival are met," said Bill Peterman, a post-doctoral researcher in the Division of Biological Sciences at MU. "Unfortunately, suitable aquatic habitats for turtles are being degraded by pollution or completely lost due to development. We found that snapping turtles can persist in urbanized areas, despite the potential for more interaction with humans." However, even though turtles are living in urban areas, Peterman says people have nothing to fear. "Everyone has a snapping turtle story, but some are just too far-fetched and lead to false accusations," Peterman said. "In reality, snapping turtles aren't aggressive animals and won't bite unless they are provoked. So, if you should happen to see one around your property, simply leave it alone and let it go about its business." "While we didn't study whether the snapping turtle populations were increasing or decreasing, we regularly saw hatchling and juvenile snapping turtles," Peterman said. "Snapping turtles may not be the first animals that come to mind when thinking about urban wildlife, but if we continue to improve waterways in more places, such as big cities, then the species can coexist peacefully."
The study, "Movement and Habitat Use of the Snapping Turtle in an Urban Landscape," was published in Urban Ecosystems and was co-authored by Travis Ryan, associate professor and chair of the Department of Biological Science at Butler University; Jessica Stephens, from the Department of Plant Biology at the University of Georgia-Athens; and Sean Sterrett, from the School of Forestry and Natural Resources at the University of Georgia-Athens. This study is part of ongoing research in urban ecology, conducted through Butler University's Center for Urban Ecology. Christian Basi | EurekAlert!
Effect of Different Waste Recovery Systems on the Overall Waste Generation Rates for an Advanced Life Support System

This work demonstrates how studies of life support systems can be used to advance the understanding of environmental principles. Efficient waste recovery systems that are developed for the Advanced Life Support System (ALSS) used in space exploration can be utilised on Earth. As an example, we explored three different ALSS scenarios, each having different waste recovery technologies. The results are compared in terms of the overall waste generation rates. It is concluded that physicochemical waste recovery systems, with their low level of uncertainty in operating conditions and high recovery efficiencies, are the best choice for a 600 day mission to Mars.

Keywords: Simulation-Based Optimization, Technology Comparison, Design under Uncertainty, Crew Diet Utilization, Waste Utilization

Published in: International Journal of Environment and Pollution, 29(1, 2, 3). DOI: 10.1504/IJEP.2007.012805. Inderscience Enterprises Ltd.

ALS NSCORT Project Number: Project 15 - Simulation Based Optimization Approach to Model and Design an Advanced Life Support System (ALS NSCORT Series)

Contact: Dave Kotterman, email@example.com

Copyright 2005 Inderscience Enterprises, Ltd. For more information please visit the publisher's website: http://www.inderscience.com/index.php

This article is not available through e-pubs. Current Purdue University Faculty, Staff and Students may also access the full-text, electronic version of the article at: http://dx.doi.org/10.1504/IJEP.2007.012805
The compressibility factor (Z), also known as the compression factor or the gas deviation factor, is a correction factor which describes the deviation of a real gas from ideal gas behavior. It is simply defined as the ratio of the molar volume of a gas to the molar volume of an ideal gas at the same temperature and pressure. It is a useful thermodynamic property for modifying the ideal gas law to account for real gas behavior. In general, deviation from ideal behavior becomes more significant the closer a gas is to a phase change, the lower the temperature or the larger the pressure. Compressibility factor values are usually obtained by calculation from equations of state (EOS), such as the virial equation, which take compound-specific empirical constants as input. For a gas that is a mixture of two or more pure gases (air or natural gas, for example), the gas composition must be known before compressibility can be calculated. Alternatively, the compressibility factor for specific gases can be read from generalized compressibility charts that plot Z as a function of pressure at constant temperature. The compressibility factor should not be confused with the compressibility (also known as the coefficient of compressibility or isothermal compressibility) of a material, which is the measure of the relative volume change of a fluid or solid in response to a pressure change.

Definition and physical significance

In thermodynamics and engineering, the compressibility factor is frequently defined as

Z = p V_m / (R T)

where p is the pressure, V_m is the molar volume of the gas, T is the absolute temperature and R is the gas constant. In statistical mechanics the description is

Z = p V / (N k T)

where V is the volume, N is the number of molecules and k is the Boltzmann constant. For an ideal gas the compressibility factor is Z = 1 by definition. In many real-world applications, requirements for accuracy demand that deviations from ideal gas behaviour, i.e. real gas behaviour, are taken into account. The value of Z generally increases with pressure and decreases with temperature. At high pressures molecules are colliding more often.
This allows repulsive forces between molecules to have a noticeable effect, making the molar volume of the real gas (V_m) greater than the molar volume of the corresponding ideal gas ((V_m)_ideal = R T / p), which causes Z to exceed one. When pressures are lower, the molecules are free to move. In this case attractive forces dominate, making Z < 1. The closer the gas is to its critical point or its boiling point, the more Z deviates from the ideal case. The compressibility factor is linked to the fugacity coefficient φ by the relation

ln φ = ∫₀^p ((Z − 1) / p) dp

Generalized compressibility factor graphs for pure gases

The unique relationship between the compressibility factor and the reduced temperature, T_r, and the reduced pressure, p_r, was first recognized by Johannes Diderik van der Waals in 1873 and is known as the two-parameter principle of corresponding states. The principle of corresponding states expresses the generalization that the properties of a gas which are dependent on intermolecular forces are related to the critical properties of the gas in a universal way. That provides a most important basis for developing correlations of molecular properties. As for the compressibility of gases, the principle of corresponding states indicates that any pure gas at the same reduced temperature, T_r, and reduced pressure, p_r, should have the same compressibility factor. The reduced temperature and pressure are defined by

T_r = T / T_c and p_r = p / p_c

Here T_c and p_c are known as the critical temperature and critical pressure of a gas. They are characteristics of each specific gas, with T_c being the temperature above which it is not possible to liquefy a given gas and p_c the minimum pressure required to liquefy a given gas at its critical temperature. Together they define the critical point of a fluid, above which distinct liquid and gas phases of a given fluid do not exist. The pressure-volume-temperature (PVT) data for real gases varies from one pure gas to another.
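The reduced quantities just defined are a simple normalisation and can be computed directly. In this sketch the function name is invented; the critical constants for N2 (126.2 K, 34.0 bar) are the values quoted later in this article, and the chosen state point is arbitrary.

```python
# Sketch of the two-parameter corresponding-states reduction: reduced
# temperature and pressure are the actual values divided by the critical
# values, T_r = T/T_c and p_r = p/p_c. Two different gases at equal
# (T_r, p_r) are expected to share the same compressibility factor Z.

def reduced_state(T_kelvin, p_bar, T_crit, p_crit):
    """Return (T_r, p_r) for a gas with critical constants T_crit, p_crit."""
    return T_kelvin / T_crit, p_bar / p_crit

# N2 critical constants from the text: T_c = 126.2 K, p_c = 34.0 bar.
# The state point (160 K, 17 bar) is an arbitrary example.
T_r, p_r = reduced_state(160.0, 17.0, T_crit=126.2, p_crit=34.0)
print(T_r, p_r)
```

A generalized compressibility chart is then entered with this (T_r, p_r) pair rather than with the raw temperature and pressure.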
However, when the compressibility factors of various single-component gases are graphed versus pressure along with temperature isotherms, many of the graphs exhibit similar isotherm shapes. In order to obtain a generalized graph that can be used for many different gases, the reduced pressure and temperature, pr and Tr, are used to normalize the compressibility factor data. Figure 2 is an example of a generalized compressibility factor graph derived from hundreds of experimental PVT data points of 10 pure gases, namely methane, ethane, ethylene, propane, n-butane, i-pentane, n-hexane, nitrogen, carbon dioxide and steam. There are more detailed generalized compressibility factor graphs based on as many as 25 or more different pure gases, such as the Nelson-Obert graphs. Such graphs are said to have an accuracy within 1-2 percent for Z values greater than 0.6 and within 4-6 percent for Z values of 0.3-0.6. The generalized compressibility factor graphs may be considerably in error for strongly polar gases, which are gases for which the centers of positive and negative charge do not coincide. In such cases the estimate for Z may be in error by as much as 15-20 percent. The quantum gases hydrogen, helium, and neon do not conform to the corresponding-states behavior, and the reduced pressure and temperature for those three gases should be redefined as Tr = T/(Tc + 8) and pr = p/(pc + 8) to improve the accuracy of predicting their compressibility factors when using the generalized graphs, where the temperatures are in kelvins and the pressures are in atmospheres. The virial equation is especially useful to describe the causes of non-ideality at a molecular level (very few gases are mono-atomic) as it is derived directly from statistical mechanics: Z = pVm/(RT) = 1 + B/Vm + C/Vm² + D/Vm³ + …, where the coefficients in the numerators are known as virial coefficients and are functions of temperature. The virial coefficients account for interactions between successively larger groups of molecules. 
For example, B accounts for interactions between pairs, C for interactions between three gas molecules, and so on. Because interactions between large numbers of molecules are rare, the virial equation is usually truncated after the third term. The compressibility factor can also be linked to the intermolecular force potential φ; the Real gas article features more theoretical methods to compute compressibility factors. Physical reason for temperature and pressure dependence Deviations of the compressibility factor, Z, from unity are due to attractive and repulsive intermolecular forces. At a given temperature and pressure, repulsive forces tend to make the volume larger than for an ideal gas; when these forces dominate, Z is greater than unity. When attractive forces dominate, Z is less than unity. The relative importance of attractive forces decreases as temperature increases (see effect on gases). As seen above, the behavior of Z is qualitatively similar for all gases. Molecular nitrogen, N2, is used here to further describe and understand that behavior. All data used in this section were obtained from the NIST Chemistry WebBook. It is useful to note that for N2 the normal boiling point of the liquid is 77.4 K and the critical point is at 126.2 K and 34.0 bar. The figure on the right shows an overview covering a wide temperature range. At low temperature (100 K), the curve has a characteristic check-mark shape; the rising portion of the curve is very nearly directly proportional to pressure. At intermediate temperature (160 K), there is a smooth curve with a broad minimum; although the high pressure portion is again nearly linear, it is no longer directly proportional to pressure. Finally, at high temperature (400 K), Z is above unity at all pressures. For all curves, Z approaches the ideal gas value of unity at low pressure and exceeds that value at very high pressure. 
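As an illustrative aside (not from the original text), the truncated virial expansion described above is straightforward to evaluate; the coefficient values used in the check below are placeholders, not measured data.

```python
def virial_z(v_m, b, c):
    """Z = 1 + B/Vm + C/Vm^2: the virial equation truncated after the third term.

    b and c are the temperature-dependent second and third virial
    coefficients; v_m is the molar volume (consistent volume units throughout).
    """
    return 1.0 + b / v_m + c / v_m ** 2

# With both coefficients zero the gas is ideal and Z == 1.
z_ideal = virial_z(1.0, 0.0, 0.0)
```

In practice B(T) and C(T) would be taken from compound-specific tabulations, which is what the article means by "compound-specific empirical constants as input".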
To better understand these curves, a closer look at the behavior for low temperature and pressure is given in the second figure. All of the curves start out with Z equal to unity at zero pressure and Z initially decreases as pressure increases. N2 is a gas under these conditions, so the distance between molecules is large, but becomes smaller as pressure increases. This increases the attractive interactions between molecules, pulling the molecules closer together and causing the volume to be less than for an ideal gas at the same temperature and pressure. Higher temperature reduces the effect of the attractive interactions and the gas behaves in a more nearly ideal manner. As the pressure increases, the gas eventually reaches the gas-liquid coexistence curve, shown by the dashed line in the figure. When that happens, the attractive interactions have become strong enough to overcome the tendency of thermal motion to cause the molecules to spread out; so the gas condenses to form a liquid. Points on the vertical portions of the curves correspond to N2 being partly gas and partly liquid. On the coexistence curve, there are then two possible values for Z, a larger one corresponding to the gas and a smaller value corresponding to the liquid. Once all the gas has been converted to liquid, the volume decreases only slightly with further increases in pressure; then Z is very nearly proportional to pressure. As temperature and pressure increase along the coexistence curve, the gas becomes more like a liquid and the liquid becomes more like a gas. At the critical point, the two are the same. So for temperatures above the critical temperature (126.2 K), there is no phase transition; as pressure increases the gas gradually transforms into something more like a liquid. Just above the critical point there is a range of pressure for which Z drops quite rapidly (see the 130 K curve), but at higher temperatures the process is entirely gradual. 
The final figure shows the behavior at temperatures well above the critical temperature. The repulsive interactions are essentially unaffected by temperature, but the attractive interactions have less and less influence. Thus, at sufficiently high temperature, the repulsive interactions dominate at all pressures. This can be seen in the graph showing the high temperature behavior. As temperature increases, the initial slope becomes less negative, the pressure at which Z is a minimum gets smaller, and the pressure at which repulsive interactions start to dominate, i.e. where Z goes from less than unity to greater than unity, gets smaller. At the Boyle temperature (327 K for N2), the attractive and repulsive effects cancel each other at low pressure. Then Z remains at the ideal gas value of unity up to pressures of several tens of bar. Above the Boyle temperature, the compressibility factor is always greater than unity and increases slowly but steadily as pressure increases. It is extremely difficult to generalize at what pressures or temperatures the deviation from the ideal gas becomes important. As a rule of thumb, the ideal gas law is reasonably accurate up to a pressure of about 2 atm, and even higher for small non-associating molecules. For methyl chloride, for example, a highly polar molecule with significant intermolecular forces, the experimental compressibility factor at a pressure of 10 atm and a temperature of 100 °C deviates appreciably from unity, whereas for air (small non-polar molecules) at approximately the same conditions the deviation is much smaller (see table below for 10 bars, 400 K). Compressibility of air Normal air comprises, in crude numbers, 80 percent nitrogen (N2) and 20 percent oxygen (O2). Both molecules are small and non-polar (and therefore non-associating). We can therefore expect that the behaviour of air within broad temperature and pressure ranges can be approximated as an ideal gas with reasonable accuracy. 
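The Boyle temperature mentioned above can be estimated from a simple model. For a van der Waals gas the second virial coefficient is B(T) = b − a/(RT), which vanishes at T_B = a/(Rb). The a and b constants for N2 below are standard van der Waals values (an assumption here, not from the article), and the crude model overshoots the measured 327 K quoted in the text.

```python
R = 8.314        # universal gas constant, J/(mol*K)
A_VDW = 0.1370   # Pa*m^6/mol^2, van der Waals 'a' for N2 (assumed value)
B_VDW = 3.87e-5  # m^3/mol, van der Waals 'b' for N2 (assumed value)

# Boyle temperature: B(T) = b - a/(R*T) = 0  =>  T_B = a/(R*b)
t_boyle = A_VDW / (R * B_VDW)  # roughly 426 K, vs the measured 327 K
```

The ~100 K discrepancy is itself instructive: the van der Waals equation captures the existence of a Boyle temperature but not its precise value.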
Experimental values for the compressibility factor confirm this. [Table of compressibility factor values Z(T, p) for air, not preserved here.] The values are calculated from values of pressure, volume (or density), and temperature in Vasserman, Kazavchinskii, and Rabinovich, "Thermophysical Properties of Air and Air Components," Moscow, Nauka, 1966, and NBS-NSF Trans. TT 70-50095, 1971; and Vasserman and Rabinovich, "Thermophysical Properties of Liquid Air and Its Components," Moscow, 1968, and NBS-NSF Trans. 69-55092, 1970.
- Properties of Natural Gases. Archived 2011-02-06 at the Wayback Machine. Includes a chart of compressibility factors versus reduced pressure and reduced temperature (on the last page of the PDF document).
- Zucker, Robert D.; Biblarz, Oscar (2002). Fundamentals of Gas Dynamics (2nd ed.). Wiley. ISBN 0-471-05967-6. Page 327.
- McQuarrie, Donald A.; Simon, John D. (1999). Molecular Thermodynamics. University Science Books. ISBN 1-891389-05-X. Page 55.
- Y.V.C. Rao (1997). Chemical Engineering Thermodynamics. Universities Press (India). ISBN 81-7371-048-1.
- Smith, J.M.; et al. (2005). Introduction to Chemical Engineering Thermodynamics (7th ed.). McGraw-Hill. ISBN 0-07-310445-0. Page 73.
- NIST Chemistry WebBook.
- Perry's Chemical Engineers' Handbook (6th ed.). McGraw-Hill. 1984. ISBN 0-07-049479-7. Page 3-268.
- Perry's Chemical Engineers' Handbook (6th ed.). McGraw-Hill. 1984. ISBN 0-07-049479-7. Page 3-162.
- Compressibility factor (gases). A Citizendium article.
- Real Gases. Includes a discussion of compressibility factors.
- Compressibility factor calculator (gases). Engineering Units article.
- Determine compressibility factor. Engineering Units article.
The Insects and Arachnids of Canada Series, Part 25, focuses on a group of Canadian and Alaskan weevils in the subfamily Entiminae. Several species of this broad-nosed weevil subfamily are detrimental to agriculture and forestry. This handbook provides information about 49 genera and 123 species. It includes a key to the subfamilies of the Canadian Curculionidae, a key to the genera of Entiminae, and keys to the species in each genus, where required. The volume describes each species, with observations on weevil biology and host plants, and provides maps illustrating species distribution.
If time doesn't exist for a photon, how could anything ever "happen" to it? The concept of photons running with stopped clocks is something that is pulled straight out of relativity; the faster you're moving, the slower your onboard clocks run, and the closer to the speed of light you're operating, the more sluggish they get. Once you reach the speed of light, your clock runs infinitely slowly; for practical purposes, we can say that time doesn't flow for the photon. As with all things relativity, this isn't an absolute statement: light still has a finite speed, and we can observe light taking fixed amounts of time to traverse large distances. When light goes zipping around our Universe, it is physically moving through space at a speed of 186,000 miles every second. But if you could affix a clock to it, an observer that's not moving at the speed of light would not see the clock moving forwards the way their own clocks do. A hypothetical person moving at the speed of light wouldn't notice anything weird with their clock, but what they might notice is that the Universe is full of things to smash into. No matter how fast you're going, if there's something in front of you, and you can't dodge it, you will hit it. This is as true for humans as it is for light, and light is even less capable of dodging an oncoming object than we humans are. Light always travels in locally straight lines; the only way to bend light is to make a curve in the shape of space. A photon will then follow that curve, but there's no onboard navigation. Photons are effectively stuck playing the world's most obnoxious game of bumper cars, continually bouncing from impact to impact. From our non-speedy perspective, the clocks on photons do not tick forward between impacts, so if the photon has the good fortune to get re-emitted by whatever it ran into, it will, from our viewpoint, instantaneously smash directly into something else without its onboard clock ticking onwards at all. 
The photon may not get re-emitted by whatever it ran into (this is one way to get rid of a photon). The energy of whatever it hit will increase, so the energy isn't lost. However, if it hits something particularly cold, the object won't be radiating much, and the photon's energy will be a convenient donation. More commonly, after some amount of time, a new photon will be produced at a different energy level, carrying energy away from whatever the original photon punched itself into earlier. That new photon has an equally short apparent flight until it smashes into something else. It's not the most glamorous of paths through the Universe, but a continual ricocheting from solid matter to solid matter is how photons in our Universe go about it.
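The "faster you move, the slower your onboard clocks" claim at the start of the piece can be quantified with the Lorentz factor γ = 1/√(1 − v²/c²), which diverges as v approaches c. This sketch is my own illustration, not from the post.

```python
import math

C = 299_792_458.0  # speed of light in m/s (~186,000 miles per second)

def lorentz_gamma(v):
    """Time-dilation factor for speed v < c; grows without bound as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds gamma is indistinguishable from 1; at 99% of c an
# onboard clock ticks roughly 7 times slower than a stationary one.
slowdown = lorentz_gamma(0.99 * C)
```

At v = c the expression divides by zero, which is the mathematical face of the "stopped clock" in the text: the factor is undefined, and no finite amount of onboard time passes.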
The authors explain at length the principles of chemical kinetics and approaches to computerized calculations in the modern software suites Mathcad and Maple. Mathematics is crucial in determining correlations in chemical processes and requires various numerical approaches. Significant issues often arise with mathematical formalizations of chemical problems, and many kinetic problems can't be solved without computers. Numerous problems encountered in kinetics calculations are given, with detailed descriptions of the numerical tools. Special attention is given to electrochemical reactions, which fills a gap in existing texts that do not cover this topic in detail. The material demonstrates how these suites provide quick and precise behavior predictions for a system over time (for postulated mechanisms). Examples, e.g., oscillating and non-isothermal reactions, help explain the use of Mathcad more efficiently. Also included are the results of the authors' own research toward effective computations. Publisher: Springer Verlag GmbH Number of pages: 344 Weight: 545 g Dimensions: 235 x 155 x 19 mm Edition: 2011 ed.
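As an illustration of the kind of numerical kinetics calculation the blurb describes delegating to Mathcad or Maple (this sketch is not taken from the book), a first-order decay dA/dt = −kA can be integrated with a simple explicit Euler scheme:

```python
def euler_first_order(a0, k, dt, steps):
    """Explicit Euler integration of dA/dt = -k*A; returns A after steps*dt."""
    a = a0
    for _ in range(steps):
        a += dt * (-k * a)
    return a

# With k = 1/s, A0 = 1 and a small step, one time unit of integration
# approaches the analytic value A0*exp(-k*t) = exp(-1) ~ 0.3679.
approx = euler_first_order(1.0, 1.0, 0.001, 1000)
```

Real mechanisms couple many such equations, and systems like oscillating reactions are often stiff, which is why the text leans on dedicated numerical solvers rather than hand calculation.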
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy. If the quadrilateral is given with its four vertices A, B, C, and D in order, then the theorem states that |AC|·|BD| = |AB|·|CD| + |BC|·|AD|, where the vertical lines denote the lengths of the line segments between the named vertices. In the context of geometry, the above equality is often simply written as AC·BD = AB·CD + BC·AD. This relation may be verbally expressed as follows:
- If a quadrilateral is inscribable in a circle, then the product of the measures of its diagonals is equal to the sum of the products of the measures of the pairs of opposite sides.
Moreover, the converse of Ptolemy's theorem is also true:
- In a quadrilateral, if the sum of the products of its two pairs of opposite sides is equal to the product of its diagonals, then the quadrilateral can be inscribed in a circle.
Ptolemy's theorem yields as a corollary a pretty theorem regarding an equilateral triangle inscribed in a circle. Given an equilateral triangle inscribed in a circle and a point on the circle, the distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices. Proof: follows immediately from Ptolemy's theorem. Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to a, then the length of the diagonal is equal to a√2 according to the Pythagorean theorem, and the relation obviously holds. 
More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d, then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of the diagonals is then d², and the right-hand side of Ptolemy's relation is the sum a² + b². Copernicus – who used Ptolemy's theorem extensively in his trigonometrical work – refers to this result as a 'Porism' or self-evident corollary:
- Furthermore it is clear (manifestum est) that when the chord subtending an arc has been given, that chord too can be found which subtends the rest of the semicircle.
A more interesting example is the relation between the length a of the side and the (common) length b of the 5 chords in a regular pentagon. In this case the relation reads b² = a² + ab, which yields b/a equal to the golden ratio φ = (1 + √5)/2. Side of decagon: if now diameter AF is drawn bisecting DC, so that DF and CF are sides c of an inscribed decagon, Ptolemy's theorem can again be applied – this time to cyclic quadrilateral ADFC with diameter d as one of its diagonals – whence the side c of the inscribed decagon is obtained in terms of the circle diameter and the golden ratio. Pythagoras's theorem applied to right triangle AFD then yields b in terms of the diameter, and the side a of the pentagon is thereafter calculated from it. As Copernicus (following Ptolemy) wrote,
- "The diameter of a circle being given, the sides of the triangle, tetragon, pentagon, hexagon and decagon, which the same circle circumscribes, are also given."
Proof by similarity of triangles: let ABCD be a cyclic quadrilateral. On the chord BC, the inscribed angles ∠BAC = ∠BDC, and on AB, ∠ADB = ∠ACB. Construct K on AC such that ∠ABK = ∠CBD; since ∠ABK + ∠CBK = ∠ABC = ∠CBD + ∠ABD, ∠CBK = ∠ABD. Now, by common angles, △ABK is similar to △DBC, and likewise △ABD is similar to △KBC. Thus AK/AB = CD/BD and CK/BC = DA/BD; equivalently, AK·BD = AB·CD and CK·BD = BC·DA. 
By adding the two equalities we have AK·BD + CK·BD = AB·CD + BC·DA, and factorizing this gives (AK + CK)·BD = AB·CD + BC·DA. But AK + CK = AC, so AC·BD = AB·CD + BC·DA, Q.E.D. The proof as written is only valid for simple cyclic quadrilaterals. If the quadrilateral is self-crossing, then K will be located outside the line segment AC. But in this case AK − CK = ±AC, giving the expected result. Proof by trigonometric identities: let the inscribed angles subtended by AB, BC and CD be, respectively, α, β and γ, and let the radius of the circle be R. Then AB = 2R sin α, BC = 2R sin β, CD = 2R sin γ, AD = 2R sin(α + β + γ), and the diagonals are AC = 2R sin(α + β) and BD = 2R sin(β + γ). The original equality to be proved is transformed to sin(α + β) sin(β + γ) = sin α sin γ + sin β sin(α + β + γ), from which the factor 4R² has disappeared by dividing both sides of the equation by it. Now by using the sum formulae sin(x + y) = sin x cos y + cos x sin y and cos(x + y) = cos x cos y − sin x sin y, it is trivial to show that both sides of the above equation are equal to sin α cos β sin β cos γ + sin α cos²β sin γ + cos α sin²β cos γ + cos α sin β cos β sin γ. Proof by inversion: choose an auxiliary circle of radius r centered at D, with respect to which the circumcircle of ABCD is inverted into a line (see figure). Then A, B and C map to points A', B' and C' on that line, with B' between A' and C', so that A'B' + B'C' = A'C'. Without loss of generality the auxiliary circle has radius r, and under the inversion distances transform as A'B' = r²·AB/(DA·DB), and similarly for B'C' and A'C'. Substituting and multiplying the previous relation by DA·DB·DC/r² yields Ptolemy's equality. In the case of a circle of unit diameter, the sides of any cyclic quadrilateral ABCD are numerically equal to the sines of the inscribed angles which they subtend. Similarly the diagonals are equal to the sine of the sum of whichever pair of angles they subtend. We may then write Ptolemy's theorem in the following trigonometric form: sin(α + β) sin(β + γ) = sin α sin γ + sin β sin(α + β + γ). Applying certain conditions to the subtended angles, it is possible to derive a number of important corollaries using the above as our starting point. In what follows it is important to bear in mind that the sum of the four subtended angles is 180°. Corollary 1: Pythagoras's theorem. Let α = γ and β = δ. Then α + β = 90° (since opposite angles of a cyclic quadrilateral are supplementary), the quadrilateral is a rectangle, and the theorem reduces to sin²α + cos²α = 1. Corollary 2: the law of cosines. Let β = δ. The rectangle of Corollary 1 is now a symmetrical trapezium with equal diagonals and a pair of equal sides. 
The parallel sides differ in length, and it is easier in this case to revert to the standard statement of Ptolemy's theorem, which yields the cosine rule for triangle ABC. Corollary 3 gives the compound angle sine (+): sin(α + β) = sin α cos β + cos α sin β. Corollary 4 gives the compound angle sine (−): sin(α − β) = sin α cos β − cos α sin β. This derivation corresponds to the Third Theorem as chronicled by Copernicus following Ptolemy in Almagest. In particular, if the sides of a pentagon (subtending 36° at the circumference) and of a hexagon (subtending 30° at the circumference) are given, a chord subtending 6° may be calculated. This was a critical step in the ancient method of calculating tables of chords. Corollary 5 is the core of the Fifth Theorem as chronicled by Copernicus following Ptolemy in Almagest: the compound angle cosine (+), cos(α + β) = cos α cos β − sin α sin β. Despite lacking the dexterity of our modern trigonometric notation, it should be clear from the above corollaries that in Ptolemy's theorem (or more simply the Second Theorem) the ancient world had at its disposal an extremely flexible and powerful trigonometric tool which enabled the cognoscenti of those times to draw up accurate tables of chords (corresponding to tables of sines) and to use these in their attempts to understand and map the cosmos as they saw it. Since tables of chords were drawn up by Hipparchus three centuries before Ptolemy, we must assume he knew of the 'Second Theorem' and its derivatives. Following the trail of ancient astronomers, history records the star catalogue of Timocharis of Alexandria. If, as seems likely, the compilation of such catalogues required an understanding of the 'Second Theorem', then the true origins of the latter disappear thereafter into the mists of antiquity, but it cannot be unreasonable to presume that the astronomers, architects and construction engineers of ancient Egypt may have had some knowledge of it. 
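The theorem can be checked numerically. The sketch below (my own illustration, with arbitrarily chosen vertex angles) places four points in order on the unit circle and verifies both Ptolemy's equality and the pentagon relation b² = a² + ab quoted earlier.

```python
import math

def chord(t1, t2):
    """Distance between the points at angles t1 and t2 on the unit circle."""
    return math.hypot(math.cos(t1) - math.cos(t2), math.sin(t1) - math.sin(t2))

# Four vertices in cyclic order (angles in radians, chosen arbitrarily).
ta, tb, tc, td = 0.3, 1.2, 2.9, 5.0

lhs = chord(ta, tc) * chord(tb, td)                              # diagonals
rhs = chord(ta, tb) * chord(tc, td) + chord(tb, tc) * chord(td, ta)  # sides
diff = abs(lhs - rhs)  # zero up to floating-point rounding

# Regular pentagon: side a spans 72 deg of arc, chord b spans 144 deg;
# the relation b^2 = a^2 + a*b should hold, i.e. b/a is the golden ratio.
a = chord(0.0, 2 * math.pi / 5)
b = chord(0.0, 4 * math.pi / 5)
pentagon_residual = abs(b ** 2 - (a ** 2 + a * b))
```

Any other choice of four increasing angles gives the same zero residual, which is exactly the content of the theorem; perturbing one point off the circle makes `diff` strictly positive, in line with Ptolemy's inequality below.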
The equation in Ptolemy's theorem is never true for non-cyclic quadrilaterals. Ptolemy's inequality is an extension of this fact, and a more general form of Ptolemy's theorem. It states that, given a quadrilateral ABCD, AC·BD ≤ AB·CD + BC·DA, where equality holds if and only if the quadrilateral is cyclic. The equality case is equivalent to Ptolemy's theorem.
- C. Ptolemy, Almagest, Book 1, Chapter 10.
- Wilson, Jim. "Ptolemy's Theorem." Link verified 2009-04-08.
- De Revolutionibus Orbium Coelestium: page 37. See the last two lines of this page. Copernicus refers to Ptolemy's theorem as "Theorema Secundum".
- Proposition 8 in Book XIII of Euclid's Elements proves by similar triangles the same result: namely that length a (the side of the pentagon) divides length b (joining alternate vertices of the pentagon) in "mean and extreme ratio".
- And in analogous fashion Proposition 9 in Book XIII of Euclid's Elements proves by similar triangles that length c (the side of the decagon) divides the radius in "mean and extreme ratio".
- An interesting article on the construction of a regular pentagon and determination of side length can be found at the following reference.
- De Revolutionibus Orbium Coelestium: Liber Primus: Theorema Primum.
- Alsina, Claudi; Nelsen, Roger B. (2010), Charming Proofs: A Journey Into Elegant Mathematics, Dolciani Mathematical Expositions, 42, Mathematical Association of America, p. 112, ISBN 9780883853481.
- In De Revolutionibus Orbium Coelestium, Copernicus does not refer to Pythagoras's theorem by name but uses the term 'Porism' – a word which in this particular context would appear to denote an observation on – or obvious consequence of – another existing theorem. The 'Porism' can be viewed on pages 36 and 37 of DROC (Harvard electronic copy).
- "Sine, Cosine, and Ptolemy's Theorem". 
- To understand the Third Theorem, compare the Copernican diagram shown on page 39 of the Harvard copy of De Revolutionibus to that for the derivation of sin(A-B) found in the above cut-the-knot web page - Coxeter, H. S. M. and S. L. Greitzer (1967) "Ptolemy's Theorem and its Extensions." §2.6 in Geometry Revisited, Mathematical Association of America pp. 42–43. - Copernicus (1543) De Revolutionibus Orbium Coelestium, English translation found in On the Shoulders of Giants (2002) edited by Stephen Hawking, Penguin Books ISBN 0-14-101571-3 - Amarasinghe, G. W. I. S. (2013) A Concise Elementary Proof for the Ptolemy's Theorem, Global Journal of Advanced Research on Classical and Modern Geometries(GJARCMG) 2(1): 20–25 (pdf). - Proof of Ptolemy's Theorem for Cyclic Quadrilateral - MathPages – On Ptolemy's Theorem - Elert, Glenn (1994). "Ptolemy's Table of Chords". E-World. - Ptolemy's Theorem at cut-the-knot - Compound angle proof at cut-the-knot - Ptolemy's Theorem on PlanetMath - Ptolemy Inequality on MathWorld - De Revolutionibus Orbium Coelestium at Harvard. - Deep Secrets: The Great Pyramid, the Golden Ratio and the Royal Cubit - Ptolemy's Theorem by Jay Warendorff, The Wolfram Demonstrations Project. - Book XIII of Euclid's Elements
The BP oil spill in the Gulf of Mexico during 2010 was one of the worst environmental accidents in recent history and harmed over 100,000 marine animals. While massive oil spills don't happen every year, smaller oil incidents still occur daily at varying scales. When they occur, they spell disaster for oceans and the creatures that live in them. For businesses, oil leaks or spills signal a tremendous loss of money, not only in the product itself but in expensive cleanup efforts. It's a mess no one wants. Researchers at the U.S. Department of Energy's Argonne National Laboratory devised a new technology to offer a cleanup solution that benefits everyone. The Oleo Sponge recovers oil and other petroleum products from water in a way that is easily adaptable. That's right: a sponge that literally removes oil from water. Developed at Argonne National Laboratory in response to the Deepwater Horizon disaster, it can separate oil from water and, when used on a larger scale, will be able to catch oil from the water column as it is spilled beneath the ocean's surface. How does this work? The Oleo Sponge, which looks like a seat cushion, is made out of a unique absorbent foam that loves oil and isn't so into water. This oil-attracting and water-repelling feature helps the material efficiently pull oil from water. For perspective, during the cleanup process, the sponge can take in up to 90 times its own weight in oil. The technology could improve how harbors and ports are cleaned, as well as how oil spills are managed. 
"The Oleo Sponge offers a set of possibilities that, as far as we know, are unprecedented," said co-inventor Seth Darling, a scientist with Argonne's Center for Nanoscale Materials and a fellow of the University of Chicago's Institute for Molecular Engineering. In tests at a giant seawater tank in New Jersey called Ohmsett, the National Oil Spill Response Research & Renewable Energy Test Facility, the Oleo Sponge successfully collected diesel and crude oil from both below and on the water surface. "The material is extremely sturdy. We've run dozens to hundreds of tests, wringing it out each time, and we have yet to see it break down at all," according to Darling. The team is actively looking to commercialize the material; those interested in licensing the technology or collaborating with the laboratory on further development may contact firstname.lastname@example.org. This sponge sets itself apart from most other absorbent sponges because the product minimizes waste in two ways. First, the Oleo Sponge can be wrung out and reused over and over again. Many sorbent technologies become saturated, and then both the sponge and oil go in the trash. Secondly, the oil that is wrung out can be salvaged for future use. This feature incentivizes businesses to clean up the mess more thoroughly because it helps mitigate financial losses. The most distinguishing feature of this new sponge is that it's the first of its kind that can also grab oil under the water's surface. When major oil spills occur, oil doesn't always stick to the top of the water. Oil can drift below the surface and become difficult to capture. Other sponges and methods have a difficult time capturing oil under the surface. 
While the technique of skimming allows for some oil recovery, it can't grab oil under the water and is more limited because it can only be used when the water is calm and there is a thick layer of oil on the surface. Burning the oil is another go-to method, but that doesn't recover any of the oil and releases a significant amount of toxins into the environment. The Oleo Sponge is an environmentally friendly option: it not only retrieves oil in hard-to-reach places, but does so without harming people or the environment.
Found most commonly in these habitats: 0 times found in forest, 1 times found in Rainforest, 2 times found in tropical rainforest, 0 times found in rubber plantation, 1 times found in secondary rainforest, 1 times found in mature wet forest, 0 times found in montane rainforest, 0 times found in relict rainforest on limestone. Found most commonly in these microhabitats: 0 times forest litter, 3 times ex sifted leaf litter, 1 times litter, 1 times leaf litter, 0 times in rotten wood fragment, 1 times ex soil, 0 times 3pm on buttress of fig tree, 0 times 2pm, forager in litter on damp flat ground 6m from rubber tree. Collected most commonly using these methods: 3 times Winkler, 2 times Malaise trap, 1 times pan trap. Elevations: collected from 180 - 1800 meters, 941 meters average. AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb. AntWeb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
<urn:uuid:8d72363f-dc29-4984-a7a5-352485cff662>
2.90625
334
Knowledge Article
Science & Tech.
57.908462
95,581,631
Cambodia, a country located in Southeast Asia, continues to rebuild itself since power passed to the Cambodian People's Party in 1997. The country's environmental statistics rank very low compared to the rest of the world, partly because of its ongoing reliance on coal-fired power plants. While that may not change anytime soon, there are options in the works to make it less harmful. Much of the problem is the lack of electricity in rural areas, where around 80 percent of the population lives. The government has set a goal of electricity reaching 70 percent of all households in Cambodia by 2030. At the moment, 56 percent of the country receives electricity, and that is limited to 34 percent in rural areas. Demand continues to increase faster than the country can supply it. To meet those demands, a coal-fired power plant from GE and Toshiba is being installed in Preah Sihanouk, Cambodia. It would be the third power plant in the area and is estimated to increase domestic energy generation by 10 percent. While adding coal-fired power plants seems like a step in the wrong direction, the country isn't developed enough to run strictly on renewables. Costs and accessibility are the main issues that keep solar and wind generation from becoming a bigger part of the mix; hydropower generates only four percent of the total energy Cambodia produces. Meanwhile, carbon emissions continue to rise rapidly throughout the country: per-capita CO2 emissions stand at 0.34 metric tons and the annual rate of growth is slightly above five percent. To put that in perspective, there were 0.08 metric tons of emissions in 1994, and 2004 saw the biggest change, with a near 15 percent increase. To battle the rising emissions, GE has introduced software for running coal plants, called Predix. Released back in 2015, this software figures out the most efficient way to run these power plants based on weather patterns.
If there are strong winds in the area, for instance, emissions can be curbed by putting strict regulations on driving. Rain would boost hydroelectricity generation, meaning the coal-fired power plants won't have to run as often. GE has invested $1 billion into Predix, software that could change how efficiently much of our equipment runs. Information is sent through the cloud and processed in a centralized area, and new updates will soon have machines like elevators learn how to be efficient on their own.
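The growth figures quoted above (0.08 metric tons per capita in 1994, 0.34 today, growing "slightly above five percent" per year) are compound-growth numbers. The article does not state the time window behind its five-percent figure, so the sketch below only illustrates how such a rate is computed from two data points; the 20-year window is an assumption made for the example.

```java
public class CagrSketch {
    // Compound annual growth rate between two observations:
    // rate = (end / start)^(1 / years) - 1
    static double cagr(double start, double end, int years) {
        return Math.pow(end / start, 1.0 / years) - 1.0;
    }

    public static void main(String[] args) {
        // Hypothetical window: 0.08 t/capita (1994) rising to 0.34 t/capita 20 years later.
        double rate = cagr(0.08, 0.34, 20);
        System.out.printf("Implied annual growth: %.1f%%%n", rate * 100);
    }
}
```

Over this assumed 20-year window the implied rate is about 7.5 percent per year; stretching the same endpoints over roughly 30 years brings the implied rate down to about five percent, so the article's figure is plausible for a longer window.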
<urn:uuid:79303d8c-36d5-426b-b35d-6dc2fd78539d>
3.171875
652
News Article
Science & Tech.
49.207552
95,581,633
Java Applets: Interactive Programming - The purpose of this assignment is to practice the if-else if structure and to create a Tic-Tac-Toe program. This is a two-player game. The first player to go places X's, the second places O's. Once one player gets three X's or O's in a row - either horizontally, vertically, or diagonally - they win and a pop-up window appears declaring the winner. Below is the JFrame that pops up when there is a winner. Right-click on the above links and select to save the target. - Start with the template for designing each area: note that there are many notes inside this template to help you along the way. You can run this code, but you'll only see the startOver button appear. - First start with the title, as this is review of old material. - Then get the buttons for the Tic-Tac-Toe board to appear, with white blanks. - Add event handling for playing Tic-Tac-Toe. - Finally, add code to handle the startOver button. Copyright © 2006-2007: E.S.Boese. All rights reserved.
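The win check is the heart of the event-handling step. The assignment's template is not shown here, so the following is only a hypothetical sketch of the three-in-a-row test in the if-else if style the assignment practices, written against a plain 3x3 array; the class and method names are illustrative, not from the template.

```java
public class TicTacToeCheck {
    // Returns 'X' or 'O' if that player has three in a row
    // (horizontally, vertically, or diagonally), or ' ' if there is no winner yet.
    // The board is assumed to be a 3x3 array holding 'X', 'O', or ' '.
    static char winner(char[][] b) {
        for (int i = 0; i < 3; i++) {
            if (b[i][0] != ' ' && b[i][0] == b[i][1] && b[i][1] == b[i][2]) {
                return b[i][0];                    // three in a row: row i
            } else if (b[0][i] != ' ' && b[0][i] == b[1][i] && b[1][i] == b[2][i]) {
                return b[0][i];                    // three in a row: column i
            }
        }
        if (b[1][1] != ' ' && ((b[0][0] == b[1][1] && b[1][1] == b[2][2])
                            || (b[0][2] == b[1][1] && b[1][1] == b[2][0]))) {
            return b[1][1];                        // one of the two diagonals
        }
        return ' ';
    }

    public static void main(String[] args) {
        char[][] board = {
            {'X', 'O', 'O'},
            {' ', 'X', ' '},
            {'O', ' ', 'X'}
        };
        // X holds the main diagonal here; a real program would now show the winner JFrame.
        System.out.println("Winner: " + winner(board));
    }
}
```

In the assignment itself this check would run inside the buttons' event handler after each move, popping up the winner window whenever the result is not a blank.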
<urn:uuid:8ca153c1-94dd-46b3-9d2e-749875f86956>
3.625
252
Tutorial
Software Dev.
65.53765
95,581,677
Female, under Hindwings
Class: Animals (Animalia) - Jointed Legs (Arthropoda) - Insects (Insecta)
Order: Butterflies & Moths (Lepidoptera)
Family: Geometer (Geometridae: Ennominae: Nacophorini)
Species: Russet Crest-moth (Fisera cf. hypoleuca)
This Photo: Female, Horn
General Species Information: Found on Ellura (in the Murray Mallee). The hindwings trail with a dirty purple band underneath. A brown speckled cream body. Males have bipectinate antennae (2 rows of filaments), while females (shown here) have thread-like (filiform) antennae. The front legs are brown, while the rear 4 are white. A fairly large moth with a wingspan of ~50 mm and body & head length of ~16 mm.
<urn:uuid:cddcd668-7941-4aad-b607-29dc164f37bf>
2.78125
211
Knowledge Article
Science & Tech.
22.263418
95,581,686
“Science is one of the few things in the world that truly transcends national borders…” The science web site EarthSky has featured our “Global Communication and Science” workshops.
- What can lunar eclipses do for science?
- Longest lunar eclipse until 2123
- Helping people become better language-learners
- Red moon over Perth on timeanddate.com
- Countdown to the January 31st total lunar eclipse
- What is the saros cycle and how does it foretell eclipses?
- Coming soon to a sky near you: a teachable moment
- The Great American Eclipse on timeanddate.com
- “Eclipsim”, and other eclipse stories
- Creating new connections across Asia
<urn:uuid:c537e9dd-67f2-4226-87f7-971b0f96154a>
2.671875
158
Content Listing
Science & Tech.
38.901636
95,581,697
Mainz-based physicists involved in detector construction and analysis of future experiments. The SuperKEKB particle accelerator at the KEK research center in Japan has recently reached a major milestone: electrons and positrons have been circulated for the first time around the rings. The accelerator is now being commissioned and the start of data taking is foreseen for 2017. One of the core questions to be investigated in these experiments is why the universe today is filled almost only with matter, while in the Big Bang matter and antimatter should have been created in equal amounts. Physicists at Johannes Gutenberg University Mainz (JGU) are involved in the development of the detector's slow-control system. The group of Professor Concettina Sfienti at the Institute of Nuclear Physics at Mainz University will be working together with some 600 scientists from 23 countries to analyze the data. As the new accelerator is designed to deliver forty times more collisions than its predecessor KEKB, the Belle detector is also being upgraded to cope with the extreme requirements of the modified collider. The German contribution to the new Belle II detector is a high-resolution tracker that sits at the heart of the device and can very precisely record the tracks left by the generated particles; it is accurate to less than half the thickness of a human hair. The team of physicists from Mainz provides the expertise to create the software required to monitor the detector and the readout electronics. This software is used to control the operating parameters of the detector and to continually monitor its efficiency. Although the high collision rate envisaged requires hardware that comes close to the very limits of what is feasible, and is thus extremely expensive, the flip side of the coin is that this should make it possible to detect rare events.
"We have reached an important turning point in the development of the SuperKEKB, an accelerator that will have forty times the luminosity of the most powerful collider ever built. The experiment will supply us with a lot of new highly precise data which may also lead to the discovery of new particles," said Sfienti. Moreover, it is hoped that evidence of very rare events that may have occurred in the early phases of the creation of our universe will be discovered, providing insight into new laws of physics beyond those of the Standard Model. Contact: Professor Dr. Concettina Sfienti, Institute of Nuclear Physics, Johannes Gutenberg University Mainz (JGU), 55099 Mainz, GERMANY, phone +49 6131 39-25841. http://www.uni-mainz.de/presse/20191_ENG_HTML.php – press release; https://www.kek.jp/en/index.html – KEK research center; http://www.kek.jp/en/NewsRoom/Release/20160302163000/ – press release "First turns and successful storage of beams in the SuperKEKB electron and positron rings", KEK. Petra Giegerich | idw - Informationsdienst Wissenschaft
<urn:uuid:4a0a7ba2-a430-4253-83c6-a945e161744c>
2.828125
1,271
Content Listing
Science & Tech.
40.454519
95,581,700
Complex Analysis for Mathematics and Engineering
by John H. Mathews, Russell W. Howell
Publisher: Jones & Bartlett Learning, 2006
Number of pages: 633
This book presents a comprehensive, student-friendly introduction to Complex Analysis concepts. Its clear, concise writing style and numerous applications make the foundations of the subject matter easily accessible to students. Download or read it online for free here:
- by George Cain: The textbook for an introductory course in complex analysis. It covers complex numbers and functions, integration, Cauchy's theorem, harmonic functions, Taylor and Laurent series, poles and residues, the argument principle, and more.
- by Leif Mejlbro (BookBoon): An introductory book on complex functions theory. From the table of contents: Introduction; The Complex Numbers; Basic Topology and Complex Functions; Analytic Functions; Some elementary analytic functions; Index.
- by C.L. Siegel (Tata Institute of Fundamental Research): A systematic study of Riemann matrices which arise in a natural way from the theory of abelian functions. Contents: Abelian Functions; Commutator-algebra of a R-matrix; Division algebras over Q with a positive involution; Cyclic algebras; etc.
- by W W L Chen (Macquarie University): Introduction to some of the basic ideas in complex analysis: complex numbers; foundations of complex analysis; complex differentiation; complex integrals; Cauchy's integral theorem; Cauchy's integral formula; Taylor series; Laurent series; etc.
<urn:uuid:8887621b-38cf-4559-b0e2-3a1e5e7eaa20>
2.765625
326
Content Listing
Science & Tech.
24.303531
95,581,707
Yeast Programmed to Convert Plant Sugars into Oils News Jan 17, 2017 | Original Story From Massachusetts Institute of Technology MIT engineers have genetically reprogrammed a strain of yeast so that it converts sugars to fats much more efficiently, an advance that could make possible the renewable production of high-energy fuels such as diesel. The researchers, led by Gregory Stephanopoulos, the Willard Henry Dow Professor of Chemical Engineering and Biotechnology at MIT, modified the metabolic pathways of yeast that naturally produce large quantities of lipids, to make them about 30 percent more efficient. “We have rewired the metabolism of these microbes to make them capable of producing oils at very high yields,” says Stephanopoulos, who is the senior author of the study. This upgrade could make the production of renewable high-energy fuels economically feasible, and the MIT team is now working on additional improvements that would help get even closer to that goal. “What we’ve done is reach about 75 percent of this yeast’s potential, and there is an additional 25 percent that will be subject of follow-up work,” Stephanopoulos says. Renewable fuels such as ethanol made from corn are useful as gasoline additives for running cars, but for large vehicles like airplanes, trucks, and ships, more powerful fuels such as diesel are needed. “Diesel is the preferred fuel because of its high-energy density and the high efficiency of the engines that run on diesel,” Stephanopoulos says. “The problem with diesel is that so far it is entirely made from fossil fuels.” Efforts to develop engines that run on biodiesel made from used cooking oils have had some success, but cooking oil is a relatively scarce and expensive fuel source. Starches such as sugar cane and corn are cheaper and more plentiful, but these carbohydrates must first be converted into lipids, which can then be turned into high-density fuels such as diesel. 
To achieve this, Stephanopoulos and his colleagues began working with a yeast known as Yarrowia lipolytica, which naturally produces large quantities of lipids. They focused on fully utilizing the electrons generated from the breakdown of glucose, transforming Yarrowia with synthetic pathways that convert surplus NADH, a product of glucose breakdown, to NADPH, which can be used to synthesize lipids. They ended up testing more than a dozen modified synthetic pathways. "It turned out that the combination of two of these pathways gave us the best results that we report in the paper," Stephanopoulos says. "The actual mechanism of why a couple of these pathways work much better than the others is not well-understood." Using this improved pathway, the yeast cells require only two-thirds of the amount of glucose needed by unmodified yeast cells to produce the same amount of oil. While this new glucose-to-lipid conversion process could be economically feasible at current corn-starch prices, the researchers are hoping to make the process even more efficient, Stephanopoulos says. This article has been republished from materials provided by Massachusetts Institute of Technology. Note: material may have been edited for length and content. For further information, please contact the cited source.
<urn:uuid:f5261f37-d371-4d3b-9c05-cc5f39c1e1c3>
3.265625
829
Truncated
Science & Tech.
27.300711
95,581,743
A group of scientists at The Scripps Research Institute (TSRI) and the San Diego Supercomputer Center at the University of California, San Diego (UCSD) have used a powerful laser in combination with innovative quantum mechanical computations to measure the flexibility of mouse antibodies. The new technique, described in an upcoming issue of the journal Proceedings of the National Academy of Sciences, is significant because protein flexibility is believed to play an important role in antibody-antigen recognition, one of the fundamental events in the human immune system. "This is the first time anybody has ever gone into a protein and experimentally measured the frequency of protein vibrations in response to an applied force," says Floyd Romesberg, assistant professor in the Department of Chemistry at The Scripps Research Institute, who led the study. Keith McKeown | EurekAlert!
<urn:uuid:0cca9472-44d3-47bb-9aa6-2140ac957de2>
2.78125
813
Content Listing
Science & Tech.
35.295603
95,581,763
Technology Quarterly. Thought experiments The researchers studied stone tools that were used by people in the Early Ahmarian culture and Oli Scarff/Getty Images ... How To Make Your Kids Smarter: 10 Steps Backed By Science Next step of sociobiological evolution It has been one of the biggest unanswered questions for decades – is there, or Artificial intelligence could evolve to be 'billions of times smarter' than humans. That's ... face of the devastating gender gap in science education, here is a thoughtful, beautiful piece of early science education presented by two women, ... Is a 'science student' smarter than an 'arts student'? Tonium Pacemaker: Pocket turntables 20 Percent of Neanderthal Genome Lives On in Modern Humans, Scientists Find— The authors of the study argue that sexual equality may have proved an evolutionary advantage for Ray Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends Moore's law from integrated circuits to earlier transistors, ... Early humans ate very much like modern pigs and bear and were 'simply acquiring enough 157 Artificial Intelligence Platforms to Help You Grow Your Business Scientists in Dresden have found a single single gene that may be responsible for the large Women who look for partners who are intelligent are less likely themselves to seek a career Analog Summing Demystified Part 2 – Thinking ... Why Smart People Are Stupid Some modern humans have more Neanderthal DNA in their genetic make-up than first thought Masayoshi Son, the CEO of Japanese tech conglomerate Softbank, has been preparing his company for this scenario for quite some time. Sex, drugs and coding: The wild early days of Google Inside the Race to Hack the Human Brain Scientists have found that pigs are smarter than dogs, and can solve problems just as You use recreational drugs AI Technology — What Could Possibly Go Wrong? How do you know that something you cannot see is still present? 
Full object permanence, including the ability to track invisible trajectories of objects ... Benefits & Risks of Artificial Intelligence. “ US President Barack Obama writes with his left hand. REUTERS/Kevin Lamarque Who's smarter ... What is Environmental Science? - Definition and Scope of the Field - Video & Lesson Transcript | Study.com Technological singularity will turn us into super humans some time in the next 12 years, Our library starts simply with First Order concepts, but then builds up to Second and Third Order ideas. Does that yellow Ford see the gray Chevy coming ... A cartoon showing a man addiction to his smartphone The vast number-crunching power of these devices mean that as soon as they are available, cryptocurrencies will be suddenly more vulnerable to attack. Onkyo TX-NR906: The first receiver that lets you adjust each video feed Visual representation of the data on persons with disabilities provided in the text of the article And yet, even as these intelligences outpace human beings in almost every intellectual arena in which they're entered, they seem no closer to being like us, ... As mentioned above, there are advantages to not cooperating, which students of game theory know quite well. The algorithm that can lie and get away with it ... Neanderthal and human skulls Octopuses can out-think humans on some things. Brian Gratwicke on Flickr What is the smartest ... Will a Nicotine Patch Make You Smarter? [Excerpt] Move Over, Coders—Physicists Will Soon Rule Silicon Valley Mice injected with human brain cells grow to have 'half human brains' that make The DeepMind AI mirrors the learning brain in a simple way: it reuses what it 305 best Anthropogenesis & Earliest Migrations images on Pinterest | Human evolution, Archaeology and Early humans Car manufacturers are busy at this very moment building vehicles that we would never call self-conscious. That's because they are being built too well. 
Older than you think: These tools are now known to have been made in Crete Samsung 4K Ultra HD Blu-ray Player cognitive skills rise and fall psychology intelligence graph As part of the BackDoor experiment setup, a 2Watt speaker array system for jamming applications. Neanderthals – not modern humans – were first artists on Earth, experts claim | Science | The Guardian A professor built an AI teaching assistant for his courses — and it could shape the future of education - Business Insider Poster with a quote about the wrong use of technology laptop computer working focus What is Chemical Energy? - Definition & Examples - Video & Lesson Transcript | Study.com Kennis models of Homo sapiens (left) and a Neanderthal man Let's get started with the advantages: Science, Technology, Engineering, and Mathematics have an immense impact on our lives and Opinion: Should we use gene editing to produce disease-free babies? A scientist who helped discover CRISPR weighs in. Tech Scientists began by training a dozen goats to use a two-step process to retrieve A pair of bottlenose dolphins which scientists believe are not as clever as once thought Transforming Science Education snow owl hearable amplifier smart intelligent intelligence man glasses listen think johnny depp transcendence Bone mass was found to be around 20 per cent higher in the foragers - the Anthony Levandowski (right) who has registered the first church of AI says he is Newly discovered neuron could repair the brain, make us smarter - Geek.com Imagine that two people are carving a six-foot slab of wood at the same time. One is using a hand-chisel, the other, a chainsaw. Ultimately it may be possible to meld together human and artificial intelligence
<urn:uuid:3aa3eb29-23d8-4865-a250-c82330ef51b1>
2.71875
1,207
Content Listing
Science & Tech.
43.630331
95,581,766
Presented by: Jacques Vanier (Département de physique, Université de Montréal). The lecture will cover one of the most fascinating subjects: the atomic frequency standards used to measure time with great accuracy. Of all measurements of physical quantities, time, a quantity that we have difficulty defining and interpreting physically, is the quantity that we can measure most accurately. This is due to the practical realization of clocks that use atomic properties. These so-called atomic clocks have made possible the verification of basic physics theories such as relativity. They have also made possible systems such as the Global Positioning System, known as GPS. Such a system, using atomic clocks in Earth satellites, has made accurate positioning everywhere on the surface of the planet an easy operation available to anyone. These standards are also the most precise instruments ever realized in research laboratories and in the national institutes responsible for the standards of the International System of Units (SI). I will describe the physics of operation of these instruments, particularly new developments using optical techniques such as laser cooling, permitting precision of the order of parts in 10^18. I will include in the presentation a short description of the most important applications of these clocks. About the speaker: Jacques Vanier completed his undergraduate studies in physics at the University of Montreal before moving to McGill University for his graduate studies. During his career, he has worked in industry (Varian, Hewlett-Packard), taught physics and established a research laboratory in quantum electronics at Laval University, Québec, Canada. He has worked at the National Research Council of Canada in Ottawa, where he founded the Institute for National Measurement Standards and was its director until 1994.
His research work was oriented towards the understanding and application of quantum electronics phenomena, and he has been a consultant for several companies engaged in the development of atomic clocks. Vanier has also been very active on the academic circuit, giving lectures and presentations at numerous conferences, universities, national institutes and summer schools around the world. He has written more than 120 publications and is the author of review articles and books on masers, lasers and atomic clocks. His two-volume book The Quantum Physics of Atomic Frequency Standards (Adam Hilger, 1989), written with C. Audoin, is recognized as a main reference in the field. He has recently written, with C. Tomescu, a third volume describing recent developments in the field (Taylor & Francis, 2016). He is also the author of The Universe: A Challenge to the Mind (Imperial College Press, 2011), describing in everyday language the physics of the universe we live in. Vanier is a Fellow of the Royal Society of Canada, the American Physical Society and the Institute of Electrical and Electronics Engineers. He has received several awards for his contributions to the field of measurement science. He has remained active in science during his formal retirement and is currently Adjunct Professor in the Physics Department, University of Montreal.
<urn:uuid:81fa0a5f-4b3b-4c4b-9594-e861b4e13c87>
3.15625
618
Truncated
Science & Tech.
20.90181
95,581,794
As part of individual interviews incorporating whole number and rational number tasks, 323 grade 6 children in Victoria, Australia were asked to nominate the larger of two fractions for eight pairs, giving reasons for their choice. All tasks were expected to be undertaken mentally. The relative difficulty of the pairs was found to be close to that predicted, with the exception of fractions with the same numerators and different denominators, which proved surprisingly difficult. Students who demonstrated the greatest success were likely to use benchmark (transitive) and residual thinking. It is hypothesised that the methods of these successful students could form the basis of instructional approaches which may yield the kind of connected understanding promoted in various curriculum documents and required for the development of proportional reasoning in later years.
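The two successful strategies named above can be made concrete. The sketch below is hypothetical, not from the study: benchmark (transitive) thinking settles a pair like 5/8 vs 3/7 by noting that one fraction is above 1/2 and the other below it, and cross-multiplication is used here as the exact fallback that decides pairs like 5/6 vs 7/8, which a child would instead resolve by comparing residuals. Class and method names are illustrative.

```java
public class FractionCompare {
    // Compares a/b with c/d (positive fractions); returns 1, -1 or 0.
    // Benchmark (transitive) shortcut: if one fraction is above 1/2 and the
    // other is not, no exact computation is needed.
    static int compare(long a, long b, long c, long d) {
        boolean firstAboveHalf = 2 * a > b;    // a/b > 1/2  <=>  2a > b
        boolean secondAboveHalf = 2 * c > d;
        if (firstAboveHalf && !secondAboveHalf) return 1;
        if (!firstAboveHalf && secondAboveHalf) return -1;
        // Fallback: cross-multiplication, a/b vs c/d  <=>  a*d vs c*b.
        // (For a pair like 5/6 vs 7/8, residual thinking compares the gaps to
        // one whole, 1/6 and 1/8: the fraction missing less from 1 is larger.)
        return Long.signum(a * d - c * b);
    }

    public static void main(String[] args) {
        System.out.println(compare(5, 8, 3, 7)); // benchmark: 5/8 > 1/2 > 3/7
        System.out.println(compare(5, 6, 7, 8)); // residual: 7/8 is closer to 1
    }
}
```

The benchmark branch is what makes the strategy "transitive": both fractions are compared to a third, easy quantity rather than to each other.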
<urn:uuid:55c0c784-bb33-4044-8786-bb6cdd8962bd>
3.140625
366
Content Listing
Science & Tech.
38.374418
95,581,807
The new millipede that Paul Marek discovered is as pretty as it is dangerous. The thumb-sized millipede that crawls around the forest floor of Southwest Virginia's Cumberland Mountains has more color combinations than any other millipede discovered. Apheloria polychroma, as the millipede is known, also has an enviable trait in the animal world -- it's covered in cyanide, ensuring any bird that snacks on the colorful but lethal invertebrate won't do it a second time. Lots of other millipedes that don't have as much toxic defense mimic Apheloria polychroma's coloring in hopes of avoiding becoming another link in the food chain. This is the 10th species that Marek, an assistant professor in the Virginia Tech College of Agriculture and Life Sciences' Department of Entomology, has discovered and named in recent years. Apheloria polychroma was named for its rainbow of colors and was described by Marek; Jackson Means, a graduate student from Keswick, Virginia; and Derek Hennen, a graduate student from Little Hocking, Ohio. Marek runs the only millipede lab in the United States. The team's findings were recently published in the journal Zootaxa. While Marek's work is focused on small things, his research helps tell the larger story of the quickly changing natural world. By documenting the many living organisms of the planet, he is helping avoid anonymous extinction -- a process in which a species goes extinct before its existence, role in the ecosystem, or potential benefit to humanity is known. "It is imperative to describe and catalog these species so that we know what role they play in the ecosystem -- and what impact we are having on them," said Marek. "This region is ripe with biodiversity and is an excellent living laboratory to do this work." The millipedes that copy Apheloria polychroma use what is called Mullerian mimicry, where different species converge on a shared aposematic (warning signal) to defend themselves against a common predator. 
The more frequently predators encounter what appears to be the same brightly colored unpalatable millipede and memorize its warning colors, the better the collective advertisement of their noxiousness. Besides its colorful exoskeleton, the millipede also serves an important role in the ecosystem as a decomposer, breaking down decaying leaves, wood, and other vegetation to unlock and recycle their nutrients for future generations of forest life. Zeke Barlow | EurekAlert!
<urn:uuid:79042144-50d1-4e8c-80fd-ea52e5a3460d>
3.96875
1,152
Content Listing
Science & Tech.
35.849971
95,581,809
Scientists say they’re studying bighorn sheep on an island off the coast of Mexico to determine the effects of climate change on endangered species. The sheep, brought to Tiburon Island in 1975, are not at risk from disease or predators, said Barry Brook, a researcher with Faculty of 1000, a London-based cooperative of international scientists. Climate change is the only variable threat to the sheep, making them good subjects for a mathematical model aimed at predicting the effects of such change, Brook and fellow researchers from Germany, the United States and Mexico said. One part of the model simulates the effect of increased drought on the sheep’s population, drought being a side-effect of climate change. Because the calculations can be adapted to other species, the study should aid in the conservation of small populations of animals elsewhere on the planet, Brook said.
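As a loose illustration of the kind of projection described (not the researchers' actual model, whose structure and parameters are not given here), a deterministic toy calculation in Python shows how more frequent drought years depress a small population:

```python
def project_population(n0, years, growth=1.08, capacity=650,
                       drought_every=None, drought_growth=0.90):
    """Toy capped-growth projection: in a drought year the population
    shrinks instead of growing. All parameters are purely illustrative."""
    n = n0
    for year in range(1, years + 1):
        drought = drought_every is not None and year % drought_every == 0
        rate = drought_growth if drought else growth
        n = min(capacity, n * rate)
    return n

no_drought = project_population(100, 40)
frequent_drought = project_population(100, 40, drought_every=4)

# More frequent drought years leave a smaller projected population.
assert frequent_drought < no_drought
```

A real demographic model would add stochastic rainfall, age structure and density dependence; the point here is only the mechanism the article describes, a single climate-driven variable acting on an otherwise safe population.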
<urn:uuid:e9c25e3d-8c26-42e7-aa17-b9489ef9e689>
3.84375
175
News Article
Science & Tech.
32.333986
95,581,812
Species Detail - Greater White-toothed Shrew (Crocidura russula) - Species information displayed is based on all datasets. Terrestrial Map - 10km: distribution of the number of records recorded within each 10km grid square (ITM). Marine Map - 50km: distribution of the number of records recorded within each 50km grid square (WGS84). Greater White-toothed Shrew: Invasive Species >> Medium Impact Invasive Species. 7 January (recorded in 2017); 31 December (recorded in 2016). National Biodiversity Data Centre, Ireland, Greater White-toothed Shrew (Crocidura russula), accessed 20 July 2018, <https://maps.biodiversityireland.ie/Species/119451>
<urn:uuid:5adf4fbe-c2ab-4ff0-9357-18ba2579c137>
2.625
181
Structured Data
Science & Tech.
21.485032
95,581,821
Monash researchers have discovered a new mechanism that enables plants to regulate their flowering in response to raised temperatures. The findings, published in the journal Nature Plants, potentially could lead to the development of technology allowing the physiological response of plants to be controlled and the impacts of warming temperatures mitigated. The Monash team, led by Associate Professor Sureshkumar Balasubramanian, made the discovery by applying a combination of genetic, molecular and computational biology experiments to the flowering plant Arabidopsis. Associate Professor Balasubramanian explained how two key basic cellular processes work together to reduce the levels of a protein that normally prevents flowering, allowing the plants to produce flowers in response to elevated temperature. “This is very exciting as our understanding of how these genetic mechanisms work together opens up whole new possibilities for us to be able to develop technology to control when plants flower under different temperatures. These mechanisms are present in all organisms, so we may be able to transfer this knowledge to crop plants, with very promising possibilities for agriculture,” Associate Professor Balasubramanian said. While Associate Professor Balasubramanian discovered the genetic basis of temperature-induced flowering ten years ago, only now, with the availability of new computational approaches, were the researchers able to discover this mechanism.
<urn:uuid:ab13ebe7-65a5-4dee-8d0e-b27c629db9ad>
3.578125
264
News (Org.)
Science & Tech.
-10.480526
95,581,834
The following is the first few sections of a chapter from The Busy Coder's Guide to Android Development, plus headings for the remaining major sections, to give you an idea about the content of the chapter. To be able to have more intelligent code — code that can adapt to Internet activity on the device — Android offers the TrafficStats class. This class really is a gateway to a block of native code that reports on traffic usage for the entire device and per-application, for both received and transmitted data. This chapter will examine how you can access TrafficStats and interpret its data. Understanding this chapter requires that you have read the core chapters and understand how Android apps are set up and operate. The preview of this section was the victim of a MITM ('Martian in the middle') attack. The preview of this section apparently resembled a Pokémon. The preview of this section will not appear here for a while, due to a time machine mishap.
<urn:uuid:47ef8289-2fb3-4b82-be5d-1d783bac8d5d>
2.65625
205
Truncated
Software Dev.
44.815737
95,581,857
The team unveiled in a report in the journal Physical Review Letters this month a ready-made method for detecting the collision of stars with an elusive type of black hole that is on the short list of objects believed to make up dark matter. Such a discovery could serve as observable proof of dark matter and provide a much deeper understanding of the universe's inner workings. Postdoctoral researchers Shravan Hanasoge of Princeton's Department of Geosciences and Michael Kesden of NYU's Center for Cosmology and Particle Physics simulated the visible result of a primordial black hole passing through a star. Theoretical remnants of the Big Bang, primordial black holes possess the properties of dark matter and are one of various cosmic objects thought to be the source of the mysterious substance, but they have yet to be observed. If primordial black holes are the source of dark matter, the sheer number of stars in the Milky Way galaxy -- roughly 100 billion -- makes an encounter inevitable, the authors report. Unlike larger black holes, a primordial black hole would not "swallow" the star, but cause noticeable vibrations on the star's surface as it passes through. Thus, as the number of telescopes and satellites probing distant stars in the Milky Way increases, so do the chances to observe a primordial black hole as it slides harmlessly through one of the galaxy's billions of stars, Hanasoge said. The computer model developed by Hanasoge and Kesden can be used with these current solar-observation techniques to offer a more precise method for detecting primordial black holes than existing tools. "If astronomers were just looking at the sun, the chances of observing a primordial black hole are not likely, but people are now looking at thousands of stars," Hanasoge said. 
"There's a larger question of what constitutes dark matter, and if a primordial black hole were found it would fit all the parameters -- they have mass and force so they directly influence other objects in the universe, and they don't interact with light. Identifying one would have profound implications for our understanding of the early universe and dark matter." Although dark matter has not been observed directly, galaxies are thought to reside in extended dark-matter halos based on documented gravitational effects of these halos on galaxies' visible stars and gas. Like other proposed dark-matter candidates, primordial black holes are difficult to detect because they neither emit nor absorb light, stealthily traversing the universe with only subtle gravitational effects on nearby objects. Because primordial black holes are heavier than other dark-matter candidates, however, their interaction with stars would be detectable by existing and future stellar observatories, Kesden said. When crossing paths with a star, a primordial black hole's gravity would squeeze the star, and then, once the black hole passed through, cause the star's surface to ripple as it snaps back into place. "If you imagine poking a water balloon and watching the water ripple inside, that's similar to how a star's surface appears," Kesden said. "By looking at how a star's surface moves, you can figure out what's going on inside. If a black hole goes through, you can see the surface vibrate."

Eyeing the sun's surface for hints of dark matter

Video simulations of the researchers' calculations were created by NASA's Tim Sandstrom using the Pleiades supercomputer at the agency's Ames Research Center in California. One clip shows the vibrations of the sun's surface as a primordial black hole -- represented by a white trail -- passes through its interior. A second movie portrays the result of a black hole grazing the Sun's surface.
Marc Kamionkowski, a professor of physics and astronomy at Johns Hopkins University, said that the work serves as a toolkit for detecting primordial black holes, as Hanasoge and Kesden have provided a thorough and accurate method that takes advantage of existing solar observations. A theoretical physicist well known for his work with large-scale structures and the universe's early history, Kamionkowski had no role in the project, but is familiar with it. "It's been known that as a primordial black hole went by a star, it would have an effect, but this is the first time we have calculations that are numerically precise," Kamionkowski said. "This is a clever idea that takes advantage of observations and measurements already made by solar physics. It's like someone calling you to say there might be a million dollars under your front doormat. If it turns out to not be true, it cost you nothing to look. In this case, there might be dark matter in the data sets astronomers already have, so why not look?" One significant aspect of Kesden and Hanasoge's technique, Kamionkowski said, is that it narrows a sizable gap in the mass that can be detected by existing methods of trolling for primordial black holes. The search for primordial black holes has thus far been limited to masses too small to include a black hole, or so large that "those black holes would have disrupted galaxies in heinous ways we would have noticed," Kamionkowski said. "Primordial black holes have been somewhat neglected and I think that's because there has not been a single, well-motivated idea of how to find them within the range in which they could likely exist."
The difference in mass between those phenomena, however, is enormous, even in astronomical terms. Hawking radiation can only be observed if the evaporating black hole's mass is less than 100 quadrillion grams. On the other end, an object must be larger than 100 septillion (24 zeroes) grams for light to visibly bend around it. The search for primordial black holes covered a swath of mass that spans a factor of 1 billion, Kesden explained -- similar to searching for an unknown object with a weight somewhere between that of a penny and a mining dump truck. He and Hanasoge suggested a technique to give that range a much-needed trim and established more specific parameters for spotting a primordial black hole. The pair found through their simulations that a primordial black hole larger than 1 sextillion (21 zeroes) grams -- roughly the mass of an asteroid -- would produce a noticeable effect on a star's surface. "Now that we know primordial black holes can produce detectable vibrations in stars, we could try to look at a larger sample of stars than just our own sun," Kesden said. "The Milky Way has 100 billion stars, so about 10,000 detectable events should be happening every year in our galaxy if we just knew where to look." This research was funded by grants from NASA and by the James Arthur Postdoctoral Fellowship at New York University. Morgan Kelly | EurekAlert!
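The mass window quoted above is easy to check numerically (using short-scale names: quadrillion = 10^15, sextillion = 10^21, septillion = 10^24). A quick Python sanity check:

```python
# Mass limits quoted in the text, in grams
hawking_limit = 100 * 10**15   # 100 quadrillion g: upper limit for Hawking-radiation searches
lensing_limit = 100 * 10**24   # 100 septillion g: lower limit for light-bending searches
detectable    = 10**21         # 1 sextillion g: threshold for detectable stellar vibrations

# The unexplored window spans a factor of one billion in mass...
assert lensing_limit // hawking_limit == 10**9

# ...and the stellar-vibration threshold falls inside that window.
assert hawking_limit < detectable < lensing_limit
```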
<urn:uuid:387d08ea-316f-4d1a-95d5-b48af8074885>
4.09375
2,023
Content Listing
Science & Tech.
41.638549
95,581,865
CORVALLIS, Ore. (KGW) - Researchers at Oregon State University said they made a disturbing discovery along the ocean floor off Oregon's coast. Crabs and other marine life are suffocating from a lack of oxygen in spots called dead zones. "A dead zone is an area in the ocean that basically has too low of oxygen values," said marine ecologist Dr. Francis Chan. "When the oxygen levels are high, the crabs are happy, and then the oxygen started to decline," said Chan. "And over time they died; they suffocated on the sea floor." Oregon Department of Fish and Wildlife biologists were tracking crab populations when they found bizarre things happening in the waters off the coast this summer. OSU has been tracking dead zones for more than a decade, and Chan said the one they found this year is one of the worst since 2006. Chan believes the appearance of dead zones comes down to climate change. The good news, he said, is that the fall storms are helping stir up the ocean and get rid of them. However, he also said they will likely return next spring.
<urn:uuid:5bec4b97-7ada-4b4c-b4da-4bef640cf390>
3.09375
337
News Article
Science & Tech.
41.551912
95,581,887
Phytoplankton Bloom in the Great Barrier Reef This page contains archived content and is no longer being updated. At the time of publication, it represented the best available science. However, more recent observations and studies may have rendered some content obsolete. The Sea in many places is here cover’d with a kind of a brown scum, such as Sailors generally call spawn; upon our first seeing it, it alarm’d us, thinking we were among Shoals, but we found the same depth of Water were it was as in other places. Sailing through the Coral Sea outside the Great Barrier Reef, Captain James Cook made those observations on August 28, 1770. His journals contain the first mention of the long brown filaments of cyanobacteria that are common along the Australian coast. On August 9, 2011, the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite captured this view of a similar band of brown between the Great Barrier Reef and the Queensland shore. Though it’s impossible to identify the species from satellite imagery, such red-brown streamers are usually trichodesmium. Sailors have long called these brown streamers “sea sawdust.” Trichodesmium, a form of cyanobacteria, are small, usually single-celled organisms that grow in the ocean and produce food through photosynthesis like plants. They play an important role in Earth’s oceans because they convert nitrogen gas from the atmosphere to ammonia, a fertilizer that plants can use to grow. At the same time, trichodesmium removes carbon dioxide from the atmosphere. Their blooms often occur in warm, nutrient-poor waters. Charles Darwin observed one such bloom from the HMS Beagle in 1832, marveling that “their numbers must be infinite.”
<urn:uuid:27bbc8ff-97c1-47f6-bed3-b70b4d93b88c>
3.703125
402
Knowledge Article
Science & Tech.
42.717355
95,581,890
Laser-induced structural transformations such as transformation hardening, annealing, recrystallization, glazing, shock hardening, etc., are based on the high processing temperatures that can be reached during short heating and cooling cycles under high-power pulsed laser or rapidly scanned cw-laser irradiation. Short processing cycles permit material transformations within thin films and surfaces without significant influence on the substrate or the underlying bulk material. In the case of surface absorption, which is a good approximation with many applications, the thickness of the heated zone is approximately described by the heat diffusion length, l_T. The time for heating the material to a certain temperature and depth, and the time for cooling, can both be calculated from the equations given in Chaps. 6–9. The thickness of the modified layer, Δh ≈ l_T, decreases with decreasing pulse length. With ultrashort laser pulses Δh becomes so small that cooling rates of more than 10^12 K/s can be achieved. If τ_l < D/v_0^2 ≈ 10^-12 to some 10^-14 s (v_0 is the sound velocity), the finite velocity of the heat front must be taken into account (Sect. 2.2). In any case, with such cooling rates it is possible to freeze non-equilibrium phases, suppress nucleation, etc. There is, however, a limitation: the transformation temperature must be sustained for a time which is longer than, or at least comparable to, the time required for the phase transformation to take place. Furthermore, with many systems, successful laser processing is related to strong temperature gradients which induce internal stresses, redistributions of defects, different types of transport phenomena, etc. Keywords: Heat Affected Zone, Interface Velocity, Laser Annealing, Transformation Hardening, Strong Temperature Gradient
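The scaling just described, with the modified-layer thickness tracking the heat diffusion length and shrinking as the pulse gets shorter, can be sketched numerically. The convention l_T = 2·sqrt(D·τ) and the diffusivity value below are assumptions chosen for illustration, not values taken from the text:

```python
import math

def diffusion_length(D, tau):
    """Heat diffusion length l_T = 2*sqrt(D*tau) (one common convention)."""
    return 2.0 * math.sqrt(D * tau)

D = 1e-5  # m^2/s, an assumed representative thermal diffusivity for a metal

# Shorter pulses heat (and modify) a thinner layer, since dh ~ l_T:
l_ms = diffusion_length(D, 1e-3)   # millisecond pulse: 2e-4 m, i.e. ~200 um
l_ns = diffusion_length(D, 1e-9)   # nanosecond pulse: 2e-7 m, i.e. ~0.2 um
assert l_ns < l_ms
```

With picosecond or femtosecond pulses the same formula gives layer thicknesses in the tens-of-nanometres range, which is where the extreme cooling rates quoted in the text become possible.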
<urn:uuid:b55ba69e-5306-4849-badd-94f712ad594f>
2.65625
376
Truncated
Science & Tech.
39.114948
95,581,894
Hi learner, thank you for reading this article. In this article you will learn about operators in C# programming. Before diving in, have a look at the previous articles, where we discussed C# conditional... Category: C# Sharp. This section provides detailed information on key C# language features and on features accessible to C# through the .NET Framework. C# is one of the .NET languages; it is generic, object oriented, platform independent, and language independent. Platform dependency means that if an application targets a particular operating system, it will run only on that operating system. CLR: the execution engine for all .NET languages; it provides security, automatic memory management, portability, exception handling and the JIT compiler. JIT compiler: the JIT (just-in-time) compiler converts CLR code into executable machine code at program execution time. Garbage collector (automatic memory management): it reclaims the memory of unused objects at run time of the .NET application. - Implemented a few console and Windows Forms applications using C# collections, C# LINQ and OOP concepts - You can find a huge number of C# applications here. Have a look at these things. Hi 🙂 Developer, I hope this article will be helpful to you in getting basic knowledge of C# static vs non-static vs const vs readonly variables, and here you will do a few examples in a console application using C...
<urn:uuid:3d3c8a25-abfc-462d-bad4-5c0bd7d4865f>
2.515625
315
Content Listing
Software Dev.
33.851788
95,581,904
In 1978, the mathematician John McKay noticed what seemed like an odd coincidence. He had been studying the different ways of representing the structure of a mysterious entity called the monster group, a gargantuan algebraic object that, mathematicians believed, captured a new kind of symmetry. Mathematicians weren't sure that the monster group actually existed, but they knew that if it did exist, it acted in special ways in particular dimensions, the first two of which were 1 and 196,883. McKay, of Concordia University in Montreal, happened to pick up a mathematics paper in a completely different field, involving something called the j-function, one of the most fundamental objects in number theory. Strangely enough, this function's first important coefficient is 196,884, which McKay instantly recognized as the sum of the monster's first two special dimensions. Most mathematicians dismissed the finding as a fluke, since there was no reason to expect the monster and the j-function to be even remotely related. However, the connection caught the attention of John Thompson, a Fields medalist now at the University of Florida in Gainesville, who made an additional discovery. The j-function's second coefficient, 21,493,760, is the sum of the first three special dimensions of the monster: 1 + 196,883 + 21,296,876. It seemed as if the j-function was somehow controlling the structure of the elusive monster group. Soon, two other mathematicians had demonstrated so many of these numerical relationships that it no longer seemed possible that they were mere coincidences. In a 1979 paper called "Monstrous Moonshine," the pair, John Conway, now of Princeton University, and Simon Norton, conjectured that these relationships must result from some deep connection between the monster group and the j-function. "They called it moonshine because it appeared so far-fetched," said Don Zagier, a director of the Max Planck Institute for Mathematics in Bonn, Germany.
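The numerical coincidences McKay and Thompson spotted are trivial to verify; a quick Python check of the sums quoted above:

```python
# Special dimensions of the monster group cited in the text
monster_dims = [1, 196883, 21296876]

# First two important j-function coefficients cited in the text
j_coeffs = [196884, 21493760]

# McKay's observation: 196,884 = 1 + 196,883
assert j_coeffs[0] == monster_dims[0] + monster_dims[1]

# Thompson's observation: 21,493,760 = 1 + 196,883 + 21,296,876
assert j_coeffs[1] == sum(monster_dims)
```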
"They were such wild ideas that it seemed like wishful thinking to imagine anyone could ever prove them." It took several more years before mathematicians succeeded in even constructing the monster group, but they had a good excuse: The monster has more than 10^53 elements, which is more than the number of atoms in a thousand Earths. In 1992, a decade after Robert Griess of the University of Michigan constructed the monster, Richard Borcherds tamed the wild ideas of monstrous moonshine, eventually earning a Fields Medal for this work. Borcherds, of the University of California, Berkeley, proved that there was a bridge between the two distant realms of mathematics in which the monster and the j-function live: namely, string theory, the counterintuitive idea that the universe has tiny hidden dimensions, too small to measure, in which strings vibrate to produce the physical effects we experience at the macroscopic scale. Borcherds' discovery touched off a revolution in pure mathematics, leading to a new field known as generalized Kac-Moody algebras. But from a string theory point of view, it was something of a backwater. The 24-dimensional string theory model that linked the j-function and the monster was far removed from the models string theorists were most excited about. "It seemed like just an esoteric corner of the theory, without much physical interest, although the math results were startling," said Shamit Kachru, a string theorist at Stanford University. But now moonshine is undergoing a renaissance, one that may eventually have deep implications for string theory. Over the past five years, starting with a discovery analogous to McKay's, mathematicians and physicists have come to realize that monstrous moonshine is just the start of the story.
Last week, researchers posted a paper on arxiv.org presenting a numerical proof of the so-called Umbral Moonshine Conjecture, formulated in 2012, which proposes that in addition to monstrous moonshine, there are 23 other moonshines: mysterious correspondences between the dimensions of a symmetry group on the one hand, and the coefficients of a special function on the other. The functions in these new moonshines have their origins in a prescient letter by one of mathematics' great geniuses, written more than half a century before moonshine was even a glimmer in the minds of mathematicians. The 23 new moonshines appear to be intertwined with some of the most central structures in string theory, four-dimensional objects known as K3 surfaces. "The connection with umbral moonshine hints at hidden symmetries in these surfaces," said Miranda Cheng of the University of Amsterdam and France's National Center for Scientific Research, who originated the Umbral Moonshine Conjecture together with John Duncan, of Case Western Reserve University in Cleveland, Ohio, and Jeffrey Harvey, of the University of Chicago. "This is important, and we need to understand it," she said. The new proof strongly suggests that in each of the 23 cases, there must be a string theory model that holds the key to understanding these otherwise baffling numerical correspondences. But the proof doesn't go so far as to actually construct the relevant string theory models, leaving physicists with a tantalizing problem. "At the end of the day when we understand what moonshine is, it will be in terms of physics," Duncan said. The symmetries of any given shape have a natural sort of arithmetic to them. For example, rotating a square 90 degrees and then flipping it horizontally is the same as flipping it across a diagonal; in other words, 90-degree rotation + horizontal flip = diagonal flip.
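The square example can be checked by composing permutations of the square's corner positions. A small Python sketch (the corner labelling and the clockwise orientation are choices made here, and they determine which of the two diagonals shows up):

```python
# Corner positions: 0 = top-left, 1 = top-right, 2 = bottom-right, 3 = bottom-left
rot90_cw = {0: 1, 1: 2, 2: 3, 3: 0}   # 90-degree clockwise rotation
hflip    = {0: 1, 1: 0, 2: 3, 3: 2}   # flip across the vertical axis

def compose(second, first):
    """Apply `first`, then `second`."""
    return {x: second[first[x]] for x in first}

# Rotate, then flip horizontally: fixes corners 0 and 2, swaps 1 and 3,
# i.e. the flip across the top-left/bottom-right diagonal.
diag_flip = {0: 0, 1: 3, 2: 2, 3: 1}
assert compose(hflip, rot90_cw) == diag_flip
```

This is exactly the "arithmetic of symmetries" the article describes: composing two symmetries always yields another symmetry of the same shape.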
During the 19th century, mathematicians realized that they could distill this type of arithmetic into an algebraic entity called a group. The same abstract group can represent the symmetries of many different shapes, giving mathematicians a tidy way to understand the commonalities in different shapes. Over much of the 20th century, mathematicians worked to classify all possible groups, and they gradually discovered something strange: While most simple finite groups fell into natural categories, there were 26 oddballs that defied categorization. Of these, the biggest, and the last to be discovered, was the monster. Before McKay's serendipitous discovery nearly four decades ago, there was no reason to think the monster group had anything to do with the j-function, the second protagonist of the monstrous-moonshine story. The j-function belongs to a special class of functions whose graphs have repeating patterns similar to M. C. Escher's tessellation of a disk with angels and devils, which shrink ever smaller as they approach the outer boundary. These modular functions are the heroes of number theory, playing a crucial role, for instance, in Andrew Wiles' 1994 proof of Fermat's Last Theorem. "Any time you hear about a striking result in number theory, there's a high chance that it's really a statement about modular forms," Kachru said. As with a sound wave, the j-function's repeating pattern can be broken down into a collection of pure tones, so to speak, with coefficients indicating how loud each tone is. It is in these coefficients that McKay found the link to the monster group. In the early 1990s, building on work by Igor Frenkel of Yale University, James Lepowsky of Rutgers University and Arne Meurman of Lund University in Sweden, Borcherds made sense of McKay's discovery by showing that there is a particular string theory model in which the j-function and the monster group both play roles.
The coefficients of the j-function count the ways strings can oscillate at each energy level. And the monster group captures the model's symmetry at those energy levels. The finding gave mathematicians a way to study the mind-bogglingly large monster group using the j-function, whose coefficients are easy to calculate. "Math is all about building bridges where on one side you see more clearly than on the other," Duncan said. "But this bridge was so unexpectedly powerful that before you see the proof it's kind of crazy." While mathematicians explored the ramifications of monstrous moonshine, string theorists focused on a seemingly different problem: figuring out the geometry for the tiny dimensions in which strings are hypothesized to live. Different geometries allow strings to vibrate in different ways, just as tightening the tension on a drum changes its pitch. For decades, physicists have struggled to find a geometry that produces the physical effects we see in the real world. An important ingredient in some of the most promising candidates for such a geometry is a collection of four-dimensional shapes known as K3 surfaces. In contrast with Borcherds' string theory model, Kachru said, K3 surfaces "fill the string theory textbooks." Not enough is known about the geometry of K3 surfaces to count how many ways strings can oscillate at each energy level, but physicists can write down a more limited function counting certain physical states that appear in all K3 surfaces. In 2010, three string theorists (Tohru Eguchi of Kyoto University in Japan, Hirosi Ooguri of the California Institute of Technology in Pasadena, and Yuji Tachikawa of the University of Tokyo in Japan) noticed that if they wrote this function in a particular way, out popped coefficients that were the same as some special dimensions of another oddball group, called the Mathieu 24 (M24) group, which has nearly 250 million elements. The three physicists had discovered a new moonshine.
This time, physicists and mathematicians were all over the discovery. "I was at several conferences, and all the talk was about this new Mathieu moonshine," Zagier said. Zagier attended one such conference in Zurich in July 2011, and there, Duncan wrote in an email, Zagier showed him a piece of paper with lots of numbers on it: the coefficients of some functions Zagier was studying called mock modular forms, which are related to modular functions. "Don [Zagier] pointed to a particular line of numbers and asked me, in jest I think, if there is any finite group related to them," Duncan wrote. Duncan wasn't sure, but he recognized the numbers on another line: They belonged to the special dimensions of a group called M12. Duncan buttonholed Miranda Cheng, and the two pored over the rest of Zagier's piece of paper. The pair, together with Jeffrey Harvey, gradually realized that there was much more to the new moonshine than just the M24 example. The clue to the full moonshine picture, they found, lay in the nearly century-old writings of one of mathematics' legendary figures. In 1913, the English mathematician G. H. Hardy received a letter from an accounting clerk in Madras, India, describing some mathematical formulas he had discovered. Many of them were old hat, and some were flat-out wrong, but on the final page were three formulas that blew Hardy's mind. They must be true, wrote Hardy, because if they were not true, no one would have the imagination to invent them. He promptly invited the clerk, Srinivasa Ramanujan, to England. Ramanujan became famous for seemingly pulling mathematical relationships out of thin air, and he credited many of his discoveries to the goddess Namagiri, who appeared to him in visions, he said. His mathematical career was tragically brief, and in 1920, as he lay dying in India at age 32, he wrote Hardy another letter saying that he had discovered what he called mock theta functions, which entered into mathematics beautifully.
Ramanujan listed 17 examples of these functions, but didn't explain what they had in common. The question remained open for more than eight decades, until Sander Zwegers, then a graduate student of Zagier's and now a professor at the University of Cologne in Germany, figured out in 2002 that they are all examples of what came to be known as mock modular forms. After the Zurich moonshine conference, Cheng, Duncan and Harvey gradually figured out that M24 moonshine is one of 23 different moonshines, each making a connection between the special dimensions of a group and the coefficients of a mock modular form, just as monstrous moonshine made a connection between the monster group and the j-function. For each of these moonshines, the researchers conjectured, there is a string theory like the one in monstrous moonshine, in which the mock modular form counts the string states and the group captures the model's symmetry. A mock modular form always has an associated modular function called its shadow, so they named their hypothesis the Umbral Moonshine Conjecture; umbra is Latin for shadow. Many of the mock modular forms that appear in the conjecture are among the 17 special examples Ramanujan listed in his prophetic letter. Curiously enough, Borcherds' earlier proof of monstrous moonshine also builds on work by Ramanujan: The algebraic objects at the core of the proof were discovered by Frenkel, Lepowsky and Meurman as they analyzed the three formulas that had so startled Hardy in Ramanujan's first letter. "It's amazing that these two letters form the cornerstone of what we know about moonshine," said Ken Ono, of Emory University in Atlanta, Ga. "Without either letter, we couldn't write this story."
Finding the Beast
In the new paper posted on arxiv.org, Duncan, Ono and Ono's graduate student Michael Griffin have come up with a numerical proof of the Umbral Moonshine Conjecture (one case of which, the M24 case, had already been proven by Terry Gannon, of the University of Alberta in Edmonton, Canada). The new analysis provides only hints of where physicists should look for the string theories that will unite the groups and the mock modular forms. Nevertheless, the proof confirms that the conjecture is on the right track, Harvey said. "We had all this structure, and it was so intricate and compelling that it was hard not to think there was some truth to it," he said. "Having a mathematical proof makes it a solid piece of work that people can think seriously about." The string theory underlying umbral moonshine is likely to be not just any physical theory, but a particularly important one, Cheng said. "It suggests that there's a special symmetry acting on the physical theory of K3 surfaces." Researchers studying K3 surfaces can't see this symmetry yet, she said, suggesting that "there is probably a better way of looking at that theory that we haven't found yet." Physicists are also excited about a highly conjectural connection between moonshine and quantum gravity, the as-yet-undiscovered theory that will unite general relativity and quantum mechanics. In 2007, the physicist Edward Witten, of the Institute for Advanced Study in Princeton, N.J., speculated that the string theory in monstrous moonshine should offer a way to construct a model of three-dimensional quantum gravity, in which 194 natural categories of elements in the monster group correspond to 194 classes of black holes. Umbral moonshine may lead physicists to similar conjectures, giving hints of where to look for a quantum gravity theory. "That is a big hope for the field," Duncan said.
The new numerical proof of the Umbral Moonshine Conjecture is "like looking for an animal on Mars and seeing its footprint, so we know it's there," Zagier said. Now, researchers have to find the animal: the string theory that would illuminate all these deep connections. "We really want to get our hands on it," Zagier said.
November 14, 2016 Astronomy Newsletter Here's the latest article from the Astronomy site at BellaOnline.com. Scutum the Shield Vienna, September 1683. For two months the city had been besieged by an army of the Ottoman Empire, and couldn't hold out much longer. But what does this have to do with astronomy? The link is the constellation Scutum (the Shield). And here's the previous article. I was traveling and in the excitement didn't get the newsletter written. Black Moon – Is That a Thing? Followers of social media may know what a “black moon” is. It has been linked to dramatic predictions of doom and gloom. However, it's not an astronomical term. So what is a black moon and would we survive it? *Anniversaries – spacecraft* (1) November 5, 2013: The Indian Space Research Organisation (ISRO) launched its first interplanetary mission, the Mars Orbiter Mission (MOM), also called Mangalyaan (“Mars-craft”, from Sanskrit). (2) November 7, 1996: NASA launched the Mars Global Surveyor, a global mapping mission that surveyed the atmosphere and the surface of Mars. (3) November 13, 1971: NASA's Mariner 9 became the first spacecraft to orbit another planet. Three previous missions, Mariners 4, 6 and 7, had made fly-bys of Mars. *Anniversaries – birthdays* (1) November 8, 1656: Edmond Halley. He was one of the greatest minds of his era and is still known today for the comet which bears his name after he correctly predicted its return. http://www.bellaonline.com/articles/art48305.asp (2) November 9, 1934: Carl Sagan. Astronomer, science writer and communicator, humanitarian, still sadly missed. (3) November 11, 1875: Vesto Slipher. He was an American astronomer and director of the Lowell Observatory in Arizona. His name isn't well known to the general public, but his work was the foundation for some of Edwin Hubble's later work. (4) November 15, 1738: William Herschel. He was the first person ever to discover a new planet.
In partnership with his sister Caroline, he laid the foundations for modern astronomy. http://www.bellaonline.com/articles/art300195.asp Please visit http://astronomy.bellaonline.com/Site.asp for even more great content about Astronomy. I hope to hear from you sometime soon, either in the forum http://forums.bellaonline.com/ubbthreads.php/forums/323/1/Astronomy or in response to this email message. I welcome your feedback! Do pass this message along to family and friends who might also be interested. Remember it's free and without obligation. I wish you clear skies. Mona Evans, Astronomy Editor One of hundreds of sites at BellaOnline.com Unsubscribe from the Astronomy Newsletter Online Newsletter Archive for Astronomy Site Master List of BellaOnline Newsletters
Estimation of Water Stage Over Wetlands of South Florida Using TRMM Precipitation Radar Observations
The Everglades are a critical component of the regional hydrological cycle in South Florida. Anthropogenic activities in this region have deteriorated the wetland ecosystem, and efforts have been made to restore and preserve it. Seasonal and interannual changes in water stage result in saltwater intrusion and inhibit ecosystem conservation measures. Hence, there is a need to monitor water stage in wetlands. Microwave remote sensing, with its sensitivity to surface characteristics, provides an opportunity to measure changes in water stage from space. Spaceborne remotely sensed images can provide a comprehensive spatio-temporal distribution of water stage over an area, thereby eliminating the need to monitor water stage separately at each measurement site. This research relates water stage measurements (ws) to Tropical Rainfall Measuring Mission Precipitation Radar backscatter (σ°). The σ° response to partially exposed vegetation is used as the basis of the model: variations in water depth change the amount of exposed vegetation canopy, which is reflected in the σ° measurements. An empirical linear model is developed that expresses ws in terms of σ°. The impact of vegetation on the model is studied by examining model performance over various landcovers. The ws model is applied to stage data at sites operated by the South Florida Water Management District. Eleven years of data (1998 to 2008) are used for this research. The model is calibrated using 75% of the data period to estimate model parameters and validated using the remaining 25%. The estimated water stage measurements from the model are compared with observed measurements over different landcovers. The model performance is assessed by comparing the correlation coefficient (R), root mean square error (rmse), and mean absolute error (mae) between observed and modeled water stage measurements.
The model works reasonably well in regions with tree heights greater than 5 m, such as deciduous forest (R=0.58, rmse=0.58 ft, mae=0.45 ft), mixed forest (R=0.61, rmse=0.78 ft, mae=0.57 ft), and woodlands (R=0.56, rmse=0.65 ft, mae=0.54 ft). The model does not perform as well over low-lying landcovers such as cropland (R=0.27, rmse=0.63 ft, mae=0.48 ft) and closed shrubland (R=0.33, rmse=0.56 ft, mae=0.43 ft). This is because the vegetation in croplands and closed shrubland is submerged under water for most of the year, so the incident microwave radiation is specularly reflected from the water surface, resulting in weaker backscatter. In areas with tall vegetation, on the other hand, the incident radiation is backscattered from the vegetation above the water surface, exhibiting a distinct response that depends on the degree of submergence. The modeled values of water stage compare well with the observed water stage in areas with tall vegetation. Thus microwave remote sensing can provide a comprehensive spatio-temporal distribution of water stage. This research provides new insight into the measurement of water stage using spaceborne remote sensing techniques. Florida—Everglades; Hydrologic cycle; Nature--Effect of human beings on; Saltwater encroachment; Wetlands; Wetland conservation; Wetland restoration Civil and Environmental Engineering | Civil Engineering | Environmental Engineering | Environmental Health and Protection | Environmental Monitoring | Environmental Sciences | Water Resource Management Use Find in Your Library, contact the author, or interlibrary loan to garner a copy of the item. Publisher policy does not allow archiving the final published version. If a post-print (author's peer-reviewed manuscript) is allowed and available, or publisher policy changes, the item will be deposited. Estimation of Water Stage Over Wetlands of South Florida Using TRMM Precipitation Radar Observations. In R. N.
Palmer, World Environmental and Water Resources Congress 2010: Challenges of Change American Society of Civil Engineers.
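The kind of workflow the abstract describes (fit a linear model ws = a·σ° + b on 75% of the record, validate on the remaining 25%, report R, rmse, and mae) can be sketched as follows. The data below are synthetic and the coefficients are invented; the real study used 1998–2008 stage records, which are not reproduced here.

```python
# Sketch of a calibrate/validate split for a linear ws = a*sigma0 + b model.
# All numbers are synthetic stand-ins, not the study's data.
import math, random

random.seed(0)
# Hypothetical record: backscatter in dB, stage in feet, with noise.
sigma0 = [random.uniform(-12.0, -4.0) for _ in range(200)]
ws_obs = [0.35 * s + 6.0 + random.gauss(0, 0.3) for s in sigma0]

split = int(0.75 * len(sigma0))            # 75% calibration period
cal_x, cal_y = sigma0[:split], ws_obs[:split]
val_x, val_y = sigma0[split:], ws_obs[split:]

# Ordinary least squares for ws = a * sigma0 + b on the calibration set.
n = len(cal_x)
mx, my = sum(cal_x) / n, sum(cal_y) / n
a = (sum((x - mx) * (y - my) for x, y in zip(cal_x, cal_y))
     / sum((x - mx) ** 2 for x in cal_x))
b = my - a * mx

# Validation metrics from the abstract: R, rmse, mae.
pred = [a * x + b for x in val_x]
m = len(val_y)
mp, mo = sum(pred) / m, sum(val_y) / m
r = (sum((p - mp) * (o - mo) for p, o in zip(pred, val_y))
     / math.sqrt(sum((p - mp) ** 2 for p in pred)
                 * sum((o - mo) ** 2 for o in val_y)))
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, val_y)) / m)
mae = sum(abs(p - o) for p, o in zip(pred, val_y)) / m
print(f"a={a:.3f} b={b:.3f} R={r:.2f} rmse={rmse:.2f} ft mae={mae:.2f} ft")
```

With real data, the chronological 75/25 split matters: calibration and validation periods should not overlap, exactly as in the study.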
At a Glance
- Recent events in the tropics have many wondering how many tropical cyclones could fit in the Atlantic Basin.
- Since records began, the highest number of tropical systems in the Atlantic simultaneously has been five.
- One expert believes that there's room for more, based on historical data and conditions needed for hurricane formation.
It can be disconcerting to see two or even three hurricanes lined up in the Atlantic Basin, all seemingly ready to strike the Caribbean or North America. But this time of year, it's not all that rare, and the Atlantic Basin is certainly capable of holding a trio of tropical systems – or more – at any given time. As Hurricane Jose spun in the Atlantic Ocean on Sept. 7, Irma was already a massive storm, and Katia was a Category 1 hurricane. The three storms gave meteorologists and researchers alike a case of déjà vu, since virtually the same thing happened with Karl, Igor and Julia seven years ago. The Atlantic Basin has proven it can hold as many as five tropical cyclones simultaneously, as it did Sept. 10-12, 1970, NOAA's Hurricane Research Division said. According to one expert, if the conditions are just right, the Atlantic might be capable of holding even more than five tropical systems at once. Popular Mechanics spoke to Dr. Anand Gnanadesikan, a climate modeler and professor of Earth and planetary sciences at the Krieger School of Arts and Sciences, who said historical data suggest a less than 1 percent chance of four or more tropical cyclones at once. But as history has shown, it's certainly possible. Using what's known about hurricane formation and historical data, Gnanadesikan believes it's possible that the Atlantic Basin could hold as many as seven tropical cyclones at once, and even then, there would still be plenty of open room in the large ocean for more development. For now, the record remains at five simultaneous storms.
If that record ever falls, our friends at the National Hurricane Center might be more than a little busy.
Pattern of Uplifted Islands in the Main Ocean Basins. - Published Article Science (New York, N.Y.) - Publication Date Feb 15, 1963 Most uplifted islands lie in one of three types of tectonic location: on mid-ocean ridges; between 200 and 750 km on the convex side of island arcs; and along a great circle across the southern Pacific, which may be a fault. Since the usual habit of islands is to subside, these islands may owe their uplift to their special tectonic positions. The regularity of this pattern of uplift supports the view that in the earth an elastic surface layer rests upon a plastic or viscous substratum. From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine. This record was last updated on 06/13/2016 and may not reflect the most current and accurate biomedical/scientific data available from NLM. The corresponding record at NLM can be accessed at https://www.ncbi.nlm.nih.gov/pubmed/17788294
poll() is the executor's API to the future - it's how it's able to determine the status of the future (done, not done, errored). Asynchronous work is typically done, well, asynchronously. If a Future represents some work executing on a threadpool, for example, then that work can be progressing concurrently. The Future only tells the executor whether it's done or not (via poll()), and it needs to have a way to determine that by setting up some communication with the threadpool. If a Future is waiting on bytes to be available for reading on a socket, then that Future will have registered an event notification to be delivered by whatever OS kernel API when there's some data to be read. That work is also happening concurrently, and not even in userland in this case. So it's better to think of poll() as just the API an executor has to ask the Future for its status. I think I sort of answered this above as well. It's not that the Future's work makes progress per se, but rather it's how the executor can drive the future to completion by asking it for its status, and if it's not ready, keeping it stored somewhere until a task notification has been received and the future is re-polled for its status. The most interesting case is when a future returns Ok(Async::NotReady) - to avoid busy polling it, as in the dummy executor I showed earlier, there's machinery for the future to return NotReady but to also register a notification such that when the asynchronous operation makes progress, the future is re-polled. For a deep dive on how tokio interoperates with futures, take a look at https://cafbit.com/post/tokio_internals/. It should make the connection between a future, a task (i.e. notification mechanism), and an executor (tokio event loop in this case) a bit more concrete.
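The poll() contract described here can be modeled with a toy executor. This is not the real futures crate (which adds the task-notification machinery so executors don't busy-poll); Poll, ToyFuture, CountDown, and block_on are invented names, and the busy loop below is exactly what real executors avoid.

```rust
// A toy model of the poll() contract: the executor repeatedly asks a
// future for its status; the future answers Ready or NotReady.

enum Poll<T> {
    Ready(T),
    NotReady,
}

trait ToyFuture {
    type Item;
    fn poll(&mut self) -> Poll<Self::Item>;
}

// Stands in for work progressing elsewhere: needs several polls to finish.
struct CountDown {
    remaining: u32,
}

impl ToyFuture for CountDown {
    type Item = &'static str;
    fn poll(&mut self) -> Poll<Self::Item> {
        if self.remaining == 0 {
            Poll::Ready("done")
        } else {
            self.remaining -= 1; // "work" progressed since the last poll
            Poll::NotReady
        }
    }
}

// Deliberately naive executor: drives the future by busy-polling.
fn block_on<F: ToyFuture>(mut fut: F) -> (F::Item, u32) {
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(v) = fut.poll() {
            return (v, polls);
        }
    }
}

fn main() {
    let (value, polls) = block_on(CountDown { remaining: 3 });
    println!("{} after {} polls", value, polls);
}
```

In the real crate, a NotReady future arranges to be woken (via the task system or an OS event) so the executor parks it instead of spinning like this.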
According to Tom Hawtin, a closure is "a block of code that can be referenced (and passed around) with access to the variables of the enclosing scope." In programming languages, a closure (also lexical closure or function closure) is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function.
JSR Proposal: Closures for Java
Summary: Add support so programs can operate on an arbitrary block of code with parameters, and simplify the use of methods that receive such blocks. This JSR provides support for operating on an arbitrary “block of Java code”, or body, which is either a statement list, an expression, or a combination of both. We call the mechanism a closure expression. Wrapping statements or an expression in a closure expression does not change their meaning, but merely defers their execution. Evaluating a closure expression produces a closure object. The closure object can later be invoked, which results in execution of the body, yielding the value of the expression (if one was present) to the invoker. A closure expression can have parameters, which act as variables whose scope is the body. In this case the invoker of the closure object must provide compatible arguments, which become the values for the parameters. In addition, this JSR may support a new invocation statement for methods that accept a closure to simplify their use in common cases. Pasted from <http://www.javac.info/consensus-closures-jsr.html>
Since Java 1.1, anonymous inner classes have provided this facility in a highly verbose manner. They also have a restriction of only being able to use final (and definitely assigned) local variables. (Note: even non-final local variables are in scope, but cannot be used.) Java SE 8 is intended to have a more concise version of this for single-method interfaces*, called “lambdas”.
Lambdas have much the same restrictions as anonymous inner classes, although some details vary randomly. *Originally the design was more flexible, allowing Single Abstract Method (SAM) types. Unfortunately the new design is less flexible, but does attempt to justify allowing implementation within interfaces. Bringing Closures to Java 5, 6 and 7 – http://mseifed.blogspot.se/2012/09/bringing-closures-to-java-5-6-and-7.html
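The verbosity gap between the two forms can be seen side by side in a small sketch. ClosureDemo and its helper methods are hypothetical names; java.util.function.IntUnaryOperator serves as the single-method target interface, and both forms capture an (effectively) final variable from the enclosing scope, as the article describes.

```java
import java.util.function.IntUnaryOperator;

public class ClosureDemo {
    // Java 1.1 style: anonymous inner class capturing a final local.
    static IntUnaryOperator adderOld(final int offset) {
        return new IntUnaryOperator() {
            @Override
            public int applyAsInt(int x) {
                return x + offset; // closes over 'offset'
            }
        };
    }

    // Java 8 style: a lambda targeting the same single-method interface.
    static IntUnaryOperator adderNew(int offset) { // effectively final
        return x -> x + offset;
    }

    public static void main(String[] args) {
        System.out.println(adderOld(10).applyAsInt(5)); // 15
        System.out.println(adderNew(10).applyAsInt(5)); // 15
    }
}
```

Both forms defer execution of the body until the returned object is invoked, which is exactly the "closure expression" behavior the JSR text above describes.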
Authors: Jean Louis Van Belle MAEc BAEc BPhil
The elementary quantum-mechanical wavefunction and a linearly polarized electromagnetic wave share a similar geometry: both consist of two plane waves that are perpendicular to the direction of propagation, and their components differ only in magnitude and – more importantly – in their relative phase (0 and 90° respectively). The physical dimension of the electric field vector is force per unit charge (N/C). It is, therefore, tempting to associate the real and imaginary components of the wavefunction with a similar physical dimension: force per unit mass (N/kg). This is, of course, the dimension of the gravitational field, which reduces to the dimension of acceleration (1 N/kg = 1 m/s²). The results and implications are remarkably elegant and intuitive:
- Schrödinger’s wave equation, for example, can now be interpreted as an energy diffusion equation, and the wavefunction itself can be interpreted as a propagating gravitational wave.
- The energy conservation principle then gives us a physical normalization condition, as probabilities (P = |ψ|²) are then, effectively, proportional to energy densities (u).
- We also get a more intuitive explanation of spin angular momentum, the boson-fermion dichotomy, and the Compton scattering radius for a particle.
- Finally, this physical interpretation of the wavefunction may also give us some clues in regard to the mechanism of relativistic length contraction.
The interpretation does not challenge the Copenhagen interpretation of quantum mechanics: interpreting probability amplitudes as traveling field disturbances does not explain why a particle hits a detector as a particle (not as a wave). As such, this interpretation respects the complementarity principle. Comments: 35 Pages. Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary.
Introducing the Deep Exploration Series on Python
This post is the first in a series intended to dig deep into the Python interpreter. Python is a mature language, with commits dating back to August 1990. Along the way, the developers have evolved a series of processes for contributing that allow them to move relatively quickly with few issues/errors. In this series, we'll take a look at various modules and pieces of functionality of the Python language. We'll look at design choices, their impact, and their evolution. We'll also look at the design of the language itself and learn about the operations of the interpreter as it parses the language all the way to the main eval loop. Finally, we'll attempt to give practical takeaways that fall out of a deeper understanding of the language. The cpython implementation of Python (which is the standard on most machines) has been ported over to GitHub from its home in Mercurial. I think it also had a time under SVN, but the engineers managed to preserve (for the most part) the commit logs. If you've never seen it before, git provides a really nice feature to view the commit history of a single file. I went through about a decade of commit history on Objects/dictobject.c to bring a historical accounting of interesting tidbits as well as an overview of the modern implementation of the dict algorithm. (If you find any inaccuracies or errata, please send me a note.)
The Dict Interface and Mappings
The dict interface has seen a lot of changes over the years. There was actually a long period when "for k in d" did not exist! Check out the handy timeline below. A dict is a generalization of a mapping type. Mappings have been around forever, but a dict is still the only instance of them in Python according to the official documentation. You can actually find conflicting documentation, though, which I tend to agree with.
If a mapping is defined as an object that maps immutable keys to values, then collections.OrderedDict and collections.defaultdict also qualify. Early on, there was a mappingobject.c file that was actually renamed dictobject.c. Anyway, these are all nice to know — but let's get into the nuts and bolts by looking at the history of design decisions for this module.
B-trees, Hashing, and Collisions
There are a lot of semantics that go into the implementation of the Python dictionary. In the following subsections, we'll talk about the design choices and implementation details for the basic approach to dictionaries. We'll get almost up to the complete implementation of the dict as it's known today. I leave a few things for a future post (e.g., split dicts and type-specific implementations).
B-trees vs. Hashing
Python uses a hash table to get O(1) lookups on randomly accessed keys. The other most common choice for mapping objects is the binary tree lookup. Python's choice of the hash table over the B-tree was a conscious one. "Python’s dictionaries are implemented as resizable hash tables. Compared to B-trees, this gives better performance for lookup (the most common operation by far) under most circumstances, and the implementation is simpler." —python.org In fact, the performance difference shows up very clearly in benchmarks.
When Keys Collide
The idea of a collision is one concept that can make or break a hash table implementation. Handling collisions properly for your use case can make your approach very fast; handling them improperly can make things very slow. Python's hash tables had a history of problems with their approach to handling collisions that were worked out once and for all before Python 2.7. Going through these change sets is a phenomenal way to understand the shortcomings of various design decisions with respect to resolving collisions.
To understand Python's approach to hash tables, it's important to know a little bit of terminology:
- hash: a numeric value a key is mapped to by a hashing algorithm. In other words: f(key) -> hash
- slot: an empty position in the hash table a dictionary key-value can possibly be saved to
- dummy key: when a key-value is deleted from a dictionary, we don't leave behind an empty slot. This would make it impossible to find other keys (we'll go into that later). We leave behind a dummy instead.
- NULL keys: the value a key has before a slot in the table is used/occupied by a key-value pair
- entry: a hash and a key-value pair occupying a slot
The mapping object is a container for mapping entries. It maintains a reference to ma_fill (the number of non-NULL keys — i.e., dummy keys + entries); ma_used (the number of non-NULL, non-dummy keys); and ma_size, a prime number representing the size (memory allocated) of the underlying table. At the core of Python's lookup algorithm is a method called "lookmapping." Guido describes it in his commit as: "The basic lookup function used by all operations. This is essentially Algorithm D from Knuth Vol. 3, Sec. 6.4. Open addressing is preferred over chaining since the link overhead for chaining would be substantial." Now that we have the building blocks, let's understand the difference between Open Addressing and Chaining. I'll explain below, but you're encouraged to have a look at the Wikipedia pages linked to above. Knuth's books are also available on Amazon if you're interested in picking up a copy. In Open Addressing, we allocate our whole address space. This includes space for all the entries we are able to insert directly into the space, as well as those that find their slot through a sequence of collisions. In the figure below, you'll see an example of a linear probe. The key 'a' hashes to 12416037344. Taking the modulo with the table size (8), we find it falls into slot 0.
What if we want to add 'i' now, which also maps to slot 0? It's as simple as jumping forward some number of slots according to "incr." In our case, incr is set to 3. Each key that collides at slot 0 finds its home n × incr slots away, where n is the number of collisions encountered up to that point. The scheme we've described here has O(1) lookups in the best case and O(n) in the worst. In the worst case, it amounts to a linear search of the space.

Chaining differs from Open Addressing in the way memory is allocated. Rather than probing for a new slot when a collision occurs, we just append the entry to a linked list hanging off its slot. This also has O(1) lookups in the best case and O(n) in the worst. The major difference is the allocation overhead: in Chaining, we pay for an extra heap allocation for every collision that occurs during an insert operation.

The quality of a hash table implementation in many ways comes down to the quality of its collision resolution mechanism. If keys land too close together in the table, we end up with a situation called "clustering." Clusters make collisions more likely than usual over a range of keys and are, hence, typically undesirable. Python's actual probe formula at the onset was as shown above: (3

OK. But really. What's the problem? Well, on January 16, 1997, Guido wrote this gem, with an acknowledgement that the core algorithm behind the dict lookup must be AFAP (as fast as possible). Clearly, at this time, it was not. In modern implementations of GCC and clang, you'll find the modulo operation is optimized, but once upon a time it actually broke down to this formula: ...which is three operations in one. Wow! To get away from this and move toward fewer, faster operations, Guido (with help from other contributors) committed a new lookup algorithm based on Galois field arithmetic for randomized probing. This decreased the collision rate substantially.
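The fixed-increment probing described above fits in a toy table. This is an illustration of the idea only, not CPython's real algorithm: `ToyDict`, the fixed `incr = 3`, and the lack of deletion/dummy handling are all my simplifications.

```python
# Toy open-addressing table with a fixed-increment linear probe (incr = 3).
# incr = 3 is coprime with the table size 8, so probing visits every slot.
INCR = 3


class ToyDict:
    def __init__(self, size=8):
        self.size = size
        self.slots = [None] * size  # None plays the role of a NULL key

    def _probe(self, key):
        # First slot from the hash, then fixed jumps of INCR on collision.
        j = hash(key) % self.size
        while True:
            yield j
            j = (j + INCR) % self.size

    def __setitem__(self, key, value):
        for j in self._probe(key):
            if self.slots[j] is None or self.slots[j][0] == key:
                self.slots[j] = (key, value)
                return

    def __getitem__(self, key):
        for j in self._probe(key):
            entry = self.slots[j]
            if entry is None:          # hit a NULL slot: the key is absent
                raise KeyError(key)
            if entry[0] == key:
                return entry[1]


t = ToyDict()
t[0] = 'a'   # hash(0) % 8 == 0 -> slot 0
t[8] = 'i'   # hash(8) % 8 == 0 -> collision, probes forward to slot 3
assert t[0] == 'a' and t[8] == 'i'
assert t.slots[0] == (0, 'a') and t.slots[3] == (8, 'i')
```

This is also why deletions need dummy keys: if slot 0 were simply emptied, the probe for key 8 would hit `None` at slot 0 and stop before ever reaching slot 3.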
As a side note: in Python, the modulo operation is still a little slower than the bitwise & (which provides similar functionality here). Anyway, after the Galois field change, there was a series of minor changes to the lookmapping method that resulted in a 100% speed improvement. Have a look at the commit. It's very Death by a Thousand Cuts. For those of you having trouble picking out what was changed, they

- changed sequences of "if" statements to "if/else";
- postponed casting "hash" until it was needed; and
- stored "hash" in a register instead of on the stack.

After these changes had been in the community for a few years, it was brought up on the python-dev list that not all the bits of the hash were actually coming into play when computing which slot an entry should be placed in. One set in particular resulted in all entries landing in the same slot: the set defined by [i << 16 for i in range(20000)]. The commit history of the cpython project spells out the problem: every key in that set has nothing but zeros in its low bits, so every single entry is placed in slot 0, no matter how many times the table size doubles. That's pretty bad. To fix this, Tim Peters committed a feature in May 2001 that uses polynomial division to incorporate more bits of the hash value with each subsequent collision. I suppose the team thought this was a little difficult to understand. I'm not sure why, but a month later, on June 1, 2001, the polynomial division approach was stripped out in favor of a pretty simple recurrence. To quote the comments in dictobject.c:

The first half of collision resolution is to visit table indices via this recurrence:

j = ((5*j) + 1) mod 2**i

For any initial j in range(2**i), repeating that 2**i times generates each int in range(2**i) exactly once.

Again from dictobject.c:

The other half of the strategy is to get the other bits of the hash code into play. This is done by initializing an (unsigned) variable "perturb" to the full hash code and changing the recurrence to:

perturb >>= PERTURB_SHIFT;
j = (5*j) + 1 + perturb;

At this time, the following rules were solidified:

- Table size is 2**n, with a minimum size of 8 slots.
- Table indices are computed from hashes using the same number of hash bits as there are bits in the table size.
- The number of filled slots must stay below 2/3 of the table size, or the table resizes to the next power of two.

OK. We just went through a lot of implementation. It's nice to take something practical away from all this that you can use in your day-to-day. Here are a few things that fall directly out of what we reviewed.

Resizing Is Not "Free"

Our default table size is 8 slots. Since the number of filled slots must stay below 2/3 of the table size, our table will resize when the sixth element is added. Let's have a look at the difference between instantiating two 4-element dicts versus one 8-element dict. You might expect the two 4-element dicts to take longer because of the overhead associated with creating two separate containers. That happens not to be the case. Consider the following code: The output appears as ...indicating about a 30% performance benefit to avoiding resizing. Now, in the words of Donald Knuth, "premature optimization is the root of all evil." I'm certainly not advocating splitting up logically related dicts to get speed improvements. If you're worried about such small speed improvements, you should probably consider pypy or another language altogether. On the other hand, if you have a database query, for example, you may consider replacing your SELECT

Small Keys Are Better Than Large Ones

We saw the 500-fold improvement for the set of integers bit-shifted 16 times. What about 32 times? Sounds crazy, right? 1 << 32 is in the 4B range. That's large but not astronomical. In fact, some schemes for assigning user IDs borrow from this range. Consider the following code: As you might have guessed from the context, the dict with very large keys takes about 30% longer to assign to, indicating a much higher rate of collisions.
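The probe recurrence and the bit-shifted keys discussed above fit in a few lines. The function below follows the recurrence quoted from the dictobject.c comments; the function name and the demonstration values are mine.

```python
# CPython-style probe sequence, per the dictobject.c comments:
#     perturb >>= PERTURB_SHIFT;
#     j = (5*j) + 1 + perturb;
# Once perturb decays to zero this becomes j = (5*j + 1) mod 2**i, which
# cycles through every slot exactly once.
PERTURB_SHIFT = 5


def probe_sequence(h, table_size):
    """Yield slot indices for hash h in a power-of-two-sized table."""
    mask = table_size - 1
    perturb = h
    j = h & mask
    while True:
        yield j
        perturb >>= PERTURB_SHIFT
        j = (5 * j + 1 + perturb) & mask


# The pathological keys from the text: every `i << 16` has only zeros in
# its low bits, so in a small table they all start in slot 0.
assert all((i << 16) & 7 == 0 for i in range(20000))

# But perturb drags the high bits in, so the probe sequence still reaches
# every slot of an 8-slot table instead of cycling over a subset.
gen = probe_sequence(1 << 16, 8)
seq = []
while len(set(seq)) < 8:
    seq.append(next(gen))
assert set(seq) == set(range(8))
```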
Dicts Can Waste a Lot of Memory

Since the resizing plan for dicts follows the standard exponential (2**n) growth pattern and the fill level must stay below 2/3 of the table size, a freshly resized table uses only about a third of its allocated slots.

Dict Order Is Deterministic, but Not Guaranteed

People have probably told you the order of a dict is not guaranteed. That's true! But it actually is...kind of. The methods that calculate keys and place elements in the dict in cpython are deterministic: passing the same values will have the same results each time. What causes elements to go out of order is actually the history of the dict. Consider the following. In this example, we have two keys: 1111 and 111. Since our table size is 8 (the default for a new dict), we know only three bits will be taken into account when placing an element in the table (since there are 8 possible combinations of three bits). Knowing that, we can hash them. The last three bits of each are 111, which means we'll have a collision for sure. Adding them to dict a in one order and to dict b in the other order ensures the first element gets the first slot in each dict. When we try to add the next element, we'll have a collision, and it will be placed at another slot in the table (which happens to fall before the first slot in this case). When we print the keys of each dict, then, they're out of order relative to each other. This behavior is consistent, though, and reproducible. As long as we add the elements in the same order, the order will be the same for the latest versions of python 2.7 and 3.x as of the time of this writing.

Dict Keys Should Be Treated As Immutable

For those of you who haven't tried, this is what happens when you attempt to use a mutable type as a key for a dict: The problem here stems from the fact that mutable types don't have a __hash__() method by default. There is no inherent reason for this for a given container, but when you consider what __hash__() is used for, there's a great reason: if you add a list as a key for a dict and then mutate it, the hash will change. This means future lookups for that element will fail because the value passed in to look up will produce a different hash.
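The 1111/111 collision claim is easy to verify, since small non-negative ints hash to themselves in CPython:

```python
# 1111 is 0b10001010111 and 111 is 0b1101111: both end in the bits 111,
# so in a fresh 8-slot table (3 bits of the hash in play) they collide.
assert 1111 & 0b111 == 0b111
assert 111 & 0b111 == 0b111

# Small ints hash to themselves, so the initial slots really are equal:
assert hash(1111) % 8 == hash(111) % 8 == 7
```

Which of the two keys wins slot 7 therefore depends only on which was inserted first, exactly the "history of the dict" effect described above.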
Let's see an example. First, we have to extend list with a class that defines a __hash__() method. This will do just fine. Now we can use our list as a mutable dict key! hlist is our hashable list. hlist2 is a separate hashable list containing the same elements. Setting d[hlist] = 'a', we can look up the value 'a' with either hlist or the new list, hlist2 (which contains the same values in the same order). This works because the lists are equal by value and have the same output under __hash__(). Now let's mutate hlist. To be clear: the key of d that yields 'a' is hlist itself, not the value of hlist. Modifying hlist by reference, then, has some ill effects: the value of hlist changes, and so does its hash. The entry is already sitting in its assigned slot, though, and that slot will most likely be inappropriate given the new hash and value. So, as you might expect, appending 4 to hlist causes lookup errors. It still exists in d as a key, but now it has a different hash and a different value. When our lookup by the original variable, hlist, runs, we calculate a hash that no longer points at its slot, and it appears hlist is no longer in d. When we do the lookup by the original value, via hlist2, we find the correct slot based on its hash, but the stored key no longer compares equal, so it appears hlist2 is not in d.
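Here is a sketch of the whole experiment. The class name HashableList is hypothetical (the original post's class definition isn't shown), but hashing by current contents is exactly the property that goes wrong.

```python
class HashableList(list):
    """Hypothetical stand-in for the post's hashable list."""

    def __hash__(self):
        # Hash by current contents -- mutating the list changes the hash.
        return hash(tuple(self))


hlist = HashableList([1, 2, 3])
hlist2 = HashableList([1, 2, 3])   # equal by value, so equal by hash

d = {hlist: 'a'}
assert d[hlist] == 'a'             # lookup by the key object itself
assert d[hlist2] == 'a'            # equal hash + equal value -> found

hlist.append(4)                    # mutate the key *in place*

# The entry still sits in the slot chosen for hash((1, 2, 3)), but hlist2
# no longer compares equal to the stored (mutated) key, so value-based
# lookup deterministically fails:
assert hlist2 not in d

# Lookup via hlist itself now computes hash((1, 2, 3, 4)); it usually
# fails too, but can accidentally succeed if the new hash happens to
# probe the old slot (CPython short-circuits on identity), so we don't
# assert it. The mutated object is, however, still the stored key:
assert list(d.keys())[0] is hlist
```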
Scientists from the John Innes Centre (JIC), Norwich (1) have today reported that highly toxic compounds, called free radicals, are essential to plant growth. The researchers found that the controlled production of free radicals is an essential first step in switching on the expansion of cells that underlies the growth of plant shoots, roots, leaves and buds, a phenomenon that is especially evident in the spring. The research is reported in the international scientific journal Nature. "This is a completely novel discovery," said Dr Liam Dolan (leader of the research project at JIC). "For the first time we have strong evidence that all cell growth is controlled by the production of these highly reactive and therefore very toxic free radicals. At this time of year plants are juggling a life-and-death balance as cells in sprouting seedlings and opening buds make high levels of these molecules in order to drive the expansion of new leaves, roots and shoots." The research team have identified a gene (RHD2) that makes a protein which produces free radicals (2). They have demonstrated that controlled production of free radicals by RHD2 stimulates calcium channels in the membranes of cells, resulting in calcium being taken up by the cells. The accumulation of calcium in turn activates cell expansion. The scientists measured cell growth in roots and root hairs of Arabidopsis thaliana (3). In plants where the RHD2 gene was inactivated by a mutation, the roots and root hairs were stunted. The multidisciplinary team used sophisticated microscopy to reveal the effect of RHD2 on free radical production and calcium movement into cells.
Sea ice coverage at the poles remains small By National Oceanic and Atmospheric Administration June 18, 2018 Climate-wise, Mother Earth had three of a kind in hand last month: It was the fourth warmest May, the fourth warmest March–May period and the fourth warmest year to date on record for the globe. Let's take a closer look at the highlights from NOAA's latest monthly global climate analysis: Climate by the numbers The average global temperature in May 2018 was 1.44 degrees F above the 20th-century average of 58.6 degrees F. This was the fourth highest for May in the 139-year record (1880–2018). Last month also was the 42nd consecutive May and the 401st consecutive month with above-average temperatures. March through May 2018 | Season The average global temperature for March–May was 1.48 degrees F above the average of 56.7 degrees F, making it the fourth warmest such period on record. The year to date | January through May 2018 The year-to-date average global temperature was 1.39 degrees F above the average of 55.5 degrees F. This tied 2010 as the fourth warmest average temperature for the year to date.
Single-cell enzyme monitoring? © BioMed Central Ltd 2004 Published: 10 August 2004 A new technique to rapidly detect enzyme activity, published online August 8 in Nature Biotechnology, is sensitive enough to identify reactions from as few as 500 molecules, according to researchers at the University of Strathclyde, Glasgow, who say their method could potentially detect multiple enzyme activities simultaneously and at levels found within single cells (Nat Biotechnol 2004, DOI:10.1038/nbt1003). "We think we can probably apply the technology to most enzyme classes," researcher Barry Moore said of the method, which employs surface-enhanced resonance Raman scattering (SERRS). In SERRS, the target compound is adsorbed onto a roughened metal surface, producing an enhanced vibrational spectrum of the target, characterized by multiple sharp peaks, that serves as a fingerprint. The research team used a suspension of citrate-reduced silver particles roughly 40 nanometers in diameter as their metal surface. The key to detecting enzyme activities at ultra-low levels is a newly devised class of substrates covered by a University of Strathclyde patent. Each consists of three components: an enzyme recognition site, a benzotriazole azo dye, and an enzyme-cleavable linker that stably joins the other components. When free, the dye has a strong penchant for adsorbing to silver nanoparticles by displacing their citrate surface layers, and when this happens it can generate an increase of up to 10^14 times in the SERRS signal intensity, enough that near single-molecule detection levels of such dyes are observed. The substrates proved susceptible to hydrolysis by a wide range of hydrolases, including lipases, esterases, and proteases. In experiments, the substrates acted rapidly, allowing screening for activity and enantioselectivity for 14 enzymes in less than 30 seconds.
Extrapolating this productivity gives a potential throughput of roughly 100,000 enzymes per day per instrument, comparing favorably with other screening techniques, the investigators said. "You can measure SERRS on any standard Raman spectrometer," researcher Duncan Graham said. He noted no high-throughput SERRS techniques are currently employed, although his group has previously suggested high-throughput SERRS techniques for genomics. The technique detected 0.025 micrograms/milliliter of lipase from Pseudomonas cepacia after reaction for 10 minutes, corresponding to at most 0.8 picomoles of enzyme in the 1-milliliter sample volume. Given that the microscope lens SERRS uses interrogates only a small portion of the sample, a conservative estimate puts the actual sample volume at picoliters, whereas more realistically it is closer to femtoliters, the researchers wrote in their report. This suggests they likely sampled reactions arising from only 500 or so molecules of enzyme at most, they add. "In the long term, you could think of measurements directly in vivo, where you would get the nanoparticles into the cells," Moore told us. The investigators are currently synthesizing new substrates to monitor other major enzyme classes, such as phosphatases and P450s. They suggest the system could be optimized so that SERRS can be used to monitor single-enzyme kinetics, as has already been done with fluorescence. Also, so far, standard Raman optics have been used, but the authors note there is great scope for miniaturization. Graham noted his group was developing a microfluidics device to monitor enzyme selectivity and activity. Because each dye produces a characteristic SERRS spectrum, akin to a fingerprint, that can easily be identified and quantified in a mixture, synthesizing substrates with different enzyme recognition sites coupled with different dyes could make it possible to monitor the action of multiple enzymes simultaneously, the investigators suggest.
They note that this is very difficult to achieve with alternative techniques and is expected to provide a pathway to a wide range of new bioanalytical assays. "One application of this technology is in the development of new diagnostic methods, such as in detecting specific enzyme biomarkers of disease states. For example, enzymes such as metalloproteinases are known to be involved in cancer initiation and progression," Moore said. "It's ingenious work," said Chad Mirkin of Northwestern University in Evanston, Ill., who did not participate in this research. "It's a technique that's going to allow you to look at a variety of molecular biotechnology processes with high sensitivity with a technique that looks like it's fairly straightforward to implement." But Mirkin said that calibration would be "one of the big issues with this." "One of the main problems with SERRS is changing SERRS responses, since every surface is slightly different, with hot spots and cold spots, and calibration is often a difficult issue to deal with," Mirkin said.
- Nature Biotechnology, [http://0-www.nature.com.brum.beds.ac.uk/nbt/]
- Barry D. Moore, [http://www.chem.strath.ac.uk/people.php?id=cbas115]
- Raman spectroscopy, [http://en.wikipedia.org/wiki/Raman_spectroscopy]
- Duncan Graham, [http://homepages.strath.ac.uk/~bas96104/]
- Duncan Graham: Publications, [http://homepages.strath.ac.uk/~bas96104/publications.htm]
- First person: Chad Mirkin, The Scientist, 17:9, January 27, 2003, [http://www.the-scientist.com/yr2003/jan/upfront4_030127.html]
- Chad Mirkin, [http://www.chem.northwestern.edu/~mkngrp/]
Align Paragraph and Table Borders with Page Border. When the object is serialized out as xml, its qualified name is w:alignBordersAndEdges.

Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll)

' Declaration (Visual Basic)
Public Class AlignBorderAndEdges
    Inherits OnOffType

' Usage (Visual Basic)
Dim instance As AlignBorderAndEdges

// Declaration (C#)
public class AlignBorderAndEdges : OnOffType

[ISO/IEC 29500-1 1st Edition] alignBordersAndEdges (Align Paragraph and Table Borders with Page Border)

This element specifies that paragraph borders specified using the pBdr element and table borders specified using the tblBorders element (§17.4.40) shall be adjusted to align with the extents of the page border defined using the pgBorders element (§17.6.10) if the spacing between these borders and the page border is 10.5 points (one character width) or less. The presence of this setting shall ensure there are no gaps of one character width or less between adjoining page and paragraph/table borders, as borders which align perfectly shall not be displayed in favor of the intervening page border. If this element is omitted, then borders shall not be automatically adjusted to prevent gaps of less than one character width. If the page border is not measured from the text extents using a value of text in the offsetFrom attribute on the pgBorders element, then this element can be ignored.

[Example: Consider the following WordprocessingML fragment from the document settings: The alignBordersAndEdges element has a value of true, specifying that borders must be adjusted to prevent gaps of less than one character width.
If a document has a page border specified to appear 4 points from the text extents, and within that page a paragraph border specified to appear one point from the text extents, that would normally appear like this: If this element is present, then those gaps (which are all of three points in width) must be adjusted to ensure that the borders align exactly and the paragraph border is suppressed: This element’s content model is defined by the common boolean property definition in §17.17.4. © ISO/IEC29500: 2008. Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
<urn:uuid:e1c1e69a-67a3-4456-aea3-ccedc1ce868f>
2.5625
528
Documentation
Software Dev.
48.877092
95,582,001
Aerosols can influence the climate indirectly by acting as cloud condensation nuclei and/or ice nuclei, thereby modifying cloud optical properties. In contrast to the widespread global warming, the central and south central United States display a noteworthy overall cooling trend during the 20th century, with an especially striking cooling trend in summertime daily maximum temperature (Tmax) (termed the U.S. “warming hole”). Here we used observations of temperature, shortwave cloud forcing (SWCF), longwave cloud forcing (LWCF), aerosol optical depth and precipitable water vapor as well as global coupled climate models to explore the attribution of the “warming hole”. We find that the observed cooling trend in summer Tmax can be attributed mainly to SWCF due to aerosols with offset from the greenhouse effect of precipitable water vapor. A global coupled climate model reveals that the observed “warming hole” can be produced only when the aerosol fields are simulated with a reasonable degree of accuracy as this is necessary for accurate simulation of SWCF over the region. These results provide compelling evidence of the role of the aerosol indirect effect in cooling regional climate on the Earth. Our results reaffirm that LWCF can warm both winter Tmax and Tmin. A major barrier to reliable prediction of climate change on decadal and longer scales is the characterization of uncertainties in the magnitude of the estimated cloud-mediated (indirect) effects of aerosols1. The aerosol indirect effect can be negative or positive by suppressing or invigorating the development of clouds and precipitation under different circumstances due to the complex interaction between aerosols and cloud droplets2,3. Airborne absorbing aerosols have been reported to raise regional temperature by reducing the local large-scale cloud cover1. 
The competing radiative effects of climate include the greenhouse effect (warming due to infrared absorbers) and the “whitehouse” effect (cooling due to visible wavelength reflectors)1. As reflectors, clouds affect the climate by reflecting incoming solar radiation back to space (shortwave cloud forcing (SWCF)), which tends to decrease the daytime maximum surface temperature (Tmax) (cooling effect), and by trapping outgoing infrared radiation (longwave cloud forcing (LWCF)), which tends to increase both nighttime minimum (Tmin) and daytime Tmax (warming effect). In addition, the increase of infrared absorbers such as greenhouse gases (e.g., CO2 and precipitable water vapor (Q)) and absorbing aerosol results in an increase in both daytime Tmax and nighttime Tmin (warming effect due to longwave forcing), whereas the increase of visible reflectors such as sulfate aerosols and clouds leads to a decrease of the daytime Tmax (cooling effect due to shortwave forcing)4,5,6. If the infrared absorption dominates and consequently the greenhouse effect increases, both nighttime Tmin and daytime Tmax should increase with potentially larger effects during the winter due to its longer nights and more stable lapse rate7. If the visible reflection dominates and the whitehouse effect increases, the daytime Tmax should decrease, primarily when solar radiation is the greatest (summer)6,7. In contrast to the widespread global warming, the central and south central United States display a noteworthy overall cooling trend over the past century, with an especially striking cooling trend in summertime daily Tmax (termed the U.S. “warming hole”)1,8,9 (also Supplementary Fig. S1A and Supplementary Note 1). 
Several explanations have been suggested for this cooling trend, which seem partly associated with the change in sea surface temperatures10, low-level circulations/soil moisture feedback9, internal dynamic variability11, the change in cumulus clouds12, the positive low-level moisture convergence13, large-scale circulation modes (El Nino/Southern Oscillation)8 and land surface processes14. It has been speculated that the aerosol direct and indirect effects play a significant role in the observed strong anticorrelation between trends in summer daily Tmax and precipitation in these regions8. The strong anticorrelation between precipitation and Tmax (and diurnal temperature range) during the warm season has also been found in many other regions15,16. Here we use monthly mean observational data sets of Tmax and Tmin from the Global Historical Climatology Network Monthly (GHCNM)17, cloud properties (SWCF and LWCF at the top of atmosphere (TOA), cloud optical depth (COD) and cloud fractions) from the Clouds and Earth's Radiant Energy System (CERES)18, aerosol optical depth (AOD) from Terra-MODIS, Q from the National Center for Environmental Prediction (NCEP) reanalysis data, and global coupled climate models (Supplementary Notes 1, 2, 3) to explore the attribution of the U.S. "warming hole". The very strong correlation between summer Tmax and SWCF (correlation coefficient (r) > 0.67 is statistically significant at the 0.05 level) in the scatter plots of Fig. 1A during 2000–2011 is a strong indication that SWCF is one of the major driving forces for the noted variability in summer Tmax over the continental U.S. (CONUS). This is strongly supported by a nearly perfect match of negative trends in the western U.S. (WUS) and positive trends in the eastern U.S. (EUS) for summer Tmax and SWCF, with the U.S. High Plains dryline as separation. The negative trends in summer Tmax in Maine are collocated with consistent negative trends in SWCF (Figs. 2A and 2B).
This is confirmed by the consistent longitudinal variation of the trends of summer Tmax, and SWCF in Fig. 1C. This is evidence that the SWCF trends are one of the main causes for negative trends in WUS and positive trends in EUS for summer Tmax. Fig. 1A strongly supports the assumption that response of temperature to the climate forcing is proportional7,19. Since SWCF by definition is negative, the positive slope (0.12 ± 0.002 (2σ) and 0.15 ± 0.003°C/(W/m2) for EUS and WUS, respectively) means that during summer, when solar radiation is the greatest, more clouds can reflect more incoming solar radiation back to space (larger negative SWCF values), systematically decreasing the daytime Tmax significantly over the CONUS. One obvious question remains as to what causes the observed regional scale change in clouds. Although all cloud droplets must form on preexisting aerosol particles that act as cloud condensation nuclei (CCN)3, cloud distributions depend not only on the available aerosol particles that serve as CCN but also on prevalent atmospheric dynamic and thermodynamic processes3,20. Although there is substantial evidence of the aerosol indirect effect (AIE)2,3,21,22,23,24,25,26, the summer Tmax can change because of variation in large-scale atmospheric circulation. Following Kaufman et al.27, a multiple linear regression (MLR) is used to analyze the influence of synoptic meteorological parameters (from NCEP reanalysis) and aerosols on summer Tmax, its trends and SWCF trends as listed in Table 1 (Supplementary Note 11). Note that the correlations between variables do not prove causality and that the aerosol indirect effect on climate cannot be untangled with high degree of confidence until regional climate models can predict climate change and cloud evolution with high precision. 
Table 1 indicates SWCF and Q are the two major contributors to variability in both summer Tmax (β-coefficients for relative importance are 0.48 and 0.44, respectively) and its trends (β-coefficients are 0.38 and 0.37, respectively) over the CONUS. This is supported by very significant linear correlations between summer Tmax and Q over the EUS in Fig. 3 and the nearly perfect match of positive trends in EUS for summer Tmax and Q in Figs. 2A and 2E, except in the northeast portion of the U.S. Q is considered the most important greenhouse gas with positive feedback1 (Supplementary Note 10). The results within and outside the "warming hole" are similar to those over the CONUS, except that Q is not important for summer Tmax outside the "warming hole", as shown in Table 1 and Fig. S14B. The results in Table 1 further reveal that the aerosol direct effect calculated by a box model28 (Supplementary Note 6) does not play a significant role in decreasing summer Tmax over the CONUS, in agreement with other studies7,13,29. The very poor correlations between moisture convergence and summer Tmax (Supplementary Figs. S11 and S12, Supplementary Note 9) indicate that moisture convergence is unimportant for summer Tmax and the U.S. "warming hole". A high population density and energy- and combustion-related atmospheric emissions, interspersed with heavily forested areas in the EUS, provide precursors and sources of anthropogenic and biogenic inorganic and organic aerosols30,31,32,33, which can be CCN. The close match of negative trends in summer AOD and positive trends in SWCF over the source regions of the central and eastern U.S. in Figs. 2B and 2D is strong evidence that the AOD trends are the main cause of positive trends in SWCF during 2000–2011. This is confirmed by the consistent longitudinal variation of the negative trends in both AOD and COD, and positive trends in SWCF over the EUS in Fig.
1C, and by the significant linear correlations between the trends of longitudinal-mean AOD and mean SWCF in Fig. 1D (Supplementary Fig. S6). Table 1 shows that the trends of AOD are mainly responsible for the variability in the trend of SWCF and for the variability of the longitudinal means of the SWCF trends. To explain the attribution of the “warming hole” for the period 1950 to 2011 (Supplementary Fig. S1A) (or 1901 to 2011 (Supplementary Fig. S5A)), we analyze available results of nineteen global coupled models from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel data set34 for both periods, 2000 to 2011 and 1950 to 2011 (Supplementary Notes 12, 2). Fig. 1B shows the scatter plots from the simulations of the MIROC-ESM-CHEM model for the period 1950 to 2011. Similar to the observations, very strong linear correlations are noted for summer Tmax–SWCF in all models listed in Supplementary Table S1 except CESM1-CAM5-1-FV, which exhibits a comparatively smaller slope and a weak correlation. The observed slopes for 2000 to 2011 fall within the range estimated from the models, with substantial agreement between the observations and models. The slopes and correlations appear independent of the analysis period, as evidenced by similar values for both periods. Thus, we can confidently conclude that the observed slopes and correlations for summer Tmax–SWCF for 2000 to 2011 in Fig. 1 are representative of those for the longer timescale (1950 to 2011). Nationally, SO2 emissions grew from 1950 to about 1980 and then decreased by more than 60% between 1980 and 201030, and there is a linear relationship between decreasing aerosol sulfate concentrations and SO2 emissions30. This is in agreement with the observation that for summer Tmax in the EUS there are almost uniformly negative trends during 1950–1985 (Fig.
4E), in contrast to almost uniformly positive trends during 1985–2011 (Supplementary Fig. S41I) and 2000 to 2011 (Fig. 2A). This is supported by the fact that over the United States cloud cover increased from 1949 to 2001 in the summer and annual means, with all of the increase occurring prior to the early 1980s35. The trend analyses for the global coupled models from CMIP5 (Supplementary Note 12) indicate that only the MIROC-ESM-CHEM model successfully shows negative trends in summer Tmax (i.e., the U.S. “warming hole”) (Figs. 4A for the observations and S39A for the models) and SWCF (Fig. 4B) over the central U.S. during 1950–2011. Detailed analysis (Supplementary Note 13) shows that the MIROC-ESM-CHEM model successfully and consistently reproduced the observed summer features in the long term (i.e., the “warming hole” over the central U.S. for the 1901–2011 (Supplementary Fig. S38) and 1950–2011 periods (Supplementary Fig. S39), and negative trends in Tmax over the EUS for the 1950–1985 period with positive trends in AOD and negative trends in SWCF (Supplementary Fig. S40)) and in the short term (2000–2011) for Tmax (positive trends), AOD (negative trends), SWCF (positive trends) and Q (positive trends in the southeast and negative trends in the northeast) over the EUS (Supplementary Fig. S43, Supplementary Note 13). MIROC-ESM-CHEM missed the “warming hole” over the south central U.S. during the 1950–2011 period because it did not include the AIE on subgrid convective clouds, and this effect is dominant over the south central U.S.36,37,38. By contrast, MIROC-ESM did not capture the “warming hole” for 1950–2011 (Supplementary Fig. S39) or other observed features such as Q for the 2000–2011 period (Supplementary Fig. S43), because the distributions of AOD and SWCF from the MIROC-ESM simulations (Supplementary Figs. S39G and S39F) differ from those from MIROC-ESM-CHEM (Supplementary Figs. S39C and S39B).
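The trends discussed throughout are ordinary least-squares slopes of seasonal means against time. A minimal sketch of such a trend calculation, using synthetic data (not the paper's):

```python
import numpy as np

def linear_trend(years, series):
    """Least-squares slope of a time series, returned per decade."""
    slope, _intercept = np.polyfit(years, series, 1)
    return slope * 10.0

# synthetic summer-mean Tmax warming at 0.05 degC per year
years = np.arange(2000, 2012)
tmax = 30.0 + 0.05 * (years - 2000)
trend = linear_trend(years, tmax)  # 0.5 degC per decade
```

Mapping this slope over every grid cell produces trend fields like those in Figs. 2 and 4.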
Comparisons of the distribution patterns from the MIROC-ESM-CHEM and MIROC-ESM simulations (Supplementary Note 13), especially for AOD and SWCF, indicate that AOD and SWCF from the MIROC-ESM simulations are not in the right locations (Supplementary Figs. S39G and S39F) relative to those from MIROC-ESM-CHEM (Supplementary Figs. S39C and S39B) for the 1950–2011 period. MIROC-ESM-CHEM showed positive trends in AOD over the central U.S., while MIROC-ESM showed negative trends there for the 1950–2011 period; this difference causes the different results for SWCF and Tmax shown in Supplementary Fig. S39. The only difference between MIROC-ESM-CHEM and MIROC-ESM is that the MIROC-ESM simulations used prescribed monthly mean 3-D chemical fields, while the MIROC-ESM-CHEM simulations used chemical fields calculated by an online photochemical module (Supplementary Note 13). Since good chemical fields affect greenhouse gases such as H2O and the aerosol (AOD) fields, the much better performance of MIROC-ESM-CHEM relative to MIROC-ESM supports the attribution of the “warming hole” to the aerosol indirect effect (Supplementary Note 13). The U.S. “warming hole” (i.e., the decrease of summer Tmax) over the central and south central U.S. in Fig. 4A is caused by an increase of clouds (Fig. 4B) due to an increase of aerosols (Fig. 4C), partially offset by the greenhouse effect of increasing Q (Fig. 4D), during 1950 to 2011 (Supplementary Notes 11, 13). The consistent cooling trends in summer Tmax in the EUS during 1950–1985 (Fig. 4E) result from both an increase of clouds (Fig. 4F) due to an increase of aerosols (Fig. 4G) and a decrease of Q (Fig. 4H) (Supplementary Notes 11, 13). In addition, the very strong linear correlation between winter Tmin (Tmax) and LWCF (r > 0.65 is statistically significant at the 0.05 level) in Figs.
5A and 5B for the 2000–2011 period shows that LWCF is one of the major driving forces for the noted change in winter Tmin (Tmax) in the region restricted to latitudes ≥ 36°N, because of the latitudinal dependence of the climate response to radiative forcing7,19,20. A global study shows that a radiative forcing can yield a larger response at high latitude than at low latitude because of the sea-ice feedback and the more stable lapse rate at high latitude, especially with calculated clouds7. Since LWCF is by definition positive, the positive slopes here imply that during winter more clouds trap more outgoing infrared radiation, systematically and significantly increasing both nighttime Tmin and daytime Tmax over the CONUS at latitudes ≥ 36°N. This is supported by a nearly perfect match of the negative and positive trends in Tmin and Tmax with those of LWCF over the CONUS at latitudes > 36°N (Supplementary Fig. S9), indicating that the climate changes in these regions are more complicated and should be analyzed separately. The model results in Figs. 5C and 5D and Supplementary Table S1 show that the observed slopes and correlations for winter Tmin–LWCF and winter Tmax–LWCF for 2000 to 2011 are representative of those for the longer timescale (1950 to 2011). The summer AODs decrease over the CONUS, especially in the EUS (Fig. 5D), whereas the winter AODs increase at latitudes > 36°N (Supplementary Fig. S9) from 2000 to 2011. Over the ocean outside the CONUS, both summer and winter AOD increase (Fig. 2D for summer and Supplementary Fig. S9 for winter). The results over the WUS are similar to those of the EUS but with slightly smaller slopes and lower correlation coefficients, indicating that the response of winter Tmax and Tmin to LWCF is slightly weaker in the WUS than in the EUS. We have strived to explore the attribution of the U.S.
“warming hole” by using observations of temperature, SWCF, LWCF, AOD and precipitable water vapor, as well as nineteen global coupled climate models. Our analysis shows that there is a very strong correlation between summer Tmax and SWCF, and a nearly perfect match of their negative trends in the WUS and positive trends in the EUS during 2000–2011 over the CONUS. Note that the correlation (0.64) between SWCF and summer Tmax is higher than that (0.46) between cloud fraction and summer Tmax over the eastern U.S., as shown in Supplementary Fig. S44, indicating that SWCF is the better variable for describing change in summer Tmax. On the other hand, Quaas et al. (2009)39 pointed out that a strong positive correlation between AOD and cloud fraction may be due to the aerosol cloud-lifetime effect, dynamical influences such as convergence, humidity swelling of aerosols, or biases in the satellite retrievals, and that none of these provides a unique explanation. The MLR analysis shows that SWCF and precipitable water vapor are the two major contributors to variability in both summer Tmax and its trends over the CONUS. We find consistent longitudinal variation of the negative trends in both AOD and COD and a significant linear correlation between the trends of longitudinal-mean AOD and SWCF, indicating that the trends of AOD are mainly responsible for the variability in the trends of SWCF and the variability of the longitudinal means of the SWCF trends. The MIROC-ESM-CHEM36,37,38 coupled climate model (Supplementary Note 13) reveals that the observed “warming hole” (i.e., the negative trend in summertime Tmax) can be reproduced only when the aerosol fields are simulated reasonably, as this is necessary for a reasonable simulation of SWCF over the region. Since the purpose of this paper is to analyze all CMIP5 GCMs and present the results, more work is needed to prove the superiority of MIROC-ESM-CHEM.
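The significance threshold quoted above (r > 0.65 at the 0.05 level) follows from the standard t-test for a Pearson correlation with n = 12 seasonal values, i.e. ten degrees of freedom. A minimal sketch, assuming a simple two-tailed t-test:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r * sqrt((n - 2) / (1 - r^2)), compared against the
    two-tailed critical value for n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# with 12 summers (2000-2011), r = 0.65 gives t ~ 2.7, above the
# two-tailed 5% critical value of ~2.23 for 10 degrees of freedom
t = t_statistic(0.65, 12)
```

Any correlation exceeding 0.65 over 12 years therefore clears the 5% significance bar under this test.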
In conclusion, these results provide compelling evidence of the role of the aerosol indirect effect in cooling regional climate on the Earth. On the other hand, many theoretical explanations for the attribution of the warming hole have been suggested. On the basis of an analysis of 192 simulations from 22 CMIP5 climate models, Kumar et al.40 found that models with relatively higher skill in simulating the North Atlantic low-frequency (multidecadal) oscillations are more likely to reproduce the warming hole over North America. Leibensperger et al.21 showed that the regional radiative forcing from anthropogenic aerosols can cool the central and eastern U.S. by 0.5–1.0°C on average during 1970–1990, and that aerosol cooling can increase the southerly flow of moisture from the Gulf of Mexico, which results in increased cloud cover and precipitation in the central U.S.; this leads to the largest cooling effect from anthropogenic aerosols being in the central U.S. The model simulations of Mickley et al.41 over the U.S. for 2010–2050 found that removal of U.S. aerosols can cause significant regional warming, with temperatures during summer heat waves increasing by as much as 1–2 K in the northeastern U.S., in part because of positive feedbacks involving soil moisture and low cloud cover. Pan et al.9,14 believed that local and regional land-surface processes were partly responsible for the warming hole through their role in replenishing the seasonally depleted hydrologic cycle (soil moisture). Kunkel et al.11 pointed out that the warming hole is associated with variations in sea surface temperatures (SSTs) in the tropical Pacific, and that there was a strong association between central U.S. temperatures and the observed variability of North Atlantic SSTs. Lower SSTs over the North Atlantic can increase the anticyclonic transport of moisture from the Gulf of Mexico.
Meehl et al.13 believed that altered moisture convergence can increase precipitation, with concomitant increases of soil moisture, surface evaporation and cloudiness. It is clear that all of these works invoke changes in the moisture-aerosol-cloud-precipitation-SWCF interaction in the warming hole region. As stated in Rosenfeld et al.3, all cloud droplets must form on preexisting aerosol particles that act as CCN; that is, moisture needs aerosol particles to form clouds. Completely understanding the moisture-aerosol-cloud-precipitation-SWCF interaction in the warming hole region will require more comprehensive models and is beyond the scope of this work. Since this interaction is complicated, it may not be linear. On the other hand, the southeast is upwind of the industrialized areas of the NE corridor but is rich in aerosols from biogenic sources, so the possibly greater moisture availability and the presence of sufficient aerosols (from both biogenic and anthropogenic sources) could provide an ideal combination. We use observational data of monthly mean maximum (Tmax) and minimum (Tmin) temperatures at thousands of stations (Supplementary Fig. S1) obtained from the Global Historical Climatology Network Monthly (GHCNM) version 3 (last updated: 04/11/2012) (http://www.ncdc.noaa.gov/ghcnm)17. The global monthly 1.0° × 1.0° data for shortwave flux (all-sky, clear-sky), longwave flux (all-sky, clear-sky), cloud optical depth (COD), and cloud fractions under daytime and nighttime conditions at the TOA between March 2000 and December 2011, measured by the Clouds and the Earth's Radiant Energy System (CERES)18, were downloaded from the NASA CERES website (http://ceres.larc.nasa.gov).
The global monthly 1.0° × 1.0° data for aerosol optical depth (AOD) at 550 nm and total precipitable water vapor between March 2000 and December 2011, based on Terra-MODIS measurements, were downloaded from the NASA Giovanni website (http://gdata1.sci.gsfc.nasa.gov/daac-bin/G3/gui.cgi?instance_id=aerosol_monthly). The global monthly 2.5° × 2.5° mean meteorological fields for the period 1950 to 2011 were downloaded from the National Center for Environmental Prediction (NCEP)/NCAR reanalysis website (http://www.esrl.noaa.gov/psd/data/reanalysis/reanalysis.shtml) (Supplementary Notes 1, 3).
Nineteen global coupled climate models
The global model results from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel data set34 were obtained from the website http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php. The nineteen global climate models used in this work are GFDL-CM3, GFDL-ESM2G, NCAR-CCSM4, CESM1-CAM5-1-FV, NASA-GISS-E2-R, IPSL-CM5A-LR, INM-CM4, MPI-ESM-LR, MOHC-HadCM3, MOHC-HadGEM2-CC, MOHC-HadGEM2-ES, MRI-CGCM3, BCC-CSM1-1, NCC-NorESM1-M, CNRM-CM5, NIMR-KMA-HadGEM2, CSIRO-BOM-ACCESS1-0, MIROC-ESM-CHEM, and MIROC-ESM (Supplementary Note 2). Note that the analysis of the CMIP5 GCM results uses a single member of the simulation ensemble from each GCM. The model results from the historical (simulation of the recent past)34 and RCP45 (future projection forced by representative concentration pathway (RCP) 4.5, a radiative forcing of 4.5 W m−2) runs were used for 1850 to 2005 and 2006 to 2011, respectively. The historical simulations (1850–2005) imposed changing conditions (consistent with observations), which may include1 atmospheric composition (including CO2) due to both anthropogenic and volcanic influences, solar forcing, emissions or concentrations of short-lived species, and natural and anthropogenic aerosols or their precursors and land use6,34.
On the other hand, the RCP45 future climate projections (2006–2100) follow a concentration pathway that results in a radiative forcing of approximately 4.5 W m−2 at year 2100, relative to pre-industrial conditions6. We thank Prof. Susan Solomon for insightful discussions that led to a substantial strengthening of the manuscript, and Prof. Daniel Rosenfeld and Dr. Christian Hogrefe for helpful comments. We thank the CERES, GHCNM, MODIS and WCRP CMIP5 groups for producing the data used in this paper. The United States Environmental Protection Agency, through its Office of Research and Development, funded and managed the research described here. It has been subjected to the Agency's administrative review and approved for publication. Analyses and visualizations used in this study were produced with the Giovanni online data system, developed and maintained by the NASA GES DISC. We also acknowledge the MODIS mission scientists and associated NASA personnel for the production of the data used in this research effort. Part of this work is supported by the “Zhejiang 1,000 Talent Plan” and the Research Center for Air Pollution and Health at Zhejiang University.
Eulerian and Hamiltonian circuits are defined with some simple examples and a couple of puzzles to illustrate Hamiltonian circuits.
A game for 2 players. Set out 16 counters in rows of 1, 3, 5 and 7. Players take turns to remove any number of counters from a row. The player left with the last counter loses.
A game for 2 players with similarities to NIM. Place one counter on each spot on the game board. Players take it in turns to remove 1 or 2 adjacent counters. The winner picks up the last counter.
Given the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is, all 4 colours appear.
There are lots of different methods to find out what the shapes are worth - how many can you find?
The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter.
Can you put the 25 coloured tiles into the 5 x 5 square so that no column, no row and no diagonal line have tiles of the same colour in them?
Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, multiply two two-digit numbers to give a four-digit number, so that the expression is correct. How many different solutions can you find?
Can you be the first to complete a row of three?
An ordinary set of dominoes can be laid out as a 7 by 4 magic rectangle in which all the spots in all the columns add to 24, while those in the rows add to 42. Try it! Now try the magic square...
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
A game for 2 players. Take turns to place a counter so that it occupies one of the lowest possible positions in the grid. The first player to complete a line of 4 wins.
Can you work out how to win this game of Nim?
Does it matter if you go first or second?
Everything you have always wanted to do with dominoes! Some of these games are good for practising your mental calculation skills, and some are good for your reasoning skills.
Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?
Can you mentally fit the 7 SOMA pieces together to make a cube? Can you do it in more than one way?
Is it possible to use all 28 dominoes arranging them in squares of four? What patterns can you see in the solution(s)?
Take ten sticks in heaps any way you like. Make a new heap using one from each of the heaps. By repeating that process could the arrangement 7 - 1 - 1 - 1 ever turn up, except by starting with it?
A game that demands a logical approach using systematic working to deduce a winning strategy.
Place the numbers 1, 2, 3, ..., 9 one on each square of a 3 by 3 grid so that all the rows and columns add up to a prime number. How many different solutions can you find?
Using some or all of the operations of addition, subtraction, multiplication and division and using the digits 3, 3, 8 and 8 each once and only once, make an expression equal to 24.
Using the interactivity, can you make a regular hexagon from yellow triangles the same size as a regular hexagon made from green triangles?
Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling.
Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15.
A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour.
How many moves does it take to swap over some red and blue frogs? Do you have a method?
Factor track is not a race but a game of skill. The idea is to go round the track in as few moves as possible, keeping to the rules.
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Using the 8 dominoes, make a square where each of the columns and rows adds up to 8.
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
Can you arrange the digits 1, 1, 2, 2, 3 and 3 to make a Number Sandwich?
How many differently shaped rectangles can you build using these equilateral and isosceles triangles? Can you make a square?
Use these four dominoes to make a square that has the same number of dots on each side.
Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour.
In the ancient city of Atlantis a solid rectangular object called a Zin was built in honour of the goddess Tina. Your task is to determine on which day of the week the obelisk was completed.
Four friends must cross a bridge. How can they all cross it in just 17 minutes?
Can you use small coloured cubes to make a 3 by 3 by 3 cube so that each face of the bigger cube contains one of each colour?
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
A game in which players take it in turns to choose a number. Can you block your opponent?
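Several of the games above are variants of Nim, whose winning strategy rests on the binary digital sum (nim-sum) of the heap sizes. A minimal sketch for the normal-play convention (the misère games above differ only in the endgame):

```python
from functools import reduce
from operator import xor

def nim_winning_move(heaps):
    """Return (heap_index, new_size) for a move leaving nim-sum zero,
    or None when the position is already lost for the player to move."""
    s = reduce(xor, heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:
            return i, target
    return None

# the 1-3-5-7 counters game above: nim-sum 1^3^5^7 = 0, so the
# player to move has no winning move under normal play
move = nim_winning_move([1, 3, 5, 7])
```

Reducing some heap so the overall XOR returns to zero hands the losing position back to the opponent, which is why 1-3-5-7 is a first-player loss with best play.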
Nasa announced the discovery of seven Earth-like planets orbiting a star called Trappist-1, about 39 light years away, on Wednesday. The find has widely excited the astronomy community because of its implications for the hunt for alien life beyond the solar system. Three of the planets in the Trappist-1 system are in the habitable zone near the star and so could have water on their surfaces.
Experienced practitioners from diverse organisations came together to discuss threatened species monitoring at a workshop entitled ‘Enhancing Monitoring for Threatened Species to Improve Conservation Outcomes’. Government, NGO, community group and university representatives presented case studies, the insights from which helped shape lively discussion around the decisions, processes and challenges of threatened species monitoring. Participants unanimously agreed that monitoring is an essential part of threatened species recovery; however, they asked why threatened species monitoring is rarely carried out, and why, when it is carried out, it is rarely effective in terms of positively affecting conservation outcomes. Discussion revolved around solutions to this problem, including the potential for new technologies (drones, thermal cameras) to aid monitoring design, the value of citizen science, and the contribution of Indigenous groups to threatened species monitoring. One of the insights from the workshop was the need to include people at all stages of monitoring design and application. Hub researcher Natasha Robinson, from the Australian National University, observed that “to improve threatened species conservation we need people to engage with and value threatened species monitoring.” “Practitioners need to demonstrate the value of on-going monitoring through reporting on our successes (and failures), and to engage with a broad range of people – from community, to land managers, to Indigenous people, to funding bodies and government. “Without support from these different groups, monitoring is at risk of not being integrated into decision making and on-ground management, and therefore not contributing to positive conservation outcomes.” On-ground threatened species conservation will be assisted by practical guidelines - under development as part of this project - that aim to enhance the effectiveness of threatened species monitoring. See more information.
Image: Long-term monitoring of threatened species comes with its challenges (image supplied by Natasha Robinson)
We may soon be able to detect Earth-like planets outside of our solar system, but if we do, how will we know whether their atmospheres have the right ingredients for life? New research actually promises to make that process fairly straightforward. ScienceDaily has more: When a planet passes in front of its parent star, part of the starlight passes through the planet's atmosphere and contains information about the constituents of the atmosphere, providing vital information about the planet itself. This is called a transmission spectrum, and even though astronomers can't use exactly the same method to look at the Earth's atmosphere, they were able to obtain a spectrum of our planet by observing light reflected from the Moon towards the Earth during a lunar eclipse. This is the first time the transmission spectrum of the Earth has been measured. The spectrum not only contained signs of life, but these signs were unmistakably strong. It also contained unexpected molecular bands and the signature of the Earth's ionosphere. According to Enric Palle of the Instituto de Astrofisica de Canarias, whose team developed the technique, "Now we know what the transmission spectrum of an inhabited planet looks like, we have a much better idea of how to find and recognize Earth-like planets outside our solar system where life may be thriving." Palle added that studying the spectrum is a "very effective way" to learn about a planet's biological processes.
Foreword, preface, acknowledgments; chapter 1: introduction - history, ecology and science of old-growth forests in the East (Andrew Barton and William Keeton). Pre-order the book Ecology and Recovery of Eastern Old Growth Forests by Andrew M. Barton at the price of 19941 lei, with courier delivery anywhere in Romania. Ecology and Recovery of Eastern Old Growth Forests: the landscapes of North America, including eastern forests, have been shaped by humans for millennia through fire, agriculture, hunting and other means, but the arrival of Europeans on America's east... List of old-growth forests; Eastern Old-Growth Forests: Prospects for Rediscovery and Recovery, Island Press. Hardwood forests of the eastern United States can develop old-growth characteristics in one or two...
What’s on the horizon for renewable energy this year? The past few years have seen a remarkable rise in demand for renewable energy. With the development of complex technology, the pressing issue of climate change and an increasing recognition of the need to move to a low-carbon future, the popularity of energy that harnesses nature’s resources has skyrocketed. Today, we’re greener than we’ve ever been before, and that is partly thanks to the massive leaps that we’ve taken in technology, which have allowed us to push further, think more creatively, and streamline the processes that we currently have. This trend is set to continue in 2018, too. Solar power and wind energy had a record-breaking year in 2017, with wind energy at times generating as much as 118% of an entire nation’s electricity demand. This year, wind energy also became cheaper to produce per gigawatt than nuclear power, pointing the way to a greener future. Similarly, solar energy has been radically updated over the course of 2017, with inventions like solar-powered roads, solar batteries and solar-tracking mounts making it onto the market in the past year, providing people and companies with fast, easy solutions to renewable power. Countries think so, too. Despite the apparent lobbying in favour of the coal industry in America, thanks to its President, Donald Trump, the solar industry currently employs more people in the USA than the entire coal industry. This trend is becoming more apparent worldwide, too, with markets opening up in Asia. China has recently pledged to invest £292bn into developing its renewable power by 2020, with a particular focus on wind power, whilst India recently completed construction of the world’s largest solar farm. Worldwide, it seems that green energy is taking off, and shows no signs of stopping. This rise in popularity is partly due to the more advanced technologies that are making it easier and cheaper than ever to produce low-carbon energy.
Recent developments have made it possible to create large-scale batteries that are capable of storing more energy than ever before, thus paving the way for large-scale wind- or solar-farm projects; similarly, developments in lithium-ion batteries are bringing a new generation of more energy-efficient electric cars to the road. For those looking to work in renewables, expect the industry to blossom over the next year, with more breakthroughs in battery power pushing the limits of what we can do and of the way in which we create and store power. One thing’s clear: the future is green.
Modern web browsers employ a suite of performance optimization techniques to improve user experiences. The preconnect hint is one such optimization: it allows browsers to discover critical hostnames and proactively establish connections to them for serving requests in the near future. In this article, I discuss some characteristics of connections established via preconnect hints as observed by analyzing several large-scale datasets collected from Akamai’s infrastructure and some in-lab experimentation. The key findings from this work are:
- When browsers establish a pre-connection, the first HTTP request on the connection is often sent a few hundred milliseconds after the connection is established, because a request may not be available when the preconnection happens, so the browser must spend time parsing the HTML and other resources to discover a request that could be sent on the connection.
- If the time-gap between when the connection is established and when the first request is sent is larger than ten seconds, the browser closes the connection and thus defeats the purpose of sending a preconnect hint. Developers must ensure that preconnect hints are used within the first ten seconds.
- Occasionally, the pre-connections may never be used to send HTTP requests. In such situations, they impose needless CPU load on the server infrastructure.
Modern web pages utilize dozens of hostnames to download hundreds of resources. For each of these resources, the browser performs a quick lookup into its TCP cache to check whether a connection to the associated hostname already exists, and also whether the connection is available for use. If the TCP connection is not available, the browser performs a lookup into its DNS cache to check if a DNS entry exists for the hostname in question. If both DNS and TCP entries are not available in the cache, the browser performs a DNS lookup and establishes a new TCP connection, followed by a TLS handshake wherever required.
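The per-resource lookup order described above can be sketched as a small decision function. This is just an illustrative model (the cache structures and return strings are invented), not how any browser actually implements its connection pools:

```python
def resolve_connection(hostname, tcp_cache, dns_cache):
    """Illustrative model of the browser's per-resource lookup order.

    tcp_cache maps hostname -> True when an idle, reusable connection
    exists; dns_cache maps hostname -> a resolved address. Both are
    hypothetical simplifications of real browser state.
    """
    if tcp_cache.get(hostname):
        # Fast path: an established connection can serve the request.
        return "reuse connection"
    if hostname in dns_cache:
        # DNS is cached, but TCP (and TLS, if needed) must be set up.
        return "new connection, cached DNS"
    # Slowest path: DNS lookup, then TCP handshake, then TLS if required.
    return "dns lookup + new connection"
```

For example, resolve_connection("img.example.com", {}, {}) takes the slowest path, which is exactly the work a preconnect hint tries to move off the critical path.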
When DNS entries and connections are not already available, the page load time could inflate, especially if they are needed to load a resource that lies on the web page critical path. To keep these preliminary tasks from happening on the web page critical path, many web developers make use of preconnect hints that let browsers perform a DNS lookup and establish a TCP/TLS session with the host as soon as the hint is available. A good web development practice sends preconnect hints either in the HTTP response headers of the requested basepage HTML, such as

HTTP/1.1 200 OK
Link: <https://www.foundry.systems>; rel=preconnect

or via <link rel="preconnect" href="https://www.foundry.systems" /> tags embedded in the HTML. In the above example, as soon as the hint is available to a browser that supports preconnect hints, the browser will perform a DNS lookup and establish a connection to www.foundry.systems, even if there is no pending HTTP request. Preconnect hints are not the only reason why a web browser would preconnect to hostnames. Google Chrome, for example, has a built-in predictor that learns the structure of web pages navigated by the user and performs a speculative preconnect to various hostnames as soon as the user navigates to a page. For example, if the predictor knows that previous visits of the user to the page https://www.example.com/index.html required resources from img.example.com and css.example.com, the next time the user navigates to the same page, the browser could proactively establish connections to img.example.com and css.example.com before it even discovers resources to be downloaded on these connections. In this blog post, I will discuss some characteristics of connections established either via web developer preconnect hints or web browser speculative preconnect hints. In some cases, proactively established connections are not used by browsers to send any HTTP requests.
This may happen because of any of these four scenarios:
- The predictor suggests opening a connection to a host based on the user’s previous navigation, but the web page has changed and does not require any resource from the proactively connected hostname.
- An HTTP request was canceled and the established connection remains unused.
- A request was ready to be sent and the browser starts to establish a connection for it, but before the connection establishment completes, some other connection to the same host becomes available and the request is transferred to that connection.
- A browser may not remember that a server was HTTP/2-capable and thus opens multiple parallel connections in an HTTP/1.1 fashion, but only uses one of the connections after negotiating HTTP/2.

Unused Preconnects: The Experiment

Given the above scenarios for unused preconnects, I next investigated how Chrome (Version 64) treats such connections when they remain idle for some time. For experimental purposes, I set up three test pages to instruct the browser to preconnect to a host and load a resource on that host after different intervals. In the first test page, https://dev.utkarshgoel.in/preconnect.html, I added a preconnect hint in the HTML <head> tag to connect to an HTTP/2-capable host, www.foundry.systems. Note that this page has nothing else in the HTML. Running a Wireshark instance in the background as I load the page shows that Chrome performed TCP and TLS handshakes with www.foundry.systems. I also ran a capture at chrome://net-internals/#http2 in the background. However, the connection did not get registered as an HTTP/2 connection, and net-internals does not show a SETTINGS frame being sent on the connection. In the second test page, https://dev.utkarshgoel.in/preconnect_with_delayed_request.html, in the <head> tag, I added a preconnect hint for www.foundry.systems along with an external JS that blocks execution of any other JS on the page for five seconds.
Inside the body of the HTML, I added an img tag with an empty src attribute. The HTML then has an inline JS that sets the src attribute of the image to an image hosted on www.foundry.systems. The purpose of this experiment is to load a resource from www.foundry.systems five seconds after the connection has been established. For this experiment, I see in the Wireshark capture that, as in the previous experiment, Chrome establishes a TCP and a TLS session to www.foundry.systems, but this time net-internals registers the connection as an HTTP/2 connection. However, the connection shows up in net-internals five seconds after establishment, at which time the SETTINGS frame is sent. This and the previous experiment show that Chrome sends the HTTP/2 SETTINGS frame (which marks the start of an HTTP/2 connection) only once there is an HTTP request to be sent out on the connection. In the third test page, https://dev.utkarshgoel.in/preconnect_with_delayed_request_12s.html, I cloned the second test page and modified the external JS to block the execution of any other JS for 12 seconds. As in the second experiment, I observed that the connection in net-internals was registered after 12 seconds. However, in the Wireshark capture, I observed two connections being established, instead of just one. As shown in the screenshot below, the two connections were about 12 seconds apart. After loading the test page with different blocking values for the external JS, I found that Chrome discards any connection state where a connection is not used within the first 10 seconds after its establishment. In my experiment, the inline JS loaded the image 12 seconds after the preconnect, so Chrome established a new connection because the 10-second limit had been exceeded.
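The behavior found in the third experiment can be condensed into a toy model. The 10-second threshold is the empirical value observed for Chrome 64 in these tests, not a documented constant, and the function below is purely illustrative:

```python
IDLE_DISCARD_SECONDS = 10  # empirical limit observed for Chrome 64

def connections_established(preconnect_at, first_request_at):
    """Number of connections the browser ends up opening for one host,
    given the time the pre-connection was opened and the time the first
    request became ready (both in seconds from page load)."""
    idle = first_request_at - preconnect_at
    if idle <= IDLE_DISCARD_SECONDS:
        return 1  # the pre-connection is reused for the request
    # The idle pre-connection is discarded; a fresh connection is
    # opened when the request finally appears.
    return 2
```

With the 12-second delay used in the test page, connections_established(0, 12) gives 2, matching the two connections seen in the Wireshark capture.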
Therefore, one recommendation here is to ensure that when preconnect hints are advertised with the goal of eliminating DNS and TCP/TLS handshakes from a web page critical path, the browser must be able to discover a resource that needs that connection within 10 seconds. Another observation I made in this experiment was that even though the server sent a TLS session ticket when the client connected to it the first time, the client did not advertise the session ticket in its clientHello when it connected to the server the second time. To observe this behavior, take a look at the second red box in the screenshot below, highlighting the size of session tickets advertised in the clientHello. The above behavior motivated me to set up my fourth experiment, where in the test page (https://dev.utkarshgoel.in/preconnect_with_repeated_delayed_requests.html) I cloned the third test page and added an external JS and an inline JS in the body of the HTML. The purpose of the second external JS was to block the execution of the second inline JS for an additional 70 seconds, since that is how long I discovered (via net-internals) Chrome takes to terminate the previous HTTP/2 connection. To summarize, the flow of the page load is as follows: HTML load -> Preconnect -> wait 12 seconds -> Reconnect -> load image -> wait 70+ seconds -> Reconnect -> load image. As shown in the above screenshot, running such an experiment results in establishing three connections. In the Wireshark capture, I see that Chrome advertises the session ticket only in the third clientHello (as indicated in the red box). This indicates that session tickets are pulled from the buffer/passed to the upper layer only if an HTTP request was sent on the connection the last time.
Amount of Traffic Generated by Unused Connections

After learning about the above characteristics of the connections opened proactively by Chrome, I was interested to find out how often servers receive connection requests that never get used for serving HTTP requests. This is important because if browsers open too many such connections, they might be putting too much load on the servers. For every TLS session, servers have to perform CPU-intensive public key cryptography, whether requests are served or not. To find an answer to the above question, I studied stats of over 1.7 million TCP connections established to Akamai’s distributed infrastructure for content delivery. I found that up to 6% of the TLS connections are never used for HTTP requests. While there are a few cases where proactively established connections don’t get used, in most cases such connections are used for serving HTTP requests. However, given that preconnects happen early in the page navigation and it might take some time before the browser can discover a request to be sent on the connection, I was also interested to investigate the time-gap between when the connection establishment finishes and when the first HTTP request arrives on the server. This time-gap tells us how long proactively established connections remain idle right after they are established.

Figure 1. Boxplot distributions of time-gaps observed across different hostnames. For figure clarity, I have plotted the graph for only 200 hostnames.

Used Preconnects: The Experiment

For this analysis, I utilized over 500 Akamai edge servers to collect stats of over 7.3 million HTTP requests generated by the Chrome browser over HTTP/2 connections. The x-axis in Figure 1 shows the time-gap distribution observed across 200 hostnames. The y-axis shows the time-gap in milliseconds.
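The two server-side measurements discussed here, the fraction of never-used connections and the per-hostname time-gap summary behind Figure 1, can be reproduced from simple log records. The record shapes below are invented for illustration; the real data came from Akamai edge-server logs:

```python
import statistics

def unused_fraction(connections):
    """connections: iterable of dicts with a 'requests_served' count
    per TLS connection (a hypothetical log-record shape)."""
    conns = list(connections)
    unused = sum(1 for c in conns if c["requests_served"] == 0)
    return unused / len(conns)

def median_gap_by_hostname(records):
    """records: iterable of (hostname, gap_ms) pairs, where gap_ms is
    the time between connection establishment and the first HTTP
    request on that connection."""
    gaps = {}
    for host, gap_ms in records:
        gaps.setdefault(host, []).append(gap_ms)
    return {host: statistics.median(v) for host, v in gaps.items()}
```

A result such as unused_fraction(...) == 0.06 would correspond to the "up to 6% never used" finding above.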
As shown in the figure, on a proactively established connection the first HTTP request could arrive on the server as late as four seconds (in the median case) after the connection establishment. However, for most hostnames, the first request arrives about 50 milliseconds after the end of connection establishment. Such large time-gaps are equivalent to several round trips in wired broadband networks and fewer round trips in fast mobile networks. Additionally, I discovered that this behavior applies only to hostnames associated with subresources embedded in the HTML. Further, since the HTTP/2 protocol does not allow the server to push anything before the client makes its first request on the connection, servers lack the ability to act upon such time-gaps in favor of improving performance. Theoretically, one could leverage the experimental unbound-server-push proposal to push critical resources while the connection is idle. However, as shown in the previous section, Chrome is not reading incoming data on these idle connections, so this technique cannot be used without a change to how Chrome handles network sockets. While there are many minor and major findings from this study, I’ll highlight a few that I think are most important for web developers:
- Since preconnect hints are advertised to remove time-consuming DNS lookups and TCP/TLS handshakes from the web page critical path, when developing web pages we should ensure that such connections are used within the first 10 seconds. Simply put, ensure that the website does not have a JS that may prevent the browser from discovering resources that need those pre-connections. Additionally or alternatively, make sure to compare your website performance with and without preconnect hints to verify that the hints do not hurt performance.
- There is additional CPU load incurred on the servers from unused connections. In this article, I have discussed one approach to reduce this load.
- For connections established to most hostnames associated with subresources in the HTML, the connection remains idle for about 50 milliseconds after it is established. The experimental unbound-server-push proposal submitted to the IETF might be one way to make use of the times during which the connections remain idle.

Utkarsh Goel is an architect in Akamai’s Web Performance business unit who likes to build technologies to improve the current state of web performance. He is also a member of Akamai’s Foundry, the cutting-edge arm for applied R&D that believes in the “fail fast, succeed faster” philosophy and focuses on exploring new technology opportunities to improve all forms of Internet performance. Thanks to Akamai’s Mike Bishop, Moritz Steiner, and Stephen Ludin for brainstorming some research ideas and providing feedback on an early version of this article. Categories: Web Performance
Symbion is the name of a relatively recently discovered animal genus, unique enough to have merited its own phylum. It was originally represented by a single species, pandora, but related species including americanus have since been described. Many commentaries on Symbion refer to the original work by Symbion pandora's co-discoverers Peter Funch and Reinhardt Kristensen, which Funch followed up with a more detailed description of some of S. pandora's bizarre life cycle. The observations made by Funch and Kristensen involved both live specimens under a stereo microscope and fixed specimens in a scanning electron microscope. Funch and Kristensen, being the first to elucidate the morphology of the genus, proposed the following taxonomy for S. pandora: kingdom Animalia, phylum Cycliophora, class Eucycliophora, order Symbiida, family Symbiidae, genus Symbion, species pandora. The genus name refers to the species' symbiotic, commensal habitation on the mouthparts of the lobster, on whose leftovers it feeds. The name of the first-discovered species, "pandora", was chosen to describe the plethora of alloforms borne by the species in its feeding stage. Thus far Symbion is the only genus assigned to the phylum Cycliophora, which is the 36th phylum to be included in the animal kingdom. Funch has suggested that Cycliophora is sufficiently related to several other phyla, particularly Entoprocta, to justify their amalgamation into a new superphylum. Genetic analysis of Symbion appears to indicate a closer relation to Gnathifera. A brief summary of Symbion's life cycle follows. The dominant generation of Symbion is an asexual feeding (or "sessile") stage, during which the organisms cling to their host lobster with an adhesive disc.
In the case of Symbion pandora, the host crustacean is the Norwegian lobster (Nephrops norvegicus); other hosts include the American lobster (Homarus americanus), host to Symbion americanus, and the European lobster (Homarus gammarus). The saclike Symbion reaches its greatest size in this stage, approximately 0.5 mm in length. Accompanied by a regeneration of the feeding stage's inner organs is a series of interior buddings that results in the birth of Symbion larvae, the first of three motile and non-feeding stages. The larvae are released with a new feeding stage already growing inside, enabling Symbion to continue its asexual cycle and to fully populate its host's mouthparts. The sexual cycle of the species begins when the moulting period of the host lobster nears completion. The feeding stage of Symbion, perhaps signaled by hormones from the lobster, now begins to produce two alternate motile stages, dwarf males and females. The mature, sperm-filled dwarf males attach themselves to feeding stages inside which a female and its oocyte are developing. A zygote results in the female, which escapes from the feeding stage only to return and settle back onto the lobster host. While the female dies and then degenerates, leaving only a cyst of cuticle, its embryo differentiates into a chordoid larva. The larva, equipped with cilia for locomotion, hatches and settles onto a new lobster host. There the chordoid larva metamorphoses into another asexual feeding stage, and Symbion's entire life cycle is repeated. - Irish KE, Norse EA (1996) Scant emphasis on marine biodiversity. Conservation Biology 10: 680. - Obst M, Funch P, Giribet G (2005) Hidden diversity and host specificity in cycliophorans: a phylogeographic analysis along the North Atlantic and Mediterranean Sea. Molecular Ecology 14: 4427–4440. - Funch P, Kristensen RM (1995) Cycliophora is a new phylum with affinities to Entoprocta and Ectoprocta. Nature 378: 711-714. 
- Funch P (1996) The chordoid larva of Symbion pandora (Cycliophora) is a modified trochophore. Journal of Morphology 230: 231-263. - Morris SC (1995) A new phylum from the lobster's lips. Nature 378: 661-662.
Bacteria are traditionally considered unicellular organisms. However, increasing experimental evidence indicates that bacteria seldom behave as isolated organisms. Instead, they are members of a community in which the isolated organisms communicate among themselves, thereby manifesting some multi-cellular behaviors. In an article to be published Friday (Oct. 26) in the journal Science, the Hebrew University scientists describe the new communication factor they have discovered that is produced by the intestinal bacteria Escherichia coli. The new factor is secreted by the bacteria and serves as a communication signal between single bacterial cells. The research was carried out by a group headed by Prof. Hanna Engelberg-Kulka of the Department of Molecular Biology at the Hebrew University–Hadassah Medical School. It includes Ph.D. student Ilana Kolodkin-Gal and a previous Ph.D. student, Dr. Ronen Hazan. In addition, the research included Dr. Ariel Gaathon from the Facilities Unit of the Medical School. The communication factor formed by Escherichia coli enables the activation of a built-in “suicide module” which is located on the bacterial chromosome and is responsible for bacterial cell death under stressful conditions. Therefore, the new factor has been designated EDF (Extra-cellular Death Factor). While suicidal cell death is counterproductive for the individual bacterial cell, it becomes effective for the bacterial community as a whole by the simultaneous action of a group of cells that are signaled by EDF. Under stressful conditions in which the EDF is activated, a major sub-population within the bacterial culture dies, allowing the survival of the population as a whole.
Understanding how the EDF functions may provide a lead for a new and more efficient class of antibiotics that specifically trigger bacterial cell death in the intestinal bacterium Escherichia coli and probably in many other bacteria, including those pathogens that also carry the “suicide module.” The discovered communication factor is a novel biological molecule, noted Prof. Engelberg-Kulka. It is a peptide (a very small protein) that is produced by the bacteria. The chemical characterization of the new communication factor was particularly difficult for the researchers for two main reasons: it is present in the bacterial culture in minute amounts, and the factor decomposes under the conditions that are routinely used in standard chemical characterization methods. Therefore, it was necessary to develop a new specific method. The research also identified several bacterial genes that are involved in the generation of the communication factor, said Prof. Engelberg-Kulka. The research on this project was supported by the Israel Science Foundation (ISF), the U.S.-Israel Binational Science Foundation (BSF), and the American National Institutes of Health (NIH). Jerry Barach | The Hebrew University of Jerusalem
Learn by Example: Building and Deploying Real-World Node.js Applications from Absolute Scratch

About This Video
You will learn how to structure your Node/Express applications, create data models, relate data, display views, authenticate users, create helpers and much more.

Project 1 - Vidjot: An application where content creators can register and jot down and manage ideas for future videos. The first project is quite simple, as it is meant to be an introduction where the author explains everything about Express routing, middleware, templates, Mongoose, and so on. We implement Passport and a local strategy where we store emails as usernames and encrypted passwords in our database. We will prepare and deploy this app to Heroku and add a domain name.

Project 2 - Storybooks: A much more sophisticated project: a social network for creating public and private stories. This app uses a Google OAuth 2.0 strategy for authentication. Users can log in and create stories which can be set to public or private. They can also choose whether comments are allowed to be posted. We will create a dashboard for users to manage their stories. We will create helpers for authentication and access control as well as handlebars template helpers. We will prepare and deploy this app to Heroku and add a domain name.
You Can Become A Skilled Python Programmer In Less Than One Week! The Python programming language has long been seen as one of the best ones to use. It has a big library to use, is easy to read, and has all the great features that you are going to need when first learning how to work with coding. It is all there ready for you to use, and you just need to take the first steps! This guidebook is going to help you to take these first steps by showing you exactly how to get started with the Python programming language. Whether you have worked in coding before or you are just looking to get started, this guidebook has all the topics and steps that you need to get your first code written in no time. Inside this guidebook you will learn: - Why should I learn about Python? - The basic parts of the Python code - Working with classes and objects - Working on inheritances - Exception handling - Working with decision control structures - The importance of loops - What file input and output means in this language - The different operators available to make the code stronger - Some practice writing out codes to make fun games all on your own inside Python. Working on the Python language can be one of the most rewarding experiences. There is a lot of power that can be behind these programs but it is simple enough for even a beginner to be able to use all on their own. When you are ready to get started on working with Python and some of your own codes, make sure to check out this guidebook and see just how great it can be to do all of this with Python!
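As a taste of a few topics from that list (classes and objects, inheritance, exception handling), here is a minimal, self-contained sketch; the names and the example itself are invented and not taken from the book:

```python
class Animal:
    """A base class: 'classes and objects' from the topic list."""
    def __init__(self, name):
        self.name = name

    def speak(self):
        raise NotImplementedError("subclasses must implement speak()")

class Dog(Animal):
    """Inheritance: Dog specializes Animal."""
    def speak(self):
        return f"{self.name} says woof"

def describe(animal):
    # Exception handling: recover when a class does not implement speak().
    try:
        return animal.speak()
    except NotImplementedError:
        return f"{animal.name} is silent"
```

describe(Dog("Rex")) returns "Rex says woof", while calling it on a bare Animal falls into the exception handler.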
Dissolution or creation of huge gypsum deposits changed sulfate content of the oceans Scientists have discovered a potential cause of Earth's "icehouse climate" cooling trend of the past 45 million years. It has everything to do with the chemistry of the world's oceans. "Seawater chemistry is characterized by long phases of stability, which are interrupted by short intervals of rapid change," says geoscientist Ulrich Wortmann of the University of Toronto, lead author of a paper reporting the results and published this week in the journal Science. "We've established a new framework that helps us better interpret evolutionary trends and climate change over long periods of time. The study focuses on the past 130 million years, but similar interactions have likely occurred through the past 500 million years." Wortmann and co-author Adina Paytan of the University of California Santa Cruz point to the collision between India and Eurasia approximately 50 million years ago as one example of an interval of rapid change. This collision enhanced dissolution of the most extensive belt of water-soluble gypsum on Earth, stretching from Oman to Pakistan and well into western India. Remnants of the collision are exposed in the Zagros Mountains in western Iran. The dissolution or creation of such massive gypsum deposits changes the sulfate content of the ocean, say the scientists, affecting the amount of sulfate aerosols in the atmosphere and thus climate. "We propose that times of high sulfate concentrations in ocean water correlate with global cooling, just as times of low concentrations correspond with greenhouse [warmer] periods," says Paytan. "When India and Eurasia collided, it caused dissolution of ancient salt deposits, which resulted in drastic changes in seawater chemistry." That may have led to the end of the Eocene epoch--the warmest period of the modern-day Cenozoic era--and the transition from a greenhouse to an icehouse climate. 
"It culminated in the beginning of the rapid expansion of the Antarctic ice sheet," says Paytan. Canada's Natural Sciences and Engineering Research Council supports Wortmann's research and the U.S. National Science Foundation (NSF) supports Paytan's research. "Abrupt changes in seawater composition are a new twist in our understanding of the links among ocean chemistry, plate tectonics, climate and evolution," says Candace Major, program director in NSF's Division of Ocean Sciences. To make the discovery, the researchers combined past seawater sulfur composition data collected by Paytan with Wortmann's recent discovery of the strong link between marine sulfate concentrations and carbon and phosphorus cycling. They found that seawater sulfate reflects huge changes in the accumulation and weathering of gypsum, which is the mineral form of hydrated calcium sulfate. "While it's been known for a long time that gypsum deposits can be formed and destroyed rapidly, the effect of these processes on seawater chemistry has been overlooked," says Wortmann. "The idea represents a paradigm shift in our understanding of how ocean chemistry changes over time, and how these changes are linked with climate." Data used in the research were collected aboard the ocean drillship JOIDES Resolution and through the Integrated Ocean Drilling Program (IODP). IODP is an international research program dedicated to advancing scientific understanding of the Earth through drilling, coring and monitoring the subseafloor. The JOIDES Resolution is a scientific research vessel managed by the U.S. Implementing Organization of IODP. Texas A&M University, Lamont-Doherty Earth Observatory of Columbia University and the Consortium for Ocean Leadership comprise the implementing organization. Two lead agencies support the IODP: the U.S. NSF and Japan's Ministry of Education, Culture, Sports, Science and Technology.
Additional program support comes from the European Consortium for Ocean Research Drilling, the Australia-New Zealand IODP Consortium, India's Ministry of Earth Sciences, the People's Republic of China's Ministry of Science and Technology, and the Korea Institute of Geoscience and Mineral Resources.
<urn:uuid:e5fb748e-37e2-4137-96c6-272a672bac8c>
3.5
844
News Article
Science & Tech.
28.634908
95,582,115
In what's beginning to look like a case of planetary measles, a third red spot has appeared alongside its cousins -- the Great Red Spot and Red Spot Jr. -- in the turbulent Jovian atmosphere. This third red spot, which is a fraction of the size of the two other features, lies to the west of the Great Red Spot in the same latitude band of clouds. The visible-light images were taken on May 9 and 10 with Hubble's Wide Field and Planetary Camera 2. The new red spot was previously a white oval-shaped storm. The change to a red color indicates its swirling storm clouds are rising to heights like the clouds of the Great Red Spot. One possible explanation is that the red storm is so powerful it dredges material from deep beneath Jupiter's cloud tops and lifts it to higher altitudes where solar ultraviolet radiation -- via some unknown chemical reaction -- produces the familiar brick color. Detailed analysis of the visible-light images taken by Hubble's Wide Field Planetary Camera 2 on May 9 and 10, and near-infrared adaptive optics images taken by the W.M. Keck telescope on May 11, is revealing the relative altitudes of the cloud tops of the three red ovals. Because all three oval storms are bright in near-infrared light, they must be towering above the methane in Jupiter's atmosphere, which absorbs the Sun's infrared light and so looks dark in infrared images. Turbulence and storms first observed on Jupiter more than two years ago are still raging, as revealed in the latest pictures. 
The Hubble and Keck images also reveal the change from a rather bland, quiescent band surrounding the Great Red Spot just over a year ago to one of incredible turbulence on both sides of the spot. Red Spot Jr. appeared in spring of 2006. The Great Red Spot has persisted for as long as 200 to 350 years, based on early telescopic observations. If the new red spot and the Great Red Spot continue on their courses, they will encounter each other in August, and the small oval will either be absorbed or repelled from the Great Red Spot. Red Spot Jr., which lies between the two other spots and is at a lower latitude, will pass the Great Red Spot in June. The Hubble and Keck images may support the idea that Jupiter is in the midst of global climate change, as first proposed in 2004 by Phil Marcus, a professor of mechanical engineering at the University of California, Berkeley. The planet's temperatures may be changing by 15 to 20 degrees Fahrenheit. The giant planet is getting warmer near the equator and cooler near the South Pole. Marcus predicted that large changes would start in the southern hemisphere around 2006, causing the jet streams to become unstable and spawn new vortices. The Hubble team members are Imke de Pater, Phil Marcus, Mike Wong and Xylar Asay-Davis of the University of California, Berkeley, and Christopher Go of the Philippines. The Keck team members were de Pater, Wong, and Conor Laver of the University of California, Berkeley, and Al Conrad of the Keck Observatory. The contributions by the amateur network (http://jupos.privat.t-online.de/) are invaluable for this research. For images and more information, visit: http://hubblesite.org/news/2008/23 The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA) and is managed by NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Md. The Space Telescope Science Institute (STScI) conducts Hubble science operations. 
The institute is operated for NASA by the Association of Universities for Research in Astronomy, Inc., Washington, DC. Ray Villard | newswise What happens when we heat the atomic lattice of a magnet all of a sudden? 17.07.2018 | Forschungsverbund Berlin Subaru Telescope helps pinpoint origin of ultra-high energy neutrino 16.07.2018 | National Institutes of Natural Sciences For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. 
It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 17.07.2018 | Information Technology 17.07.2018 | Materials Sciences 17.07.2018 | Power and Electrical Engineering
<urn:uuid:5a1f2f43-78ca-4188-9f45-d1b383879de4>
3.1875
1,480
Content Listing
Science & Tech.
49.077941
95,582,122
Gregg Hallinan of the National University of Ireland, Galway, who is presenting the discovery at the RAS National Astronomy Meeting in Preston on 18th April, said, “Brown dwarfs tend to be seen as a bit boring – the cinders of the galaxy. Our research shows that these objects can be fascinating and dynamic systems, and may be the key to unlocking this long-standing mystery of how pulsars produce radio emissions.” Since the discovery of pulsars forty years ago, astronomers have been trying to understand how the rotating neutron stars produce their flashing radio signals. Although there have been many attempts to describe how they produce the extremely bright radio emissions, the vast magnetic field strengths of pulsars and the relativistic speeds involved make it extremely difficult to model. Brown dwarfs are now the second class of stellar object observed to produce this kind of powerful, amplified (coherent) radio signal at a persistent level. The emissions from the brown dwarfs appear to be very similar to those observed from pulsars, but the whole system is on a much slower and smaller scale, so it is much easier to decipher exactly what is going on. Importantly, the mechanisms for producing the radio emissions in brown dwarfs are well understood, as they are almost identical to the processes that produce radio emissions from planets. Hallinan said, “It looks like brown dwarfs are the missing step between the radio emissions we see generated at Jupiter and those we observe from pulsars”. Jupiter’s volcanic moon, Io, is a source of electrically charged gas that is accelerated by the planet’s magnetic field and causes powerful radio laser, or maser, emissions. The radiation can be so intense that Jupiter frequently outshines the Sun as a source of energy at radio wavelengths. For some time, scientists have thought that there may be similarities between this type of maser emission and pulsars’ lighthouse-like beams of radio waves. 
Observations of the brown dwarf, TVLM 513, using the Very Large Array (VLA) radio telescope, may provide the first direct evidence for that link. The group observed the brown dwarf over a period of 10 hours at two different frequencies. In both cases, a bright flash was observed every 1.96 hours. As yet, the processes controlling the radio flashes from TVLM 513 are still unclear. There is no evidence of a binary system, so interaction of the magnetosphere with a stellar wind from a nearby star seems an unlikely cause, nor is there any sign of an orbiting planet that could produce a scenario like that of Jupiter and Io. However, rapid rotation is also thought to be a source of electron acceleration for a component of Jupiter's maser emission and this may also be the main source of TVLM 513's radio flashes. The group is now planning a large survey of all the known brown dwarfs in the solar neighbourhood to find out how many are radio sources and how many of those are pulsing. If a large fraction of brown dwarfs are found to pulse, it could prove a key method of detection for these elusive objects. Anita Heward | alfa
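As a quick check on the reported periodicity, the number of flashes that a run of this length should catch can be estimated directly. The 1.96-hour flash interval and the 10-hour observing run are taken from the article; the helper function itself is purely illustrative:

```python
def expected_flashes(period_h, run_h):
    """Number of complete flash periods that fit into an observing run."""
    return int(run_h // period_h)

# TVLM 513: a bright flash every 1.96 hours, observed over a 10-hour VLA run
print(expected_flashes(1.96, 10.0))  # → 5
```

Five flashes in ten hours is consistent with a periodic signal being detected at both observing frequencies.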
<urn:uuid:e57f99ef-e9ae-485e-bd8a-71524cb1a6bc>
3.578125
1,290
Content Listing
Science & Tech.
42.824953
95,582,123
More than one billion people worldwide rely on fish as an important source of animal protein, states the United Nations Food and Agriculture Organization. And while fish provide slightly over 7 per cent of animal protein in North America, in Asia they represent about 23 per cent of consumption. A cross section of a zebrafish eye shows the localization of mercury in the outer segments of photoreceptor cells. Reference: Malgorzata Korbas, Barry Lai, Stefan Vogt, Sophie-Charlotte Gleber, Chithra Karunakaran, Ingrid J. Pickering, Patrick H. Krone, and Graham N. George. "Methylmercury Targets Photoreceptor Outer Segments." ACS Chemical Biology (2013). Humans consume low levels of methylmercury by eating fish and seafood. Methylmercury compounds specifically target the central nervous system, and among the many effects of their exposure are visual disturbances, which were previously thought to be solely due to methylmercury-induced damage to the brain visual cortex. However, after combining powerful synchrotron X-rays and methylmercury-poisoned zebrafish larvae, scientists have found that methylmercury may also directly affect vision by accumulating in the retinal photoreceptors, i.e. the cells that respond to light in our eyes. Dr. Gosia Korbas, BioXAS staff scientist at the Canadian Light Source (CLS), says the results of this experiment show quite clearly that methylmercury localizes in the part of the photoreceptor cell called the outer segment, where the visual pigments that absorb light reside. “There are many reports of people affected by methylmercury claiming a constricted field of vision or abnormal colour vision,” said Korbas. “Now we know that one of the reasons for their symptoms may be that methylmercury directly targets photoreceptors in the retina.” Korbas and the team of researchers from the University of Saskatchewan including Profs. 
Graham George, Patrick Krone and Ingrid Pickering conducted their experiments using three X-ray fluorescence imaging beamlines (2-ID-D, 2-ID-E and 20-ID-B) at the Advanced Photon Source, Argonne National Laboratory near Chicago, US, as well as the scanning X-ray transmission beamline (STXM) at the Canadian Light Source in Saskatoon, Canada. After exposing zebrafish larvae to methylmercury chloride in water, the team was able to obtain high-resolution maps of elemental distributions, and pinpoint the localization of mercury in the outer segments of photoreceptor cells in both the retina and pineal gland of zebrafish specimens. The results of the research were published in ACS Chemical Biology under the title “Methylmercury Targets Photoreceptor Outer Segments”. Korbas said zebrafish are an excellent model for investigating the mechanisms of heavy metal toxicity in developing vertebrates. One of the reasons for that is their high degree of correlation with mammals. Recent studies have demonstrated that about 70 per cent of protein-coding human genes have their counterparts in zebrafish, and 84 per cent of genes linked to human diseases can be found in zebrafish. “Researchers are studying the potential effects of low level chronic exposure to methylmercury, which is of global concern due to methylmercury presence in fish, but the message that I want to get across is that such exposures may negatively affect vision. Our study clearly shows that we need more research into the direct effects of methylmercury on the eye,” Korbas concluded. Acknowledgments: This work was supported by the Canadian Institutes of Health Research, the Saskatchewan Health Research Foundation and the Natural Sciences and Engineering Research Council of Canada. About the CLS: The Canadian Light Source is Canada’s national centre for synchrotron research and a global centre of excellence in synchrotron science and its applications. 
Located on the University of Saskatchewan campus in Saskatoon, the CLS has hosted 1,700 researchers from academic institutions, government, and industry from 10 provinces and territories; delivered over 26,000 experimental shifts; received over 6,600 user visits; and provided a scientific service critical in over 1,000 scientific publications, since beginning operations in 2005. CLS operations are funded by Canada Foundation for Innovation, Natural Sciences and Engineering Research Council, Western Economic Diversification Canada, National Research Council of Canada, Canadian Institutes of Health Research, the Government of Saskatchewan and the University of Saskatchewan. Synchrotrons work by accelerating electrons in a tube at nearly the speed of light using powerful magnets and radio frequency waves. By manipulating the electrons, scientists can select different forms of very bright light using a spectrum of X-ray, infrared, and ultraviolet light to conduct experiments. Synchrotrons are used to probe the structure of matter and analyze a host of physical, chemical, geological and biological processes. Information obtained by scientists can be used to help design new drugs, examine the structure of surfaces in order to develop more effective motor oils, build more powerful computer chips, develop new materials for safer medical implants, and help clean-up mining wastes, to name a few applications. Mark Ferguson | EurekAlert!
<urn:uuid:8989d01a-77a7-4ab1-9fc1-54e661725bd0>
3.375
1,764
Content Listing
Science & Tech.
31.548537
95,582,179
How Big is the Universe? The observable Universe is greater than 12 x 10^9 light years in radius. Converting that radius to kilometres: (12 x 10^9 yr) x (365 day/yr) x (24 hr/day) x (60 min/hr) x (60 s/min) x (3 x 10^8 m/s) x (1 km / 10^3 m) ≈ 1.1 x 10^23 km. Photo of Universe not available; COBE image of the Milky Way (courtesy of Ned Wright) -- calculate that distance in light years!
Inside each avian erythrocyte is a nucleus which contains molecules of DNA.
This is DEET. Molecular formula: C12H17NO. Synonyms: Detamide; Metadelphene; MGK; Off; Diethyltoluamide; Deet; Delphene; N,N-diethyl-m-toluamide; diethyl-m-toluamide; 3-methyl-N,N-diethylbenzamide; m-toluic acid diethylamide; AI 3-22542.
Atomic orbitals: regions in space where the electrons hang out. The mass of an electron is 9.1 x 10^-31 kg. It is not possible to know precisely where an electron is now and where it is going at the same time.
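The light-year-to-kilometre conversion chain on the slide can be checked numerically. A minimal sketch, using the slide's own figures (a 12 x 10^9 light-year radius, a 365-day year, and c = 3 x 10^8 m/s):

```python
# Radius of the observable Universe, converted from light years to kilometres.
years = 12e9                            # light-travel time in years
seconds = years * 365 * 24 * 60 * 60    # years -> seconds
metres = seconds * 3e8                  # multiply by the speed of light (m/s)
km = metres / 1e3                       # metres -> kilometres
print(f"{km:.2e} km")                   # → 1.14e+23 km
```

Multiplying out the chain gives roughly 1.1 x 10^23 km.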
<urn:uuid:3b04a69d-d642-4b30-98a8-627e7e87f4e9>
3.375
342
Knowledge Article
Science & Tech.
86.061575
95,582,181
Motion direction is computed by similar types of neural circuits At first glance, the eyes of mammals and those of insects do not seem to have much in common. However, a comparison of the neural circuits for detecting motion shows surprising parallels between flies and mice. Scientists have learned a lot about the visual perception of both animals in recent years. Alexander Borst at the Max Planck Institute of Neurobiology in Martinsried and Moritz Helmstaedter at the Max Planck Institute for Brain Research, both of whom have made a significant contribution to the current level of knowledge in the case of flies and mice, have now demonstrated the similarities. A fly's eye consists of more than a thousand individual facets and can cover most of the head's surface. This provides flies with a panoramic view, as it were. Human eyes, in comparison, are small but mobile. Both can see colour, but the colour spectra are different. The fly brain can also detect more than 80 images per second separately from one another, while our limit is 24 images per second. Insects therefore see rapid movements much better and more precisely than we humans. Despite all these differences, "seeing" is an essential sense for flies and humans – and their eyes face a similar problem: individual photoreceptors only "see" individual pixels in an overall image. The brain therefore needs to compute shapes, distances or movements from these individual items of information. But how? In their comparison of the visual systems of flies and mice, Alexander Borst and Moritz Helmstaedter have now been able to demonstrate that a few very efficient basic rules apparently determine these computations. 
"Insects and mammals are separated by about 550 million years of development and yet there are surprising parallels in how their brains process visual motion information," explains Alexander Borst, who, together with his department at the Max Planck Institute of Neurobiology, analysed the neural circuits underlying motion vision in the fly brain. "It looks as if we have a very robust solution for computing the direction of motion with neurons," adds his colleague Moritz Helmstaedter, who studies the wiring diagram of the mouse brain at the Max Planck Institute for Brain Research. In their article in the journal Nature Neuroscience, the two researchers have now elaborated the system parallels. Splitting, processing and merging Photoreceptors respond to changes in contrasts – they increase or reduce their activity depending on whether a previously bright point darkens or a dark point brightens. A number of years ago, Alexander Borst and his team demonstrated that photoreceptors in the fly eye pass their information onto two groups of cells: one responds only to a dark-light change ("light on"), the other group recognizes only light-dark changes ("light off"). Scientists have known about a similar separation of contrast changes in the form of ON and OFF bipolar cells in the mammalian retina for more than 40 years. This parallel, however, is just the first of several similarities. Following the splitting into ON and OFF channels, the direction of motion is computed from the information delivered by the photoreceptors in each channel separately. Once the direction of motion is computed, the information from ON- and OFF-channels is fused and represented as vector coordinates along four orthogonal directions: rightward, leftward, downward and upward. Proven circuit as basis "This is where the parallels end," says Moritz Helmstaedter. In the mouse brain, the fusion of ON and OFF channels still takes place very early in the wiring. 
The motion information originates from a relatively small area in the field of vision and is now linked with other information and sent to higher brain regions. In the fly, on the other hand, the motion direction computed in this way has already reached the neurons that affect behaviour: the motion information originates from a large area in the field of vision and, based on this, the neurons can trigger a change of course through the wing muscles, for example. There could be two reasons for the parallels in the processing of movements that have now been shown. One is that the neural circuit already existed in the common ancestor of these very different species. Alternatively, the same circuits could have developed independently of one another in mammals and insects. Regardless of the origin of the parallels, their existence shows that it must be a very robust and proven processing pathway. "We assume that this circuit represents the best-possible computation of motion directions by neurons – requiring the minimum number of cells and entailing maximum energy efficiency," says Alexander Borst, summarizing the results. This is a finding that may be an important basis for the development of artificial systems but also for understanding brain functions.
Contact: Prof. Dr. Alexander Borst, Max Planck Institute of Neurobiology, Martinsried, Phone: +49 89 8578-3251, Fax: +49 89 8578-3252
Contact: Dr. Stefanie Merker, Max Planck Institute of Neurobiology, Martinsried, Phone: +49 89 8578-3514
Publication: Alexander Borst & Moritz Helmstaedter, "Common circuit design in fly and mammalian motion vision", Nature Neuroscience 2015 Aug;18(8):1067-76. doi: 10.1038/nn.4050
Prof. Dr. Alexander Borst | Max Planck Institute of Neurobiology, Martinsried
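The article does not spell out the underlying algorithm, but correlation-type motion detection of the kind described for the fly is classically modeled with a Hassenstein–Reichardt detector: each photoreceptor signal is delayed and multiplied with its neighbour's signal, and the two mirror-symmetric half-detectors are subtracted so that the sign of the output encodes direction. A minimal sketch (the function and test signals are illustrative, not taken from the study):

```python
def reichardt(signal_a, signal_b, delay=1):
    """Correlate each input with a delayed copy of its neighbour;
    the difference of the two half-detectors signals motion direction."""
    out = 0.0
    for t in range(delay, len(signal_a)):
        out += signal_a[t - delay] * signal_b[t] - signal_b[t - delay] * signal_a[t]
    return out

# A brightness edge moving from receptor A toward receptor B
# (B sees the edge one time step after A):
a = [0, 1, 0, 0, 0]
b = [0, 0, 1, 0, 0]
print(reichardt(a, b))  # positive → motion from A toward B
print(reichardt(b, a))  # negative → motion in the opposite direction
```

The antisymmetry of the output under swapping the inputs is what lets the downstream neurons read out direction from a single scalar value.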
<urn:uuid:ca0c8337-ade3-4a0a-8653-3f09d19f32d3>
3.65625
1,677
Content Listing
Science & Tech.
39.17403
95,582,210
I'm wondering if you could help me answer a few questions about a lab I'm doing. We are preparing an α,β-unsaturated ketone via Michael and aldol condensation reactions. The reactants are trans-chalcone and ethyl acetoacetate (in ethanol and NaOH). This creates 6-ethoxycarbonyl-3,5-diphenyl-2-cyclohexenone. 1) A white solid remains in the centrifuge tube after acetone extraction; it fizzes when hydrochloric acid is added, suggesting sodium carbonate was formed. How did it form? Write a balanced equation for its formation. © BrainMass Inc. brainmass.com July 23, 2018, 1:43 pm Thanks for letting me work on your question. Here is what I think: Given the presence of carbon dioxide in ... This solution explains how the reaction of NaOH with carbon dioxide in the air contributes to the formation of the white solid.
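Both observations are consistent with textbook carbonate chemistry (a sketch of the expected answer, not the posted solution): the NaOH solution absorbs CO2 from the air to give sodium carbonate, and the carbonate then releases CO2 again (the fizzing) when acidified:

```latex
2\,\mathrm{NaOH} + \mathrm{CO_2} \longrightarrow \mathrm{Na_2CO_3} + \mathrm{H_2O}

\mathrm{Na_2CO_3} + 2\,\mathrm{HCl} \longrightarrow 2\,\mathrm{NaCl} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow
```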
The descending limb of the loop of Henle has low permeability to ions and urea, while being highly permeable to water. The thin ascending limb is not permeable to water, but it is permeable to ions. The medullary thick ascending limb remains impermeable to water, with sodium (Na+), potassium (K+) and chloride (Cl-) ions being reabsorbed by active transport; K+ is passively transported along its concentration gradient through a K+ leak channel in the apical aspect of the cells, back into the lumen of the ascending limb. This K+ "leak" generates a positive electrochemical potential difference in the lumen. The electrical gradient drives more reabsorption of Na+, as well as other cations such as magnesium (Mg2+) and, importantly, calcium (Ca2+). The loop of Henle is supplied by blood in a series of straight capillaries descending from the cortical efferent arterioles. These capillaries also have a countercurrent exchange mechanism that prevents washout of solutes from the medulla, thereby maintaining the medullary concentration. As water is osmotically driven from the descending limb into the interstitium, it readily enters the vasa recta. The low blood flow through the vasa recta allows time for osmotic equilibration, and can be altered by changing the resistance of the vessels' efferent arterioles.
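The countercurrent multiplication this arrangement produces can be sketched as a toy simulation. This is a minimal illustrative model, not physiological software: the segment count, the 200 mOsm "single effect" limit, and the simple equilibrate-then-shift iteration are textbook-style assumptions.

```python
# Toy countercurrent-multiplier model of the loop of Henle.
N = 10                 # segments along each limb (assumed)
ISO = 300.0            # osmolarity of fluid entering the descending limb, mOsm/L
SINGLE_EFFECT = 200.0  # max transverse gradient the ascending-limb pump sustains

desc = [ISO] * N   # descending limb: water-permeable, equilibrates with interstitium
asc = [ISO] * N    # ascending limb: water-impermeable, solute pumped out
inter = [ISO] * N  # medullary interstitium

for _ in range(300):
    # 1) "Single effect" at each depth: solute is pumped from the ascending
    #    limb into the interstitium until the transverse gradient reaches
    #    SINGLE_EFFECT; the water-permeable descending limb equilibrates
    #    osmotically with the interstitium.
    for i in range(N):
        mean = (desc[i] + asc[i]) / 2.0
        desc[i] = inter[i] = mean + SINGLE_EFFECT / 2.0
        asc[i] = mean - SINGLE_EFFECT / 2.0
    # 2) Shift tubular fluid one segment along the hairpin loop:
    #    fresh isotonic fluid enters at the top of the descending limb,
    #    and fluid rounds the tip into the ascending limb.
    tip = desc[-1]
    desc = [ISO] + desc[:-1]
    asc = asc[1:] + [tip]

# The axial gradient multiplies the modest single effect: interstitial
# osmolarity climbs steeply from cortex (index 0) to papillary tip (index -1).
print(round(inter[0]), round(inter[-1]))
```

Repeating the small transverse "single effect" while fluid flows in opposite directions is what builds the large axial gradient; the shift step is what makes it a multiplier rather than a static equilibrium.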
In the early days of object-oriented technology, before the mid-1990s, there were many competing methodologies for object-oriented software development. Object-oriented software engineering, commonly known by the acronym OOSE, is an object modeling language and methodology developed by Ivar Jacobson in 1992. Together with Rumbaugh's Object Modeling Technique (OMT), OOSE was one of the precursors of the Unified Modeling Language (UML), which unified these competing approaches.
SOUTH AUSTRALIAN BUTTERFLIES WHAT BUTTERFLY IS THAT ? HOW TO IDENTIFY SOUTH AUSTRALIAN BUTTERFLIES Butterfly watching can be like bird watching. All that is required are binoculars, a field guide, and a notebook and pen to record the observed information. There are still large areas within South Australia in which the distribution data of butterflies is unknown. There is plenty of scope for an observer to make a new recording of a butterfly for South Australia, and indeed there is still scope to observe a totally new butterfly. Within the past 10 years, 10 new distributional recordings within South Australia have been made. For the past twenty-five years in Australia, there has been on average a new butterfly described every year. Distribution data for the more rare butterflies is always extremely useful for future conservation work. More detailed information is still required on the biology and food hosts of South Australian butterflies, in particular the life histories of the unique lycaenid butterflies Bronze Ant-blue (Acrodipsas brisbanensis), Eastern Large Bronze Azure (Ogyris halmaturia) and the Mallee Bronze Azure (Ogyris subterrestris). A comprehensive database of information on South Australian butterflies is currently being put together by volunteers, who are eager to receive new information. This information can also be published in the newsletter of Butterfly Conservation South Australia. For information on how you can further take an interest in butterflies, refer to Butterfly Conservation South Australia. It's Hesperilla chrysotricha. It's what?! Has it got a common name? I call it the Chrysotricha Sedge-skipper, but others call it the Golden-haired Sedge-skipper, and in Tasmania people know it as the Shoreline or Plebeia Skipper. But why so many names? There are many common names for butterflies because people in different places have given them their own local names.
Scientists need to agree on one name so that no one is confused about which butterfly is being written or spoken about. Scientists classify animals into groups. The groups of similar animals are subdivided into smaller groups until a genus and species name is given. Animals in the same genus are very similar. Animals in the same species can interbreed to produce fertile offspring (although insects, being invertebrates, do not always follow the same interbreeding rules as vertebrates). Scientists always write the scientific name in italics. Within the Kingdom ANIMALIA, butterflies are classified in the following manner: - Order LEPIDOPTERA (from Greek lepido, scale, and ptera, wings) There are about 250,000 moth species, and about 17,000 butterfly species in the world. About 400 butterflies live in Australia and of these, 78 species are presently known from South Australia. The South Australian butterflies are grouped into five families, within two superfamilies. Family PAPILIONIDAE, 3 species belonging to the swallowtails. Family PIERIDAE, 9 species commonly called whites or yellows. Family NYMPHALIDAE, 17 species commonly called browns (satyrs), danaids, nymphs, etc. Family LYCAENIDAE, 28 species commonly called blues, coppers, hairstreaks and metalmarks. Other smaller groupings commonly used in classification, in descending order of rank, are Subfamilies, Supertribes, Tribes, Subtribes, Genus, Subgenus, Species and finally Subspecies. Each butterfly has its own scientific name. The same scientific name cannot be used again, except within another kingdom. (The classification of animals into groups is only an arbitrary system, based on the collective similarities of morphological attributes of the different animals. A final grouping result depends largely on the scientist(s) who study that particular section of animals or, in our case, butterflies.
Hence, the morphological attributes used by scientists to classify one group of butterflies may differ from those used by another group of scientists to classify another group of butterflies (or, in some cases, even the same group of butterflies, often resulting in a different classification). Unfortunately, there is presently no rigorous methodology in place such that each of the different classification groups can be defined by a given set of consistent morphological attributes or parameters (i.e. five scientists using the same information will give five different results). The use of DNA tissue analysis in recent years has provided additional parameters to aid in further defining these classification groups in butterflies, and has shown many of these earlier arbitrary groupings to be inadequate.) Want more information? Try the Natural History Museum Glossary. In the case of our butterfly above, the full scientific name (for written text purposes) for the butterfly that occurs in South Australia is Hesperilla chrysotricha (Meyrick and Lower, 1902) cyclospila (Meyrick and Lower, 1902). The first italic word denotes the genus (starting with a capital letter), the second italic word (in combination with the genus name) denotes its species name, while the third italic word (in combination with the preceding genus and species names) is its subspecific name. The latter describes the smallest subgroup, used to define geographically isolated, distinctly recognizable morphological populations within a species' distributional range. The use of three italic names is called a trinomial naming system. The scientific names used are invariably derived from the classical Latin or Greek languages, but sometimes proper nouns from the modern languages are also used, since there are only a limited number of words available in the former languages. The species Hesperilla chrysotricha is present in southern mainland Australia, and in Tasmania.
The subspecies cyclospila occurs in South Australia, Victoria and Tasmania. Another subspecies, chrysotricha, occurs in Western Australia. Its morphology is different from that of the eastern states butterfly. The subspecies chrysotricha is called the nominotypical race because it was from Western Australia that the species was first described by the scientists. The nominotypical race always uses the species name as its subspecific name. The two subspecies populations are geographically separated by the semi-arid Nullarbor Plain, and therefore there has not been any interbreeding between the two populations, so they have remained morphologically distinct. In special circumstances (for reference purposes), both the species name and the subspecies name are followed by the name of the person(s) who first described the butterfly (gave it a scientific species or subspecies name) in the scientific literature, and the year of its publication. Meyrick and Lower were two gentlemen who gave it the chrysotricha species name (in 1902). They also gave the South Australian butterfly the subspecific cyclospila name (again in 1902). The scientists' personal names are placed in brackets because they originally described the butterfly under the genus name Telesto. If they had described it under the genus Hesperilla, then their personal names would not have been placed in brackets. (If there are no recognised subspecies, then the scientific name is treated as a binomial). It is obvious that the full written text name of the above butterfly is inordinately long, so it is usual to leave out the scientist's name(s) after the italic species name, and just recognise the scientist(s) who gave the butterfly its subspecies name. Unless the text is a full-blown scientific treatise, it is also normal to leave out the publication date, to shorten the name even further.
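The trinomial structure described above lends itself to a tiny parser. A minimal sketch, assuming the authority names and dates have already been stripped (`parse_trinomial` is a hypothetical helper for illustration, not part of any real taxonomy library):

```python
def parse_trinomial(name):
    """Split a written scientific name into genus, species and subspecies.

    Expects two or three space-separated words, e.g.
    "Hesperilla chrysotricha cyclospila"; two words are treated as a
    binomial with no recognised subspecies.
    """
    parts = name.split()
    if len(parts) not in (2, 3):
        raise ValueError("expected a binomial or trinomial name")
    return {
        "genus": parts[0],        # capitalised, e.g. Hesperilla
        "species": parts[1],      # e.g. chrysotricha
        "subspecies": parts[2] if len(parts) == 3 else None,
    }

print(parse_trinomial("Hesperilla chrysotricha cyclospila"))
print(parse_trinomial("Hesperilla chrysotricha"))
```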
When writing about the butterfly in general terms, and especially after the full name of the butterfly has been written down earlier (or elsewhere) in the text, it is also normal to then completely leave off the scientist's name(s). The latter conventions have been followed on this internet site. Sometimes, butterflies are given form or variant scientific names, e.g. Catopsilia pomona pomona (Fabricius) form catilla (Cramer). This occurs when the adult butterfly (either male or female, or both) has a distinctly different, but consistent, morphological pattern within the overall species population, which is not due to geographical isolation. This morphology is usually genetically controlled (recessive, dominant or sex linked), but sometimes it is phenotypically (environmentally) controlled. So, there you have it. Confused? Never mind, so am I, but it will all fall into place after a while once you have viewed a few names in the Checklist of South Australian Butterflies. IS IT A SKIPPER OR A "TRUE" BUTTERFLY ? Skippers are not closely related to the 'true butterflies' as they evolved from moths independently of the true butterflies.
However, because the two groups look similar, and as they both generally fly together during the day, they are usually lumped together as butterfly fauna.
- Colour: skippers are usually a sombre brown and yellow; true butterflies are usually brightly coloured.
- Resting pose: most skippers at rest hold the forewing leading edge inclined to the body at a shallow angle (although some skippers, called "flats", rest with both wings held flat); most true butterflies hold it at a high angle (although some prefer to rest with the wings held flat).
- Antennae: widely spaced on the head in skippers; closely spaced in true butterflies.
- Antennal clubs: usually hooked in skippers; not hooked in true butterflies.
- Flight: rapid and jerky in skippers (hence the general common name of skipper); flappy or gliding in true butterflies (although there are a few oriental swallowtails that fly like skippers).
- At rest, skippers often depress the hindwings independently of the upright forewings; true butterflies do not.
- Wing venation: in skippers the peripheral wing veins are not stalked, i.e. they all emanate from the cell or wing base; in true butterflies peripheral veins can be stalked, i.e. they emanate from other peripheral wing veins.
- Foreleg: an epiphysis is present on the foreleg of skippers; in true butterflies it is present only in the Papilionidae.
IS IT A BUTTERFLY OR A MOTH ?
- Butterflies usually fly during the day; moths usually fly during the night.
- Butterfly antennae are usually clubbed on the ends; moth antennae are usually either thread-like or comb-like.
- In butterflies the forewing and hindwing are not coupled by a frenulum (except for one Australian 'living fossil' butterfly called the Regent Skipper, Euschemon rafflesia); in moths the forewings and hindwings are coupled by various means, often by a frenulum, to make one continuous surface.
- The butterfly pupa is usually not in a silken cocoon; the moth pupa is usually enclosed in a silken cocoon.
- The butterfly usually rests with the wings folded together and upright above the body; the moth usually rests with the wings folded back along the body in the shape of a long pitched tent or roof.
Author: R. GRUND, © copyright 1999, all rights reserved. Last update 12 July 2011.
Pfiesteria is a genus of heterotrophic dinoflagellates that has been associated with harmful algal blooms and fish kills. Pfiesteria complex organisms (PCOs) were claimed to be responsible for large fish kills in the 1980s and 1990s on the coast of North Carolina and in tributaries of the Chesapeake Bay. In reaction to the toxic outbreaks, six states along the US east coast have initiated a monitoring program to allow for rapid response in the case of new outbreaks and to better understand the factors involved in Pfiesteria toxicity and outbreaks. New molecular detection methods have revealed that Pfiesteria has a worldwide distribution. Discovery and naming Pfiesteria was discovered in 1988 by North Carolina State University researchers JoAnn Burkholder and Ed Noga. The genus was named after Lois Ann Pfiester (1936–1992), a biologist who did much of the early research on dinoflagellates. There are two described species: Pfiesteria piscicida (from Latin piscis, fish; -cida, killer), which has a complex life cycle, and Pfiesteria shumwayae, also with a complex life cycle. The type locality of Pfiesteria piscicida is the Pamlico River Estuary, North Carolina, U.S.A. Early research resulted in the hypothesis that Pfiesteria is a predatory dinoflagellate that acts as an ambush predator, utilizing a "hit and run" feeding strategy. Release of a toxin paralyzes the respiratory systems of susceptible fish, such as menhaden, causing death by suffocation. Pfiesteria then consumes the tissue sloughed off its dead prey. - Life cycle: Early research suggested a complex life cycle for Pfiesteria piscicida, but this has become controversial over the past few years due to conflicting research results. Especially contested is the question of whether toxic amoeboid forms exist. - Toxicity to fish: The hypothesis of Pfiesteria killing fish by releasing a toxin into the water has been questioned, as no toxin could be isolated and no toxicity was observed in some experiments.
Toxicity appears to depend on the strains and assays used. The lesions observed on fish presumed killed by Pfiesteria have been attributed to water molds by some researchers. However, it has also been established that Pfiesteria shumwayae kills fish by feeding on their skin through myzocytosis. In early 2007, a highly unstable toxin produced by the toxic form of Pfiesteria piscicida was identified. - Human illness: The effects of PCOs on humans have been questioned, leading to the "Pfiesteria hysteria hypothesis." A critical review of this hypothesis in the late 1990s concluded that Pfiesteria-related illness was unlikely to be caused by mass hysteria. Concluding that there was no evidence to support the existence of Pfiesteria-associated human illness, the National Institutes of Health discontinued funding for research into the effects of Pfiesteria toxin on humans shortly after a CDC-sponsored Pfiesteria conference in 2000. A subsequent evaluation, however, concluded that PCOs can cause human illness. The controversy about the risk of Pfiesteria exposure to human health is still ongoing.
Lately, new technologies have joined the road to the eventual colonization of Mars. The most recent elements are a pair of technological advances that will help humans settle on the Red Planet, or at least try to do so. The first one is an animated map of Mars' surface created by ecologist Wieger Wamelink at Wageningen University. The map helps show factors that will be important for the location of a possible human colony. The Two Projects that Helped Make the Map of Mars Wageningen University and its "Food in Mars and Moon" project have been studying the soil of Mars since 2013. In doing so, scientists were eventually able to grow a variety of crops necessary for survival, including tomatoes, carrots, and peas, among others. This study helped provide some of the knowledge and experience necessary to put together a map of Mars, one that includes ratings of different minerals in the soil, climate, terrain, atmosphere, radiation, and nearby ice deposits. All of these elements must be suitable for crops to grow. Interestingly, areas rated well for a possible human colony are places where Mars rovers have often landed before. The second technological advancement is a set of small, insect-sized mini-drones developed by NASA. These look like robot bees with cicada wings. They could be used for exploring the surface of the Red Planet and gathering samples. The tiny robot bees are aerodynamic and use a tiny fraction of the energy a rover does for exploration, mainly because of how their wings operate in Mars' low gravity and thin atmosphere. The "Marsbees," as they have been dubbed, are outfitted to gather samples of air, searching for methane, a sign of life, on Mars. The exploration of Mars seems imminent, with some estimates projecting a human presence on the Red Planet as soon as the mid-to-late 2020s.
Scientists and projects from all over the world are assisting in overcoming the many hurdles that might come with exploring and living on a planet so far from Earth.
Bees brighter than we knew, study finds / They pass cognitive tests usually given apes, people Bees are famously busy -- but they're also pretty brainy. Our pollen-hunting friends possess "higher cognitive functions," judging by cunning experiments in which the creatures learned to compare and distinguish different colors and patterns, according to today's issue of Nature. In what an outside expert praises as "an exciting discovery," the French researcher Martin Giurfa and four colleagues showed that honeybees -- that's Apis mellifera to bee fanciers -- excel at cognitive tests normally performed by lab primates and human volunteers. In the current film "O Brother, Where Art Thou?" George Clooney announces to his little band of waifs that he should lead them because "I'm the one with the capability for abstract thought." As the Nature article shows, bees also can engage in abstract thought. The creatures can "master abstract inter-relationships," specifically the cognitive concepts of "sameness" and "difference," Giurfa and his team report. Hence, "higher cognitive functions are not a privilege of vertebrates," that is, creatures with backbones and much more complex nervous systems. To demonstrate this, Giurfa and his team exposed bees to a simple Y-shaped maze. The entrance to the maze was marked with a particular symbol -- say, the color yellow. A bee flying through the entrance encountered a branching pathway. One branch was marked with the color yellow, another with the color blue. Bees that pursued the yellow-marked path discovered at its end a vial rich in sugar. Bees that took the blue path got no sugar. Normally, bees would have been just as likely to fly one way as another. But via Giurfa's experiment, the bees learned that sugar lay at the end of the route marked with the same symbol as that marking the outside entrance. In other words, "same" equals "sugar." 
The bees demonstrated an ability to recognize "sameness" and "difference" -- fundamental skills on any test of cognitive abilities. In a second experiment, the bees showed they could apply the concepts of "sameness" and "difference" beyond what they had learned in the first experiment. In subsequent experiments, the opening to the maze was marked by a different symbol -- such as vertical dark lines. In that case, on entering the maze the bees re-encountered the two pathways, which this time were marked not with colors but, rather, with lines -- vertical lines on one path, horizontal lines on the other. Had the bees remembered the lesson of the first experiment, namely that "same" equals "sugar"? They had. In the second experiment, more than 70 percent of the bees promptly flew down the path marked by vertical dark lines, the same symbol as that above the entrance. Judging by the experiments, bees' capacity for abstract thought is so impressive that Giurfa, who works at both Laboratoire d'Ethologie et Cognition Animale in Toulouse, France, and Institut fur Biologie in Berlin, bristled when a Chronicle reporter characterized bee cognition as "low-degree." "I disagree with your characterization of this being a 'low-degree' of intelligence," Giurfa replied by e-mail. "In fact, it would be the opposite! "(In the past) many researchers thought that this kind of learning -- learning of an abstract rule, which is independent of the stimuli used -- can only be possible in primates and human beings. Here (in this experiment) we show that this is not true. Abstract rules can also be mastered by the mini brain of a honeybee." "It is an exciting discovery," said a leading bee authority, Professor Michael S. Engel, curator of the division of entomology at the University of Kansas. 
"Early in the last century, (zoologist Karl) von Frisch shattered our concept of insect cognitive capacities by demonstrating that honeybees communicated by an abstract language -- that is, via the famous 'waggle dance. ' This eventually won him the Nobel Prize. "The findings by Giurfa and colleagues further reveal the cognitive level of bees and at the same time suggest that seemingly complex behaviors may have a relatively simplistic neural (nervous system) architecture," Engel added. In other words, if bees -- with their relatively simple nervous systems -- can be so smart, then human intelligence might eventually be explained more easily than previously assumed.
<urn:uuid:637e3199-6f97-471c-98b6-13dfc3c6a92a>
3.453125
934
News Article
Science & Tech.
38.5756
95,582,266
Bacteria are the simplest living things on our planet and were the first form of life to exist. They have persisted throughout history and remain the most abundant form of life today. Although there are many types of bacteria, all share certain basic characteristics, and here we will define some of the key ones. Generally, bacteria are prokaryotic unicellular organisms, meaning each is formed by a single cell that has no nucleus. They typically measure about a micrometre, although truly giant bacteria measuring about a millimetre have been found. Bacteria can be spherical (cocci), comma-shaped (vibrios), spiral (spirilla) or rod-shaped (bacilli), among other forms; the last of these are the most common. One way to classify bacteria is by their mode of nutrition. On this basis, a bacterium is either autotrophic (produces its own food) or heterotrophic (does not produce its own food). Many autotrophic bacteria produce organic matter using the energy of sunlight, which makes them photosynthetic autotrophs, like plants. Others obtain the energy they need to survive from chemical reactions involving inorganic substances taken from the environment; these are called chemosynthetic autotrophs. Some bacteria require oxygen to survive; others can live without any contact with air, and in certain cases die if exposed to oxygen. The latter are called anaerobes, and instead of using air they carry out a process known as anaerobic respiration. There are also other ways to classify bacteria, for example according to their growth and division patterns, as in the chains formed by streptococci. All bacteria multiply by division, the simplest form of reproduction observed in nature, which explains the huge number of existing bacteria. Bacteria usually have a cell wall and one or more flagella, which they use to get around. This motility is a major benefit, as they can move to more favourable areas for growth, where conditions or the availability of food are better. On our planet, in our workplaces, in our homes, in our beds and even within us, there is an unimaginable number of bacteria in many different shapes, sizes, colours, behaviours and characteristics. The differences between these features allow bacteria to survive in almost all sorts of places, from the simplest to the most extreme. They are the true dominant species on the planet, and their characteristics make them amazing. Take a look at this video that shows how things would look if we could see bacteria with the naked eye. The study of all these characteristics of bacteria is the province of bacteriology, one of the areas of microbiology.
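The nutritional scheme just described maps naturally onto a small decision function. A minimal sketch for illustration only; the `classify_bacterium` helper and its argument names are hypothetical, not a standard microbiology API:

```python
def classify_bacterium(makes_own_food, energy_source):
    """Classify a bacterium by nutrition, following the scheme above.

    energy_source: "light", "inorganic" or "organic" (assumed labels).
    """
    if not makes_own_food:
        return "heterotroph"
    if energy_source == "light":
        return "photosynthetic autotroph"   # like plants
    if energy_source == "inorganic":
        return "chemosynthetic autotroph"   # energy from inorganic reactions
    raise ValueError("autotrophs draw energy from light or inorganic chemistry")

print(classify_bacterium(True, "light"))
print(classify_bacterium(True, "inorganic"))
print(classify_bacterium(False, "organic"))
```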
<urn:uuid:9210be98-061f-4345-b4d6-bce8823c9cb5>
3.515625
576
Knowledge Article
Science & Tech.
29.881844
95,582,277
What is Python? Executive Summary Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components together. Python’s simple, easy to learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages program modularity and code reuse. The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms and can be freely distributed. Often, programmers fall in love with Python because of the increased productivity it provides. Since there is no compilation step, the edit-test-debug cycle is incredibly fast. Debugging Python programs is easy: a bug or bad input will never cause a segmentation fault. Instead, when the interpreter discovers an error, it raises an exception. When the program doesn’t catch the exception, the interpreter prints a stack trace. A source level debugger allows inspection of local and global variables, evaluation of arbitrary expressions, setting breakpoints, stepping through the code a line at a time, and so on. The debugger is written in Python itself, testifying to Python’s introspective power. On the other hand, often the quickest way to debug a program is to add a few print statements to the source: the fast edit-test-debug cycle makes this simple approach very effective.
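The claim that bad input raises an exception rather than crashing the interpreter is easy to demonstrate. A minimal sketch (`safe_divide` is an illustrative helper, not a standard-library function):

```python
def safe_divide(a, b):
    """Divide a by b; bad input raises an exception the caller can catch,
    so the program never turns into a hard crash."""
    try:
        return a / b
    except ZeroDivisionError as exc:
        print(f"caught: {exc!r}")  # quick print-style debugging
        return None

print(safe_divide(10, 2))   # 5.0
print(safe_divide(10, 0))   # caught, then None
```

Left uncaught, the same `ZeroDivisionError` would instead surface as the stack trace the summary describes, pinpointing the offending line.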
Researchers from the Fraunhofer Institutes for Mechanics of Materials IWM and for Environmental, Safety and Energy Technology UMSICHT are collaborating on a project entitled "Bionic Manufacturing", which aims to develop products that are lightweight but strong and economic in their use of materials - imitating the perfected structures found in nature. Sudoku puzzles represent a popular exercise recommended to improve logical and creative thinking. A team of scientists from the Catalan Institute of Nanotechnology, ICREA, and Universitat Autonoma de Barcelona investigated the properties of a special kind of sudoku, made by assembling tiny molecules into a 3x3 square array. Scientists at The University of Nottingham are developing microscopic organic medical imaging systems to support a new generation of breakthrough treatments for currently incurable diseases and chronic life-threatening illnesses. Researchers have used sulfur-coated hollow carbon nanofibers and an electrolyte additive to fabricate a superior rechargeable lithium battery cathode. According to Cui, putting silicon nanowire anodes and sulfur-coated carbon cathodes into one battery could be the next generation in battery design. Rice University physicists have created a tiny "electron superhighway" that could one day be useful for building a quantum computer, a new type of computer that will use quantum particles in place of the digital transistors found in today's microchips. Scientists at Stanford and SLAC have found a potential way to harness the amazing properties of topological insulators - materials that conduct electricity only along their surfaces - for use in electronics and other applications. By using lasers to help grow nanotubes on a silicon plate, the researchers have created structures that, when viewed under a scanning electron microscope, resemble a jellyfish in the ocean. This image was recently awarded first prize in the national photo competition "Making Nano Visible." 
Scientists have created a working cloaking device that not only takes advantage of one of nature's most bizarre phenomena, but also boasts unique features: it has an 'on and off' switch and is best used underwater.
A team of astronomers led by Caltech has discovered a giant swirling disk of gas 10 billion light-years away--a galaxy-in-the-making that is actively being fed cool primordial gas tracing back to the Big Bang. Using the Caltech-designed and -built Cosmic Web Imager (CWI) at Palomar Observatory, the researchers were able to image the protogalaxy and found that it is connected to a filament of the intergalactic medium, the cosmic web made of diffuse gas that crisscrosses between galaxies and extends throughout the universe.

Image caption: Using the Cosmic Web Imager (CWI) at Palomar Observatory to study a system 10 billion light-years away, a team of astronomers led by Caltech has unveiled a galaxy in the making being fed cool gas by a filament of the cosmic web. This picture combines a visible-light image with data from CWI. A filament of the cosmic web (outlined here with parallel curved lines) can be seen funneling cold gas onto the protogalaxy (outlined with an ellipse). The CWI is an integral field spectrograph; the researchers used it to create a multiwavelength map showing the velocities with which gas in the system is moving with respect to the center of the system. The red side of the disk is rotating away from us, while the blue side is rotating toward us. Gas within the filament is moving at a constant velocity that matches the blue side of the rotating disk. Credit: Chris Martin/PCWI/Caltech

The finding provides the strongest observational support yet for what is known as the cold-flow model of galaxy formation. That model holds that in the early universe, relatively cool gas funneled down from the cosmic web directly into galaxies, fueling rapid star formation. A paper describing the finding and how CWI made it possible currently appears online and will be published in the August 13 print issue of the journal Nature.
"This is the first smoking-gun evidence for how galaxies form," says Christopher Martin, professor of physics at Caltech, principal investigator on CWI, and lead author of the new paper. "Even as simulations and theoretical work have increasingly stressed the importance of cold flows, observational evidence of their role in galaxy formation has been lacking."

The protogalactic disk the team has identified is about 400,000 light-years across--about four times larger in diameter than our Milky Way. It is situated in a system dominated by two quasars, the closest of which, UM287, is positioned so that its emission is beamed like a flashlight, helping to illuminate the cosmic web filament feeding gas into the spiraling protogalaxy.

Last year, Sebastiano Cantalupo, then of UC Santa Cruz (now of ETH Zurich) and his colleagues published a paper, also in Nature, announcing the discovery of what they thought was a large filament next to UM287. The feature they observed was brighter than it should have been if indeed it was only a filament. It seemed that there must be something else there.

In September 2014, Martin and his colleagues, including Cantalupo, decided to follow up with observations of the system with CWI. As an integral field spectrograph, CWI allowed the team to collect images around UM287 at hundreds of different wavelengths simultaneously, revealing details of the system's composition, mass distribution, and velocity. Martin and his colleagues focused on a range of wavelengths around an emission line in the ultraviolet known as the Lyman-alpha line. That line, a fingerprint of atomic hydrogen gas, is commonly used by astronomers as a tracer of primordial matter. The researchers collected a series of spectral images that combined to form a multiwavelength map of a patch of sky around the two quasars.
This data delineated areas where gas is emitting in the Lyman-alpha line, and indicated the velocities with which this gas is moving with respect to the center of the system. "The images plainly show that there is a rotating disk--you can see that one side is moving closer to us and the other is moving away. And you can also see that there's a filament that extends beyond the disk," Martin says.

Their measurements indicate that the disk is rotating at a rate of about 400 kilometers per second, somewhat faster than the Milky Way's own rate of rotation. "The filament has a more or less constant velocity. It is basically funneling gas into the disk at a fixed rate," says Matt Matuszewski (PhD '12), an instrument scientist in Martin's group and coauthor on the paper. "Once the gas merges with the disk inside the dark-matter halo, it is pulled around by the rotating gas and dark matter in the halo."

Dark matter is an unseen form of matter believed to make up about 27 percent of the universe. Galaxies are thought to form within extended halos of dark matter.

The new observations and measurements provide the first direct confirmation of the so-called cold-flow model of galaxy formation. Hotly debated since 2003, that model stands in contrast to the standard, older view of galaxy formation. The standard model said that when dark-matter halos collapse, they pull a great deal of normal matter in the form of gas along with them, heating it to extremely high temperatures. The gas then cools very slowly, providing a steady but slow supply of cold gas that can form stars in growing galaxies. That model seemed fine until 1996, when Chuck Steidel, Caltech's Lee A. DuBridge Professor of Astronomy, discovered a distant population of galaxies producing stars at a very high rate only two billion years after the Big Bang. The standard model cannot provide the prodigious fuel supply for these rapidly forming galaxies. The cold-flow model provided a potential solution.
Theorists suggested that relatively cool gas, delivered by filaments of the cosmic web, streams directly into protogalaxies. There, it can quickly condense to form stars. Simulations show that as the gas falls in, it contains tremendous amounts of angular momentum, or spin, and forms extended rotating disks. "That's a direct prediction of the cold-flow model, and this is exactly what we see--an extended disk with lots of angular momentum that we can measure," says Martin.

Phil Hopkins, assistant professor of theoretical astrophysics at Caltech, who was not involved in the study, finds the new discovery "very compelling." "As a proof that a protogalaxy connected to the cosmic web exists and that we can detect it, this is really exciting," he says. "Of course, now you want to know a million things about what the gas falling into galaxies is actually doing, so I'm sure there is going to be more follow up." Martin notes that the team has already identified two additional disks that appear to be receiving gas directly from filaments of the cosmic web in the same way.

Additional Caltech authors on the paper, "A giant protogalactic disk linked to the cosmic web," are principal research scientist Patrick Morrissey, research scientist James D. Neill, and instrument scientist Anna Moore from the Caltech Optical Observatories. J. Xavier Prochaska of UC Santa Cruz and former Caltech graduate student Daphne Chang, who is deceased, are also coauthors. The Cosmic Web Imager was funded by grants from the National Science Foundation and Caltech.

Deborah Williams-Hedges | EurekAlert!
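The velocity map at the heart of this result rests on Doppler shifts of the Lyman-alpha line. A minimal sketch of that arithmetic, using the reported ~400 km/s rotation speed; the numbers are illustrative, not taken from the CWI data set, and the sketch ignores the system's cosmological redshift:

```python
# Non-relativistic Doppler shift of the Lyman-alpha line for gas moving
# at the disk's reported rotation speed (~400 km/s).
# Illustrative sketch only; not the paper's actual measurements.

C_KM_S = 299_792.458    # speed of light in km/s
LYA_REST_NM = 121.567   # Lyman-alpha rest wavelength in nm

def doppler_shifted_nm(v_los_km_s: float) -> float:
    """Observed wavelength for a line-of-sight velocity (positive = receding)."""
    return LYA_REST_NM * (1.0 + v_los_km_s / C_KM_S)

red_side = doppler_shifted_nm(+400.0)    # side rotating away from us
blue_side = doppler_shifted_nm(-400.0)   # side rotating toward us
print(f"receding side:    {red_side:.4f} nm")
print(f"approaching side: {blue_side:.4f} nm")
print(f"spread:           {red_side - blue_side:.4f} nm")
```

An integral field spectrograph such as CWI resolves this sub-nanometre spread as a wavelength offset between the two sides of the disk, which is how a rotation map like the one described above is built.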
Sometimes described as “flammable ice,” hydrates consist of water molecules that create cages around “guest molecules” such as methane, which is one carbon atom bonded with four hydrogen atoms, a principal component of natural gas. Vast stores of hydrates exist in subsurface sediments of permafrost and deep oceans and are considered a major potential energy resource. The U.S. Geological Survey estimates that the total amount of carbon captured in methane hydrate, worldwide, is at least twice the total amount held in fossil fuels. The flux of hydrates in the environment may play a role in the global carbon cycle and long-term climate patterns.

NIST researchers spent three years combing the literature on gas hydrates and comparing and evaluating data collected in experiments by numerous sources. The database contains about 12,000 individual data points for about 150 compounds spanning 400 different chemical systems. The data include phase equilibria (proportions of solid, liquid and gas phases in a material at a given temperature and pressure) and thermophysical property information such as thermal conductivity.

The NIST web interface also provides the first electronic access to scientific results from the 2002 Mallik research well in Canada, an international geophysical experiment exploring the properties of naturally occurring hydrates and the feasibility of using them as energy resources.

The new database is meant for use by climate modelers, researchers studying the potential recovery of hydrates for practical applications and the petroleum industry, which has long been interested in preventing unprocessed hydrates from infiltrating natural gas pipelines.

The NIST gas hydrates web site uses technology that acts like a desktop computer application.
Whereas traditional web interfaces do most of their work on a file server, transmitting information slowly to clients over network connections, the new NIST web interface provides fast, customized service by doing much of the data sorting and presentation on client computers. NIST developed the database in association with CODATA (the international Committee on Data for Science and Technology). Funding was provided by the National Energy Technology Laboratory of the U.S. Department of Energy. The database is available at http://gashydrates.nist.gov.
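To make the kind of data described above concrete, here is a hypothetical sketch of how a client might represent and filter phase-equilibrium points. The record layout, field names, and values are my own illustration, not NIST's actual schema or data:

```python
# Hypothetical record for a hydrate phase-equilibrium data point.
# Field names and numbers are illustrative; this is NOT the NIST schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class EquilibriumPoint:
    guest: str            # guest molecule caged by water, e.g. "methane"
    temperature_k: float  # temperature in kelvin
    pressure_mpa: float   # dissociation pressure in megapascals

points = [
    EquilibriumPoint("methane", 273.2, 2.7),
    EquilibriumPoint("methane", 283.2, 7.1),
    EquilibriumPoint("carbon dioxide", 277.2, 2.0),
]

# Client-side filtering, in the spirit of the in-browser interface described:
methane_points = [p for p in points if p.guest == "methane"]
print(len(methane_points), "methane points")
```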
The striking North Face of the Bernese Alps is the result of a steep rise of rocks from the depths following a collision of two tectonic plates. This steep rise gives new insight into the final stage of mountain building and provides important knowledge with regard to active natural hazards and geothermal energy. The results from researchers at the University of Bern and ETH Zürich are published in the journal Scientific Reports.

Mountains often emerge when two tectonic plates converge: according to standard models, the denser oceanic plate subducts beneath the lighter continental plate into the earth’s mantle. But what happens if two continental plates of the same density collide, as was the case in the area of the Central Alps during the collision between Africa and Europe?

Image caption: Vertical cross-section through the Alps 15 million years ago. Schematic drawing © M. Herwegh, Institute for Geology, University of Bern.

Geologists and geophysicists at the University of Bern and ETH Zürich examined this question. They constructed the 3D geometry of deformation structures through several years of surface analysis in the Bernese Alps. With the help of seismic tomography, similar to ultrasound examinations on people, they also gained additional insight into the deep structure of the earth’s crust and beyond, down to depths of 400 km in the earth’s mantle.

Viscous rocks from the depths

A reconstruction based on this data indicated that the European crust’s light, crystalline rocks cannot be subducted to great depths; instead, they are detached from the earth’s mantle in the lower crust and are consequently forced back up to the earth’s surface by buoyancy forces. Steep fault zones are formed here, which push through the earth’s crust and facilitate the steep rise of rocks from the depths.
There are textbook examples of these kinds of fault zones in the Hasli valley, where they appear as scars in the form of morphological incisions impressively cutting through the glacially polished granite landscape. The detachment of the earth’s crust and mantle takes place at a depth of 25-30 kilometres. This process is triggered by the slow sinking and receding of the European plate in the upper mantle towards the north; in specialist terminology, this process is called slab rollback. The high temperatures at these depths make the lower crust’s rocks viscous, so they can subsequently be forced up by buoyant uplift forces. Together with surface erosion, it is this steep rise of the rocks from lower to mid-crustal levels which is responsible for the Bernese Alps’ steep north front today (Titlis – Jungfrau region – Blüemlisalp range). The uplift data, in the range of one millimetre per year, and today’s earthquake activity indicate that the process of uplift from the depths is still in progress. However, erosion on the earth’s surface causes continuous ablation, which is why the Alps do not carry on growing upwards endlessly.

Important for natural hazards and geothermal energy

The analysis of the steep fault zones is not just of scientific interest, though. The partly still seismically active faults are responsible for the rocks weathering more intensively at the surface, and therefore for landslides and debris flows, for example in the Hasli valley in the extremely steep areas of the Spreitlaui or Rotlaui. The serious debris flows in the Guttannen area are based, among other things, on this structural preconditioning of the host rocks. The leakage of warm hydrothermal water, which is important to explore for geothermal energy and the 2050 energy policy, can be traced directly back to the brittle fracturing of the upper earth’s crust and the seeping in of cold surface waters.
The water is heated up in the depths and arrives at the surface again through the steep fault zones – for example, in the Grimsel region. In this sense, the new findings lead to a deeper understanding of surface processes which influence our infrastructures, for example the transit axes (rail, roads) through the Alps.

Marco Herwegh, Alfons Berger, Roland Baumberger, Philip Wehrens & Edi Kissling: «Large-Scale Crustal-Block-Extrusion During Late Alpine Collision», Scientific Reports, 24.03.2017, doi: 10.1038/s41598-017-00440-0

Prof. Marco Herwegh, Institute of Geological Sciences, University of Bern, Tel. +41 31 631 87 61 / 87 64 / email@example.com
Prof. Edi Kissling, Institute for Geophysics at ETH Zürich, Tel. +41 44 633 26 23 / firstname.lastname@example.org

Nathalie Matter | Universität Bern
A feature of the Earth's atmosphere which has long puzzled scientists is replicated in the atmosphere of Saturn, according to new research. Saturn, like Earth, produces electron beams which not only accelerate towards its auroral region but also away from it, say scientists this week in Nature. These 'anti-planetward' electrons puzzle scientists because they do not produce auroral light and do not fit into the current understanding of how auroras, which are usually found around a planet's poles, are created.

Auroras are an effect of light emitted from the upper atmosphere. The aurora on Earth, sometimes known as the Northern Lights, is a bright and colourful glow sometimes seen in the night sky in parts of the northern hemisphere. Auroras are usually generated when atmospheric atoms become excited by the electrons that are accelerating towards the planet. The fact that there are also anti-planetward auroral electrons on Saturn, just as on Earth, suggests that such electrons are a universal feature of all auroras. It was previously unclear whether anti-planetward electrons were a unique feature of the aurora on Earth. Auroras similar to the one on Earth are found on most planets in our Solar System.

Professor Michele Dougherty, of the Space and Atmospheric Physics group at Imperial College London and one of the authors of the research, said: "Auroras are still very mysterious and we don't fully understand what the connection is between the auroras and the electrons accelerating away from the Earth. The fact that we have now observed the same thing happening on Saturn means that we are even more curious about why this is taking place."

Scientists discovered anti-planetward electrons on Saturn using measurements taken by the Magnetospheric Imaging Instrument on the Cassini spacecraft. Magnetic field data from the magnetometer onboard Cassini were used to analyse the electron beams; Professor Dougherty is Principal Investigator for the magnetometer.
Source: Imperial College London
However, while we have plenty of CaSiO3 supply, it had never been seen by humans in its natural environment. The rocks can shed light on the deepest parts of Earth's core. Analysis of the diamond provided rock-hard evidence that material from the oceanic crust-which stretches just six miles below the sea-is recycled into the lower mantle, where the gem was formed. An earlier study, from December 2016, looked at fragments of some of the world's largest precious stones (the flakes are produced when rough stones are cut and polished) and, based on the minerals trapped in them, concluded that they formed at depths corresponding to the deep mantle. While most of us might value diamonds for their beauty, geologists like the minerals for what they can tell us about the goings on deep within our planet. The perovskite diamond was found less than a kilometer below Earth's surface. "Nobody has ever managed to keep this mineral [calcium silicate perovskite] stable at the Earth's surface. The only possible way of preserving this mineral at the Earth's surface is when it's trapped in an unyielding container like a diamond," explained Graham Pearson, a researcher and professor at the University of Alberta and one of the study's authors, in the recently published report. Researchers have estimated that around 93 percent of the Earth's lower mantle is made up of silicate perovskites. Diamonds are capable of forming deep in the Earth's mantle - sometimes as much as 400 miles beneath the crust.
The research team behind the discovery didn't set out to find a new form of ice, but it turns out that at least a few diamonds harbor ice-VII - ice that is around one and a half times as dense as the ice we're used to, with a different crystalline atomic structure as well. The intense and crushing pressure is believed to have formed the diamond, trapping the rare Earth mineral inside it in the process. Prevented from crystallizing under high pressure, the water froze as geological activity eventually moved the diamonds to the surface. "The diamond lattice doesn't relax much, so the volume of the inclusion remains nearly constant whether it's in the Earth's mantle or in your hand," Tschauner said. "It provides fundamental proof of what happens to the fate of oceanic plates as they descend into the depths of the Earth." According to Brandon Specktor of Live Science, the piece of CaSiO3 was visible to the naked eye once the diamond was polished, but an international team of researchers collaborated on analyzing the precious stone with X-ray and spectroscopy tests. The paper on the perovskite diamond appeared online Wednesday in the journal Nature, under the title "CaSiO3 perovskite in diamond indicates the recycling of oceanic crust into the lower mantle". The history of planet Earth has only been partially recorded.
Biggest, smallest, shortest, longest and tallest.
Fill and squirt the same shapes but different sizes.
Use small, medium or large cubes to measure with. Thanks Wendy.
Use length and weight to decide what to load on the ship.
Explore the volume of cube-shaped rocks in Cubiland.
Which object is the heaviest? Many levels.
Click and drag the right number of ingredients to the mixing bowl.
Investigate the volume of rocks that fly out of a volcano.
Match the metric conversions as quickly as you can.
Try this scale reader, which includes weight in grams and kilograms.
Use a measuring scale to add the correct amount to the mixing bowl.
Design your own BAMZOOKI race track using length and mass.
Match the pairs, e.g. 2 litres = 2000 millilitres.
Shoot the 3D shapes with the same volume.
Calculate and measure the ingredients to bake the cake!
Revision of when and why to use certain units of measurement. Great graphics too.
Logic problem solving with weight (mass) as a factor.
Use three different sized pots to solve the problem.
Investigate length, width and height of a cubic metre.
Series of questions about the length, width and height of a cubic metre.
Put explosives under the rocket to figure out what it needs to launch.
Estimate the volume of objects such as a filing cabinet, a DVD player, and a fridge.
Informative explanation about volume and how to measure it.
Explore surface area and volume of rectangular prisms.
Use the rotation and zoom tools to estimate the height, length and width.
Use these rotation and zoom tools to work out volume.
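Two of the activities listed above (surface area and volume of rectangular prisms, and metric conversions such as litres to millilitres) boil down to simple formulas; a quick worked sketch, with function names of my own choosing:

```python
# Volume and surface area of a rectangular prism, plus a litres-to-millilitres
# conversion, matching activities in the listing above.

def prism_volume(length: float, width: float, height: float) -> float:
    return length * width * height

def prism_surface_area(length: float, width: float, height: float) -> float:
    return 2 * (length * width + length * height + width * height)

def litres_to_millilitres(litres: float) -> float:
    return litres * 1000.0

print(prism_volume(2, 3, 4))         # 24 cubic units
print(prism_surface_area(2, 3, 4))   # 52 square units
print(litres_to_millilitres(2))      # 2000.0, i.e. 2 litres = 2000 millilitres
```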
Postdoctoral Research Associate, Massachusetts Institute of Technology

Creating synthetic light-harvesting units mimicking photosynthesis using DNA

Photosynthetic light-harvesting is a hallmark for nanoscale engineering of chromophores to achieve efficient photon absorption, transport, and energy conversion. The orientation, precise localization, and separation of these pigments are determined by the association of the pigments with protein scaffolds. The hierarchical and dense molecular organization of photosynthetic pigments leads to the formation of a manifold of delocalized excited states useful for long-range energy transfer. Creating artificial light-harvesting devices that implement the design principles of pigment organization typically found in photosynthetic light-harvesting systems requires a scaffold that can control the formation of chromophore aggregates in a robust manner.

DNA is a potential scaffold for assembling light-harvesting units due to its programmable sequence design. This sequence programmability allows for precise positioning of dyes in space. Assembling light-harvesting units can be further extended into more complex structures through DNA origami, a method by which a large piece of DNA is folded into a specific shape by short complementary DNA strands. Combining the sequence precision of DNA with the ability to create arbitrary geometric shapes using DNA origami, a myriad of exciton networks can be designed and investigated, similar to a nanoscale breadboard, which can provide insight into nanoscale light-harvesting and energy transport in unprecedented detail.

Abstract: The remarkable performance and quantum efficiency of biological light-harvesting complexes has prompted a multidisciplinary interest in engineering biologically inspired antenna systems as a possible route to novel solar cell technologies.
Key to the effectiveness of biological “nanomachines” in light capture and energy transport is their highly ordered nanoscale architecture of photoactive molecules. Recently, DNA origami has emerged as a powerful tool for organizing multiple chromophores with base-pair accuracy and full geometric freedom. Here, we present a programmable antenna array on a DNA origami platform that enables the implementation of rationally designed antenna structures. We systematically analyze the light-harvesting efficiency with respect to number of donors and interdye distances of a ring-like antenna using ensemble and single-molecule fluorescence spectroscopy and detailed Förster modeling. This comprehensive study demonstrates exquisite and reliable structural control over multichromophoric geometries and points to DNA origami as a highly versatile platform for testing design concepts in artificial light-harvesting networks. Pub.: 23 Feb '16, Pinned: 29 Jun '17
Abstract: Mimicking green plants' and bacteria's extraordinary ability to absorb a vast number of photons and harness their energy is a longstanding goal in artificial photosynthesis. Resonance energy transfer among donor dyes has been shown to play a crucial role on the overall transfer of energy in the natural systems. Here, we present artificial, self-assembled, light-harvesting complexes consisting of DNA scaffolds, intercalated YO-PRO-1 (YO) donor dyes and a porphyrin acceptor anchored to a lipid bilayer, conceptually mimicking the natural light-harvesting systems. A model system consisting of 39-mer duplex DNA in a linear wire configuration with the porphyrin attached in the middle of the wire is primarily investigated. Utilizing intercalated donor fluorophores to sensitize the excitation of the porphyrin acceptor, we obtain an effective absorption coefficient 12 times larger than for direct excitation of the porphyrin.
On the basis of steady-state and time-resolved emission measurements and Markov chain simulations, we show that YO-to-YO resonance energy transfer substantially contributes to the overall flow of energy to the porphyrin. This increase is explained through energy migration along the wire allowing the excited state energy to transfer to positions closer to the porphyrin. The versatility of DNA as a structural material is demonstrated through the construction of a more complex, hexagonal, light-harvesting scaffold yielding further increase in the effective absorption coefficient. Our results show that, by using DNA as a scaffold, we are able to arrange chromophores on a nanometer scale and in this way facilitate the assembly of efficient light-harvesting systems. Pub.: 29 Jan '13, Pinned: 07 Jun '17 Abstract: The extent of photon energy transfer through individual DNA-based molecular wires composed of five dyes is investigated at the single molecular level. Combining single-molecule spectroscopy and pulse interleaved excitation imaging, we have directly resolved the time evolution spectral response of individual constructs, while simultaneously probing DNA integrity. Our data clearly show that intact wires exhibit photon-transfer efficiencies close to 100% across five dyes. Dynamical and multiple pathways for the photon emission resulting from conformational freedom of the wire are readily uncovered. These results provide the basis for guiding the synthesis of DNA-based supramolecular arrays with improved photon transport at the nanometer scale. Pub.: 22 Dec '06, Pinned: 07 Jun '17 Abstract: Using the principle of self-assembly, a fluorescence-based photonic network is constructed with one input and two spatially and spectrally distinct outputs. A hexagonal DNA nanoassembly is used as a scaffold to host both the input and output dyes. 
The use of DNA to host functional groups enables spatial resolution on the level of single base pairs, well below the wavelength of light. Communication between the input and output dyes is achieved through excitation energy transfer. Output selection is achieved by the addition of a mediator dye intercalating between the DNA base pairs transferring the excitation energy from input to output through energy hopping. This creates a tool for selective excitation energy transfer on the nanometer scale with spectral and spatial control. The ability to direct excitation energy in a controlled way on the nanometer scale is important for the incorporation of photochemical processes in nanotechnology. Pub.: 09 Sep '11, Pinned: 07 Jun '17 Abstract: Fluorescence resonance energy transfer (FRET) is a promising means of enabling information processing in nanoscale devices, but dynamic control over exciton pathways is required. Here, we demonstrate the operation of two complementary switches consisting of diffusive FRET transmission lines in which exciton flow is controlled by DNA. Repeatable switching is accomplished by the removal or addition of fluorophores through toehold-mediated strand invasion. In principle, these switches can be networked to implement any Boolean function. Pub.: 10 Mar '12, Pinned: 07 Jun '17 Abstract: Obtaining quantitative information about molecular assemblies with high spatial and temporal resolution is a challenging task in fluorescence microscopy. Single-molecule techniques build on the ability to count molecules one by one. Here, a method is presented that extends recent approaches to analyze the statistics of coincidently emitted photons to enable reliable counting of molecules in the range of 1-20. This method does not require photochemistry such as blinking or bleaching. DNA origami structures are labeled with up to 36 dye molecules as a new evaluation tool to characterize this counting by a photon statistics approach. 
Labeled DNA origami has a well-defined labeling stoichiometry and ensures equal brightness for all dyes incorporated. Bias and precision of the estimating algorithm are determined, along with the minimal acquisition time required for robust estimation. Complexes containing up to 18 molecules can be investigated non-invasively within 150 ms. The method might become a quantifying add-on for confocal microscopes and could be especially powerful in combination with STED/RESOLFT-type microscopy. Pub.: 25 Jun '13, Pinned: 07 Jun '17 Abstract: Assembling DNA-based photonic wires around semiconductor quantum dots (QDs) creates optically active hybrid architectures that exploit the unique properties of both components. DNA hybridization allows positioning of multiple, carefully arranged fluorophores that can engage in sequential energy transfer steps while the QDs provide a superior energy harvesting antenna capacity that drives a Förster resonance energy transfer (FRET) cascade through the structures. Although the first generation of these composites demonstrated four-sequential energy transfer steps across a distance >150 Å, the exciton transfer efficiency reaching the final, terminal dye was estimated to be only ~0.7% with no concomitant sensitized emission observed. Had the terminal Cy7 dye utilized in that construct provided a sensitized emission, we estimate that this would have equated to an overall end-to-end ET efficiency of ≤ 0.1%. In this report, we demonstrate that overall energy flow through a second generation hybrid architecture can be significantly improved by reengineering four key aspects of the composite structure: (1) making the initial DNA modification chemistry smaller and more facile to implement, (2) optimizing donor-acceptor dye pairings, (3) varying donor-acceptor dye spacing as a function of the Förster distance R0, and (4) increasing the number of DNA wires displayed around each central QD donor. 
These cumulative changes lead to a 2 orders of magnitude improvement in the exciton transfer efficiency to the final terminal dye in comparison to the first-generation construct. The overall end-to-end efficiency through the optimized, five-fluorophore/four-step cascaded energy transfer system now approaches 10%. The results are analyzed using Förster theory with various sources of randomness accounted for by averaging over ensembles of modeled constructs. Fits to the spectra suggest near-ideal behavior when the photonic wires have two sequential acceptor dyes (Cy3 and Cy3.5) and exciton transfer efficiencies approaching 100% are seen when the dye spacings are 0.5 × R0. However, as additional dyes are included in each wire, strong nonidealities appear that are suspected to arise predominantly from the poor photophysical performance of the last two acceptor dyes (Cy5 and Cy5.5). The results are discussed in the context of improving exciton transfer efficiency along photonic wires and the contributions these architectures can make to understanding multistep FRET processes. Pub.: 13 Jul '13, Pinned: 07 Jun '17 Abstract: Taking inspiration from photosynthetic mechanisms in natural systems, a light-sensitive photo protective quenching element was introduced into an artificial light-harvesting antenna model to control the flow of energy as a function of light intensity excitation. The Orange Carotenoid Protein (OCP) is a non-photochemical quencher in cyanobacteria: under high light conditions the protein undergoes a spectral shift, and by binding to the phycobilisome it absorbs excess light and dissipates it as heat. By using DNA as a scaffold, an antenna system made of organic dyes (Cy3, Cy5) was constructed, and OCP was assembled on it as a modulated quenching element. By controlling the illumination intensity it is possible to switch the direction of excitation energy transfer from the donor Cy3 to either of two acceptors. 
Under low light conditions energy is transferred from Cy3 to Cy5, and under intense illumination, energy is partially transferred to OCP as well. These results demonstrate the feasibility of controlling the pathway of energy transfer using light intensity. Pub.: 14 Jan '17, Pinned: 07 Jun '17
Abstract: An efficient artificial light-harvesting system is fabricated from a cyclic polysaccharide, sulfato-β-cyclodextrin (SCD); an aggregation-induced emission molecule, an oligo(phenylenevinylene) derivative (OPV-I); and a fluorescent dye, nile red (NiR), via noncovalent interactions in an aqueous solution. In this system, the OPV-I/SCD supramolecular assembly acts as a donor, and NiR that is loaded into the OPV-I/SCD assembly acts as an acceptor. Significantly, an efficient energy-transfer process occurs between the OPV-I/SCD assembly and the loaded NiR, leading to an extremely high antenna effect. Pub.: 07 Jun '17, Pinned: 07 Jun '17
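Several of the abstracts above evaluate their photonic wires against Förster modeling, reporting near-unity step efficiencies at dye spacings of 0.5 × R0 and an overall end-to-end efficiency near 10% for a four-step cascade. As a rough illustration only (standard single-step Förster theory with a naive ideal-chain product, not the authors' detailed model), those numbers can be sanity-checked in a few lines:

```python
def forster_efficiency(r, r0):
    """Single-step FRET efficiency from standard Foerster theory:
    E = 1 / (1 + (r / R0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# One step at a dye spacing of 0.5 x R0 is ~98.5% efficient:
e_step = forster_efficiency(0.5, 1.0)

# A perfectly ideal four-step cascade would then pass ~94% of excitons
# end to end; the much lower measured end-to-end efficiency (~10%) is
# attributed by the authors to nonidealities of the later acceptor dyes,
# not to Foerster transfer itself.
e_chain = e_step ** 4
```

The gap between the ideal-chain estimate and the measured value is exactly why the authors model ensembles of constructs rather than assuming ideal behavior.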
Physicist Richard Schnee hopes to find traces of dark matter by studying particles with low masses and interaction rates, some of which have never been probed before Physicist Richard Schnee hopes to find traces of dark matter by studying particles with low masses and interaction rates, some of which have never been probed before. This is assistant professor Richard Schnee. Credit: Syracuse University The ongoing search for invisible dark matter is the subject of a recent article involving physicists from Syracuse University's College of Arts and Sciences. Research by Richard Schnee, assistant professor of physics, is referenced in Symmetry magazine, a joint publication of the Stanford Linear Accelerator Center in Palo Alto, Calif., and Fermilab in Batavia, Ill. "Scientists looking for dark matter face a serious challenge, in that no one knows its properties," says Schnee, also principal investigator of the Cryogenic Dark Matter Search (CDMS) Physics Lab at SU. "Experiments have seen no signs of dark matter particles that have high masses, but a few experiments have claimed hints of possible interactions from dark matter particles with low masses." An expert in particle physics, Schnee hopes to find traces of dark matter with an experiment that is more sensitive to such low-mass dark matter particles. He and his postdoctoral research associate, Raymond Bunker, are part of a multinational team of scientists working on SuperCDMS, an experiment in the University of Minnesota's Soudan Underground Laboratory that is designed to detect dark matter. (In addition to leading part of the experiment's data analysis, Bunker helped edit a paper about the experiment that has been submitted to Physical Review Letters.) Schnee's team is rounded out by two graduate students: Yu Chen and Michael Bowles. Although dark matter has never been seen directly, it is thought to be six times more prevalent in the universe than normal matter. 
"Everywhere we look, objects are accelerating due to gravity, but the acceleration is too large to be caused by only the matter we see," Schnee says. "Even more remarkably, we can infer that this extra dark matter is composed not of normal atoms, but other kinds of particles." Scientists believe the mystery particles are WIMPs (Weakly Interacting Massive Particles), which travel at hundreds of thousands of miles per hour through space and shower the Earth on a continuous basis. Unlike normal matter, WIMPs do not absorb or emit light, so they cannot be viewed with a telescope. "Spotting the occasional WIMP that interacts with something is extremely challenging because particle interactions from natural radioactivity occur at a much higher rate. Detecting a WIMP is like spotting a needle in a haystack," Schnee continues. Enter CDMS, whose hyper-sensitive detectors can differentiate between rare WIMP interactions and common ones involving radioactivity. The size of a hockey puck, a CDMS detector is made up of a semiconductor crystal of germanium that, when cooled to almost absolute zero, can detect individual particle interactions. The presence of layers of Earth--like those at the Soudan lab--provides additional shielding from cosmic rays that would otherwise clutter the detector as it waits for passing dark matter particles. "We cool our detectors to very low temperatures, so we can detect small energies that are deposited by the collisions of dark matter particles with the germanium," says Schnee. "Other materials, including argon, xenon, and silicon, are also used to detect low-mass dark matter particles. We need to consider as many materials as possible, along with germanium." SU is one of 14 universities working collaboratively in the search for WIMPs. In the Physics Building, Schnee and his team have constructed an ultra-low radon "clean room," in hopes of reducing the number of interactions from radioactivity that look like WIMPs.
(Alpha and beta emissions from radon, a type of radioactive gas, can mimic WIMP interactions in a detector.) "Unfortunately, radon is all around us, so, even with this 'clean room,' some radon-induced interactions will still mimic WIMPs," Schnee says. "All of us are building different types of detectors and are constantly improving our methods, in hopes of spotting WIMP interactions." Housed in The College, the Department of Physics has been educating students and carrying out research for more than 125 years. Graduate and undergraduate opportunities are available in fields ranging from biological and condensed matter physics, to cosmology and particle physics, to gravitational wave detection and astrophysics. Rob Enslin | EurekAlert!
TURN, TURN, TURN Curiosity took this selfie on Mars in February. The rover has found what appears to be a seasonal cycle of methane in the Martian atmosphere and more signs of organic molecules in the soil.
HIDDEN DEPTHS Evidence is growing that plumes of water erupt through the icy surface of Jupiter's moon Europa, seen in this composite image taken by the Galileo spacecraft in the late 1990s. JPL-Caltech/NASA, SETI Institute
UNDER PRESSURE Using lasers, scientists compressed iron to high pressures that are likely found in the cores of large, rocky exoplanets. Here, an image of the inside of the laser chamber is shown with an artist's rendering of an exoplanet.
Turning the Titanic
Over fifteen hundred people died when the "unsinkable" Titanic sank in 1912, just days into the passenger steamship's first trip from Southampton, England to New York City. Collision with an iceberg undisputedly caused the tragedy, but recent news has raised the possibility that human error also played a role in the accident. Suspicions and possibilities floated to the surface last week as news sources reported on Louise Patten's claim that her grandfather, Charles Lightoller, second officer on the Titanic's only trip, claimed to have had reports from the captain and first officer that a steering mistake had turned the ship into the iceberg rather than away from it. The mistake, if indeed it happened that way, may be attributed to a change in steering systems at that time, a move away from the "tiller" system (where you push right to go left and vice versa) to a system more like modern cars—you turn the way you want to go. While Lightoller reportedly told his wife his account of what happened after the tragedy, he never revealed the possibility of human error in his meetings with investigators. Most likely, the truth will never be known for certain. But the news offers ground for speculation, and it's at the heart of a new novel by Patten. The following projects might be smooth sailing for those interested in hydrodynamics and curious about events that may have coincided to down the famed ship:
- How Much Weight Can Your Boat Float? (Difficulty: 4)
- Rocking the Boat (Difficulty: 4-5)
- Making It Shipshape: Hull Design and Hydrodynamics (Difficulty: 5-8)
- Archimedes Squeeze: At What Diameter Does an Aluminum Boat Sink? (Difficulty: 4-5)
You Might Also Enjoy these Previous Entries:
- Real-world Blood Typing and the Value of Blood Donation
- A Pet Science Project Success
- Laurel vs. Yanny and Student STEM
- Inspiring Students about STEM Careers and Robotics
- Celebrate Engineers Week with the Fluor Challenge
- Can Aerodynamic Suits Give U.S. Speed Skaters an Edge?
- Put a Heart Health Spin on Valentine's Day
- Classroom Science for Flu Season
Using Python in Grasshopper
In the previous section we saw how Grasshopper, despite being a great platform for algorithmic design, can also be limiting when trying to incorporate more complex programming elements. On the other hand, programming directly with computer code in a language like Python gives us much more control over our models and allows us to develop more complex and interesting design spaces, but it has a much higher learning curve and can be quite intimidating when first starting out. So how do we choose one over the other? Luckily we don't have to! In fact, we can easily extend the basic functionality of Grasshopper by embedding Python code directly into special Grasshopper nodes that can then interact with other nodes in our model. This allows us to rely mostly on Grasshopper for basic elements, and use code only when necessary for more complex functionality.
As discussed previously, there are now hundreds of different programming languages to choose from — so why would we choose to work with Python? Python is a very modern, general-purpose, high-level, object-oriented programming language. In recent years Python has become extremely popular in a variety of fields outside of computer programming such as science, medicine, statistics, math, and machine learning. This popularity can be attributed to Python's:
- relatively simple syntax, with a focus on simplicity, clarity, and readability, which makes it less complicated to learn and write
- extensibility through a large collection of external libraries
- huge support community of active users.
Unlike more complex languages such as C++, Python is not meant for full software development. Because of its emphasis on ease of use, it also tends to be less efficient and somewhat slower than these languages, but this is usually not an issue for applications outside of software development.
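As a generic illustration of that readability (not tied to any Rhino or Grasshopper API — the values are arbitrary examples), filtering and transforming a list of lengths takes a single, almost English-like line:

```python
# Keep only segments longer than 2 units, and scale each one up by 10.
lengths = [2.5, 4.0, 1.2, 6.5]
long_segments = [x * 10 for x in lengths if x > 2]
print(long_segments)  # [25.0, 40.0, 65.0]
```

The equivalent loop in a language like C++ would need explicit types, a container declaration, and several more lines.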
Lately, Python is also starting to be embedded as a scripting language within many different design software packages, including ArcGIS, QGIS, Rhino, Solidworks, and Autodesk Fusion. This is a big change from earlier years, when every design tool had its own proprietary scripting language (Rhinoscript for Rhino, EKL for Catia, Actionscript for Illustrator), so you had to learn a whole new language each time you used a new tool. Integrating Python as a standard scripting language allows designers to learn a single language and use it to control a variety of design software. So if you are not a programmer but want to learn one programming language that will give you the most use in your design career, there is a strong argument that the language should be Python.
Python in Rhino and Grasshopper
Starting in version 5, Rhino has Python directly embedded alongside (and as an eventual replacement for) Rhinoscript. In addition to running any standard Python code, it can also communicate with the Rhino interface and work with geometry by using a set of special Python libraries provided by Rhino. Using these libraries designers can use Python to control almost every feature of Rhino, from the model geometry to its views, cameras, and even the user interface. You can also work with Python directly in Grasshopper through an external plugin called GHPython. This plugin gives you a 'Python' node in Grasshopper which allows you to embed code into your models. Code written in these nodes uses the same libraries to handle geometry and perform various modeling tasks, and is able to communicate with both the Grasshopper and Rhino environments. The great thing about the GHPython node is that it allows you to mix and match between working with code and normal Grasshopper nodes. This way, there is no pressure to develop your entire model just through code starting with a blank text file, which can be very intimidating for people just starting out with scripting or computational design.
Instead, you can develop most of the model using standard Grasshopper nodes, and only use Python nodes to do more complex tasks that are difficult or impossible in standard Grasshopper. In a way GHPython is like a 'gateway drug' into using code for computational design. You can start off by writing small, simple scripts for specific purposes, and gradually develop more and more complex code as you learn.
Installing the GHPython library
Unfortunately the GHPython library does not come pre-installed with Grasshopper and must be downloaded and installed separately. Luckily this is very easy to do:
1. Go to http://www.food4rhino.com/app/ghpython and download the latest stable version. At the time of this writing 0.6.0.3 was the latest version and has been tested to work with all the examples in the class.
2. The library is contained in a single file called ghpython.gha. Once this file is downloaded, right click on it and go to Properties. If there is a button or a checkbox in the properties window that says 'Unblock', check it or click on it to disable the blocking so that Grasshopper can see the file.
3. To load the library into Grasshopper you need to put this file into a special 'Components' folder where all the libraries are kept. The easiest way to find this folder is to launch Grasshopper and from the menu go to File -> Special Folders -> Components Folder. Now copy and paste the ghpython.gha file into this folder.
4. To see the GHPython node you need to restart Grasshopper, either by restarting Rhino or by typing the GrasshopperUnloadPlugin command in Rhino to shut down Grasshopper and then restarting it. If you just close the Grasshopper window it only hides it and does not actually shut it down.
Using the GHPython node
The GHPython library adds just one additional node to Grasshopper. You can find it under the Maths tab. Let's put one of these nodes onto the canvas and see how it works.
You can see that the Python node has input and output ports just like any other node in Grasshopper, except it can have any number of inputs and outputs and you can call them whatever you want. You can add more inputs and outputs by zooming in on the node until you see minus icons appear next to the input/output names and plus icons appear between them. You can use these icons to either add or remove inputs and outputs. You can also rename them by right clicking on the name and entering a new name in the text box at the top of the menu.
These inputs and outputs are automatically brought into the Python script where you can use them as variables. This allows the Python script to interact with the rest of your Grasshopper model by bringing data in through input variables and then outputting other data that was created within the script. Double clicking on the center of the node brings up the script editing window where you can write the actual Python code.
Let's create a simple 'hello world' example to see how this works. Change the name of one of the Python node's inputs to 'input' and the name of one of the outputs to 'output'. Plug a text panel into the input and type whatever you want into the panel. Attach another text panel into the output so we can see the results. Now type this line of code into the script window:
output = "hello " + input
and click the 'Test' button at the bottom of the window. This button will execute the script and you should see the results above. Clicking OK will save changes to your script and close the editor window. Clicking Close will close the window without saving changes.
This simple example brings in text from the Grasshopper model through its input node, joins this text to another piece of text, and then assigns the result to the output node, which can then be used in the Grasshopper model.
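Outside Grasshopper, the body of this 'hello world' node behaves like an ordinary Python function whose parameters play the role of the input ports and whose return value plays the role of the output port. A stand-alone sketch of the same logic (plain Python, no Grasshopper required — here `input` is just an argument name, not the GHPython port object):

```python
def hello_node(input):
    # Same body as the GHPython node: join the incoming text to a greeting,
    # and hand the result back out through the 'output' port.
    output = "hello " + input
    return output

print(hello_node("world"))  # hello world
```

Thinking of each GHPython node as a function like this makes it easier to reason about what data flows in and out of the node on the canvas.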
Integrating code into Grasshopper like this is a very powerful way to extend the basic functionalities of Grasshopper and will allow us to create interesting and complex design spaces using the full tools of computation. The next series of posts will go through how to work with the five fundamental elements of programming in Python and introduce you to some of the basic concepts of the language. Although we won't be connecting the nodes to anything yet, you can follow along with the code examples by entering them directly into the Python node script editor. Once we establish some of the basics, the following section of posts will go over how we can use Python to work with geometry and interact with other Grasshopper nodes.
For more information about learning the Python programming language you can follow these guides:
For more specific information on working with Python in Rhino and Grasshopper you can consult these resources:
- http://developer.rhino3d.com/guides/rhinopython/ — Rhino's Python support page
- http://developer.rhino3d.com/guides/rhinopython/python-rhinoscriptsyntax-introduction/ — specific support for the rhinoscriptsyntax Python library
- http://www.rhino3d.com/download/IronPython/5.0/RhinoPython101 — book about using Python in Rhino
- http://rhinopython-docs.appspot.com/ — documentation of the main Rhino Python library
- http://developer.rhino3d.com/api/RhinoScriptSyntax/win/ — documentation of the rhinoscriptsyntax library
- http://www.grasshopper3d.com/forum/topics/general-python-questions — general discussion about the GHPython node
- https://github.com/mcneel/rhinoscriptsyntax — github repo for the rhinoscriptsyntax library
Our work doesn’t stop with discovering life on Earth—we’re also helping to sustain it. Woven throughout the breathtaking forests of the Western United States is a dilemma that encapsulates the complexities of twenty-first century conservation. Jack Dumbacher’s research about the relationship between the region’s native endangered Northern spotted owl (Strix occidentalis) and its populous relative— the barred owl (Strix varia)—has uncovered controversies surrounding land and species management in the face of modern deforestation. “We were alarmed by rapidly declining spotted owl populations, and suspected that they were being edged out by—and possibly hybridizing with—more aggressive barred owls showing up outside of their traditional range,” says Dumbacher. “With the clock ticking for endangered spotted owls, we needed to study these interactions to properly guide species survival plans.”
Developing a Process-Level Understanding
Arctic sea-ice is changing dramatically, with rapid declines in summer sea-ice extent and a shift toward relatively more first year ice and less multi-year ice. Ultimately sea-ice decline is linked to broader global climate change, but at a regional scale many interdependent processes and feedbacks within the atmosphere, ocean, and sea-ice contribute to the broader observed changes. The primary objective of MOSAiC is to develop a better understanding of these important coupled-system processes so they can be more accurately represented in regional- and global-scale models. Such enhancements will contribute to improved modeling of global climate and weather, and Arctic sea-ice predictive capabilities.
Guiding Science Questions
What are the causes and consequences of an evolving and diminished Arctic sea ice cover?
- What are the seasonally-varying energy sources, mixing processes, and interfacial fluxes that affect the heat and momentum budgets of sea ice?
- How does sea ice move and deform over its first year of existence?
- Which processes contribute to the formation, properties, precipitation, and maintenance of Arctic clouds and their interactions with aerosols?
- How do interfacial exchange rates, biology, and chemistry couple to regulate ecosystems and the major elemental cycles in the high Arctic?
- How do ongoing changes in the Arctic ice-ocean-atmosphere system impact larger-scale heat and mass transfers of importance to climate and weather?
NASA has developed an instrument that will enable researchers to determine plant water use and to study how drought conditions affect plant health. Soon to be installed on the International Space Station, the instrument is called Ecostress (an abbreviation for 'ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station') and will measure the temperature of plants from space. "Plants draw in water from the soil and, as they are heated by the Sun, the water is released through pores on the plants' leaves through a process called transpiration. This cools the plant down, much as sweating does in humans. "However, if there is not enough water available to the plants, they close their pores to conserve water, causing their temperatures to rise," NASA explained in a news release today about the project. It continued: "Plants use those same pores to take up carbon dioxide from the atmosphere for photosynthesis - the process they use to turn carbon dioxide and water into the sugar they use as food. If they continue to experience insufficient water availability, or 'water stress', they eventually starve or overheat, and die." Ecostress data will therefore show these changes in plants' temperatures, providing insight into their health and water use while there is still time for water managers to correct agricultural water imbalances. Simon Hook, Ecostress' principal investigator at NASA's Jet Propulsion Laboratory in Pasadena, California, said that when a plant is so stressed that it turns brown, it's often too late for it to recover. "But measuring the temperature of the plant lets you see that a plant is stressed before it reaches that point," he said. These temperature measurements are also considered an early indicator of potential droughts, so when plants in a given area start showing signs of water stress through elevated temperature, it is likely that an agricultural drought is underway. 
Having this data in advance will therefore give the agricultural community a chance to prepare and respond accordingly, NASA said. "Ecostress will allow us to monitor rapid changes in crop stress at the field level, enabling earlier and more accurate estimates of how yields will be impacted," added Martha Anderson, an Ecostress science team member. "Even short-term moisture stress, if it occurs during a critical stage of crop growth, can significantly impact productivity." Ecostress will hitch a ride to the space station on a NASA-contracted SpaceX cargo resupply mission scheduled to launch from Cape Canaveral Air Force Station in Florida on 29 June. Once it arrives, NASA said it will be robotically installed on the exterior of the station's Japanese Experiment Module Exposed Facility Unit. Over the next year, the new instrument will use the space station's unique low Earth orbit to collect data over multiple areas of land at different times of day, producing detailed images of areas as small as 40 by 70 metres, or around the size of a small farm, every three to five days.
The NOvA Neutrino Experiment is the largest neutrino detector in the world at this time, and includes the largest plastic block structure ever built, known as the "Far Detector." The 260 members of the NOvA Neutrino Experiment recently reported their initial findings in two papers. The team is trying to get a clearer picture of the role neutrinos, mysterious subatomic particles, played in the evolution of the cosmos. The first paper, in Physical Review Letters, describes the first appearance of electron neutrinos in the NOvA experiment. A second paper, in Physical Review D, describes the disappearance of muon neutrinos in the experiment. Taken together, the papers offer insights into fundamental neutrino properties such as mass, the way neutrinos oscillate from one type to another, and whether neutrinos are a key to the dominance of matter in the universe. "THESE ARE ABSOLUTELY STUNNING ELECTRON NEUTRINO EVENTS. WE'VE LOOKED AT THEM AND THEY'RE TEXTBOOK PERFECT—ALL 11 OF THEM SO FAR." In a presentation describing the results, physicist Mayly Sanchez clicked to a slide showing the telltale track of an electron neutrino racing through the 14,000-ton Far Detector of the experiment. Since that detector started full operations in November 2014, two analyses of data from the long-distance experiment have made the first experimental observations of muon neutrinos changing to electron neutrinos. One analysis found 11 such transitions. Sanchez wrote on her slide, "All 11 of them are absolutely gorgeous." HOW THE EXPERIMENT WORKS NOvA scientists use a 300-ton particle detector at the US Department of Energy's Fermilab near Chicago (the Near Detector) and a 14,000-ton detector in northern Minnesota (the Far Detector) to study neutrino oscillations. The Near Detector sits in a cavern 350 feet underground and measures the composition of the neutrino beam as it leaves the Fermilab site. As they travel straight through the earth, the neutrinos oscillate. 
The Far Detector records what types of neutrino arrive in Minnesota. Sanchez, an Iowa State University associate professor of physics and astronomy who is also an Intensity Frontier Fellow at Fermilab, is one of the leaders of the NOvA experiment. She serves on the experiment's executive committee and co-leads the analysis of electron neutrino appearance in the Far Detector. The paper about electron neutrino appearance reports two independent analyses of detector data: one found six cases of the muon neutrinos sent to the Far Detector oscillating into electron neutrinos. The other found 11 oscillations. If there were no oscillations, researchers predicted there would be one electron neutrino observed in the Far Detector. Sanchez says the flickering electron neutrino tracks she helped analyze prove the experiment can do what it was designed to do: spot and measure neutrinos after they make the 500-mile, 3-millisecond journey from Fermilab to the Far Detector in northern Minnesota. The detector is huge: 344,000 plastic cells within a structure 200 feet long, 50 feet high, and 50 feet wide, making it the world's largest freestanding plastic structure. "The big news here is we observed electron neutrino appearance," Sanchez says. If the calibrations and parameters had been just a little off, "we might not have seen anything," she says. "When you design an experiment like this, you hope that nature is kind to you and allows you to do a measurement." In this case, physicists are detecting and measuring mysterious and lightweight neutrinos. They're subatomic particles that are among the most abundant in the universe but almost never interact with matter. They're created in nature by the sun, by collapsing stars, and by cosmic rays interacting with the atmosphere. They're also created by nuclear reactors and particle accelerators. There are three types of neutrinos: electron, muon, and tau. 
As they travel at almost the speed of light, they oscillate from one type to another. Takaaki Kajita of Japan and Arthur B. McDonald of Canada won the 2015 Nobel Prize in Physics for their contributions to the independent, experimental discoveries of neutrino oscillation. 3 GOALS OF NOVA The NOvA experiment has three main physics goals: make the first observations of muon neutrinos changing to electron neutrinos, determine the tiny masses of the three neutrino types, and look for clues that help explain how matter came to dominate antimatter in the universe. At the beginning of the universe, physicists believe there were equal amounts of matter and antimatter. That's actually a problem because matter and antimatter annihilate each other when they touch. But the universe still exists. So something happened to throw off that balance and create a universe full of matter. Could it be that neutrinos decayed asymmetrically and tipped the scales toward matter? The NOvA experiment, as it takes more and more neutrino data, could provide some answers. Sanchez likes the data she's seen: "These are absolutely stunning electron neutrino events. We've looked at them and they're textbook perfect—all 11 of them so far." Featured Photo Credit: Reidar Hahn/Fermilab, the NOvA Near Detector.
The C Preprocessor CPP supports two more ways of indicating that a header file should be read only once. Neither one is as portable as a wrapper ‘#ifndef’ and we recommend you do not use them in new programs, with the caveat that ‘#import’ is standard practice in Objective-C. CPP supports a variant of ‘#include’ called ‘#import’ which includes a file, but does so at most once. If you use ‘#import’ instead of ‘#include’, then you don't need the conditionals inside the header file to prevent multiple inclusion of the contents. ‘#import’ is standard in Objective-C, but is considered a deprecated extension in C and C++. ‘#import’ is not a well designed feature. It requires the users of a header file to know that it should only be included once. It is much better for the header file's implementor to write the file so that users don't need to know this. Using a wrapper ‘#ifndef’ accomplishes this goal. In the present implementation, a single use of ‘#import’ will prevent the file from ever being read again, by either ‘#import’ or ‘#include’. You should not rely on this; do not use both ‘#import’ and ‘#include’ to refer to the same header file. Another way to prevent a header file from being included more than once is with the ‘#pragma once’ directive. If ‘#pragma once’ is seen when scanning a header file, that file will never be read again, no matter what. ‘#pragma once’ does not have the problems that ‘#import’ does, but it is not recognized by all preprocessors, so you cannot rely on it in a portable program.
The Influence of Al2O3 Addition on Electrical Properties of Sintered Copper

Electrical conductivity is often a very significant property in the application of metals. The electrical properties of metals can be explained from the viewpoint of a quantitative analysis of electronic effects in these materials. The valence electron of copper, whose electronic configuration is 1s² 2s² 2p⁶ 3s² 3p⁶ 3d¹⁰ 4s¹, is the 4s electron, so it can be assumed that one valence electron is free in each copper atom. Thus, the concentration of conduction electrons is equal to the number of atoms per unit volume.

Keywords: Valence Electron, Electric Resistivity, Copper Atom, Sintered Sample, Agglomerate Size
posted by Kevin

A 1.00 L flask is filled with 1.05 g of argon at 25 °C. A sample of ethane vapor is added to the same flask until the total pressure is 1.150 atm.
a.) What is the partial pressure of argon, PAr, in the flask?
b.) What is the partial pressure of ethane, Pethane, in the flask?
So I used Dalton's Law.

I will be happy to critique your thinking. It does no good to post a series of test questions using different names, none of which indicate any work on your part.
In this chapter, we develop algorithms for implementing Seuss programs on multiprocessor architectures in which the processors communicate using messages. The implementation strategy is to partition the boxes over a set of processors and have a scheduler that instructs each processor which action to execute next. The scheduler can be centralized or distributed among the processors. In the next section, we describe the scheduler in abstract terms that permit either type of implementation; specific implementations are described in Section 11.5.

Keywords: Schedule Strategy, Action Execution, Black Node, Concurrent Execution, Multiple Alternative
Wolves are good indicators of long-term ecosystem changes, since they depend on healthy populations of prey species. Alaska is home to an estimated 7,000 to 11,000 wolves. Wolf packs vary in size and range between Alaska's parks; for example, wolves sometimes disperse between Denali National Park and Preserve and Yukon-Charley Rivers National Preserve. The results of a two-decade long study of wolf populations in Alaska's Yukon-Charley Rivers National Preserve have yielded new insights into how species management programs in adjacent areas affect protected wolf populations. National Park Service researchers monitored wolf population dynamics for 22 years (1993–2014) in order to assess how two large-scale wolf control programs, which had the primary goal of increasing the size of the Fortymile caribou herd, affected a wolf population located within the adjacent protected area of Yukon-Charley Rivers National Preserve. The study is one of only four in North America conducted for this length of time. The study found that during periods when wolf control programs were implemented, wolf survival rates in the national preserve were lower than usual even though the preserve encompasses 2.7 million acres and wolf control activities are prohibited in the preserve (and on other lands managed by the National Park Service). Other measures of population dynamics (dispersal, births and deaths) were also substantially different during years of wolf control.
Does length contraction mean the contraction of spacetime? No. Length contraction can be seen as a result of viewing a 4d object at a different angle. It's closely analogous to the fact that you can slice a sausage perpendicular to its length and get a circular face, or at an angle and get an elliptical face. The sausage hasn't changed (and certainly spacetime hasn't), but the part of it you are looking at has. No. It cannot be that, because we all occupy the same spacetime yet we don't all observe the same length contraction. Length contraction is a natural result of the relativity of simultaneity. The length of an object is the distance between where its ends are at the same time, so when things that are at the same time for one observer are not at the same time for another observer, they will find different lengths. Looking at the twin paradox, the effect is that the twin on the spaceship does experience a time dilation. Right, I'll take some time to review this. No, NOBODY ever "experiences" time dilation. It is something you see in objects that are moving relative to you, but they see YOU as time dilated at the same time you see them as time dilated. You, right now as you read this, are MASSIVELY time dilated according to a particle in the CERN accelerator. Has your watch slowed down? The twin paradox is an example of differential ageing, a different phenomenon than length contraction and time dilation. One way of seeing that it is different from time dilation is to consider the time dilation that is present in the twin paradox: at every point in the journey, the travelling twin is at rest relative to himself while the stay-at-home twin is moving; therefore the stay-at-home clock is the time-dilated one as far as the traveller is concerned. However, the traveller is still able to correctly calculate that he will age less than stay-at-home - even though stay-at-home's clock is dilated throughout the journey. 
Hmm, the result of the twin paradox is that the person moving away from earth experiences a slower time when traveling close to the speed of light. Therefore when the twin on earth ages 10 years, the twin traveling on the spaceship only ages 6 years by traveling at a large fraction of the speed of light. The same thing happens with gravitational time dilation, as the person closer to earth experiences a slower passage of time than the one away from earth by a few nanoseconds. No, the person on the spaceship does NOT experience slower time. He/she experiences time passing at one second per second just as does the stay-at-home. What happens is that the person on the spaceship takes a different path through space-time and therefore experiences fewer ticks of his one-second-per-second clock than does the stay-at-home. EDIT: and by the way, this is one of the most confusing things when you first start to look into special relativity, so you're in good company not getting it right away. Well yes, neither of the twins would experience a slower time; it just comes out shorter for one (6 years) and longer for the other (10 years). I am thinking that speed causes spacetime to contract through length contraction, which matches the fact that the person on earth with a more compact spacetime has a time dilation. If gravity increases with a more compact spacetime, the result would be more prominent. As we have said several times, nothing happens to spacetime in the scenarios we are discussing. Both length contraction and time dilation are effects that an observer observes happening to objects that are in motion relative to them. This is a symmetric effect - two observers in relative motion measure the same thing happening to the other. So it cannot be due to "spacetime contracting". Both observers would have to claim that spacetime was contracted for the other. The twin paradox is showcasing a different, but related, phenomenon. There is still no spacetime contraction involved. 
It turns out that the elapsed time showing on your wristwatch is a measure of "distance" travelled through spacetime (it's actually called the "interval"). In other words, your wristwatch measures "distance" through spacetime in a way analogous to the way the odometer in a car measures distance through space. The different ages of the twins come from the fact that they took different routes through spacetime. This does not involve any kind of change to spacetime. It's essentially no different from the fact that the straight-line distance between two points is different from the distance between them travelling via a third - one side of a triangle is not the same length as the other two put together. Well, what Wikipedia says on time dilation is that "The laws of nature are such that time itself (i.e. spacetime) will bend due to differences in either gravity or velocity – each of which affects time in different ways." I'm still skeptical about how velocity bends time; I'm not sure if it's mass related. That kind of thing is why you should be wary of Wikipedia as a source. Velocity does nothing to spacetime or mass (old textbooks will disagree about mass, but so-called relativistic mass has been a deprecated term since the 1970s, pop sci presentations notwithstanding). Assuming you are after a non-mathematical look at relativity, Ben Crowell's book Relativity for Poets is a good source. It's freely available from www.lightandmatter.com This sentence makes it seem as though one person can travel closer to the speed of light than another. Instead there is relative motion between two people. Either one could claim that the other is the one moving closer to the speed of light, or just the opposite. Either one could claim that he is the one moving closer to the speed of light. It's a meaningless assertion either way because as you chase after a light beam you find you make no more progress in catching it than does the other person. 
When the travelling twin moves away from the staying twin, the staying twin moves away from the traveling twin. The situation is symmetrical and each will observe the other's clock as running slow. The same is true when their relative motion causes them to get closer to each other. The only time the situation is not symmetrical is when the traveling twin changes direction, and it is this part that's responsible for the difference in proper times experienced by the twins. It is one thing to say that each twin's proper time differs from the other's dilated time. This always involves elapses of time between events that are spatially separated in at least one of their rest frames. It is quite another to say that their proper times differ from each other. This always involves elapses of time between events that are not spatially separated in either rest frame. The former has nothing to do with the difference in their ages whereas the latter has everything to do with it. In terms of velocity and gravity: If two observers are travelling with respect to each other, what does spacetime care about that? Spacetime remains flat and unchanged, but the two observers have different perspectives on it. If a large mass is occupying space, on the other hand, spacetime itself curves and this curvature can be detected by any observer. The sentence is just plain flat-out wrong. This might be a good time to remind everybody that wikipedia is not an acceptable source at Physics Forums, and stuff like this is the reason why. I thought it was both RoS and time dilation? What do you call it when you travel to Alpha Centauri at 90% of the speed of light and measure the distance traveled to be 2 LY (if I did that math right...). It's hard to separate the two - but I can say that the distance between earth and Alpha Centauri is pretty much by definition the distance between where the earth is right now and where Alpha Centauri is right now. 
That definition works whether I'm at rest relative to them or not, and yields the appropriately contracted length if I am not. Travel time only comes into it when we consider how long an object (not necessarily at rest relative to me) would take to traverse that distance. My usual example is how do you measure the length of a beetle? You just stand it on a ruler and read off the position of its head and the position of its tail. If the beetle is walking, though, that procedure will not get you its length if you don't make the measurements at the same time. In that example, failing to measure simultaneously could just be sloppy experimentalism. But the relativity of simultaneity means that there is genuine, unresolvable, disagreement over what constitutes "at the same time" in different frames (and the beetle is moving in at least one of them), and that's where length contraction comes from. Observers at rest in the two frames use the same procedure to measure length, but because they disagree about simultaneity they get different lengths. Time dilation isn't directly relevant to this, although you can't build a symmetric picture of the world without invoking it as well - you end up with an absolute rest frame. 
That's not an "absolute" rest frame, it's the rest frame of the beetle and the ruler, which is to say a frame in which the beetle and the ruler are not moving. That's a very convenient frame to use if the beetle and the ruler are sitting on a lab bench and the lab bench is bolted to the same concrete floor upon which I am standing... But I would find it very difficult to convince an observer watching with a telescope from Mars that the beetle, ruler, lab bench, and concrete floor were not moving - they're attached to the earth, which is going around the sun at a very different speed than Mars. For that matter, if I had a good enough telescope and he had a beetle and a ruler, I would find that his beetle and ruler were contracted relative to mine. Who is to say why my perspective (my beetle is uncontracted, his beetle is contracted) has any more natural significance than his perspective (my beetle is contracted, his beetle is not)? We can imagine consulting an independent authority, such as an alien from the Andromeda galaxy (moving at about 300 km/sec relative to both of us), to say who is right, but the only answer we'd get is that we're both wrong. The phrase, "absolute rest frame" quotes Ibix above. Of course all motion is "relative to what?," so there is no absolute frame of reference for velocity. But "proper length" always refers to length as measured from at rest with the object or distance in question. That leaves the question, "Is length variable with how you measure it or do "things" and distances have objective lengths independent of "how you look at them." (Objective vs subjective, the latter meaning frame dependent.) Measurement does not change lengths or distances. They exist objectively prior to varieties of frames of reference from which they are measured. Something to consider besides repeating the rules of orthodox special relativity/ subjectivity. 
PROPER length has an absolute value but measured/calculated length is frame dependent, so yes, length is variable depending on how you measure it. And you misunderstand, apparently, the way in which he used it, which was to say somewhat indirectly what I said specifically in post #20, which is that there IS no such thing as an absolute frame of reference. @Ibix, jump in if I have misrepresented what you meant. You are correct that there is one frame in which the beetle is at rest, and all observers agree what that frame is and what length the beetle is in that frame. As Nugatory and phinds have said, though, that is only a special frame for that one beetle, not for beetles in general nor for the laws of physics. If you start with Einstein's two postulates you derive the Lorentz transforms, which can be interpreted as length contraction, time dilation and the relativity of simultaneity. So insisting that one of these doesn't happen implies that at least one of the two postulates is wrong. My memory of a long-ago argument is that denying time dilation forces you to abandon the principle of relativity. I didn't mean to kick off a long discussion - I was only meaning to note that my claim that you get length contraction from the relativity of simultaneity didn't mean to imply that you could forget about time dilation entirely.
A diagram displaying how the user interacts with application software on a typical desktop computer. The application software layer interfaces with the operating system, which in turn communicates with the hardware; the arrows indicate information flow. This involves passing instructions from the application software, through the system software, to the hardware, which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions. With a degree in CIS from Waubonsee, students have the foundation needed for a career as a software developer, computer security specialist, systems analyst, or administrator for networks, databases or systems—or other technology specialties such as program or project manager, testing and engineering. Evaluate minimum hardware requirements for the different applications in your preferred office suite. Computer software engineers apply the principles of computer science and mathematical analysis to the design, development, testing, and evaluation of the software and systems that make computers work. Would you like to learn how to make software run faster and more reliably on different kinds of computers and operating systems? Computer Software Systems Engineers report using a deep pool of skills on the job. Software is written in a number of programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. 
The best free PC software applications aren't about the cost (or lack thereof); they're about a fresh alternative—collections of code that put the dumb hardware in your computer to smart use, tools that can accomplish anything from balancing your household budget to helping to cure cancer.…
Scientists have developed a new type of "super wood" that is more than 10 times stronger and tougher than normal wood – and this innovation could potentially become a natural and inexpensive substitute for steel and other materials. A UK-based team of researchers has created a graphene-based sieve capable of removing salt from seawater. The vessel will be able to scoop up around five tons of plastic every day, and then melt it down – all in yet another private effort to help clean up the ocean. In a world seduced by screens, the future of paper might seem uncertain. But many in the industry remain optimistic – after all, you can't blow your nose on an email. Zero Mass Water makes solar panel arrays that pull clean drinking water from the air. The $4,500 arrays just launched in the United States. Zero Mass arrays could come in handy in areas where water sources are far away or scarce. Some homeowners have purchased arrays as an alternative to plastic water bottles. Wristbands that help babies get a better start, a portable gaming console, better football helmets and super sustainable crops. The Hybrid Module Mobility concept isn't pedaled directly, but instead employs a pedal-powered alternator for riders to partially recharge its batteries. There's no doubt that tidal power is severely lagging behind other forms of renewable energy like wind and solar power. However, as two-thirds of the Earth's surface is covered by water, we'd be crazy not to try and harness energy from water that's continuously in motion. Last week, Lund University reported that microplastics cross the blood-brain barrier to accumulate in the brains of fish, and this build-up may be related to behavioral disorders in fish, including slower eating and less exploration of their environments. Futuristic airless tire is 3D printed, won't go flat or need replacement.
Scientists are unveiling a rare octopus that has never been on public display before. And unlike other octopuses, where females have a nasty habit of eating their partners during sex, Larger Pacific Striped Octopuses mate by pressing their beaks and suckers against each other in an intimate embrace. The beautiful creature can also morph from dark red to black-and-white stripes and spots and can shape-shift from flat to expanded. The sea dweller will be on display starting today (Mar. 6) at the California Academy of Sciences in San Francisco. "I'm thrilled that Academy visitors will have the opportunity to view this fascinating animal up close in the aquarium, where they'll see just why its beauty, unique mating technique and social habits are intriguing the cephalopod community," said Richard Ross, a biologist at the California Academy of Sciences, in a statement. Octopuses are known for their clever antics, including their various means of disguise. For instance, the Atlantic longarm octopus (Macrotritopus defilippi) has been observed mimicking a flounder by swimming forward with its arms trailing behind like flounder fins. That octopus even contorted its soft body so both eyes moved to the left like a flounder's would. The octopus can change its color pattern from a deep red hue to a wacky combination of stripes and spots. Credit: Richard Ross And the mimic octopus (Thaumoctopus mimicus) can shift its color and shape in mind-boggling ways, impersonating everything from sea snakes and giant crabs to stingrays. [See Video of Octopus Mimicking a Flatfish] The Larger Pacific Striped Octopus was discovered in 1991, but it was largely forgotten for more than a decade. The species is so new that it still doesn't have a name. Unlike other octopus species, females survive many years to lay several clutches of eggs, rather than dying after reproducing once. 
A female is going on display first, but will soon be joined by a male companion, at which point the scientists expect them to mate often (and peacefully). Though scientists don't know much about the creature's natural living conditions, the Larger Pacific Striped Octopus is thought to live in large groups with 40 other octopuses of the same species. The researchers hope to introduce more individuals into the aquarium to see how their behavioral dynamics change in bigger groups.

Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
The asteroid was officially recognised as part of a Near-Earth Object observing programme with German astronomers Lothar Kurtze and Felix Harmouth. UK schoolchildren were recruited by the Faulkes Telescope (FT) Education Director David Bowdley in spring 2006. They conducted the follow-up observations necessary to better understand the asteroid’s orbit and ensure its recognition by astronomy authorities. David Bowdley said: "Helping to study this asteroid and choose a name for it has been a great inspiration for the students. Working alongside real scientists has shown how much more can be achieved when people collaborate. In future we will be running many more projects like this where students work alongside astronomers to achieve real scientific outcomes." Jay Tate from the Spaceguard Centre in Mid Wales is the project’s Near-Earth Object scientific advisor. He said: “Students working with the Faulkes Telescope Project produce some of the most important data on asteroids in the UK. Kids love it because they can watch things move, and more importantly because it’s real – a far cry from many sterile classroom activities.” The schoolchildren had the final say on three name suggestions made by the German astronomers, and ‘Snowdonia’ was the clear winner. The name acknowledges the location of the FT Operations Centre at Cardiff University, as well as drawing attention to Snowdonia National Park. The schools involved included: The Leys School in Cambridge, West Monmouth School in Pontypool, St David's Catholic College in Cardiff, Simon Langton Grammar School for Boys in Canterbury, University College School in London, Belmont House School in Glasgow and The Kingsley School in Leamington Spa. Kerry Pendergast teaches Physics and Astronomy at West Monmouth School. He said: "Observing and naming the new asteroid added an extra dimension to students’ studies and helped them feel part of scientific discovery. 
When they had the chance to vote for a Welsh name there really was no competition!"

Anita Heward

What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin

Subaru Telescope helps pinpoint origin of ultra-high energy neutrino. 16.07.2018 | National Institutes of Natural Sciences

For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...

For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...

Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution.
It plays, for example, an important role in the synthesis of new chemical...

Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....

Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Quest: Science for South Africa
ISSN 1729-830X
Volume 9 • Number 2 • 2013 • R20

Cover stories:
The electromagnetic spectrum: key to understanding our Universe
Time machines and the accelerating Universe
The SKA: Answering questions about the cosmos
Shedding light on Dark Matter
A. sediba: A curious mosaic
Space science in South Africa

Academy of Science of South Africa

WE ARE ...
… touching people's lives by growing their awareness of science
… sharing the excitement and achievements of science in daily life
… transforming lives by educating a new generation of young scientists

The South African Agency for Science and Technology Advancement (SAASTA) is opening people's eyes to the wonder of science by listening and communicating; by engaging with them and making them aware of new scientific knowledge; by working together and sharing the excitement of science; and by building a new generation of young scientists.
For more information visit www.saasta.ac.za

Bringing science to life

Contents: Quest, Volume 9 • Number 2 • 2013

A cosmic perspective: Multi-wavelength astrophysics
Tom Jarrett and Michelle Cluver discuss the importance of the electromagnetic spectrum

Shedding light on dark matter
Claude Carignan explains how the SKA will help to unravel one of the most important mysteries of the Universe

The accelerating Universe
Roy Maartens shows how the SKA will be a 'giant machine' for probing deep into time

The Square Kilometre Array: A path to unveil the unknown
Sergio Colafrancesco explains how the SKA will help us provide answers to some of cosmology's fundamental questions

SANSA at the forefront
The South African National Space Agency shows how important space science is to South African research

Australopithecus sediba: A mosaic of ancient and modern
Quest discusses the latest analyses of this most important of fossil finds

'Seeing' radio waves
André Young and David Davidson discuss the electrical engineering behind interpreting radio telescope signals

Fact File: Early astronomers

Science news: Centre of Excellence for Palaeosciences launched in South Africa

Young science communicators
Morgan Trimble and Leon van Eck, winners of the Young Science Communicator's competition, showcase their entries

SAASTA news: Get famous ... 'sell' your science at FameLab

Working towards SunSmart schools in South Africa
Caradee Wright and Patricia Albers discuss how schools can become SunSmart

Back page science • Mathematics puzzle

Cover images: NASA, SKA, MPIfR Bonn and Hubble Heritage Team

SCIENCE FOR SOUTH AFRICA

Editor: Dr Bridget Farham

Editorial Board: Roseanne Diab (EO: ASSAf) (Chair); John Butler-Adam (South African Journal of Science); Anusuya Chinsamy-Turan (University of Cape Town); Neil Eddy (Wynberg Boys High School); George Ellis (University of Cape Town); Kevin Govender (SAAO); Himla Soodyall (University of Witwatersrand); Penny Vinjevold (Western Cape Education Department)

Correspondence and enquiries: The Editor, PO Box 663, Noordhoek 7979. Tel.: (021) 789 2331. Fax: 0866 718022. e-mail: email@example.com (For more information visit www.questinteractive.co.za)

Advertising enquiries: Barbara Spence, Avenue Advertising, PO Box 71308, Bryanston 2021. Tel.: (011) 463 7940. Fax: (011) 463 7939. Cell: 082 881 3454. e-mail: firstname.lastname@example.org

Subscription enquiries and back issues: Phathu Nemushungwa. Tel.: (012) 349 6624. e-mail: email@example.com

Into the unknown

In this issue of Quest we continue to explore what the Square Kilometre Array (SKA) means to South Africa and to science more generally. Over the past few decades we have come to understand much about the Universe – of which we are only a very, very small part. The Big Bang theory is well established in cosmology – the Universe expanded from an extremely dense, hot state approximately 13.8 billion years ago. We know that the Universe is still expanding – and accelerating as it does so. After the initial expansion, the Universe cooled enough for energy to be converted into the various subatomic particles that we now know exist, such as protons, neutrons and electrons. In March this year another important sub-atomic particle, the elusive Higgs Boson, was finally tentatively confirmed to exist.
We also know that most of the Universe is composed of a form of matter that we can only infer from its gravitational effect on other matter – so-called dark matter. We have a good understanding of the formation and collapse of stars, of the composition of galaxies – but through all this research – and the leaps of imagination and brilliance that were required – we have thrown up more and more fundamental questions. This is the beauty of science – the more questions we answer, the more questions we land up asking. The SKA – here and in the other countries that will host arrays of radio telescopes – will allow us to spend the next few decades answering some of these fundamental questions. What is the nature of the dark side of the Universe? What is the origin of the most extreme events in the Universe? What is the nature of the highest energy particles in the Universe? What is the origin of the cosmic magnetic fields that confine these particles? Essentially the SKA will act as a time machine – allowing us to look far back into time, to the very point of the origin of the Universe and to start to answer some of these questions. In the process, more and more questions will arise. Will we continue to be able to answer them? I don't know – and perhaps it doesn't matter if we never can, for it is appreciation of the wonder around us that is important.

In the words of Seneca, writing in the first century, 'The time will come when diligent research over long periods will bring to light things that now lie hidden. A single life time, even though entirely devoted to research, would not be enough for the investigation of so vast a subject ... And so this knowledge will be unfolded through long successive ages. There will come a time when our descendants will be amazed that we did not know things that are so plain to them ... Many discoveries are reserved for ages still to come, when memory of us will have been effaced.
Our universe is a sorry little affair unless it has in it something for every age to investigate ... Nature does not reveal her mysteries once and for all'.

Bridget Farham
Editor – QUEST: Science for South Africa

Copyright © 2013 Academy of Science of South Africa. Published by the Academy of Science of South Africa (ASSAf), PO Box 72135, Lynnwood Ridge 0040, South Africa. Permissions: Fax: 0866 718022, e-mail: firstname.lastname@example.org

Subscription rates (4 issues and postage; for subscription form and other countries, see p. 50): South Africa: Individuals/Institutions – R100.00; Students/schoolgoers – R50.00

Design and layout: Creating Ripples Graphic Design. Illustrations: James Whitelaw. Printing: Paradigm

IMPORTANT NOTICE TO QUEST READERS: The distribution model of Quest has changed to ensure optimum reach and greater reader satisfaction. If you want to keep on receiving your copy of Quest, kindly fill in your particulars and post, fax or email to: Quest MAGAZINE, PO Box 72135, Lynnwood Ridge 0040, Pretoria, South Africa. Fax: 086 576 9519. Email: email@example.com

All material is strictly copyright and all rights are reserved. Reproduction without permission is forbidden. Every care is taken in compiling the contents of this publication, but we assume no responsibility for effects arising therefrom. The views expressed in this magazine are not necessarily those of the publisher.

A cosmic perspective: Multi-wavelength astrophysics

Our Universe is truly magnificent, becoming ever more intriguing as astronomers take aim at the depths of space with their arsenal of quantitative tools. Astronomy is the never-ending quest for light – all light – capturing and decoding energy that spans the entire electromagnetic spectrum, from radio waves to gamma rays – that's nearly 15 powers of ten in wavelength. Every object in the known Universe has energy and therefore emits some kind of light.
Whether it is the hottest, most explosive event or the coldest, most inanimate whisper of a particle, there is always light. And each particle of light carries information from the object that emits it to the receiver that catches it (that's us), telling us something new about that object. A particle of light is called a photon.

Windows into the Universe

So, the Universe comes to us. Travellers through time and space, light from distant objects falls upon Earth every instant of the day, freely providing clues about the mysteries of the Universe. Since the time of Galileo and the invention of the telescope, astronomers have used instruments to capture photons of light. There are now telescopes on every continent, every corner of the Earth staring upward … and telescopes in space, orbiting the Earth, orbiting other planets and even telescopes that have left the Solar System to venture into the great beyond. And these scientific instruments are specifically designed to study a part (or band) of the electromagnetic spectrum, acting as windows onto the Universe. And therein lies the magic – astronomers study and contemplate the Universe from afar. We have no hope of ever visiting the stars and galaxies we see, but we can gaze upon the light they send us and unlock their secrets (well, at least some of them). Astronomers use many different tools to study and explain the wonders of our Universe.

Tom Jarrett and Michelle Cluver explain how the electromagnetic spectrum and, through this, multi-wavelength astrophysics, is key to understanding our Universe.

Figure 1: NASA's panchromatic view of the starburst galaxy M82, comprised of X-ray (Chandra), visual (HST) and infrared (Spitzer) imaging, reveals the supernovae winds and ejecta that are blown out from the disc of the galaxy where massive stars are prodigiously forming.
Image: NASA

Figure 2: The electromagnetic ('light') spectrum, spanning wavelengths from the radio to the gamma ray, nearly 15 orders of magnitude (powers of ten). Image: Wikimedia Commons

The electromagnetic spectrum

Table 1: The astrophysics of the electromagnetic spectrum

Type of light      Wavelength           Astrophysical sources
Gamma rays         10^-2 to 10^-6 nm    Explosions and powerful bursts; hot plasmas and black holes
X-rays             0.01 to 10 nm        Accretion discs and hot gas; pulsars and neutron stars; black holes and active galactic nuclei (AGN)
Ultraviolet        10 to 400 nm         Young, hot stars; supernovae and quasars
Visible            0.4 to 0.8 µm        Intermediate stars, HI regions and the interstellar medium (ISM)
Near-infrared      1 to 5 µm            Starlight from evolved, giant stars; supernovae; low-mass stars and brown dwarfs
Mid-infrared       5 to 50 µm           Warm dust from the ISM; obscured supernovae; hot dust from AGN; planets
Far-infrared       50 to 500 µm         Cold gas and dust from the interstellar medium
Sub-mm/microwave   0.05 to 10 mm        Cosmic microwave background; cold, dense ISM
Radio              >1 cm                Supernovae; black holes and relativistic jets; neutral hydrogen (21-cm emission)

Definitions
Black hole: a region of spacetime from which gravity prevents anything from escaping – even light does not escape – hence the name 'black hole'.
Accretion disc: a structure that is in orbit around a central body, made up of gases and other matter, formed around black holes, nuclei of quasars and particular types of stars.
Pulsar: a magnetised, pulsating neutron star that emits a beam of electromagnetic radiation.
Neutron star: a remnant star that can be found after the collapse of a massive star during a supernova event.
Supernova: a massive stellar explosion – expanding shock waves from supernova explosions can trigger new star formation.
Quasar: a distant, active galactic nucleus that surrounds a massive black hole in the centre of the galaxy.
Brown dwarf: a low-mass object in space that cannot sustain hydrogen fusion.
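The wavelength bands in Table 1 can be put on a common footing by converting each wavelength to frequency (ν = c/λ) and photon energy (E = hν). The following Python sketch is illustrative only and not part of the original article; the representative wavelengths are picked from the table's approximate band boundaries:

```python
# Illustrative sketch: convert representative wavelengths from Table 1
# into frequency and photon energy, showing the ~15 powers of ten
# spanned by the electromagnetic spectrum.

PLANCK = 6.62607015e-34   # Planck constant, J s
C_LIGHT = 2.99792458e8    # speed of light, m/s
EV = 1.602176634e-19      # joules per electronvolt

def photon(wavelength_m):
    """Return (frequency in Hz, photon energy in eV) for a given wavelength in metres."""
    frequency = C_LIGHT / wavelength_m
    energy_ev = PLANCK * frequency / EV
    return frequency, energy_ev

# One representative wavelength per band (hypothetical sample points)
bands = [
    ("gamma ray", 1e-14),       # 10^-5 nm
    ("X-ray", 1e-9),            # 1 nm
    ("ultraviolet", 100e-9),    # 100 nm
    ("visible", 500e-9),        # 500 nm
    ("far-infrared", 100e-6),   # 100 micrometres
    ("radio (HI line)", 0.21),  # 21 cm
]

for name, wavelength in bands:
    freq, energy = photon(wavelength)
    print(f"{name:16s} {wavelength:9.3e} m  {freq:9.3e} Hz  {energy:9.3e} eV")
```

For visible light at 500 nm this gives a photon energy of about 2.5 eV, while the 21-cm hydrogen line comes out at roughly 1.4 GHz, the part of the radio window discussed later in the article.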
4 Quest 9(2) 2013 By combining information from different bands, we can piece together the origin of the light we observe and, through rigorous computer simulations and modelling, reconstruct the physics that has produced the emission (see Table 1 in the box). This is how we study everything from the life cycle of stars to the formation and evolution of galaxies, to the grandest scales of all, the structure of the Universe itself – the great Cosmic Web. Astronomers study the Universe using every tool, technique and wavelength band at their disposal. This is what we call multi-wavelength astrophysics. In this article we showcase some of the most amazing objects discovered to date, how and why they look different depending on which wavelength band they are viewed in, and how modern scientific tools are used to study objects that are, essentially, infinitely far away from us in both time and space. The hidden universe Our eyes are sensitive to the Sun’s light, the ‘visible’ region of the electromagnetic spectrum. This means that the other bands of energy are hidden, some only revealed in the last two or three decades by advances in detector technology and with the development of space telescopes. Since the atmosphere blocks out many harmful parts of the electromagnetic spectrum (e.g. X-ray, UV and infrared), space telescopes are essential for probing the ‘hidden Universe’. They can also provide superior resolution compared to Earthbased facilities. The best example has to be the Hubble Space Telescope (HST), which is above the atmosphereinduced ‘light wobble’ and so gives us spectacularly clear and deep ‘visible’ pictures of undiscovered worlds. In addition to the primary visual window, HST was also designed to be sensitive to part of the ultraviolet and near-infrared windows, helping us to unlock these hidden realms. 
But perhaps the most revolutionary space telescope, at least in terms of revealing a completely hidden universe, was the Infrared Astronomical Satellite (IRAS). Launched in the early 1980s, IRAS’s role was to study the ‘cool’ universe, the universe heated by ‘hot’ stars. Take the Orion Constellation for example – it is ablaze with both fire and ice. Have a look at Figure 1. In this constellation we find the Orion Nebula, a nearby stellar nursery full of baby stars of all sizes. Figure 3: The ‘Seven Sisters’, or Pleiades (M45), is a cluster of young, hot stars that formed some 100 million years ago. X-rays (and radio emission) reveal the hot ‘coronal’ atmospheres, while the visual light is dominated by the hottest ‘B’ stars, some of which is reflecting off the original birth cloud material, most easily seen at infrared wavelengths. Image: http://coolcosmos.ipac.caltech.edu/cosmic_ classroom/multiwavelength_astronomy/multiwavelength_museum/m45.html Figure 4: The Orion Constellation. (Left) As you see it: Orion’s Belt is at centre, Rigel in the lower right and Betelgeuse in the upper left; (Right) Orion glows brightly when viewed at far-infrared wavelengths: the emission arises from gas and dust heated by newly forming stars, deeply embedded within the stellar ‘nursery’ cloud within which they are born. Image: SIRTF/NASA Figure 5: The Ring Nebula (M57) is a spectacular example of the metamorphosis of a planetary nebula – a solar-like star at the end of its ‘giant’ phase of life, shedding its outer layers to live out the rest of its existence as a tiny ‘white dwarf’. Each panel represents a different energy regime: high to low, from left to right, in which the hot, ionised gas from the blown-out star is emitting from the visible to the infrared wavelengths like a mushroom cloud. 
Image: http://coolcosmos.ipac.caltech.edu/cosmic_classroom/multiwavelength_astronomy/multiwavelength_museum/m57.html best examples of a supernova remnant, Cassiopeia A, only recently discovered because it is buried within the birth cradle of the Milky Way – perhaps the finest illustration of a cosmic object that is completely hidden to conventional human investigation. Cassiopeia A is the remnant of a very powerful explosion, leaving behind one of the brightest radio-bright sources in the sky. But even that pales in comparison to our final example, for now we show not one star, but billions of stars all assembled into one magnificent galaxy, Centaurus A, Feast your eyes on Figure 6, the galaxy Centaurus A. At the heart of this galaxy is a very active and hungry The ones that are very massive and very hot are capable of heating the gas (fuel) and dust (star smog) that surrounds and envelops them. The whole sky is full of examples like this, a glowing sky – as viewed in the infrared window. All that energy (mostly ultraviolet) from the stars has been absorbed and re-radiated, most of it in the infrared window. Even though the gas and dust glows because of energy from stars, it is still very cold: 20–30 degrees above absolute zero (that is cold!). Orion is fire and ice. HST and IRAS were only the beginning. NASA and ESA have been launching space telescopes that smash through the atmospheric boundaries, including Fermi (γ-ray), Chandra (X-ray), GALEX (UV), WISE (mid-infrared), Spitzer (mid-infrared) and Herschel (mid-far-infrared). At longer wavelengths, ground telescopes such as the colossal machines ALMA (sub-mm) and the VLA and ATCA (radio) complete the full coverage of the electromagnetic spectrum. Let there be light! Incredible sights are now commonplace for astronomers. Figure 3 shows what a young star cluster, the familiar Pleiades, looks like seen through different lenses. How about a planetary nebula (PN)? 
Figure 5 shows the Ring Nebula, a particularly lovely type of PN. Although it may look like a planet, it is a dying star. Our own Sun is fated to become a PN someday (in billions of years time, so don’t worry just yet). What happens when a much more massive star comes to the end of its life? It explodes in spectacular fashion. And the resulting wreckage is nothing short of exquisite. Figure 7 is one of the Quest 9(2) 2013 5 Figure 6: Centaurus A (NGC5128) is the nearest and most mysterious radio galaxy. Its strange appearance at visual wavelengths is due to a spherical distribution of old stars (the bright halo), bisected by a dark dust lane that actually glows in the infrared. Inside lives a monster! A supermassive black hole drives a powerful jet of particles and energy (seen in the X-ray and radio-synchrotron) that spans over 1 million light years, well beyond the galaxy itself. Image: http://coolcosmos.ipac.caltech.edu/cosmic_classroom/multiwavelength_astronomy/multiwavelength_museum/cenA.html in the Pleiades Star Cluster (Figure 3), burn much faster and may only live tens of millions of years. With these four essentials, a galaxy can form over 100 billion stars, dancing and spiralling over eternity. Our own home galaxy, the Milky Way, is like an island in the great ocean of the cosmos. Figure 7: Cassiopeia A is an exceptionally bright radio-supernova remnant, deeply embedded within the Northern Milky Way, so much so that it is invisible at visual wavelengths. It is the result of a violent explosion, the death throes of a massive star. X-rays reveal the hot plasma from ejecta and the stellar debris; the mid-infrared glows from hot, shock-excited ionised gas, while the radio reveals electrons spiralling through power magnetic fields (synchrotron radiation) around the star. Image: http://coolcosmos.ipac.caltech.edu/cosmic_classroom/multiwavelength_astronomy/multiwavelength_museum/casA.html supermassive black hole, wreaking havoc throughout its ill-fated host. 
It is so powerful that is blasts a great jet of material (energy and particles) a million light years out into space, creating its own intergalactic space weather. How did Centaurus A come to be such a monster? Using multiwavelength observations and computer simulations, astronomers have deduced that Centaurus A is the result of a violent gravitational collision and then merger between two different galaxies. A rather unfortunate end for two galaxies, but a delicious medley of mystery and intrigue for science. Galaxies in general represent a great challenge owing to their size and complexity, demanding the full attention and resources of the astronomer, as we shall see next. See yonder, lo, the Galaxy… (quote from Geoffrey Chaucer. The House of Fame. 1380 – the first English use of the word galaxy.) Galaxies used to be called Island Universes. They are the largest, most fundamental building blocks of the known Universe. The Milky Way is only 6 Quest 9(2) 2013 one of untold numbers of galaxies that inhabit the known Universe. Regardless of their immense size, galaxies are created with a few simple ingredients, the most important being their fuel. It is the elemental hydrogen atom, with a little helium added in, that is the basis of all heavy element nucleosynthesis, or ‘starstuff’ that is forged in the centres of stars and – for the most massive stars – subsequent titanic explosions. Then there are smaller amounts of the heavier elements, which astronomers call ‘metals’. The catalyst is the force of gravity. Although gravity is relatively weak in comparison to the other forces of nature, it does have the unique (and vital) property of working over vast distances – in fact, gravity acts over all distances, from close to infinity. It is gravity that concentrates the hydrogen and metals into dense clouds from which stars may be born. But this will only happen once we add time – lots and lots of time. Relative to the human life span, a star is millions of times older. 
Stars such as the Sun can burn bright for many billions of years, while more massive stars, e.g. the hot stars you see Galaxy life cycles It seems so simple, and yet galaxies are incredibly complex, with a ‘life cycle’ that is as dramatic as that of a human. Imagine tracking the life cycle of every star in a galaxy – it would be nearly impossible (certainly for the humble and relatively short-lived scientist). Instead, astronomers study galaxies by looking at the combined behaviour of similar populations that include older stars, intermediate-age stars and young (newly forming) stars. It is the study of galaxies, also known as extragalactic astrophysics, that depends so heavily on the information that is encoded within the electromagnetic spectrum. Any given window of the spectrum will provide only hints, and in some cases false leads, as to the underlying physical mechanisms that shape and evolve these great cosmic entities. We need the entire spectrum to understand how galaxies grow and age. This is most dramatically illustrated with a special kind of galaxy, the starburst galaxy, which literally appears to be exploding with young, massive star formation. Let us consider the curious case of M82, the nearest starburst galaxy to the Sun (a measly 12 million light years distant). It is part of a group of galaxies, called the M81 Group (since M81 is the most massive and dominant galaxy in the group, it gets the headline) and therein lies one of the clues to its strange state. As part of a group, it is subject to strong tidal forces from the other galaxies, notably M81. This force may compress the fuel, and coupled Figure 8: The starburst galaxy M82, revealed in all its glory. This galaxy is relatively small in size, but is bursting with newly formed, massive stars. It looks completely different when comparing across the electromagnetic spectrum (inset images). 
The central graph shows the Spectral Energy Distribution (SED), or the flux versus wavelength, which depicts how much energy is coming from each wavelength band. The total luminosity is dominated by the far-infrared emission: cold dust in the interstellar medium (ISM) absorbs light emitted at ultraviolet and visual wavelengths, warming the dust until it glows at infrared wavelengths. This is an example of how light (or energy) gets redistributed from one band to another. Image: NASA, SDSS, NRAO space telescopes, astronomers measure the light (and thus energy) in the bands that cannot be seen from the ground (due to Earth’s atmosphere), including the X-ray, ultraviolet, infrared and submm. Ground observations of the visual and radio spectrum of M82 complete the SED diagram. The SED shows that most of the light coming from young/ hot and intermediate age stars (UV to visual) is missing (!). But, actually, it has transformed to other bands, maintaining the energy balance that must be conserved. Photons from these bands have been scattered and absorbed, later re-emitted by molecules and small dust grains – in the mid-infrared – and most importantly, large dust grains – far-infrared and the sub-mm – where most of the luminosity escapes from M82. At the longest wavelengths, the radio window, light is created through a phenomenon called synchrotron radiation: electrons and other charged particles (also known as cosmic rays), created from the supernovae destruction of all those massive stars that M82 is boldly forming, are being hurled through the interstellar medium of M82, spiralling and accelerating along magnetic field lines, emitting over all wavelengths, but most prominently in the radio window. Synchrotron radiation is thus used to measure the ‘pulse’ of the underlying massive star formation. 
It is through the SED that the hidden secrets of M82 are revealed: a sudden burst of star formation that will last some tens of millions of years, but really only a brief period in its full life cycle. And yet this is not the end of the story for M82. It will undergo further ignition events triggered by future close encounters with its big brother M81. Finally, let’s look at another method by which astronomers exploit their multi-wavelength data sets: colour combination, or panchromatic blending. And again M82 provides the most spectacular example. Figure 1 (on p. 3) is a combination of imaging from the three Great Space Observatories of NASA: Chandra (X-ray), Hubble (visual) and Spitzer (infrared). This multi-dimensional portrait reveals something completely different from that of the SED energy-balance diagram – M82 has a powerful ‘superwind’ that is emanating from the nuclear region, blasting outward from where the stars are forming. The energy from supernovae (exploding stars) has concentrated in the core of M82, blowing a hole in the disc and forming an escape route along the path of least resistance (i.e. perpendicular to the disc). The superwind is most conspicuous in X-rays (blue colour in the image), arising with its relative wealth in hydrogen gas (astronomers refer to this as ‘gas-rich’) and current evolutionary phase (that is, its state of the life cycle), this little galaxy has ignited from within. M82 is forming stars at a prodigious rate, far faster than the Milky Way, for example, which is a galaxy much larger in size but not nearly as excited. How do we know this? After all, in the visual window, M82 is faint, small and unremarkable. This is the way it remained to our understanding until the infrared window was opened, thanks to the arrival of IRAS (and later, the next generation of cameras aboard the Spitzer and Herschel space telescopes), which first revealed the hidden action-packed party that M82 is throwing with wild abandon. 
Today astronomers study galaxies such as M82 using a technique called the Spectral Energy Distribution, or SED (see Figure 8). The SED is essentially an energy balance diagram, showing where and how much energy is being released (and later captured by astronomers) across the electromagnetic spectrum. It is a close cousin to the method of spectroscopy, by which the light of astronomical objects is dispersed along the wavelength or frequency axis (e.g. atmospheric rainbows are nature's 'spectrum' of the Solar window). Quest 9(2) 2013. Figure 9: The M81 Group of galaxies comprises three major galaxies (M81, M82 and NGC3077) and several satellite galaxies. What is not revealed at visual wavelengths, however, is the intergalactic (or intergroup) medium of hydrogen gas. Only at radio wavelengths, specifically the 21-cm emission, can we see how the primordial (fuel) reservoir of gas binds the group together. The X-ray emission arises from hot plasma (gas) and synchrotron radiation, but the wind bubble is also evident in the infrared (orange-red), which traces warm dust and ionised gas that has been blown upward like a great tornado. M82 is truly amazing, but it is only one of billions upon billions of starburst galaxies that live and die across the cosmos. Multi-wavelength astrophysics in the SKA era As much as we learn from the cosmos, we have only scratched the surface. To make a really significant leap forward in our understanding of the early universe and the energy and physics that shaped our corner of the cosmos, we need an innovative and revolutionary machine – just as we did with HST and IRAS in the last century. At radio wavelengths astronomers have access to diverse (and unique) science, key to understanding the deepest mysteries of the Universe.
As a result, scientists and engineers are joining forces and embarking on an ambitious project: the construction of the most powerful radio telescope ever – an instrument so big that it requires entire continents for operation – the Square Kilometre Array (SKA). A multi-national effort, this vast collection of radio-light detectors, spread throughout Africa and Australia, will begin science experiments in the next decade. A crucial part of the design, development and learning curve is building smaller prototypes, serving as 'pathfinder' systems, which will begin science observations in the next few years. The pathfinders comprise the Australian SKA Pathfinder (ASKAP), Apertif in the Netherlands, and here in South Africa, the MeerKAT array. Additionally, South Africa has completed a brand-new pre-pathfinder, KAT-7, which is leading the development of new technology and already producing exciting science results (see article by C Carignan). Although much smaller versions of the full SKA, the pathfinders will still be enormously powerful radio telescopes comprising cutting-edge technology and capable of groundbreaking science. And yet, like all radio telescopes, the full potential of the SKA and its pathfinders is only realised when used in conjunction with other instruments – that is to say, multi-wavelength astrophysics. As we peer through the radio window, we see a hidden realm that is exceptionally important to understanding how the Universe grows to complexity: galaxies, stars, planets, life, and us. This is the cosmic fuel: hydrogen. Hydrogen is by far the most common element in the universe, and yet, paradoxically, it is very difficult to capture, measure or even observe in space – except at radio wavelengths, where the 21-cm emission of atomic hydrogen can be seen with sensitive radio telescopes.
The 21-cm emission arises from a quantum mechanical effect related to relative ‘spin flip’ between the proton and the electron particles that comprise the atom. A marvellous example of the hidden realm of the H-atom is shown in Figure 8. On the left is a beautiful image of the M81 ‘group’ of galaxies as seen in the visual window. M81 is the big galaxy at the centre of this gravitationally bound grouping of galaxies. And our good friend, the starburst M82 (previously introduced in Figures 8 and 1), is near the top of the image. Now look at the panel to the right – this shows the exact same field but now you are viewing the distribution of hydrogen gas, unmistakably circling the galaxies (i.e. feeding them) and stretched between the group members. The hydrogen is getting sloshed around by the motion of the galaxies – a cosmic maelstrom stirred up by the gravitational tidal interactions. At the very least, both optical and radio windows are needed to figure out what is actually going on. And indeed, astronomers must employ the full multi-wavelength arsenal of tools to decode the life cycle of each galaxy in this dynamic and complex system; as with M82 (see Figure 9), clues are to be found throughout the electromagnetic spectrum. Modern astrophysics, 21st century style, is a synergy of multi-wavelength instrumentation and observations, computer modelling and simulations, and the application of the scientific method by men and women throughout the world. The universe is far less hidden and dark with these new tools and techniques, revealing wonders that would have been inconceivable a generation ago. But the quest is far from over; multi-wavelength astrophysics raises as many questions as it answers. The Universe still holds many tricks up its sleeve and we all have so much to gain from harnessing the physics that steers the cosmos, and uncovering our origins. ❑ Prof. 
Thomas Jarrett is the South African Research Chair in Astrophysics and Space Science at the University of Cape Town (UCT). His research has primarily focused on extragalactic science, including the evolution of galaxies, bulk flows and large-scale structure in the local universe. Before coming to UCT in 2012, he was based at the California Institute of Technology, where he worked extensively with spaceborne telescope missions, notably IRAS, ISO, Spitzer and WISE, forming expertise with large and complex data sets and archives. Dr Michelle Cluver is an ARC Super Science Fellow at the Australian Astronomical Observatory in Sydney, Australia. She is an active member of the GAMA (Galaxy and Mass Assembly) collaboration, researching the WISE mid-infrared properties of galaxies, particularly those in groups. Her first postdoctoral position was at the California Institute of Technology, using Spitzer Space Telescope data to study the evolution of galaxies in compact groups. She completed a PhD at the University of Cape Town in 2008, on the physics and fuelling of star formation in an unusual gas-rich disk galaxy. Michelle will rejoin the UCT Astronomy Department as a research fellow towards the end of the year. Image: Wikimedia Commons. Nicolaus Copernicus Nicolaus Copernicus was born in Poland on 19 February 1473 and died on 24 May 1543. He was a mathematician and astronomer and was the first person to place the Sun at the centre of the Universe – which was called the heliocentric model of the Universe. His book On the Revolutions of the Celestial Spheres was published just before his death in 1543. This was a major event in the history of science and began what is called the Copernican revolution – removing the Earth from the centre of the Universe. This contributed greatly to the scientific revolution – the emergence of the modern scientific disciplines of mathematics, physics, astronomy, biology, medicine and chemistry.
One particularly important part of his heliocentric model was a realisation that observations of the Sun's movements arise from the Earth's movement and that the Earth rotates once a day on its fixed poles. However, not everyone agreed with his heliocentric model. Galileo Galilei Galileo Galilei was born in Italy on 15 February 1564 and died on 8 January 1642. He was a physicist, mathematician, astronomer and philosopher. He has been called the 'father of modern science', and the famous modern physicist Stephen Hawking has said that Galileo is probably the single most important person in the development of modern science. Among his contributions to astronomy were improvements to the telescope, allowing better observation. It was Galileo who confirmed the phases of Venus, who discovered the four largest satellites of Jupiter – called the Galilean moons – and who pioneered the observation and analysis of sunspots. In 1604 he described Kepler's supernova and deduced that it was a distant star because it showed no daily movement. This went against the belief that the heavens were unchanging and was one of the reasons for his disagreements with the church and public opinion of the time. Galileo fully supported Copernicus's heliocentric model, which led to accusations of heresy from the Roman Inquisition in 1615. He was forced to recant and remained under house arrest for the rest of his life. A replica of the earliest surviving telescope attributed to Galileo, on display at the Griffith Observatory. Image: Wikimedia Commons. Harlow Shapley and Heber Curtis and the 'Great Debate' Harlow Shapley was an American astronomer who lived between 1885 and 1972. In 1918 he estimated the size of the Milky Way Galaxy and the position of the Sun in the galaxy. In 1953 he proposed the 'liquid water belt', now known as the concept of a habitable zone or circumstellar habitable zone.
This is the region around stars within which objects with enough mass and the correct atmospheric pressure could support liquid water at the surface. The number of planets with Earth-like composition orbiting within circumstellar habitable zones has been estimated at anywhere from 500 million to more than 150 billion. Shapley was involved in the 'Great Debate' with Heber Curtis on the nature of nebulae and galaxies and the size of the Universe on 26 April 1920. Heber Curtis (1872–1942) was another American astronomer. The issue that was debated was whether distant nebulae were relatively small and lay within our own galaxy, or whether they were large, independent galaxies. Shapley thought that what we now call galaxies (then called spiral nebulae) are inside our Milky Way, while Curtis thought that these galaxies were far outside our own Milky Way and comparable in size and nature to our Milky Way. This was the start of extragalactic astronomy. In the end it was Curtis who was correct, as confirmed by Edwin Hubble's discoveries in the Andromeda galaxy. Image: Wikimedia Commons. Edwin Hubble Edwin Hubble (1889–1953), an American, played a vital role in establishing the field of extragalactic astronomy. He is regarded as one of the most important cosmologists of the 20th century. He formulated Hubble's law – showing that the velocity with which a galaxy moves away from the Earth increases with its distance from the Earth – effectively showing that the Universe is expanding. He is also responsible for showing that there are multiple galaxies outside the Milky Way – that the Universe goes far beyond our galaxy. The 100 inch (2.5 m) Hooker telescope at Mount Wilson Observatory near Los Angeles, California. This is the telescope that Edwin Hubble used to measure galaxy redshifts and discover the general expansion of the Universe. Image: Wikimedia Commons. Figure 3: Our neighbour, the Andromeda Galaxy.
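Hubble's law described above says recession velocity grows linearly with distance, v = H0 × d. A minimal sketch, assuming an illustrative round value of H0 = 70 km/s per megaparsec (the article quotes no number):

```python
# Hubble's law: a galaxy's recession velocity grows linearly with distance,
# v = H0 * d. H0 = 70 km/s/Mpc is an assumed round figure for illustration.

H0 = 70.0  # Hubble constant, km/s per Mpc (illustrative value)

def recession_velocity_km_s(distance_mpc):
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7 000 km/s;
# one twice as far recedes twice as fast -- the signature of expansion.
```

The linear relation is exactly what Hubble measured with the Hooker telescope: plotting velocity against distance gives a straight line through the origin.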
Image: ESA/NASA/JPL-Caltech/NHSC. Shedding light on dark matter Claude Carignan shows how South Africa will be at the cutting edge of astrophysics with KAT-7, MeerKAT and the SKA. Figure 1: The electromagnetic spectrum showing the visible spectrum – the part that we can see. Figure 2: The MeerKAT precursor array, KAT-7. Image: Rupert Spann/SKA. In the 16th century, Copernicus realised that the Earth is not the centre of the solar system, but rather is in orbit around the Sun, in the same way as the other five planets that were known at the time. Copernicus' realisation of this important fact removed our planet, Earth, from its privileged central position in the Universe. Then, in the early part of the 20th century, Shapley showed that the Sun is not at the centre of the Milky Way but on the edge. Now it was the Sun that had lost its privileged position at the centre of the galaxy, called the Milky Way, that is home to it and the Earth. Two years later, Curtis argued against Shapley that the Milky Way is not unique, but is only one spiral galaxy among countless other galaxies. Almost at the same time, in 1923, Hubble revealed that all galaxies are moving away from each other – the Universe is expanding. All these revolutions in the way that we thought about the Universe were vitally important. However, we had not seen anything yet! In the early 1970s, an Australian astronomer, Ken Freeman, in the appendix of the most famous paper yet on the structure of discs in spiral galaxies, wrote this apparently minor sentence: 'If the HI rotation curve is correct, then there must be undetected matter (dark matter) beyond the optical extent of NGC300: its mass must be at least of the same order as the mass of the detected galaxy (luminous matter)'. Figure 6: The SKA precursor array, MeerKAT.
This sentence was the start of one of the greatest revolutions in the way in which the scientists involved in astrophysics think. By the end of the 20th century, not only had we lost our position at the centre of the Universe, but we had also discovered that the matter that we can see represents, according to the most recent measurements, only 4% of all the matter and energy present in the Universe. We live in a Universe of dark matter. KAT-7, an array in the Northern Cape Karoo, is also a precursor to MeerKAT. The importance of hydrogen If we want to observe radiation from the stars, which emit mainly in the visible part of the spectrum, we use optical telescopes on the ground. But the gaseous part of the Universe is very important and most of that is made up of neutral hydrogen (HI). If we want to study this HI we need to use radio telescopes. Hydrogen does not emit visible light, but its radiation is detected in the 21-cm radio waveband. So, for an astronomer, hydrogen is considered luminous matter – similar to the stars – because we can detect its radiation. 'Dark matter' may seem to be a strange concept – if we can't see it, does it exist? However, there is no good reason why all types of matter in the Universe should emit detectable photons (remember that a photon is a particle of light). For example, for a star to be visible, its mass must be large enough to support thermonuclear hydrogen fusion in its centre. This is how the luminous energy of stars is produced – the energy that we can see. It may well be that during all the star formation processes that have taken place over cosmic history, a large number of very small stars have been formed. These very small stars would have masses that were too small to produce light. These small objects could make up part of 'dark matter'. What is dark matter? Essentially, dark matter is matter that emits no radiation. When we talk about luminous matter, we naturally think of radiation that can be perceived by our eye – what we can see.
However, if our eye is mainly sensitive to yellow light, it is only because this is the part of the visible spectrum in which the Sun emits most of its light. This is because of the temperature at the Sun's surface. If the Earth were in orbit around a cooler star that emitted most of its light in the red part of the spectrum, our eye would have evolved to be sensitive to that part of the spectrum. We would probably be able to detect infrared light, but could be blind to blue light. Nowadays, the tools of astronomers allow them not only to detect the light visible to our eyes, but also ultraviolet and infrared radiation, as well as the photons that are emitted in the other parts of the electromagnetic spectrum (Figure 1), such as gamma-rays, X-rays, microwaves and radio waves. Luminous matter is any kind of matter that emits radiation that we can detect, regardless of the waveband. Dark matter, on the other hand, is any kind of matter that does not emit any detectable photons. We know it exists because it has a gravitational influence on luminous matter. In other words, our observations of luminous matter show that dark matter exists because dark matter can exert a gravitational force on luminous matter. We need to use many types of instruments to be able to observe luminous matter in all parts of the electromagnetic spectrum. Most of us know about optical telescopes, which allow us to detect radiation in the visible part of the spectrum (blue to red) and in some infrared windows. But at shorter wavelengths (gamma-rays, X-rays, UV) and over a good part of the longer wavelengths (infrared), the atmosphere is opaque and does not allow the radiation to reach the ground. So we have to put telescopes in orbit above the atmosphere to be able to detect this radiation. Dark matter can be studied using radio waves – which is where KAT-7 comes in (Figure 2).
This is an array of seven radio telescopes in the Northern Cape Karoo. Figure 4: Theoretical rotation curves of a spiral galaxy. Dark matter is any form of ordinary matter (neutrons, protons, electrons), called baryonic matter, or the more exotic non-baryonic particles called WIMPs (weakly interacting massive particles), that does not emit any detectable photons at any wavelength (UV, optical, infrared, X-ray, gamma-ray, etc.) but that we know is present by its gravitational effect on the luminous matter that we can see. Figure 5: The distribution of the SKA's antennae through Africa. There is also no reason for the mass of a star to be correlated with its light. For example, very massive stars burn their supply of hydrogen fuel fast. They are very bright but at a price – they only last a relatively short time (millions of years). On the other hand, small stars will burn their hydrogen slowly, will not be very bright, but will last much longer (billions of years). The result? In the solar neighbourhood, stars that are more massive than the Sun produce 95% of the light, but it is the stars that are less massive than the Sun that provide 95% of the mass. This means that we need something other than light to estimate the mass of galaxies, and galaxies are the building blocks of the Universe. So, if you manage to measure the rotation of a star (or of a gas cloud) at a certain distance from the centre, you then have a measure of the centrifugal force that maintains the star on its orbit and so a measure of the gravity exerted on that star. All you need to do now is to apply Kepler's laws of planetary motion and Newton's law of universal gravitation to calculate all the mass inside the orbit of the object. These same laws can be used to measure the mass of the Sun using the orbital velocities of the planets in orbit around the Sun.
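The mass estimate described above follows from balancing gravity against the centrifugal force: for a roughly circular orbit, the enclosed mass is M = v²r/G. A small sketch, using standard reference values for Earth's orbit purely as a check:

```python
# Newton's law of universal gravitation applied to a circular orbit:
# the mass enclosed within the orbit is M = v^2 * r / G. The Earth-orbit
# numbers below are standard reference values, used here only as a check.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass_kg(v_m_s, r_m):
    """Mass (kg) inside an orbit of radius r (m) with circular speed v (m/s)."""
    return v_m_s ** 2 * r_m / G

# Earth orbits at ~29.78 km/s at a radius of ~1.496e11 m ...
sun_mass = enclosed_mass_kg(2.978e4, 1.496e11)
# ... which recovers the Sun's mass of roughly 2e30 kg.
```

The same two-line calculation, applied to a star or gas cloud orbiting a galaxy, gives the total mass (luminous and dark) inside that orbit.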
So, even if dark matter poses a serious new challenge to astrophysics, the physical laws used to study it were postulated more than 300 years ago by these two scientists, who formulated their laws using their observations of the motions of the planets. Kepler and Newton to the rescue Before measuring the mass of a galaxy, we have to understand the type of object that we are dealing with. If we look, for example, at our neighbouring galaxy Andromeda (Figure 3), we can see that it is made of a disc of stars. If these stars were motionless, the law of universal gravitation says that they should all fall toward the centre. However, observations have shown that nearby galaxies are stable systems in equilibrium, which are neither in contraction nor in expansion. This means that another force counterbalances the gravity toward the centre. This force is the centrifugal force coming from the fact that the stars (and the gas) are in rotation around the centre of the galaxy. Neutral hydrogen (HI) as a probe The part of the galaxies that can be seen by optical telescopes is the part where neutral hydrogen, the most abundant element in the Universe, was dense enough to collapse and form stars. By using the stars to probe gravity, we can measure the mass of galaxies in the inner regions. However, the disc of neutral hydrogen can be two, three, or four times larger than the stellar disc. Even if in these outer regions this disc is not dense enough to form stars, it takes part in the general rotation of the galaxies. Because of this, the disc can be used to probe the gravitational potential in the outer regions and measure the total mass (luminous and dark) all the way out to the edge of the disc. Neutral hydrogen does not emit visible light, but it emits radio waves at a wavelength of 21 cm. This is why we had to wait until the 1970s, when radio telescopes became sensitive enough to detect it in the outer parts of galaxies.
Figure 4 shows the expected rotation curve for a galaxy if only the luminous matter (stars and gas) were present (A) and what we actually observe (B). Using KAT-7, MeerKAT and the SKA to study dark matter In 2025, the SKA will spread its 3 000 15-m dishes across nine African countries (Figure 5). While most of the antennae will be concentrated in the Northern Cape Karoo desert for sensitivity purposes, other antennae will have much longer baselines to allow us to see finer details in the radio images. The eight partner countries are Botswana, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia. A year ago, South Africa started the construction of the precursor array, MeerKAT (Figure 6), which should be ready for science observations in 2016. With its 64 dishes, MeerKAT will be, until the arrival of the SKA, the most sensitive radio telescope array in the world in the 1.4 GHz frequency range of neutral hydrogen. The dark ages The SKA will penetrate what astrophysicists call the 'dark ages'. In a Universe which is 13.8 billion years old, the dark ages is the period between the first light we receive from the cosmic microwave background (CMB) – emitted around 380 000 years after the Big Bang, when the Universe became cool enough to let the photons travel all the way to us – and the youngest galaxies that can be observed by the Hubble Space Telescope, some 10–12 billion light years away. Figure 7: This image shows that the galaxy NGC3109 is rotating, with the blue toward the viewer and the red away. The rotation is used by astronomers to model the distribution of matter (luminous and dark) in the galaxy. According to the current cosmological picture, the cold dark matter model, the subtle fluctuations in temperature seen in the CMB reflect ripples that arose as early in the existence of the Universe as the first nonillionth of a second (10⁻³⁰ s).
It is these ripples that gave rise to the present vast cosmic web of stars, galaxies and galaxy clusters that formed in the heart of dark matter halos during the dark ages. But even before we can watch the first stars and galaxies forming with the aid of the SKA, we are able to study the properties of this elusive dark matter in nearby galaxies. We do this using the pathfinder array KAT-7 (Figure 2), which is made of seven 12-m antennae, and has been in operation since December 2010. We will be able to take the second step, with MeerKAT, in three years' time. In March 2012, South Africa's KAT-7 telescope reached another major milestone by observing the radio emission from the HI gas in a nearby galaxy. The astronomers pointed the telescope towards a galaxy called NGC3109 – a small spiral galaxy, about 4.3 million light years away from Earth, located in the constellation of Hydra (Figure 7). Since then more than 120 hours of observations have been accumulated on NGC 3109, and the first scientific publication on neutral hydrogen using KAT-7 was published in the well-known Astronomical Journal. Despite its relatively small size, KAT-7 was able to detect 40% more HI emission from NGC 3109 than the largest aperture synthesis telescope, the Very Large Array (VLA) in New Mexico. This is because the telescope, being more compact, can detect HI gas on large scales that are not visible to the VLA. This shows that size is not everything – every telescope has a niche, and KAT-7 is the perfect instrument to observe nearby galaxies, which have emission on large scales. Figure 8 shows the HI gas detected with KAT-7 for NGC 3109, the dwarf galaxy Antlia and three background objects. Those observations allow us to derive the rotational velocities of NGC 3109 to a radius twice as large as that achieved with the VLA.
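The reasoning behind such rotation-curve modelling can be sketched numerically: if the rotation speed stays constant with radius, the enclosed mass M = v²r/G keeps growing even where the starlight has faded, which signals unseen matter. The flat speed of 60 km/s below is a made-up illustrative figure, not a measured value for NGC 3109:

```python
# Why a flat rotation curve implies dark matter: with M(<r) = v^2 * r / G,
# a rotation speed that stays constant with radius means the enclosed mass
# keeps growing linearly. The 60 km/s flat speed is purely illustrative.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19  # one kiloparsec in metres

def enclosed_mass_kg(v_m_s, r_m):
    """Mass (kg) enclosed within radius r (m) for circular speed v (m/s)."""
    return v_m_s ** 2 * r_m / G

v_flat = 6.0e4  # 60 km/s, assumed constant in the outer disc
m_5kpc = enclosed_mass_kg(v_flat, 5 * KPC)
m_10kpc = enclosed_mass_kg(v_flat, 10 * KPC)
# Doubling the radius doubles the enclosed mass: matter is still being
# added where little or no starlight is seen -- the dark matter signature.
```

Reaching twice the VLA's radius therefore matters directly: each extra kiloparsec of measured rotation extends the mass inventory further into the dark halo.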
It should be kept in mind that KAT-7 was built mainly as a test bed for MeerKAT and the SKA, so any scientific results that we obtain from this telescope are a bonus. This is just the beginning. In less than three years' time, 30 galaxies will be observed for 200 hours each using the 64 dishes of the MeerKAT telescope. These 6 000 hours of observations will constitute the MHONGOOSE survey. It should allow us to reach unprecedented sensitivity and to explore even further the outer parts of galaxies and their dark matter content. The study of dark matter is a long-term endeavour. We do not know yet what dark matter is made of. But even before being able to address this question, we need to know how much dark matter there is and how it is distributed. Figure 8: HI gas after 120 hours of observations. The white contours at the top are for NGC3109 and those at the bottom are for the Antlia dwarf. The blue contours are for background galaxies. While 50 years ago we knew nothing of its existence, we are now starting to understand many of its properties. We know that dwarf galaxies are more dominated by dark matter at all radii than are the larger spiral galaxies, where dark matter dominates over luminous matter mainly in the outer parts. We know that the distribution of dark matter is more spherical than disc-like. By increasing our knowledge of the properties of dark matter, the new radio telescopes being built in South Africa should unveil the last secrets of dark matter. ❑ Professor Claude Carignan is a South African SKA (Square Kilometre Array) Research Chair in Multi-Wavelength Astronomy in the Department of Astronomy of the University of Cape Town.
He is also an Emeritus Professor at the Laboratoire d'Astrophysique Expérimentale (LAE) of the Département de Physique of the Université de Montréal, in Canada, and Associate Professor in the Laboratoire de Physique et Chimie de l'Environnement (LPCE) and in the Observatoire d'Astrophysique de l'Université de Ouagadougou, in Burkina Faso. He is an expert on galaxy dynamics and dark matter. He has also been involved in the development of astronomy in Burkina Faso and in setting up the African Astronomical Society (AfAS). Prof. Carignan specialises in the study of mass distribution in galaxies, using both radio synthesis and optical Fabry-Perot interferometric techniques. The accelerating Universe The Square Kilometre Array will be a 'giant machine' for probing deep into the Universe and far back in time. Professor Roy Maartens explains to Quest. The Square Kilometre Array (SKA) will be the largest group of radio telescopes ever put together. The total collecting area of this telescope array will be one square kilometre – hence the name. All these radio telescopes will be connected by high-speed computer links to form a 'giant machine' – a machine that can probe deep into the Universe and far back in time. In the previous two articles ('A cosmic perspective: Multi-wavelength astronomy' (p. 3) and 'Shedding light on dark matter' (p. 10)) you have learnt that the electromagnetic spectrum – and how we interpret and 'see' it – is key to the observations we can make in space. You have also learnt that we can only see part of that spectrum. Many objects in the Universe, such as black holes, are surrounded by dust – radio telescopes can see through this dust and map the object behind it. The spin-flip transition. Neutral hydrogen You have already seen that hydrogen is the most abundant element in the Universe, and that neutral hydrogen (HI) is particularly important to how we interpret what we see in the Universe.
We use radio telescopes to detect HI. This is the basic material for growing stars – stars form when massive clouds of hydrogen collapse under their own gravity. HI was present in the Universe before the first stars and galaxies formed. This period was called the Dark Ages in the life of the Universe – because there were no stars to shine. After the stars and galaxies formed there was still HI in the Universe, because not all of the HI can be turned into stars. How do we detect HI? It does not shine, but it generates radio waves through what is called the 'spin-flip' transition. Each HI atom has one proton in its nucleus and one electron orbiting the nucleus. Each proton and electron has a 'spin'. When the spins are in the same direction the atom is at a higher energy. If the spin is reversed, or flipped, the energy is lower. The difference in energy is just enough to generate an electromagnetic wave that has a wavelength of 21 cm – so HI emission is often called 21-cm radiation. This wavelength is in the radio part of the electromagnetic spectrum. SKA science – tackling the big questions about our Universe When did the Universe begin? How did it start? What happened during the Dark Ages (see 'Shedding light on dark matter')? How did galaxies, clusters and voids form? What is the size of the Universe? Why is it accelerating and how fast? Electromagnetic waves (visible light, radio and other wave types) all travel at the speed of light (300 000 km/s). Take the Sun as an example. It is about 150 million km away from us. The visible light waves that flood the Earth take about eight minutes to reach us. So if the Sun exploded, we would only see the explosion eight minutes after it happened. The nearest star to us after the Sun is about four light years away – which means that its light takes about four years to reach us. Our nearest galaxy is Andromeda.
The light from Andromeda takes about 2.5 million years to reach us, and Andromeda is part of our local group of galaxies. Left: An artist's impression of the MeerKAT radio telescope dishes that are being built for the SKA site near Carnarvon. Image: SKA. The Karoo Array Telescope (KAT-7) is a 7-dish radio array designed and built in South Africa, and located at the SKA South Africa site near Carnarvon. This is an image of the spiral galaxy NGC3109, which is 4.3 million light years away from Earth. The green contours show the distribution of the hydrogen gas (HI), overlaid on an image of the same galaxy taken by an optical telescope. You can see that the HI emission comes from a much larger region than that seen in the optical image. Image: SKA South Africa. Most galaxies are much further away – the furthest galaxy that we can possibly receive electromagnetic waves from is about 13 billion light years away – and there are about 100 billion galaxies in the part of the Universe that we can see. There are probably many, many more galaxies that are too far away for their light to reach us. General Relativity In 1916 Albert Einstein published the theory of general relativity – which is still the best description of gravity that we have in modern physics. In this theory, Einstein brought together his theory of special relativity and Newton's law of gravitation. In Einstein's special theory of relativity, he showed that there is no such thing as absolute time and space, because different observers in general measure different time intervals and distances. He showed that we can only make sense of the Universe around us if we combine space and time into four-dimensional spacetime. General relativity extends the special theory to include the force of gravity. This theory is the basis of our current model of the expanding Universe. Albert Einstein in 1921. Image: Wikimedia Commons
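The light travel times quoted in these articles follow directly from distance divided by the speed of light; a quick check with the rounded figures used in the text:

```python
# Light travel time is simply distance divided by the speed of light.
# A quick check using the rounded figures quoted in the text.

C_KM_S = 3.0e5  # speed of light, km/s, as quoted above

def travel_time_minutes(distance_km):
    """Time (minutes) for light to cross the given distance (km)."""
    return distance_km / C_KM_S / 60.0

sun_minutes = travel_time_minutes(150e6)  # Sun at ~150 million km
# ~8.3 minutes, matching the 'about eight minutes' in the text.
```

The same division, with distances measured in light years, is what turns every telescope into a time machine: light from a galaxy 13 billion light years away left it 13 billion years ago.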
The SKA and the accelerating Universe
The Universe is very big and it contains very many galaxies and stars that are very far apart. And that's just the start of it. Not only is the Universe huge – it is expanding. This means that every galaxy that we observe is moving away from us. We are not the centre of the Universe – there is nothing special about the Milky Way and there is no centre to the Universe. If there are alien astronomers on other galaxies, they will also see that all galaxies are moving away from them. Confused? Don't worry – Einstein himself refused to believe this after he developed his theory of general relativity in 1915. General relativity replaced Newton's theory of gravity – which was unable to explain the Universe. But Einstein thought that the Universe should be static, not expanding. Eventually the observers confirmed that it was expanding. And other physicists showed that the expanding Universe was perfectly in tune with general relativity. Einstein was finally convinced and accepted the expansion of the Universe.

Alpha Centauri – a star system
Alpha Centauri is a star system, made up of Alpha Centauri A and B and Proxima Centauri. This last star – which is also the least bright of the three – is the star that is closest to Earth. Proxima Centauri is a red dwarf that is 4.24 light years from Earth. It was discovered in 1915 by the Scottish astronomer Robert Innes, who was Director of the Union Observatory in Johannesburg. Alpha Centauri looks like a single object to the naked eye, but it is actually a binary star system. Its combined brightness makes it the third brightest star (other than the Sun) seen from the Earth, after Sirius and Canopus.

A brief history of the Universe.

Quest 9(2) 2013 15
Image: NASA/R Maartens

Think about an expanding Universe of galaxies. The galaxies pull on each other through the attractive force of gravity. So the expansion of the Universe should slow down with time. Right? Wrong – in fact the expansion is speeding up. The Universe is accelerating – and has been for about the last five billion years. So what is pushing the galaxies apart? The truth is – we don't really know. So we give it a fancy name: dark energy. The best theory we have is that dark energy comes from the vacuum (empty space) between the galaxies. It is as if the vacuum exerts a repulsive force on the galaxies that overcomes their attraction towards each other. This theory seems to agree with observations – but we don't understand how it works. But even this is not the end of the story – science is full of surprises. It is even possible that dark energy is just a mirage – and instead it is Einstein's theory of general relativity that is failing us, just like Newton's theory began to fail at the end of the nineteenth century. This alternative to dark energy is called modified gravity. The mystery of the accelerating Universe – dark energy or modified gravity – is one of the main questions that the SKA will focus on. ❑

Mapping the neutral hydrogen (HI) in the Universe with the SKA to probe dark energy. As z (the 'redshift') increases, we are looking further away and further back in time. Credit: SKA

Professor Roy Maartens is a cosmologist. His research focuses on answering fundamental questions about the universe, including: The 'dark energy' problem – why is the universe expanding faster and faster? The smoothness problem – is the universe smooth on very large scales and how can we test this? The Einstein problem – when does the Newtonian approximation break down in cosmology and how can we detect this?
He obtained his BSc and Honours degrees in physics and applied maths from the University of Cape Town and then won a Rhodes scholarship to Oxford University. He was awarded a PhD in cosmology by UCT. He currently holds the Square Kilometre Array (SKA) Research Chair in Astronomy and Astrophysics at the University of the Western Cape.

The Square Kilometre Array: A path to unveil the unknown
Our exploration of the cosmos has answered many questions – but in the process, even more fundamental questions have arisen. Sergio Colafrancesco looks at how the SKA will help to answer these questions.

There are some fundamental questions that still need to be answered before we can say that we fully understand the Universe: What is the nature of the Dark Side of the Universe? What is the origin of the most extreme events in the Universe? What is the nature of the highest energy particles in the Universe?
What is the origin of the cosmic magnetic fields that confine these particles? These are the questions that scientists are being asked to address by the major science and technology funding agencies around the world – they are serious questions for humankind. But before attempting to answer, we could (or should) ask: why are we asking these questions? Our exploration of the cosmos has provided many answers about the evolution of our Universe and its structures – stars, galaxies and very large clusters of galaxies. But this understanding has forced us to ask even more fundamental questions about the nature of matter, the nature of fields and the nature of the particles that are contained in the Universe.

Definitions
Anisotropy: In cosmology, this is the word used to describe the uneven temperature fluctuations of the cosmic microwave background radiation – the remnants of radiation that filled the Universe immediately after the Big Bang.
Cosmic microwave background radiation: This is the thermal radiation that fills the whole of the known Universe almost uniformly.
The Planck space experiment: Planck is a satellite that was put into space by the European Space Agency (ESA). It is described as the ESA's time machine because it is scanning for the cosmic microwave background in the sky, looking back to the dawn of time, close to the Big Bang, about 13.7 billion years ago. Planck has analysed – and is still analysing – the cosmic microwave background and its tiny fluctuations that are the seeds of all structures in our Universe.
The nature of matter
The detailed observations of the cosmic microwave background anisotropies from the Planck space experiment have shown that we live in a Universe whose matter and energy composition is made up of:
- 4.9% normal matter – the standard particles and atoms and the known radiation that make up the world as we know it
- 26.8% dark matter – an unknown form of matter known only for its gravitational effect
- the remaining 68.3% dark energy – an unknown form of possible energy that is theorised to produce the accelerating rate of expansion of the Universe.

The anisotropies of the cosmic microwave background (CMB) as observed by Planck. The CMB is a snapshot of the oldest light in our Universe, imprinted on the sky when the Universe was just 380 000 years old. It shows tiny temperature fluctuations that correspond to regions of slightly different density, representing the seeds of all future structure: the stars and galaxies of today. Image: ESA, Planck Collaboration

We also know that all particles in the Universe must have a mass, and recently a fundamental particle, probably the Higgs boson, was finally discovered at the Large Hadron Collider (LHC) at CERN near Geneva, Switzerland. This provides the answer to the nature of the visible part of our Universe – the 4.9% of total cosmic matter determined cosmologically by the Planck experiment. These are great successes in the history of science, but they leave us with other, more fundamental questions, some of which are outlined at the beginning of this article. Answering them requires larger and more sensitive experiments. The Square Kilometre Array (SKA) is one of the 'big science' projects that will be used over the next few decades to help us to answer some of the most important questions on the nature of the 'unknown' in our Universe. Let's see how.
SKA and the nature of the dark side of the Universe
We have known about dark matter (DM) for 80 years – since 1933, when Fritz Zwicky proposed that it was 'dark matter' that made up the 'missing mass' budget necessary to explain his observations of the Coma galaxy cluster. When he calculated the gravitational mass of the galaxies within this cluster, he obtained a value at least 400 times greater than expected from their luminosity, which meant that most of the matter must be dark. Modern calculations get a smaller factor of difference, but still infer that most of the matter must be dark.

Scientists are eagerly trying to detect DM particles in deep underground experiments by measuring the energy deposited by the DM particle when it hits normal atoms in a pure laboratory environment. However, no definite signal has been detected to date. Cosmologists, however, are able to look at the radio emission produced by the decay of hypothetical particles called neutralinos into elementary particles that decay further into electrons and positrons. This will provide detailed information on the nature of the fundamental DM particles that can be recorded by the SKA. The SKA will be able to record very weak radio signals, so this will probably be the only experimental set-up that could detect DM radio emission and so shed light on the elusive nature of DM – which is so fundamental to the existence of the Universe and its structures.

The relative amounts of different constituents of the Universe. Image: Wikimedia Commons

An image of the bullet cluster showing the DM (blue), hot gas (red) and galaxies (underlying) distribution. This image, courtesy of Marusa Bradac, shows dark matter (blue) separated from luminous matter (red). Image: Marusa Bradac

An example of simulated data modelled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Here, following a collision of two protons, a Higgs boson is produced which decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector, while the energy these particles deposit is shown in blue. Image: Wikimedia Commons/CERN

The Higgs boson
The Higgs boson or the Higgs particle is an elementary particle that was originally theorised in 1964. It was tentatively 'confirmed' on 14 March 2013. It is named after the physicist Peter Higgs, who was one of several physicists who proposed that the particle must exist. In particle physics, the existence of the Higgs boson is necessary to explain why some fundamental particles have mass when the symmetries that control the ways in which they interact should mean that they have no mass. In particle physics, symmetry is a physical or mathematical feature of the system that is preserved under some change. The existence of such a particle was so important to understanding the physics that underlies our world and the Universe that we are part of, that scientists have spent 40 years searching for the particle. The LHC was built specifically to search for it – and in the course of this search has also helped to answer many more questions about the nature of matter.

Neutralinos are hypothetical particles, predicted by supersymmetry. In particle physics, supersymmetry is a proposed symmetry of nature that relates two basic classes of elementary particles: bosons and fermions.

Synchrotron emission is the electromagnetic radiation that is emitted when charged particles are accelerated radially. In synchrotrons, it is produced using bending magnets and other sources of magnetic fields.

The SKA and the origin of magnetic fields in the Universe
To generate the radio emissions that we observe using radio telescopes such as the SKA and MeerKAT, cosmic structures need high-energy particles (like electrons and positrons travelling at almost the speed of light). But there must also be a 'field' that these particles can interact with to produce the radio synchrotron emission. This is the magnetic field that every particle in the Universe experiences – we also experience it daily. However, nobody knows how this magnetic field is produced in the environments of galaxies. This observation therefore brings us to another fundamental question: what is the origin of magnetic fields in the Universe?

We know that stars, planets, galaxies and even diffuse interstellar gas are magnetised. This cosmic magnetism cannot be attributed to permanent magnets like the ones which come in a science kit, but to the motion of huge clouds of plasma which are electrically charged, and which move within the cosmic structures we observe. The challenge in studying cosmic magnetism is that while stars and galaxies can be seen directly by the light they emit, magnetic fields are invisible to even the largest telescopes. Astronomers thus need to employ a variety of indirect methods to study magnetism. For example, we know that synchrotron emission is produced when fast-moving electrons are trapped in magnetic fields, like planets caught by the Sun's gravity. If we see a body in the Universe that is emitting synchrotron emission, we know that this object must be magnetic, and we can use its properties to determine how strong its magnetism is and what direction a compass might point if we were near it. One problem with this approach is that many magnetic objects in space are not energetic enough to produce detectable synchrotron emission. But we can study their magnetism using a remarkable effect known as 'Faraday rotation'.
In this effect polarised light from a background radio source is changed when it passes through objects in which significant magnetism is present. The change is subtle, involving the angle at which the vibrating light waves are inclined, but can be measured with radio telescopes, and can be used to calculate the strength of the magnetic field in the foreground object. Studying cosmic magnetism in this way is relatively easy. However, it is often difficult to apply this technique, because only rarely does a random galaxy or gas cloud happen to lie in line with a bright background object, so that we can detect the consequent Faraday rotation and thus measure its magnetic properties. But because the SKA will be so much more sensitive than current radio telescopes, we can use it to revolutionise the study of magnetic fields in space. If we point the SKA at any part of the sky, we will detect radio emissions from thousands of faint, distant galaxies, spread like grains of sand all over the sky. These galaxies will be spaced so closely together that we can use the Faraday rotation of their polarised radio emissions to make detailed studies of the magnetism from all sorts of foreground objects. Even if we want to study a relatively small cloud of gas, there will be hundreds of background galaxies whose light shines through it, allowing us to build up a detailed picture of the cloud’s magnetism. This new technique will allow us to address many important unanswered questions. What is the shape and strength of the magnetic field in our Milky Way, and how does this compare to the magnetism in other galaxies? Is the overall Universe magnetic? If so, has the Universe’s magnetism affected the way in which individual stars and galaxies form? And ultimately, where has all this magnetism come from? 
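The quantity astronomers actually fit in a Faraday-rotation measurement is the 'rotation measure' (RM): the polarisation angle rotates in proportion to the square of the observing wavelength. Below is a toy illustration of that fit; the RM and wavelength values are invented for the example, and real data would carry noise and an n x 180° ambiguity in the measured angles:

```python
import numpy as np

# Faraday rotation: the polarisation angle chi of a background source obeys
#   chi = chi0 + RM * lambda**2
# where RM (rad/m^2) encodes the magnetic field along the line of sight.
# The values below are hypothetical, chosen only to demonstrate the fit.

TRUE_RM = 50.0    # rad/m^2, made-up foreground rotation measure
CHI0 = 0.3        # intrinsic polarisation angle, radians

wavelengths = np.array([0.18, 0.20, 0.22])   # observing wavelengths in metres
angles = CHI0 + TRUE_RM * wavelengths**2     # noiseless 'observed' angles

# Recover RM as the slope of chi against lambda^2.
slope, intercept = np.polyfit(wavelengths**2, angles, 1)
print(f"recovered RM = {slope:.1f} rad/m^2")
```

With many background galaxies behind a single foreground cloud, as the text describes for the SKA, each sight line yields one such RM, and together they map the cloud's magnetic field.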
Optical image of the spiral galaxy M51 obtained with the Hubble Space Telescope (from Hubble Heritage), overlaid by contours of the total radio intensity and polarisation vectors at 6 cm wavelength, combined from radio observations with the Effelsberg and VLA radio telescopes. Image: MPIfR Bonn and Hubble Heritage Team

These are all questions we can hope to address with the unique and fascinating capabilities of the SKA. We know that there are magnets everywhere in space. But with the SKA, we will understand what these magnets look like, and where they came from. The technology underpinning the SKA then has the potential to unveil the nature of the DM particles we have already discussed, by detecting the radio emission produced by DM particle annihilation in galaxies and galaxy clusters.

The SKA and the origin of cosmic rays
Magnetic fields inside galaxies and clusters of galaxies also have a property that allows them to confine plasma and high-energy particles – called cosmic rays. And intergalactic space is able to guide the highest-energy cosmic rays as they pass through space to reach our planet. We observe these very-high-energy cosmic rays carrying the highest energies in the Universe, up to 10^20 eV. This brings us to the next question: what is the origin of cosmic rays?

The protons that constantly smack into Earth's atmosphere at near the speed of light get their huge energies from exploding stars – supernovae – or from powerful jets coming out of black holes in the centres of active galaxies – active galactic nuclei. We have long suspected this, but direct evidence for the idea has been difficult to come by – until now.

Cosmic rays are any charged particles arriving at Earth from space. Nearly all of them are protons, and some have been accelerated to speeds higher than any achieved by a particle accelerator on Earth. Although we have known about cosmic rays since 1912, their origins have remained a '100-year-old mystery'.

Possible sources of cosmic rays are the violent outbursts of supernovae within our Galaxy, the Milky Way. The material blown out in the process moves so quickly that it creates a shock wave. Whenever a proton crosses the shock wave boundary, it gets a powerful kick. Because protons are charged, they can get caught in magnetic fields which carry them back and forth across the shock many times, like a tennis ball bouncing back and forth across a net. Eventually their energy gets great enough that they can leave the shock region. This is a newborn cosmic ray. Cosmic rays diffusing in the magnetised shock region produce radio emissions that can be detected with our radio telescopes. The SKA will be able to discover the sites of acceleration of these cosmic rays in every corner of our Galaxy, as well as in many thousands of external galaxies, thus proving the universal origin of cosmic rays and of the accelerating regions.

But magnetic fields can also deflect cosmic rays on their way to our detectors. By the time they reach Earth, their directions are totally scrambled, making it hard to determine their origin. Thus, another approach to the problem is needed – and gamma rays provide it. We know that when the high-energy protons collide with low-energy protons further out, the violence of the collision indirectly creates gamma rays. These do not carry a charge and so travel in straight lines, unaffected by magnetic fields. Because of the law of conservation of energy, the gamma rays produced during the proton collisions will have a minimum energy of around 150–200 megaelectronvolts each. If lots of protons are colliding near the supernova remnant, there should be more gamma rays with that energy or higher coming from that region – and almost none with lower energies. That's exactly what we are starting to see with gamma-ray telescopes such as Fermi and HESS. This is a characteristic feature that absolutely and uniquely tells us that what we are seeing are gamma rays from accelerated protons. The combination of the SKA and its high-energy relative, the Cherenkov telescope array (CTA), will tell us the complete story of the acceleration mechanism and the position of the most efficient acceleration sites in the Universe.

Artist's impression of conceptual design for the Cherenkov telescope array (CTA). Image: Wikimedia Commons

The Cherenkov Telescope Array (CTA)
This will consist of two arrays of telescopes in the two hemispheres, allowing full coverage of the sky. The telescopes will be ground-based very-high-energy gamma-ray telescopes. The southern CTA will cover about 10 km² of land with around 100 telescopes that will monitor all the gamma-ray energy ranges towards the centre of the Milky Way and the galactic plane. The northern CTA will cover 1 km² and be composed of 30 telescopes. These telescopes will be targeted at extragalactic astronomy.

The SKA and the extreme accelerators in the Universe
This doesn't explain the origin of all cosmic rays, however. Some of them are particles called muons or positrons instead of protons – and one specific class, the ultra-high-energy cosmic rays (UHECRs), requires much more extreme acceleration mechanisms and is most probably produced from outside our Galaxy. This brings us to the last question we ask here: what is the origin of the extreme accelerators in the Universe?

Cosmic rays are energetic particles from deep in outer space – mainly protons, the bare nuclei of hydrogen atoms, plus some heavier atomic nuclei. They most probably acquire their energy when naturally accelerated by exploding stars. A few rare cosmic rays pack an astonishing wallop, however, with energies massively greater than the highest energy ever attained by human-made accelerators like CERN's Large Hadron Collider. Their sources are a mystery.

Nature is capable of accelerating elementary particles to macroscopic energies. There are basically only two ideas on how this happens: i) in gravitationally driven particle flows near the supermassive black holes at the centres of active galaxies, and ii) in the collapse of stars to a black hole, seen by astronomers as gamma ray bursts (GRBs). In active galactic nuclei (AGNs) the black holes suck in matter and eject enormous magnetised particle jets, perpendicular to the galactic disk, which could act as strong linear accelerators. As for GRBs, some are thought to be the result of the collapse of supermassive stars – hypernovae – while others are thought to be collisions of black holes with other black holes or neutron stars. Both sources are extragalactic and both require the existence of very powerful jets of plasma ejected at speeds similar to the speed of light. It is possible that the UHECRs are accelerated in the immediate vicinity of the very massive and compact central object (say, the black hole) and then acquire the extremely high energy that allows them to escape the acceleration region, travel through the intergalactic magnetic field and reach the top of the Earth's atmosphere, where they produce a shower of lower-energy particles and associated emissions that can be detected with our telescopes. The SKA will be able to shed light on the nature of the jets in external AGNs, and especially in nearby radio galaxies like Centaurus A, by observing, with extreme spatial resolution and sensitivity, the structure of the jets and their (polarised) radio emissions, which will tell us (in combination with the CTA observations at the opposite end of the electromagnetic spectrum) both the location and the nature of the extreme accelerators in the Universe.
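To put the 'astonishing wallop' of a UHECR in everyday terms, converting 10^20 eV to joules shows that a single proton can carry roughly the kinetic energy of a well-hit tennis ball:

```python
# How much energy is 10^20 eV? A quick unit conversion.
EV_TO_JOULE = 1.602176634e-19   # definition of the electronvolt in joules

uhecr_energy_j = 1e20 * EV_TO_JOULE
print(f"10^20 eV = {uhecr_energy_j:.1f} J")   # about 16 J, in one proton
```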
But the SKA will also be able to observe the radio emissions generated in the atmosphere of the Earth by the particles produced in the shower initiated by a UHECR accelerated far away in the Universe, thus reconstructing the whole picture (together with the complementary observations from the CTA) of the extreme events in our Universe.

Epilogue
We have already discovered much about the origin and evolution of the Universe and we have reasonable confidence in describing its basic laws. However, we are now faced with more fundamental questions. To answer these questions we need to maintain the humility of science and listen to the whispers of the Universe. ❑

Top: Cosmic rays of extremely high energy hit the Earth's atmosphere and create showers of secondary particles, whose emission is detectable with both radio telescopes and gamma-ray telescopes. Bottom: One of the most likely candidates for the emission of ultra-high energy cosmic rays (UHECRs) detected on Earth is the giant radio galaxy Centaurus A and its relativistic jets. Image: NASA

Prof. Sergio Colafrancesco holds the DST/NRF SKA Research Chair in Radio Astronomy at the University of the Witwatersrand (Johannesburg). He obtained his PhD in Astronomy at the University of Padua (Italy) and has been a staff astronomer at the Rome Astronomical Observatory and a Professor in Astrophysics in Italy. He is widely recognised as a leading astrophysicist in various research fields such as the search for dark matter, the origin of cosmic rays, the evolution of galaxy clusters and radio galaxies (AGNs), the physics of their magnetic fields, and the cosmic microwave background.
He has pioneered research topics such as the multi-frequency search for the nature of dark matter; models for cosmic ray production and diffusion and their impact on the multi-frequency emission features of galaxies and galaxy clusters; and the effects of particle interactions with the CMB photons in the atmospheres of active galaxies and clusters of galaxies. He is involved in several international projects: SKA, MeerKAT, RADIOASTRON, MILLIMETRON, PLANCK, PRISM, QUIJOTE, AGILE, Fermi, HESS and CTA.

Suggested reading
Planck experiment: http://www.esa.int/Our_Activities/Space_Science/Planck
LHC at CERN: http://home.web.cern.ch/about/accelerators/large-hadron-collider
Newton I. Philosophiæ Naturalis Principia Mathematica (the Principia), first published on 5 July 1687.
Fermi E. (1949) Physical Review 75:1169.

'Seeing' radio waves
How can we 'see' radio waves? André Young and David Davidson explain the engineering components for the Square Kilometre Array.

Interferometry
Before reading the rest of this article, find 'From theory to practice', Quest 8(3) 2012 by Oleg Smirnov. In that article, Oleg explains that interferometry is a way to massively improve the optical resolution of radio telescopes. To summarise, interferometry is a method by which radio or light waves that are received at two different locations are combined and the resulting interference pattern is measured. Careful measurement of this pattern allows us to achieve an effective resolution that is determined by the distance between the two locations – the baseline. In the 1950s Sir Martin Ryle and his group at the University of Cambridge developed a technique called aperture synthesis that used the principle of interferometry to combine multiple radio dishes into a single virtual telescope.

Figure 1: A simplified diagram of a general interferometer.
Figure 2: The components that make up the analogue front-end.
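The claim that the baseline sets the effective resolution can be made concrete with the standard estimate theta ≈ lambda/baseline (in radians). The 3 000 km baseline below is an illustrative continental-scale figure, not an official SKA specification:

```python
import math

# Rough synthesised-beam resolution of an interferometer:
#   theta ≈ lambda / baseline  (radians)
RAD_TO_ARCSEC = 180 / math.pi * 3600   # radians -> arcseconds

def resolution_arcsec(wavelength_m: float, baseline_m: float) -> float:
    """Approximate angular resolution in arcseconds."""
    return wavelength_m / baseline_m * RAD_TO_ARCSEC

# 21-cm observations with an assumed 3 000 km maximum baseline:
beam = resolution_arcsec(0.21, 3_000_000)
print(f"resolution ≈ {beam:.3f} arcsec")
```

For comparison, a single 13.5 m dish at the same wavelength has a beam thousands of times wider, which is the whole point of aperture synthesis.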
The Square Kilometre Array is planned to be the largest radio interferometer array ever built. Needless to say, there is a lot of engineering that goes into the realisation of such an ambitious project, from the design of the antennas that convert the incoming cosmic radiation into electronic signals, to the construction of the necessary infrastructure such as roads that allow access to the site. To cover all these aspects in one article is an impossible task. Instead, we will focus mainly on one small aspect of the big picture, called the analogue front-end, or AFE.

Figure 1 is a simplified diagram of a general interferometer. You can think of an interferometer array as a large number of antennas that pick up radio waves – most likely radio dishes – standing in a remote location. These are only the first components in a very long receiver chain through which the radio waves propagate until they eventually reach the digital back-end. It is at this digital back-end that analogue signals are converted to digital signals by a component aptly called an analogue-to-digital converter, or ADC. This allows further digital processing to be done at later stages, such as combining signals of different antenna stations in the interferometer array to produce images of the radio sky.

Between the antenna and the ADC is a series of components that ensure the correct transfer of the analogue signals put out by the antennas to the input of the ADC. This section is called the analogue front-end – analogue because that is the nature of the signal during this stage, and front-end because this is at the front of the receiver chain. Figure 2 presents a more detailed view of the components that typically make up the analogue front-end. The first component is called a low-noise amplifier (LNA). This starts by amplifying the typically very small signal that is received by the antenna.
This is then followed by a filter, which removes certain unnecessary or unwanted components that are present in the signal; then a mixer, which transforms the signal frequency spectrum to allow simpler processing at later stages; and finally another filter stage to remove any unwanted signal components that may have sneaked in at earlier stages. After this stage the signal is ready for digitisation and is passed on to the ADC. We will now discuss each of these stages in a bit more detail.

Antenna

Radio astronomy is the way that we 'see' celestial radio sources – that is, celestial bodies that produce electromagnetic radiation within the radio frequency, or RF, band of the electromagnetic spectrum. This frequency band extends roughly from 3 kilohertz up to 300 gigahertz. Unlike optical frequencies (around 400–800 terahertz, more than 1 000 times higher than the highest RF frequencies), at which we are able to actually see electromagnetic radiation as visible light with our eyes, we cannot see RF radiation. So how exactly do we 'see' radio sources? This is where the antenna comes in – it transforms the incident electromagnetic waves into an electrical signal – a current flowing through the terminals attached to the antenna – which we are able to manipulate, measure and analyse.

Antennas come in a variety of forms, ranging from a simple monopole antenna (a straight piece of wire, typically found on an FM radio) to parabolic reflector antennas (such as those used to receive satellite television). Each antenna is designed for a particular application. One of the factors that influences the size of the antenna is the wavelength at the frequency for which it is designed – the lower the frequency, the larger the antenna. For example, a monopole is roughly one quarter of a wavelength in length, that is, about 75 cm at 100 MHz (more or less the centre of the frequency band in which FM radio stations transmit) and about nine times shorter at 900 MHz (one of the frequencies used in cellular phones). For this reason cellular phones with built-in FM radios typically need headphones to be plugged in – they use the earphone cable itself as an antenna, because the antennas built into the phone for cellular communication are too small to receive FM radio transmissions.

[A close-up of the electronics in the analogue front-end. Image: SKA South Africa/Nick van der Leek]

In applications where very weak signals need to be detected, as is the case in radio astronomy, we use high-gain antennas. A widely used design in this category is the parabolic reflector antenna, which consists of a large reflective surface – the main reflector – and a smaller antenna called the feed antenna. If we point the reflector antenna so that a distant radio source lies along the main reflector axis, the electromagnetic wave incident on the reflector surface is reflected so that all the power is focused at the focal point of the parabolic reflector, where it is absorbed by the feed antenna. The main advantage of such a high-gain antenna is that waves incident from other directions (e.g. radio sources at other positions in the sky where we are not looking) are suppressed very strongly in the antenna output signal.

An important property of an antenna, called the gain pattern, describes how well signals from various directions are received. The ideal gain pattern of a typical parabolic reflector antenna is shown in Figure 3. The reflector has a diameter of 13.5 m and the gain pattern is for a frequency of 1 GHz. In this figure we see that waves arriving from the direction θ = 0° (referred to as on-axis) are received with the maximum gain of the antenna, and as the direction of arrival changes (off-axis) the gain rapidly decreases. An important measure of the gain pattern of the antenna is the half-power beamwidth (HPBW), which indicates how far off-axis a source has to be positioned to reduce the received power by half.

[Figure 3: a: How electromagnetic waves are collected at the focal point of the parabolic reflector; b: The ideal gain pattern of a typical parabolic reflector antenna.]

A simple formula using the wavelength λ and reflector diameter d can be used to estimate this figure for the antenna whose gain pattern is shown in Figure 3b: HPBW ≈ (180°/π) × (λ/d) = (180°/π) × 0.3/13.5 ≈ 1.27°. This means that if we point the antenna towards a source and measure the power we receive, and then point the antenna HPBW/2 ≈ 0.635° away from the source (why only half of the HPBW?), the received power would be halved.

Satellite television transmissions are centred at roughly 12 GHz and a typical dish used in this application has a diameter of about 1 m. Use this information to estimate the HPBW for such a system. This should give you an idea of how accurately the dish should be orientated to successfully receive the transmission.

Now consider the size of a reflector antenna which would give comparable performance at the much lower frequency of 100 MHz, at which the SKA must still be able to function. The frequency is ten times lower than 1 GHz, which means that the antenna needs to be ten times larger – that is, a diameter of 135 m! Building such a large reflector is not impossible; in fact the world's largest single-reflector telescope, at the Arecibo Observatory in Puerto Rico, has a diameter of 305 m. The problem is that building an interferometer array on the scale required for the SKA means that thousands of such antennas would need to be built.

In this case another type of 'antenna', called an antenna array, is particularly useful. Instead of building one large antenna, the signals of a number of smaller antennas are combined in a specific way so that they interfere constructively (add up) for waves incident from the directions of interest, and interfere destructively (cancel out) for waves incident from other directions. This not only improves the total antenna gain, but also allows the gain pattern of the antenna array to be changed by changing the way the signals received by each of the antennas in the array are added up.

Figure 4b shows the radiation pattern of a single dipole antenna. A dipole is very much like a monopole, except that it has two arms, as shown in Figure 4a. Notice that the gain of this antenna is a couple of orders of magnitude smaller than that of a large reflector antenna; similarly, the beamwidth of this smaller antenna is much larger. Now note how the radiation pattern changes when we use an array of dipoles, in this case a total of 21 dipoles arranged in a linear array, as shown in Figure 5a. Each of the plots in Figure 5b shows the pattern that results from a different way of adding up the signals received by each of the antennas in the array. The gain is much higher and the beamwidth much narrower.

[Figure 4: a: A dipole antenna; b: The radiation pattern of a single dipole antenna.]

[Figure 5: a: Dipoles arranged in a linear array; b: The pattern that results from adding up the signals received by each of the antennas in the array.]

[Figure 6: a: The sinusoid produced when we plot the amplitude of a signal of 262 Hz as a function of time; b: The frequency spectrum of a signal of 262 Hz.]

[Quest 9(2) 2013]
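The beamwidth formula is easy to check numerically. The short Python sketch below (the helper name `hpbw_deg` is our own, not from the article) reproduces the 13.5 m dish estimate and the ten-times-larger-dish scaling argument:

```python
import math

def hpbw_deg(wavelength_m: float, diameter_m: float) -> float:
    """Half-power beamwidth estimate in degrees: HPBW ≈ (180°/π) × (λ/d)."""
    return math.degrees(wavelength_m / diameter_m)

c = 3e8  # speed of light in m/s

# 13.5 m reflector at 1 GHz: λ = c/f = 0.3 m
print(round(hpbw_deg(c / 1e9, 13.5), 2))   # ≈ 1.27 degrees

# The same beamwidth at 100 MHz (λ = 3 m) needs a ten times larger dish.
print(round(hpbw_deg(c / 1e8, 135.0), 2))  # ≈ 1.27 degrees again
```

The second call confirms the scaling rule used in the text: ten times the wavelength requires ten times the diameter for the same beamwidth.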
Furthermore, we are able to scan such an antenna array electronically – that is, to change the look-direction simply by changing the way the signals from the individual antennas are combined in the receiver. In some cases this offers a major advantage over dish antennas, which require the entire reflector structure to be repositioned to change the look-direction.

Low-noise amplifier

The next component that follows the antenna is a low-noise amplifier, or LNA, which amplifies the signal output by the antenna. Remember that the signal received by the antenna in a radio telescope is extremely small, originating from a source which is typically many light years away! Usually these LNAs are placed very close to the antenna terminals so that the cable between the antenna and the LNA is as short as possible. To understand the reason behind this, we first need to understand a little more about how 'noise' enters and affects the behaviour of our receiver system.

In any receiving system we distinguish between the desired part of what we receive (we simply call this the signal) and the undesired part of what we receive (we call this the noise). Imagine you are trying to listen to a friend talking on the phone while you are standing in a room full of people talking loudly – the friend's voice that you hear on your phone would be the signal, and all other sounds that you hear count as noise. In situations like these a useful measure is the signal-to-noise ratio, or SNR: simply the ratio of signal power to noise power. Suppose you were equipped with a device with which you could measure the power of the sound that you hear. You could then measure the signal power by moving to a quiet room and measuring the power in the sound of the friend's voice produced by the phone speaker alone. Similarly you could move to the noisy room, mute the phone speaker, and then measure the noise power as the power of the sound produced by the people talking loudly.
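The bookkeeping behind this reasoning, and the effect of where the amplifier sits relative to the noise, can be sketched numerically. The powers and gain below are invented for illustration (arbitrary units), not measurements from any real receiver:

```python
def snr(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio: signal power divided by noise power."""
    return signal_power / noise_power

# Illustrative powers: a weak signal, external noise entering with it,
# and internal noise contributed by components later in the chain.
signal, external_noise, internal_noise = 1.0, 0.5, 0.4
gain = 100.0  # amplifier gain

# Amplifier FIRST (LNA at the antenna terminals): internal noise is added
# only after both signal and external noise have been boosted.
snr_lna_first = snr(gain * signal, gain * external_noise + internal_noise)

# Amplifier LAST (e.g. after a long lossy cable): internal noise enters
# before amplification, so it is boosted along with everything else.
snr_lna_last = snr(gain * signal, gain * (external_noise + internal_noise))

print(snr_lna_first > snr_lna_last)  # True: amplify before noise is added
```

Placing the gain ahead of the internally generated noise keeps that noise from being amplified along with the signal, which is exactly why the LNA sits right at the antenna.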
The ratio of these powers (signal power divided by noise power) gives the SNR. Of course, the higher the SNR, the more clearly the signal can be identified and interpreted. If the noise in your room is too loud you are not able to hear your friend on the phone clearly. Can you think of a way to hear your friend better without leaving the noisy room? In other words, how can you increase the SNR? Turning up the volume of the phone speaker would certainly help, because the friend's voice would become louder while the noise in your room would remain the same (assuming the people in the room continue to talk at the same volume). The solution can also be understood by considering what happens to the SNR – increasing the volume increases the signal power (it amplifies the signal) without affecting the noise power (the noise in your room has nothing to do with what is happening in the phone). Increased signal power and constant noise power equals increased SNR.

Suppose now that you move to a quiet room so that you only hear the phone speaker, and also suppose that your friend on the phone moves to a room full of people talking loudly. Once again you will hear a combination of signal (your friend's voice) and noise (the voices of other people in your friend's room). What would happen now if you were to turn up the volume of the speaker? You increase the volume of the voice of your friend (signal power), but you also increase – by the same factor – the volume of the voices of the other people in your friend's room (noise power). Therefore the SNR does not increase. So what is different in this case? In the previous scenario only the signal (the friend's voice) was processed by the amplifier (the phone volume control), and the noise (the voices of other people in your room) only entered the system after the signal was amplified. But this time the noise (the voices of other people in the friend's room) entered the system before the amplifier, so that both the signal and the noise were amplified. The point is this: if the noise enters the system before the amplifier, the SNR is unaffected by the amplifier; if the noise enters the system after the amplifier, the amplifier increases the SNR.

Now in a receiver system such as a radio telescope, we need to deal with external as well as internal noise. The external noise enters the system the same way that the signal does, that is, as electromagnetic waves incident on the antenna. Sources which produce this noise include cosmic radio sources which we are not looking at, satellites in the region of the sky where we are observing, and terrestrial communication transmitters. Internal noise, on the other hand, is produced by the receiving system itself. The fact is, any practical electronic component will produce a certain amount of noise – even something as simple as a cable. This means that as we move along the signal path from the terminals of the antenna (where we already have a certain SNR – the ratio of signal power to external noise power), more and more internal noise is added by each component in the receiver system, so that the SNR may decrease as we move down the receiver chain.

[Juan-Pierre Jansen van Rensburg (MSc, completed 2012) and David Davidson, with one dish of the two-element Stellenbosch University Experimental Interferometer in the background. Image: David Davidson]

In order to minimise this effect that internal noise has on the SNR, it is necessary to add an amplifier before internal noise is introduced by any other components (of course, the amplifier itself should also add as little noise as possible, which explains the 'low-noise' part of the LNA). This means placing the amplifier as close to the antenna terminals as possible, so that the amount of noise power added by the cable between the antenna and the LNA is as small as possible.

Filters

Before we can understand the function of a filter we first need to understand the concept of a frequency spectrum. Almost any signal that can be represented as a function of time can also be represented as a collection of different frequencies. Depending on the nature of the signal, some frequencies will be more prominent than others. Consider for example playing the note middle C on a piano. The sound that you would hear has a frequency of approximately 262 Hz. Now suppose the sound is recorded (transformed into an electronic signal) and we plot the amplitude of this signal as a function of time. The result is a sinusoid which we can express mathematically as x(t) = sin(2π × 262t), shown in Figure 6a. Using this time function and a mathematical tool called the Fourier transform, we are then able to represent the same signal as a function of frequency. This frequency representation is called the frequency spectrum of the signal and is shown in Figure 6b. Note that the spectrum is zero at almost all frequencies, except for a spike at 262 Hz. This is because the signal is a sinusoid with that same single frequency.

[David Davidson (left) and current MSc student Nicholas Thompson (right) with the analogue front-end built by Juan-Pierre Jansen van Rensburg as part of his MSc. Image: SKA South Africa/Nick van der Leek]

[Figure 7: a: The signal produced by the three-note chord – C4, E4 and G4; b: The frequency spectrum of this signal.]

[Figure 8: a: The spectrum of the three-note chord with the low-pass filter response; b: The spectrum where the highest tone (392 Hz) has been removed, with the high-pass filter response; c: The output of the high-pass filter – the lowest tone (262 Hz) has been removed from the signal, so only E4 is left.]
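That single spike at 262 Hz can be reproduced with a discrete Fourier transform. Here is a short sketch using NumPy (our tool choice; the article does not prescribe one):

```python
import numpy as np

fs = 8000                        # sampling frequency in Hz
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
x = np.sin(2 * np.pi * 262 * t)  # middle C as a pure sinusoid

# Magnitude spectrum of the real-valued signal, and the frequency axis.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# The spectrum is essentially zero everywhere except a spike at 262 Hz.
print(freqs[np.argmax(spectrum)])  # 262.0
```

With exactly one second of samples the frequency bins fall on whole hertz, so the peak lands precisely on the 262 Hz bin.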
The Fourier transform

The Fourier transform, developed in the 19th century by French mathematician Joseph Fourier, lies at the heart of all interferometry. This transform is a technique for representing any continuous signal by a sum of waves of different frequency, called Fourier components. If you've ever seen a spectrum analyser on a sound engineer's console (or just on a particularly advanced stereo), you have essentially seen a Fourier transform at work. The set of Fourier components is, in some sense, completely equivalent to the original signal. If we can measure the Fourier components, we can perform an inverse transform to recover the exact original signal. Audio signals are one-dimensional, but an image of the sky can also be thought of as a signal – a two-dimensional (2D) one. A 2D Fourier transform turns it into a sum of 2D 'spatial waves', with large objects in the image associated with large ('low-frequency') waves, and small objects with small ('high-frequency') waves. Now here's the interesting thing. If you were to somehow measure the signal on the surface of a telescope's mirror (or in the lens aperture of a camera) – before it is focused on the detector – you would find a Fourier transform of whatever the telescope or camera is pointed at. When the mirror (or lens) then focuses this signal on the detector, it is actually, just by its very nature, performing an inverse Fourier transform. As you read this article, the optical system of your eyes is also continuously performing forward-and-inverse Fourier transforms, in order to form an image of the page on your retina. Fourier transforms are literally everywhere.

Now remember that an interferometer works by connecting radio telescopes into a 'synthetic aperture', and measuring the signal in this aperture. This means that an interferometer actually measures the Fourier components corresponding to an image of the sky (rather than measuring an image of the sky directly). We can then do an inverse Fourier transform in software, and thus recover the original image. In some sense, an interferometer is like the 'front half' of a lens – with the rear half, the one responsible for the focusing, replaced by a computer that does inverse Fourier transforms.

[Joseph Fourier. Image: Wikimedia Commons]

From: From theory to practice – Oleg Smirnov, Quest 8(3) 2012.

Now consider what happens if we play a three-note chord on the piano by simultaneously playing C4 (another name for middle C, 262 Hz), E4 (a slightly higher tone, 330 Hz), and G4 (an even higher tone, 392 Hz). The sound that is produced is a combination of three sinusoidal waves, each with a different frequency. If we again record the sound and plot the signal as a function of time, the result would look like Figure 7a. The mathematical expression for this function is x(t) = sin(2π × 262t) + sin(2π × 330t) + sin(2π × 392t). Accordingly, the frequency spectrum of this signal has three spikes at the three frequencies corresponding to the notes played, as shown in Figure 7b.

The frequency spectra of signals are not always as simple as the examples shown here. One signal which is of particular interest is called white noise and is often used as a mathematical model for noise in the analysis of a system. The spectrum of a white noise signal is a horizontal line – it contains all frequencies. An interesting question is how such a signal would sound. The hissing sound that a radio makes when it is not tuned in to a station is more or less how white noise sounds.

We now have enough knowledge to understand the purpose of a filter. Suppose you have a recording of the three-note chord played on a piano and you want to listen to the sound of one particular note only, for example E4. What you need in this application is a device that removes the unwanted frequencies (those of the C4 and G4 notes) from the signal – this is called a filter.
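A crude digital version of such a filter can be sketched by zeroing the unwanted Fourier components of the chord and transforming back. This 'brick-wall' approach is an idealisation for illustration, not how practical analogue filters are built:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Three-note chord: C4 (262 Hz) + E4 (330 Hz) + G4 (392 Hz)
chord = (np.sin(2 * np.pi * 262 * t)
         + np.sin(2 * np.pi * 330 * t)
         + np.sin(2 * np.pi * 392 * t))

spectrum = np.fft.rfft(chord)
freqs = np.fft.rfftfreq(len(chord), 1 / fs)

# Keep only the band around E4 (297-363 Hz); zero everything else.
spectrum[(freqs < 297) | (freqs > 363)] = 0
e4_only = np.fft.irfft(spectrum, n=len(chord))

# The dominant remaining frequency is E4's 330 Hz.
print(freqs[np.argmax(np.abs(np.fft.rfft(e4_only)))])  # 330.0
```

The C4 and G4 spikes fall outside the retained band, so only the E4 sinusoid survives the inverse transform.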
Filters come in many varieties. One classification of filters is based on which frequencies they allow to pass through. For example, a filter which removes all frequencies above a certain threshold (called the cut-off frequency) is called a low-pass filter, because all the lower frequencies are passed through. Similarly, a high-pass filter removes all frequencies below the cut-off frequency and allows higher frequencies to pass through.

Now let us apply filtering to the three-note chord example. Suppose we first apply a low-pass filter with a cut-off frequency of 363 Hz. In Figure 8a the spectrum of the recorded three-note chord signal (the input to the low-pass filter) is shown along with the filter response. The filter response is a visualisation which helps to determine what the frequency spectrum of the output would look like for a given input. The output is determined by multiplying the input spectrum with the filter response. For low frequencies the filter response is equal to one, so the output spectrum is simply equal to the input spectrum over that range of frequencies. At higher frequencies the filter response is equal to zero, so multiplication with any input spectrum produces an output spectrum that is equal to zero over that range of frequencies. The output of this filter is shown in Figure 8b, where the highest tone (392 Hz) is seen to have been removed successfully.

All that is left in this signal are the frequencies corresponding to the lowest and middle tones – if we had a recording of a two-note chord containing C4 and E4, this is what the spectrum of that signal would look like. Now suppose we use the output of the low-pass filter as input to a high-pass filter with a cut-off frequency of 297 Hz and with the response shown in Figure 8b. Multiplying the filter response with the input spectrum produces zero at low frequencies and the input spectrum at higher frequencies. The output of this high-pass filter is shown in Figure 8c, where the lowest tone (262 Hz) is seen to have been removed from the signal – the result of our two-stage filtering process is that we only have the frequency of the E4 note left in the signal. In effect we have realised a band-pass filter, which only allows frequencies within a specified frequency band to pass through (in this case frequencies within the range 297–363 Hz), by cascading a low-pass filter and a high-pass filter. This is not typically how band-pass filters are implemented in practice, but the underlying principles are the same.

Now that we have the frequency spectrum of our filtered signal, what does the filtered signal look like as a function of time? To answer this question we use the inverse Fourier transform. Just as the Fourier transform was used to represent a function of time as a function of frequency, the inverse Fourier transform is used to perform the reverse operation. Applying this inverse transform to the spectrum in Figure 8c gives the time function shown in Figure 9. Of course, the signal is exactly what we expect to see – a single sinusoidal wave, as would be produced if we recorded the sound of the E4 note played on its own.

[Figure 9: The result of applying the inverse transform to the spectrum in Figure 8c – a single sinusoidal wave.]

[The SKA research team at Stellenbosch. The authors are 2nd from the right (AY) and back row, 2nd from left (DBD). Image: SKA South Africa/Nick van der Leek]

The maths behind the mixer

The mathematics behind the mixer is relatively simple and uses the sum and difference formulas for sine and cosine functions. Let's suppose our input signal is a cosine with frequency f1 and multiply it by a cosine of frequency f2. So the input is x(t) = cos(2πf1t) and the output is:

y(t) = cos(2πf1t) × cos(2πf2t)

We now perform a simple trick often used in situations like these: adding and subtracting the same thing from our expression, which does not change its value but helps us to look at it in a different way. So let us add ½sin(2πf1t)sin(2πf2t) to y(t) and subtract that same quantity; we then have:

y(t) = cos(2πf1t)cos(2πf2t) + ½sin(2πf1t)sin(2πf2t) – ½sin(2πf1t)sin(2πf2t)

We also write the product of cosine functions as a sum of two identical halves, just to rearrange the expression, to give:

y(t) = ½cos(2πf1t)cos(2πf2t) + ½sin(2πf1t)sin(2πf2t) + ½cos(2πf1t)cos(2πf2t) – ½sin(2πf1t)sin(2πf2t)

The first two terms on the right are the difference formula for the cosine function, and the last two terms are the sum formula for the cosine function. So we can write:

y(t) = ½cos(2π(f2 – f1)t) + ½cos(2π(f2 + f1)t)

The output is now the sum of two cosines, one with frequency (f2 + f1) and one with frequency (f2 – f1). If we use a low-pass filter with a cut-off frequency just above (f2 – f1) then we can remove the higher frequency component, so that we just have:

y(t) = ½cos(2π(f2 – f1)t)

ADC (analogue-to-digital converter)

Before the need for the mixer can be explained we first need to look ahead to the next component in the signal chain: the analogue-to-digital converter, or ADC. This device does exactly what its name says – it converts the analogue signal at its input to a digital signal at its output. One way to think of this device is as measuring the strength of the signal at its input, and then recording the numerical value in computer memory. Of course, the input signal changes constantly, and what we actually need is the signal as a function of time. So instead of measuring the signal once and outputting a single number, this device has to measure the signal repeatedly and every time output the value of the last measurement – this is called sampling.

One question that comes to mind is how often the ADC should measure the input signal – in other words, what should the interval between consecutive measurements be? The rate at which samples are taken is called the sampling frequency of the ADC, and it turns out that if we want to keep all the information that is carried in the input signal, the sampling frequency should be at least double the highest frequency present in the signal spectrum (this minimum rate is called the Nyquist rate). For example, the input signal in Figure 7b has frequency components at 262, 330, and 392 Hz. If we want to sample this signal without loss of information, then we need a sampling frequency of at least 2 × 392 Hz = 784 Hz. That is, we need to take a measurement every 1.276 milliseconds.

[Figure 10: a: The input signal with a mixer frequency of 20 Hz; b: The output signal with a low-pass filter response.]

Suppose we are not interested in the higher frequency component and we filtered it out using the low-pass filter. The signal spectrum then looks like Figure 8b, and the highest frequency is 330 Hz. To keep all the information in this signal intact we then only need to sample at 660 Hz – that is, we need to take a measurement every 1.515 milliseconds. This means that the ADC can operate at a slower pace for lower frequency signals, a useful result to which we will return in the next section.

Another important property of the ADC is the resolution, expressed as the number of bits (binary digits) used in the number representing the measured value. The more bits available to represent the measurement, the higher the quality of the measurement. One can think of this as the equivalent of the number of decimal digits used in a calculation. Say for example you want to calculate the circumference of a circle using the formula circumference = π × diameter. Since π is an irrational number we can only use an approximation of its true value in a numerical calculation. If we are only able to use one decimal digit we would use the value π ≈ 3.1. Using two decimal digits, we have π ≈ 3.14, and so on.
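The same idea can be played with in Python: quantise π using different numbers of bits and watch the approximation error shrink. The uniform quantiser below is a generic sketch we made up for illustration, not the article's ADC:

```python
import math

def quantise(value: float, bits: int, full_scale: float = 4.0) -> float:
    """Round `value` to the nearest of 2**bits levels spanning [0, full_scale)."""
    levels = 2 ** bits
    step = full_scale / levels
    return round(value / step) * step

# Quantisation error for π at increasing resolutions.
for bits in (4, 8, 16):
    approx = quantise(math.pi, bits)
    print(bits, approx, abs(math.pi - approx))
# The error shrinks as more bits are used: higher resolution, higher quality.
```

Each extra bit halves the spacing between representable levels, so the worst-case error halves as well.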
More digits equals higher accuracy – and the same principle applies to the number of binary digits that the ADC is able to use in representing the signal strength as a number. The sampling frequency and resolution can be combined into a single figure of merit for the ADC, called the bitrate, which is measured in bits per second, or bps. This figure is simply the product of the sampling frequency and the resolution – if every sample produces x bits, and the ADC obtains y samples per second, then the output is x × y bits per second in total. The higher the bitrate, the higher the quality of the digital signal. If you listen to digitally stored music on a computer (e.g. *.mp3 or *.wma files) you may notice that some players indicate the bitrate of the track that is playing. If you have access to encoder software, play around with the available bitrate options. Try to see if you can hear the effects of using very low bitrates.

Mixer

We now know that the frequency spectrum of the signal passed to the ADC determines at what rate the ADC should output samples of this signal. However, there is a practical limit to the number of samples which the ADC is able to process in a given time. This in turn places a limit on the highest frequency which should be passed to the ADC. Suppose we now have a signal that contains frequencies above the maximum frequency that may be fed to the ADC. This is similar to trying to listen to a dog whistle that works at 30 kHz – the frequency is just too high to be processed by the receiver (in this case the human ear, which is only able to hear up to about 20 kHz). One way to proceed is to filter out these high frequencies before passing the signal on, but this means we would lose the information that is carried in these high frequencies. A better solution is possible using a mixer, in a process called frequency multiplying for reasons which will become clear in a moment.
To explain how this device works, we again need to represent our input signal as a function of time; let us call it x(t). The mixer multiplies this signal with a sinusoidal function of a specific frequency, so that the output is y(t) = x(t) × cos(2πft). Does the name frequency multiplier make sense now? At the output of the mixer two signals are then produced: one which has the same spectrum as x(t) only shifted down by f hertz, and another shifted up by f hertz.

Consider again the example of the sound produced by the dog whistle. Suppose we have a recording device which is able to record this sound perfectly (it works at all frequencies), transforming the sound waves into an electronic signal. We then pass this signal through a mixer, which multiplies the input with a 20 kHz sinusoidal wave, and then play the output signal through a speaker system which again perfectly transforms the electronic signal into sound waves. The sound produced by the speaker would contain a 10 kHz (30 – 20 kHz) component as well as a 50 kHz (30 + 20 kHz) component. Each of these components contains all the information in the original signal, so we are allowed to filter out the one which is more convenient. In this case we might apply a low-pass filter so that we keep the component which we are able to hear. This is illustrated in Figure 10a and b.

In conclusion

This is only one aspect of the engineering components required for the SKA. In fact, there are still numerous questions about the design and construction of the SKA that have not yet been answered. The nature of science and engineering is such that the more questions we seek to answer, the more questions we end up asking. But without the electrical engineering back-end, the SKA would not be able to answer the big questions you have learnt something about in the previous three articles.
David Bruce Davidson received BEng, BEng (Hons), and MEng degrees from the University of Pretoria, South Africa, in 1982, 1983, and 1986 respectively, and the PhD degree from the University of Stellenbosch in 1991. In 2011 he was appointed to the South African Research Chair in Electromagnetic Systems and EMI Mitigation for SKA. His main research interest through most of his career has been computational electromagnetics. Recently, his interests have expanded to include engineering electromagnetics for radio astronomy.

André Young received the BEng degree in electrical and electronic engineering with computer science, the MScEng degree in electronic engineering, and the PhD in electronic engineering in 2005, 2007, and 2013, respectively, from the University of Stellenbosch. Currently he holds a post-doctoral appointment at Stellenbosch University, where his research focuses on calibration and phased array feed systems in radio astronomy.

Study Science at Wits University

Why choose Wits? The Faculty of Science at the University of the Witwatersrand is internationally recognised for its innovative programmes which cover the Biological, Earth, Mathematical and Physical Sciences. The study of science opens doors to many exciting careers in diverse fields such as medical research, chemistry, computer science, biotechnology, genetic engineering and environmental sciences. The Wits Faculty of Science is one of the leading science faculties in the country and has an excellent track record in both teaching and research. Research strength ensures that staff members keep in touch with the latest developments in their fields. In addition to basic research in various fields, including mathematical modelling, high energy physics, biotechnology, molecular biology and environmental sciences, increasing effort is being devoted to applied research linked to a variety of activities in southern Africa.
The Bachelor of Science (BSc)

A BSc degree will introduce you to the basic scientific disciplines. It is a stepping stone rather than an end in itself, and many of our students go on to study at postgraduate level. Choose your area of study from:

Biological and Life Sciences: These include the micro and macro study of life. Courses range from the biochemistry of molecules such as DNA, RNA and proteins, and the molecular structure and function of the various parts of living cells, to evolution and the physiological and behavioural study of plants and animals.

Earth Sciences: The Earth Sciences study the processes that shape the complex interactions between the solid earth, the oceans, the atmosphere and the organisms that have evolved on Earth. Fields of specialisation include the exploration for, and mining of, minerals, the prediction of weather and earthquakes, the evolution of species through time, the state of our natural environment and how we can best manage the environment.

Environmental Sciences: This involves the preservation and rehabilitation of our natural resources and can be studied under the Earth or Biological Sciences. Environmental Science studies the importance of the physical, biological, psychological, or cultural environment as factors influencing the structure or behaviour of animals, including humans.

Mathematical Sciences: Pure Mathematics is a developing science. Mathematical Statistics and Actuarial Science are important in industrial and governmental planning and to the insurance industry. Applied Mathematics has applications in banking, finance and industry. Computer Science offers an understanding of computer hardware and software, in all their applications.

Physical Sciences: Areas of study range from nuclear, particle, solid- and liquid-state physics, electricity, electronics, magnetism, optics, acoustics, heat and thermodynamics, to the synthesis of new compounds and the changes that take place during chemical reactions.
New options exist at Wits for the study of Materials Science and Chemistry with Chemical Engineering. Scientists in Materials Science develop new ways of working with materials in responding to the challenges facing industry, such as energy fuels and environmental concerns, while in Chemistry with Chemical Engineering they use their knowledge of chemistry to design, operate and construct processes useful in the chemical industry.

Is a career in science right for you? If you have a natural curiosity about the world we live in, care about conservation and the use of our natural resources, enjoy solving problems and are good at mathematics, then a career in science could be an excellent choice for you.

Need to know more? Contact the Student Enrolment Centre, Tel: 011 717-1030, E-mail: firstname.lastname@example.org, www.wits.ac.za/science

SANSA at the forefront

The South African National Space Agency (SANSA) is a key player in the South African National Antarctic Programme and has several on-going space science and space weather related projects in Antarctica, as well as on Marion Island and Gough Island. These articles show some of their research.

Launching balloons in Antarctica

[The IPY banner over SANAE base in Antarctica. Image: Anton Feun]

[The team launching the first BARREL payload from the South African Antarctic base. Image: SANSA]

SANSA is particularly interested in Antarctic research because it is an ideal location for scientists to study space weather. The inward-curving magnetic field lines at the pole provide the perfect opportunity for space particle research. This opens up a unique window into geo-space, which allows SANSA to study the Earth's magnetosphere, ionosphere and other related space weather phenomena.
During summer take-over at the South African Antarctic base SANAE IV, the SANSA team assisted in a NASA-funded project as part of the first science campaign of the Balloon Array for Radiation belt Relativistic Electron Losses (BARREL) project. BARREL works in conjunction with NASA’s Van Allen Probes, two satellites currently orbiting Earth collecting data in the heart of the Van Allen radiation belts. The Van Allen belts are affected by space weather phenomena such as solar storms, the solar wind and coronal mass ejections. 32 Quest 9(2) 2013 This NASA image shows the Van Allen Radiation Belts and the Van Allen twin probes. Image: NASA The project aims to track where radiation goes when it escapes the belts, because the charged particles within the belts can damage space-based technologies such as communications and GPS satellites. Data collected from BARREL payloads complement the Van Allen satellite data, and help to fill in the space weather picture. The NASA/SANSA team successfully launched 13 helium-filled balloons, each measuring 40 m in height and carrying an identical payload, from the South African Antarctic base to an approximate altitude of 38 km. Seven additional balloons were also launched from the British base, Halley Bay. The main objective is detecting X-rays produced by precipitating relativistic electrons as they collide with neutral particles in the Earth’s atmosphere. This is best done over a period of 10 days in the thinner layers of our atmosphere, which is why the payloads are sent up with balloons. A second science campaign, consisting of an additional 20 payloads, is scheduled to take place during the 2013–2014 austral summer. Projects such as these are vital in understanding space weather conditions that affect satellites orbiting the Earth within the Van Allen radiation belts.
Precision farming trial findings: Satellite-based augmentation system yields positive results Above: This graph shows a comparison of the vertical error in GPS and SBAS modes. Image: SANSA Left: A tractor equipped with both the standard GPS and the SBAS antennas, as well as a radio antenna – used to conduct the precision farming trials. Image: SANSA SANSA’s navigation unit completed three satellite-based augmentation system (SBAS) trials during February and March 2013. Together with the trials, a lead user group meeting was held to confirm user requirements and illustrate the importance of this development to treasury. The first trial was held in Heidelberg and was based on precision farming. The second took place in Gauteng together with Tracker, and the final trial took place at the Kruger National Park. The trials aim to illustrate the necessity of an improved navigation system, by comparing the results of a normal GPS to those of the SBAS. For the precision farming trial, a tractor equipped with both the standard GPS as well as the SBAS was driven along specified lines on the farm. To test their accuracy, breaks were taken in between and the ‘same’ positions resumed – simulating a typical farming pattern. Results from the trial prove that the SBAS unit is indeed more accurate than the current GPS being used, and that the system could potentially eliminate several challenges faced by farmers. ‘Mainly wheat farmers make use of GPS alone for auto guidance of the tractors and variable rate application (VRA) of lime, pesticides and fertiliser,’ explains Eugene Avenant, SANSA Space Operations Chief Engineer and project representative. ‘The VRA controllers can control up to 16 nozzles on the beam and so for these applications, pass-to-pass accuracy is very important.’ In addition, farmers have a major problem with poor GPS repeatability – with the same passes of the tractor in the field giving different routes on the farm map.
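The pass-to-pass repeatability problem can be quantified by measuring the horizontal offset between fixes logged on repeated passes over the same line. A minimal sketch, assuming the haversine great-circle distance as the error metric; the function names and coordinates below are illustrative, not SANSA's actual trial data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pass_to_pass_error(first_pass, second_pass):
    """Mean horizontal offset (m) between matched fixes of two repeated passes."""
    offsets = [haversine_m(a[0], a[1], b[0], b[1])
               for a, b in zip(first_pass, second_pass)]
    return sum(offsets) / len(offsets)

# Made-up fixes roughly near Heidelberg for illustration only:
gps_pass_1 = [(-26.5000, 28.3500), (-26.5001, 28.3510)]
gps_pass_2 = [(-26.50002, 28.35003), (-26.50008, 28.35102)]
print(f"mean pass-to-pass offset: {pass_to_pass_error(gps_pass_1, gps_pass_2):.2f} m")
```

Running the same computation on GPS-only and SBAS-corrected logs of the same passes would show the repeatability improvement directly.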
‘It is hoped that SBAS will make a difference to yield monitoring and mapping and possibly result in vertical information being beneficial, as a number of farmers’ base maps are already in 3D,’ he adds. In South Africa, the main objectives of the dissemination activities of the SBAS awareness and training in South Africa Project (SATSA) are twofold. The first is to inform political stakeholders in government departments of the progress made related to SBAS training and trials in South Africa; and the second is to raise awareness of EGNOS for South Africa (EGSA) among potential user communities in the country. Demagnetising ships for the Navy The model ship is placed in the degaussing coil system designed to reduce the magnetic signature of the ship. Image: SANSA A steel-hulled ship is like a huge floating magnet with a large magnetic field surrounding it. The process of building a ship within the Earth’s magnetic field leads to a certain amount of permanent magnetism in the ship. When the ship moves, this field also moves and adds to or subtracts from the Earth’s magnetic field. Essentially the moving ship builds up a magnetic signature, which can trigger magnetic sensitive devices such as mines that are designed to detect these magnetic signatures. Larger ships have DC coil systems built into the ship in various locations, which create a field equal and opposite to the ship’s permanent magnetic field, a process known as degaussing. The ship is tested using a magnetic sensor to determine if it has been sufficiently degaussed. Image: SANSA However, using degaussing coils in a surface ship can only compensate the ship’s own magnetic field to a certain level and thereafter the vessel has to undergo a deperming procedure in a dedicated deperming facility. During the deperming procedure large coils are wrapped around a ship and DC currents are used to cancel the remaining permanent magnetism of the ship.
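Deperming of this kind is usually done with alternating-polarity current ‘shots’ of steadily shrinking amplitude, which walk the hull’s remanent field down toward zero. A minimal sketch of that idea; the decay factor, shot count and toy relaxation model are illustrative assumptions, not the parameters of the actual Flash D procedure:

```python
def deperm_schedule(start_amps=1000.0, decay=0.8, shots=12):
    """Alternating-polarity coil currents with geometrically shrinking amplitude."""
    return [start_amps * (-decay) ** n for n in range(shots)]

def remanent_field(currents, field=1.0, coupling=0.0005):
    """Toy model: each shot relaxes the remanent field halfway toward the applied one."""
    for i in currents:
        field = 0.5 * (field + coupling * i)  # crude relaxation step
    return field

shots = deperm_schedule()
print([round(i, 1) for i in shots[:4]])  # 1000.0, -800.0, 640.0, -512.0
print(f"remanent field after deperm: {remanent_field(shots):.4f}")
```

Because each shot reverses polarity at a smaller amplitude, the field overshoots by less each time and settles near zero, which is the intuition behind ‘slowly whittling down’ the ship’s permanent magnetism.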
The process is repeated a number of times, slowly whittling down the permanent magnetic field of the ship, making it ‘magnetically invisible’. The ship is then able to pass over mines and other magnetic sensitive devices without triggering them. Emile Lochner, a physics and engineering student currently working on his MSc at SANSA Space Science, has developed a small-scale model of the Flash D deperming procedure. The model demonstrates how the procedure can be used on a larger scale for applications such as degaussing or deperming ships for the Navy. Emile obtained a BSc in physics from Rhodes University, after which he made the leap to engineering and completed his Bachelor’s degree in electrical engineering at the University of Stellenbosch. SANSA provides opportunities for students to work with highly sensitive and specialised equipment in an engaging and friendly environment. Working with SANSA has allowed Emile to build a good rapport with fellow scientists and engineers in the field of space science. He describes his future goals as wanting to combine physics and engineering in the hope of developing new scientific instrumentation. Emile’s advice to aspiring engineers is not to be afraid of approaching people or of failure, as this is how you learn. SANSA to support radio astronomy infrastructure in Africa President Jacob Zuma with Russian President Vladimir Putin at the 5th BRICS Summit in Durban, South Africa. Photo courtesy of allafrica.com via Govt of South Africa/Flickr Following the signing of the RadioAstron space satellite agreement between the South African National Space Agency (SANSA) and the Russian Federal Space Agency (Roscosmos) at the 5th BRICS Summit held in Durban this year, SANSA will be responsible for the installation, operations and maintenance of the receiving antennae. The RadioAstron satellite was launched on 18 July 2011 and carries a radio telescope that will obtain images and coordinates of various radio-emitting objects.
As a single, virtual telescope, it will be the world’s largest radio telescope, with a ‘dish’ measuring approximately 390 000 km. The mission has an expected lifetime of five years and will support and improve investment in radio astronomy infrastructure in Africa and complement radio astronomy facilities such as the Square Kilometre Array (SKA), enhancing the continent’s reputation as a premier destination for radio astronomy. ‘SANSA’s Space Operations ground station will undergo an equipment upgrade to accommodate the operational requirements to support the Russian RadioAstron orbiting space telescope,’ says Eugene Avenant, Chief Engineer at SANSA Space Operations. Aligning with the National Development Plan, one of SANSA’s strategic goals is the positioning of South Africa as a recognised global citizen, to offer world-class and efficient services and societal benefits. ‘By participation in this international collaboration, SANSA will be in a position to contribute to job creation whilst fostering international relationships,’ Avenant adds. The RadioAstron project is an international collaboration led by the Astro Space Centre of the Lebedev Physical Institute (Russian Academy of Sciences) in Moscow. Other partners include the European Space Agency, the National Radio Astronomy Observatory (USA), the Tata Institute for Fundamental Research (India), and the Commonwealth Scientific and Industrial Research Organisation (Australia). SANSA’s role will include acting as a central point between Telkom (which has made an 18-m C-Band antenna available for the RadioAstron tracking and acquisition in South Africa), the Roscosmos system and the Hartebeesthoek Radio Astronomy Observatory (HartRAO). Raoul Hodges, Managing Director of SANSA Space Operations, explains this in more detail.
‘The control of the ground-based equipment and data relayed from the spacecraft via TCP/IP connections will be aggregated in a router/switch at the Telkom Earth station and relayed to SANSA Space Operations via fibre-optic connection. SANSA Space Operations will allocate a dedicated area with the required computer interfaces and apparatus to relay data to and from Roscosmos using terrestrial communication infrastructure.’ The idea is to complement the capability of ground-based very long baseline interferometry (VLBI) instruments with a space-based VLBI instrument. ❑ SOUTH AFRICAN ASTRONOMICAL OBSERVATORY South Africa’s premier research facility for optical and infrared astronomy. Navigate the southern skies with us Tours & visits of Sutherland telescopes Cape Town public lectures twice a month Astronomical information Science resources & activities for schools Astronomy career information In-service technical training Bursaries & scholarships The Southern African Large Telescope For more information contact us tel.: 021 447-0025 | fax: 021 447-3639 email: email@example.com | web: www.saao.ac.za Science news Centre of Excellence for Palaeosciences launched in South Africa The palaeosciences fraternity and academia have welcomed the launch of the South African Strategy for the Palaeosciences and the awarding of the Centre of Excellence (CoE) for the Palaeosciences of the Department of Science and Technology and the National Research Foundation to the University of the Witwatersrand and its collaborating institutions, namely the University of Cape Town, Iziko Museum in Cape Town, the National Museum in Bloemfontein, the Albany Museum of Rhodes University, and Ditsong Museum in Pretoria. Minister Derek Hanekom standing in front of the cast of Sediba. Image: Wits University A comparative assemblage: Chimp skeleton on the left; Sediba in the middle and a modern human skeleton on the right. Image: Wits University
The announcement was made recently at an event held at the University of the Witwatersrand’s Origins Centre. This is the culmination of two years of research and consultation, led by the Department of Science and Technology and the Department of Arts and Culture. ‘With our geographic location comes the responsibility to protect, preserve and develop knowledge about our abundant fossil wealth. This Strategy for the Palaeosciences sets out some of what the South African Government plans to do to meet its responsibility in this regard. I am confident that this centre we are launching today will make a substantial contribution towards this goal of positioning South Africa as a world leader in palaeosciences, collections and site management’, said the Minister of Science and Technology, Derek Hanekom, in his opening address. ‘The Centre of Excellence for the Palaeosciences is the 9th centre in the CoE programme since its launch in 2004. The establishment of the centre has its origins in the National Research and Development Strategy (2002), which identified a number of knowledge fields in which South Africa should aim to achieve international research excellence because of its geographical advantage’ said Dr Andrew Kaniki, Executive Director: Knowledge Fields Development of the National Research Foundation. Dr Andrew Kaniki, Executive Director: Knowledge Fields Development, National Research Foundation. Image: Wits University Wits University’s Prof.
Bruce Rubidge, who will be heading the centre, said that ‘the CoE partnership between Wits and our South African partner institutions will comprise some 30 scientists and many more students and technical personnel, as well as established international research partnerships, making this one of the largest palaeoscience collaborations in the world.’ The CoE launched today will manage a number of activities, including: • research focused on the creation and development of new knowledge and technology • education and training of the highest standard at master’s, doctoral and postdoctoral levels • information brokerage through providing access to a pool of knowledge and promoting knowledge sharing and transfer • networking through collaborating across national and international boundaries • service rendering in respect of analysis and policy for government, business and civil society. The Centres of Excellence Programme was established in 2004, and the CoE for the Palaeosciences is the ninth centre. The Department of Science and Technology has completed a framework for the opening of a call for an additional five CoEs in the 2012/13 financial year, at least one of which will be in the social sciences and humanities. The awarding of the five new CoEs will be completed before the end of the 2013/14 financial year, at which time there will be a total of 14 CoEs. ❑ COUNCIL FOR GEOSCIENCE MISSION: To provide expert information and services to improve the management of natural resources and the environment for the benefit of the society.
• Geological, Geotechnical, Geochemical, Metallogenic and Marine mapping Minerals Development • Construction Materials and Agricultural Minerals • Water-Resource Assessment and Protection • Environmental Geoscience • Engineering Geology and Physical Geohazards • Palaeontology • Laboratory Services • Geophysics • Seismology • Geographic Information Systems (GIS) • Information Databases • National Geoscience Library • Geoscience Museum • National Core Library A century of geological excellence 1912 - 2012 280 Pretoria Street, Silverton, Pretoria Private Bag X112, PRETORIA, 0001 Tel: +27 (0)12 841-1911 Fax: +27 (0)12 841-1221 www.geoscience.org.za Australopithecus sediba: New analysis of the remains of Australopithecus sediba shows that the species is a tantalising mixture of ancient australopithecine features and modern Homo features. Quest explains. The reconstructed skull and mandible of Australopithecus sediba. Image: Reconstruction by Peter Schmid, photo by Lee Berger courtesy of the University of the Witwatersrand Writing in the journal Science, Professor Lee Berger says that, ‘The site of Malapa, South Africa, has yielded perhaps the richest assemblage of early hominin fossils on the continent of Africa’. Fossils of Australopithecus sediba were first found in August 2008 and the species was named in 2010. Scientists have now established that these fossils date back to between 1.977 and 1.98 million years ago. At the time of its discovery it looked as though A. sediba could potentially be the ‘missing link’ between the australopithecines and the species of Homo that eventually gave rise to modern humans. On 12 April 2013 a series of six papers were published in Science by a team of South African and international scientists from the Evolutionary Studies Institute (ESI) at the University of the Witwatersrand (Wits) and 15 other global institutions. The findings that are highlighted in these papers take A.
sediba to the forefront of research into the origins of hominins – and of our own species, Homo sapiens. Composite reconstruction of A. sediba based on recovered material from MH1, MH2 and MH4 and based upon the most up-to-date research. As all individuals recovered to date are approximately the same size, size correction was not necessary. Femoral length was established by digitally measuring a complete femur of MH1 still encased in rock. For comparison, small-bodied female modern H. sapiens on left, male chimpanzee (Pan troglodytes) on right. Image: Lee Berger, courtesy of the University of the Witwatersrand A mosaic The six papers represent the culmination of more than four years of research into the anatomy of A. sediba based on the skeletons commonly referred to as MH1 and MH2, as well as the adult isolated tibia referred to as MH4. The papers are entitled: Dental morphology and the phylogenetic ‘place’ of Australopithecus sediba; Mandibular remains support taxonomic validity of Australopithecus sediba; The upper limb of Australopithecus sediba; Mosaic morphology in the thorax of Australopithecus sediba; The vertebral column of Australopithecus sediba; and The lower limb and the mechanics of walking in Australopithecus sediba, with the introduction entitled The Mosaic Anatomy of Australopithecus sediba. In essence, the six studies describe how the 2-million-year-old A. sediba walked, chewed and moved. As an example, A. sediba’s teeth are remarkably similar to human teeth. A mosaic of ancient and modern 2D reconstruction of the 2-million-year-old Australopithecus sediba based on fossils from the MH1, MH2 and MH4 skeletons from Malapa, South Africa. Image: Reconstruction by Peter Schmid, photo by Lee Berger, courtesy of the University of the Witwatersrand Casting technicians at the Evolutionary Studies Institute, University of the Witwatersrand cast elements of the sediba skeleton in order to prepare the standing reconstruction.
Image: Bonita de Klerk, courtesy of the University of the Witwatersrand Most australopithecines have large, prominent canines, but A. sediba’s are small, like ours are, according to Darryl de Ruiter and colleagues at Texas A&M University. Peter Schmid and his team, at the University of Zurich, Switzerland, found that A. sediba’s lower ribs sweep inwards, as ours do, which suggests that the species had a ‘modern’ tapering waist. This also allows an arrangement of abdominal muscles that allows more efficient walking. A tree-dwelling australopithecine However, in other ways, A. sediba is quite different from early humans – hence the mosaic. Jeremy DeSilva, from Boston University in Massachusetts, found that A. sediba has a far more flexible foot than modern humans do – something that would allow the species to grip tree trunks and branches. Scientists think that A. sediba was the australopithecine that spent the most time in trees. But, if australopithecines spent more time walking and less time in the trees than their ancestors, why was the most human-like, A. sediba, so well adapted to life in the trees? Could this body plan be evidence for a deeper (tree-dwelling) lineage in South Africa? This is the question that DeSilva and his colleagues are now trying to answer. There is potential evidence for this second option, according to a study led by Joel Irish at Liverpool John Moores University in the UK. Irish and his colleagues have compared the teeth of A. sediba with those of other hominins. They have found that there may be two distinct ancient groups of australopithecines – one in East Africa (including Lucy, A. afarensis) and one in South Africa, better adapted to climbing – and which ultimately became H. sapiens. Origins of modern humans In summary, Berger suggests that A. sediba provides us with the most comprehensive examination of the anatomy of a definitive single species of early hominin. 
‘This examination of a large number of associated, often complete and undistorted elements, gives us a glimpse of a hominin species that appears to be mosaic in its anatomy and that presents a suite of functional complexes that are both different from that predicted for other australopiths, as well as that for early Homo’. ‘Such clear insight into the anatomy of an early hominin species will clearly have implications for interpreting the evolutionary processes that affected the mode and tempo of hominin evolution and the interpretation of the anatomy of less well-preserved species,’ he says. ❑ Young Science Communicator’s Competition The Young Science Communicator’s Competition (YSCC) challenges young scientists and researchers under the age of 35 to communicate their work to audiences beyond their scientific peer community through a written article, a radio script or a viral video. It is run biennially by SAASTA, with the next round to be run in 2014/2015. YSCC aims to encourage the development of science communication skills in young scientists which will carry through their careers, and forms part of an overall initiative to encourage scientists to communicate their scientific research in a creative and innovative manner, thereby developing science communication skills across the SET sector. YSCC provides an opportunity to entice young scientists, who may not have had previous opportunity or incentive to communicate their work, and to expose them to the opportunities in science communication. This year’s winners were Morgan Trimble and Leon van Eck. Budongo Forest in Murchison Falls National Park, Uganda. Living in an area with a high diversity of species may promote mental health and help prevent disease. By Morgan Trimble Why conserve biodiversity? Your life could depend on it Last night I felt my anxieties melt away as I sat enjoying a sundowner in a friend’s back garden. Sure, both pinotage and social connections are known stress-relievers.
But I believe the biggest factor was that my friend’s treed garden overlooks a stunning view of a rushing river, complete with chirping birds, calling frogs, and a family of otters. It’s a stark contrast to my little flat in a bustling urban neighbourhood with a view of a paved parking area and a neighbouring apartment block. Many of us enjoy an occasional picnic in the park, a leisurely hike through the local nature reserve, or even a far-flung safari. It feels good to get out and breathe the fresh air. But could spending time in nature literally save your life? New research points in that direction. Back in 1984, a landmark study showed that patients recovering from surgery did so more quickly and with less pain medication if their hospital window had a view of trees rather than a building. Scientists have since discovered links between people’s access to so-called ‘greenspaces’, for example gardens or city parks, and their physical and mental health. People who live in areas with more greenspaces have a lower incidence of anxiety disorders and depression. They also experience lower levels of specific ailments, especially respiratory disorders and diseases associated with a lack of physical activity. Of course, this might be attributable to greenspaces reducing pollution and encouraging exercise. But recent research also links spending time in nature to specific physiological responses in our bodies that promote good health. In Japan, Shinrin-yoku, or strolling through a forest to bask in the atmosphere, has become a popular form of preventive medicine. Researchers have linked this ‘forest bathing’ to near immediate reduction in stress hormones, pulse rate and blood pressure, and relaxation of the nervous system. Forest bathing also improves immune function for at least a week. Interestingly, research also supports Above: On safari at Murchison Falls.
New research suggests spending time in nature can improve your mental and physical health, so a safari could be just what the doctor ordered. Above left: Waterfall along the hiking trail in the Rwenzori Mountains, Uganda. Spending time basking in forests has become a popular form of preventive medicine in Japan where it is known as Shinrin-yoku. Left: Enjoying the view of the Three Rondavels, Blyde River Canyon, South Africa. Could a magnificent view improve your mood and your overall health? Science suggests views of nature are better for us than looking at industrial landscapes. Images: Morgan Trimble a link between the psychological benefits of spending time in nature and biodiversity. The more species-rich an area is, the greater the increase in psychological well-being experienced by greenspace users. Still other research links the lack of biodiversity to an increase in health problems including asthma, allergies, and autoimmune disorders. Ecologists have long noted the link between degraded ecosystems, those that have lost species, and their susceptibility to collapse and invasion by exotic species that negatively affect ecosystem health. Recently, scientists have extended that theory by conceptualising the human body as the ‘ecosystem’, and its biodiversity as the menagerie of microorganisms it supports. Ecologist Ilkka Hanski and colleagues have found that people who live in homes surrounded by diverse plant life host a higher diversity of bacteria on their skin. These people also show decreased markers for inflammation and allergies. In contrast, individuals living in areas with low environmental biodiversity had fewer bacteria species on their skin and were more likely to have skin allergies. This might seem counterintuitive to people used to associating bacteria with illness, but in fact, our bodies rely on good bacteria to stay healthy, in part because they fight off bad bacteria.
For example, a reduced diversity and altered composition of microorganisms in the gut has been associated with allergies, diabetes, inflammatory bowel disease, and even obesity. What’s especially interesting, however, is the idea that living in an area of high biodiversity in the environment might promote a diverse and healthy community of microbes within our own ‘human ecosystems’ that helps us stave off disease. These new findings linking a healthy, diverse natural environment to our physical well-being can be added to the long list of reasons why it’s important to conserve biodiversity. As a society, we are becoming increasingly urbanised. Roughly two-thirds of the human population will be city-dwellers by 2050. It’s not enough that biodiversity is out there somewhere in a protected area. We also need to ensure that, as individuals, we connect often enough and meaningfully with nature. Promoting greenspaces in city planning is one option. But perhaps getting out of town and into nature this weekend, and as often as possible, is just what the doctor ordered. ❑ Morgan Trimble is a PhD student at the Conservation Ecology Research Unit at the University of Pretoria. Her research focuses on biodiversity in human-modified landscapes and the implications for people and for conservation. She likes to spend time in nature as often as possible, taking photographs and soaking up the atmosphere. Jungle fever: Brazil nuts, bees and orchids By Leon van Eck An iridescent bee of the genus Eufriesea visits the curious flowers of a Stanhopea orchid. Many orchids have evolved scents to attract specific insects, along with complicated floral structures to ensure pollination. Image: Daniel Jiménez, Lankester Botanical Garden, Costa Rica Every Brazil nut you’ve ever eaten has been collected from the Amazon jungle. Forget about wine and cheese – Brazil nuts represent the ultimate in terroir.
Without intact rain forest, the very landscape surrounding the tree, there would be no nuts at all. The Brazil nut tree (Bertholletia excelsa) can live to be more than five centuries old, reaching more than 40 m into the sky to become what’s known as an emergent – a true forest giant standing head and shoulders above the forest canopy below. The nuts (which are really just delicious, oily seeds) are encased in an enormous woody capsule that takes more than a year to mature on the tree. Imagine about 20 nuts arranged like the segments of a Terry’s chocolate orange, but wrapped in a cannonball. It’s downright dangerous to be under the tree when these palatable projectiles (which can weigh 2 kg) start falling to the forest floor. Around 20 000 tons of nuts are harvested each year. In the past, people who tried to farm the trees in large-scale orchards were disappointed to find that the trees almost never set seed. Something was missing. Those who tended trees within or right next to undisturbed forest had much more success. This practice is called forest gardening, an example of which is shown in the image above, with nut trees growing in a soybean field overhung by jungle. Brazil nut trees tower above a soybean field in Mato Grosso, Brazil. Without the intact rain forest visible beyond the soybeans, the Brazil nut trees will not be pollinated and are unable to produce any nuts. Image: Patrick Joseph It was clear that the trees need the forest like the forest needs the trees. This is because an intact ecosystem is required for the flowers of the Brazil nut tree to be pollinated. And that ecosystem includes amorous bees and bizarre orchids. When the short-lived flowers of certain orchids open up in the shadows of the forest canopy, it’s the equivalent of a holiday sale for the males of several species of metallic-looking insects known as euglossine bees. The bees have not come to score a good deal on a meal, for the orchids produce no nectar.
Instead, the male bees crawl all over the flowers, enticed by otherworldly fragrances produced by special scent glands hidden within. The male bees actively collect the scent molecules onto specialised hairs that cover their legs to form a sexy bee cologne which they’ll use to attract females at display sites elsewhere in the stifling forest. In their mad scramble, they’ll also pick up and deposit some orchid pollen, thereby intertwining the reproductive fate of the orchid with their own. Outdoing the talented perfumiers of Guerlain and Givenchy, the scent glands of different orchid species produce unique mixes of volatile chemicals. In turn, female euglossine bees are as particular as the mademoiselles of Paris: scent preferences are often highly species-specific. The result is that pollination only occurs between orchid plants of the same species, as males of a particular bee species typically only visit one species of orchid. Some bee species play perfumier themselves and collect scent from different orchid species, mixing their custom cologne on-the-go. In these cases, the orchids attach their pollen on different parts of the bees’ bodies to prevent cross-pollination. And what role does the female euglossine bee play in all of this? Well, she’s the one who pollinates the flowers of the Brazil nut tree. Without intact rain forest, there can be no orchids. And without orchids, no bees. And without bees, no Brazil nuts. So think about all that jungle lust when you’re snacking on your roasted nut mix this festive season, and thank those horny bees and bizarre orchids every time you pop a precious Brazil nut in your mouth. ❑ Leon van Eck is a postdoctoral fellow in the Department of Genetics at Stellenbosch University, where his current research focuses on the evolutionary arms race between cereal crops and their pests and pathogens. Van Eck has a PhD in plant genetics from Colorado State University and is passionate about agricultural biotechnology.
He believes public education is integral to continued food security through crop improvement. He currently teaches courses in genomics and plant biotech at Stellenbosch University. Born and raised in Pretoria, Van Eck enjoys hiking and exploring South Africa’s rich and biodiverse landscape. He occasionally blogs about these trips at www.geneticjungle.com.
IDC – financing South African innovation
The IDC’s Venture Capital Strategic Business Unit (SBU) manages a R750 million fund providing equity funding to start-up companies for the development of globally unique South African Intellectual Property (IP) – this being the key criterion for any application. Funding is provided in the form of ordinary shares and shareholder loans. There is no stipulated investment period, but the SBU’s objective is to achieve an exit opportunity within a reasonable time frame. The funding provided by the SBU facilitates completion of the development, followed by the commercialisation of technology-rich products. These innovations and inventions most often stem from academic researchers who have developed their work to a point where they have a desire to become entrepreneurs, and from innovators or inventors who want to move from tinkering with their ideas and prototypes in their backyards to fully commercialised businesses. Through its investments, the Venture Capital SBU plays a proactive role in driving industrial development in South Africa, having a meaningful impact through the development of new entrepreneurs and shifting the focus from large companies to SMEs. This is achieved through sustainable development of more knowledge-intensive industries for long-term growth and job creation as prioritised in the Government’s New Growth Path (NGP). The unit continues to be a proactive, value-adding partner to its clients, capable of producing huge development returns to the benefit of South Africa’s economy and citizens.
The critical investment criterion for all Venture Capital projects is that the IP must be owned by the company and, if not patentable, the product needs to provide a sustainable competitive advantage. The unit’s mandate allows for investment in projects across all industries, leading to sectoral growth and job creation. Recent South African inventions and innovations in the electronics, ICT, medical device and biotechnology sectors have proven particularly successful. Funding for a project can reach a maximum of R40 million over several years, with the initial investment limited to R15 million. The IDC takes a minority shareholding of between 25% and 50% depending on the SBU’s valuation of the business and the amount of funding required. The start-ups stand to benefit from the further strategic support, guidance and advice provided through a partnership relationship with the IDC. Telephone: 086 069 3888 Email: firstname.lastname@example.org To apply online for funding of R1 million or more go to www.idc.co.za
Working towards SunSmart schools in South Africa
Caradee Wright and Patricia Albers discuss ways in which schools can help their students understand the need for sun protection.
The results of the first ever SunSmart Schools 2012 Study are now being analysed by research teams at the Council for Scientific and Industrial Research (CSIR) and the Medical Research Council of South Africa (MRC). The study collected information about sun-related knowledge, attitudes and behaviours of South African schoolchildren, as well as sun protection practices at their schools, to help shape a SunSmart Awareness Programme for South African schools. Twenty-four primary schools from the nine provinces completed a school survey.
This young girl is playing in the shade, wearing a hat and is also protected by sunscreen.
Image: Caradee Wright
Two schools from each of Gauteng, KwaZulu-Natal, Western Cape and Northern Cape Provinces; three schools from Limpopo, Mpumalanga, North West and Free State Provinces; and four schools from the Eastern Cape Province participated. A total of 707 schoolchildren between the ages of 11 and 13 years answered a questionnaire. While none of the 24 schools that participated in the nationwide SunSmart Schools 2012 Study had a sun protection policy in place or a ‘no hat, play in the shade’ rule, 75% of the schoolchildren interviewed at these schools had heard about the Cancer Association of South Africa, which promotes sun protection. According to the United States Community Preventive Services Task Force, sun protection awareness and intervention programmes implemented in primary schools are effective and can increase sun-protective behaviours, decrease sun exposure, sunburn incidence and formation of new moles, and thereby reduce the risk of adverse health effects later in life. Schools are important environments for educating children about sun protection, and also for providing supportive environments to help children best protect themselves against excess sun exposure. Ways that schools can support sun protection policies include:
• encouraging hats and sun-protective clothing
• reminding students to use sun protection
• scheduling outdoor events outside the peak solar UVR hours of 10h00 – 15h00
Shade and sunscreens
Most schools in the study had provided shade in their playgrounds. This is an important environmental way of providing sun protection and should be used around swimming pools, sports fields and recreational areas at schools. Sunscreen is another important way of protecting against too much solar UVR exposure. Only two schools provided sunscreen for their students because of the costs involved. Three-quarters of schoolchildren said that they never, or only sometimes, applied sunscreen. Schoolchildren seldom used hats for sun protection. Only about half of the schools said that they formally taught about sun protection as a health issue.
Clouds provide some protection against UV radiation. Image: Caradee Wright
Left: The UV Index is a measure of the amount of UV radiation on a particular day and is related to cloud cover and the position of the Sun in the sky. Image: New Zealand SunSmart campaign
UVR exposure
Too much solar UVR exposure during childhood and adolescence has been associated with melanoma and non-melanoma skin cancer in later life. Excess UVR exposure is also associated with eye diseases, such as cataracts and pterygium, immune suppression and photoageing or wrinkling. More than 80% of students had not heard of melanoma and only 13% correctly identified it as being a form of skin cancer. Only 14% correctly noted that anyone can get melanoma. When asked what one can do to prevent skin cancer, 64% of students said avoid getting sunburnt, which shows that they correctly associated sun exposure with cancer. More than half of the children reported that they had been sunburnt last summer. Excess solar UVR exposure is a concern not only for fair-skinned children. Children with dark skin can also be sunburnt. However, in general, dark skin with higher melanin content is protective against sunburn and people with dark skin are less likely to get skin cancer. In this study, children self-reported as black (40% of children), white (27%), Indian/Asian (7%) or coloured (23%) (3% don’t know or missing). Children’s self-reported skin colour was mostly light brown (54% of children), white (21%) or brown (15%). Most children (94%) reported that they were born in South Africa and there were 269 boys and 434 girls (four children did not identify their gender).
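The UV Index mentioned in the image caption above groups readings into standard exposure categories. As a rough illustration, the grouping can be sketched in a few lines of Python; note that the category cut-offs here follow the WHO Global Solar UV Index scale and are not given in the article itself, and the function name `uv_category` is ours:

```python
# Classify a UV Index reading into the WHO Global Solar UV Index
# exposure categories (Low / Moderate / High / Very High / Extreme).
# Cut-offs are the standard WHO bands, not values from the article.

def uv_category(index: float) -> str:
    """Return the exposure category for a UV Index value."""
    if index < 0:
        raise ValueError("UV Index cannot be negative")
    if index <= 2:
        return "Low"
    if index <= 5:
        return "Moderate"
    if index <= 7:
        return "High"
    if index <= 10:
        return "Very High"
    return "Extreme"

# Print the category for a range of sample readings.
for reading in (1, 4, 7, 9, 12):
    print(reading, uv_category(reading))
```

Midday summer readings frequently sit in the upper bands, which is why the 10h00 – 15h00 shade advice above matters most in the middle of the day.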
SunSmart attitudes
Further research, to answer questions such as whether girls were more likely to be sunburnt than boys, will now be carried out. In general, the Grade 7 students who responded to the questionnaire did have a positive attitude towards sun-related issues, preferring not to have suntanned skin, but this may also be explained by the relatively large proportion of children with light brown or brown skin. Most students said they did not use sun protection regularly and only about half said that they stayed inside or in the shade to avoid getting sunburnt. Despite not wanting to get suntanned or sunburnt, this did not translate into positive sun-protective behaviours, and about half of learners reported being sunburnt last summer. Children either do not know how best to protect themselves from getting sunburnt, or they (possibly as well as their parents and teachers) do not understand the possible health effects well enough to alter their behaviour, or there is a lack of social and environmental support for protective practices. A very interesting finding was that the largest group of learners (43%) did not know what their skin would do if they went out in the sun without sun protection in summer for 30 minutes during the middle of the day. All of these findings will now be fully explored and used to develop a South African-appropriate SunSmart awareness campaign for primary schools.
Protect yourself
Ways to protect against sunburn:
• Avoid being outdoors in direct sunlight between peak UV hours of 10h00 – 15h00 without adequate sun protection.
• When you spend time outdoors, use sun protection such as a wide-brimmed hat, clothing that covers your skin, sunscreen and sunglasses.
• Try to find shade and sit or play in the shade rather than in direct sun.
Ideas for SunSmart-themed school projects:
• Students can measure their shadow at different times of the day to see how its length changes in relation to where the Sun is in the sky.
• Hold a competition to make a SunSmart awareness advert, either a skit/play or short video clip.
• Hold a competition to design a SunSmart mascot for your school.
• Consider creative ways of using available school shade for different activities. ❑
Caradee Wright is a Principal Scientist in the CSIR Climate Studies, Modelling and Environmental Health Research Group, where she leads the environmental health team. Caradee is also a Council Member of the National Association for Clean Air, co-chair of the South African Young Academy of Science and founder of the Environmental Health Research Network. Patricia Albers is a Scientist in the Environment and Health Research Unit at the Medical Research Council of South Africa.
Research that can change the world
Impact is at the core of the CSIR’s mandate. In improving its research focus and ensuring that it achieves maximum impact in industry and society, the organisation has identified six research impact areas:
• Energy – with the focus on alternative and renewable energy.
• Health – with the aim of improving health care delivery and addressing the burden of disease.
• Natural Environment – with an emphasis on protecting our environment and natural resources.
• Built Environment – with a focus on improved infrastructure and creation of sustainable human settlements.
• Defence and security – contributing to national efforts to build a safer country.
• Industry – in support of an efficient, competitive and responsive economic infrastructure.
The FameLab South Africa 2013 team: Prof. Janice Limson, Lorenzo Raynard (SAASTA), Robert Inglis (Jive Media Africa), Barend Jansen van Vuuren, Febe Wilken (second runner-up), Christopher Maxwell, Ntokozo Shezi, John Woodland (first runner-up), Ahmed Seedat, Charlotte Hillebrand, Michelle Knights, Charmaine Drury, Remo Chipatiso (British Council), Prof. Albert Modi, Dr Sandile Malinga, Prof.
Himla Soodyall, David Cordingley (British Council). Image: SAASTA/FameLab
Get famous … ‘sell’ your science at FameLab
By Daphney Molewa, SAASTA
FameLab, the international competition that gets people around the world talking science, has created excitement among South African participants and audiences. The nail-biting finals took place at SciFest Africa in Grahamstown on 15 March. In just three minutes, the finalists had to explain a science concept using only what they could carry onto the stage with them – and no PowerPoint. SAASTA partnered with the British Council and Jive Media Africa, along with fellow sponsors the South African Space Agency and the CSIR, to increase the visibility of the competition and to encourage participation. FameLab was open to entrants between 21 and 35 years of age working or studying in science, technology, engineering or maths and who are passionate about their science. The participants also attended a two-day masterclass training session with international FameLab trainer Malcolm Love. The winner of FameLab South Africa 2013, Michelle Knights, will represent the country at the international finals at the Cheltenham Science Festival in the UK in June this year, where young scientists from 25 countries will be competing. Michelle is a PhD student from Cape Town and a bursar of the SKA SA project. Her talk in the final rounds was about the search for life on planets elsewhere in the universe. ‘It was a fascinating and rewarding experience to take part in a science performance such as FameLab,’ Knights said afterwards. ‘It certainly challenges young scientists to make their work exciting at a new level!’
The rewards
To get to the top in South Africa, Michelle had to beat eight other regional winners from Johannesburg, Durban and Cape Town in front of a capacity audience at SciFest Africa. Not only did she win the trip to the Cheltenham Festival, all expenses paid, but she also pocketed a R10 000 cash prize.
First and second runners-up were Febe Wilken, a biotechnology student from the University of Pretoria, and John Woodland, a chemistry student from the University of Cape Town. They received R3 000 in cash. ‘The aim behind the competition is to encourage young scientists to talk about their work; improving their communication skills to enable them to engage with the general public or any non-science audience, which is of critical importance as science and technology impacts society as a whole. The competition also seeks out new spokespeople for science … to inspire a new generation of scientists and challenge public perceptions about what it means to be a scientist,’ says Lorenzo Raynard. ❑
Visit www.britishcouncil.org.za/famelab for more information.
The winner of FameLab South Africa 2013, Michelle Knights, with Robert Inglis, Director of Jive Media Africa, and Remo Chipatiso of the British Council. Image: SAASTA/FameLab
Easy rock spotting
Sasol First Field Guide to Rocks and Minerals of Southern Africa. By Bruce Cairncross. (Cape Town. Struik Nature. 2013.)
This is another handy little pocket book that you can easily carry with you – even if you venture into the wilder areas of the country at speed, rather than hiking. This is the latest addition to the Struik Nature series and allows you to identify and start to understand our geologically exciting region. The book may be small, but it details 30 minerals and 18 major rock types – with the focus on the best known or most commonly found. There is an outline of crystal type and structure, with a glossary of geological terms. The Mohs scale of mineral hardness is also explained. Each rock or mineral is illustrated in full colour and its chemical formula provided, along with its hardness, composition, specific gravity and crystal system. This is an excellent book for any budding geologist or anyone who simply would like to know more about the world around us.
Tracks and trails
On Track: Quick ID guide to southern and East African animal tracks. By Chris and Mathilde Stuart. (Cape Town. Struik Nature. 2013.)
This pocket-sized guide to animal tracks in southern and East Africa is another book that is handy to carry around when you are out hiking. In my experience it is not uncommon to stumble on tracks on mountain or veld trails, or even on our beaches, so this little book should be very useful. Each species in the guide has a ‘perfect’ drawing of a front and back track and, for most, a photograph as well. However, a clear track is often the exception and many tracks you find will be smudged or distorted. But the authors encourage you to keep trying – apparently if you follow a track, you will often come across a clear section. There is a good explanation of how to look at tracks so that you can get the best out of them and help you to identify the species. The book is divided up into ‘heavyweights’, cloven hooves, paws, hands and feet, non-cloven hooves, clusters, three toes, bird tracks: webbed, bird tracks: not webbed, tramline-like tracks and undulating trails. There is also a section on domestic animals. A thoroughly enjoyable little book.
Listening to the bush
Sounds of the African Bush. By Doug Newman and Gordon King. (Cape Town. Struik Nature. 2013.)
As the authors of this book/CD combination say, you often hear the animals of the bush before you see them – and you may not see them at all, of course. So knowing the sounds of the bush will allow you to identify some interesting mammals, birds, insects and amphibians. I now know that the frog that I hear all the time in the mountains above my home in Noordhoek in Cape Town is the clicking stream frog or Gray’s stream frog, for example. I have never managed to see one but they are abundant in Silvermine, part of the Table Mountain chain.
We also have fiery-necked nightjars in the area – which I have been lucky enough to see from time to time, and which call often on warm nights. The CD is probably the most important part of the combination, but the book itself has excellent colour photographs of the species whose calls you will hear, as well as distribution maps and some information about their biology.
An interactive journey
Mister King’s Incredible Journey Activity Book. By David du Plessis. (Cape Town. Struik Nature. 2013.)
This book is a companion to Mister King’s Incredible Journey and will allow younger children who enjoyed that book to find out more about Mister King and the other sea creatures that he encountered in an enjoyable and interactive way. The book starts with a section on how to get into drawing and also provides plenty of pages for colouring in. There are also picture exercises where youngsters can use the knowledge that they have gained through the Mister King book to consolidate their learning. All through the book are small inserts with information, for example ‘humpback whales sing songs that can last 20 minutes’ and ‘a boat must travel for five days from South Africa to reach Mister King’s island’. There are other exciting books in this series and you can go to www.activitiesforafrica to download these and share them freely.
The most beautiful garden in Africa
Kirstenbosch: The most beautiful garden in Africa. By Brian J Huntley, with principal photographer Adam Harrower. (Cape Town. Struik Nature. 2012.)
A visit to remember
Kirstenbosch: A visitor’s guide. By Colin Paterson-Jones and John Winter. (Cape Town. Struik Nature. 2013.)
This is the second edition of this book, first published in 2004, and has been re-released to mark the centenary of the most beautiful garden in Africa. Kirstenbosch is one of the most famous botanical gardens in Africa and indeed the world – and rightly so.
Kirstenbosch is situated on the lower eastern slopes of Table Mountain in Cape Town and attracts around 750 000 visitors a year. The gardens occupy just 40 hectares of an estate that occupies 532 hectares of mountainside. The balance is a nature reserve that supports fynbos, forest and a variety of animals and extends to Maclear’s Beacon, the highest point of the Cape Peninsula. The estate and the gardens are managed by the South African National Biodiversity Institute (SANBI), and the garden is the largest of SANBI’s nine botanical gardens in South Africa. This slim visitor’s guide provides a comprehensive look at the history of the garden – with lovely old photographs to illustrate this – the Cape flora, and the garden itself, including the Conservatory and Camphor Avenue. Of course the gardens change with the seasons and each is covered, showing the different flowering plants in spring, summer, autumn and winter. There is a short section on the mountain and some of its walks, as well as a section on the biodiversity of this unique region.
I received this most wonderful book late last year. I live in the shadow of Table Mountain, on the other side from Kirstenbosch, which is situated on the eastern slopes of the mountain. In the 30-odd years that I have lived in Cape Town, Kirstenbosch – both the garden itself and the mountain part of the estate – has been an integral part of my life. So it is with enormous pleasure that I review this lovely book. What a magnificent way to celebrate the garden’s centenary year – you open the book to the mountain and the gardens themselves, so rich and colourful are the photographs. This is the first comprehensive account of the garden since Compton’s 1965 Kirstenbosch, Garden for a Nation – now out of print. The book draws heavily on information included in the annual reports published regularly since 1914 by the National Botanical Gardens of South Africa, by its successor the National Botanical Institute from 1990 and, from 2004, SANBI.
The history of Kirstenbosch covers a period of several centuries in South Africa – in itself a rich and colourful way of looking at a sometimes troubled, but always compelling country. Of Kirstenbosch – then an estate belonging to the government – William J Burchell wrote in 1822, ‘The beauty here displayed to the eye could scarcely be represented by the most skilful pencil.’ There are 11 chapters, ranging from the founding of the gardens to the network of National Botanical Gardens in South Africa. The book is full of general information on the Cape Floral Kingdom, conservation, education and how to make Kirstenbosch financially sustainable in a country riven by poverty. Beautifully written and lavishly illustrated, this is a book that will keep you occupied for many hours – and should stimulate you to get out into your local environment, wherever you live – and to Kirstenbosch itself, if you are lucky enough.
Subscription
IMPORTANT NOTICE TO QUEST READERS
From March 2013 (Quest Vol 9/1) the distribution model of Quest will change to ensure optimum reach and greater reader satisfaction. If you want to keep on receiving your copy of Quest, kindly fill in your particulars below and post, fax or email to: Quest MAGAZINE, PO Box 72135, Lynnwood Ridge 0040, Pretoria, South Africa.
Fax: 086 576 9519, Email: email@example.com
SUBSCRIBE NOW TO HAVE FOUR Quest ISSUES MAILED TO YOU! And place your order for back issues (subject to availability).
Subscription form 2013
I would like to subscribe to 4 issues of Quest: Science for South Africa. Please fill in and return this form with proof of payment. My details are (please print):
Title: _______ Name/Initials: _______ Surname: _______
Company/university/institution/school: _______
Student number (to qualify for student rates): _______
Postal address: _______ Code: _______
Work tel.: ( ) _______ Fax: ( ) _______ Home tel.: ( ) _______
How did you hear about QUEST? _______
If you are already a subscriber, are you satisfied with the subscription service?
When would you like your subscription to begin? (Please fill in below.)
Volume Number: _______ Issue Number: _______
❑ Tick here if this is an institutional subscription
❑ Tick here if this is an individual/personal subscription
❑ Tick here if this is a gift subscription
Date: _______ Signature: _______
Subscription rates* (4 issues incl. postage). Please tick the appropriate rate.
South Africa: ❑ Individuals/institutions – R100.00 ❑ Students/Schoolgoers – R50.00
Neighbouring countries (Southern Africa): ❑ Individuals/institutions – R160.00 ❑ Students/Schoolgoers – R130.00
Foreign: ❑ Individuals/institutions – R180.00 ❑ Students/Schoolgoers – R140.00
* No VAT is charged, as the publisher is not registered for VAT.
SUBSCRIPTION CONTACT DETAILS
Payment options: please select and tick your payment option below.
CHEQUE: Enclose your cheque in South African rands made payable to ACADEMY OF SCIENCE OF SOUTH AFRICA (together with this completed Subscription Form).
DIRECT DEPOSIT: Use reference SUB and your name or institution’s name on your deposit slip and deposit your subscription as follows: Bankers: Standard Bank Hatfield; Account Number: 07 149 422 7; Branch code: 011545; Account name: Academy of Science of South Africa.
POST the completed Subscription Form together with your cheque or a legible copy of your cheque/deposit slip (include your name on the slip) to: Quest Magazine, PO Box 72135, Lynnwood Ridge, 0040, South Africa OR FAX the Subscription Form together with the deposit slip to: QUEST SUBSCRIPTIONS 086 576 9519. Subscription enquiries: tel. 012 349 6624 OR e-mail: firstname.lastname@example.org. For more information, visit www.questinteractive.co.za
FACULTY OF APPLIED AND COMPUTER SCIENCES
VUT Vaal University of Technology – LONG-TERM PARTNERSHIP
The diploma in Non-destructive Testing (NDT) is registered with the Department of Education. Two new laboratories have been completed in the past year and are fully equipped. In its endeavour to offer state-of-the-art NDT, the Department invites industries into a partnership that will include, among others, the following:
(a) Practical work that includes projects from industry.
(b) Moderation of practical examination papers.
(c) Commitment towards placing our students for in-service training.
(d) Company visits by staff and students.
(e) Part-time vacation jobs for students.
(f) Membership of the NDT advisory board.
For more information, please contact the persons below:
Dr I Sikakana, Head: Non-destructive Testing Technology and Physics, e-mail: email@example.com
Prof B R Mabuza, Executive Dean: Faculty of Applied and Computer Sciences, e-mail: firstname.lastname@example.org
Image courtesy of Professor David Block
Square Kilometre Array – A Proud Moment for South Africa
The hosting of the Square Kilometre Array radio telescope (SKA) represents a major scientific coup for South Africa and will serve as a key research advantage to the entire scientific community. Scientists the world over believe that the SKA will be the major radio astronomy programme of the 21st century, allowing us not only to understand the physics and the evolution of the universe and its structures, but also to address, probably for the very first time, new aspects of astrophysics, like the origin of extremely high-energy particles, cosmic jets, black holes, and the structure and evolution of magnetic fields in cosmic structures.
Studying Astrophysics and Astronomy at Wits
Wits University is keen to lead and participate in programmes related to the SKA and hosts two South African Research Chairs in Radio Astronomy – the SKA Chair in Radio Astronomy and the South African Research Chair in Theoretical Particle Cosmology. The SKA Chair at Wits is a unique initiative; a chair defined this clearly around a specific project has not existed before. It will help to generate cutting-edge research for the country and support local research projects. Wits now offers, for the first time, a BSc degree in the field of Astrophysics and Astronomy, an exciting new programme in the Faculty of Science. A BSc in astronomy opens the way for a variety of opportunities. If a research career is envisaged, the studies can be continued by completing an Honours, MSc and eventually a PhD. There are lucrative bursaries available within the framework of the SKA project.
But other opportunities exist as well; for instance, the National Astrophysics and Space Science Programme (NASSP) offers well-paid bursaries for obtaining Honours and Masters degrees. One interesting aspect of astronomy is its genuinely international research culture. Young researchers have the opportunity to work in international teams, publish in international journals, attend international conferences and, in many cases, arrange long-term visits to collaborators worldwide. Not only because of the SKA but also because of the operation of other new, world-class astronomical observatories in southern Africa, such as the Southern African Large Telescope (SALT) in Sutherland and the High Energy Stereoscopic System (H.E.S.S.) telescopes in Namibia, astronomy is a rapidly growing research field in South Africa and will offer a plethora of job opportunities in the near future.

Astronomy is a highly diversified field of research. In particular, the vast amounts of data produced by modern telescopes require a sound understanding of modern computer technology. Astronomers often take part in the development of hardware and software to generate, analyse and interpret the data. Problem-solving skills acquired during this process are in high demand in industry. People trained in astronomy can be found occupying well-paid and responsible positions in various industrial sectors, such as the insurance, banking and consulting industries.

For information and to apply: www.wits.ac.za Email: Admission.Senc@wits.ac.za Tel: 011 717 1030

Back page science

Graceful eruption
A solar prominence began to bow out and then broke apart in a graceful, floating style in a little less than four hours (16 March 2013). The sequence was captured in extreme ultraviolet light. A large cloud of the particles appeared to hover further out above the surface before it faded away. Image: NASA

Greenland melt ponds
This natural-colour image was acquired on 4 July 2010 by the Advanced Land Imager on NASA's Earth Observing-1 (EO-1) satellite. This glacial ice field lies in southwestern Greenland, not far from Disko Bay (Disko Bugt in Danish) and Davis Strait. The centre of the image is 68.91° North latitude and 48.54° West longitude. Image: NASA
Each spring and summer, as the air warms up and the sunlight beats down on the Greenland ice sheet, sapphire-coloured ponds spring up like swimming pools. As snow and ice melt atop the glaciers, the water flows in channels and streams and collects in depressions on the surface that are sometimes visible from space. These melt ponds and lakes sometimes disappear quickly – a phenomenon that scientists have observed firsthand in recent years. Source: NASA

Surveying Earth's interior with atomic clocks
Ultra-precise portable atomic clocks are on the verge of a breakthrough. An international team led by scientists from the University of Zurich shows that it may be possible to use the latest generation of atomic clocks to resolve structures within the Earth.
Have you ever thought of using a clock to identify mineral deposits or concealed water resources within the Earth? An international team headed by astrophysicists Philippe Jetzer and Ruxandra Bondarescu from the University of Zurich is convinced that ultra-precise portable atomic clocks will make this a reality in the next decade. In principle, atomic clock surveying is possible to great depth, provided that the heavy underground structure to be studied is large enough to affect the tick rates of clocks in a measurable manner.
An atomic clock. Image: University of Zurich
Source: University of Zurich

Researchers reveal more effective way of testing therapies to treat depression
Researchers have found a new method for studying depression in rats that mirrors an aspect of the mood-related symptoms of the condition in humans. It is hoped this new technique, published in Neuropsychopharmacology, will improve the efficacy testing of new therapies. Studies in people have recently revealed that depression changes the way a person perceives emotional information. These biases, termed emotional or affective biases, have also been shown to be modified by drug treatments that have efficacy in treating depression. Now, scientists at the University of Bristol have identified a new method of modelling a similar behaviour in rats and demonstrated that drugs which are antidepressant in man cause positive biases in this rat task. Importantly, the team have also shown that drugs which can cause depression in people cause a negative mood in rats, using this new modelling technique. Source: Bristol University

Cutting specific atmospheric pollutants would slow sea level rise
Decreasing emissions of black carbon, methane and other pollutants makes a difference. With coastal areas bracing for rising sea levels, new research indicates that cutting emissions of certain pollutants can greatly slow sea level rise this century. Scientists found that reductions in four pollutants that cycle comparatively quickly through the atmosphere could temporarily forestall the rate of sea level rise by roughly 25 to 50%. The researchers focused on emissions of four heat-trapping pollutants: methane, tropospheric ozone, hydrofluorocarbons and black carbon. These gases and particles last anywhere from a week to a decade in the atmosphere and can influence climate more quickly than carbon dioxide, which persists in the atmosphere for centuries. 'To avoid potentially dangerous sea level rise, we could cut emissions of short-lived pollutants even if we cannot immediately cut carbon dioxide emissions,' says Aixue Hu of the National Centre for Atmospheric Research (NCAR) in Boulder, Colorado, first author of a paper published recently in the journal Nature Climate Change. 'Society can significantly reduce the threat to coastal cities if it moves quickly on a handful of pollutants.' It is still not too late, 'by stabilising carbon dioxide concentrations in the atmosphere and reducing emissions of shorter-lived pollutants, to lower the rate of warming and reduce sea level rise by 30%,' says atmospheric scientist Veerabhadran Ramanathan of the Scripps Institution of Oceanography (SIO) in San Diego, a co-author of the paper. Ramanathan initiated and helped oversee the study. Source: National Science Foundation

MIND-BOGGLING MATHS PUZZLE FOR Quest READERS
Quest Maths Puzzle no. 25: Win a prize!
4C1 is a 3-digit number. How many digits could replace C for 4C1 to be divisible by 2? Send us your answer (fax, e-mail or snail-mail) together with your name and contact details by 15:00 on Friday, 16 August 2013. The first correct entry that we open will be the lucky winner. We'll send you a cool Truly Scientific calculator! Mark your answer 'Quest Maths Puzzle no. 25' and send it to: Quest Maths Puzzle, Living Maths, P.O. Box 195, Bergvliet, 7864, Cape Town, South Africa. Fax: 0866 710 953. E-mail: email@example.com. For more on Living Maths, phone (083) 308 3883 and visit www.livingmaths.com.
Answer to Maths Puzzle no. 24: Pile 1 = 8, Pile 2 = 4, Pile 3 = 6, Pile 4 = 2.

Quest 9(2) 2013 53

We create chemistry that lets cosy homes love windy days. Wind turbines produced with innovative solutions from BASF can withstand high-speed winds and severe weather conditions. Our products help make the production and installation of wind turbines more efficient, as well as making them durable – from the foundations to the very tips of the blades. In this way, we support the development of wind power as a climate-friendly source of energy. When high winds mean clean energy, it's because at BASF, we create chemistry. www.wecreatechemistry.com

Quest Science Magazine
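As a rough illustration of why clock precision matters for the atomic-clock surveying story above: a buried mass anomaly perturbs the local gravitational potential, and by gravitational time dilation a clock's fractional frequency shift is roughly the potential change divided by c squared. The sketch below is only a back-of-envelope estimate; the anomaly's radius, depth and density contrast are assumed values for illustration, not figures from the Zurich study.

```python
# Back-of-envelope estimate: fractional clock-rate shift caused by the
# gravitational potential of a buried spherical density anomaly.
# All anomaly parameters below are illustrative assumptions.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

radius = 1000.0    # anomaly radius, m (assumed)
depth = 2000.0     # distance from clock to anomaly centre, m (assumed)
drho = 500.0       # density contrast vs. surrounding rock, kg/m^3 (assumed)

# Excess mass of the sphere and its potential change at the clock's position
excess_mass = drho * (4.0 / 3.0) * math.pi * radius**3   # kg
delta_U = G * excess_mass / depth                         # m^2 s^-2

# Gravitational time dilation: df/f ~ delta_U / c^2
fractional_shift = delta_U / c**2
print(f"Fractional frequency shift: {fractional_shift:.1e}")
```

For these assumed numbers the shift comes out within an order of magnitude of the roughly 1e-18 stability of the best optical clocks, which is why the article describes such surveying as feasible only for sufficiently large, heavy structures.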