Customized genome editing – the ability to edit desired DNA sequences to add, delete, activate or suppress specific genes – has major potential for application in medicine, biotechnology, food and agriculture. Now, in a paper published in Molecular Cell, North Carolina State University researchers and colleagues examine six key molecular elements that help drive this genome editing system, which is known as CRISPR-Cas.
NC State’s Dr. Rodolphe Barrangou, an associate professor of food, bioprocessing and nutrition sciences, and Dr. Chase Beisel, an assistant professor of chemical and biomolecular engineering, use CRISPR-Cas to take aim at certain DNA sequences in bacteria and in human cells. CRISPR stands for “clustered regularly interspaced short palindromic repeats,” and Cas is a family of genes and corresponding proteins associated with the CRISPR system that specifically target and cut DNA in a sequence-dependent manner. Essentially, the authors say, bacteria use the system as a defense mechanism and immune system against unwanted invaders such as viruses. Now that same system is being harnessed by researchers to quickly and more precisely target certain genes for editing.
“This paper sheds light on how CRISPR-Cas works,” Barrangou said. “If we liken this system to a puzzle, this paper shows what some of the system’s pieces are and how they interlock with one another. More importantly, we find which pieces are important structurally or functionally – and which ones are not.”
The CRISPR-Cas system is spreading like wildfire among researchers across the globe who are searching for new ways to manipulate genes. Barrangou says that the paper’s findings will allow researchers to increase the specificity and efficiency in targeting DNA, setting the stage for more precise genetic modifications.
The work by Barrangou and Beisel holds promise in manipulating relevant bacteria for use in food – think of safer and more effective probiotics for your yogurt, for example – and in model organisms used in agriculture, including gene editing in crops to make them less susceptible to disease. The collaborative effort with Caribou Biosciences, a start-up biotechnology company in California, illustrates the focus of these two NC State laboratories on bridging the gap between industry and academia, and the commercial potential of CRISPR technologies, the researchers say.
Note: An abstract of the paper follows.
“Guide RNA Functional Modules Direct Cas9 Activity and Orthogonality”
Authors: Alexandra E. Briner, Kurt Selle, Chase L. Beisel and Rodolphe Barrangou, North Carolina State University; Paul D. Donohoue, Euan M. Slorach, Christopher H. Nye, Rachel E. Haurwitz, and Andrew P. May, Caribou Biosciences Inc.; Ahmed A. Gomaa, University of Cairo
Published: Oct. 16, 2014, online in Molecular Cell
Abstract: The RNA-guided Cas9 endonuclease specifically targets and cleaves DNA in a sequence-dependent manner and has been widely used for programmable genome editing. Cas9 activity is dependent on interactions with guide RNAs, and evolutionarily divergent Cas9 nucleases have been shown to work orthogonally. However, the molecular basis of selective Cas9:guide-RNA interactions is poorly understood. Here, we identify and characterize six conserved modules within native crRNA:tracrRNA duplexes and single guide RNAs (sgRNAs) that direct Cas9 endonuclease activity.
We show the bulge and nexus are necessary for DNA cleavage and demonstrate that the nexus and hairpins are instrumental in defining orthogonality between systems. In contrast, the crRNA:tracrRNA complementary region can be modified or partially removed. Collectively, our results establish guide RNA features that drive DNA targeting by Cas9 and open new design and engineering avenues for CRISPR technologies.
Mick Kulikowski | EurekAlert!
As the fifth largest freshwater lake in China, Lake Chaohu has drawn increasing attention due to the decline in water quality and the occurrence of massive algal blooms. We applied an algae pixel-growing algorithm to MODIS Terra or Aqua data (2100 images) to characterize surface floating algal bloom dynamics from 2000 to 2013 with respect to meteorological and lake nutrient conditions. The results show an increase in surface algal bloom coverage, frequency, and duration, with a trend toward earlier bloom formation. Importantly, spatial and temporal patterns in the historically less compromised eastern and middle lake areas show that water quality conditions are deteriorating. This has occurred at the same time as lake management has made a catchment-scale effort to reduce impact. Our results show that nutrient concentrations were not the main driver of inter-annual bloom variations. Local meteorological conditions, in particular wind speed and temperature, played an important role in the dynamics of floating algal blooms. This highlights important challenges for lake management.
Title: Fourteen-year record (2000-2013) of the spatial and temporal dynamics of floating algae blooms in Lake Chaohu, observed from time series of MODIS images (journal article)
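The abstract names an "algae pixel-growing algorithm" but does not describe it. Purely as a generic illustration of the pixel/region-growing idea applied to a floating-algae-type index (this is not the algorithm used in the Lake Chaohu study; the index and both thresholds are hypothetical), a minimal sketch might look like this:

```python
import numpy as np
from collections import deque

def grow_algae_pixels(index, seed_threshold, grow_threshold):
    """Generic region-growing segmentation of bloom pixels (illustration only).

    index: 2-D floating-algae-type index image (higher values = more algae-like).
    Seeds are pixels above seed_threshold; the region then grows into 4-connected
    neighbours whose index exceeds the looser grow_threshold.
    """
    bloom = index >= seed_threshold          # seed pixels
    queue = deque(zip(*np.nonzero(bloom)))   # start growing from every seed
    nrow, ncol = index.shape
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < nrow and 0 <= nj < ncol
                    and not bloom[ni, nj]
                    and index[ni, nj] >= grow_threshold):
                bloom[ni, nj] = True
                queue.append((ni, nj))
    return bloom  # boolean mask of bloom-covered pixels
```

Counting True pixels in the returned mask and multiplying by the pixel area would give a per-image bloom coverage, which is the kind of quantity summarized in the abstract.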
Original Research Article
Inter-Annual and Geographical Variations in the Extent of Bare Ice and Dark Ice on the Greenland Ice Sheet Derived from MODIS Satellite Images
- 1Department of Earth Sciences, Chiba University, Chiba, Japan
- 2Climate Research Department, Meteorological Research Institute, Tsukuba, Japan
Areas of dark ice have appeared on the Greenland ice sheet every summer in recent years. These are likely to have a great impact on the mass balance of the ice sheet because of their low albedo. We report annual and geographical variations in the bare ice and dark ice areas that appeared on the Greenland Ice Sheet from 2000 to 2014, using MODIS satellite images. The July monthly mean of the bare ice extent showed a positive trend over these 15 years and large annual variability, ranging from 89,975 to 279,075 km2, corresponding to 5 and 16% of the entire ice sheet, respectively. The extent of dark ice also showed a positive trend and varied annually, ranging from 3575 to 26,975 km2, corresponding to 4 and 10% of the bare ice extent, respectively. These areas vary geographically, and their expansion is greatest on the western side, particularly the southwestern side of the ice sheet. The bare ice extent correlates strongly with the monthly mean air temperature in July, suggesting that the extent was determined by snow melt. The dark ice extent also correlates with the air temperature; however, the correlation is weaker. The dark ice extent further correlates negatively with solar radiation. This suggests that the extent of dark ice is controlled not only by snow melt on the ice, but also by changes in the surface structures of the bare ice surface, such as cryoconite holes, which are associated with impurities appearing on the ice surface.
The surface albedo of the Greenland Ice Sheet has recently been reported to have declined significantly. Remote sensing data, in situ glaciological observations, and climate modeling have consistently shown large negative anomalies of albedo, with a negative trend over previous decades (e.g., Tedesco et al., 2011; Stroeve et al., 2013; Alexander et al., 2014). In particular, the albedo reduction was significant in 2010 (Tedesco et al., 2011) and in 2012 (Tedesco et al., 2013; Alexander et al., 2014), and the albedo in 2012 was recorded as the minimum since 2000 (Alexander et al., 2014). Because a lower reflectance enhances the absorption of solar radiation at the surface, the surface reflectance substantially affects melting. Recent mass loss of the Greenland ice sheet has been caused not only by climatic warming and terminus retreat at ocean calving fronts, but also by enhanced surface melting due to albedo reduction (e.g., Box et al., 2012). Therefore, to understand the factors affecting the mass balance of the ice sheet, it is important to assess changes in albedo.
Recent studies have shown that several factors contribute to the recent albedo reduction of the Greenland ice sheet. In snow-covered areas, the surface albedo is determined by the snow grain size at near-infrared wavelengths and by the concentration of light-absorbing impurities at visible wavelengths (Wiscombe and Warren, 1980; Aoki et al., 2000, 2003). Snow grain size increases through snow crystal metamorphism under warmer meteorological conditions, resulting in a reduction of albedo (Warren and Wiscombe, 1980; Wiscombe and Warren, 1980). Box et al. (2012) suggest that the recent albedo decline over the accumulation area of Greenland can be attributed to an increase in snow grain size.
The reduction of snow albedo has also been attributed to impurities, such as black carbon and mineral dust derived from the atmosphere. Dumont et al. (2014) suggested that snow surface albedo has been affected more by an increase in snow impurities than by snow grain growth after 2009. Furthermore, the albedo effect of impurities can be intensified by the increase in snow grain size; thus, these processes can provide a positive feedback to albedo reduction (Aoki et al., 2011).
Expanding bare ice surfaces also substantially contribute to the recent albedo decline because the ice surface albedo is lower than that of snow (Box et al., 2012; Stroeve et al., 2013). Persistent negative anomalies in summer albedo have occurred in recent years across western Greenland in areas below 2000 m, as indicated by remote sensing data as well as by automatic weather station data (e.g., Alexander et al., 2014). The decrease in albedo is caused by early melt onset, decreasing snow cover, and the expansion of the bare ice area (Stroeve et al., 2013). The expansion of the ice surface was observed especially over the ablation area of the southwestern part of the ice sheet (Tedesco et al., 2011; van As, 2011). Furthermore, the ice surface albedo declines further as impurities accumulate on the surface. A surface covered with abundant impurities is referred to as dark ice, and such surfaces have appeared on the bare ice of the southwestern ice sheet every summer (Wientjes and Oerlemans, 2010). The extent of dark ice surfaces can potentially have an impact on the total mass balance of the ice sheet since its low albedo strongly influences melt. The regional distribution of these dark ice surfaces remains, however, poorly constrained.
The ablation area surface albedo exhibits complicated spatial and temporal variations, depending on the surface ice type. Successive lowering of ice albedo and changing of the surface ice type after snow melt have been observed in in situ (Chandler et al., 2015) and remote sensing data (Moustafa et al., 2015). Superimposed ice, which has a relatively higher albedo than bare ice, appears first after snow melt; this is followed by dark ice, with impurities being exposed in the middle of the melt season. The albedo of the ice surface can further change as the ice surface structure changes, e.g., during the formation of cryoconite holes. Cryoconite holes are small cylindrical pits formed on the ice surface that collect impurities at their bottoms. Since the impurities in cryoconite holes become shielded from sunlight, their development causes a relatively higher area-averaged ice albedo than an ice surface uniformly covered with impurities (Bøggild et al., 2010; Chandler et al., 2015). The bare ice albedo determined by surface ice type has multi-modal distributions and varies from year to year (Moustafa et al., 2015). Pigmented ice algae growing on the ice surface can also reduce the albedo (Yallop et al., 2012; Lutz et al., 2013). Ice algae are cold-tolerant photosynthetic microbes that grow on melting ice surfaces outside of cryoconite holes; their carbon production has been reported to be greater than that in cryoconite holes (Cook et al., 2010). The appearance of ice algae, which varies spatially and seasonally on glaciers (Uetake et al., 2010; Takeuchi, 2013), possibly contributes to the albedo decline in ablation areas.
These physical and biological processes may have contributed to the recent decline of surface albedo in ablation areas; however, the dynamics of the impurities and their effect on ice albedo are still unclear. In this study, we report annual and geographical variations in the areas of bare ice and dark ice that appeared in the melting season on the Greenland ice sheet, using Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images obtained between 2000 and 2014, and discuss possible factors affecting their variations.
Data and Methodology
We used a 5 km sub-sampled calibrated radiance product of non-map-projected granule scenes (product ID: MOD02SSH) derived from the NASA Terra platform MODIS, which is available from NASA's Level 1 and Atmosphere Archive and Distribution System (LAADS) data archive (http://ladsweb.nascom.nasa.gov/). The study period was from 1 July 2000 to 31 July 2014. The reflectance in seven bands, covering the visible and infrared wavelength regions, was converted from the radiance of the spectral bands in the product using the transformation described in the MODIS L1B Product User's Guide (Toller et al., 2003). Cloud, ocean, and sea ice areas on each MODIS image were masked with the method described by Stamnes et al. (2007). We then created daily and monthly composite images of cloud-free pixels from the MODIS scene data. The pixels were combined with mosaic processing to cover Greenland in its entirety and re-projected to Lambert's Azimuthal Equal-area Projection using the nearest neighbor method. Overlapping pixels were averaged in the compositing process.
To classify the surface conditions of the ice sheet, we developed a snow/ice discriminator based on the method described by Stamnes et al. (2007). Although this method can classify cloud, ocean, and snow or ice regions, it is unable to distinguish the surface conditions on the ice sheet, such as snow and bare ice. We improved the method to discriminate snow, bare ice, and dark ice surfaces by adding new thresholds. When snow is transformed into ice, surface reflectance becomes lower as density and grain radius increase (Bergen, 1975). In order to discriminate snow and ice surfaces using reflectance in the visible and near-infrared bands, we used an image of the ice sheet during the melting season taken on 12 July 2012; this image has the maximum variability of surface conditions, including snow and ice, without cloud cover. Spectral reflectance of representative pixels of snow and bare ice on a MODIS image shows that snow surface reflectance was significantly higher than ice surface reflectance in all bands, especially at a wavelength of λ = 0.86 μm (band 2, Figure 1). Since visible and near-infrared reflectance become lower with increasing grain radius (Warren and Wiscombe, 1980), the difference at λ = 0.86 μm reflects surface conditions that differ with snow metamorphism. Bare ice was defined as the surface satisfying a threshold on the reflectance at λ = 0.86 μm (R0.86μm) (Equation 1). Furthermore, we determined the threshold for the classification between ice surfaces and bare soil areas with λ = 0.86 and 1.64 μm (band 6). Although reflectance at λ = 0.86 μm is sensitive to the difference between ice and snow, the reflectance of the dark ice surface is close to the reflectance of the bare soil surrounding the ice sheet (Figure 1). Reflectance at λ = 1.64 μm (R1.64μm) was significantly different between bare soil and bare ice surfaces, including dark ice. Weidong et al. (2002) showed that soil reflectance is low in the visible region and high in the near-infrared region and that the spectral reflectance properties do not change with the soil moisture content. Our results showed this same feature. Warren (1984) showed that the complex refractive index of ice is large in the shortwave infrared region; therefore, light absorption at λ = 1.64 μm is stronger than at λ = 0.86 μm. Thus, we used a normalized index based on λ = 0.86 and 1.64 μm in order to distinguish between the bare soil, bare ice, and dark ice surfaces. Bare soil surfaces can be recognized by a negative value of this index (Isnow) (Equation 2). In order to distinguish the dark ice surfaces within the bare ice areas, we used the reflectance at λ = 0.66 μm (R0.66μm), which is sensitive to impurities in both snow and ice. Dark ice was defined as the surface with R0.66μm below a threshold value. We tested different values of the threshold on the image (Figure S1); the chosen value was the most consistent with the "dark band" area in the southwestern part of the ice sheet in July 2012 indicated by Moustafa et al. (2015). It was also consistent with the dark ice reflectance from field observations in northwest Greenland (Bøggild et al., 2010).
Figure 1. Spectral reflectance of snow (68°56′33″N, 42°27′16″W, red circle), bare ice (68°05′10″N, 48°01′23″W, blue diamond), dark ice (69°32′25″N, 50°26′56″W, green square), and bare soil (68°23′02″N, 53°48′13″W, brown triangle), and RGB color composite image of bands 1, 4, and 3 taken on 12 July 2012, derived from MODIS.
In order to check the classification result, we compared our result with an image from Landsat 8/OLI (Figure 2). We applied the thresholds to Landsat and MODIS images of the region of southwestern Greenland (68.9–67.7°N, 49.7–48.0°W) on 12 July 2014. The classification from the Landsat image was in good agreement with that from the MODIS image. The relative errors in the area of each classification were 2.51% (snow), 0.16% (bare ice), and −3.30% (dark ice).
Figure 2. Comparison of reflectance image (λ = 0.66 μm) and distribution of snow (light blue), bare ice (blue), and dark ice (red) between Landsat 8/OLI (A,B) and MODIS (C,D) satellites, acquired on 12 July 2014.
We obtained MODIS image composite data for the month of July from 2000 to 2014, and the areas of the bare and dark ice surfaces of the ice sheet were obtained with the thresholds in the index or reflectance described above. We chose this month because the monthly averaged albedo shows a minimal value at this time of year (Stroeve et al., 2013), indicating the month with the most melting and usually showing the maximum expansion of the bare ice surface. Furthermore, in order to analyze geographical variations in albedo, we divided the ice sheet into four regions, namely, the northwestern, southwestern, northeastern, and southeastern regions, using the 72.5°N and 45°W lines of latitude and longitude, and obtained the extent of bare ice and dark ice within each region (Figure 3).
Figure 3. Distribution of snow (light blue), bare ice (blue), and dark ice (red) in July from 2000 to 2014. Red lines show the regional divisions (72.5°N and 45°W). The green frame in 2012 indicates the dark band.
In order to discuss the meteorological factors controlling the annual and geographical variations in the area of bare ice and dark ice, we used NCEP/NCAR reanalysis 2.5° grid data covering 2000–2014 to derive meteorological components (Kalnay et al., 1996).
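The threshold equations themselves (Equations 1 and 2 and the dark-ice criterion) are not reproduced in this excerpt, so the following sketch only illustrates how such a per-pixel classification could be wired together. The numeric thresholds are placeholders, and the form of the snow index is assumed to be a normalized difference of the 0.86 and 1.64 μm bands; none of this is the authors' published code.

```python
import numpy as np

# Hypothetical threshold values: the paper's Equations 1-3 are not reproduced
# in this excerpt, so these numbers are placeholders, not the published ones.
R086_BARE_ICE_MAX = 0.60   # below this, a snow/ice pixel is treated as bare ice
R066_DARK_ICE_MAX = 0.40   # below this, bare ice is counted as dark ice

def classify_surface(r066, r086, r164):
    """Classify already cloud/ocean-masked MODIS pixels over the ice sheet.

    r066, r086, r164: reflectance arrays at 0.66, 0.86 and 1.64 um (bands 1, 2, 6).
    Returns an int array: 0 = bare soil, 1 = snow, 2 = bare ice, 3 = dark ice.
    """
    # Assumed form of the snow index (Equation 2): a normalized difference of the
    # 0.86 and 1.64 um bands; negative values indicate bare soil.
    i_snow = (r086 - r164) / (r086 + r164)

    surface = np.ones_like(r066, dtype=np.int8)           # default: snow
    surface[i_snow < 0] = 0                                # bare soil
    ice = (i_snow >= 0) & (r086 < R086_BARE_ICE_MAX)       # Equation 1 analogue
    surface[ice] = 2                                       # bare ice
    surface[ice & (r066 < R066_DARK_ICE_MAX)] = 3          # dark ice criterion
    return surface

# Extents then follow by counting pixels and multiplying by the pixel area, e.g.
# dark_ice_km2 = np.count_nonzero(surface == 3) * pixel_area_km2
```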
We used the July monthly means of air temperature, downward shortwave radiation flux, and downward longwave radiation flux, together with the precipitation from January to May, from the reanalysis data. The areal average of each component was obtained for the four geographical regions. Only data for the area below an elevation of 2000 m on the ice sheet, where the bare ice appeared, were used.
The analyses of the MODIS satellite images showed that bare ice appeared in the melting season along the terminus of the ice sheet every year, but its extent varied annually (Table S1) and geographically (Figure 3). For example, the bare ice surface was smaller in 2000, 2004, and 2006, but larger in 2010, 2011, and 2012. Prominent bare ice surfaces were repeatedly observed in the southwestern and northeastern regions. In 2012, these surfaces extended approximately 30 km in the southwestern region, from the ice terminus toward the interior of the ice sheet. Dark ice surfaces generally appeared when the bare ice surface was extensive. Their distribution was, however, not uniform. For example, the dark ice in the southwestern region repeatedly appeared along the dark band, where albedo was lower than in the surrounding areas (Wientjes and Oerlemans, 2010; Wientjes et al., 2011), located in the middle part of the bare ice area (Figure 3-2012, green frame). On the other hand, in the northeastern region, the dark ice tended to appear near the ice terminus.
The extent of the bare ice surfaces showed a positive annual trend and a large variability over the 15 years (Figure 4A). The mean and standard deviation of the bare ice extent for the 15 years were 163,620 ± 53,580 km2. The maximal extent of the bare ice was 279,075 km2 in 2012, which is 3.1 times larger than the minimal extent of 89,975 km2 in 2000. The maximal and minimal extents corresponded to 16 and 5% of the entire ice sheet. The increasing rate of the bare ice extent in the whole region over the 15 years was 7158 km2 (4.4%) per year.
Figure 4. Annual variability of bare ice (A) and dark ice (B) extents in the whole region (blue circle) and the northwestern (red diamond), northeastern (purple inverted triangle), southwestern (orange square), and southeastern (green triangle) regions from 2000 to 2014.
The extent of dark ice also showed a positive annual trend over the 15 years and a variability ranging from 3575 to 26,975 km2 (Figure 4B). The mean and standard deviation of the dark ice extent were 10,180 ± 6,940 km2, which is 6.2% of the mean bare ice extent. The largest extent of dark ice occurred in 2012, when the bare ice extent was also maximal, and was 7.6 times larger than the minimal extent in 2000. The increasing rate was 703 km2 per year (7.6%). There is a significant correlation between the dark ice and bare ice extents (Pearson's correlation coefficient r = 0.89, P < 0.01). In spite of the positive correlation, the percentage of the dark ice extent relative to the bare ice extent varied from 4 to 11%. The first and second highest percentages occurred in 2010 (11%) and 2012 (9.7%). Although the annual variations of the bare ice and dark ice were synchronized with each other during most of the study period, they differed in some years. For example, the bare ice extent was more than 210,000 km2 from 2010 to 2012, while the dark ice extent showed drastic changes: it shrank from 23,400 km2 in 2010 to 13,025 km2 in 2011, and increased again to 26,975 km2 in 2012.
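As an illustration of how the trend rates and the bare-ice/dark-ice correlation quoted above can be computed from the annual July extents, the following sketch uses SciPy. The annual series themselves are in Table S1 and are not reproduced in this excerpt, so the demonstration data below are synthetic; only the function is meant to carry over to the real values.

```python
import numpy as np
from scipy import stats

def summarize_extents(years, bare_ice_km2, dark_ice_km2):
    """Linear trend of the bare ice extent and its correlation with dark ice.

    Returns (slope in km2/yr, slope as % of the mean extent per yr, Pearson r, p-value).
    """
    slope, _, _, p_trend, _ = stats.linregress(years, bare_ice_km2)
    rate_percent = 100.0 * slope / np.mean(bare_ice_km2)
    r, p = stats.pearsonr(dark_ice_km2, bare_ice_km2)
    return slope, rate_percent, r, p

# Synthetic demonstration only: the real annual values are in Table S1 of the
# paper; the text reports a trend of 7158 km2/yr (4.4%/yr) and r = 0.89 (P < 0.01).
years = np.arange(2000, 2015)
rng = np.random.default_rng(0)
bare = 160_000 + 7_000 * (years - 2000) + rng.normal(0, 30_000, years.size)
dark = 0.06 * bare + rng.normal(0, 3_000, years.size)

slope, rate, r, p = summarize_extents(years, bare, dark)
print(f"trend: {slope:.0f} km2/yr ({rate:.1f}%/yr); dark-vs-bare r = {r:.2f} (p = {p:.3g})")
```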
The bare ice and dark ice extents of the four regions showed distinctive annual variabilities (Figures 4A,B). The mean bare ice extent was largest in the southwestern region (52,603 km2, 32% of the total mean bare ice area); intermediate in the northwestern and northeastern regions (45,518 km2, 28% and 43,975 km2, 27%); and smallest in the southeastern region (21,520 km2, 13% of the total). Annual variations in the bare ice extent of each of the four regions showed that there are significant correlations between the regions, except between the northeastern and southeastern, and between the northeastern and southwestern regions (Table S2). Although all four regions showed a positive trend during the study period, the increasing rate was greatest in the southwestern region (5.8% per year), while it was lowest in the northeastern region (2.8% per year). The trend is statistically significant in the whole region, as well as in the southeastern and southwestern regions (p < 0.05).
The mean dark ice extents in the southwestern and northeastern regions were larger (4013 and 3133 km2, corresponding to 39 and 31% of the total dark ice extent, respectively) than in the northwestern and southeastern regions of the ice sheet (1530 and 1500 km2, corresponding to 15% each). However, the extent in each region changed considerably from year to year, and it was generally largest in the northeastern region before 2005, but in the southwestern region after 2005. The percentage of dark ice extent relative to bare ice extent was greater in the southwestern (7.6%), northeastern (7.1%), and southeastern regions (7.0%), but smaller in the northwestern region (3.4%). Annual variations of the dark ice extents in the four regions showed that there are significant correlations between the northwestern and southwestern, the northwestern and northeastern, and the southwestern and southeastern regions, but no significant correlation between other pairs (Table S2). The range of variation is largest in the southwestern region (from 575 to 15,025 km2), while it is smallest in the southeastern region (from 425 to 2975 km2). The maximum extent in the southwestern region occurred in 2012, and was ~26 times larger than the minimal extent in 2000. The dark ice extents in all four regions showed a positive trend during the study period. The increasing rate was greater in the southwestern (12%), northwestern (7.8%), and southeastern regions (3.1%), but smaller in the northeastern region (2.8%; Figure 4B).
Factors Controlling the Trend and Variability of Bare Ice Extent
According to Stroeve et al. (2013), the July monthly mean albedo of the Greenland ice sheet varied annually from 0.57 to 0.66 and showed a decreasing rate of -0.032 per decade from 2000 to 2012. The annual variability of albedo was consistent with the result for the bare ice extent shown by this study, and the bare ice extent also showed a positive trend during the period. This suggests that the expansion of the bare ice surface is a major contributor to the decline of surface albedo, as already noted by Box et al. (2012) and Tedesco et al. (2011). The bare ice extent showed a greater increasing rate on the western side compared with the eastern side, which is consistent with the trend of surface albedo reported by Alexander et al. (2014). The annual variability of the bare ice extent is likely to be controlled by meteorological factors. Stroeve et al. (2013) suggested that the decline of albedo is due to higher summer temperature anomalies.
Comparison of the bare ice extent with meteorological components of the ice sheet below 2000 m (temperature, precipitation, and radiation fluxes) from NCEP reanalysis data revealed that the bare ice extent showed a significant positive correlation with the mean July air temperature during the 15 years (r = 0.66, p < 0.01). This significant positive correlation was also found in all four geographical regions (Table 1). In contrast, the bare ice extent does not significantly correlate with precipitation (January to May) or with July mean shortwave or longwave radiation fluxes. Higher air temperatures cause faster melting of the snow cover and result in an upward shift of the snow line. This causes more bare ice surfaces to become exposed. The variability of the bare ice area is thus influenced by the location of the snow line, which is associated mainly with the mean air temperature in the melting season. The trend of the bare ice extent can also be explained by the air temperature. The increasing rate of the air temperature was higher on the western side (0.113 and 0.128 degrees per year for the northwestern and southwestern regions, respectively) than on the eastern side (0.049 and 0.098 degrees per year for the northeastern and southeastern regions, respectively; Figure 5).
Table 1. Correlation coefficients of dark ice extent, temperature, winter precipitation, shortwave radiation flux, and longwave radiation flux with the extent of bare ice and dark ice.
Figure 5. Annual variability of temperature, precipitation, shortwave radiation flux, and longwave radiation flux calculated from NCEP/NCAR reanalysis data (Kalnay et al., 1996) in the whole region (blue circle) and the northwestern (red diamond), northeastern (purple inverted triangle), southwestern (orange square), and southeastern (green triangle) regions from 2000 to 2014.
The geographical variability of the bare ice extent is due to the different climatic and topographic conditions of each region. July monthly mean air temperatures derived from NCEP reanalysis data showed that that of the southeastern region (2.0°C) was the highest of the four regions, followed in order by the southwestern, northwestern, and northeastern regions (0.6, −0.8, and −1.3°C). The higher temperature accounted for the longer melting season, resulting in a greater extent of bare ice. The larger extent of bare ice on the western side compared with the eastern side is probably due to the location of the drainage divides (Zwally et al., 2012). Since the drainage divides are located in the eastern part of the ice sheet, the distance from the ice terminus to the drainage divide is longer on the western side than on the eastern side of the ice sheet (Zwally and Giovinetto, 2001). Therefore, the slope gradient is gentler, and thus the area below the equilibrium line, which corresponds to the bare ice extent, is larger on the western side. The equilibrium line altitude of the ice sheet is generally higher on the western side than on the eastern side because the snow accumulation rate is lower on the western side (Zwally and Giovinetto, 2001). The annual variation in the equilibrium line altitude along a transect near Kangerlussuaq (the K-transect), located at 67°N in the southwestern region (van de Wal et al., 2005), was positively correlated with the bare ice extent from 2000 to 2011 (van de Wal et al., 2012). Therefore, the largest extent of the bare ice in the southwestern region is likely to be due to the higher equilibrium line altitude and gentler slope.
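A rough sketch of the kind of regional averaging described here (quadrants split at 72.5°N and 45°W, restricted to grid cells below 2000 m) is given below. The array names, the unweighted mean, the longitude convention, and the elevation input are assumptions made for illustration; this is not the authors' processing code.

```python
import numpy as np
from scipy import stats

def regional_july_mean(field, lat, lon, elevation, region):
    """Mean of a reanalysis field over one quadrant of the ice sheet, restricted
    to grid cells below 2000 m (where bare ice appears).

    field:     2-D array (lat x lon) of a July monthly mean (e.g. air temperature)
    lat, lon:  1-D coordinate arrays in degrees; longitudes assumed negative west
    elevation: 2-D surface elevation on the same grid (an assumed extra input)
    region:    'NW', 'NE', 'SW' or 'SE', split at 72.5 N and 45 W as in the paper
    Note: a simple unweighted mean is used here; a latitude (cos-weighted) mean
    would be more rigorous on a 2.5-degree grid.
    """
    lat2d, lon2d = np.meshgrid(lat, lon, indexing="ij")
    north = lat2d >= 72.5
    west = lon2d <= -45.0
    quadrant = {"NW": north & west, "NE": north & ~west,
                "SW": ~north & west, "SE": ~north & ~west}[region]
    mask = quadrant & (elevation < 2000.0)
    return field[mask].mean()

# With one such value per year, the sensitivity quoted in the text
# (r = 0.66, p < 0.01 for the whole ice sheet) would follow from:
# r, p = stats.pearsonr(july_temperature_series, bare_ice_extent_series)
```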
Relationship between Bare Ice and Dark Ice Surfaces
The positive correlation between the bare ice and dark ice extents suggests that the dark ice extent is controlled by the same factors as the bare ice; however, both the annual and geographical variability of the dark ice surface were not exactly the same as those of the bare ice surface. Although the dark ice extent in each region was positively correlated with the July mean air temperature over the 15 years, similarly to the bare ice extent, the correlations were weaker. Furthermore, the total extent of dark ice was not correlated with temperature (Table 1). The dark ice surface also showed a distinct geographical variability compared with that of the bare ice surface. A greater extent of dark ice was observed in the southwestern and northeastern regions, whereas the relative extent of the bare ice surface was greatest in the southwestern region and less so in the northwestern and northeastern regions of the ice sheet. This suggests that the extent of the dark ice is not simply associated with exposure of the bare ice surface, and that its distribution is not geographically uniform across the ice sheet. As the satellite images showed, dark ice did not appear uniformly on the bare ice surface, but tended to appear in certain parts of the area. As Wientjes and Oerlemans (2010) indicated, the dark ice in the southwestern region repeatedly appeared along the dark band in the middle part of the bare ice. The dark ice in the northeastern region tended to appear along the ice terminus. Furthermore, dark ice rarely appeared near the snow line when the bare ice surface had expanded. These facts suggest that the appearance of the dark ice is not simply determined by the location of the snow line, but is associated with the distribution of impurities or the physical properties of the bare ice surface.
Possible Factors Affecting the Extent of Dark Ice
As many studies have already shown, dark ice in the ablation area is caused by impurities covering the ice surface (e.g., Bøggild et al., 2010; Wientjes and Oerlemans, 2010; Wientjes et al., 2011; Chandler et al., 2015). Variability in dark ice extent is likely to reflect changes in such coverage. Studies reveal that the impurities affecting ice surface albedo on the Greenland ice sheet include black carbon, mineral dust, and biogenic organic matter (cryoconite and ice algae) (Bøggild et al., 2010; Yallop et al., 2012; Lutz et al., 2013). They have different origins and different accumulation processes on the ice surface. Goelles et al. (2015a) reported four sources of impurities: (1) from the atmosphere by dry or wet deposition, (2) from the surrounding tundra by regional wind transport, (3) from englacial material deposited in the past in the accumulation area, and (4) from in situ biological production of dark organic material. A model for impurity accumulation proposed by Goelles and Bøggild (2015b) indicated that englacial material is the main source. This is consistent with our result that the dark ice repeatedly appeared in almost the same locations, i.e., the middle part of the southwestern region and the lower part of the northeastern region (Figure 6). As previous work suggests, the concentration of englacial impurities is probably greater in the subsurface ice, and this may be due to the abundant deposition of windblown dust in the past, possibly during the early Holocene (Wientjes and Oerlemans, 2010; Takeuchi et al., 2014).
Geographical variability in the extent of dark ice may also be explained by differences in the concentration of englacial material. The higher proportion of dark ice in the southwestern, southeastern, and northeastern regions may be due to a greater content of impurities in the subsurface ice, while the lower proportion in the northwest may be due to lower concentrations of englacial impurities, although in situ data are not yet available.
Figure 6. Distribution of snow (light blue), bare ice (blue), and dark ice (red) in the northeastern region (A–C) and southwestern region (D–F) during July from 2010 to 2012.
The positive trend in dark ice extent may be caused by the exposure of impurities from subsurface ice. As suggested by Moustafa et al. (2015), an increase in ice melt can cause more exposure of englacial impurities and can extend the dark ice on the bare ice surface. According to Box et al. (2012), the melting of ice on the ice sheet increased by 261.5 mm w.e. per year from 2000 to 2011. However, the annual variability of dark ice extent cannot be explained only by the outcropping of englacial impurities. In particular, the change in dark ice extent from 2010 to 2012 appears to be too large in spite of the almost constant extent of the bare ice surface during these 3 years (Figure 6). As air temperature was also generally higher in all of these 3 years, the decrease in dark ice extent from 2010 to 2011 cannot be explained by the removal of surface impurities by melt water on the ice surface.
The extent of dark ice from 2010 to 2012 suggests that it is associated with the ice type of the bare ice surface. Studies have revealed that the surface ice type of the Greenland ice sheet varies widely in the ablation area, including clean ice, dirty ice, streams, and cryoconite holes (Chandler et al., 2015). The surface albedo of the bare ice varies with these surface ice types. For example, albedo is lower for dirty ice, which is ice covered with impurities. On the other hand, it is higher for ice with cryoconite holes, which aggregate impurities and sink them to the bottom of the holes, resulting in an increased area-averaged albedo (Bøggild et al., 2010). It is also higher for weathering crust, which is porous ice that develops in the shallow layer of the ablating ice surface (Muller and Keeler, 1969). Therefore, the extent of dark ice can drastically change with the transition of ice types, in particular the formation and decay of cryoconite holes, without a change in impurity abundance. According to previous studies, the development of cryoconite holes is controlled by the dominant heat source of surface ice melting (McIntyre, 1984). A cryoconite hole can develop when radiative heat is dominant, while it can decay when latent or sensible heat is dominant. In fact, the development and decay of cryoconite holes connected with changing weather conditions have been observed in the ablation ice area of the ice sheet (Chandler et al., 2015). Many cryoconite holes melt out completely during a period of warm, cloudy, or very windy weather, resulting in the dispersal of their debris on the bare ice surface. This might have been the case in 2011, when the lower solar radiation and warm conditions may have caused the decay of holes and the release of cryoconite onto the ice, thereby extending the areas of dark ice in 2012. In this way, the dark ice extent can change annually with meteorological conditions, even if the areal abundance of impurities on the ice sheet does not change.
Dark ice might extend further under conditions of lower solar radiation and warmer temperatures. Our results showed that the dark ice extent was not simply controlled by the same factors as the bare ice, and is likely attributable to impurity supply and surface ice structure. Therefore, it is important to develop an impurity accumulation model and to understand the physical processes of cryoconite holes and ice weathering in order to evaluate and predict future dark ice extents on the Greenland ice sheet. Estimates made using the impurity accumulation model of Goelles et al. (2015a) appear to be relevant, but there are still some uncertainties in using this model. For example, the movement of impurities across the ice surface by running melt water is uncertain, and this might affect the impurity abundance on the surface. The model also lacks microbial processes. In particular, ice algae may significantly affect the ice surface albedo. Blooms of pigmented ice algae, such as Ancylonema nordenskioldii, can change the ice to a dark color (Yallop et al., 2012). Moreover, the development of cryoconite granules, which are aggregates of organic and inorganic particles bound by filamentous cyanobacteria, might largely affect the retention time of all impurity particles, including mineral dust and black carbon, because the granules are resistant to running melt water (Takeuchi et al., 2001). The structures of the surface ice should also be studied, in particular the physical processes of their development and their relationship with ice surface albedo. Furthermore, as suggested by Irvine-Fynn and Edwards (2014) and Chandler et al. (2015), these ice structures could further interact with microbial production on the bare ice surface. Further study of ice surface processes may enable us to understand the dynamics of the extent of dark ice.
Analysis of MODIS satellite images revealed that the areas of bare ice and dark ice on the Greenland ice sheet showed a positive trend and large annual variability from 2000 to 2014. The extent of these areas also varied geographically. Comparison of this variability with NCEP reanalysis meteorological data showed a significant correlation between the extent of the bare ice and the July mean air temperature, suggesting that the variability of the bare ice extent is mainly controlled by air temperature, which affects snow melt and the location of the snow line on the ice sheet. The extent of the dark ice also correlated with the air temperature. However, the correlation was weaker than that of the bare ice, and the distribution of dark ice was not uniform within the bare ice areas, indicating that the extent of dark ice was not simply controlled by snow melt caused by high air temperature. According to previous studies, dark ice in the ablation area is caused by impurities covering the ice surface, and englacial material is the main source of the impurity mass on the ice surface. The positive trend in the extent of dark ice may be due to the increase in exposure of the englacial impurities due to recent ice melt. However, the annual variability in the extent of dark ice cannot be explained only by the outcropping of englacial impurities. The negative correlation between the extent of dark ice and shortwave radiation flux suggests that the extent of dark ice is associated with the ice type of the bare ice surface.
Intense solar radiation can cause the development of cryoconite holes and hide impurities within the ice, which results in a rise of the ice surface albedo. Thus, the extent of dark ice probably changes drastically with the transition of ice types, in particular the formation and decay of cryoconite holes, without changes in impurity abundance. The expansion of the extent of dark ice can further reduce the ice surface albedo and cause more melting of the ice sheet. Therefore, it is necessary to understand the process of dark ice expansion. In particular, there is a need to develop an impurity accumulation model, and to understand the physical and biological processes associated with cryoconite holes and ice weathering, in order to evaluate and predict the future extent of dark ice on the Greenland ice sheet.
RS designed the study, analyzed data, and wrote the paper. NT gave technical support and conceptual advice and wrote the paper. TA gave technical support and conceptual advice.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This study was supported by the Japan Society for the Promotion of Science (JSPS), Grant-in-Aid for Scientific Research (No. 23221004, 26247078, and 26241020), and the GCOM-C/SGLI Mission, the Japan Aerospace Exploration Agency (JAXA).
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/article/10.3389/feart.2016.00043
Alexander, P. M., Tedesco, M., Fettweis, X., van de Wal, R. S. W., Smeets, C. J. P. P., and van den Broeke, M. R. (2014). Assessing spatio-temporal variability and trends in modelled and measured Greenland Ice Sheet albedo (2000–2013). Cryosphere 8, 2293–2312. doi: 10.5194/tc-8-2293-2014
Aoki, T., Kuchiki, K., Niwano, M., Kodama, Y., Hosaka, M., and Tanaka, T. (2011). Physically based snow albedo model for calculating broadband albedos and the solar heating profile in snowpack for general circulation models. J. Geophys. Res. 116, D11114. doi: 10.1029/2010JD015507
Aoki, Te., Aoki, Ta., Fukabori, M., Hachikubo, A., Tachibana, Y., and Nishio, F. (2000). Effects of snow physical parameters on spectral albedo and bidirectional reflectance of snow surface. J. Geophys. Res. 105, 10219–10236. doi: 10.1029/1999JD901122
Bøggild, C. E., Brandt, R. E., Brown, K. J., and Warren, S. G. (2010). The ablation zone in northeast Greenland: ice types, albedos and impurities. J. Glaciol. 56, 101–113. doi: 10.3189/002214310791190776
Box, J. E., Fettweis, X., Stroeve, J. C., Tedesco, M., Hall, D. K., and Steffen, K. (2012). Greenland ice sheet albedo feedback: thermodynamics and atmospheric drivers. Cryosphere 6, 821–839. doi: 10.5194/tc-6-821-2012
Chandler, D. M., Alcock, J. D., Wadham, J. L., MacKie, S. L., and Telling, J. (2015). Seasonal changes of ice surface characteristics and productivity in the ablation zone of the Greenland Ice Sheet. Cryosphere 9, 487–504. doi: 10.5194/tc-9-487-2015
Cook, J., Hodson, A., Telling, J., Anesio, A., Irvine-Fynn, T., and Bellas, C. (2010). The mass–area relationship within cryoconite holes and its implications for primary production. Ann. Glaciol. 51, 106–110. doi: 10.3189/172756411795932038
Dumont, M., Brun, E., Picard, G., Michou, M., Libois, Q., Petit, J.-R., et al. (2014). Contribution of light-absorbing impurities in snow to Greenland's darkening since 2009. Nat. Geosci. 7, 509–512. doi: 10.1038/ngeo2180
Goelles, T., and Bøggild, C. E. (2015b). Albedo reduction caused by black carbon and dust accumulation: quantitative model applied to the western margin of the Greenland ice sheet. Cryosphere 9, 1345–1381. doi: 10.5194/tcd-9-1345-2015
Moustafa, S. E., Rennermalm, A. K., Smith, L. C., Miller, M. A., Mioduszewski, J. R., Koenig, L. S., et al. (2015). Multi-modal albedo distributions in the ablation area of the southwestern Greenland Ice Sheet. Cryosphere 9, 905–923. doi: 10.5194/tc-9-905-2015
Stamnes, K., Li, W., Eide, H., Aoki, T., Hori, M., and Storvold, R. (2007). ADEOS-II/GLI snow/ice products - Part I: scientific basis. Remote Sens. Environ. 111, 258–273. doi: 10.1016/j.rse.2007.03.023
Stroeve, J., Box, J. E., Wang, Z., Schaaf, C., and Barrett, A. (2013). Re-evaluation of MODIS MCD43 Greenland albedo accuracy and trends. Remote Sens. Environ. 138, 99–214. doi: 10.1016/j.rse.2013.07.023
Takeuchi, N. (2013). Seasonal and altitudinal variations in snow algal communities on an Alaskan glacier (Gulkana glacier in the Alaska range). Environ. Res. Lett. 8:035002. doi: 10.1088/1748-9326/8/3/035002
Takeuchi, N., Kohshima, S., and Seko, K. (2001). Structure, formation, and darkening process of albedo-reducing material (cryoconite) on a Himalayan glacier: a granular algal mat growing on the glacier. Arc. Antarc. Alp. Res. 33, 115–122. doi: 10.2307/1552211
Takeuchi, N., Nagatsuka, N., Uetake, J., and Shimada, R. (2014). Spatial variations in impurities (cryoconite) on glaciers in northwest Greenland. Bull. Glaciol. Res. 32, 85–94. doi: 10.5331/bgr.32.85
Tedesco, M., Fettweis, X., Mote, T., Wahr, J., Alexander, P., Box, J. E., et al. (2013). Evidence and analysis of 2012 Greenland records from spaceborne observations, a regional climate model and reanalysis data. Cryosphere 7, 615–630. doi: 10.5194/tc-7-615-2013
Tedesco, M., Fettweis, X., van den Broeke, M. R., van de Wal, R. S. W., Smeets, C. J. P. P., van de Berg, W. J., et al. (2011). The role of albedo and accumulation in the 2010 melting record in Greenland. Environ. Res. Lett. 6:014005. doi: 10.1088/1748-9326/6/1/014005
Uetake, J., Naganuma, T., Hebsgaard, M. B., Kanda, H., and Kohshima, S. (2010). Communities of algae and cyanobacteria on glaciers in west Greenland. Polar Sci. 4, 71–80. doi: 10.1016/j.polar.2010.03.002
van As, D. (2011). Warming, glacier melt and surface energy budget from weather station observations in the Melville Bay region of northwest Greenland. J. Glaciol. 57, 208–220. doi: 10.3189/002214311796405898
van de Wal, R. S. W., Boot, W., Smeets, C. J. P. P., Snellen, H., van den Broeke, M. R., and Oerlemans, J. (2012). Twenty-one years of mass balance observations along the K-transect, West Greenland. Earth Syst. Sci. Data Discuss. 4, 31–35. doi: 10.5194/essd-4-31-2012
van de Wal, R. S. W., Greuell, W., van den Broeke, M. R., Reijmer, C. H., and Oerlemans, J. (2005). Surface mass-balance observations and automatic weather station data along a transect near Kangerlussuaq, West Greenland. Ann. Glaciol. 42, 311–316. doi: 10.3189/172756405781812529
Wientjes, I. G. M., Van de Wal, R. S. W., Reichart, G. J., Sluijs, A., and Oerlemans, J. (2011). Dust from the dark region in the western ablation zone of the Greenland ice sheet. Cryosphere 5, 589–601. doi: 10.5194/tc-5-589-2011
Yallop, M. L., Anesio, A. M., Perkins, R. G., Cook, J., Telling, J., Fagan, D., et al. (2012). Photophysiology and albedo-changing potential of the ice algal community on the surface of the Greenland ice sheet. ISME J. 6, 2303–2313. doi: 10.1038/ismej.2012.107
Zwally, H. J., and Giovinetto, M. B. (2001). Balance mass flux and ice velocity across the equilibrium line in drainage systems of Greenland. J. Geophys. Res. 106, 33717–33728. doi: 10.1029/2001JD900120
Zwally, H. J., Giovinetto, M. B., Beckley, M. A., and Saba, J. L. (2012). Antarctic and Greenland Drainage Systems, GSFC Cryospheric Sciences Laboratory. Available online at: http://icesat4.gsfc.nasa.gov/cryo_data/ant_grn_drainage_systems.php
Keywords: bare ice, dark ice, MODIS, Greenland ice sheet, climate change
Citation: Shimada R, Takeuchi N and Aoki T (2016) Inter-Annual and Geographical Variations in the Extent of Bare Ice and Dark Ice on the Greenland Ice Sheet Derived from MODIS Satellite Images. Front. Earth Sci. 4:43. doi: 10.3389/feart.2016.00043
Received: 31 October 2015; Accepted: 04 April 2016; Published: 21 April 2016.
Edited by: Michael Lehning, EPFL and WSL Institute for Snow and Avalanche Research SLF, Switzerland
Reviewed by: Daniel Farinotti, Swiss Federal Institute for Forest, Snow, and Landscape Research WSL, Switzerland; Qiao Liu, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, China; Stefan Wunderle, University of Bern, Switzerland
Copyright © 2016 Shimada, Takeuchi and Aoki. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Rigen Shimada, email@example.com
†Present Address: Rigen Shimada, Earth Observation Research Center, Japan Aerospace Exploration Agency, Tsukuba, Japan; Teruo Aoki, Department of Earth Sciences, Okayama University, Okayama, Japan
Deep in the dusty catalogs of weather stations and meteorological offices all over the world are hidden treasures. They're easy to miss if you're not looking for them—often taking the form of, well, piles of moldy papers. But on those pieces of paper are hundreds of years of weather records—data that could make climate science far more accurate.
The International Environmental Data Rescue Organization (IEDRO) estimates that there are 100 million paper strip charts—records that list weather conditions—sitting in meteorological storage facilities throughout the world. That's about 200 million observations unused by scientists, data that could greatly improve their models. Now, a few small groups of scientists are trying to digitize these records, but they're facing all kinds of obstacles.
Climate scientists often bemoan the lack of historic records. There are the famous data sets: the Vostok ice core drilled in the 1970s that looks back about 400,000 years, the Keeling curve started in 1958, data from satellites that watch sea ice retreat starting around 1979. But these are spot points in specific places that only span a short amount of time. To truly understand climate, researchers need global records that reach back hundreds of years. Those are the kinds of records that data-rescue organizations like IEDRO are trying to recover. "There's data tied up in paper records that goes all the way back to the late 1800s," says Theodore Allen, a graduate student at the University of Miami and IEDRO volunteer. "So rather than working on observations from 1960 to present, we can work on things from 1880 to present." With that kind of information, climate scientists can make their models far more reliable.
The problem is that nobody wants to spend the time and money it takes to scan and input 100 million pieces of old, musty, often disorganized paper. "You'll show up to a place and you need dust masks on for days at a time," says Allen. "You're crouched over running through dusty, dirty weather records in a damp room. It's not very glamorous."
Different groups have different strategies for so-called "data rescue" projects. One group, called the Atmospheric Circulation Reconstruction over the Earth (ACRE), focuses on records in existing archives like the National Meteorological Services across Europe. They go into existing libraries to try and digitize the data that exists among the books. "It's a bit of a detective effort," says Rob Allen, the data rescue project manager at ACRE. "You have to be an archaeologist, detective, cartographer and climate scientist all in one."
The IEDRO teams take a different tack—searching for records in the back rooms of local weather stations all over the world. Instead of having their people do the scanning, IEDRO sets up weather stations with their own scanners, and hires local people to do the digitizing. IEDRO focuses on creating local jobs around climate digitization projects, and once a project is complete the scanners and other equipment are donated to the weather station.
With either approach, the task entails hundreds of hours of scanning and data inputting. So both ACRE and IEDRO have started toying with crowdsourcing the data-input side of things. Once the pages are scanned, they upload them to sites like Old Weather, where volunteers can help the scientists and get "promoted" in a little game they've created. The scope of this kind of data digitization has implications beyond climate science.
An IEDRO project isn’t truly finished until the data is used to inform something like a local weather model, or flood recommendations, or city planning. The ACRE team plugs recovered climate data into current weather models to create pictures of what the global climate was like in, say, 1916. Despite the clear value in this kind of work, keeping these projects alive has been hard. Everyone who works at IEDRO does so as a volunteer. Getting funding is difficult. For the cost of a single satellite, groups like IEDRO could digitize millions of pages. “There’s a lot more money and funding to produce modern climate products like these advanced climate models, than there is wallowing around in some third world pit of a storage shed and unearthing a bunch of paper records,” says Allen. Soon, Allen will launch his own little project called “Data Safari”—a chronicle of his motorcycle trip throughout southern Africa in search of climate records to digitize. He'll spend sixty days traveling 10,000 kilometers searching for scraps of paper that might improve the climate record. It's work he'll do, as usual, on a volunteer basis. "One day," he says, "I would love to have this turn into a job."
<urn:uuid:50d2a86b-a293-4cf5-81d2-0f851f7ba724>
3.921875
1,040
News Article
Science & Tech.
48.180237
95,543,044
A 'subtle' laser heating technique causes atoms to assemble into a rotating lattice

Scientists at Lehigh University, in collaboration with Lawrence Berkeley National Laboratory, have demonstrated the fabrication of what they call a new class of crystalline solid by using a laser heating technique that induces atoms to organize into a rotating lattice without affecting the macroscopic shape of the solid.

This image shows the results of scanning X-ray microdiffraction (μSXRD) with submicron spatial resolution: Laue diffraction (a) from an unconstrained Sb2S3 single crystal (top) and a laser-fabricated RLS crystal of Sb2S3 (bottom); magnified images (b) of selected reflection (852) extracted from Laue patterns (a, bottom) obtained for different points of the RLS crystal (c). Credit: D. Savytskii, H. Jain, N. Tamura & V. Dierolf

By controlling the rotation of the crystalline lattice, the researchers say they will be able to make a new type of synthetic single crystal and "bio-inspired" materials that mimic the structure of special biominerals as well as their superior electronic and optical properties. The group reported its findings today (Nov. 3) in Scientific Reports, a Nature journal, in an article titled "Rotating lattice single crystal architecture on the surface of glass." The paper's lead author is Dmytro Savytskii, a research scientist in the department of materials science and engineering at Lehigh. The other authors are Volkmar Dierolf, distinguished professor and chair of the department of physics at Lehigh; Himanshu Jain, the T.L. Diamond Distinguished Chair in Engineering and Applied Science and professor of materials science and engineering at Lehigh; and Nobumichi Tamura of the Lawrence Berkeley National Lab in Berkeley, California. The development of the rotating lattice single (RLS) crystals follows a discovery reported in March in Scientific Reports in which the Lehigh group demonstrated for the first time that a single crystal could be grown from glass without melting the glass. In a typical crystalline solid, atoms are arranged in a lattice, a regularly repeating, or periodic, three-dimensional structure. When viewed from any angle--left to right, up and down, front to back--a crystal-specific periodicity becomes evident. Glass, by contrast, is an amorphous material with a disordered atomic structure. Because they have no grain boundaries between interconnecting crystals, single-crystal materials often possess exceptional mechanical, optical and electrical properties. Single crystals give diamonds their brilliance and jet turbine blades their resistance to mechanical forces. And the single crystal of silicon from which a silicon chip is made gives it the superior conducting properties that form the basis for microelectronics. The periodicity, or repeating pattern, in a rotating lattice single crystal, said Jain and Dierolf, differs from the periodicity in a typical single crystal. "We have found that when we grow a crystal out of glass," said Jain, "the periodicity does not result the same way. In one direction, it looks perfect, but if you turn the lattice and look at it from a different angle, you see that the whole structure is rotating." "In a typical single-crystal material," said Dierolf, "once I figure out how the pattern repeats, then, if I know the precise location of one atom, I can predict the precise location of every atom. This is possible only because single crystals possess a long-range order. 
"When we grow an RLS crystal out of glass, however, we have found that the periodicity does not result the some way. To predict the location of every atom, I have to know not just the precise location of a particular atom but the rotation angle of the lattice as well. "Thus, we have to slightly modify the textbook definition of single crystals." The rotation, said Jain, occurs at the atomic scale and does not affect the shape of the glass material. "Only the string of atoms bends, not the entire material. We can see the bending of the crystal lattice with x-ray diffraction." To achieve this rotation, the researchers heat a very small portion of the surface of a solid glass material with a laser, which causes the atoms to become more flexible. "The atoms want to arrange in a straight line but the surrounding glass does not allow this," said Jain. "Instead, the glass, being completely solid, forces the configuration of the atoms to bend. The atoms move and try to organize in a crystalline lattice, ideally in a perfect single crystal, but they cannot because the glass prevents the perfect crystal from forming and forces the atoms to arrange in a rotational lattice. The beauty is that the rotation occurs smoothly on the micrometer scale. "Our laser imposes a degree of asymmetry on the growth of the crystal. We control the asymmetry of the heating source to impose this rotational pattern on the atoms." The group's ability to control the amount of heating is critical to the formation of the rotating lattice, said Jain. "The key to the creation of the rotating atomic lattice is that it occurs without melting the glass. Melting allows too much freedom of atomic movement, which makes it impossible to control the organization of the lattice. "Our subtle way of heating the glass overcomes this. We heat only the surface of the glass, not inside. This is very precise, very localized heating. It causes only a limited movement of the atoms, and it allows us to control how the atomic lattice will bend." Rotating lattices have been observed in certain biominerals in the ocean, said Jain and Dierolf, and it may also occur on a very small scale in some natural minerals as spherulites. "But no one had previously made this on a larger scale in a controlled way, which we have accomplished with the asymmetrical imposition of a laser to cause the rotating lattice," said Jain. "Scientists were not able to understand this phenomenon before because they could not observe it on a large enough scale. We are the first group to induce this to happen on an effectively unlimited dimension with a laser." Jain and Dierolf and their group are planning further studies to improve their ability to manipulate the ordering of the atoms. The researchers performed the laser heating of the glass at Lehigh and characterized the glass with micro x-ray diffraction on a synchrotron at the Lawrence Berkeley National Lab. They plan to perform further characterization at Berkeley and with electron microscopy at Lehigh. The project has been funded for six years by the U.S. Department of Energy. "This is a novel way of making single crystals," said Dierolf. "It opens a new field by creating a material with unique, novel properties." Article from March 2016: Lori Friedman | EurekAlert! Metal too 'gummy' to cut? 
<urn:uuid:31ca2ba4-d879-4ff6-904b-35eb4f1e2183>
3.078125
2,097
Content Listing
Science & Tech.
40.995952
95,543,059
Making Nanowires from Protein and DNA News Sep 07, 2015 For example, synthetic structures made of DNA could one day be used to deliver cancer drugs directly to tumor cells, and customized proteins could be designed to specifically attack a certain kind of virus. Although researchers have already made such structures out of DNA or protein alone, a Caltech team recently created—for the first time—a synthetic structure made of both protein and DNA. Combining the two molecule types into one biomaterial opens the door to numerous applications. There are many advantages to multiple component materials, says Yun (Kurt) Mou (PhD '15), first author of the Nature study. "If your material is made up of several different kinds of components, it can have more functionality. For example, protein is very versatile; it can be used for many things, such as protein–protein interactions or as an enzyme to speed up a reaction. And DNA is easily programmed into nanostructures of a variety of sizes and shapes." But how do you begin to create something like a protein–DNA nanowire—a material that no one has seen before? Mou and his colleagues in the laboratory of Stephen Mayo, Bren Professor of Biology and Chemistry and the William K. Bowes Jr. Leadership Chair of Caltech's Division of Biology and Biological Engineering, began with a computer program to design the type of protein and DNA that would work best as part of their hybrid material. "Materials can be formed using just a trial-and-error method of combining things to see what results, but it's better and more efficient if you can first predict what the structure is like and then design a protein to form that kind of material," he says. The researchers entered the properties of the protein–DNA nanowire they wanted into a computer program developed in the lab; the program then generated a sequence of amino acids (protein building blocks) and nitrogenous bases (DNA building blocks) that would produce the desired material. However, successfully making a hybrid material is not as simple as just plugging some properties into a computer program, Mou says. Although the computer model provides a sequence, the researcher must thoroughly check the model to be sure that the sequence produced makes sense; if not, the researcher must provide the computer with information that can be used to correct the model. "So in the end, you choose the sequence that you and the computer both agree on. Then, you can physically mix the prescribed amino acids and DNA bases to form the nanowire." The resulting sequence was an artificial version of a protein–DNA coupling that occurs in nature. In the initial stage of gene expression, called transcription, a sequence of DNA is first converted into RNA. To pull in the enzyme that actually transcribes the DNA into RNA, proteins called transcription factors must first bind certain regions of the DNA sequence called protein-binding domains. Using the computer program, the researchers engineered a sequence of DNA that contained many of these protein-binding domains at regular intervals. They then selected the transcription factor that naturally binds to this particular protein-binding site—the transcription factor called Engrailed from the fruit fly Drosophila. However, in nature, Engrailed only attaches itself to the protein-binding site on the DNA. 
To create a long nanowire made of a continuous strand of protein attached to a continuous strand of DNA, the researchers had to modify the transcription factor to include a site that would allow Engrailed also to bind to the next protein in line. "Essentially, it's like giving this protein two hands instead of just one," Mou explains. "The hand that holds the DNA is easy because it is provided by nature, but the other hand needs to be added there to hold onto another protein." Another unique attribute of this new protein–DNA nanowire is that it employs coassembly—meaning that the material will not form until both the protein components and the DNA components have been added to the solution. Although materials previously could be made out of DNA with protein added later, the use of coassembly to make the hybrid material was a first. This attribute is important for the material's future use in medicine or industry, Mou says, as the two sets of components can be provided separately and then combined to make the nanowire whenever and wherever it is needed. This finding builds on earlier work in the Mayo lab, which, in 1997, created one of the first artificial proteins, thus launching the field of computational protein design. The ability to create synthetic proteins allows researchers to develop proteins with new capabilities and functions, such as therapeutic proteins that target cancer. The creation of a coassembled protein–DNA nanowire is another milestone in this field. "Our earlier work focused primarily on designing soluble, protein-only systems. The work reported here represents a significant expansion of our activities into the realm of nanoscale mixed biomaterials," Mayo says. Although the development of this new biomaterial is in the very early stages, the method, Mou says, has many promising applications that could change research and clinical practices in the future. "Our next step will be to explore the many potential applications of our new biomaterial," Mou says. "It could be incorporated into methods to deliver drugs into cells—to create targeted therapies that only bind to a certain biomarker on a certain cell type, such as cancer cells. We could also expand the idea of protein–DNA nanowires to protein–RNA nanowires that could be used for gene therapy applications. And because this material is brand-new, there are probably many more applications that we haven't even considered yet."
<urn:uuid:d0b9a1eb-c762-4c6e-9a85-7680a0770300>
3.671875
1,218
News Article
Science & Tech.
30.390559
95,543,062
Pi Day - 14th March

In the whole history of mathematics, the calculation of the ratio of a circle's circumference to its diameter has been one of the biggest challenges. This value is represented by the Greek letter π (pi). From the ancient civilization of Babylonia to the present age of supercomputers, mathematicians have been trying to calculate this mysterious number. They have searched for exact fractional values, formulas and, more recently, patterns in the long string of digits starting with 3.14159…, which is generally shortened to 3.14.
<urn:uuid:07f4d32c-6e95-4c7a-a09c-381c353231f8>
3.234375
117
Personal Blog
Science & Tech.
50.90722
95,543,064
Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. It is necessary because, apart from relatively recent results concerning the hydrogen molecular ion (the dihydrogen cation), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict hitherto unobserved chemical phenomena. Computational chemistry is widely used in the design of new drugs and materials. Examples of such properties are structure (i.e., the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge density distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity or other spectroscopic quantities, and cross sections for collision with other particles. The methods used cover both static and dynamic situations. In all cases, the computer time and other resources (such as memory and disk space) increase rapidly with the size of the system being studied. That system can be one molecule, a group of molecules, or a solid. Computational chemistry methods range from very approximate to highly accurate; the latter are usually feasible for small systems only. Ab initio methods are based entirely on quantum mechanics and basic physical constants. Other methods are called empirical or semi-empirical because they use additional empirical parameters. Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born-Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable. In some cases, the details of electronic structure are less important than the long-time phase-space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are used, as they are computationally less intensive than electronic-structure calculations, to enable longer simulations of molecular dynamics. Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods, such as machine learning based on physicochemical properties. One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target. Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927. 
The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics - with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics - with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow. With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One major advance came with Clemens C. J. Roothaan's 1951 paper in Reviews of Modern Physics, largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals), for many years the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree-Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer. In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger. One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980. Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. 
Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. Computational chemistry has two different aspects: it can assist the experimental chemist, or it can challenge the experimental chemist to find entirely new chemical objects. Several major areas may be distinguished within computational chemistry. The words exact and perfect do not apply here, as very few aspects of chemistry can be computed exactly. However, almost every aspect of chemistry can be described in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules interacting with high-atomic-mass atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies of less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometres and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT). There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM). One molecular formula can represent more than one molecular isomer: a set of isomers. 
Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy, plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization. The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is estimated. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one, and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures. The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born-Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclei positions and the repulsion energy of the nuclei. A notable exception are certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems, the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are: The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations - being derived directly from theoretical principles, with no inclusion of experimental data - are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. 
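As a small illustration of how a stationary point is classified in practice (an editorial sketch, not taken from the original article; it assumes the Eigen C++ linear-algebra library and uses a placeholder Hessian rather than one computed from a real potential energy surface):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Placeholder 3x3 Hessian (second derivatives of the energy with respect to
    // nuclear coordinates); in practice it comes from a quantum-chemical calculation.
    Eigen::Matrix3d H;
    H <<  2.0, 0.1,  0.0,
          0.1, 1.5,  0.0,
          0.0, 0.0, -0.3;

    // Eigenvalues of the (symmetric) Hessian determine the character of the point.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(H);
    const Eigen::Index negatives = (solver.eigenvalues().array() < 0.0).count();

    if (negatives == 0)
        std::cout << "All eigenvalues positive: local minimum (all frequencies real)\n";
    else if (negatives == 1)
        std::cout << "One negative eigenvalue: transition structure (one imaginary frequency)\n";
    else
        std::cout << "Several negative eigenvalues: higher-order saddle point\n";
}
```

The same bookkeeping underlies the frequency analysis described above: zero negative eigenvalues means a minimum, exactly one means a transition structure, and more than one is usually of little chemical interest.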
If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made). The simplest type of ab initio electronic structure calculation is the Hartree-Fock method (HF), an extension of molecular orbital theory, in which the correlated electron-electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree-Fock limit. Many types of calculations (termed post-Hartree-Fock methods) begin with a Hartree-Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. To obtain exact agreement with experiment, it is necessary to include relativistic and spin orbit terms, both of which are far more important for heavy atoms. In all of these approaches, along with choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. Ab initio methods need to define a level of theory (the method) and a basis set. The Hartree-Fock wave function is a single configuration or determinant. In some cases, particularly for bond breaking processes, this is inadequate, and several configurations must be used. Here, the coefficients of the configurations, and of the basis functions, are optimized together. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface. A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way it is necessary to use a series of post-Hartree-Fock methods and combine the results. These methods are called quantum chemistry composite methods. Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree-Fock exchange term and are termed hybrid functional methods. 
Semi-empirical quantum chemistry methods are based on the Hartree-Fock formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree-Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics. In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use one classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to have relevance only when describing other molecules of the same class. Computational chemical methods can be applied to solid-state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate energies for the entire list of points in the Brillouin zone. Once the electronic and nuclear variables are separated (within the Born-Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behaviour of systems. The result of a molecular dynamics simulation is a trajectory that describes how the position and velocity of particles vary with time. QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes. 
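As a minimal sketch of the molecular-mechanics idea described above (an editorial illustration, not taken from any particular force field; the struct, units and numerical parameters are assumptions for demonstration only), a harmonic bond-stretch contribution to a force-field energy might be coded as:

```cpp
#include <cmath>
#include <vector>

// One harmonic bond-stretch term of a classical force field:
//   E_bond = 0.5 * k * (r - r0)^2
// with k and r0 supplied by the force-field parameterization
// (the values used by a caller here would be purely illustrative).
struct Bond {
    double k;   // force constant (assumed units: kJ/mol/nm^2)
    double r0;  // equilibrium bond length (nm)
    double r;   // current bond length (nm)
};

double bond_stretch_energy(const std::vector<Bond>& bonds) {
    double energy = 0.0;
    for (const Bond& b : bonds) {
        const double dr = b.r - b.r0;
        energy += 0.5 * b.k * dr * dr;   // harmonic-oscillator approximation
    }
    return energy;
}
```

A full force field sums many such terms (bonds, angles, torsions, non-bonded interactions), but each is an inexpensive classical expression of this kind, which is what makes long molecular dynamics simulations affordable.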
The atoms in molecules (QTAIM) model of Richard Bader was developed to effectively link the quantum mechanical model of a molecule, as an electronic wavefunction, to chemically useful concepts such as atoms in molecules, functional groups, bonding, the theory of Lewis pairs, and the valence bond model. Bader has demonstrated that these empirically useful chemistry concepts can be related to the topology of the observable charge density distribution, whether measured or calculated from a quantum mechanical wavefunction. QTAIM analysis of molecular wavefunctions is implemented, for example, in the AIMAll software package. Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in:
<urn:uuid:44d419ab-3593-4766-be6c-19582da2aa33>
3.421875
4,178
Knowledge Article
Science & Tech.
14.314345
95,543,081
As mentioned, the goal of a programmer in a modern computing environment is scalability: to take advantage of both cores on a dual-core processor, all four cores on a quad-core processor, and so on. Threading Building Blocks makes writing scalable applications much easier than it is with traditional threading packages. There are a variety of approaches to parallel programming, ranging from the use of platform-dependent threading primitives to exotic new languages. The advantage of Threading Building Blocks is that it works at a higher level than raw threads, yet does not require exotic languages or compilers. You can use it with any compiler supporting ISO C++. This library differs from typical threading packages in these ways: Most threading packages require you to create, join, and manage threads. Programming directly in terms of threads can be tedious and can lead to inefficient programs because threads are low-level, heavy constructs that are close to the hardware. Direct programming with threads forces you to do the work to efficiently map logical tasks onto threads. In contrast, the Threading Building Blocks runtime library automatically schedules tasks onto threads in a way that makes efficient use of processor resources. The runtime is very effective at load balancing the many tasks you will be specifying. Indeed, the alternative of using raw threads directly would amount to programming in the assembly language of parallel programming. It may give you maximum flexibility, but with many costs. Most general-purpose threading packages support many different kinds of threading, such as threading for asynchronous events in graphical user interfaces. As a result, general-purpose packages tend to be low-level tools that provide a foundation, not a solution. Instead, Threading Building Blocks focuses on the particular goal of parallelizing computationally intensive work, delivering higher-level, simpler solutions. Threading Building Blocks can coexist seamlessly with other threading packages. This is very important because it does not force you to pick among Threading Building Blocks, OpenMP, or raw threads for your entire program. You are free to add Threading Building Blocks to programs that have threading in them already. You can also add an OpenMP directive, for instance, somewhere else in your program that uses Threading Building Blocks. For a particular part of your program, you will use one method, but in a large program, it is reasonable to anticipate the convenience of mixing various techniques. It is fortunate that Threading Building Blocks supports this. Using or creating libraries is a key reason for this flexibility, particularly because libraries are often supplied by others. For instance, Intel’s Math Kernel Library (MKL) and Integrated Performance Primitives (IPP) library are implemented internally using OpenMP. You can freely link a program using Threading Building Blocks with the Intel MKL or Intel IPP library. Breaking a program into separate functional blocks and assigning a separate thread to each block is a solution that usually does not scale well because, typically, the number of functional blocks is fixed. In contrast, Threading Building Blocks emphasizes data-parallel programming, enabling multiple threads to work most efficiently together. Data-parallel programming scales well to larger numbers of processors by dividing a data set into smaller pieces. With data-parallel programming, program performance increases (scales) as you add processors. 
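To make the data-parallel style concrete, here is a minimal sketch of a TBB loop (an illustrative addition, not an example from this book; it uses the lambda form of tbb::parallel_for available in current TBB releases, whereas the book's own examples use explicit body classes, and the function and variable names are invented):

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cmath>
#include <cstddef>

// Applies a transformation to every element of `data` in parallel.
// TBB recursively splits the blocked_range into subranges (tasks), and the
// runtime scheduler maps those tasks onto the available hardware threads;
// the code never creates, joins, or manages a thread explicitly.
void transform_all(std::vector<double>& data) {
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] = std::sqrt(data[i]) + 1.0;
        });
}
```

Because the work is expressed over a range of data rather than a fixed number of functional blocks, the same code scales as more cores become available.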
Threading Building Blocks also avoids classic bottlenecks, such as a global task queue that each processor must wait for and lock in order to get a new task. Traditional libraries specify interfaces in terms of specific types or base classes. Instead, Threading Building Blocks uses generic programming, which is defined in Chapter 12. The essence of generic programming is to write the best possible algorithms with the fewest constraints. The C++ Standard Template Library (STL) is a good example of generic programming in which the interfaces are specified by requirements on types. For example, C++ STL has a template function that sorts a sequence abstractly, defined in terms of iterators on the sequence. Generic programming enables Threading Building Blocks to be flexible yet efficient. The generic interfaces enable you to customize components to your specific needs. Programming using a raw thread interface, such as POSIX threads (pthreads) or Windows threads, has been an option that many programmers of shared memory parallelism have used. There are wrappers that increase portability, such as Boost Threads, which are a very portable raw threads interface. Supercomputer users, with their thousands of processors, do not generally have the luxury of shared memory, so they use message passing, most often through the popular Message Passing Interface (MPI) standard. Raw threads and MPI expose the control of parallelism at its lowest level. They represent the assembly languages of parallelism. As such, they offer maximum flexibility, but at a high cost in terms of programmer effort, debugging time, and maintenance costs. In order to program parallel machines, such as multi-core processors, we need the ability to express our parallelism without having to manage every detail. Issues such as optimal management of a thread pool, and proper distribution of tasks with load balancing and cache affinity in mind, should not be the focus of a programmer when working on expressing the parallelism in a program. When using raw threads, programmers find basic coordination and data sharing to be difficult and tedious to write correctly and efficiently. Code often becomes very dependent on the particular threading facilities of an operating system. Raw thread-level programming is too low-level to be intuitive, and it seldom results in code designed for scalable performance. Nested parallelism expressed with raw threads creates a lot of complexities, which I will not go into here, other than to say that these complexities are handled for you with Threading Building Blocks. Another advantage of tasks versus logical threads is that tasks are much lighter weight. On Linux systems, starting and terminating a task is about 18 times faster than starting and terminating a thread. On Windows systems, the ratio is more than 100-fold. With threads and with MPI, you wind up mapping tasks onto processor cores explicitly. Using Threading Building Blocks to express parallelism with tasks allows developers to express more concurrency and finer-grained concurrency than would be possible with threads, leading to increased scalability. Along with Intel Threading Building Blocks, another promising abstraction for C++ programmers is OpenMP. The most successful parallel extension to date, OpenMP is a language extension consisting of pragmas, routines, and environment variables for Fortran and C programs. OpenMP helps users express a parallel program and helps the compiler generate a program reflecting the programmer’s wishes. 
These directives are important advances that address the limitations of the Fortran and C languages, which generally prevent a compiler from automatically detecting parallelism in code. The OpenMP standard was first released in 1997. By 2006, virtually all compilers had some level of support for OpenMP. The maturity of implementations varies, but they are widespread enough to be viewed as a natural companion of Fortran and C languages, and they can be counted upon when programming on any platform. When considering it for C programs, OpenMP has been referred to as “excellent for Fortran-style code written in C.” That is not an unreasonable description of OpenMP since it focuses on loop structures and C code. OpenMP offers nothing specific for C++. The loop structures are the same loop nests that were developed for vector supercomputers—an earlier generation of parallel processors that performed tremendous amounts of computational work in very tight nests of loops and were programmed largely in Fortran. Transforming those loop nests into parallel code could be very rewarding in terms of results. A proposal for the 3.0 version of OpenMP includes tasking, which will liberate OpenMP from being solely focused on long, regular loop structures by adding support for irregular constructs such as while loops and recursive structures. Intel implemented tasking in its compilers in 2004 based on a proposal implemented by KAI in 1999 and published as “Flexible Control Structures in OpenMP” in 2000. Until these tasking extensions take root and are widely adopted, OpenMP remains reminiscent of Fortran programming with minimal support for C++. OpenMP has the programmer choose among three scheduling approaches (static, guided, and dynamic) for scheduling loop iterations. Threading Building Blocks does not require the programmer to worry about scheduling policies. Threading Building Blocks does away with this in favor of a single, automatic, divide-and-conquer approach to scheduling. Implemented with work stealing (a technique for moving tasks from loaded processors to idle ones), it compares favorably to dynamic or guided scheduling, but without the problems of a centralized dealer. Static scheduling is sometimes faster on systems undisturbed by other processes or concurrent sibling code. However, divide-and-conquer comes close enough and fits well with nested parallelism. The generic programming embraced by Threading Building Blocks means that parallelism structures are not limited to built-in types. OpenMP allows reductions on only built-in types, whereas the Threading Building Blocks parallel_reduce works on any type. Looking to address weaknesses in OpenMP, Threading Building Blocks is designed for C++, and thus to provide the simplest possible solutions for the types of programs written in C++. Hence, Threading Building Blocks is not limited to statically scoped loop nests. Far from it: Threading Building Blocks implements a subtle but critical recursive model of task-based parallelism and generic algorithms. A number of concepts are fundamental to making the parallelism model of Threading Building Blocks intuitive. Most fundamental is the reliance on breaking problems up recursively as required to get to the right level of parallel tasks. It turns out that this works much better than the more obvious static division of work. It also fits perfectly with the use of task stealing instead of a global task queue. 
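The point above about reductions can be illustrated the same way (again a hedged sketch in the lambda form of tbb::parallel_reduce, not code from the book; nothing in it restricts the value type to a built-in type, only an identity value and an associative join are required):

```cpp
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstddef>

// Sums a vector with tbb::parallel_reduce. Unlike an OpenMP reduction clause,
// the accumulated value could just as well be a user-defined type, provided an
// identity value and an associative "join" operation are supplied.
double parallel_sum(const std::vector<double>& v) {
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, v.size()),
        0.0,                                        // identity for addition
        [&](const tbb::blocked_range<std::size_t>& r, double partial) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                partial += v[i];
            return partial;                         // partial result for this subrange
        },
        [](double a, double b) { return a + b; }    // join two partial results
    );
}
```

The divide-and-conquer scheduling happens inside the library: subranges are reduced independently and joined as the task tree collapses, with no scheduling policy for the programmer to choose.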
This is a critical design decision that avoids using a global resource as important as a task queue, which would limit scalability. As you wrestle with which algorithm structure to apply for your parallelism ( while loop, pipeline, divide and conquer, etc.), you will find that you want to combine them. If you realize that a combination such as a parallel_for loop controlling a parallel set of pipelines is what you want to program, you will find that easy to implement. Not only that, the fundamental design choice of recursion and task stealing makes this work yield efficient scalable applications. It is a pleasant surprise to new users to discover how acceptable it is to code parallelism, even inside a routine that is used concurrently itself. Because Threading Building Blocks was designed to encourage this type of nesting, it makes parallelism easy to use. In other systems, this would be the start of a headache. With an understanding of why Threading Building Blocks matters, we are ready for the next chapter, which lays out what we need to do in general to formulate a parallel solution to a problem.
<urn:uuid:0ed726f7-74f2-4609-932a-ad8bd588634d>
3.546875
2,242
Documentation
Software Dev.
33.607836
95,543,086
Introduction to Scalar and Stratified Flows

Perhaps the most important practical aspect of turbulent shear flows is their dominant effect on scalar fields such as temperature, density or chemical species. When turbulence exists, it tends to completely determine the mixing and diffusion of such quantities. Industrial flows with chemical reactions, combustion, and natural flows in the ocean and atmosphere usually involve turbulence constrained by forces, and complicated by factors, that laboratory studies often suppress; for example, stratification, rotation and shear. Unstratified, nonrotating, unsheared turbulence as described by the Batchelor (1967) classic book is so complex and poorly understood that many fluid dynamicists, and most undergraduate fluid mechanics textbooks, manage to avoid the subject completely. Papers in the present chapter on Scalar and Stratified Flows confront many of the awkward uncertainties attending turbulence in the "real world". Some excellent new tools exist today that were not available to Batchelor, and their impact is reflected in the following articles.

Keywords: Internal Wave, Stratified Flow, Stratified Fluid, Homogeneous Turbulence, Turbulent Shear Flow

References
- Batchelor, G. K. (1967): Theory of Homogeneous Turbulence. Cambridge University Press
- Dahm, W. J. A., Buch, K. A.: High resolution three-dimensional (256³) spatio-temporal measurements of the conserved scalar field in turbulent shear flows, this chapter
- Gerz, T., Schumann, U.: Direct simulation of homogeneous turbulence and gravity waves in sheared and unsheared stratified flows, this chapter
- Gibson, C. H.: Fossil two-dimensional turbulence in the ocean, this chapter
- Gibson, C. H. (1980): Fossil temperature, salinity, and vorticity turbulence in the ocean. In Marine Turbulence, J. Nihoul, ed., Elsevier Publishing Co., Amsterdam, 221–257
- Gibson, C. H. (1981): Fossil turbulence and internal waves. In American Institute of Physics Conference Proceedings No. 76: Nonlinear Properties of Internal Waves, Bruce West, ed., American Institute of Physics, 159–179
- Gibson, C. H. (1990): Scalar field topology in turbulent mixing. In Topological Fluid Mechanics, Proceedings of the IUTAM Symposium, Cambridge 1989, H. K. Moffatt and A. Tsinober, eds., Cambridge University Press, 85–94
- Melander, M. V., Hussain, F.: Cut-and-connect of two antiparallel vortex tubes: a new cascade mechanism, this chapter
- Nagano, Y., Tagawa, M.: Turbulence model for triple velocity and scalar correlations, this publication
- Phillips, O. M. (1969): The Dynamics of the Upper Ocean. Cambridge University Press
<urn:uuid:20a6b2ac-14e3-424b-8177-bdfbd465c5f2>
2.96875
624
Academic Writing
Science & Tech.
33.792248
95,543,091
Iceland's geothermal energy: Time Magazine's "Top 15 Green Websites"—Climate Change The Guardian newspaper was founded in 1821 in Manchester, England, and claims to have a "long history of editorial and political independence". "Climate Change" is part of their environmental information on the web and has companion categories of "Carbon Footprints", "Carbon Emissions" and "Fossil Fuels", among others. It's a great site for things that affect our air quality—short and long term—and the policy that gets debated because of this. For example, they had a news article on a California judge giving Bush 16 days to determine if the polar bear should be listed as endangered. You can calculate your carbon footprint if you live in England, and many of the articles are naturally about the UK. It has some interesting things and refreshing journalism from across the Atlantic.
<urn:uuid:c17cabab-da4a-4972-beb7-b8199c9a76bb>
2.765625
201
Personal Blog
Science & Tech.
36.1875
95,543,111
Sums of Squares

Our main aim in this chapter is to determine which integers can be expressed as the sum of a given number of squares, that is, which have the form n = x₁² + x₂² + ⋯ + x_k², where each xᵢ ∈ ℤ, for a given k. We shall concentrate mainly on the two most important cases, characterising the sums of two squares, and showing that every non-negative integer is a sum of four squares. We shall adopt two completely different approaches to this problem: the first is mainly algebraic, making use of two number systems, the Gaussian integers and the quaternions; the second approach is geometric, based on the fact that the expression x₁² + ⋯ + x_k² represents the square of the length of the vector (x₁, …, x_k) in ℝ^k. We shall therefore give two different proofs for several of the main theorems in this chapter. In mathematics, it is often useful to have more than one proof of a result, not because this adds anything to its validity (a single correct proof is enough for this), but rather because the extra proofs may add to our understanding of the result, and may enable us to extend it in different directions.

Keywords: Number System, Integer Lattice, Fundamental Region, Irreducible Element, Symmetric Convex
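To illustrate the algebraic approach concretely (an editorial aside, not part of the original excerpt), taking norms of a product of two Gaussian integers z = a + bi and w = c + di gives the classical two-square identity

\[
(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2,
\]

so a product of two sums of two squares is again a sum of two squares. Euler's analogous four-square identity, which arises in the same way from quaternion multiplication, plays the corresponding role in the proof of the four-square theorem.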
<urn:uuid:8f234915-133b-421c-9159-5fbd310a7c21>
3.53125
269
Truncated
Science & Tech.
39.716197
95,543,119
"The drill is one of Africa's most endangered primates and this is the first publication that analyses drill conservation status in detail across the majority of its range, in Cameroon." said Bethan Morgan, research scientist with the San Diego Zoo's Institute for Conservation Research. "We hope this study will provide a stark warningabout the general decline in drill populations while highlighting areas where long-term survival of this species is most likely." The study, which appeared in a recent issue of the International Journal of Primatology, indicates that as much as 80% of the remaining drill population resides in Cameroon. Of the 52 habitat areas where drill populations were counted, only four (Ebo, Ejagham, Kroup and Nta Ali) received high scores indicating the presence of sustainable populations. "Although the results of this study cause us a great deal of concern" said Ekwoge Abwe, coauthor of the study and manager of San Diego Zoo Global's Ebo Forest Research Project, "we are encouraged that it highlights the importance of the proposed Ebo National Park where we have been conducting a long-term and ongoing research and conservation program geared towards the protection of primate species and the reduction of the bushmeat practices that directly affect them." The San Diego Zoo Global Wildlife Conservancy is dedicated to bringing endangered species back from the brink of extinction. The Conservancy makes possible the wildlife conservation efforts (representing both plants and animals) of the San Diego Zoo, San Diego Zoo Safari Park, San Diego Zoo Institute for Conservation Research, and international field programs in more than 35 countries. The important conservation and science work of these entities is supported in part by The Foundation of the Zoological Society of San Diego. Christina Simmons | EurekAlert! Barium ruthenate: A high-yield, easy-to-handle perovskite catalyst for the oxidation of sulfides 16.07.2018 | Tokyo Institute of Technology The secret sulfate code that lets the bad Tau in 16.07.2018 | American Society for Biochemistry and Molecular Biology For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. 
<urn:uuid:7d89564f-be45-4404-b759-93539eb50550>
3.53125
1,002
Content Listing
Science & Tech.
36.854559
95,543,124
The web server’s job is to listen for requests for account details, validate input parameters, connect to CICS, pass the request on to the CICS application, format the returned data into HTML and return a web page to the user’s browser. All of these tasks can be handled by a special type of program running in the web server environment called a servlet.
Figure 8-4. Design for the Error page
In Chapter 9, we divide these tasks further among three distinct components:
- A Java servlet to process the request and connect to CICS
- A JavaBean to temporarily hold information about the customer
- JavaServer Pages (JSP) to automatically format the returned data into HTML
Let’s look at these web server components in more detail.
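Purely as an illustration of the request flow described above, here is a minimal, language-neutral sketch in Python; the chapter itself implements this with a Java servlet, a JavaBean and JSP, and the names and the validation rule below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CustomerBean:
    # Hypothetical stand-in for the JavaBean that temporarily holds customer data.
    account_number: str
    name: str = ""
    balance: str = ""

def handle_account_request(params: dict) -> str:
    """Sketch of the servlet's job: validate input, call the back end, render HTML."""
    account = params.get("account", "")
    if not account.isdigit():                      # validate input parameters
        return "<html><body>Error: invalid account number</body></html>"
    bean = CustomerBean(account_number=account)
    # In the real design this step connects to CICS and runs the application;
    # here it is faked with placeholder data.
    bean.name, bean.balance = "J. Smith", "1023.50"
    # The JSP's role: format the returned data into HTML.
    return (f"<html><body><h1>Account {bean.account_number}</h1>"
            f"<p>{bean.name}: {bean.balance}</p></body></html>")

print(handle_account_request({"account": "12345678"}))
```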
<urn:uuid:de07808f-0d70-43f9-a909-1bed76c3e654>
2.765625
163
Truncated
Software Dev.
37.185962
95,543,129
Geological Terms Beginning With "H" For terms beginning with other letters, please click below An external shape displayed by an individual crystal or an aggregate of crystals. A few examples of crystal habits are shown in the photo. Clockwise from top left: prismatic habit; geodic habit; banded habit; pisolitic habit. A haboob is an intense dust storm caused by a downburst of air that mobilizes loose silt and clay and carries them across the landscape as a wall of dust. Haboobs usually occur in arid areas where the surface is covered with fine-grained materials that are easily mobilized. They can occur without warning and travel at speeds of up to 60 miles per hour. The wall of dust can be up to 60 miles wide and up to 2 miles in height. The image shows a haboob that hit Phoenix, Arizona on August 22, 2003. Public domain image by Wikipedian Junebug172. The amount of time required for 1/2 of a radioactive isotope to decay into its daughter isotope. The mineral name for "rock salt." A chemical sedimentary rock that forms from the evaporation of ocean or saline lake waters. It is rarely found at Earth's surface, except in areas of very arid climate. It is often mined for use in the chemical industry or for use as a winter highway treatment. Some halite is processed for use as a seasoning for food. A small magnifying glass of about 10x power that is used in the field, office and laboratory by geologists to examine rock, mineral, fossil and other specimens. It is usually a folding device with a metal cover that, when closed, protects the lens from scratches and impact. Also known as a "hand magnifier," "pocket magnifier" or a "pocket lens." See this item in the Geology.com Store. A tributary to a U-shaped glacial valley which, instead of entering the valley at the same level as the main stream, enters at a higher elevation, frequently with a waterfall. These different stream levels are a result of the rapid downcutting of the larger glacier being much faster than the slower downcutting of the tributary stream. The photo shows a hanging valley in Tongass National Forest, Alaska. The valley walls are about 1000 feet high and the valley is nearly 2000 feet wide. Water that has a significant amount of dissolved calcium and magnesium ions. This water performs poorly with most soaps and detergents and leaves a scaly deposit in containers where it is heated or evaporates. It can frequently be improved through the use of home-based water treatment systems. |Dictionary of Geological Terms - Only $18.95 All scientific disciplines have an essential vocabulary that students and professionals must understand to learn and communicate effectively. A geology dictionary that is used regularly is one of the most important tools for developing professional competence. A good dictionary should be on the desk of every geologist and within easy reach. This dictionary is compact and inexpensive at only $18.95. More information. Hardpan, also known as caliche, is a surface or shallow layer in a soil or sediment in which the grains have been cemented together. Depending upon the degree of cementation, the layer might be thin and easily broken with a hammer, or it could be two or more meters thick and completely cemented. Hardpan is usually found in arid to semiarid areas where evaporation facilitates the precipitation of dissolved minerals in shallow sediments or soils. A hardpan layer can extend over hundreds of square kilometers and cause problems with drainage, agriculture, and construction. 
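As a quick numerical illustration of the half-life entry above (not part of the original glossary), the fraction of a parent isotope remaining after time t is (1/2)^(t/T), where T is the half-life:

```python
def fraction_remaining(t, half_life):
    """Fraction of the original parent isotope left after time t."""
    return 0.5 ** (t / half_life)

# Carbon-14 has a half-life of about 5,730 years.
for years in (5730, 11460, 22920):
    print(years, "years:", round(fraction_remaining(years, 5730), 4))  # 0.5, 0.25, 0.0625
```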
The upper part of a landslide's moving mass. It is located immediately below the scarp. Often when the scarp of a landslide becomes visible people will place soil on the head area to reestablish a smooth slope. This can be a mistake because it adds weight to the head and drives the slide. The upper portions of a drainage basin where the tributaries of a stream first begin flow. The movement of heat energy from the core of the Earth towards the surface. Heliodor is the name given to yellow to yellow-green gems of the beryl mineral group. They can be attractive, durable, high-clarity stones with a relatively low price. Surprisingly, they are infrequently seen in jewelry. An iron oxide mineral with a chemical composition of Fe2O3. It is the world's most important ore of iron. When crushed it forms a red powder that has been used as a pigment for thousands of years. Hemimorphite is a zinc silicate mineral that occurs in white, blue and greenish blue colors. It is a minor ore of zinc. It is sometimes cut as a gemstone. These lack durability and are used as a collector's gem or in jewelry that will be subject to light wear. Hessonite is a variety of grossular garnet that is rich in iron and manganese. It has an orange to red-orange to reddish brown color and is sometimes called "cinnamon stone." It is occasionally cut into faceted stones and used in jewelry. A narrow ridge with steeply inclined sides of nearly equal slopes. Formed by differential erosion of steeply dipping rock units. An unusual pillar of rock that remains after the differential weathering or erosion of horizontal rock layers of varying physical properties. These structures can be caused by weathering along joints, less resistant rock units being selectively weathered, remnants from stream erosion and other processes. The name has an African origin where people imagined hoodoos being evil spirits or creatures in the form of stone. A nonfoliated metamorphic rock that is typically formed by contact metamorphism around igneous intrusions. A small spatter cone that forms on the solidified surface of a lava flow where hot lava is still flowing below. An opening in the roof of the flow and pressure within can force a spattering of lava out of the opening. This lava can build up into a structure with a very unusual shape. An elongated block of high topographic relief that is bounded on two sides by steeply-dipping normal faults. Produced in an area of crustal extension such as the Basin and Range Province of the southwestern United States. The barren rock that surrounds a mineral deposit. It is a term that is more specific and less geographically extensive than "country rock." Shown in the photo is gold in a quartz vein (right side) enclosed in basalt (left side). A volcanic center located within a lithospheric plate that is thought to be caused by a plume of hot mantle material rising from depth and located above a "hot spot" on the outer core. A natural spring that delivers water to the surface that is of higher temperature than the human body. Hot springs form in areas where there is warm rock at shallow depth or where deep circulation brings hot waters up from deep within the earth. The image is a photo of Emerald Spring, a hot spring with a pool in Yellowstone National Park. The dark portion of a soil that consists of organic material that is well enough decayed that the original source material cannot be identified. The ability of a porous material to transmit a fluid. Also known as "permeability." 
A mining method in which water is sprayed onto alluvium or unconsolidated sediment under high pressure for the purpose of disaggregating the particles and washing them through a sluice in the hope of recovering gold, gemstones or other heavy mineral particles. The method often caused great environmental damage by disrupting the land and flushing enormous tonnages of sediment into drainage basins. The photo from USGS shows hydraulic mining in Malakoff Diggings in the foothills of the Sierra Nevada in the 1870s. Any organic chemical compound (gaseous, liquid or solid) that is composed of carbon and hydrogen. The term is frequently used in reference to fossil fuels, specifically crude oil and natural gas. The production of electrical energy through the use of flowing or falling water. A graph that shows the change of a water-related variable over time. Example: A stream discharge hydrograph shows the change in discharge of a stream over time. The movement of water between the atmosphere, ground and surface water bodies through the processes of evaporation, precipitation, infiltration, percolation, transpiration and runoff. Also known as the "water cycle." The science of Earth's water, its movement, abundance, chemistry and distribution on, above and below Earth's surface. A chemical reaction involving water that results in the breakdown of mineral material. Pertaining to hot water, the actions of hot water or the products produced by the actions of hot water. Mineral deposits that are formed by the actions of hot water or gases associated with a magmatic source. A local metamorphism that occurs when hot waters and gases move through subsurface fractures and alter the minerals in the surrounding rocks. A deposit of minerals precipitated in a fracture by the actions of hot water or gases associated with a magmatic source. Many metallic ores and gemstone deposits form in hydrothermal veins. A hot spring on the sea floor, usually near mid-ocean ridges, that discharges hot water laden with dissolved metals and dissolved gases. When these hot fluids contact the cold ocean water the dissolved materials precipitate, producing a dark plume of suspended material. The water discharged from these springs is sea water that percolates down into the earth through fissures in the sea floor. This water is heated and picks up dissolved gases and metals as it interacts with the hot rocks and magma at depth. Also known as a "black smoker." Extremely salty; water which has a salinity much higher than average sea water is said to be hypersaline. (Average sea water contains about 35 g/L of dissolved sodium chloride.) A point beneath earth's surface where the vibrations of an earthquake are thought to have originated. Also known as the focus. |More General Geology| |What Does a Geologist Do?| |Land Below Sea Level| |Divisions of Geologic Time| |What Is Earth Science?| More From Geology.com:
<urn:uuid:6c370707-f2ee-4baf-b17c-bb2c5579a43c>
3.65625
2,121
Structured Data
Science & Tech.
44.139612
95,543,145
Structure of Random Materials
Light scattering and small angle x-ray scattering results are reported for a variety of random materials. Random processes such as polymerization and aggregation account for the structure of these materials. Materials studied include linear and branched polymers, colloidal aggregates (prepared in solution, in flames and at an air-water interface), and composites. Although the concept of fractal geometry is essential to interpretation of the scattering curves, not all the materials show fractal character.
Keywords: Fumed Silica; Silica Aerogel; Colloidal Aggregate; Random Material; Intermediate Slope
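As a hedged illustration of how fractal character shows up in such data, the standard small-angle scattering relation for a mass fractal is I(q) ∝ q^(−D), so the fractal dimension can be read off the log-log slope of the scattering curve. This is general background, not a result from this paper, and the numbers below are synthetic:

```python
import numpy as np

# Synthetic scattering curve for a mass fractal with dimension D = 2.1 (made-up data).
D_true = 2.1
q = np.logspace(-2, -1, 30)          # scattering vector, arbitrary units
intensity = 5.0 * q ** (-D_true)     # power-law regime I(q) ~ q^-D

# The fractal dimension is recovered from the slope of log I versus log q.
slope, _ = np.polyfit(np.log(q), np.log(intensity), 1)
print("estimated fractal dimension:", round(-slope, 2))   # ~2.1
```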
<urn:uuid:e52fbbb7-bbd5-4e7b-bed3-e68aa07bfe72>
2.578125
363
Truncated
Science & Tech.
64.050938
95,543,176
Basic AIR native extension to Red language
What is Red
Red is a computer programming language. ... Introduced in 2011 by Nenad Rakocevic, Red is both an imperative and functional programming language. Its syntax and general usage overlap those of the interpreted Rebol language (which was introduced in 1997). More info available at: http://red-lang.org
What is AIR
Adobe AIR (Adobe Integrated Runtime) is a cross-platform runtime for building desktop and mobile applications.
What is ANERed
ANERed is an experiment in embedding the Red library into an AIR context using an AIR Native Extension. This repository contains the sources and a build script for the ANERed extension, plus an example test AIR app with a very simple console for testing Red commands directly from AIR. Here is how it can look:
It still has some issues: for example, values returned from Red are always just strings, and output from Red's View is blocking, among other cases. That is probably acceptable; the library is not expected to be used as a real console in AIR.
To build ANERed and the test app, you must download AIR SDK and modify the
Copyright (c) 2017, Oldes
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
<urn:uuid:1dca9d15-d8c6-4aeb-ba37-e11b093b3381>
2.578125
483
Product Page
Software Dev.
36.363921
95,543,185
A new space telescope will soon peer into the darkness of 'near space' (within a few thousand light years of Earth) to seek answers related to the field of high-energy astrophysics Peering into darkness can strike fear into the hearts of some, but a new space telescope will soon peer into the darkness of "near space" (within a few thousand light years of Earth). Scientists are using the telescope to seek answers related to the field of high-energy astrophysics. The Japan Aerospace Exploration Agency (JAXA) Kounotori H-II Transfer Vehicle (HTV-5) is seen berthed to the International Space Station. The external CALET experiment, which will search for signatures of dark matter, is seen being extracted from the unpressurized section by the station's robotic arm, Canadarm2. An aurora over the Earth limb is visible in the background. The CALorimetric Electron Telescope (CALET) investigation will rely on the instrument to track the trajectory of cosmic ray particles and measure their charge and energy. The instrument is optimized for measuring electrons and gamma rays, which may contain the signature of dark matter or nearby sources of high-energy particle acceleration. "The investigation is part of an international effort (involving Japan, Italy and USA) to understand the mechanisms of particle acceleration and propagation of cosmic rays in the galaxy, to identify their sources of acceleration, their elemental composition as a function of energy, and possibly to unveil the nature of dark matter," said CALET principal investigator Dr. Shoji Torii. "We know that dark matter makes up about a quarter of the mass-energy of the universe, but we can't see it optically and don't know what it is," said Dr. John Wefel, and CALET co-principal investigator for the US team. "If CALET can see an unambiguous signature of dark matter, it could potentially produce a new understanding of the nature of dark matter." Right now, scientists are much more certain what dark matter is not, rather than what it is. This research may help scientists identify dark matter and fit it, more accurately, into standard models of the universe. CALET launched aboard the Japan Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle "Kounotori" (HTV-5) in August 2015 and was placed on the International Space Station's Japanese Experiment Module - Exposed Facility just days after its arrival. The instrument is a charged particle telescope designed to measure electrons, protons, nuclei and gamma rays. Unlike the telescopes that are used to pinpoint stars and planets in the night sky, CALET operates in a scanning mode. As it looks upward, it records each cosmic ray event that enters its field of view and triggers its detectors to take measurements of the cosmic ray. These measurements are recorded on the space station and sent to a ground station where they are fed into computers running analysis codes that allow scientists to reconstruct each event. From the resulting measurements, scientists must then separate electrons from the protons, gamma rays and the higher Z elements (chemical elements with >1 proton in the nucleus). They then sort the particles by energy to extend the existing data to higher energies and search for signatures of new astrophysics processes and phenomena like dark matter and nearby particle acceleration to study cosmic ray propagation in the galaxy. 
"The major theoretical model attributes dark matter to weakly interacting massive particles (WIMPs), whose nature is predicted by various high energy physics models," said Torii. "In these models, a WIMP would be its own antiparticle and, when two of them get together, they annihilate, producing known particles like electron/positron pairs, proton/anti-proton pairs, and gamma rays." Searching for excess annihilation products (i.e. electrons and gamma rays) is one way to try to identify a dark matter candidate and this is where CALET helps scientists. CALET joins another ISS investigation searching for excess annihilation products, the Alpha Magenetic Spectrometer or AMS, which is looking at positrons and antiprotons to identify dark matter. "Dark matter is still a puzzle," said Torii. "By measuring with good energy resolution the spectrum of high energy cosmic electrons and photons, CALET may make a discovery or exclude existing models." "Seeing an appropriate signature in the electron spectrum and/or gamma rays would be extremely important since this would set the mass scale (weight) for the dark matter particles, which would in turn allow theorists to better determine new physics associated with the WIMP," said Torii, adding that it is possible that a signature may be found that is not indicative of dark matter, but rather indicates a nearby source of charged particle acceleration. "The latter would be [a] huge achievement since no individual sources have ever been positively identified," said Torii. "Such objects seem to be able to accelerate particles to energies far higher than we can achieve on Earth using the largest machines and we want to learn how nature does this, with possible applications here on Earth." Understanding the location of these sources as well as particle propagation (the time particles spend, and distance traveled, wandering around the galaxy) means scientists can infer the shape of the cosmic ray spectrum at the source. Gaining a better understanding of how cosmic rays originate and the mechanisms of particle acceleration and propagation is important to space travel and for understanding the radiation environment in space and on Earth. "Basically, CALET is after new information about how our little corner of the universe works," said Torii, who added that the investigation underscores the importance of the space station as a platform for performing investigations and for successful international collaboration. Rachel Hobson | EurekAlert! Computer model predicts how fracturing metallic glass releases energy at the atomic level 20.07.2018 | American Institute of Physics What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. 
<urn:uuid:df145c57-d429-4ebf-9e4e-6a277e8f7472>
3.25
1,758
Content Listing
Science & Tech.
33.336197
95,543,191
Saturday July 21, 2018 Dec-30-2013 19:19TweetFollow @OregonNews Black HolesEdsel Chromie Salem-News.com It would be a great help if the astronomers had a better understanding of what they were looking at before they waste more billions of taxpayer dollars on a false premise. (SAN DIEGO) - Happy New Year. It could be a banner year for the solution of mysteries in the universe. On Dec. 7, 2013 the Science Channel broadcast a program again titled Swallowed by a Black Hole, copyrighted in 2013. I made comments on the program in my July 3, 2013 article titled, “Black Hole Discoveries”. It reported discoveries that confirm my contention that black holes are created by the combined magnetic fields of all of the stars and planets of the galaxy. The scientists reported that billions of taxpayer dollars and thousands of scientists are involved in a 5 continent telescope array to view the black hole at the center of our galaxy in an attempt to solve the mystery of black holes this summer. I think it would help if they used the correct paradigm and known, proven, laboratory experiments in their evaluation of what they will be viewing. These following comments indicate that the scientists are very clearly clueless about the cause and effect of black holes. It will help save time and money if they had a better understanding or an alternative concept to consider in advance. Dr. John Maggorian, U. of Oxford, “We find that the mass of the black hole was very strongly related to the mass of the companion galaxy. There is a nice linear relationship between these two with the mass of the black hole being around a half of a percent of the mass of the host galaxy. We discovered something new, fundamental.” According to Swallowed by a Black Hole, “That correlation is now known as the Maggorian Relationship. The relationship that Dr. Maggorian had discovered between galaxies and the tiny black holes at their center seemed so strange and odd that Dr. Maggorain and his colleagues thought they had made a mistake. It was like suggesting that something as tiny as a coin could control something as massive as the Earth.” My concept explains precisely why the mass of the galaxy and the size of the black hole are related. This discovery conforms to my contention that it is the combined magnetic fields of all of the mass of the galaxy creating the black hole the very same as the magnetic field of the outer windings of an ignition coil produces the intense flow of current through the core that produces the spark for ignition. This is the precise correlation between the mass and the black hole. And, they have it backwards. It is not the tiny black hole that controls the galaxy. It is the accumulated mass that determines the size and the power of the black hole. Also, similar to the swirling secondary magnetic field surrounding the flow of electricity through a wire, the flow of energy through the core of the black hole creates the swirling gasses surrounding the black hole. Dr. Caleb Scharf, Columbia U., New York, said: “It is incredibly important because it really meant that there is something linking the tiny black holes at the center of galaxies with the whole galaxy itself. It meant that somehow the whole history has been intertwined that the growth of the galaxy to the growth of the black hole are somehow related.” My concept explains precisely how these two are related. The narrator said: “As gas continues to spiral in toward the event horizon gravity climbs to staggering extreme. 
Gas molecules are forced into the whirlpool to queue up to be devoured by the black hole. Friction between the gas particles in this cosmic waiting line produces the densest, hottest, most electrically charged environment to be found anywhere in the universe.” Again, laboratory experiments in the early 1900’s proved that the swirling secondary magnetic field around the wire produces an eerie glow around the wire. And the demonstration I have often described proves that a magnetic field surrounding a charge of static electricity will create a bright glow of the gasses in the near vacuum within a fluorescent tube. This is the very same near vacuum of outer space and the very same gasses. The very powerful core energy of the black hole produces both the swirling gasses as well as the glow. Gravity has absolutely nothing to do with black holes. Black holes are created entirely via the electromagnetic energy of the magnetic fields of the mass of the galaxy. The Science Channel program states, “Under the intense gravitational field at the entrance to the black hole the dense superheated disk of matter waiting to be swallowed begins to shine like the Sun.” Prof. Eliot Quartert with U.C. Berkeley, said: “That fundamental fact is one of the great surprises about black holes. By their very name you would think that black holes would be this dark object that wouldn’t produce any light. And that’s true if you had this black hole sitting by itself alone it doesn’t produce any light. But in nature we have this gas spiraling into the black hole that turns out to produce the most efficient sources of light and the brightest sources of light that we know of in the universe.” The recent SOHO, IMAGE, TRACE and POLAR satellites have proven that the blinding light and the heat of the Sun are created by electromagnetic energy, not by the friction of gasses. The narrator said: “And so this summer the world’s most powerful telescopes will aim on our galactic center where the predicting of astronomers will be put to a 5 continent test.” It would be a great help if the astronomers had a better understanding of what they were looking at before they waste more billions of taxpayer dollars on a false premise. A black hole does not “devour” gasses or planets to provide its energy. The energy is solely provided by the combined magnetic fields of the entire galaxy. And, the energy will remain and increase in relation to its continued growth as long as the galaxy exists. Edsel Chromie is a Detroit Michigan native who moved to San Diego in 1965. Edsel is a World War Two Navy veteran who served as a motor machinists mate on diesel electric systems where he learned about the magnetic field current swirling around the primary current flow through a wire as a part of Navy training to trace the direction of flow of the electricity in case of torpedo damage. This led to Edsel's unique explanations of many phenomena of the universe. He also has four approved patents on solar energy and Sun tracking systems. Today Edsel writes about this unique set of life experiences for Salem-News.com, conveying information that seems especially relevant as nuclear disaster, potential changes in the earth's atmosphere, and what many view as an increasing level of natural disasters continue to dominate headlines. Perhaps many of the answers are on hand, yet unaccepted by the scientific community. 
You can write to Ed Chromie at this address: firstname.lastname@example.org
<urn:uuid:4932c20e-89d1-460b-bb19-26cca1d23878>
2.625
1,463
Nonfiction Writing
Science & Tech.
47.979011
95,543,200
Air Plane In Flight - Head Wind, Tail Wind And Cross Wind
Strings (SiPjAjk) = S7P5A51
Base Sequence = 12735
String Sequence = 12735 - 5 - 51
An airplane can fly at a speed of 200 miles/hr in still air. It can fly a distance of 800 miles with a tail wind (wind in the same direction as the direction of flight). In the same time, it can fly 640 miles with a head wind (wind against the direction of flight). What is the speed of the wind?
(a) S7P5A51 (Physical Change - Speed). The Pj Problem of Interest (PPI) is of type change. Problems of speed, velocity, acceleration and duration are associated with motion; they are generally Pj Problems of the type change because the PPI addresses rate of change.
Figure 1.1 illustrates the directions of the head wind and tail wind. Sometimes the vertical component of the wind is not zero. Such a wind is called a cross wind and it has both vertical and horizontal components.
Let the speed of the wind be w. Then:
Speed of airplane with tail wind = 200 + w
Speed of airplane with head wind = 200 - w
So, (200 + w)t = 800 ----(1), where t is the time in hours.
Also, (200 - w)t = 640 ----(2)
Adding equations (1) and (2) gives: 400t = 1440, so t = 18/5 hrs.
Substituting t into equation (1) or equation (2) gives the speed of the wind, w = 200/9 ≈ 22.2 miles/hr.
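A quick check of the algebra (a minimal sketch added here for illustration; the variable names are mine):

```python
from fractions import Fraction

plane_speed = 200          # still-air speed, miles/hr
d_tail, d_head = 800, 640  # distances flown with tail wind and head wind, miles

# Adding (200 + w)t = 800 and (200 - w)t = 640 eliminates w: 2 * 200 * t = 1440.
t = Fraction(d_tail + d_head, 2 * plane_speed)   # flight time in hours
w = d_tail / t - plane_speed                     # wind speed from equation (1)

print("t =", t, "hr")            # 18/5 hr
print("w =", w, "=", float(w))   # 200/9, about 22.2 miles/hr
assert (plane_speed - w) * t == d_head           # equation (2) also holds
```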
<urn:uuid:8f82c15c-433c-40b4-becc-ae1d7bb57c4d>
3.09375
545
Tutorial
Science & Tech.
59.639684
95,543,247
Estimation of Species Area Abundance from Point Abundance Data, Using Effective Detection Areas from Camera Traps Received Date: Sep 03, 2017 / Accepted Date: Nov 04, 2017 / Published Date: Nov 30, 2017 Estimations of species abundance are a common goal of wildlife monitoring surveys, but debate remains as to which methods are theoretically and practically most useful. Abundance-induced heterogeneity (AIH) models developed in the early 2000s allowed estimation of point abundance from repeated presence-absence data (e.g. occupancy models), and advanced estimation of point abundances of unmarked species. AIH models, however, do not provide an estimate of the effective detection area sampled. Therefore the absolute number of individuals in a survey area cannot be estimated directly. Recently, methods have become available to determine the effective detection area sampled by camera traps. Our objective was to present a novel method to estimate the absolute number of individuals of a species in an area from point abundance data using effective detection areas from camera traps. This would make AIH models available for population estimates. We applied this newly developed Species Area Abundance (SAA) model to a 3-month camera trapping data set of Bawean warty pigs (Sus blouchi) from Indonesia, and compared the result to an independent Random Encounter Model (REM) estimate from the same data. Population sizes and uncertainties estimated by the SAA and the REM model were comparable. Differences in density estimations between the REM and SAA model were not significant when mean group size was included in the REM. The less restrictive assumptions regarding camera trap placement of the SAA model compared to the REM might make it more practical to study cryptic and unmarked animal populations. Further studies are needed to determine the accuracy and practicality of the SAA model using a range of differrent sampling designs and focus species. Keywords: Abundance-induced heterogeneity model; Bawean warty pig; Occupancy; Activity; Random encounter model; REM Estimation of population sizes has been one of the main goals in wildlife monitoring surveys . Well into the second half of the previous century, survey methods were limited to capture-recapture and distance sampling studies using live traps, direct observations or indirect signs (i.e. feces), to estimate densities and population sizes [2-5]. The introduction of modern camera traps in the 1990’s provided new ways to estimate abundance and initiated a surge in new research . Most camera trap studies between 2008 and 2013, however, still used capture-recapture methods to estimate densities [7,8], a method limited to species that are individually marked [9,10]. For unmarked animal populations, camera trap studies increasingly make use of relative abundance-indices to make inferences . Unfortunately, relative abundance-indices are not comparable between sites, habitats, different points in time and species, as detection probability is not constant [11,12]. Additionally, relative abundance- indices do not produce estimates of the absolute number of individuals that are required for management of endangered populations. Presently, two camera trapping methods estimate absolute numbers of individuals: the abundance-induced heterogeneity model and the Random Encounter Model . However, both models still have their drawbacks when estimating absolute numbers of individuals in an area (from here on referred to as ‘species area abundance’). 
Abundance-induced heterogeneity (AIH) models are based on an occupancy framework, in which binary survey data are gathered over repeated visits. The AIH model assumes that detection probability (r) is species specific and is related to its abundance Ni. Therefore variation in species detection probability at a sampled site, or point, can be used to estimate its abundance. In the AIH model, the mean abundance of a species across all sampled sites is expressed as Mean point abundance or Poisson parameter Lambda (λ). Lambda is estimated by the maximum likelihood of the binomial probability of observing a certain detection history over a set of T repeat visits, multiplied by the Poisson probability of the local presence of K animals on the site . The key assumption of the AIH model is that the spatial distribution of the animals across survey sites follows a prior distribution (Poisson) . Initially, the AIH model was developed for point count surveys of migratory birds, but researchers have more recently applied it to camera trap surveys (e.g. Argus phaesants Argus argusianus , Siamese Firebacks Lophura diardi ). The limitation of the AIH model lies in its inability to estimate the effective detection area sampled. Estimations of species area abundance have been limited to a summation of Lambda over sampled sites and effective detection area is substituted for species characteristics such as average home range size [17,18]. Substitution, however, does not account for potential overlapping home-ranges, nor does it guarantee that camera traps effectively sample the home-range of a species. The Random Encounter Model (REM) was the first method to estimate the density and abundance of unmarked populations by taking detection probability of camera traps into account. This potentially provides a robust way to estimate species area abundance . Further adjustments to the REM in 2011 allowed researchers to estimate the effective detection area of a camera trap for different species . However, as the method is based on an ideal gas model, cameras must be placed randomly in relation to the movement of the target species. This restricts the sampling designs suitable for REM , and limits its use for cryptic species with low detection probability, that often require non-random sampling designs in order to generate sufficient captures. Our objective is to present a novel method to estimate species area abundance of unmarked species that combines the advantages of the AIH- and REM models and that is suitable for a variety of sampling designs. Specifically, our model combines the AIH model with estimating the effective detection area from camera traps. Materials And Methods A model for species area abundance estimation from point abundance data Conceptually, our newly proposed species area abundance (SAA) model consists of 5 basic steps (Figure 1). (1) The effective detection area of the camera traps for the target species is determined, providing the area sampled by each trap. (2) Using presence-absence data from repeat surveys in an AIH model, the mean point abundance, i.e. the mean abundance of a species across all sampled sites, is estimated. (3) This estimate is extrapolated to species area abundance by multiplying the mean point abundance with the total number of sampling sites if the entire survey area was monitored by camera traps without overlap between traps. 
(4) The extrapolated value is then corrected for the total number of sites occupied and (5) the resulting value is divided by the maximum number of detections. The final estimate represents the number of individuals in the area at any given time, i.e. species area abundance. We will now explain the model in more detail.
Figure 1: Schematic representation of the SAA model (a) a population is sampled using camera traps, with the effective area sampled by the camera traps indicated by the orange cones (Sx) (b) based on detection histories, mean point abundance across sampling sites is estimated (λ) (c) the estimate is extrapolated to area abundance by multiplying λ with the total number of sampling sites as if the entire survey area was monitored by camera traps (Tx) (d) the estimate is corrected for the total number of sites occupied, i.e. occupancy (ψx) (e) finally, the estimate is corrected for animal activity and movement by multiplying the camera operating time by the proportion of time spent active by the target species and dividing this by the average time the species spends at each occupied locality, equal to the average length of each detection event. The result is the maximum number of detection events (Dx max). The estimate in (d) is divided by Dx max to provide the number of animals in the area at any given time, i.e. species area abundance.
Effective detection area
The effective detection area of a camera trap is related to both species body size and behaviour. For species x, the effective detection area (Sx) of a camera trap is equal to the area of the circular sector defined by the species-specific effective detection radius (rx) and the species-specific effective detection angle (θx). At the effective detection radius and angle, the expected number of detections for a species is equal to the expected number of detections missed. Effective detection radius and angle are estimated from the detection radius and angle of an animal at first capture, using functions for fitting standard linear covariate detection models. The threshold values at which the number of detections equals the number of detections missed can be computed using a line transect model for angle and a point transect model for radius in DISTANCE [20,23]. Within a single species, large differences in body size (e.g., between sexes or life stages) should be taken into account by estimating the radius and angle of detection for each sex or life stage. Furthermore, when using a combination of different camera trap models in the same survey or project, the effective detection area (i.e. effective detection radius and angle) has to be estimated for each camera trap model. The effective detection area should then be weighted based on the contribution of each camera trap model to the total number of traps used. The maximum number of sampling sites without overlap (Tx) for a species equals the size of the survey area (A) divided by the effective detection area, i.e. $T_x = A / S_x$.
Mean point abundance: Mean point abundance (λx) can be estimated from the Royle-Nichols abundance-induced heterogeneity model.
Species area abundance: Species area abundance (Nx) can be calculated by multiplying the mean point abundance by the proportion of sites occupied (ψx) and the maximum number of local sampling sites, i.e. $N_x = \lambda_x \psi_x T_x$ (3). Equation 3 overestimates species abundance in an area, as it assumes a species to be present 24 h a day in each local sampling site.
The result of equation 3, therefore, needs to be corrected for the maximum number of times a camera could detect an individual of a species.
Proportion of sites occupied: The proportion of sites occupied (ψx) can be estimated using single season occupancy models. Site covariates deemed important on the basis of the ecology of the target species should be included and several models compared. The total number of sites occupied is estimated as the sum of the mean of the posterior distribution of occupancy at each site and then divided by the total number of sites sampled.
Maximum number of detections
The maximum number of times a camera can detect an individual of a species depends on the average time the camera traps are operated (t), the time a species spends at a local sampling site, and its level of activity throughout the day. If activity is defined as an animal being in movement, then the total amount of time in which detections could occur is equal to the total time the camera traps are operated, multiplied by the proportion of time spent active by the animal (vx). A flexible circular distribution fitted to time-of-detection data from camera traps can be used to estimate the proportion of time spent active. This method has two assumptions: the level of activity is the only determinant of the rate at which the camera detects animals, i.e., the camera operating times and animal activity times are independent of one another, and all individuals in the sampled population are active at the peak of the daily activity cycle. If these assumptions are met, trap rate is proportional to the level of activity and the total amount of time spent active is proportional to the area under the trap rate curve. The proportion of time spent active multiplied by the average time the cameras are operated gives the maximum operating time available to detect a species. This maximum operating time, however, does not provide information on how many detection events could have occurred. To calculate the maximum number of detection events, we must first assume that the presence of the camera traps does not alter the natural behaviour of the species. Secondly, we must assume that the local sites sampled are a representative sample of the habitats available in the area. If these assumptions are met, then the maximum number of detection events (Dx max) equals the maximum operating time divided by the average time a species spent at each local sampling site per detection event (Rx):
$D_x^{max} = \frac{t \, v_x}{R_x}$ (4)
The use of equation 4 requires that the camera sensors are set up to allow continuous detection, either through the use of video recordings or the use of photo recordings without intervals. To identify gaps in time when camera traps do not function during the research period, camera traps should be set to take a picture every day at midnight. Species area abundance can now be formulated as the product of the mean point abundance, the proportion of sites occupied and the maximum number of local sampling sites, divided by the maximum number of detection events.
Species area abundance (SAA) model:
$N_x = \frac{\lambda_x \, \psi_x \, T_x}{D_x^{max}}$ (5)
Dividing the extrapolated abundance estimates by the maximum number of detection events thus provides a 'snapshot' or 'frozen-in-time' view of the number of animals in the area, similar to Distance sampling methods. The SAA model assumes that all sampled local sites are accessible, animals are distributed homogeneously over the habitat that they use, and abundance is constant for the duration of the survey.
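To make the chain of quantities above concrete, here is a minimal numerical sketch of the SAA calculation. The parameter values are hypothetical placeholders, not the estimates reported for the Bawean data, and treating Sx as a circular sector of radius rx and angle θx is my reading of the text:

```python
# Hypothetical inputs (illustrative only, not the values from Table 2)
r = 0.005          # effective detection radius, km
theta = 0.7        # effective detection angle, radians
A = 46.0           # survey area, km^2
lam = 1.0          # mean point abundance (lambda) from the AIH model
psi = 0.45         # proportion of sites occupied
t = 7 * 24 * 3600  # average camera operating time, seconds
v = 0.35           # proportion of time spent active
R = 60.0           # average time per detection event, seconds

S = 0.5 * theta * r ** 2        # effective detection area (circular sector)
T = A / S                       # max. number of non-overlapping sampling sites
D_max = (t * v) / R             # max. number of detection events (eqn 4)
N = (lam * psi * T) / D_max     # species area abundance (eqn 5)

print(f"S = {S:.6f} km^2, T = {T:.0f} sites, D_max = {D_max:.0f}, N = {N:.1f} animals")
```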
We recognize that the assumption of a homogeneous distribution might not be met in all cases. Stratified sampling within an animal's habitat can be an option if animals are clearly using specific parts of their habitat. Additionally, more cameras at random locations are required to estimate density of species that are more aggregated or systematically distributed over their habitat. Next to this, abundance is assumed constant during the survey period. Therefore, repeated surveys need to be conducted in a sufficiently short time period to ensure population closure.
Case study: Bawean warty pigs
We applied the SAA model to a camera-trapping dataset of Bawean warty pigs (Sus blouchi) from Bawean Island, Indonesia. Details of the random sampling design, population size estimates using REM modelling and an estimate of occupancy from these data can be found in Rademaker et al. Point abundance (n=102) (λx) was computed in PRESENCE from the Royle-Nichols model. We used a Chi-Square (χ2) Goodness-Of-Fit test to assess whether the Poisson distribution fitted the dataset at the 0.05 significance level. Each 24 hour period of a single operating camera represented a repeated survey. Each camera trap operated for 7 days, resulting in 7 repeated surveys. Data on detection radius and angles were collected in the field using a video viewer, compass and measuring tape. Camera trap videos were played on the video viewer and a field assistant positioned him/herself at the first detection point. A compass was then laid on the center of the camera trap to measure the angle of detection. This procedure was then repeated using a measuring tape to estimate detection radius. Effective detection radius and angle were estimated by DISTANCE 6.0 using a point-transect model for radius, and a line-transect model for angle data. A propagation of error approach was used to estimate the uncertainty of species area abundance. The uncertainty of the species area abundance estimate Nx is equal to the square root of the sum of the squared uncertainties of the parameters λx, Tx and Dx max, each multiplied by the corresponding partial derivative:
$\sigma_{N_x} = \sqrt{\left(\frac{\partial N_x}{\partial \lambda_x}\sigma_{\lambda_x}\right)^2 + \left(\frac{\partial N_x}{\partial T_x}\sigma_{T_x}\right)^2 + \left(\frac{\partial N_x}{\partial D_x^{max}}\sigma_{D_x^{max}}\right)^2}$ (6)
The uncertainties of the parameters Tx and Dx max were themselves functions of the uncertainties of the parameters θx, rx, t and Rx (eqn 8), respectively. We used a two-sample t-test for equal means to determine whether there was a significant difference between the abundance estimates of the SAA and REM models.
Species area abundance (SAA) model
Bawean warty pigs were detected at 45 out of 102 sites, with an average of (x̄ ± SE) 0.73 ± 1.04 detection events per site over the repeat surveys. Mean point abundance was estimated at 1.06 ± 0.32 pigs. A Chi-square Goodness-Of-Fit test showed no significant difference between observed and expected values, indicating model fit (Table 1). Mean parameter estimates used to calculate species area abundance can be found in Table 2. Species area abundance of Bawean warty pigs on Bawean Island was estimated to be 436 ± 141 individual pigs (x̄ ± SE) or 9.4 ± 3.0 pigs per km2. Uncertainty of the SAA model estimate was 32.34%.
Survey No. | No. of Detections | No. Observed | No. Expected | Chi-square*
Table 1: Goodness-Of-Fit test results for mean point abundance in PRESENCE.
Table 2: Mean parameter estimates and standard error of mean point abundance (λx), proportion of sites occupied (ψx), maximum number of local sampling sites (Tx), survey area in km2 (A), effective detection radius (r), effective detection angle (θ) maximum number of detection events (Dx max), average camera operating time in seconds (t), average time per detection event in seconds (R) and activity level (v). Mean parameter estimates and standard error of r, θ, ψ and v were obtained from Rademaker et al. Random encounter model (REM) Rademaker et al. estimated density and total abundance of Bawean warty pigs with the same dataset, although using a REM. Density and total abundance were estimated in two ways: one estimate with mean group size as an upper limit estimate, and one estimate without group size as a lower limit estimate. The lower limit estimate equalled 3.7 ± 0.9 pigs/km2 or 172 ± 42 pigs on the whole island. The upper limit estimate was 8.1 ± 1.9 pigs/km2 or 377 ± 92 pigs on the whole island. The uncertainty in the total abundance estimate by the REM was comparable to that obtained by the SAA model, however, slightly lower with 24.32%. The density estimate per km2 obtained through the SAA model is significantly (t (116.48)=-1.82, p=0.036) different to that of the lower limit estimate by the REM, but not significantly (t (170.96)=-0.36, p=0.359) different to that of the upper limit estimate by the REM. We estimated species area abundance of Bawean warty pigs on Bawean Island by using mean point abundance from an abundance induced heterogeinity model and effective camera trap detection areas. The result is credible, although the uncertainty in the estimation by the SAA model is high, with mean point abundance(λx), as the highest contributing parameter. Neither Royle and Nichols nor O’Brien and Kinnaird explicitly mention the uncertainty of mean point abundance for species assessed, although, the latter graphically report 95% CI of the estimates. Suwanrat et al. , report point abundance estimates of 0.49 ± 0.13 individuals (x ± SE) , equal to an uncertainty of 26.53%. When looking at uncertainties in REM estimates there is large variation in the literature. A study on Baird’s tapir (Tapirus bairdii) yielded an average uncertainty of 54% and an average uncertainty of 39% was reported for Harvey’s duiker (Cephalophus harveiyi) based on six differrent locations . On the contrary an uncertainty of only 8% was reported in a study on European wildcats (Felis silvestris silvestri) and an average uncertainty of 15% in a study of female African lionesses (Panthera leo spp.) in four habitats . This shows the difficulty in defining an acceptable level of uncertainty in estimating abundance of unmarked species from camera trap data. The abundance-induced heterogeneity model uses a Poisson distribution, and thus assumes that the spatial distribution of animals is homogeneous over the habitats . In order to meet this assumption the number of animals inhabiting one sampling point should not be spatially correlated to the number of animals at other sampling points. This can be achieved by placing traps at a distance greater than the home range diameter of the focal species when using non-random sampling designs. Additionally, the abundance-induced heterogeneity model assumes that in order to accurately estimate the maximum number of detection events, sampled points are a representative sample of the habitats available in the area. These assumptions allow for non-random and random sampling designs. 
The REM is derived from an ideal gas model in which particles are assumed to move randomly in relation to one another and the number of collissions or density of particles in the gas can be calculated based on this assumption. The key requirement for calculating animal density using the REM model is thus that the placement of the camera traps must be random in relation to the movement of the animals. Within habitats sampled by the researcher (e.g., secondary forest), landscape features that are used or avoided by the target species more than proportionally (e.g., trails, places with scratchmarks or feces), must therefore only be sampled in proportion to their coverage in the landscape to prevent violation of model assumptions . Random (stratified) or systematicinterval sampling designs can meet this assumption [35,36]. Use of random or systematic sampling design makes the use of the REM limited for cryptic species whose detection probability is low and who disproportionally use certain landscape features. In that case, locally preferrential placement is the only option to obtain any detection event or to get a sufficient amount of observations to accurately estimate abundance as well as effective detection parameters. The abundance induced heterogeneity model and the SAA model described in this paper do not have this limitation as they do not rely on trapping rate, but presence-absence data. An additional capture within a repeat survey resulting from a non-random movement of the target species in relation to a camera trap’s position does not directly influence the abundance estimate as the number of presence detections during the survey is still 1. Available habitats must be sampled representatively, but at these sampling sites locally optimal locations within a certain radius (e.g., 100 m), such as trails, places with tracks, feces or other signs of recent activity, can be sampled and representative estimates of abundance can still be obtained . This makes these models more suitable to estimate species area abundance of more rare or cryptic species. Repeated surveys do need to be conducted in a sufficiently short time-period to meet the assumption of population closure. Further studies are needed to determine the level of accuraccy and practicality of the SAA model for different species and sampling designs. We used the newly developed Species Area Abundance (SAA) model to estimate the abundance of Bawean warty pigs on Bawean. The standard error of estimated abundance on Bawean was slightly greater than the standard error of abundance estimated by an REM model used for comparison, but lay within the range of uncertainties of a number of other REM studies. An advantage of the SAA model in studying abundances of rare or cryptic species in an area are the less restrictive assumptions in terms of sampling design. Further studies are needed to determine the accuracy and practicality of species area abundance estimations under different sampling designs. We would like to thank Erik Meijaard for reviewing an early version of the manuscript. Next to this, we would like to thank the anonymous reviewer whose comments helped to further improve and clarify the manuscript during the submission process. - Wight HM (1938) A Manual on Field and Laboratory Technic in Wildlife Management. University of Michigan Press, Ann Arbor, USA. - Caughley G (1977) Analysis of Vertebrate Populations. John Wiley and Sons, New York City, New York, USA. 
- Burnham KP, Anderson DR, Laake JL (1980) Estimation of density from line transect sampling of biological populations. Wildlife Monographs 72: 3-202.
- White GC (1982) Capture-recapture and removal methods for sampling closed populations. Los Alamos National Laboratory.
- Bailey RE, Putman RJ (1981) Estimation of fallow deer (Dama dama) populations from faecal accumulation. J Appl Ecol 18: 697-702.
- O'Connell AF, Nichols JD, Karanth KU (2010) Camera traps in animal ecology: methods and analyses. Springer Science & Business Media, USA.
- Foster RJ, Harmsen BJ (2012) A critique of density estimation from camera-trap data. The Journal of Wildlife Management 76: 224-236.
- Burton AC, Neilson E, Moreira D, Ladle A, Steenweg R, et al. (2015) Review: Wildlife camera trapping: a review and recommendations for linking surveys to ecological processes. J Appl Ecol 52: 675-685.
- Soisalo MK, Cavalcanti SM (2006) Estimating the density of a jaguar population in the Brazilian Pantanal using camera-traps and capture-recapture sampling in combination with GPS radio-telemetry. Biol Conserv 129: 487-496.
- Karanth KU, Nichols JD, Kumar N, Hines JE (2006) Assessing tiger population dynamics using photographic capture-recapture sampling. Ecology 87: 2925-2937.
- Harmsen BJ, Foster RJ, Silver S, Ostro L, Doncaster CP (2010) Differential use of trails by forest mammals and the implications for camera-trap studies: a case study from Belize. Biotropica 42: 126-133.
- Sollmann R, Gardner B, Chandler RB, Shindle DB, Onorato DP, et al. (2013) Using multiple data sources provides density estimates for endangered Florida panther. J Appl Ecol 50: 961-968.
- Royle JA, Nichols JD (2003) Estimating abundance from repeated presence-absence data or point counts. Ecology 84: 777-790.
- Rowcliffe JM, Field J, Turvey ST, Carbone C (2008) Estimating animal density using camera traps without the need for individual recognition. J Appl Ecol 45: 1228-1236.
- Donovan TM, Hines J (2007) Exercises in occupancy estimation and modeling. Vermont Cooperative Fish and Wildlife Research Unit Spreadsheet Project, USA.
- O'Brien TG, Kinnaird MF (2008) A picture is worth a thousand words: the application of camera trapping to the study of birds. Bird Conserv Int 18: S144-S162.
- Suwanrat S, Ngoprasert D, Sutherland C, Suwanwaree P, Savini T (2015) Estimating density of secretive terrestrial birds (Siamese Fireback) in pristine and degraded forest using camera traps and distance sampling. Global Ecol Conserv 3: 596-606.
- Karanth KU, Nichols JD (1998) Estimation of tiger densities in India using photographic captures and recaptures. Ecology 79: 2852-2862.
- Jennelle CS, Runge MC, MacKenzie DI (2002) The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions. Anim Conserv 5: 119-120.
- Rowcliffe J, Carbone C, Jansen PA, Kays R, Kranstauber B (2011) Quantifying the sensitivity of camera traps: an adapted distance sampling approach. Methods Ecol Evol 2: 464-476.
- Rowcliffe JM, Kays R, Carbone C, Jansen PA (2013) Clarifying assumptions behind the estimation of animal density from camera trap rates. J Wildl Manage 77: 876-876.
- Buckland ST, Anderson DR, Burnham KP, Laake JL (1993) Statistical theory. In: Distance Sampling: Estimating Abundance of Biological Populations (pp. 52-99). Springer Science+Business Media B.V., Netherlands.
- Thomas L, Laake JL, Rexstad E, Strindberg S, Marques FFC, et al. (2009) Distance 6.0, Release "x" 1.
Research Unit for Wildlife Population Assessment, University of St. Andrews, United Kingdom.
- Fiske I, Chandler R (2011) unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance. J Stat Softw 43: 1-23.
- Rowcliffe JM, Kays R, Kranstauber B, Carbone C, Jansen PA (2014) Quantifying levels of animal activity using camera trap data. Methods Ecol Evol 5: 1170-1179.
- Howe EJ, Buckland ST, Després-Einspenner ML, Kühl HS (2017) Distance sampling with camera traps. Methods Ecol Evol 8: 1558-1565.
- Rademaker M, Meijaard E, Semiadi G, Blokland S, Neilson EW, et al. (2016) First Ecological Study of the Bawean Warty Pig (Sus blouchi), One of the Rarest Pigs on Earth. PLoS ONE 11: e0151732.
- Hines JE (2006) PRESENCE2 - Software to estimate patch occupancy and related parameters.
- Snedecor GW, Cochran WG (1989) Statistical Methods. Eighth edition, Iowa State University Press, USA.
- Taylor JR (1997) Propagation of uncertainties. In: An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (2nd edition). University Science Books, Sausalito, USA, pp. 45-73.
- Carbajal-Borges JP, Godínez-Gómez O, Mendoza E (2014) Density, abundance and activity patterns of the endangered Tapirus bairdii in one of its last strongholds in southern Mexico. Trop Conserv Sci 7: 100-114.
- Rovero F, Marshall AR (2009) Camera trapping photographic rate as an index of density in forest ungulates. J Appl Ecol 46: 1011-1017.
- Anile S, Ragni B, Randi E, Mattucci F, Rovero F (2014) Wildcat population density on the Etna volcano, Italy: a comparison of density estimation methods. J Zool 293: 252-261.
- Cusack JJ, Swanson A, Coulson T, Packer C, Carbone C, et al. (2015) Applying a random encounter model to estimate lion density from camera traps in Serengeti National Park, Tanzania. J Wildl Manage 79: 1014-1021.
- Kays R, Tilak S, Kranstauber B, Jansen PA, Carbone C, et al. (2010) Monitoring wild animal communities with arrays of motion sensitive camera traps. arXiv preprint arXiv:1009.5718.
- Ahumada JA, Silva CE, Gajapersad K, Hallam C, Hurtado J, et al. (2011) Community structure and diversity of tropical forest mammals: data from a global camera trap network. Philos Trans R Soc Lond B Biol Sci 366: 2703-2711.

Citation: Rademaker M, Rode-Margono EJ, Weterings MJA (2017) Estimation of Species Area Abundance from Point Abundance Data, Using Effective Detection Areas from Camera Traps. J Biodivers Endanger Species 5: 200. doi: 10.4172/2332-2543.1000200

Copyright: ©2017 Rademaker M, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Trithemis tropicana Fraser, 1953 Eastern Mantled Dropwing - scientific: T. africana (Brauer, 1890) ssp. tropicana - vernacular: Eastern Phantom D. Type locality: Eala, DRC Male is similar to many Trithemis species with (a) hamule with long sickle-shaped hook; (b) genital lobe directed away from hamule; (c) Abd largely black, often with some pale markings, appears blue pruinose or black with maturity. However, set apart from the rest by (1) ranging from DRC to Nigeria; (2) 16½-19½ Ax in Fw; (3) 6-9 cell-rows between anal loop and tornus; (4) Hw base broadly brown (rather than with small brown patch), including triangle. Replaced by T. africana in western Africa (see for separation). [Adapted from Dijkstra & Clausnitzer 2014] Streams in open areas in forest. Often with coarse detritus and a sandy and/or soft (like muddy) bottom, and probably especially calmer sections (like pools) with emergent vegetation and/or submerged roots. From 0 to 700 m above sea level, but possibly higher up. Abdominal segment 2 (lateral view) Map citation: Clausnitzer, V., K.-D.B. Dijkstra, R. Koch, J.-P. Boudot, W.R.T. Darwall, J. Kipping, B. Samraoui, M.J. Samways, J.P. Simaika & F. Suhling, 2012. Focus on African Freshwaters: hotspots of dragonfly diversity and conservation concern. Frontiers in Ecology and the Environment 10: 129-134. - Fraser, F.C. (1953). New genera and species of Libellulines from the Belgian Congo. Revue Zoologie Botanique Africaines, 48, 242-256. [PDF file] - Pinhey, E.C.G. (1962). Some records of Odonata collected in tropical Africa. Journal Entomological Society Southern Africa, 25, 20-50. [PDF file] - Pinhey, E.C.G. (1966). Notes on African Odonata, particularly type material. Revue Zoologie Botanique Africaines, 73, 283-308. [PDF file] Citation: Dijkstra, K.-D.B (editor). African Dragonflies and Damselflies Online. http://addo.adu.org.za/ [2018-07-17].
Resurrected Proteins Double Their Natural Activity News Oct 02, 2015

Proteins play a large role in sustaining life functions. These molecules ensure that vital reactions, such as DNA replication or the catalysis of metabolism, are carried out within cells. When a protein denatures, its native three-dimensional structure unfolds and it loses its activity. By reassembling this polymer tangle, it is possible to renature the protein and restore its activity, but the procedure requires considerable effort. Denatured proteins often pile up to form toxic aggregates, which underlie illnesses such as Alzheimer's, Parkinson's and Huntington's diseases. The importance of investigating denaturation and renaturation mechanisms therefore cannot be overstated.

In a new study, David Avnir, professor at the Hebrew University in Jerusalem, and Vladimir Vinogradov, head of the International Laboratory of Solution Chemistry of Advanced Materials and Technologies at ITMO University, found that bringing proteins back to life is not only possible, but can be done with an improvement over their original activity. This surprising result rests on a new technique of protein renaturation based on combining thermally denatured proteins (carbonic anhydrase) with a colloidal solution of inorganic aluminum oxide nanoparticles. As the solution turns into a gel, the nanoparticles start binding together, exerting mechanical pressure on the protein molecules. As a result, each molecule ends up entrapped in its own individual porous shell, which prevents the malign process of protein aggregation and eventually restores its original spatial structure. Having compared the level of activity of the proteins before denaturation and after renaturation, the chemists discovered that the resurrected ones were 180 percent more active than their native predecessors.

"Every protein molecule has its active center, which allows the molecule to interact with the environment. The active center, however, constitutes only 5 - 10 percent of the molecule surface," explains Vladimir Vinogradov. "During renaturation we deal with a long unfolded molecule containing an active center and several extending tails. The active center and nanoparticles have similar charges and will repel, while the tails have an opposite charge and will gravitate towards the nanoparticles. In the end, when a shell forms around the molecule, the active center will be as far away from the wall of the shell as possible. Instead, the active center will be directed right into the pore in the shell, thus increasing the protein's chances to interact with the substrate."

Researchers say that this technique only works with unfolded denatured proteins. The orientation of native proteins within the shell cannot be controlled in the same way, because the active center can find itself anywhere, including facing the wall, which entirely excludes the possibility of interacting with the substrate. As professor David Avnir explains, one possible application of the discovery could help optimize the fabrication of drugs based on active proteins: "Some of the most effective drugs are based on active proteins that are harvested from cell cultures. However, from all proteins grown in such a way only 20 percent are native and suitable for use, while the remaining 80 percent are the so-called inclusion bodies, that is, non-functioning denatured proteins.
Obviously, knowing how to convert denatured proteins to their native state, and on top of it with an increased level of activity, would allow pharmaceutical companies to lower the price of many drugs, making them more affordable."
Some of the world's top scientists have suggested that our Universe may actually be a computer simulation, similar to the one portrayed in The Matrix movies. According to an argument laid out by Oxford philosopher Nick Bostrom and echoed by astrophysicist Neil deGrasse Tyson, humans may be living in a computer program run by a technologically advanced civilisation. Ladbible.com reports:
The argument consists of three propositions:
- "The fraction of human-level civilisations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero, or
- The fraction of posthuman civilisations that are interested in running ancestor-simulations is very close to zero, or
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one."
Put more simply: humans or humanity will go extinct before we get to the stage where we're able to run simulations; people in the future either won't be interested in operating simulations if they did reach a 'posthuman' era, or will find it unethical; or we are living in one right now. The paper also says that it might not be humans who delve into this; it could be a civilisation elsewhere in the deep reaches of space who masters ancestor simulations. Professor Bostrom tells LADbible: "If we imagine science and technology continuing to unfold and reaching a state of maturity, we can see that at that point it would be feasible to create detailed computer simulations of people like their forebears, and they wouldn't be distinguishable from the original reality." His paper also highlights an Inception-like concept: we could be a simulation, within a simulation, within a simulation, and so on: "It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe." But how much computing power would these 'posthumans' need to generate a simulation that feels like real life? Professor Bostrom's paper hypothesises: "Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers." Yep, you read that right; the professor predicts that we could actually colonise planets in the future for the sole purpose of using them as a base to house a shit-tonne of supercomputers to run simulations. When the paper was released 14 years ago, Bostrom pointed to researchers such as Eric Drexler, who had created designs for a computer the size of a sugar cube that could perform 10^21 instructions per second. That's 1,000,000,000,000,000,000,000, or a sextillion, instructions every single second. But that was back in 2003; imagine the advancements in computing power that have happened since then. Bostrom's paper then highlighted estimates from other researchers suggesting that the human brain, based on the number of synapses and firing power, performs roughly 10^16-10^17 instructions a second. It's estimated that we have between 80-100 billion nerve cells, and in 2013 researchers in Japan managed to simulate one second of a person's brain activity. However, the operation took 40 minutes, 82,944 processors and ate up 1 petabyte (that's one level above a terabyte) of memory.
Professor Bostrom tells us that it’s virtually impossible, at this stage, to work out when this posthuman era will be: “The original simulation argument never made any assumption really about the timeline and whether it will take 50 years to get to this ability to create ancestor simulations or 50,000 years.” However, Elon Musk made a very good point during the World Government Summit this year: “When you see the advancement of something like video games. You know, like, say 40 years ago…the most advanced video game would be like Pong where you had like two rectangles and a dot, you know, like batting it back and forth. “But now you can see a video game that’s photo realistic…and millions of people playing simultaneously. “And you see where things are going with virtual reality and augmented reality. And if you extrapolate that out into the future with any rate of progress at all, like even 0.1 percent or something like that a year, then eventually those games will be indistinguishable from reality.” Take the 2016 video game No Man’s Sky as an example of how far things have come with technology. The action-adventure survival game lets players explore a universe which has more than 18 quintillion (or more specifically 18,446,744,073,709,551,616) different planets. According to the developer, Sean Murray, it would take something like 584 billion years to visit every planet. Despite its many criticisms from fans, the fact that video game developers can produce a simulated universe that detailed in this age only fuels the likelihood that our civilisation will be able to make ancestor simulations. At last year’s Code Conference, Elon Musk added: “There’s a billion to one chance we’re living in base reality (aka not in a simulation).” He’s talked about the simulation theory so much that he’s agreed with his brother that the topic is banned from hot-tub conversations. An interesting aspect of the theory points towards an unlikely area: religion. Many religions have a benevolent creator, a higher being, an all-knowing superior; which actually fits in nicely with the simulation argument. While Professor Bostrom insists that the ‘simulation-hypothesis does not imply the existence of such a deity, nor does it imply its non-existence’, he mentions how the theory mirrors people’s understanding of ‘god’. “Although all the elements of such a system can be naturalistic, even physical,” he says. “It is possible to draw some loose analogies with religious conceptions of the world. “In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are ‘omnipotent’ in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are ‘omniscient’ in the sense that they can monitor everything that happens.” One of the aspects that backers of the theory point out is that virtually everything in our world has a limit or is at least measurable. Rich Terrile, a scientist at NASA’s Jet Propulsion Laboratory told the Guardian: “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated.” We already use simulations in many areas, from predicting weather patterns to what would happen to a horde of people if they came into contact with a spinning blade. 
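For what it's worth, the headline numbers above are easy to sanity-check. The snippet below reproduces the "584 billion years" figure quoted for No Man's Sky, under the assumption (ours, for illustration; the article does not say what rate the developer used) of visiting one planet per second:

# Sanity check of the "584 billion years" figure quoted above, assuming a
# visit rate of one planet per second.
planets = 18_446_744_073_709_551_616        # 2**64 procedurally generated planets
seconds_per_year = 365.25 * 24 * 3600       # ~3.16e7 seconds

years = planets / seconds_per_year
print(f"{years:.3e} years")                 # ~5.8e11, i.e. roughly 584 billion years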
These simulations help us understand how things would react under certain circumstances. However, despite science being a field where theories and hypotheses are researched and tested, this one is pretty difficult to prove. According to the New Yorker, there is a team of scientists, led by two tech billionaires, currently trying to see whether they can break out of the simulation. Another team, from the University of Washington, is trying to see whether they can pick up physical signatures in our universe that could be attributable to a simulation. As far as we know, nothing has been proven yet. Professor Bostrom tells us: "I don't think there's any strong proof for one way or the other. You could imagine possible observations that would have strong bearing, like if a window pops up in front of you saying, 'You're In A Simulation, Click Here For More Information'. That would kind of be conclusive proof." Until we get something as obvious as a pop-up window in real life, we'll just keep playing games like The Sims and pretend we're the all-powerful creator.
An introduction to the new tags in HTML 5

As HTML 5 gets finalised, what's in your browser is likely to change. The skills you'll pick up now, however, will be ready for the final HTML 5, whenever it arrives.

Author: Simon Bisson | Originally appeared in Issue 149

01 Drawing with <canvas>
<title>HTML 5 demo</title>
<script>
function drawCanvas() {
  var canvas = document.getElementById('testCanvas');
  var ctx = canvas.getContext('2d');
}
</script>
<body onload="drawCanvas();">
<canvas id="testCanvas" width="300" height="300">Your browser does not support the canvas element.</canvas>
</body>

02 Drawing a square
A filled rectangle is the simplest shape the canvas API offers, and rectangles can be combined to make more complex objects.

03 Adding a circle
Drawing circles is harder than squares or rectangles. Here, we have to use an arc function to draw and fill a circle. One problem is that HTML 5's drawing API expects angles to be expressed in radians, so we'll need to convert our 360-degree angle.
var x = 120;
var y = 120;
var radius = 50;
var startAngle = 0;
var endAngle = (Math.PI / 180) * 360;
var anticlockwise = true;
ctx.arc(x, y, radius, startAngle, endAngle, anticlockwise);

04 Blending colours
HTML 5's drawing APIs let you use an alpha blend to mix colours. You can do this by choosing an appropriate fill style. Here, we're using an RGB colour mix, and all we need to do to specify alpha blending is to specify 'rgba' instead of 'rgb'. The transparency level is a number between zero and one.
ctx.fillStyle = "rgba(0, 0, 200, 0.5)";

05 Paths to pictures
The secret to drawing shapes in HTML 5 is using paths. We've already used a beginPath function to draw our circle, and we can change the shape we're drawing by sweeping the arc through only 270 degrees. Add a closePath function after you draw your arc (and switch to drawing clockwise) to get this image.

06 Stroke versus Fill
Paths can be treated as strokes – for line drawing – or fills. Strokes are simple lines that follow the path described by a set of drawing instructions. You can move between points without drawing a line using a moveTo() function, allowing complex shapes to be drawn with one single stroke() function. This code adds two concentric triangles to our drawing in just one action.

07 Colouring strokes
We've already used fillStyle to set colours and apply alpha blending to the shapes in our canvas drawing. We can do similar things with our line drawings, using strokeStyle to set colours for the canvas stroke-drawing operation. Set strokeStyle to rgb(0, 300, 0) to draw your triangles in green.

We've already used a local transparency value, but you can do a lot more with colours using the HTML 5 canvas API. One option is to use a colour gradient, where you can set the shape and direction of a gradient, either using a linear or radial gradient. The result can be a very simple drawing but with a very effective look – perhaps a stylised sunset or a forest fading off into the horizon.
var radialgradient = ctx.createRadialGradient(x0, y0, r0, x1, y1, r1);

09 Go further with <canvas>

10 Web Forms 2.0
HTML 4's forms are a familiar, if limited, way of getting info from a web browser to a server. HTML 5 builds on the old forms standard to deliver a more flexible way of working. Opera is the first browser to implement Web Forms 2.0, taking advantage of features like validity checking.

11 Validity checking
<label>Home page: <input type="url" oninvalid="alert('You must enter a valid web page address.'); return false;"></label>

12 Minimum and maximum
Web Forms 2.0's validity-checking features aren't just for checking whether your users have typed in an email address when asked for one.
You can also use them to handle minimum and maximum values for a field, which can even be a date range. Opera will render this code as a field with up and down arrows that can be used to select a time inside the range specified.
<label>Select a time during business hours: <input type="time" min="09:00" max="17:30"></label>

One-minute intervals are a bit on the short side for a booking system, so we'll need to modify the default. Web Forms 2.0 includes a step operator, which we can use to control how Opera's up/down control works. While the Opera controls use a one-minute default, the minimum step for a time input is one second. Set step to 1,800 seconds to allow users to pick appointments with 30-minute intervals.
<label>Select a time during business hours: <input type="time" min="09:00" max="17:30" step="1800"></label>

14 Complete with datalists
Modern browsers (and the Google toolbar) often store commonly used form responses. With Web Forms 2.0, sites can keep their own lists of responses, using datalists associated with form inputs. This code snippet will hold a list of site URLs that can be used as a quick-pick list for a URL input field, or your user can type in a URL that isn't on the list.
<input type="url" name="location" list="urls">
<datalist id="urls">
<option label="Site 1" value="http://www.site1.org/">
<option label="Site 2" value="http://www.site2.org/">
<option label="Site 3" value="http://www.site3.org/">
<option label="Site 4" value="http://www.site4.org/">
<option label="Site 5" value="http://www.site5.org/">
<option label="Site 6" value="http://www.site6.org/">
<option label="Site 7" value="http://www.site7.org/">
</datalist>
What is 115 K C E? 115 Known Chemical Elements.
There are 115 known chemical elements. However, that number is not accurate; there are really more than that now.
How do you find the measure of angle C in the following triangle?
Some rock melts at 2000°F; what is this temperature on the Celsius scale?
23 x 5. 115 is divisible by 1 (115 times), 5 (23 times), 23 (5 times) and 115 (1 time).
Element 115 has been claimed to provide power for nuclear reactors and to enable new weapons for the U.S.
1, 5, 23, 115.
Element 115 is Ununpentium, or Uup. It is also known as eka-bismuth. It is a placeholder in the periodic table, and we have not been able to synthesize more than 30 or so atoms of it since 2004. All of its half-lives are very short.
That point is located in rural northeastern China, about 75 miles west of the center of Beijing. The continent that includes the coordinates of 45°N latitude and 115°E longitude is Asia.
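The rock-melting question above is a one-line conversion; here is a small Python check using the 2000°F value given in the question:

# Fahrenheit-to-Celsius conversion for the rock-melting question above.
def fahrenheit_to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

print(round(fahrenheit_to_celsius(2000)))   # ~1093, so 2000 F is about 1093 C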
Newsletter | July 27th, 2017 This week we're talking about robotic eels tracking water pollution, building DNA from scratch 👨🔬, the first floating wind farm 💨, and the first solar-powered train ☀️🚆.

Robotic eels vs water pollution
We have certainly measured water pollution before. But what about measuring water pollution with a robotic eel? Researchers at the Swiss Federal Institute of Technology 🇨🇭 and other institutions created a swimming machine that emulates an eel's undulating motion. The researchers believe this type of robot measures pollution levels faster and more effectively than other methods. The undulating motion is actually better than propeller-based robots because propellers can kick up mud and disturb aquatic life, resulting in compromised data. As the eel robot swims, it sends information back to a remote computer in real time. Even better, the robot can be programmed to swim towards more and more contaminated water, which is useful for finding sources of pollution. Nothing like using nature as an inspiration to help protect nature. 🌊👍

DNA built from scratch, full genome likely built by end of year? 😲
Forget modifying the DNA code, what about building DNA completely from scratch? Jef Boeke at New York University leads an international team of 11 labs to "rewrite" the yeast genome. The intention is that in the short term they could insert human-made DNA into living cells, possibly providing a treatment for diseases. In the long term? Possibly creating new organisms. As one scientist cautioned, "It's not only a science project. It's an ethical and moral and theological proposal of significant proportions." Rewriting the yeast genome won't be easy, requiring millions of DNA letters to be altered, added, and deleted. For this reason, they have split up the task among 11 labs spread across four continents. They are already one-third of the way through. In fact, they aim to have the rest of the yeast genome built by the end of the year! Let's see… (More)

ENVIRONMENT. The world's first floating wind turbines in 🇬🇧
Floating wind turbines are now a reality, thanks to the company Statoil (🇳🇴) and UK government subsidies. The floating turbines will be situated 15 miles from the coast of Scotland, to make a floating farm called Hywind. It's expected that the floating turbines will produce enough power for 20,000 homes. The cost of wind power has seen a significant drop in recent years and may soon be cheaper than nuclear power. The UK has been a leading nation in the deployment of wind power as an alternative energy source. They have the world's largest turbines in the Irish Sea off the north-west of England. Guess who has the biggest collection of wind power on the planet? The UK. In fact, at the moment the collective capacity of all of the UK's wind turbines can power 4.3 million homes. And they're looking to double their wind capacity by 2020. 🇬🇧💨

More firsts, the first solar-powered train 🇮🇳
The first solar-powered train started moving this past week, along a 12.5-mile route near Delhi, India. The train isn't fully solar-powered, it is a diesel-electric hybrid, but it is certainly a step in the right direction. Its solar-powered battery can power the train by itself for up to 72 hours. While Indian Railways has used embedded solar panels since 2015, they only used the solar power to run interior lights and air conditioning. Now, the entire train can run from solar power, again, for up to 72 hours.
The UK is also looking to get its trains off the grid and power them from the sun. However, the UK project would likely source its solar power from elsewhere, not directly mounted on the trains themselves. That makes the Indian solar-powered trains that much more unique: panels on the train itself power the train forward. 🚆+☀️ is pretty futuristic. 👍
How can I make CalFUSE run faster?
CalFUSE requires two things of your machine: memory and I/O. To help the program run more quickly, turn off other memory-intensive applications, especially IDL. If your computer has a fiber-optically-coupled disk, take advantage of its increased speed by moving your data and calibration files there.

Trouble Compiling CalFUSE
Q: I just tried to do a "make install" of CalFUSE. It got as far as trying to compile slalib, but then crashed because it seemed to be trying to use fort77 instead of f77 to do the compiling. What's up?
A: We've seen this problem before. Typing "make -e install" instead of "make install" should do the trick.

Q: What do exposure numbers like 915 mean? e.g., B09001019151attagfraw.fit
A: The bright earth. When the earth passed between the spacecraft and the target, we sometimes left the detector high voltage on to get a downward-looking airglow spectrum. The resulting raw data files received exposure numbers beginning with 901. Early in the mission, we used to take such data frequently. However, because the repeated airglow exposures were drawing charge out of the detectors for no good gain, the practice was largely stopped, and only performed occasionally for the rest of the mission. Caveat: Bright-earth exposures may not end when the target reappeared from behind the earth. If the spacecraft was within ten minutes of local noon or the pointing was predicted to be unstable, target acquisition was likely to fail, so the airglow exposure continued until the situation improved. If the target was still in the aperture, the spacecraft continued to observe it and the resulting airglow exposure included a target spectrum. To find out, check the count-rate plots for the airglow exposures. CalFUSE rejects data obtained during earth occultation, so the extracted spectra contain only target data. The observation-level (*all*) spectra do not, however, include contributions from 900-level exposures.

Modified Julian Date
Q: CalFUSE converts wavelengths to a heliocentric reference frame. Does it convert the exposure start and stop times (EXPSTART and EXPEND) to a heliocentric frame as well?
A: No. CalFUSE does not convert the exposure start and stop times to a heliocentric frame of reference. The header keywords EXPSTART and EXPEND are genuine geocentric modified Julian dates, i.e., Julian dates with 2400000.5 subtracted.

Local Standard of Rest
Q: What is the difference between dynamical LSR and standard solar LSR?
A: The component of the sun's motion in the direction of the target, assuming the standard solar motion of 20 km s⁻¹ towards RA 18h, Dec +30° (1900), is written to the header keyword V_LSRSTD. In V_LSRDYN, we provide the sun's velocity toward the target assuming the dynamical solar motion of 16.6 km s⁻¹ towards l = 53, b = +25. V_LSRDYN and V_LSRSTD are provided for informational purposes only. They are not used by the calibration software.

Reference for FUSE Flux Calibration
Q: What value should I quote for the accuracy of the FUSE flux calibration? Can you give me an appropriate reference?
A: The accuracy of the FUSE flux calibration is about 10%, but the exact value depends on the kind of science you are trying to do.
For details, see "CalFUSE Version 3: A Data-Reduction Pipeline for the Far Ultraviolet Spectroscopic Explorer" (Dixon et al. 2007).

Low Count Rates
Q: Why do my emission-line stars show systematically lower C III 977 Å fluxes through the FUSE MDRS aperture than when observed with HUT or ORFEUS?
A: Poor alignment of the SiC channels is most likely at fault: if the star drifts out of the slit during an observation, you will measure lower fluxes. If your spectra have emission lines in the 1000-1100 Å region, then you may be able to scale the SiC spectra to match the flux observed with the LiF 1A channel. If your data were obtained in time-tag mode, then you might be able to exclude times when the star was out of the aperture. If your data were taken in histogram mode, you could use the count-rate plots to estimate the fraction of the exposure when the star was out of the aperture and scale the line fluxes accordingly.

Q: The burst-rejection routine often throws out the last few seconds of my data, even when the count-rate plots show no increase in the flux. What's up?
A: This may not be an error. If the observation ended as the spacecraft moved into the SAA or the target approached the earth limb, the resulting rise in the background count rate may be interpreted as a mild burst. (Because the count-rate plots are logarithmic, small changes in the count rate may be hard to see.) In any event, it is easy to stop this behavior by tinkering with the keywords that control the burst-rejection algorithm. Details may be found in The CalFUSE Pipeline Reference Guide.

Q: I've just installed CalFUSE v3.2 and am running cf_jitter to produce new jitter files. What does this message mean? cf_jitter-3.3: Housekeeping file format is out of date.
A: We've added two header keywords to the housekeeping files that allow cf_jitter to compute the "reference quaternions" (that is, the desired spacecraft orientation) directly from the target's RA and DEC. Most of the time, the reference quaternions are inferred from the pointing arrays in the housekeeping file. This message tells you that your housekeeping files lack these new keywords. It may be safely ignored.

Q: I've just installed CalFUSE v3.2 and am seeing a new warning message in my trailer files. What does it mean? Should I worry about it? cf_satellite_jitter-1.31: APER_COR = SKIPPED. Omitting photon shift.
A: The two components of the jitter correction are now performed by two separate subroutines. The first, cf_screen_jitter (header keyword APER_COR), flags times when the target is out of the aperture. The second, cf_satellite_jitter (header keyword JITR_COR), corrects the coordinates of individual photons for spacecraft drift. The two programs talk to each other through the file-header keyword APER_COR. If cf_screen_jitter runs to completion, it sets APER_COR to COMPLETE. Later, cf_satellite_jitter checks this keyword. If its value is anything but COMPLETE, the program exits with the above message. Why cf_screen_jitter was unable to do its work is a separate question, which you can answer by examining the trailer file for this exposure.

Q: The jitter-screening routine does not always flag times when my star is out of the aperture. Do you have any tools that do a better job?
A: We've written a little IDL script that attempts to identify times when a bright target is out of the aperture.
It uses the count-rate array in the timeline table of the intermediate data file (IDF) to construct a count-rate histogram, then rejects times when the count rate is more than 3 sigma below the mean. For such times, the jitter bit is set in both the timeline table and the photon list. You can download the tool from the FUSE IDL Tools web page.

Q: I was planning to use PHALOW and PHAHIGH of 4 and 16 for LiF 1A to improve the background in my time-tagged data. What values do you recommend?
A: We no longer recommend the use of narrow pulse-height ranges to reduce the detector background in FUSE data. Careful analysis has shown that limits more stringent than the default values (roughly 2-24, depending on the detector) can result in significant flux losses across small regions of the detectors, resulting in apparent absorption features that simply aren't real.

Q: Why does CalFUSE do such a bad job of subtracting the background from SiC 1 spectra of external galaxies? Here's an example.
A: Because your target is a galaxy, the file-header keyword SRC_TYPE is set to EC, for "extended continuum" source. In such cases, CalFUSE employs an extraction window that is considerably larger (in the Y dimension) than that used for point-source targets. On side 1, the SiC LWRS spectrum falls near the lower edge of the detector, where the background rises steeply and is difficult to model. The extended-source aperture extends into this region of enhanced background, which the pipeline is unable to subtract properly. If the emitting region is more point-like than extended, then the solution is simple: change the keyword SRC_TYPE from EC to PC and re-extract the spectrum. You don't even need to re-run the rest of the pipeline. The smaller extraction window no longer includes the offending background region, and the resulting spectrum is better behaved. Be sure to combine the IDF and BPM files from the individual exposures before running cf_extract_spectra. This increases the signal on unilluminated regions of the detector and enables the software to perform a multi-component fit to the background.

Q: Some recent data sets show strong background contamination at short wavelengths. What's going on?
A: What you are seeing is scattered Lyman continuum emission from the sun. Since the failure of the third reaction wheel in December 2004, we have experimented with the use of off-nominal roll angles to improve spacecraft stability. Sometimes, these roll angles place the spacecraft in a configuration that allows scattered sunlight to contaminate one of the SiC channels. We have no way to model or subtract this emission.

Q: CalFUSE gives warnings about the detector voltage changing during an exposure, but the data look OK to me. Is this something that I should be worried about?
A: If the header keywords indicate that the detector voltage was high, low, or changed during an exposure, CalFUSE writes a warning message to the trailer file. If a valid housekeeping file is available for the exposure, this warning may be safely ignored, because the pipeline uses housekeeping information to populate the high-voltage array in the timeline table and properly excludes time intervals when the voltage was low. If the housekeeping file is not present, each entry of the high-voltage array is set to the "HV bias maximum setting" reported in the IDF header. In this case, the pipeline has no information about time-dependent changes in the detector high voltage, and warnings about voltage-level changes should be investigated by the user.
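As a rough illustration of the jitter-screening idea described above (flagging times when the count rate of a bright target drops well below its typical value), here is a minimal Python sketch. It is not the FUSE team's IDL tool; the 3-sigma threshold and the example rates are placeholders.

import numpy as np

def flag_low_rate_times(count_rate, nsigma=3.0):
    """Return a boolean mask that is True where the count rate falls more than
    nsigma standard deviations below the mean (target presumed out of the aperture)."""
    rate = np.asarray(count_rate, dtype=float)
    return rate < rate.mean() - nsigma * rate.std()

# Example with made-up count rates: 28 'good' intervals and 2 dropouts.
rates = np.concatenate([np.full(28, 50.0), [3.0, 2.0]])
print(np.where(flag_low_rate_times(rates))[0])   # -> [28 29]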
Q: Does your optimal-extraction algorithm emphasize the fixed-pattern noise in FUSE spectra?
A: It shouldn't. Fixed-pattern noise is a problem only for high signal-to-noise data sets, which generally come from bright targets. The weighting scheme used by the optimal-extraction algorithm is essentially Fxy/(Fxy+Bxy) - where F is flux and B is background - normalized to conserve flux. Each column in the image is considered separately. So if the signal is much brighter than the background, we weight the pixels along a given column uniformly. This weighting should smooth out the fixed-pattern noise, rather than enhance it. For more information on the optimal-extraction algorithm, see (Dixon et al. 2007).

Q: IDL is installed on my machine, but CalFUSE does not produce the detector-image and count-rate plots. What's wrong?
A: If IDL is installed on your machine, then the IDL routines should run. My guess is that they are crashing, but you don't know it, because all of the output from IDL is sent to /dev/null. All that I can suggest is that you look at these error messages and see if you can figure out the problem. You can do this in one of two ways. In the directory calfuse/v3.2/bin are two Perl scripts, idlplot_spex.pl and idlplot_rate.pl, which create the detector-image and count-rate plots, respectively. You could modify them to write their output to a file, rather than /dev/null. Another option is to run the programs interactively:
> idl
> .run calfuse/v3.2/idl/cf_plot_extract3
> cf_plot_extract3, 'E54201050252bttagf'
The other routine is called cf_plot_rate3.

Q: When I run CalFUSE, I don't get any count-rate or detector-image plots. The perl scripts generate a batch file for IDL to execute, but the call to IDL just hangs. I'm running the bash shell on a Mac.
A: These plots are generated by a couple of perl scripts, idl_obsplot.pl and idlplot_rate.pl, that live in the calfuse/v3.2/bin directory. The scripts construct batch files containing the commands needed to generate the plots, then feed them to IDL. If IDL isn't reading the files properly, try changing the line
system("idl $batch_filename > /dev/null");
to
system("idl < $batch_filename > /dev/null");
in both scripts.

Combining Data from Multiple Exposures
Q: Is it better to combine multiple IDFs and extract a single spectrum or to extract the spectra separately and cross-correlate them?
A: The standard advice is that, for bright targets, you want to optimize spectral resolution, so you should extract and cross-correlate the individual spectra. For faint targets, you want the best possible background subtraction, so you should combine the individual IDFs into a single file. There's an implicit trade-off between spectral resolution and background fidelity, but the presumption is that resolution is less important for faint targets. For complete instructions, see the FUSE Tools in C web page. If you want to have your cake and eat it, too, you can shift the photons in the individual IDFs before combining them. For example, you could extract the individual spectra, determine the appropriate shifts for each, apply them to the IDFs, then run idf_combine. John Grimes has written a tool to shift spectra within an IDF. See the FUSE IDL Tools web page.

Q: What's the easiest way to extract night-only spectra from FUSE data?
A: Broadly speaking, there are two ways to exclude day-time photons from your data: either modify the screening files before running the pipeline or modify the IDFs afterward.
If you are starting with raw files, simply change the keyword DAYNIGHT from BOTH to NIGHT in the scrn*.fit files (in the parmfiles directory), then run the pipeline as usual. Photons obtained during orbital day will be flagged as bad and excluded from the extracted spectrum. If you already have IDFs, you can use the IDL tool cf_edit to filter the data interactively. Another option is the C program idf_screen, which runs faster and (if you like) in batch mode. It is described on the FUSE Tools in C web page.

FUSE and IDL
Q: How can I read the CalFUSE output files into IDL?
A: CalFUSE output files are in FITS format with the data stored as binary table extensions. The extracted spectral files (*fcal.fit) employ the standard binary table format, listing wavelength, flux, error, etc. for each wavelength bin. The intermediate data files (*idf.fit) have a different format, listing all of the photon arrival times, then the X coordinates, then the Y coordinates. Formally, the table has only one row, and each element is an array. Both file formats can be read using the MRDFITS command from the IDL Astronomy User's Library. Note that extensions 1 and 3 of the IDF must be read using the /fscale keyword.
idl> a=mrdfits('P99901010011attagfidf.fit',1,/fscale)
idl> b=mrdfits('P99901010011alif4ttagfcal.fit',1)
Elements of individual arrays in the IDFs are addressed using the syntax
idl> print, a.time[3:30]
while, for the extracted spectral files, the syntax is
idl> print, b[3:30].wave

FUSE and IRAF
Q: How can I read CalFUSE output files in IRAF?
A: CalFUSE extracted spectral files (*fcal.fit) must first be converted into STSDAS tables, then into image files that IRAF can read. The program mkmultispec writes wavelength information to the file header. First we load the appropriate packages:
sts tables fitsio hst_calib ctools
... then perform the conversion:
cl> strfits A15101010141alif4ttagfcal.fit * input
cl> tabim input.tab flux flux 1
cl> mkmultispec flux input.tab[wave] ""
The result is a file called flux.imh containing a single array of flux values. Additional arrays (error bars, raw counts, etc.) can be extracted in the same way.

FITS File-Header Keywords
Q: Is there an easy way to modify FITS header keywords?
A: If CalFUSE is installed on your computer, you have access to a nifty little program called modhead, which enables you to read or modify header keywords interactively. Here are a couple of examples:
> modhead scrn1a015.fit DAYNIGHT
DAYNIGHT= 'BOTH ' / Use only DAY, NIGHT or BOTH
> modhead scrn1a015.fit DAYNIGHT NIGHT
DAYNIGHT= 'BOTH ' / Use only DAY, NIGHT or BOTH
Keyword has been changed to:
DAYNIGHT= 'NIGHT ' / Use only DAY, NIGHT or BOTH
The program can handle both numerical and string arguments, but is confused by multiple-word strings. Another convenient tool is hedit, found in the images.imutil package. If you want to modify many FITS files at once, you can take advantage of IRAF's scripting capabilities to run hedit multiple times. For more information, see the hedit help page.
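For completeness, here is a possible Python route alongside the IDL and IRAF recipes above, using astropy.io.fits to read an extracted-spectrum (*fcal.fit) binary table. The column names used below ('WAVE', 'FLUX') are assumptions based on the description above rather than values taken from the CalFUSE documentation, so check hdul[1].columns on your own files first.

from astropy.io import fits

# Read a CalFUSE extracted spectrum (*fcal.fit); the binary table lives in
# the first extension.  Column names here are assumed, not guaranteed.
with fits.open("P99901010011alif4ttagfcal.fit") as hdul:
    print(hdul[1].columns)            # list the actual column names
    table = hdul[1].data
    wave = table["WAVE"]
    flux = table["FLUX"]
    print(wave[3:31], flux[3:31])     # same slice as the IDL example above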
Hurricane Gert could save Britain's summer, with early reports suggesting a bright, warm start to next week. There was early hesitation about how badly Britain's weather would be affected by the hurricane that formed earlier this week in the Atlantic. Weather models can struggle with recurving hurricanes and tropical storms, as their impacts are difficult to predict, and this lowers confidence in forecasts. But the latest forecasts indicate that Hurricane Gert could pull low pressure away from the north of Scotland and allow highs from the south-west to take charge of the UK's weather. Gert will weaken as it crosses the Atlantic, and with its remnants swallowed up by the low, this creates an opening for a ridge of high pressure to take charge, bringing some pleasant weather for much of the UK. The Weather Channel confirmed the remnants of Gert would merge with the low from Newfoundland and head north-west of the UK on Sunday, with the worst of the weather moving north. Before then, there will be some quite potent weather for parts of the UK, with very heavy rain and high winds in the north-west. The south will be mainly clear and dry. Pressure continues to build on Tuesday and it is predicted to be dry and bright for many, but there could still be some cloud and rain. Pleasant conditions are set to continue on Wednesday before another north-westerly flow appears. Forecaster Hannah Findley, of The Weather Channel, said: "The low will pass to the north of the UK on Sunday, Monday and Tuesday, with associated cloud and rain mainly affecting the north. "The south of England and Wales will be more settled, largely dry, with maximum temperatures in the low 20s C, around normal for the time of year. "This area of low pressure will weaken and move eastwards, allowing higher pressure to build in behind from the end of Tuesday. "The second half of next week is forecast to be more settled, mostly dry and with some bright spells, although still plenty of cloud, keeping temperatures around normal for the time of year, but certainly not wall-to-wall sunshine or a heatwave."
Dust is everywhere -- not just in your attic or under your bed, but also in outer space. To astronomers, dust can be a nuisance by blocking the light of distant stars, or it can be a tool to study the history of our universe, galaxy, and Solar System.

An electron microscope image of a micron-sized supernova silicon carbide, SiC, stardust grain (lower right) extracted from a primitive meteorite. Such grains originated more than 4.6 billion years ago in the ashes of Type II supernovae, typified here (upper left) by a Hubble Space Telescope image of the Crab Nebula, the remnant of a supernova explosion in 1054. Laboratory analysis of such tiny dust grains provides unique information on these massive stellar explosions. (1 μm is one millionth of a meter.) Image credits: NASA and Larry Nittler

For example, astronomers have been trying to explain why some recently discovered distant, but young, galaxies contain massive amounts of dust. These observations indicate that type II supernovae--explosions of stars more than ten times as massive as the Sun--produce copious amounts of dust, but how and when they do so is not well understood. New work from a team of Carnegie cosmochemists published by Science Advances reports analyses of carbon-rich dust grains extracted from meteorites that show that these grains formed in the outflows from one or more type II supernovae more than two years after the progenitor stars exploded. This dust was then blown into space to be eventually incorporated into new stellar systems, including, in this case, our own. The researchers--led by postdoctoral researcher Nan Liu, along with Larry Nittler, Conel Alexander, and Jianhua Wang of Carnegie's Department of Terrestrial Magnetism--came to their conclusion not by studying supernovae with telescopes. Rather, they analyzed microscopic silicon carbide, SiC, dust grains that formed in supernovae more than 4.6 billion years ago and were trapped in meteorites as our Solar System formed from the ashes of the galaxy's previous generations of stars. Some meteorites have been known for decades to contain a record of the original building blocks of the Solar System, including stardust grains that formed in prior generations of stars. "Because these presolar grains are literally stardust that can be studied in detail in the laboratory," explained Nittler, "they are excellent probes of a range of astrophysical processes." For this study, the team set out to investigate the timing of supernova dust formation by measuring isotopes--versions of elements with the same number of protons but different numbers of neutrons--in rare presolar silicon carbide grains with compositions indicating that they formed in type II supernovae. Certain isotopes enable scientists to establish a time frame for cosmic events because they are radioactive. In these instances, the number of neutrons present in the isotope makes it unstable. To gain stability, it releases energetic particles in a way that alters the number of protons and neutrons, transmuting it into a different element. The Carnegie team focused on a rare isotope of titanium, titanium-49, because this isotope is the product of radioactive decay of vanadium-49, which is produced during supernova explosions and transmutes into titanium-49 with a half-life of 330 days. How much titanium-49 gets incorporated into a supernova dust grain thus depends on when the grain forms after the explosion.
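The 330-day half-life quoted above is what makes this clock work; the sketch below (our illustration, not the team's analysis code) shows how the fraction of vanadium-49 that has already decayed to titanium-49 grows with the time elapsed since the explosion.

import math

HALF_LIFE_DAYS = 330.0                               # 49V -> 49Ti, from the text above
DECAY_CONST = math.log(2) / HALF_LIFE_DAYS           # decay constant, per day

def fraction_decayed(t_days):
    """Fraction of the initial 49V that has become 49Ti after t_days."""
    return 1.0 - math.exp(-DECAY_CONST * t_days)

for years in (0.5, 1, 2, 3):
    t = years * 365.25
    print(f"{years:4} yr after the explosion: {fraction_decayed(t):.0%} of the 49V has decayed")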
Using a state-of-the-art mass spectrometer to measure the titanium isotopes in supernova SiC grains with much better precision than could be accomplished by previous studies, the team found that the grains must have formed at least two years after their massive parent stars exploded. Because presolar supernova graphite grains are isotopically similar in many ways to the SiC grains, the team also argues that the delayed formation timing applies generally to carbon-rich supernova dust, in line with some recent theoretical calculations. "This dust-formation process can occur continuously for years, with the dust slowly building up over time, which aligns with astronomers' observations of varying amounts of dust surrounding the sites of stellar explosions," added lead author Liu. "As we learn more about the sources for dust, we can gain additional knowledge about the history of the universe and how various stellar objects within it evolve." This work was funded by NASA's Cosmochemistry program. The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S. Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.

Larry Nittler | EurekAlert!
<urn:uuid:b371bac2-8691-4ab5-a95f-0de975dcffe3>
3.96875
1,604
Content Listing
Science & Tech.
34.717386
95,543,328
How much deeper would the oceans be without sponges?
06-27-2002, 02:59 AM
Stupid but funny. And needs an answer.
06-27-2002, 03:07 AM
Sponges do not exert enough pressure on water to compress it. So sponges cannot actually cause water to take up less volume. If you removed sponges, I imagine the ocean would probably get shallower (by maybe one-millionth the width of a hair) because you don't have those sponges there taking up room. Seriously, did you think about this for two seconds? Fill a bucket with water. Add a sponge. See if the water level goes down.
06-27-2002, 03:25 AM
The answer to your next question is 'As much wood as a woodchuck would if a woodchuck could chuck wood'.
06-27-2002, 03:28 AM
Actually, if you removed the sponges, the oceans would get shallower, as Achernar said, but that doesn't mean that they are any deeper than they would be if there never had been any sponges, since (pretty much) all of the nutrients from which the sponges are composed were already in solution and therefore part of the volume of the sea anyway (although the rearrangement of matter into more complex molecules may have brought about a change in volume).
06-27-2002, 03:58 AM
"Sponges grow in the ocean...that floors me...do you realize how much deeper the oceans would be if that didn't happen?" --Steven Wright :D
06-27-2002, 06:53 AM
Depends what you mean by "deep" - - Bill Clinton
06-27-2002, 06:57 AM
I love Steve Wright. But this isn't a General Question. This is closed. DrMatrix - General Questions Moderator
<urn:uuid:0025178b-17d0-405f-8235-d585121ce701>
3.359375
440
Comment Section
Science & Tech.
82.295026
95,543,335
Obligate association with gut bacterial symbiont in Japanese populations of the southern green stinkbug Nezara viridula (Heteroptera: Pentatomidae)
The southern green stinkbug Nezara viridula (Linnaeus) has a number of sac-like outgrowths, called crypts, in a posterior section of the midgut, wherein a specific bacterial symbiont is harbored. In previous studies on N. viridula from Hawaiian populations, experimental elimination of the symbiont caused few fitness defects in the host insect. Here we report that N. viridula from Japanese populations consistently harbors the same gammaproteobacterial gut symbiont but, in contrast with previous work, experimental sterilization of the symbiont resulted in severe nymphal mortality, indicating an obligate host–symbiont relationship. Considering the worldwide host–symbiont association and these experimental data, we suggest that N. viridula is generally and obligatorily associated with the gut symbiont, but that the effect of the symbiont on host biology may differ among geographic populations. Possible environmental factors that may affect the host–symbiont relationship are discussed.
Keywords: Nezara viridula; symbiotic bacterium; midgut crypts; Gammaproteobacteria; obligate symbiosis
We thank M. Baba and Y. G. Baba for insect samples. This study was supported by the Program for Promotion of Basic and Applied Research for Innovations in Bio-oriented Industry (BRAIN), the Japan Society for the Promotion of Science (JSPS), and The Council for Grants of the President of the Russian Federation and State Support of the Leading Scientific Schools (project # 3332.2010.4).
<urn:uuid:2ad322b1-ffd1-4108-99d5-e846b28df0b1>
2.625
634
Academic Writing
Science & Tech.
28.695132
95,543,349
Most of the meteorites that we collect on Earth come from the main belt of asteroids located between Mars and Jupiter. They were ejected from their asteroidal "parent body" after a collision, were injected into a new orbit, and finally fell onto the Earth. Meteorites are a major tool for understanding the history of the solar system because their composition is a record of past geologic processes that occurred while they were still incorporated in the parent asteroid. One fundamental difficulty is that we do not know exactly where the majority of meteorite specimens come from within the asteroidal main belt. For many years, astronomers failed to discover the parent body of the most common meteorites, the ordinary chondrites, which represent 75% of all collected meteorites. To find the source asteroid of a meteorite, astronomers must compare the spectra of the meteorite specimen to those of asteroids. This is a difficult task because meteorites and their parent bodies underwent different processes after the meteorite was ejected. In particular, asteroidal surfaces are known to be altered by a process called "space weathering", which is probably caused by micrometeorite and solar wind action that progressively transforms the spectra of asteroidal surfaces. Hence, the spectral properties of asteroids become different from those of their associated meteorites, making the identification of the asteroidal parent body more difficult. Collisions are the main process to affect asteroids. As a consequence of a strong impact, an asteroid can be broken up, its fragments following the same orbit as the primary asteroid. These fragments constitute what astronomers call "asteroid families". Until recently, most of the known asteroid families have been very old (they were formed 100 million to billions of years ago). Indeed, younger families are more difficult to detect because their asteroids are closer to each other. In 2006, four new, extremely young asteroid families were identified, with ages ranging from 50,000 to 600,000 years. These fragments should be less affected than older families by space weathering after the initial breakup. Mothé-Diniz and Nesvorný then observed these asteroids, using the GEMINI telescopes (one located in Hawaii, the other in Chile), and obtained visible spectra. They compared the asteroids' spectra to that of an ordinary chondrite (the Fayetteville meteorite) and found good agreement, as illustrated in Fig. 1. This discovery is the first observational match between the most common meteorites and asteroids in the main belt. It also confirms the role of space weathering in altering asteroid surfaces. Identifying the asteroidal parent body of a meteorite is a unique tool when studying the history of our solar system because one can infer both the time of geological events (from the meteorite, which can be analyzed through dating techniques) and their location in the solar system (from the location of the parent asteroid).
Notes: There are only a few exceptions, including the example of the famous meteorites coming from Mars. After the primary asteroid is disrupted, the fragments move away from each other; the older the collision, the greater the distance between fragments. Meteorites are named for the place they were collected; the Fayetteville meteorite fell near Fayetteville, Arkansas, on December 26, 1934.
<urn:uuid:0c7f04a4-928b-444d-a5a9-133fe166b7f9>
4.75
1,246
Content Listing
Science & Tech.
32.726886
95,543,354
PERLFAQ1(1) Perl Programmers Reference Guide PERLFAQ1(1)
perlfaq1 - General Questions About Perl
This section of the FAQ answers very general, high-level questions about Perl.

What is Perl?
Perl is a high-level programming language with an eclectic heritage written by Larry Wall and a cast of thousands. Perl's process, file, and text manipulation facilities make it particularly well-suited for tasks involving quick prototyping, system utilities, software tools, system management tasks, database access, graphical programming, networking, and web programming. Perl derives from the ubiquitous C programming language and to a lesser extent from sed, awk, the Unix shell, and many other tools and languages. These strengths make it especially popular with web developers and system administrators. Mathematicians, geneticists, journalists, managers and many other people also use Perl.

Who supports Perl? Who develops it? Why is it free?
The original culture of the pre-populist Internet and the deeply-held beliefs of Perl's author, Larry Wall, gave rise to the free and open distribution policy of Perl. Perl is supported by its users. The core, the standard Perl library, the optional modules, and the documentation you're reading now were all written by volunteers. The core development team (known as the Perl Porters) are a group of highly altruistic individuals committed to producing better software for free than you could hope to purchase for money. You may snoop on pending developments via the archives <http://www.nntp.perl.org/group/perl.perl5.porters/> or read the faq <http://dev.perl.org/perl5/docs/p5p-faq.html>, or you can subscribe to the mailing list by sending email@example.com a subscription request (an empty message with no subject is fine).
While the GNU project includes Perl in its distributions, there's no such thing as "GNU Perl". Perl is not produced nor maintained by the Free Software Foundation. Perl's licensing terms are also more open than GNU software's tend to be. You can get commercial support of Perl if you wish, although for most users the informal support will more than suffice. See the answer to "Where can I buy a commercial version of Perl?" for more information.

Which version of Perl should I use?
(contributed by brian d foy) This is often a matter of opinion and taste, and there isn't any one answer that fits everyone. In general, you want to use either the current stable release, or the stable release immediately prior to that one. Currently, those are perl5.14.x and perl5.12.x. Beyond that, you have to consider several things and decide which is best for you.
o If things aren't broken, upgrading perl may break them (or at least issue new warnings).
o The latest versions of perl have more bug fixes.
o The Perl community is geared toward supporting the most recent releases, so you'll have an easier time finding help for those.
o Versions prior to perl5.004 had serious security problems with buffer overflows, and in some cases have CERT advisories.
o The latest versions are probably the least deployed and widely tested, so you may want to wait a few months after their release and see what problems others have if you are risk averse.
o The immediate, previous releases (i.e. perl5.8.x) are usually maintained for a while, although not at the same level as the current releases.
o No one is actively supporting Perl 4. Ten years ago it was a dead camel carcass (according to this document). Now it's barely a skeleton as its whitewashed bones have fractured or eroded.
o The current leading implementation of Perl 6, Rakudo, released a "useful, usable, 'early adopter'" distribution of Perl 6 (called Rakudo Star) in July of 2010. Please see <http://rakudo.org/> for more information.
o There are really two tracks of perl development: a maintenance version and an experimental version. The maintenance versions are stable, and have an even number as the minor release (i.e. perl5.10.x, where 10 is the minor release). The experimental versions may include features that don't make it into the stable versions, and have an odd number as the minor release (i.e. perl5.9.x, where 9 is the minor release).

What are Perl 4, Perl 5, or Perl 6?
In short, Perl 4 is the parent to both Perl 5 and Perl 6. Perl 5 is the older sibling, and though they are different languages, someone who knows one will spot many similarities in the other. The number after Perl (i.e. the 5 after Perl 5) is the major release of the perl interpreter as well as the version of the language. Each major version has significant differences that earlier versions cannot support. The current major release of Perl is Perl 5, first released in 1994. It can run scripts from the previous major release, Perl 4 (March 1991), but has significant differences. Perl 6 is a reinvention of Perl: it is a language in the same lineage, but not compatible. The two are complementary, not mutually exclusive. Perl 6 is not meant to replace Perl 5, and vice versa. See "What is Perl 6?" below to find out more. See perlhist for a history of Perl revisions.

What is Perl 6?
Perl 6 was originally described as the community's rewrite of Perl 5. Development started in 2002; syntax and design work continue to this day. As the language has evolved, it has become clear that it is a separate language, incompatible with Perl 5 but in the same lineage. Contrary to popular belief, Perl 6 and Perl 5 peacefully coexist with one another. Perl 6 has proven to be a fascinating source of ideas for those using Perl 5 (the Moose object system is a well-known example). There is overlap in the communities, and this overlap fosters the tradition of sharing and borrowing that has been instrumental to Perl's success. The current leading implementation of Perl 6 is Rakudo, and you can learn more about it at <http://rakudo.org>. If you want to learn more about Perl 6, or have a desire to help in the crusade to make Perl a better place, then read the Perl 6 developers page at <http://www.perl6.org/> and get involved.
"We're really serious about reinventing everything that needs reinventing." --Larry Wall

How stable is Perl?
Production releases, which incorporate bug fixes and new functionality, are widely tested before release. Since the 5.000 release, we have averaged about one production release per year. The Perl development team occasionally make changes to the internal core of the language, but all possible efforts are made toward backward compatibility.

Is Perl difficult to learn?
No, Perl is easy to start learning <http://learn.perl.org/> --and easy to keep learning. It looks like most programming languages you're likely to have experience with, so if you've ever written a C program, an awk script, a shell script, or even a BASIC program, you're already partway there. Most tasks only require a small subset of the Perl language. One of the guiding mottos for Perl development is "there's more than one way to do it" (TMTOWTDI, sometimes pronounced "tim toady"). Perl's learning curve is therefore shallow (easy to learn) and long (there's a whole lot you can do if you really want).
Finally, because Perl is frequently (but not always, and certainly not by definition) an interpreted language, you can write your programs and test them without an intermediate compilation step, allowing you to experiment and test/debug quickly and easily. This ease of experimentation flattens the learning curve even more. Things that make Perl easier to learn: Unix experience, almost any kind of programming experience, an understanding of regular expressions, and the ability to understand other people's code. If there's something you need to do, then it's probably already been done, and a working example is usually available for free. Don't forget Perl modules, either. They're discussed in Part 3 of this FAQ, along with CPAN <http://www.cpan.org/>, which is discussed in Part 2.

How does Perl compare with other languages like Java, Python, REXX, Scheme, or Tcl?
Perl can be used for almost any coding problem, even ones which require integrating specialist C code for extra speed. As with any tool it can be used well or badly. Perl has many strengths and a few weaknesses; precisely which areas are good and bad is often a personal choice. When choosing a language you should also be influenced by the resources <http://www.cpan.org/>, testing culture <http://www.cpantesters.org/> and community <http://www.perl.org/community.html> which surround it. For comparisons to a specific language it is often best to create a small project in both languages and compare the results; make sure to use all the resources <http://www.cpan.org/> of each language, as a language is far more than just its syntax.

Can I do [task] in Perl?
Perl is flexible and extensible enough for you to use on virtually any task, from one-line file-processing tasks to large, elaborate systems. For many people, Perl serves as a great replacement for shell scripting. For others, it serves as a convenient, high-level replacement for most of what they'd program in low-level languages like C or C++. It's ultimately up to you (and possibly your management) which tasks you'll use Perl for and which you won't. If you have a library that provides an API, you can make any component of it available as just another Perl function or variable using a Perl extension written in C or C++ and dynamically linked into your main perl interpreter. You can also go the other direction, and write your main program in C or C++, and then link in some Perl code on the fly, to create a powerful application. See perlembed. That said, there will always be small, focused, special-purpose languages dedicated to a specific problem domain that are simply more convenient for certain kinds of problems. Perl tries to be all things to all people, but nothing special to anyone. Examples of specialized languages that come to mind include prolog and matlab.

When shouldn't I program in Perl?
One good reason is when you already have an existing application written in another language that's all done (and done well), or you have an application language specifically designed for a certain task (e.g. prolog, make). If you find that you need to speed up a specific part of a Perl application (not something you often need), you may want to use C, but you can access this from your Perl code with an extension written in C.

What's the difference between "perl" and "Perl"?
"Perl" is the name of the language. Only the "P" is capitalized. The name of the interpreter (the program which runs the Perl script) is "perl" with a lowercase "p". You may or may not choose to follow this usage.
But never write "PERL", because perl is not an acronym. What is a JAPH? (contributed by brian d foy) JAPH stands for "Just another Perl hacker,", which Randal Schwartz used to sign email and usenet messages starting in the late 1980s. He previously used the phrase with many subjects ("Just another x hacker,"), so to distinguish his JAPH, he started to write them as Perl programs: print "Just another Perl hacker,"; Other people picked up on this and started to write clever or obfuscated programs to produce the same output, spinning things quickly out of control while still providing hours of amusement for their creators and readers. CPAN has several JAPH programs at <http://www.cpan.org/misc/japh>. How can I convince others to use Perl? (contributed by brian d foy) Appeal to their self interest! If Perl is new (and thus scary) to them, find something that Perl can do to solve one of their problems. That might mean that Perl either saves them something (time, headaches, money) or gives them something (flexibility, power, In general, the benefit of a language is closely related to the skill of the people using that language. If you or your team can be faster, better, and stronger through Perl, you'll deliver more value. Remember, people often respond better to what they get out of it. If you run into resistance, figure out what those people get out of the other choice and how Perl might satisfy that requirement. You don't have to worry about finding or paying for Perl; it's freely available and several popular operating systems come with Perl. Community support in places such as Perlmonks ( <http://www.perlmonks.com> ) and the various Perl mailing lists ( <http://lists.perl.org> ) means that you can usually get quick answers to your problems. Finally, keep in mind that Perl might not be the right tool for every job. You're a much better advocate if your claims are reasonable and grounded in reality. Dogmatically advocating anything tends to make people discount your message. Be honest about possible disadvantages to your choice of Perl since any choice has trade-offs. You might find these links useful: AUTHOR AND COPYRIGHT Copyright (c) 1997-2010 Tom Christiansen, Nathan Torkington, and other authors as noted. All rights reserved. This documentation is free; you can redistribute it and/or modify it under the same terms as Perl itself. Irrespective of its distribution, all code examples here are in the public domain. You are permitted and encouraged to use this code and any derivatives thereof in your own programs for fun or for profit as you see fit. A simple comment in the code giving credit to the FAQ would be courteous but is not required. perl v5.16.3 2013-03-04 PERLFAQ1(1)
<urn:uuid:67af8cc8-52ef-4080-afce-8c3dbc914da4>
2.875
3,175
FAQ
Software Dev.
59.768242
95,543,356
Super-resolution microscopy reveals fine detail of cellular mesh underlying cell membrane
One of today's sharpest imaging tools, super-resolution microscopy, produces sparkling images of what until now has been the blurry interior of cells, detailing not only the cell's internal organs and skeleton, but also providing insights into cells' amazing flexibility. In the current issue of the journal Cell Reports, Ke Xu and his colleagues at UC Berkeley use the technique to provide a sharp view of the geodesic mesh that supports the outer membrane of a red blood cell, revealing why such cells are sturdy yet flexible enough to squeeze through narrow capillaries as they carry oxygen to our tissues. The discovery could eventually help uncover how the malaria parasite hijacks this mesh, called the sub-membrane cytoskeleton, when it invades and eventually destroys red blood cells. "People know that the parasite interacts with the cytoskeleton, but how it does it is unclear because there has been no good way to look at the structure," said Xu, an assistant professor of chemistry. "Now that we have resolved what is really going on in a normal healthy cell, we can ask what changes under infection with parasites and how drugs affect the interaction." Typical human cells have a two-dimensional skeleton that supports the outer membrane and a three-dimensional interior skeleton that supports all the organelles inside and serves as a transportation system throughout the cell. Red blood cells, however, have only the membrane supports and no internal scaffolding, so they're basically a balloon filled with molecules of oxygen-carrying hemoglobin. Because of their simpler structure, red blood cells are ideal for studying the skeleton that supports the membrane in all cells. Electron microscope images earlier showed that the sub-membrane cytoskeleton in red blood cells is a triangular mesh of proteins, reminiscent of a geodesic dome. But measurements of the size of the triangular subunits were made by flattening out the domed membrane of a dead and dried-out cell, which distorts the structure.
STORMing the cytoskeleton
Xu was a postdoctoral fellow in the Harvard University lab of one of the inventors of super-resolution microscopy, Xiaowei Zhuang, and is an expert on the version called STORM (stochastic optical reconstruction microscopy). Super-resolution microscopy gives about 10 times better resolution than standard light microscopy and works well with wet and live cells. Using STORM, Xu, former Berkeley postdoc Leiting Pan and graduate student Rui Yan were able to image the full sub-membrane cytoskeleton of fresh red blood cells and discovered that the triangles of the mesh are about half the size found in earlier measurements done with electron microscopy: each side is 80 nanometers long, instead of 190 nanometers. The distinction is critical: the building blocks of the mesh are a protein called spectrin, which can be stretched to a maximum of about 190 nanometers in length. If the mesh were made of stretched spectrin, it would be rigid, Xu said. But since its normal length is a relaxed 80 nanometers, it acts like a spring. "It is more like a spring in its relaxed state, where it has much flexibility under compression or stretching, so that gives red blood cells a lot of elasticity under different physiological conditions, such as squeezing through a narrow capillary," Yan said. At the vertices of the mesh, where five to six spectrin proteins come together, is a different protein: actin.
Actin is a standard part of the sub-membrane cytoskeleton and one of the main structural components of the cell.
Tears in the mesh
Interestingly, STORM revealed never-before-seen holes in the cytoskeletal mesh that may also be critical to its flexibility. "This is a defect in the network, but there might be a reason for it," said Xu, who is also a Chan Zuckerberg Biohub Investigator. "The cell would want to change structure rapidly as it goes through the capillaries, and having those defects is helpful in reorganizing the shape without breaking the mesh. It can act as a weak point: as they try to squeeze through things, they can start to bend around those points." Xu actually discovered the key structural role of spectrin. While still at Harvard, he used STORM to look at the skeletal structure of neurons, and discovered that actin proteins form precisely spaced rings along the entire length of the axon - which can be as much as a foot long - much like the ribs of a snake. They are separated by exactly 190 nanometers, and when he looked through textbooks for proteins with that length, he came across spectrin. He subsequently used STORM to confirm that in its stretched state, spectrin proteins are the spacers between the rings, keeping them precisely separated. "The ringed skeleton makes the axon a very stable but bendable structure," Xu said, whereas the regular spacing may be key to its electrical conductivity. Super-resolution microscopy employs a trick to overcome the diffraction limit of light microscopy, which prevents conventional light microscopes from resolving things smaller than half the size of the wavelength of the light, which for visible light is about 300 nanometers. STORM involves attaching a blinking light source to individual molecules and then isolating each light's position independently of the others, building up a complete image much like the 1880s artists who developed pointillism, producing images from individual dots of paint. Typically chemists attach these flashing sources to all molecules of the same type in a cell, such as all actin molecules, but since only a small percentage of the sources blink on at any one time, it's possible to pinpoint the exact location of each. Today's best resolution is about 10 nanometers, Xu said, which is about the size of a single protein or molecule. The work was supported by the National Natural Science Foundation of China, a Pew Biomedical Scholars Award and a Packard Fellowship for Science and Engineering. Coauthor and postdoc Wan Li contributed to experimental design and data analysis. Robert Sanders | EurekAlert!
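For context, the resolution floor mentioned above is usually written as the Abbe diffraction limit. The following is only a rough back-of-the-envelope sketch; the wavelength and numerical aperture figures are assumed typical values, not numbers quoted by the researchers.

    % Abbe diffraction limit for a conventional light microscope (illustrative sketch).
    % \lambda = wavelength of the light, NA = numerical aperture of the objective
    % (NA = 1.4 is an assumed value for a good oil-immersion objective).
    \[
      d \;\approx\; \frac{\lambda}{2\,\mathrm{NA}}
      \qquad\text{e.g.}\qquad
      d \;\approx\; \frac{550\ \text{nm}}{2 \times 1.4} \;\approx\; 200\ \text{nm}.
    \]

With lower-aperture objectives the limit approaches the "half the wavelength" figure of roughly 250 to 300 nm given in the article; STORM reaches its roughly 10 nm resolution not by changing this optical limit but by localizing isolated blinking emitters one at a time, as described above.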
<urn:uuid:4df06fc0-3895-4297-9fb3-7133def90d58>
3.078125
1,910
Content Listing
Science & Tech.
38.479845
95,543,371
This was just about the strangest water I'd ever seen. These waves developed on a small creek just as it entered Lake Michigan on the beach at Warren Dunes State Park. The water was flowing clear and smoothly until it reached the lake, when these bumps would appear. They didn't flow forward like a regular wave; they just stayed in the same place. It looked like the water was flowing over an invisible log about six or eight inches high. The water would stay that way for a half minute or so, and then either subside back to a smooth flowing stream or collapse backwards with a little whitewater crest. It was fascinating. I couldn't figure it out completely, but it was obvious that the moving stream hitting the lake waves caused some kind of interference that created the phenomenon. Later, I vaguely remembered the term "standing wave" from a physics class. A little internet searching suggests that these are a form of "hydraulic jump" that occurs when a shallow, fast-moving stream moves into deeper, slower water. I'd been to this creek numerous times in the past and never saw this happen. On Friday, I guess the speed and depth of the creek crossing the beach were just right. I'll certainly look for it in the future. Here's a short video of the standing waves.
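The hydraulic-jump explanation can be made a little more concrete with the Froude number of the incoming flow. The numbers below are purely illustrative assumptions about a small beach creek, not measurements from this one.

    % Froude number of a shallow open-channel flow (illustrative sketch).
    % v = flow speed, h = flow depth, g = gravitational acceleration.
    \[
      \mathrm{Fr} = \frac{v}{\sqrt{g\,h}}
    \]
    % A hydraulic jump, with stationary waves sitting on it, can form where
    % supercritical flow (Fr > 1) meets deeper, slower, subcritical water (Fr < 1).
    % Assuming v ~ 1 m/s and h ~ 0.05 m gives Fr ~ 1/sqrt(9.8 x 0.05) ~ 1.4,
    % i.e. supercritical; the same discharge spread into the deeper lake water
    % has a much smaller v and larger h, so Fr drops below 1 and a jump can form.

Because the jump is pinned where the fast, shallow creek water meets the deeper lake water, the waves stay in one place rather than traveling, which matches what is described above.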
<urn:uuid:12cccecd-dc4d-4995-b3e0-e3e64cb89b3a>
2.90625
320
Personal Blog
Science & Tech.
63.456667
95,543,377
Zebra mussels: The good, the bad and the ugly
Invasive species have earned their bad reputations. English sparrows compete with native birds from Newfoundland to South America. Australian brown tree snakes are well on their way to exterminating every last bird from the forests of Guam. And I don't think anyone can fully predict how Colombia's rivers will change in response to drug lord Pablo Escobar's escaped hippopotamus population. While our climate protects us from rampaging hippos, the Northeast has plenty of exotic species in its waterways, including some that cause serious damage. Zebra mussels are possibly the most familiar of these. They were first discovered in Lake Champlain in 1993 by a precocious 14-year-old, Matthew Toomey, who recognized one based on an identification card he'd received at school. Since then, the mussels have spread throughout the lake and their effects have been well chronicled. They kill native mussels; coat surfaces with razor-sharp shells; foul anchor chains; block water intake pipes; and steal plankton and other food from native fish. With all of the negative press regarding the species, you might find it jarring to read anything positive about zebra mussels, particularly anything written by a biologist. Discussing positive effects of invaders is practically taboo. We don't speak ill of the dead; we never praise invasive species. I'm certainly not advocating zebra mussel propagation, but like them or not, they are here to stay. These mussels are an important part of European ecosystems, and it's interesting to consider what native organisms benefit from their presence. Zebra mussels are voracious filter feeders. A single mussel can suck a liter of water through its body daily. All of this filtration removes plankton and particles from lake waters, but these particles don't just disappear. The phrase immortalized in Taro Gomi's children's book Everyone Poops (Minna Unchi) applies. Along with excrement, unpalatable particles rejected by zebra mussels are mixed with mucus and dropped on the lake floor. Mussel excrement and mucus might not sound appetizing, but it's a smorgasbord for lake floor invertebrates and fish. In addition to covering rocky surfaces, zebra mussels often carpet lake floor sand and silt. Formerly soft sediments that provided foraging grounds for sturgeon and other fish can become a tangled mess of living and dead mussels several inches thick. Not surprisingly, fish such as log perch, bullhead, and sculpins have difficulty finding their insect prey amongst the clutter of shells layered over their sandy habitats. When given the choice, juvenile sturgeon avoid zebra mussels and spend their time on sandy or rocky areas. What's bad for these predators may be good for their prey. To figure out just how good or bad zebra mussels could be for Lake Champlain invertebrates, we ran experiments under 30 feet of water in sandy areas of Appletree Bay. When my colleagues Ellen Marsden, Mark Beekey, and I fenced off lake floor patches with and without zebra mussels, twice as many invertebrates colonized areas with zebra mussels. More species also moved in. After a month, the number of species in experiments with added mussels doubled and included some species more typical of rocky lake floors. Nooks and crannies between zebra mussel shells seem to act like very small, natural shark cages that protect tiny insects from hungry fish.
And when we placed insects and fish in aquariums, far more invertebrates survived with zebra mussels than without. On balance, I would rather have a lake without zebra mussels than with them. But unless ways are found to eliminate them, it will remain important to understand how they affect native species. In Lake Champlain, the zebra mussel population grew rapidly and has since fallen below peak numbers, as often happens with this species in a new location. This month, for the first time in recent years, we pulled up a lake floor sample in Burlington Bay that entirely lacked zebra mussels. Perhaps we are reaching a new equilibrium? Declan McCabe teaches biology at St. Michael’s College. His work with student researchers on insect communities in the Champlain Basin is funded by Vermont EPSCoR’s Grant EPS-1101317 from the National Science Foundation.The illustration for this column was drawn by Adelaide Tyrol. The Outside Story is assigned and edited by Northern Woodlands magazine and sponsored by the Wellborn Ecology Fund of New Hampshire Charitable Foundation: email@example.com.
<urn:uuid:bfb77f1b-8ee1-4f03-9bab-b6f7a4cab72b>
2.890625
1,002
Nonfiction Writing
Science & Tech.
41.795269
95,543,383
Authors: George Rajna Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. 
Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement; they have uploaded it to the arXiv server for review by others. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory. Comments: 25 Pages. [v1] 2018-03-06 11:49:00 Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
<urn:uuid:2a3fc42c-9c00-470e-b317-53b5025e129c>
3.8125
1,091
Knowledge Article
Science & Tech.
17.384275
95,543,391
hi, I am a beginner. I need to know the difference between a string and a character.
A string is a sequence of characters, for example "shahrukh" or "struct11". A string is always enclosed in double quotes " ", whereas a character is a single symbol (just a single byte in C) and is enclosed within single quotes. You could say that every key on the keyboard, other than the function keys, corresponds to a character. A short example is sketched below.
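Since the thread never names a language, here is a minimal, hypothetical sketch in Java that shows the same distinction; note that in Java a char is a 16-bit value rather than the single byte of C, but the single-quote versus double-quote rule is the same.

    // Minimal illustration of char (single quotes) versus String (double quotes).
    public class StringVsChar {
        public static void main(String[] args) {
            char initial = 'S';          // one character, single quotes
            String name = "shahrukh";    // a sequence of characters, double quotes

            System.out.println(initial);            // S
            System.out.println(name);               // shahrukh
            System.out.println(name.length());      // 8 -> the string holds 8 characters
            System.out.println(name.charAt(0));     // s -> a single char taken from the string
        }
    }

Writing 'shahrukh' with single quotes, or trying to assign "S" to a char variable, would not compile in Java, which is a handy way to remember the difference.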
<urn:uuid:9d5b2808-11fd-49e1-9ccf-62c06a9902c8>
3.015625
108
Q&A Forum
Software Dev.
44.465
95,543,397
Clustered DNA lesions, also called Multiply Damaged Sites, are the hallmark of ionizing radiation. A clustered lesion is defined as the combination of two or more lesions (strand breaks, oxidatively generated base damage, abasic sites) within one or two DNA helix turns, created by the passage of a single radiation track. DSB clustered lesions combine a DSB with several base damages and abasic sites in close vicinity and are considered complex DSB. Non-DSB clustered lesions comprise single-strand breaks, base damage, and abasic sites. For radiation with low Linear Energy Transfer (LET), such as X-rays or γ-rays, clustered DNA lesions are 3–4 times more abundant than DSB. Their proportion and their complexity increase with increasing LET; they may represent a large part of the damage to DNA. Studies in vitro using engineered clustered DNA lesions of increasing complexity have greatly enhanced our understanding of how non-DSB clustered lesions are processed. Base excision repair is compromised; the observed hierarchy in the processing of the lesions within a cluster leads to the formation of SSB or DSB as repair intermediates and increases the lifetime of the lesions. As a consequence, the chances of mutation drastically increase. Complex DSB, either formed directly by irradiation or by the processing of non-DSB clustered lesions, are repaired with slow kinetics or left unrepaired, and cause cell death or pass through mitosis. In surviving cells, large deletions, translocations, and chromosomal aberrations are observed. This review details the most recent data on the processing of non-DSB clustered lesions and complex DSB, and aims to demonstrate the high significance of this specific type of DNA damage for the induction of genomic instability.
<urn:uuid:3eee19d1-53d3-4b87-909b-3602da1b659d>
2.8125
353
Academic Writing
Science & Tech.
27.470435
95,543,414
Where are all of these meteors coming from? In terms of direction on the sky, the pointed answer is the constellation of Perseus. That is why the meteor shower that peaks later this week is known as the Perseids -- the meteors all appear to come from a radiant toward Perseus. In terms of parent body, though, the sand-sized debris that makes up the Perseid meteors comes from Comet Swift-Tuttle. The comet follows a well-defined orbit around our Sun, and the part of the orbit that approaches Earth is superposed in front of the constellation Perseus. Therefore, when Earth crosses this orbit, the radiant point of falling debris appears in Perseus. Featured here, a composite image containing over 60 meteors from last August's Perseid meteor shower shows many bright meteors that streaked over Mount Shasta, California, USA. This year's Perseids promise to be the best meteor shower of the year.
<urn:uuid:95e9ebcb-cc21-4e40-960c-f17736bb40fe>
3.03125
194
Personal Blog
Science & Tech.
57.100146
95,543,419
Impact on Biodiversity
The impact of genetically modified organisms (GMOs) on "the conservation and sustainable use of biological diversity," as stated in Article 26 of the Cartagena Protocol on Biosafety (Secretariat of the Convention on Biological Diversity 2000, p. 19), has generated a heated and polarized debate. On one side is the position of those who believe that not only economic but also ethical, religious, and cultural considerations related to biodiversity should be taken into account. On the other side stand those who believe that the impact of GMOs on the environment should be limited to environmental assessments. Whatever the position, generating a useful analysis of the impacts of GMOs on biological diversity requires thoughtful definition of concepts and selection of appropriate valuation methods.
Keywords: Ecosystem service; Taxonomic diversity; Plant genetic resource; Hedonic price; Ecosystem service valuation
<urn:uuid:1ca92159-8946-483f-9760-3e72e8e34a4b>
3.1875
764
Academic Writing
Science & Tech.
30.844504
95,543,486
Java - Interview Questions and Answers
Ans. Yes, we can substitute outer classes wherever we need inner classes, but inner classes have advantages in certain cases and are hence preferred:
Ease - Why implement a class outside if its objects are only intended to be part of an outer object? It is easy to define the class within another class if the use is only local.
Protection - Making it an outer class exposes it to being used by any other class. Why should it be made an outer class if its objects should only occur as part of other objects? For example, you may like to have a class Address whose object should have a reference to a City, and by design that is the only use of City in your application. Making Address and City both outer classes exposes City to any class; making City an inner class of Address makes sure that it is accessed through an object of Address.
Sample code for an inner class is sketched below, after the related questions.
Related questions:
What is the difference between the following two code lines? 1. new OuterClass().new InnerClass(); 2. new OuterClass.InnerClass();
What are the different types of inner classes?
What is the difference between an inner class and a subclass?
Which access specifier can be used with a class?
Difference between nested and inner classes?
Which of the following cannot be marked static?
Explain the use of nested or inner classes.
What is the benefit of inner / nested classes?
Explain inner classes.
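The original entry refers to sample code that is not included, so the following is a hypothetical sketch of the Address / City design described above; the class, field, and method names are made up for illustration.

    // Sketch: City as a member (non-static) inner class, so a City only ever
    // exists as part of an Address, matching the design argued for above.
    public class Address {

        private final String street;
        private final City city;   // by design, the only place a City is referenced

        public Address(String street, String cityName) {
            this.street = street;
            this.city = new City(cityName);   // created by its enclosing Address
        }

        public String describe() {
            return street + ", " + city.getName();
        }

        // Member inner class: instances are tied to an enclosing Address instance.
        public class City {
            private final String name;

            private City(String name) {
                this.name = name;
            }

            public String getName() {
                return name;
            }
        }

        public static void main(String[] args) {
            Address home = new Address("12 Main St", "Raleigh");
            System.out.println(home.describe());   // 12 Main St, Raleigh

            // Instantiating a non-static inner class from outside its constructor
            // needs an enclosing instance -- this is the 'new OuterClass().new InnerClass()'
            // form from the first related question:
            Address.City c = home.new City("Durham");
            System.out.println(c.getName());       // Durham

            // 'new OuterClass.InnerClass()' (the second form) would only compile
            // if City were declared as a static nested class.
        }
    }

If City never needs an implicit reference back to its Address, declaring it static is usually preferable; that distinction is exactly what separates the two instantiation forms in the first related question.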
<urn:uuid:c5832ec2-6709-423e-8f0c-7fd5ab07968d>
2.96875
422
Q&A Forum
Software Dev.
54.929015
95,543,495
by Sharam Hekmat Publisher: PragSoft Corporation 2005 Number of pages: 311 This book introduces C++ as an object-oriented programming language. No previous knowledge of C or any other programming language is assumed. Home page url Download or read it online for free here: by Richard L. Halterman - Southern Adventist University Table of Contents: The Context of Software Development; Writing a C++ Program; Values and Variables; Expressions and Arithmetic; Conditional Execution; Iteration; Other Conditional and Iterative Statements; Using Functions; Writing Functions; etc. by Bartosz Milewski - Addison Wesley The book teaches programming in C++ from the perspective of a professional programmer. It presents the development of a parser and a calculator from a simple command-line program to a GUI application. Learn how to use C++ like a real pro. by Juan Soulie - cplusplus.com These tutorials explain the C++ language from its basics up to the newest features of ANSI-C++, including basic concepts such as arrays or classes and advanced concepts such as polymorphism or templates. The tutorial is oriented in a practical way. by Herbert Schildt - McGraw-Hill Osborne Media Written by Herb Schildt, this step-by-step book is ideal for first-time programmers or those new to C++. The modular approach of this series, including sample projects and progress checks, makes it easy to learn to use C++ at your own pace.
<urn:uuid:8838fd97-d5b2-4307-aebf-341e16a9bbf4>
2.953125
317
Content Listing
Software Dev.
40.979344
95,543,535
Image: External view of the left valve of Pisidium pseudosphaerium
In some bivalve classification systems, the family Sphaeriidae is referred to as Pisidiidae, and occasionally Pisidium species are grouped in a subfamily known as Pisidiinae.
Pisidium and taphonomy
In large enough quantities, the minute shells of these bivalves can affect environmental conditions, and this change in conditions can positively affect the ability of organic remains in the immediate environment to fossilize (one aspect of taphonomy). For example, in the Dinosaur Park Formation, the fossil remains of hadrosaur eggshells are rare. This is because the breakdown of tannins from the local coniferous vegetation caused the ancient waters to be acidic, and therefore usually eggshell fragments dissolved in the water before they had a chance to be fossilized. Hadrosaur eggshell fragments are however present in two microfossil sites in the area. Both of these sites are dominated by preserved shells of invertebrate life, primarily shells of pisidiids. The slow dissolution of these minute bivalve shells released calcium carbonate into the water, raising the water's pH high enough that it prevented the hadrosaur eggshell fragments from dissolving before they could be fossilized.
Image: Drawing of the right valve external view of Pisidium moitessierianum
Extant subgenera and species
Extant subgenera and species within the genus Pisidium include:
Subgenus Euglesa Jenyns, 1832
Subgenus Pisidium Pfeiffer, 1821
Subgenus Cyclocalyx Dall, 1903
Subgenus Tropidocyclas Dall, 1903
- Pisidium henslowanum (Sheppard, 1823)
- Pisidium lilljeborgii Clessin, 1886
- Pisidium supinum A. Schmidt, 1851
- Pisidium waldeni Kuiper, 1975
Subgenus Hiberneuglesa Starobogatov, 1983
- Pisidium hibernicum Westerlund, 1894
Subgenus Cingulipisidium Pirogov & Starobogatov, 1974
- Pisidium crassum Stelfox, 1918
- Pisidium milium Held, 1836
- Pisidium nitidum Jenyns, 1832
- Pisidium pseudosphaerium Favre, 1927
Subgenus Pseudeupera Germain, 1909
Subgenus Neopisidium Odhner, 1921
- Pisidium conventus Clessin, 1867
Subgenus Odhneripisidium Kuiper, 1962
Subgenus Afropisidium Kuiper, 1962
- Pisidium giraudi Bourguignat, 1885
- Pisidium hodgkini (Suter, 1905)
- Pisidium pirothi Jickeli, 1881
Subgenus incertae sedis
- Pisidium annandalei Prashad, 1925
- Pisidium edlaueri Kuiper, 1960
- Pisidium javanum van Benthem Jutting, 1931
- Pisidium maasseni Kuiper, 1987
- Pisidium punctiferum (Guppy, 1867)
- Pisidium raddei Dybowski, 1902
- undescribed Pisidium species from Africa (Kuiper, in prep., awaiting additional records)
References
- Tanke and Brett-Surman (2001).
- "Abstract," Tanke and Brett-Surman (2001). Page 206.
- "Discussion," Tanke and Brett-Surman (2001). Page 212.
- "Eggshell," Tanke and Brett-Surman (2001). Page 209.
- Appleton C., Ghamizi M., Jørgensen A., Kristensen T. K., Lange C., Stensgaard A-S. & Van Damme D. (2009). Pisidium pirothi. In: IUCN 2010. IUCN Red List of Threatened Species. Version 2010.4. <www.iucnredlist.org>. Downloaded on 3 December 2010.
- Lange C. N. & Ngereza C. (2004). Pisidium artifex. 2006 IUCN Red List of Threatened Species. Downloaded on 7 August 2007.
- Bogan A. (2011). Pisidium stewarti. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.2. <www.iucnredlist.org>. Downloaded on 1 November 2012.
- Kuiper J.G.J. (2009). "Fossil records of Palaearctic Pisidium species in tropical Africa". Zoologische Mededelingen 83(10): 593-594.
- Tanke, D.H. and Brett-Surman, M.K. 2001. Evidence of Hatchling and Nestling-Size Hadrosaurs (Reptilia: Ornithischia) from Dinosaur Provincial Park (Dinosaur Park Formation: Campanian), Alberta, Canada. pp. 206–218. In: Mesozoic Vertebrate Life—New Research Inspired by the Paleontology of Philip J. Currie. Edited by D.H. Tanke and K. Carpenter. Indiana University Press: Bloomington. xviii + 577 pp.
External links
- Pisidium keyed, described and illustrated in Danmarks Fauna (Georg Mandahl-Barth)
- Pisidium images at Consortium for the Barcode of Life
<urn:uuid:5eb4afaf-52ba-47ad-9b81-8f97c12a9196>
3.25
1,203
Knowledge Article
Science & Tech.
40.624211
95,543,560
Horticulture professor Jiming Jiang studies centromeres, large regions of DNA that help match up and then separate pairs of chromosomes during cell division. Long ignored by most genome scientists, centromeres now appear to be key in creating artificial chromosomes—complete, self-replicating packages of genetic material that could revolutionize crop improvement in plants and gene therapy in humans.
What is a centromere?
Humans have about 30,000 genes carried by our 46 chromosomes. Each chromosome has one centromere, a stretch of DNA that ensures the accurate transmission of the chromosomes—our genetic material—into daughter cells during cell division. You can actually see the centromere under the microscope—it looks like a constriction on the chromosome. It’s an extremely complex structure. There’s a lot of protein involved, and the centromere’s DNA—how to describe it? It’s junk DNA, basically. It doesn’t have genes, just a lot of repetitive junk DNA.
When did scientists discover the centromere is full of junk DNA? When they sequenced the human genome?
Scientists say that the human genome has been sequenced, that the mouse genome has been sequenced, but people don’t realize that none of the centromeres have been sequenced. They just don’t count it. And most scientists don’t care because there are no genes [in those regions]. Plus, it’s almost impossible to sequence centromeres with current technology—they are too long and contain too much repetitive DNA. But rice is a different story. The centromere on rice chromosome 8 is not particularly repetitive, so my team was able to sequence it back in 2004. We were the first team to sequence a centromere from a multicellular species, and, surprisingly, we found genes in it!
How did this rice centromere end up with genes in it?
Let me try to explain what we think is going on in this strange case. In the scientific community, people are starting to believe that centromeres originate somewhere. They don’t just exist, right? And when a new centromere emerges—a neo-centromere—it may look like a regular piece of DNA, with genes in it. Over time, however, as it evolves, the centromere accumulates junk DNA for whatever reason. So, the rice centromere that we sequenced, we believe, is somewhere in the middle of this evolutionary process. It’s like a caveman. It is starting to accumulate some repetitive, junk DNA, but it still has some genes in it.
It’s interesting to consider that centromeres can evolve.
With funding from the NSF, we are now trying to understand the evolution of this rice centromere over the past 10 million years. To get at this question, we’re sequencing this centromere in five different species of wild rice, which diverged from cultivated rice between 1 million and 10 million years ago. We’ll be able to see what kinds of changes happened over that time—how the genes moved away, how the junk DNA accumulated. This work will help us figure out the minimum requirements needed to make a centromere. There are a lot of things we don’t know right now, but if we can figure out the answers, this work will ultimately help us design artificial chromosomes. That’s the long-term goal.
<urn:uuid:e4faa18a-a3df-42ee-93bc-532984f4b41f>
3.984375
733
Audio Transcript
Science & Tech.
46.82721
95,543,561
The meteor explosion over Russia that injured as many as 1,000 people and damaged hundreds of buildings was not caused by an asteroid zooming close by the Earth today (Feb. 15), a NASA scientist says. NASA asteroid expert Don Yeomans, head of the agency's Near-Earth Object Program Office, told SPACE.com that the object which exploded over a thinly inhabited stretch of eastern Europe today was most likely an exploding fireball known as a bolide. More than 500 people were injured, mostly by glass cuts when windows shattered during the blast, according to the Russian Emergency Ministry. "If the reports of ground damage can be verified, it might suggest an object whose original size was several meters in extent before entering the atmosphere, fragmenting and exploding due to the unequal pressure on the leading side vs the trailing side (it pancaked and exploded)," Yeomans told SPACE.com in an email. "It is far too early to provide estimates of the energy released or provide a reliable estimate of the original size." Yeomans stressed that the bolide event was likely not associated at all with the incoming asteroid 2012 DA14, which will fly within 17,200 miles (27,000 kilometers) of Earth when it passes safely by our planet today. "The asteroid will travel south to north," Yeomans said. "The bolide trail was not south to north and the separation in time between the fireball and 2012 DA14 close approach is significant." Asteroid 2012 DA14 is 150 feet (45 meters) wide — about half the size of a football field — and will make its closest approach to Earth at 2:24 p.m. EST (1924 GMT) when it passes over Indonesia. It will be about 5,000 miles (8,046 kilometers) closer to Earth than the communications satellites circling the planet in geosynchronous orbits. NASA scientists and professional and amateur astronomers around the world have been tracking asteroid 2012 DA14 since it was first discovered by a team of amateurs in February 2012. Not only does the asteroid pose no threat to Earth during today's flyby, but it will not hit Earth for the foreseeable future, NASA scientists have said. Visit SPACE.com today for complete coverage of asteroid 2012 DA14's flyby. Image courtesy of NASA. This article was originally published at Space.com.
<urn:uuid:c63549b1-6f59-4438-b448-e41434c3c30e>
3.234375
531
News Article
Science & Tech.
49.19
95,543,562
This page contains archived content and is no longer being updated. At the time of publication, it represented the best available science. However, more recent observations and studies may have rendered some content obsolete. Hurricane Katrina was sprawled across all or part of 16 states at 2:15 p.m. CDT on August 29, 2005, when the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite captured this image. After nearly eight hours over land, Katrina was still a Category 1 storm, with winds of 150 kilometers per hour (95 miles per hour) and stronger gusts. In this image, Katrina measures about 1,260 kilometers (780 miles) from east to west and about the same distance from north to south across its center. While most states under its clouds have only experienced rain so far, Louisiana, Mississippi, Alabama, and Florida have all been pummeled by furious winds, heavy rain, and a powerful storm surge. Katrina was a strong Category 3 storm when its eye moved ashore earlier in the day. The large image provided above has a resolution of 500 meters per pixel. The MODIS Rapid Response Team provides the image in additional resolutions, including MODIS’ maximum resolution of 250 meters per pixel. Hurricane Katrina exploded into a category 5 storm on August 28, 2005, as it moved north through the Gulf of Mexico towards the United States. It was one of the most powerful storms on record for the Atlantic Basin.
<urn:uuid:98dfb710-4854-4a8e-ba58-243a9998a2ad>
3.40625
324
Knowledge Article
Science & Tech.
43.940329
95,543,582
posted by Dean
Factor each number completely. I don't know how to do factor trees; they're tricky. :\ Please help me. :|
Answer: Any large number ending in zero can be factored by 10, which in turn can be factored into 2 and 5. The remaining 11 cannot be factored, but the 48 can. I'll let you do that. Also see: http://mathforum.org/library/drmath/view/58557.html I hope this helps. Thanks for asking.
Follow-up: what is the answer of this problem (p+3)5
wat is a consenrt
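The thread never carries the factoring through, so here is a minimal sketch of the repeated-division idea that a factor tree captures graphically. The example numbers 110 and 480 are only assumptions for illustration, since the original problem's numbers are not quoted in the post.

    # Complete factorization by repeated division, the same idea as a factor tree.
    def prime_factors(n):
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:   # divide out each prime as many times as it fits
                factors.append(d)
                n //= d
            d += 1
        if n > 1:               # whatever remains is itself prime
            factors.append(n)
        return factors

    # Example numbers (assumed, not from the original post):
    print(prime_factors(110))   # [2, 5, 11]           -> 110 = 2 x 5 x 11
    print(prime_factors(480))   # [2, 2, 2, 2, 2, 3, 5] -> 480 = 2^5 x 3 x 5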
<urn:uuid:1d4b0d2d-a4cf-4957-902b-94828511af03>
2.53125
134
Q&A Forum
Science & Tech.
89.646774
95,543,587
It's not too often a math problem goes viral, especially when it doesn't involve a complaint about Common Core. Over the weekend, Singapore TV personality Kenneth Kong posted a logic problem on his Facebook page that was given to high school kids competing in a math olympiad. The problem reads: Albert and Bernard just became friends with Cheryl, and they want to know when her birthday is. Cheryl gives them a list of 10 possible dates. May 15 May 16 May 19 June 17 June 18 July 14 July 16 August 14 August 15 August 17 Cheryl then tells Albert and Bernard separately the month and the day of her birthday respectively. Albert: I don’t know when Cheryl’s birthday is, but I know that Bernard does not know too. Bernard: At first I don’t know when Cheryl’s birthday is, but I know now. Albert: Then I also know when Cheryl’s birthday is. So when is Cheryl’s birthday? Kong wrote that the problem had caused a debate with his wife, and they're hardly alone. People argued about the answer on Facebook and Twitter with a passion you don't usually see over math. "The worldwide excitement and curiosity about this problem is very encouraging," Steven Strogatz, professor of applied mathematics at Cornell University said via email. "It shows that people love to think logically (or at least, to try to think logically), just for the pleasure of it. Even though the problem isn't good for anything, it's still fun to think about." While the question burned up bandwidth and got people excited, a logic problem may not be the best way to increase the understanding of mathematics. “It’s a fun problem, though in the end just a puzzle," Jordan Ellenberg, math professor at the University of Wisconsin and author of "How Not to Be Wrong," told The Huffington Post via email. "A really deep math question would be to ask students to *construct* a puzzle like this.” So what's the answer?Click here to find out. For a highly detailed look at the answer, check out this explanation from The Guardian. If you were stumped by the question, don't feel too bad. It was meant to be hard. “Being Question 24 out of 25 questions, this is a difficult question meant to sift out the better students,” Singapore and Asian School Math Olympiads wrote on Facebook. “SASMO contests target the top 40% of the student population and the standards of most questions are just high enough to stretch the students.” And, it seems, plenty of adults. The Huffington Post's David Freeman contributed to this report.
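For readers who want to check the reasoning themselves rather than follow the links above, here is a small brute-force sketch (my own illustration, not part of the original article) that encodes each statement as a filter over the ten candidate dates.

    # Brute-force check of the Cheryl's-birthday puzzle.
    dates = [("May", 15), ("May", 16), ("May", 19),
             ("June", 17), ("June", 18),
             ("July", 14), ("July", 16),
             ("August", 14), ("August", 15), ("August", 17)]

    def months_with(day, pool):
        return [m for (m, d) in pool if d == day]

    def days_in(month, pool):
        return [d for (m, d) in pool if m == month]

    # Albert (who knows the month) is sure Bernard (who knows the day) cannot
    # know the date, so Albert's month contains no day that is unique overall.
    s1 = [(m, d) for (m, d) in dates
          if all(len(months_with(d2, dates)) > 1 for d2 in days_in(m, dates))]

    # Bernard now knows, so his day appears only once among what is left.
    s2 = [(m, d) for (m, d) in s1 if len(months_with(d, s1)) == 1]

    # Albert now knows too, so his month appears only once among what is left.
    s3 = [(m, d) for (m, d) in s2 if len(days_in(m, s2)) == 1]

    print(s3)   # [('July', 16)]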
<urn:uuid:723379c2-9309-402f-a365-27d8bacb5884>
2.578125
573
News Article
Science & Tech.
66.83457
95,543,600
Despite the onslaught of politicians attempting to project an air of question around man-made climate change, studies continue to emerge proving the connection between human actions and our changing environment. The most recent study, published in Nature Climate Change, finds an "anthropogenic fingerprint" (human influence) on our warming oceans. The study, "Human-Induced Global Ocean Warming On Multidecadal Timescales," was conducted by researchers in the U.S., Australia, Japan and India. Based on observations of rising upper-ocean temperatures, the researchers used improved estimates of ocean temperatures to examine the causes of our warming ocean. According to a Lawrence Livermore National Laboratory press release, the study shows that over the past 50 years, observed ocean warming is explained only when greenhouse gas increases are included in the models. Lead author and LLNL climate scientist Peter Gleckler said in the press release, "The bottom line is that this study substantially strengthens the conclusion that most of the observed global ocean warming over the past 50 years is attributable to human activities." Gleckler added, "Although we performed a series of tests to account for the impact of various uncertainties, we found no evidence that simultaneous warming of the upper layers of all seven seas can be explained by natural climate variability alone. Humans have played a dominant role." Report co-author Dr. John Church explained to Australia's ABC News AM that "Natural variability could only explain 10 percent, or thereabouts, of the observed change." Oceanography expert Nathan Bindoff told the news organization, "This paper's important because, for the first time, we can actually say that we're virtually certain that the oceans have warmed, and that warming is caused not by natural processes, but by rising greenhouse gases primarily." He added, "We did it. No matter how you look at it, we did it. That's it." The recent ocean warming study has been released on the heels of other disturbing climate change reports. Arctic monitoring stations are now measuring over 400 parts per million of carbon dioxide in the atmosphere, a disturbing milestone that far surpasses the 350 ppm mark that many scientists consider the threshold separating safe from dangerous. Researchers recently warned in Nature that the world is heading toward a tipping point of disastrous consequences driven by human-led increases in atmospheric carbon dioxide and rising global temperatures: "The plausibility of a planetary-scale ‘tipping point’ highlights the need to improve biological forecasting by detecting early warning signs of critical transitions on global as well as local scales, and by detecting feedbacks that promote such transitions. It is also necessary to address root causes of how humans are forcing biological changes." Despite the ominous findings, some politicians are still attempting to project an element of doubt on issues surrounding human-induced climate change. A Virginia lawmaker recently fought to omit mentions of "climate change" and "sea level rise" from a coastal flooding study, telling the BBC, "The jury's still out" on whether humans contribute to global warming. Despite his claim, studies such as the recent ocean warming one are turning in a pretty clear verdict. BEFORE YOU GO 10 Countries With The Most CO2 Emissions:
<urn:uuid:6297c0ad-dbbf-47f8-97f5-af6a95059684>
3.171875
649
News Article
Science & Tech.
28.611907
95,543,628
Moth displays the ultimate deterrent after evolving camouflage on its wings that looks like a SPIDER
- Has pattern that resembles eight legs of a spider
- Unusual-looking creature discovered in Thailand in 2005
- Can frighten or distract potential predators
It has wings lighter than a feather and is one of the most delicate creatures on earth. But this tiny moth can frighten off predators far bigger than itself - with its scary spider-like markings. The Lygodium Spider Moth knows how to stand up for itself by using intricate patterns that mimic the shape of a spider - deterring potential predators from attacking it.
Imitation is the sincerest form of flattery: Researchers think that the creepy spider markings help protect it from predators
This spider moth species is a dramatic example of how one species can reap benefits from mimicking or looking like another species
The wings have patterns that make it resemble the eight legs of a spider. The fascinating bug was discovered in Thailand in 2005, and is described in the journal Annals Of The Entomological Society of America. The moth feeds on ferns, and the researchers think that the creepy spider markings help protect it from predators, Business Insider said. Researchers have documented previous incidences of moth species mimicking the behaviour of spiders as a way to defend against their predation.
The fascinating bug was discovered in Thailand in 2005 - and can deter predators with its markings
But this spider moth species is a dramatic example of how one species can reap benefits from mimicking or looking like another species. The moth has other unique features to help it in its battle for survival. Its caterpillar-like form makes it resemble beetle larvae. When the moth reaches adult state it also has armored segments on its rear similar to those on beetles but unlike anything seen before in a moth, the Featured Creature blog wrote. Previous research on insects suggests that when prey - like the Lygodium Spider Moth - sense an approaching spider, they stretch out their wings and the predator thinks it has met one of its own species. It will often flee to avoid an aggressive confrontation, the Why Evolution Is True blog said. So a moth with this pattern might escape being eaten because it either frightens off oncoming predators or flies away while its predator is startled.
<urn:uuid:fd19d833-2c6d-41fd-9802-956a3256db99>
3.4375
640
Truncated
Science & Tech.
29.314363
95,543,634
The Definability of Cardinal Numbers
One says that the sets x and y are equinumerous (in symbols, x ≈ y) if there is a 1 – 1 function mapping x on y. The notion of the cardinal |x| of x is obtained from equinumerosity by abstraction. The use of |x| does not usually require any special apparatus. E.g., when we say |x| = ℵ0 we mean to say that there is a 1 – 1 function mapping x on the set of all natural numbers; when we say |x| < |y| we mean that there are 1 – 1 functions mapping x into y, but none of them is onto y; etc. As is common in mathematics, one tends to pass from the abstract notion of cardinal numbers to real cardinal numbers, i.e., one wants to regard the cardinal numbers as objects of the mathematical system. This is where one encounters the problem of how to define the cardinal |x| of x as an object of set theory.
Keywords: Free Variable, Cardinal Number, Axiom Schema, Operation Symbol, Membership Relation
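As a reading aid, the conventions used above can be written out in one standard way; this rendering is added for clarity and is not text from the chapter itself (here ω denotes the set of all natural numbers):

    x \approx y \;\iff\; \exists f \,(f : x \to y \text{ is one-to-one and onto})
    |x| = \aleph_0 \;\iff\; x \approx \omega
    |x| \le |y| \;\iff\; \exists f \,(f : x \to y \text{ is one-to-one})
    |x| < |y| \;\iff\; |x| \le |y| \ \text{and not}\ x \approx y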
<urn:uuid:b823c66b-052b-4cd5-a984-a22f42cc4289>
3.1875
404
Truncated
Science & Tech.
71.403542
95,543,639
Organisms living in seasonal environments are often limited by the time available to complete their development. Especially individuals in northern populations may face severe time constraints in their need of completing development before the end of the growth season. Larval amphibians have been widely used in studies of phenotypic plasticity. However, their responses to changes in photoperiod, the main seasonal cue in many organisms, are unknown. In a laboratory experiment, we studied whether common frog (Rana temporaria) tadpoles originating from two populations (separated latitudinally by 1600 km) adjust their growth and development according to the progress of the season by using photoperiodic cues, and whether these responses are temperature dependent. We hypothesised that if frogs use photoperiod as a cue, they should increase growth and development rates as a response to photoperiodic treatments mimicking progressing season. Although our predictions were not verified in either of the populations, photoperiod manipulations had effects on larval life history in both populations. When exposed to progressing season treatments and high temperature, tadpoles from the southern population ceased feeding which led to delayed metamorphosis and increased mortality. In the northern population, age at metamorphosis was unaffected by the photoperiod treatments, but growth rate until metamorphosis and metamorphic size were slightly larger in the treatments with shorter (increasing or decreasing) day length. Irrespective of photoperiod treatments, growth and development rates, size at metamorphosis and food consumption were higher in the northern as compared to the southern population. These results indicate that in contrast to several insect species, the critical life history decisions in amphibian larvae may not be strongly influenced by photoperiodic cues, but different populations seem to differ in this respect. However, the strong temperature x photoperiod interactions in several traits in the southern population suggest that the role of photoperiodic cues may be affected by other environmental factors, although the ecological significance of these differences remains unclear.
<urn:uuid:f83becdb-b743-4dbf-a072-84f19297d0ce>
3.15625
422
Academic Writing
Science & Tech.
1.321369
95,543,643
January 10 2017 Astronomy Newsletter Here's the latest article from the Astronomy site at BellaOnline.com. Volans Flies the Southern Skies Volans (the Flying Fish) flees from its predator Dorado (the Mahi Mahi) across the southern sky. They're two of the southern hemisphere constellations that Flemish astronomer Petrus Plancius (1552-1622) created to fill in parts of the sky not visible to northern astronomers. (1) January 8, 1587: Johannes Fabricius born. He was a Frisian/German astronomer, eldest son of David Fabricius who was also an astronomer. In their solar observing they discovered sunspots about the same time as - but independently of - Galileo. (2) January 9, 1839: Scottish astronomer Thomas Henderson published his determination of the distance to Alpha Centauri. He had been the first to succeed in using stellar parallax to calculate the distance to a fixed star, but had delayed publishing his results and lost out on the credit for being first. (3) January 10, 1946: The US Army Signal Corps had the first successful echo detection of a radar signal bounced off the Moon. It was part of an experiment in radar astronomy, a technique later used to map Venus. (4) January 11, 1787: William Herschel discovered the moons of Uranus, Oberon and Titania. He didn't name the moons. Here's the story on that: http://www.bellaonline.com/articles/art181335.asp (5) January 12, 1907: Sergei Korolev was born. He was the mastermind behind the development of the Soviet space program. Even his name was a state secret and he was only referred to as the Chief Designer. More about the Soviet space program here: http://www.bellaonline.com/articles/art302542.asp (6) January 13, 1610: Galileo discovered Jupiter's moon Ganymede. (7) January 13, 1978: NASA selected the first women astronauts. (At last.) (8) January 14, 2005: ESA's Huygens probe landed on Saturn's moon Titan. It made a big contribution to what we know about Titan. More about the planet-sized moon: http://www.bellaonline.com/articles/art182860.asp (9) January 15, 2006: NASA's Stardust returned with samples of comet dust. Please visit http://astronomy.bellaonline.com/Site.asp for even more great content about Astronomy. I hope to hear from you sometime soon, either in the forum http://forums.bellaonline.com/ubbthreads.php/forums/323/1/Astronomy or in response to this email message. I welcome your feedback! Do pass this message along to family and friends who might also be interested. Remember it's free and without obligation. I wish you clear skies. Mona Evans, Astronomy Editor Unsubscribe from the Astronomy Newsletter Online Newsletter Archive for Astronomy Site Master List of BellaOnline Newsletters
<urn:uuid:cca05db1-f235-4afd-9fd2-fd242131435e>
3.796875
660
News (Org.)
Science & Tech.
57.646787
95,543,655
Minuscule biomolecular computers made of DNA are as uncommon today as laptops were 15 years ago. They were invented just eight years ago, when Prof. Ehud Shapiro and his team at the Weizmann Institute of Science's biological chemistry department introduced the first autonomous programmable DNA computing device. So small that a trillion can fit in a drop of water, the device was able to perform such simple calculations as checking a list of 0s and 1s to determine if there was an even number of 1s. A newer version of the device, created in 2004, detected cancer in a test tube and released a molecule to destroy it. Besides the tantalizing possibility that such biology-based devices could one day be injected into the body - a sort of "doctor in a cell" - biomolecular computers could conceivably perform millions of calculations in parallel. The computers exist only in a few specialized labs, but Shapiro's research students Tom Ran and Shai Kaplan at the biological chemistry and computer science and applied mathematics departments have found a way to make these microscopic devices "user friendly," even while performing complex computations. Shapiro and his team have just published in the online edition of Nature Nanotechnology about their advanced program for biomolecular computers that enables them to "think." The train of deduction used by this futuristic device is remarkably familiar. It was first proposed by Aristotle over 2000 years ago as a simple if… then proposition: "All men are mortal. Socrates is a man. Therefore, Socrates is mortal." When fed a rule ("All men are mortal") and a fact ("Socrates is a man"), the computer answered the question "Is Socrates mortal?" correctly. The team went on to set up more complicated queries involving multiple rules and facts, and the DNA computing devices were able to deduce the correct answers every time. Simultaneously, the team created a compiler - a program for bridging between a high-level computer language and DNA computing code. Upon compiling, the query could be typed in something like this: Mortal(Socrates)? To compute the answer, various strands of DNA representing the rules, facts and queries were assembled by a robotic system and searched for a fit in a hierarchical process. The answer was encoded in a flash of green light: Some of the strands were equipped with a naturally glowing fluorescent molecule bound to a second protein which keeps the light covered. A specialized enzyme, attracted to the site of the correct answer, removed the "cover" and let the light shine. The tiny water drops containing the biomolecular databases were able to answer intricate queries, and they lit up in a combination of colors representing complex answers. If hedonism is taking self enjoyment to the extreme, a "hedonimeter" measures the extent of this enjoyment. In 1881, the optimistic Irish economist Francis Edgeworth imagined a strange device called a hedonimeter that would be capable of "continually registering the height of pleasure experienced by an individual." It was just a dream, but for many decades, social scientists have tried to measure happiness. Surveys have revealed some useful information, but these are plagued by the unpleasant fact that people misreport and misremember their feelings when confronted by a questioner with a clipboard or when volunteers report their feelings via cell phone.
But what if you had a remote-sensing mechanism that could record how millions of people around the world were feeling on any particular day - without their knowing? That's exactly what Peter Dodds and Chris Danforth, a mathematician and computer scientist working at the University of Vermont's advanced computing center, have created. Their methods show that Election Day, November 4, 2008, was the happiest day in four years. The day of Michael Jackson's death, one of the unhappiest. Their results were recently reported in the Journal of Happiness Studies (yes, there is such a thing). "The proliferation of personal online writing such as blogs gives us the opportunity to measure emotional levels in real time," they wrote in their study, "Measuring the Happiness of Large-Scale Written Expression: Songs, Blogs and Presidents." Their answer to Edgeworth's daydream begins with a Web site, www.wefeelfine.org, that mines some 2.3 million blogs, looking for sentences beginning with "I feel" or "I am feeling." They gathered nearly 10 million sentences. Then, drawing on a standardized "psychological valence" of words established by the Affective Norms for English Words (ANEW) study, each sentence receives a happiness score. In the ANEW study, a large pool of participants graded their reaction to 1,034 words, forming a kind of "happy-unhappy" scale from 1 to 9. For example, "triumphant" averaged 8.87, "paradise" 8.72, "pancakes" 6.08, "vanity" 4.30, "hostage" 2.20, and "suicide" 1.25. The sentence "I feel lazy" would receive a score of 4.38. "We were able to make observations of people in a fairly natural environment at several orders of magnitude higher than previous happiness studies," Danforth says. "They think they are communicating with friends," but since blogs are public, he says, "we're just looking over their shoulders."
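The scoring step described above is easy to make concrete. The sketch below is an illustration only: the tiny word list reuses the ANEW-style valence values quoted in the article, the sentences are invented, and the real study applied the full ANEW lexicon to millions of blog sentences.

    # Toy version of the "I feel ..." happiness scoring described above.
    # Valence values (1 = unhappy, 9 = happy) are the examples quoted in the article.
    valence = {"triumphant": 8.87, "paradise": 8.72, "pancakes": 6.08,
               "vanity": 4.30, "hostage": 2.20, "suicide": 1.25, "lazy": 4.38}

    def sentence_happiness(sentence):
        words = [w.strip(".,!?").lower() for w in sentence.split()]
        scored = [valence[w] for w in words if w in valence]
        return sum(scored) / len(scored) if scored else None   # average valence

    # Invented example sentences beginning with "I feel":
    print(sentence_happiness("I feel lazy"))                       # 4.38, as in the article
    print(sentence_happiness("I feel triumphant, like paradise"))  # (8.87 + 8.72) / 2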
<urn:uuid:eec80217-aca7-4d1c-9ccc-8fabc784b606>
3.734375
1,113
News Article
Science & Tech.
41.187862
95,543,661
Molecular 'Bricklayers' Build New Cell Walls
News Feb 22, 2017 | Original story from Delft University of Technology
It is the most crucial mechanism in life - the division of cells. For 25 years, it has been known that bacteria split into two by forming a Z ring at their centre. They use this to cut themselves into two daughter cells. Using advanced microscopes, researchers from the universities of Harvard, Indiana, Newcastle, and Delft have succeeded in finding out how bacteria do this. The bacteria appear to build a new cell wall working from the outside in, with the help of multiple molecular ‘bricklayers’, in about a quarter of an hour. What was completely unexpected was that the ‘bricklayers’ move along the inside of the wall under construction by ‘treadmilling’; the building of the cell wall is performed from scaffolding that is continuously being moved at the front, while at the rear it is continuously being dismantled.
They investigated the process by viewing individual bacteria through advanced microscopes. This involved putting coloured labels on the cell wall material. By changing the colours every time, they were able to see that the bacteria were building the cell walls from the outside in. And by changing the colours of the building material with breaks of just a few seconds, they were also able to see that this is not a gradual process, but one that takes place in a different location each time. The engine that drives all of this is FtsZ, a protein that makes an arch-shaped piece of polymer, and which appears to move via a phenomenon known as ‘treadmilling’, named after the old treadmills from the Middle Ages.
Protein as scaffolding
“With treadmilling, you create movement by adding something on the front, while removing something from the rear,” explains Professor Cees Dekker, of the Kavli Institute of Nanoscience, TU Delft, a co-author on the article. “Our research shows that a cell also uses this phenomenon for building a cell wall.” Cell walls are built with the help of a number of collaborating proteins, with FtsZ playing the most important part. “Our new discovery has solved the 25-year-old puzzle of how FtsZ coordinates cell division. The protein appears to work like a kind of scaffolding, on which the building work takes place. However, it is not rolling scaffolding, but fixed scaffolding that is continuously renovating itself: all the time, the cell is building new scaffolding boards for the work on the cell wall on, let’s say, the right-hand side of the FtsZ scaffolding, while breaking up the now-superfluous scaffolding on the left-hand side, at the rear end of the work. This way, the scaffolding shifts along the cell wall. The building machine that produces the cell wall is controlled from the scaffolding, therefore moving neatly in tandem with the slowly moving scaffolding. The cell does this with different sets of scaffolding along the cell wall simultaneously, resulting in the construction of a partition wall in ten or fifteen minutes. Meanwhile, other proteins make sure that the DNA is divided properly between the two halves, for example, or that the membrane is properly closed off, and so on. The division of cells is a complex and fascinating process.”
The study was a collaborative project involving researchers from four scientific groups, in the US, the UK, and Delft. The most significant contribution from Delft consisted of the production of nanostructures in which exactly one bacterium fits, lengthwise.
“By placing the nanoboxes upright on the microscope, we were able to see in very sharp focus a cross-section of the cell. This gave us an excellent view of the dynamics of the FtsZ molecules. An important technical contribution.” explains Dekker. Although the study is fundamental in nature, Dekker believes that this type of research may be of practical benefit in the future. “Once we have a thorough understanding of how bacterial cells divide, it could pave the way towards alternative antibiotics. That is still some way off, but if we are able to disrupt bacterial cell division in a targeted manner, we may have new weapons in the future that we can use to fight bacteria that cause disease.” Filho, A. W., Hsu, Y., Squyres, G., Kuru, E., Wu, F., Jukes, C., . . . Garner, E. (2016). Treadmilling by FtsZ filaments drives peptidoglycan synthesis and bacterial cell division. doi:10.1101/077560 This article has been republished from materials provided by Delft University of Technology. Note: material may have been edited for length and content. For further information, please contact the cited source.
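To make the treadmilling picture above concrete, here is a toy sketch (my own illustration, not from the study): the filament is reduced to a front and a rear position, and each step adds a subunit at the front while removing one at the rear, so the filament's length stays constant while its position drifts forward.

    # Toy model of treadmilling: growth at the front, shrinkage at the rear.
    # Units and rates are arbitrary illustration values, not measurements.
    def treadmill(front=10, rear=0, steps=5, add_rate=1, remove_rate=1):
        for t in range(steps):
            front += add_rate      # new subunits polymerize at the leading end
            rear += remove_rate    # old subunits fall off the trailing end
            print(f"t={t}: filament spans [{rear}, {front}], length={front - rear}")

    treadmill()
    # The span shifts to the right each step while its length stays constant,
    # which is how the FtsZ "scaffolding" moves along the division site.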
<urn:uuid:76823615-c3ef-461d-a62c-3ddbf7bf7c8b>
3.640625
1,146
Truncated
Science & Tech.
47.824213
95,543,664
Successful demonstration of signal transmission proved that the elements of such systems can be reduced to microscopic size and still remain functional. As pointed out by the portal Ars Technica, the merit of IBM's work lies in the fact that its artificial neurons are built from well-known materials. These materials can be reduced to the nanometre level without losing their properties and functionality. Organic neurons are enclosed in membranes that act as a signal gate and take a certain amount of energy to operate. In IBM's version, this role is played by cells of germanium-antimony-tellurium (GST), a material commonly found in optical discs. When sufficiently heated, the GST changes physical phase: from an amorphous insulator it turns into a crystalline conductor. In other words, the signal passes through the membrane only when it is fed enough electricity to move the GST to the crystalline phase, after which it returns to the amorphous state. However, to rank as full artificial neurons, the cells must possess another characteristic inherent in their organic analogue: stochasticity, or randomness of behaviour when receiving a signal. IBM claims that its neurons do this, because the GST membrane never returns to exactly the same structural configuration. This feature makes it possible to perform tasks that would not be possible if the results were perfectly predictable. As suggested by Ars Technica, in the future scientists could use such artificial neurons to build computers that effectively simulate the parallel processing of information (as our brain does) and apply this principle to processing sensory information. However, as noted, building such machines will be a much simpler task than writing the corresponding software for them.
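The firing behaviour described above can be sketched as a simple integrate-and-fire loop. This is only an illustrative model with made-up numbers, not IBM's device physics: incoming pulses heat the cell toward a crystallization threshold, the neuron "fires" when the threshold is crossed, and a small random shift in the threshold after each reset stands in for the fact that the GST never returns to exactly the same configuration.

    import random

    # Toy stochastic phase-change neuron (illustrative values only).
    threshold = 1.0      # "crystallization" level at which the neuron fires
    state = 0.0          # accumulated phase change from incoming pulses

    for step in range(20):
        state += 0.12                      # each input pulse heats the cell a little
        if state >= threshold:
            print(f"step {step}: fire")    # crystalline phase reached -> spike
            state = 0.0                    # re-amorphize (reset)
            # the membrane never returns to exactly the same configuration,
            # so the next firing point shifts slightly and at random
            threshold = 1.0 + random.uniform(-0.1, 0.1)

    # Over many cycles the spike timing jitters, mimicking the stochastic
    # firing that the article attributes to the GST cells.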
<urn:uuid:83726515-e05f-4948-9ed4-d6085e674ac1>
3.234375
473
Content Listing
Science & Tech.
39.405882
95,543,666
Experts are warning that the Great Barrier Reef can no longer be saved in its present form, due in major part to human influence on climate, the Independent reported. Moving forward, an expert panel appointed by the Australian government recommends that the reef’s “ecological function” be maintained. “There is great concern about the future of the Reef, and the communities and businesses that depend on it, but hope still remains for maintaining ecological function over the coming decades,” the Panel recently said. “The Panel considers that action to reduce emissions of greenhouse gases must be central to the response. This needs to be coupled with increased efforts to improve the resilience of the coral and other ecosystems that form the Great Barrier Reef.” These recommendations come after a 2016 study found the “largest die-off of corals ever recorded” with approximately 67 percent of shallow water coral found dead across a 435-mile stretch. As for what “maintaining ecological function” specifically means, the Great Barrier Reef Marine Park Authority reportedly stated: “The concept of ‘maintaining ecological function’ refers to the balance of ecological processes necessary for the reef ecosystem as a whole to persist, but perhaps in a different form, noting the composition and structure may differ from what is currently seen today.”
<urn:uuid:3f020962-e513-4b80-a267-eed76a4ab789>
3.421875
278
News Article
Science & Tech.
23.323917
95,543,670
posted by Jay
If a rocket is launched straight up into the air with an initial velocity of 112 feet per second, its height after t seconds is given by the formula h = 112t - 16t, where h represents the height of the rocket in feet:
A. When will the rocket reach its maximum height?
B. What is the maximum height?
(Please explain how to do the problem, but if you can't that's OK. I can work with a link that will show me how to do it. Thank you for answering.)
NOT AN ACTUAL ANSWER. THIS IS BY THE SAME PERSON. IMPORTANT SIDE NOTE: the -16t is squared.
Reply: Your equation is not correct; it should be h = 112t - 16t^2. This is a parabola opening downwards. You have to find the vertex, and A and B can be answered at that point. Hint: the t of the vertex is -b/(2a). Plug that back into the equation to get h, the maximum. Let me know what you get.
When the rocket will reach max height: Maximum height: 7.918... feet (BY SAME PERSON WHO ASKED QUESTION)
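Since the thread never carries the vertex calculation through, here is a short check (my own working, not part of the thread) using the corrected formula h = 112t - 16t^2 from the reply; it suggests the 7.918-foot figure posted above does not follow from this formula.

    # Vertex of h(t) = 112t - 16t^2 (a = -16, b = 112), i.e. the maximum.
    a, b = -16.0, 112.0
    t_max = -b / (2 * a)               # time of maximum height
    h_max = b * t_max + a * t_max**2   # height at that time
    print(t_max, h_max)                # 3.5 seconds, 196.0 feet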
<urn:uuid:1ed2fafd-f177-48c0-b083-589518fa0e8b>
3.640625
257
Q&A Forum
Science & Tech.
89.7325
95,543,679
Edited By: Jacco C Kromkamp, Jody FC de Brouwer and Gerard F Blanchard
260 pages, diagrams
Intertidal mudflats are an important component of estuaries and they fringe large areas of the European coastlines. They form natural barriers protecting coastal areas from the sea and are biologically highly productive areas representing important nursery and feeding grounds for higher organisms including birds, fish and shellfish. Microphytobenthos assemblages are an important biological component of intertidal mudflats. These unicellular algae are active in a thin surface layer of the sediment that is subject to rapid fluctuations in light, temperature and salinity both over time and space. Microphytobenthos have a large impact on whole estuary functioning, for example, through their influence on the morphodynamics of coastal zones and by affecting sediment water nutrient exchange. Although basic knowledge exists about the physiology and ecology of these organisms and their roles in ecosystem functioning, a more thorough understanding of these processes necessitates an interdisciplinary approach. This volume provides a valuable and up to date contribution to our current understanding of the microbial ecology in estuarine intertidal areas.
Proceedings of the Colloquium, Amsterdam, 21-23 August, 2003
This book is an excellent and accessible synthesis of current research and will be invaluable to advanced students and researchers alike. It serves as a concise orientation in what has been discovered, and shows the reader the tantalizing expanses that are as yet terra incognita. --Hugh L. MacIntyre, "Ecology"
<urn:uuid:ed0a689e-816f-41b7-bc2d-dc23734be18d>
3.046875
403
Product Page
Science & Tech.
19.688856
95,543,681
Usage examples for "aposematism":
- In this order, a large number of species use acoustic communications for aposematism.
- A role of phenotypic plasticity in the evolution of aposematism and bioluminescence: Experimental evidence from glow-worm larvae (Coleoptera: Lampyridae).
- Experimental evidence for aposematism in the dendrobatid poison frog Oophaga pumilio.
- Since then numerous adaptive functions for animal colors have been described including camouflage, sexual signalling, thermoregulation and aposematism (Poulton 1890; Cott 1940).
- These studies included investigations of the importance of prey size, habitat, distance from predators, mimicry, and aposematism (warning coloration) on the rate of predation on the model organisms.
- From an ecological point of view, pigmentation has been proposed to play an important role in thermoregulation, resistance to desiccation, ultraviolet radiation and parasitism, crypsis, aposematism, mate choice and courtship behavior (Wittkopp et al.
- Multiple, recurring origins of aposematism and diet specialization in poison frogs.
- evolves by individual selection: evidence from marine gastropods with pelagic larvae.
- For example, among infrahumans, peak shift has been invoked as a mechanism underlying the evolution of aposematism (warning coloration) among prey (Gamberale & Tullberg, 1996; Gamberale-Stille & Tullberg, 1999; Leimar, Enquist, & Sillen-Tullberg, 1986), the sexual selection among birds for elaborately plumaged males (Weary, Guilford, & Weisman, 1993), and the preference for supernormal stimuli displayed by many species (Ghirlanda & Enquist, 1998, 1999, 2003; Staddon, 1975).
- and Brown (2001) as a case of aposematism, notwithstanding the different
- The paper, 'Do aposematism and Batesian mimicry require bright colours?
<urn:uuid:98f1dc62-e3ed-436b-b827-0074302a10d0>
3.03125
432
Knowledge Article
Science & Tech.
-6.831923
95,543,715
Amoeba eat bacteria and other human pathogens, engulfing and destroying them – or being destroyed by them, but how these single-cell organisms distinguish and respond successfully to different bacterial classes has been largely unexplained. In a report in the journal Current Biology, researchers from Baylor College of Medicine use the model of the social amoeba – Dictyostelium discoideum – to identify the genetic controls on how the amoeba differentiates between the different bacteria and responds to achieve its goal of destruction. “No one has looked at the basic question of what happens when you put the two classes of species together,” said Dr. Adam Kuspa, professor in the department of biochemistry & molecular biology and senior vice president for research at BCM. “What does the amoeba do?” The scientists, who included first author graduate student Waleed Nasser, performed a genetic screen, called a transcriptional profile, that identified which sets of genes are active, or expressed, when the amoeba interacts with two major classes of bacteria – gram negative and gram positive. “The two kinds of bacteria are different in structure and biochemistry,” said Kuspa, who is the corresponding author of the report. “We found that the Dictyostelium did differentiate between the different bacteria. In fact, it was shocking that nearly 800 different genes were activated when exposed to a kind of gram negative bacteria known as enterobacteria (Klebsiella).” The researchers found 50 amoebal genes that were activated during growth on gram negative species of bacteria and 68 that were activated on gram positive species. The genes identified as active on gram positive bacteria were those most commonly defined as involved in metabolism. Those active on gram negative bacteria were most likely involved in degrading the cell wall, in particular one gene called alyL, which encodes an amoeba protein which likely acts as a lysozyme, an enzyme that breaks down bacterial cell walls. They also identified glucose-6-phosphate or a metabolite of it as signaling the presence of gram positive bacteria. From that, said Kuspa, the question arises of whether this “barometer” of the presence of gram positive bacteria in the social amoeba might be conserved across evolution in humans. “Might it be conserved in us?” he asked. When the genome of the social amoeba was sequenced, Kuspa, colleague Dr. Gad Shaulsky, professor of molecular and human genetics at BCM, and others found that all amoeba are related. That means that what affects one kind of amoeba probably affects another. “The second thing was that we found amoebae are more closely related to us than we thought,” said Kuspa. Many of the proteins found in amoeba are conserved in mammals. “We hope that what we learn from amoebae might be relevant to human immune systems,” he said. Others who took part in this work include Shaulsky, Balaji Santhanam, Edward Roshan Miranda, Anup Parikh, Chris Dinh, Rui Chen and Blaz Zupan, all of BCM; and Kavina Juneja of Rice University and Gregor Rot of the University of Ljubljana in Slovenia. Zupan is also of the University of Ljubljana. Funding for this work came from the Dictyostelium Functional Genomics Program Project Grant from the National Institutes of Health (PO1 HD39691). Baylor College of Medicine
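The screen described above comes down to comparing which genes switch on in each condition. The sketch below is only a schematic of that comparison: the expression values and the "geneA"/"geneB" names are invented, and this is not the authors' analysis pipeline; only alyL is a gene actually named in the article.

    # Schematic of picking out genes activated on one class of bacteria but
    # not the other. Expression values are invented for illustration.
    expression = {
        # gene: (level on gram-negative, level on gram-positive)
        "alyL":  (120.0, 10.0),
        "geneA": (8.0, 95.0),
        "geneB": (50.0, 48.0),
    }

    def activated(pairs, which, fold=4.0):
        """Genes whose expression on one bacterial class is at least `fold`
        times their expression on the other class."""
        out = []
        for gene, (neg, pos) in pairs.items():
            a, b = (neg, pos) if which == "gram_negative" else (pos, neg)
            if a >= fold * b:
                out.append(gene)
        return out

    print(activated(expression, "gram_negative"))  # ['alyL']
    print(activated(expression, "gram_positive"))  # ['geneA']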
<urn:uuid:3285a12e-f252-4762-abb7-dab217bfc13d>
3.5625
758
News Article
Science & Tech.
29.760982
95,543,743
Phospholipid bilayers that mimic cell membranes in living organisms are of interest as substrates for biosensors and for the controlled release of pharmaceuticals. To better understand how these materials behave with embedded proteins, a necessary first step is to understand how the bilayers respond by themselves. As will be reported in the Dec. 9 issue of Physical Review Letters (published online Nov. 21), scientists at the University of Illinois at Urbana-Champaign have studied the phase transition in a supported bilayer and discovered some fundamental properties that could affect the material’s performance in various applications. "Like water turning into ice, bilayers can exist in either a fluid phase or a solid (gel) phase, depending upon temperature," said Andrew Gewirth, a professor of chemistry. "Using a sensitive atomic force microscope, we studied how the microstructure of these bilayers changed during the transformation process." James E. Kloeppel | UIUC news bureau
<urn:uuid:44665c14-52cf-428b-ad58-ab6a633342cf>
3.078125
838
Content Listing
Science & Tech.
37.550389
95,543,747
Places such as islands, river channel regions of the desert and small-scale rock outcrops can offer critical protection for threatened species populations when times get tough. Project 4.4 will identify the refuges that threatened species depend upon when under stress from threats such as feral predators, diseases, fires and droughts, and identify the best ways to manage and protect those refuges. “We will collate the fragmented fieldwork research that already exists to reveal potential high-value conservation areas that may not be currently included in national parks,” says Dr Diana Fisher, who leads the project alongside Dr Michael Kearney. The project will include studies that focus on a variety of threatened species like the greater bilby in central Australia and the great desert skink in the north-western corner of South Australia. It will attempt to discover how they respond to the threats posed by feral cats and extreme weather events such as bushfires and droughts. “Close attention must be paid to where threatened species go in times of drought and whether their predators are capable of following them. Radio collars and monitoring have played an important role in identifying the movements of many of these animals and their predators,” says Dr Fisher. “We’re interested in mapping out precisely what they need to survive; obviously a reliable water source is vital but there are many other factors we must take into consideration, like safety from feral predation. “For example, in the case of the bridled nailtail wallaby and the northern quoll, these mammals are quite small and particularly vulnerable to predation by cats and dogs. “Our challenge is to develop an understanding of the complex interactions between predators and native mammals, and the environments and conditions in which each can survive.” This project will also be linked to research undertaken by Dr Sarah Legge and Professor John Woinarski on feral cat management. “Studies conducted on the behaviour of feral cats revealed that they are achieving far greater success when hunting in burnt-out open areas, where there’s no refuge for their prey. We don’t fully understand how they are managing to survive in some desert areas,” says Dr Fisher. “In the case of many small mammals, those classified under the antechinus (small carnivorous mammals) genus and others like the plains mouse or the central rock rat appear to be restricted to increasingly smaller ranges. We need to investigate and understand the precise reasons why they now appear to be so restricted in their range.” Dr Fisher notes the futility of protecting huge areas of land, if there is no protection for species in crucial refuges for survival in times of stress. “We know we can’t protect threatened species from predators in all places, we don’t have the resources or the capacity, but if we can identify refuges we can focus our attention on achieving the best possible outcomes.” Image credit: Northern quoll, S J Bennett (Flickr) Most people know that cats kill many birds and mammals, but they also have impacts on less charismatic species. Australian cats are killing about 650 million reptiles per year, according to new research published in the journal Wildlife Research. You have to be pretty lucky to make a living by combining your passion and interests, and that’s exactly how Dr Daniel White feels about his current state of affairs. Dan began his career studying genes, and has since applied his science to saving species. Here he describes how. 
<urn:uuid:235c6c48-0f58-45c2-80e1-a440bc6ff337>
3.828125
903
News (Org.)
Science & Tech.
35.962788
95,543,769
Texas Tech Associate Professor and Whitacre Endowed Chair in Mechanical Engineering Jian Sheng, along with biologists Brad Gemmell and Edward Buskey from the University of Texas Marine Science Institute, has discovered new information that explains how these tiny organisms overcome this disadvantage. Their paper, titled "A compensatory escape mechanism at low Reynolds number," was published in the current issue of the Proceedings of the National Academy of Sciences. "The purpose of the study was to determine the effects of climate change at the very base of the food chain," Sheng said. Planktonic copepod nauplii are one of the most abundant animal groups on the planet, and many species, including many commercially important fish, rely on them at some point during their life cycle. Understanding the ability of these animals to respond to changes in the environment could have direct implications for understanding the future health of our oceans. By independently varying temperature and viscosity, Sheng recorded their movements with 3-D high-speed holographic techniques developed by the Sheng lab at Texas Tech. "At 3,000 frames per second, it was like tracking a racecar through a microscope," Sheng said. "We were able to determine that the plankton adapted to changes in viscosity by altering the rhythm of its pulsing appendage." The response, built in to its natural muscle fiber, was only triggered by changes in temperature, Sheng said. It could not compensate for changes in viscosity due to environmental pollution, such as algae blooms or oil spills.
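The escape problem described here is governed by the Reynolds number, which compares inertial to viscous forces for a swimmer of a given size and speed. The short sketch below simply evaluates that ratio for a millimetre-scale swimmer in warm and cold seawater; all values are illustrative, order-of-magnitude assumptions, not measurements from the paper.

# Illustrative sketch: how viscosity changes the Reynolds number
# Re = rho * U * L / mu for a millimetre-scale swimmer.
# All numbers are assumed order-of-magnitude values, not data from the study.

def reynolds(length_m, speed_m_s, density_kg_m3, viscosity_pa_s):
    """Re = inertial forces / viscous forces."""
    return density_kg_m3 * speed_m_s * length_m / viscosity_pa_s

L = 0.5e-3    # body length ~0.5 mm (assumed)
U = 0.05      # escape speed ~5 cm/s (assumed)
rho = 1025.0  # seawater density, kg/m^3

# Seawater viscosity roughly doubles when cooling from ~25 C to ~0 C.
for label, mu in [("warm water (~25 C)", 0.9e-3), ("cold water (~0 C)", 1.9e-3)]:
    print(f"{label}: Re ~ {reynolds(L, U, rho, mu):.1f}")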
<urn:uuid:b45d83b6-5b24-4159-9b7d-c28ca50d1bb7>
3.125
958
Content Listing
Science & Tech.
35.082746
95,543,778
Evolution doesn’t have to take millions of years. New research shows that a type of lizard living on man-made islands in Brazil has developed a larger head than its mainland cousins in a period of only 15 years. The group of insect-eating geckos from the species Gymnodactylus amarali was isolated from the rest of the population when areas of the countryside were flooded to provide hydro-electric power. This caused the extinction of some larger species of lizards on the new islands, leaving the geckos to eat insects that would normally have been mopped up by the bigger species. As a result, the geckos have evolved bigger mouths, and so bigger heads, that enable them to eat their larger prey more easily. We’ve actually seen rapid evolution like this before, but usually in response to a natural disaster such as drought or climate change. What’s different about the geckos is that they’ve evolved in direct response to an environmental change enacted by humans, demonstrating just how much impact we can have on the natural world. The gecko study, published in PNAS, gives us an interesting demonstration of how evolution works, not just because the change has happened within our lifetimes. Those geckos among the original colony that had larger heads (and mouths) could eat a wider range of prey and so had more energy to put into survival and reproduction. As a result, they had more children and their genes for larger heads spread to a greater proportion of the next generation. This continued until larger heads had become a common feature of the group. But why just those with bigger heads? Why didn’t geckos whose whole bodies were bigger receive the same evolutionary advantage? Well larger bodies take more energy to maintain, so those individuals would lose the advantage that they gain by eating more food. One of the most interesting things about this research is that the geckos on all five of the islands studied have evolved larger heads, even though they were isolated from each other. This suggests that increasing head size without increasing body size is the most efficient way to take advantage of the opportunity to eat a more varied diet than is normal for this species. This kind of rapid evolution has been seen before, including among the finches of the Galapagos Islands that helped Charles Darwin formulate his theory of natural selection in the first place. One of these finches species reduced the average size of its bill in a period of just 22 years when a competitor with a larger bill colonised the island. The larger species ate all the larger seeds with tough shells, a large bill that still couldn’t compete became a disadvantage for the finches and so those birds with a smaller beak began to thrive. This is one of the fundamental principles of biology: if you don’t need a particular structure you don’t bother to grow it and save the energy instead. A similar instance occurred in Florida when a lizard called the Cuban brown anole, which is much larger than the native green anole, colonised areas of Florida. The green anole promptly retreated up into the treetops and within 20 generations had evolved bigger, stickier foot pads, a helpful characteristic for the high life. Another example of rapid evolutionary change was found in Soay sheep on the island of Hirta in St Kilda off the coast of Scotland. After the residents of the island were evacuated in 1930, the sheep were allowed to run wild and, within 25 years, began to get smaller. 
The explanation put forward for this is that milder winters caused by climate change are allowing smaller lambs to survive, bringing down the average size of the whole population. This suggests that we should expect to see many more examples of rapid evolution as the climate continues to change in response to greenhouse gas emissions. But the new study on geckos shows that localised human action can also interfere with the processes of evolution. Although the change in head and mouth size in the gecko seems benign, we should remember it came about because of the extinction of four other species of lizard in the area linked to the flooding. It's a timely reminder that climate change is not the only issue facing biodiversity and evolutionary processes.
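The selective mechanism described above, in which individuals with larger heads eat more, reproduce more, and shift the population mean within a few dozen generations, can be illustrated with a toy simulation. The fitness function and every parameter value below are invented for this sketch and are not taken from the PNAS study.

import random

# Toy model of directional selection on relative head size over ~15 generations.
# Numbers are arbitrary and purely illustrative.
random.seed(1)

def fitness(head_size):
    # Bigger heads allow larger prey, but oversized heads carry a maintenance cost.
    return max(0.0, head_size - 0.3 * head_size ** 2)

population = [random.gauss(1.0, 0.05) for _ in range(500)]  # relative head size

for generation in range(15):
    # Parents are drawn in proportion to fitness; offspring inherit with small noise.
    weights = [fitness(h) for h in population]
    parents = random.choices(population, weights=weights, k=len(population))
    population = [random.gauss(p, 0.02) for p in parents]

print(f"mean relative head size after 15 generations: {sum(population) / len(population):.3f}")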
<urn:uuid:4337bd2f-0023-49e9-a1c8-bcf4b7e8a6d3>
4.125
942
News Article
Science & Tech.
37.527404
95,543,796
Photo courtesy of Francesco Tomasinellni Symbiotic relationships exist throughout the natural world. From bacteria living in human digestive systems, to bees pollinating flowers, dogs living with humans and fish hanging out underneath sharks, many species have a mutual relationship with another in some form. With the recent focus on a story about tarantulas and frogs doing the rounds, I would like to document a few symbiotic relationships which naturally occur in nature. So what is this frog-tarantula relationship? The headline to the article which has come into focus states that 'Tarantulas are keeping frogs as pets'. This isn't actually the case. What they are in fact doing is working together; a mutual dependency where both parties benefit from the outcome. This is what we know as mutual symbiosis. There are several different types of symbiotic relationship. Mutualistic symbiosis: both parties work together and both receive benefits. Commensalistic symbiosis: one species benefits from another, while the second party is neither helped nor harmed. Parasitic symbiosis: one party reaps benefits from another, causing it harm, detriment or death. The story about the tarantula-frog relationship isn't a new one; it was actually first documented nearly 25 years ago by Crocraft & Hambler. It refers to Xenesthis immanis, a large burrowing tarantula that comes from Peru, and Chiasmocleis ventrimaculata, a tiny, narrow-mouthed frog also known as the Dotted Humming frog. The frogs and spiders live with each other, despite the fact that the frog would probably make a fairly decent meal for the tarantula. So how can prey and predator feel so comfortable around each other? Well, it is believed that the tiny frogs are used to guard the spider's eggs. Ants are the main source of food for the Dotted Humming frogs, however they are also one of the biggest threats to the tarantula's eggs. The frogs guard the spider's eggs and in return, the tarantula protects the tiny amphibian from other predators, such as other arthropod species. Since the first sighting in 1989, there has been further documentation of these frogs and tarantulas living and working together, leading us to believe they have well sealed their mutual symbiotic relationship. The fact that two such creatures can form a mutually beneficial relationship seems almost sweet to us, if not a little unbelievable, however it is going on all over the animal kingdom. If you head into any fish shop or aquarium today, they will tell you (and try to sell you) about the brilliance of shrimps. Shrimps are excellent for cleaning up. They remove lots of debris from water and plants and consume it, leading to a reduction of rubbish in the water column. And it isn't just us humans who have picked up on this. Cleaner shrimp (Palaemonidae sp., Hippolytidae sp., Stenopodidae sp.) get their name because that is exactly what they do: clean. Like tiny dentists, they spend their time cleaning the insides of the mouths of other fish, eating the parasites living in there. The shrimps congregate in large numbers at 'cleaning stations', waiting for interested fish to come along, queue up and, when it is their turn, open their mouths for the shrimp to head in and get to work. The shrimps get a good meal and the fish get a decent oral clean. These shrimps aren't fussy either; there have even been cases of human divers heading down, opening their mouths and getting their teeth cleaned!
Another example of symbiosis involving mouths is the supposed relationship between Crocodiles and Egyptian Plovers. There are written accounts of Plovers flying in to remove food from crocodiles open mouths which date back as far as 480 BC and the National Geographic ran the same story back in 1986, however there is still no hard evidence that this relationship actually exists or that the crocodiles do entertain the birds in this fashion. No photographic or video material has been produced and there are some who do not believe this actually happens. Another famous relationship is that of the Oxpecker and a variety of large mammals. It was believed for a long time that Oxpeckers were doing good for many mammals, however ongoing research by Paul Weeks et al. has suggested that this is not correct and this actually may be a case of Parasitic symbiosis. Oxpeckers eat a variety of small parasites such as ticks, lice, fleas etc. The birds can often be seen sitting atop Rhinos, Hippos, Elephants, Wildebeest or cattle, feeding from the insects on their skin. It used to be common thought that the Oxpeckers were doing the mammals a favour; removing all those nasty parasites they didn’t want. However, there is a new school of thought that suggests the Oxpeckers are doing more harm than good. Often, when they consume a tick or flea, the ecto-parasite has already had its fill, meaning it will have already bitten the mammal and spread any disease that it may be carrying. The Oxpecker may remove the parasite, but there is no evidence that they are causing a reduction in infestations. The birds have also been observed creating or opening wounds on their host and drinking their blood. In fact some mammals, such as Elephants, will work to remove these birds from their backs suggesting the relationship is not enjoyed by both parties. Symbiosis doesn’t just happen in the animal kingdom however, it is also apparent in the plant world. The final relationship I want to mention here (which is absolutely my favourite) is Lichen. Lichen is often described as an entity all of its own, but is actually just an example of symbiosis working perfectly. Lichens are a hybrid of a fungus and algae or cyanobacteria. The algae part which forms lichen is called a photobiont and the fungi part is a mycobiont. For the fungus, having the algae around is important because it can perform photosynthesis, enabling the normally heterotrophic fungi to harvest this and create its own food. In return, the algae uses the fungus to help it expand its habitat range. Fungus is fantastic at adapting to different habitats, so by teaming up, the algae can reach further afield. There are many different types of lichen, made up by different types of fungus and different algae or cyanobacteria. Some people argue that the relationship is not mutual and is in fact a form of parasitic symbiosis on the part of the fungi as it uses up so much of the sugars the algae develop through photosynthesis, however there is substantial evidence for both arguments and a concrete conclusion has not yet been reached. These are only a few examples of symbiosis. There are hundred of different types of relationships out there, all just as fascinating as the next. It will be interesting to know whether more symbiotic relationships will form as we continue to evolve and which species will rely on which? 
<urn:uuid:5760358d-5381-4504-9f31-eae1bde478db>
3.28125
1,551
Personal Blog
Science & Tech.
45.117
95,543,797
It is shown in this paper that no stable equilibrium can be attained in an ecological community in which some r of the components are limited by less than r limiting factors. The limiting factors are thus put forward as those aspects of the niche crucial in the determination of whether species can coexist. For example, consider a simple food web in which a single predator feeds on two prey species (the diagram is omitted here). Despite the similar positions occupied by the two prey species in this web, it is possible for them to coexist if each is limited by an independent combination of predation and resource limitation, since then two independent factors are serving to limit two species. On the other hand, if two species feed on distinct but superabundant food sources, but are limited by the same single predator, they cannot continue to coexist indefinitely. Thus these two species, although apparently filling distinct ecological niches, cannot survive together. In general, each species will increase if the predator becomes scarce, will decrease where it is abundant, and will have a characteristic threshold predator level at which it stabilizes. That species with the higher threshold level will be on the increase when the other is not, and will tend to replace the other in the community. If the two have comparable threshold values, which is certainly possible, any equilibrium reached between the two will be highly variable, and no stable equilibrium situation will result. This is not the same as dismissing this situation as "infinitely unlikely," which is not an acceptable argument in this case. Hutchinson's point of the preceding section vividly illustrates this. The results of this paper improve on existing results in three ways. First, they eliminate the restriction that all species are resource-limited, a restriction persistent in the literature. Second, the results relate in general to periodic equilibria rather than to constant equilibria. Third, the nature of the proof relates to the crucial question of the behavior of trajectories near the proposed equilibrium, and provides insight into the behavior of the system when there is an insufficient number of limiting factors.
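The claim that two prey species limited by the same single predator cannot coexist can be illustrated with a toy Lotka-Volterra-style simulation. The equations and all parameter values below are a generic textbook-style sketch chosen for illustration; they are not the model analysed in the paper.

# Toy "one predator, two prey" model: each prey grows on superabundant food
# and is limited only by the shared predator p.  Prey i stabilises where
# p = r_i / a_i (its "threshold predator level"); the species with the higher
# threshold eventually excludes the other.  Parameters are arbitrary.

r1, a1 = 1.0, 1.0   # prey 1: growth rate, vulnerability -> threshold p = 1.0
r2, a2 = 0.9, 1.0   # prey 2: threshold p = 0.9 (expected to lose)
e, d = 0.5, 0.6     # predator conversion efficiency and death rate

x1, x2, p = 1.0, 1.0, 0.5
dt = 0.001
for step in range(200_000):            # integrate to t = 200 with simple Euler steps
    dx1 = x1 * (r1 - a1 * p)
    dx2 = x2 * (r2 - a2 * p)
    dp = p * (e * (a1 * x1 + a2 * x2) - d)
    x1 += dx1 * dt; x2 += dx2 * dt; p += dp * dt

# Expected outcome: prey 2 declines toward zero while prey 1 and the predator persist.
print(f"prey 1: {x1:.3f}   prey 2: {x2:.3e}   predator: {p:.3f}")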
<urn:uuid:e8428adc-0bae-43fc-a858-15bc8d9decfe>
2.796875
416
Academic Writing
Science & Tech.
24.804157
95,543,803
This would run counter to the widely-held belief that massive, luminous galaxies (like our own Milky Way Galaxy) began their formation and evolution shortly after the Big Bang, some 13 billion years ago. Further research into the nature of these objects could open new windows into the study of the origin and early evolution of galaxies. John Salzer, principal investigator for the study published today in Astrophysical Journal Letters, said that the 15 galaxies in the sample exhibit luminosities (a measure of their total light output) that indicate that they are massive systems like the Milky Way and other so-called "giant" galaxies. However, these particular galaxies are unusual because they have chemical abundances that suggest very little stellar evolution has taken place within them. Their relatively low abundances of "heavy" elements (elements heavier than helium, called "metals" by astronomers) imply the galaxies are cosmologically young and may have formed recently. The chemical abundances of the galaxies, combined with some simple assumptions about how stellar evolution and chemical enrichment progress in galaxies in general, suggest that they may only be 3 or 4 billion years old, and therefore formed 9 to 10 billion years after the Big Bang. Most theories of galaxy formation predict that massive, luminous systems like these should have formed much earlier. If this overall interpretation proves correct, the galaxies may allow astronomers to investigate phases of the galaxy formation and evolution process that have been difficult to study because they normally occur at such early times in the Universe, and therefore at very large distances from us. "These objects may represent a unique window on the process of galaxy formation, allowing us to study relatively nearby systems that are undergoing a phase in their evolution that is analogous to the types of events that, for most galaxies, typically occurred much earlier in the history of the Universe," Salzer said. The discoveries are the result of a multi-year survey of more than 2,400 star-forming galaxies called the Kitt Peak National Observatory International Spectroscopic Survey (KISS). The survey was designed to collect basic observational data for a large number of extragalactic emission-line sources. Additional rounds of follow-up spectroscopy for the sources discovered in the initial survey led to the discovery of the 15 luminous, low-abundance systems. "The reason we found these types of galaxies has to do with the unique properties of the KISS survey method," Salzer said. "Galaxies were selected via their strong emission lines, which is the only way to detect these specific galaxies." Previous surveys done by others have largely missed finding these unusual galaxies. While the hypothesis that these galaxies are cosmologically young is provocative, it is not the only possible explanation for these enigmatic systems. An alternative explanation proposes that the galaxies are the result of a recent merger between two smaller galaxies. Such a model might explain these objects, since the two-fold result of such a merger might be the reduction of metal abundances due to dilution from unprocessed gas and a brief but large increase in luminosity caused by rampant star formation. As a way to distinguish between these two scenarios, Salzer and his team intend to request observing time on NASA's Hubble Space Telescope to use high-resolution imaging to determine whether or not the systems might be products of merging. 
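The age argument sketched above, that low heavy-element abundance plus simple assumptions about steady chemical enrichment implies a short enrichment history, can be caricatured in a few lines. The linear enrichment model and every number below are assumptions made purely for this sketch; they are not the calculation used by Salzer's team.

# Cartoon age estimate: assume a galaxy's metallicity Z grows roughly in
# proportion to the time it has spent forming stars, and calibrate against a
# "mature" galaxy of known age.  Real chemical-evolution models are far more involved.

Z_REFERENCE = 0.020       # metal mass fraction of a mature, Milky-Way-like system (assumed)
AGE_REFERENCE_GYR = 12.0  # assumed enrichment timescale of that mature galaxy

def toy_age_gyr(z_observed):
    """Age implied by linear Z(t) growth, anchored to the reference galaxy."""
    return AGE_REFERENCE_GYR * z_observed / Z_REFERENCE

for z in (0.005, 0.007):  # low abundances of the kind discussed above (assumed values)
    print(f"Z = {z:.3f}  ->  ~{toy_age_gyr(z):.1f} Gyr of enrichment")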
A National Science Foundation Presidential Faculty Award to Salzer, as well as continued NSF support cumulatively totaling $1.2 million, funded the KISS survey and supporting work. Also contributing to the Astrophysical Journal Letters paper were astronomers Anna Williams of Wesleyan University in Middletown, Conn. and Caryl Gronwall of Pennsylvania State University. Salzer is at IU while on leave from his position of professor of astronomy at Wesleyan, but expects to formally join the faculty at IU in the coming year. The authors also recognized KISS team members Gary Wegner, Drew Phillips, Jessica Werk, Laura Chomiuk, Kerrie McKinstry, Robin Ciardullo, Jeffrey Van Duyne and Vicki Sarajedini for their participation in the follow-up spectroscopic observations over the past several years.
<urn:uuid:ca3bd945-9672-4bba-8161-09a5ddd1dc1f>
3.984375
1,530
Content Listing
Science & Tech.
32.222436
95,543,866
META II Grammar Explained

Even after reading the paper, I had a hard time understanding how the grammar worked. Below are my notes on the grammar itself. This description assumes that you have read the paper and that you are familiar with the different syntactic and semantic constructs of the language.

A META II grammar starts with .SYNTAX followed by an identifier pointing to the first syntax equation. The control must go there first, .OUT('ADR ' *).

PROGRAM = '.SYNTAX' .ID .OUT('ADR ' *)

A grammar is composed of zero or more syntax equations, and a grammar terminates with .END. A syntax equation starts with an identifier, the metalinguistic variable, separated from a first-order expression by the = character. Each syntax equation is translated into a recursive subroutine: its entry point is emitted with .LABEL * and it returns with .OUT('R').

ST = .ID .LABEL * '=' EX1 '.,' .OUT('R').,

The syntax equation for first-order expressions implements the alternative operator of the language. If the previous second-order expression, EX2, returns true, the control is redirected to the end of the subroutine, .OUT('BT ' *1).

EX1 = EX2 $ ('/' .OUT('BT ' *1) EX2) .LABEL *1 .,

The syntax equation for second-order expressions implements the concatenation mechanism of the language. A second-order expression can be a concatenation of third-order expressions or output directives. If the first construct fails, the control is redirected to the end of the subroutine, .OUT('BF ' *1). Otherwise, the parsing continues. If no third-order expression or output directive is found, the subroutine finishes. If one is found but the parsing failed, a syntax error is thrown, .OUT('BE').

EX2 = (EX3 .OUT('BF ' *1) / OUTPUT) $ (EX3 .OUT('BE') / OUTPUT) .LABEL *1 .,

The syntax equation for third-order expressions implements:
- the mechanism to combine syntax equations, .ID, which calls the subroutine with the same name, .OUT('CLL ' *).
- the syntax recognizers '.ID', '.NUMBER' and '.STRING', which call the appropriate matching procedure
- the precedence mechanism using parentheses
- the optional operator, '.EMPTY'
- the optional sequence operator, '$'

The optional sequence operator tries to recognize as many third-order expressions as possible and always succeeds, .OUT('SET').

EX3 = .ID .OUT('CLL ' *) / .STRING .OUT('TST ' *) / '.ID' .OUT('ID') / '.NUMBER' .OUT('NUM') / '.STRING' .OUT('SR') / '(' EX1 ')' / '.EMPTY' .OUT('SET') / '$' .LABEL *1 EX3 .OUT('BT ' *1) .OUT('SET').,

The syntax equations for output directives implement the .OUT and .LABEL directives used to generate assembly code.

OUTPUT = ('.OUT' '(' $ OUT1 ')' / '.LABEL' .OUT('LB') OUT1) .OUT('OUT') .,

.OUT and .LABEL accept the same arguments. *1 and *2 (GN1 and GN2) are used to generate and copy labels for implementing subroutines and control structures. * is used to copy what has just been parsed. .STRING is used to copy a string literal.

OUT1 = '*1' .OUT('GN1') / '*2' .OUT('GN2') / '*' .OUT('CI') / .STRING .OUT('CL ' *).,
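To make the "each syntax equation becomes a recursive subroutine" idea concrete, here is a minimal hand-written recognizer in Python for the EX1/EX2-style structure (alternatives of concatenations). It only mirrors the control flow described above, the success-flag discipline rather than META II's code generation, and every name and the tiny grammar in it are invented for this sketch.

# Minimal sketch: each rule is a recursive subroutine returning a success flag,
# alternatives are tried in order (EX1-style), and a concatenation that fails
# after its first element has matched raises a syntax error (EX2-style).
# Illustrative only; this is not the META II virtual machine.

class Parser:
    def __init__(self, text):
        self.text, self.pos = text, 0

    def tst(self, literal):
        """Like the TST order: skip blanks, then try to match a literal token."""
        while self.pos < len(self.text) and self.text[self.pos].isspace():
            self.pos += 1
        if self.text.startswith(literal, self.pos):
            self.pos += len(literal)
            return True
        return False

    def expr(self):
        """EX1-style rule: expr = term_plus / '0'."""
        return self.term_plus() or self.tst("0")

    def term_plus(self):
        """EX2-style rule: term_plus = '1' '+' expr; late failure is an error."""
        if not self.tst("1"):
            return False
        if not self.tst("+"):
            raise SyntaxError(f"expected '+' at position {self.pos}")
        if not self.expr():
            raise SyntaxError(f"expected expression at position {self.pos}")
        return True

print(Parser("1 + 1 + 0").expr())   # True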
<urn:uuid:bf5a00cf-f025-48cd-a1c8-010ac2167698>
3.015625
797
Documentation
Software Dev.
57.9825
95,543,883
Martian conglomerate containing rounded pebbles observed by Mars Curiosity Rover in Gale Crater (Image by NASA/JPL-Caltech/MSSS) Rounded pebbles discovered by the Curiosity rover were carried some 30 kilometres from their source by a flowing river on the red planet, a new study concludes. The research, reported in the journal Nature Communications, provides some of the most compelling evidence yet that Mars had long periods of warm climate, allowing flowing rivers to move material tens of kilometres downstream. The findings are based on new experiments that show the shape of a pebble can be used to accurately determine how far it has been transported across a planet's surface. 'The thing that's absolutely remarkable is that we can make observations from space, send a rover there, and by looking at individual particles, actually determine — from their shapes — how far they've moved,' said one of the study's authors, Professor Douglas Jerolmack of the University of Pennsylvania in Philadelphia.
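The claim that pebble shape encodes transport distance rests on the idea that abrasion removes mass at a roughly constant fractional rate per unit distance travelled. The exponential attrition law and every number below are assumptions made for this illustration, not values from the Nature Communications paper.

import math

# Sternberg-style attrition sketch: a pebble loses a fixed fraction of its
# mass per kilometre of transport, m(x) = m0 * exp(-k * x).  If shape analysis
# lets us estimate how much mass has been lost, we can invert for distance x.
# The coefficient and mass-loss estimate are assumed, illustrative values.

K_PER_KM = 0.01  # assumed attrition coefficient (fraction of mass lost per km)

def transport_distance_km(mass_fraction_lost):
    remaining = 1.0 - mass_fraction_lost
    return -math.log(remaining) / K_PER_KM

# e.g. rounding that suggests ~25% of the original mass has been abraded away
print(f"~{transport_distance_km(0.25):.0f} km of fluvial transport")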
<urn:uuid:b7c62bf3-6668-42d2-9946-442d8120d15e>
3.28125
681
Truncated
Science & Tech.
15.944254
95,543,889
There are lots of different methods to find out what the shapes are worth - how many can you find? Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws? A game for 2 or more people, based on the traditional card game Rummy. Players aim to make two `tricks', where each trick has to consist of a picture of a shape, a name that describes that shape, and. . . . Engage in a little mathematical detective work to see if you can spot the fakes. A game in which players take it in turns to turn up two cards. If they can draw a triangle which satisfies both properties they win the pair of cards. And a few challenging questions to follow... A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. A game in which players take it in turns to try to draw quadrilaterals (or triangles) with particular properties. Is it possible to fill the game grid? Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3... Why not challenge a friend to play this transformation game? The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for? Move your counters through this snake of cards and see how far you can go. Are you surprised by where you end up? In this game the winner is the first to complete a row of three. Are some squares easier to land on than others? Play this game to learn about adding and subtracting positive and negative numbers Invent a scoring system for a 'guess the weight' competition. Match the cumulative frequency curves with their corresponding box plots. Can you deduce which Olympic athletics events are represented by the graphs? Infographics are a powerful way of communicating statistical information. Can you come up with your own? How good are you at estimating angles? How can we make sense of national and global statistics involving very large numbers? Generate three random numbers to determine the side lengths of a triangle. What triangles can you draw? Investigate what happens to the equation of different lines when you translate them. Try to predict what will happen. Explain your findings. Collect as many diamonds as you can by drawing three straight lines. A game in which players take it in turns to choose a number. Can you block your opponent? Investigate what happens to the equations of different lines when you reflect them in one of the axes. Try to predict what will happen. Explain your findings. I took the graph y=4x+7 and performed four transformations. Can you find the order in which I could have carried out the transformations? My measurements have got all jumbled up! Swap them around and see if you can find a combination where every measurement is valid. Can you work out which processes are represented by the graphs? These Olympic quantities have been jumbled up! Can you put them back together again?
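One of the puzzles listed above asks for numbers whose first two digits form a number divisible by 2, whose first three digits form a number divisible by 3, and so on. A short search makes the idea concrete; the function name is made up here and the snippet is just one way to explore the puzzle, not part of the game itself.

# Build numbers digit by digit so that the first k digits always form a
# number divisible by k (the idea behind the divisibility game above).

def extend(prefixes, k):
    """Given all valid (k-1)-digit prefixes, return the valid k-digit ones."""
    out = []
    for p in prefixes:
        for d in range(10):
            candidate = p * 10 + d
            if candidate % k == 0:
                out.append(candidate)
    return out

prefixes = list(range(1, 10))      # any single digit is divisible by 1
for k in range(2, 6):
    prefixes = extend(prefixes, k)

print(len(prefixes), "five-digit numbers work, e.g.", prefixes[:5])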
<urn:uuid:f034c5a1-55a6-48b0-803b-3be9819442da>
3.265625
699
Content Listing
Science & Tech.
66.905208
95,543,900
How is energy use determined? Usually we use a mechanical or electrical meter. Gas supplies can simply drive round a counter. Electricity can be measured in several ways, for instance by using part of it to turn a disc which in turn drives a counter. Essentially we measure how much power we use and for how long. The unit is usually kilowatt-hours. What laboratory methods are used to determine the energy content of foods? A calorimeter. Although not the most accurate method, the simplest way to test the energy content of foods is to use a calorimeter, which is basically an insulated copper beaker filled with water. The item of food is burned under the copper cup, and the change in temperature of the water is then taken. Using the specific heat of water and copper it is possible to find how many joules of energy were added to the water and beaker. This value can then be converted into calories if desired. How did Niels Bohr use spectra to determine energy levels of atoms and ions? Patterns in spectral lines: when a gas is excited in an electrical discharge, light is emitted (this is essentially how neon lamps work). Niels Bohr looked at this emitted light using a spectrograph, which separates different wavelengths of light (just like a simple triangular prism). Atoms, such as hydrogen or neon, emit very specific patterns of light. When you separate the wavelengths, you see a pattern of very sharp lines of light at only certain wavelengths and not others. In other words, the atoms emit only certain wavelengths of light, resulting in a series of lines when you look at the light through a spectrograph. Bohr looked at these lines and managed to figure out the pattern that determined which wavelengths were observed. He said that the light emitted was due to transitions between energy levels in the atoms, and the wavelength of light corresponded to the energy difference between the two states involved in the transition. In this way, he figured out the equation to predict the spacing between all of the energy levels of any one-electron atom or ion. His model was quite successful, and he was able to predict which lines you would see for things that hadn't even been measured yet (a good test for any theory!). Surprisingly, although he figured out the pattern so well, he didn't actually know what the patterns were really due to. In fact, he had to make assumptions that turned out to be completely false! However, despite these errors (which were corrected when quantum mechanics was developed), the Bohr model of the atom is very useful for many applications. His model does NOT work well for multi-electron atoms/ions, which unfortunately includes the large majority of atoms and ions! You need quantum mechanics for that! The kinetic energy of a substance is determined by what? The kinetic energy of anything is determined by the mass and velocity of the substance. This is represented in the equation KE = (1/2)mv². How does solar energy determine weather and climate? If there were no solar energy there would be no winds, because the sun heats the ground and the air around it; the warm air rises and pushes cold air down to the ground, the hot air cools off and the cold air warms up, and that cycle of heating creates winds. What is the equation used to determine the energy content of a packet of light of specific frequency? Planck's equation, E = hv, where E is the energy contained within the photon of light, h is Planck's constant, and v is the frequency of the light.
Planck's constant h = 6.626068 x 10^-34 J s, and the frequency of light = speed of light / wavelength. This is not only used for light but also to find the energy of all electromagnetic radiation. What is the formula you use to determine the gravitational potential energy of an object? Weight x Height = Potential energy. The units and the calculation are the same as for work. Work is force through a distance. Get the weight in the SI by multiplying the mass in kilograms by the acceleration due to gravity on Earth, like so: 1.00 kg x 9.81 m/s² = 9.81 newtons. The gravitational potential energy of a 1.00 kg mass lifted 1.00 meter on Earth will be: 1.00 kg x 9.81 m/s² x 1.00 m = 9.81 J. In the SI, you will give the answer in newton-meters or joules. What determines the energy of a river? I think it's a correlation of the volume of water multiplied by the steepness of descent, that is, mass times velocity. Having said that, please note I did start with the words 'I think'. How do you determine an energy coefficient? The hydraulic energy coefficient is EnD = E/(n*D)², where EnD is the energy coefficient, E is the specific hydraulic energy (J/kg), n is the rotational speed (rpm) and D is the diameter (m). What is the formula you use to determine the gravitational potential energy of an object? The gravitational potential energy of an object is given by Ug = mgy, where m is the mass, g is the acceleration due to gravity, and y is the height of the object. How do you determine ionization energy? They should give you a chart to read off. If they don't: the first ionization energy is small, and then it increases as you take more electrons, because the atomic radius is decreasing, the nuclear pull is increasing, and the ratio of protons to electrons is increasing, so the electrons are pulled in more tightly. How can you determine an object's mechanical energy? You have to use a method of absorbing the energy, and measuring what has been absorbed. For example, engines are tested using a dynamometer, which can measure the force exerted and the rate of working of the engine. A simple analogy would be with the fairground game where you slam a hammer down onto some sort of pivot which raises a weight; the height it goes to indicates the energy put into the blow. So it's a matter of making such a test more accurate and scientific. How is the energy value of foods determined? It is determined by the number of calories. 1,000 calories equal one kilocalorie (one food Calorie), and a typical daily intake is 2,500 Calories. How is the energy value of food determined? It determines how your weight equals the mass of the height of what you and your attracting bodies weigh and eat... (I'm a scientist who knows everything about this stuff... thank you). How to determine elastic potential energy? The potential energy is the product of the force required to compress or stretch the elastic medium, and the distance of travel. If the force is measured in newtons and the movement in meters, the work done will be in joules. What determines an object's thermal energy? The same things that determine an object's temperature: the higher the temperature, the faster the particles move, the more kinetic energy they have and the greater the object's thermal energy. What determines potential energy of an object? It exists whenever an object which has mass has a position within a force field. What determines an energy level? The energy of a photon is described by the equation E = hc/l, where l is the wavelength, h is Planck's constant, c is the speed of light, and E is the energy.
So, the energy of a photon increases as the wavelength decreases. To determine energy band gap of a semiconductor using four probe method? Probes connect the metal and semi conductors so that we can allow the flow of current as germanium could not produce current.Probe provides a physical contact with germanium. What formula do you use to determine the amount of energy used by an appliance? The total amount of energy used by an appliance is equal to the power consumption multiplied by the time the appliance is in use. E = P x T ENERGY=POWERxTIME DDASIA What is formula you use to determine the gravitational potential energy of an object? PE = mgh, that is, mass x gravitation x height. Or simply weight times height, since weight is already equal to mass times gravitation.\n\n. On Earth, you would usually use 9.8 (meters per second square) for gravitation.\n\n. If mass is in kg., gravitation in meters per second square, and height in meters, then the result will be in Joule.\n An object's kinetic energy is determined by what? KE = 1/2mV 2 The mass of the object times the velocity of the object squared and a 1/2 constant derived from a calculus derivation of a kinematic equation. Why a Gm counter is not useful for determining absorbed energy in a gas? The Geiger-Meuller detector operates in avalanche mode, meaning that each ionization interaction results in a full pulse on the detector. Each pulse, then, is not proportional to the energy of the original event, making this not useful for determining absorbed energy. If you want to do that, you need to operate in linear mode. Its not just a case of reducing the operating voltage, as material, gas, and geometry are a factor in deciding which end result is more important. What factors determine whether a source of energy is useful? 1 Whether it is a renewable resource, (a resource that naturally renews itself, like solar power or wind) 2 Its cost effectiveness (does it help reduce the cost of energy in a community) 3 How easy it is to harness this source of energy 4. Whether or not using this resource is harmful to the environment (sometimes wind turbines kill wildlife and scare cows lol) 5 If people want a wind farm in their community or not (if the people aren't for it, it won't be used) 6. Is there a need for said source of energy 7 What types of gases/waste does it produce into the air/water supply when it's being used? Does it harm the environment, or does it simply vaporize? 8 Does it biodegrade safely in dumps, compost heaps, or landfills? Does amplitude determine the energy of a wave? Yes. Basically, the energy is proportional to the square of the amplitude. Yes. Basically, the energy is proportional to the square of the amplitude. Yes. Basically, the energy is proportional to the square of the amplitude. Yes. Basically, the energy is proportional to the square of the amplitude. What characteristic determines the energy of a light wave? It's wavelength or frequency. The energy of a light photon (particle of light) is equal to (h x c) / wavelength, or to h x frequency, where h is Planck's constant and c is the velocity of light in a vacuum. How is gravitational potential energy determined? Potential Energy is calculated by the product of the mass of the object ( not weight! ), the gravitational acceleration ( 9.81 m/s/s ) and the height of the object above a datum. mass x 9.81 x height How does an infrastructure determine your access to types of energy? 
if a fart eats another fart then the fart will mix with dookie an make a shart Determine how energy is related to change in state? Energy is used to do work against in separating the mutually attracive atoms or molecules from each other to change them from solid into liquid and from liquid into gaseous state. In experiment to determine the energy band gap of ge crystal using four probe method why it is called four probe method? Coz in this expermnt oven containing four probes is used to maintain the continuity with crystal What is the mathematical formula used to determine the amount of energy required to melt one gram of ice? The mathematical equation is E=MC^2. However, it'll only work if your substance is dihydrogen monoxide. Now that you know the answer, go solve it. What two factors does the electric company use to determine how much a business will pay for electrical energy? There are more than two factors, however, the main two factors are kilowatt hours of energy used and weather the energy is used in peak or off-peak times. Other factors like power factor correction and maintenance of on-site transformers and power conditioning equipment for large businesses can affect how much a business pays for electricity. How do the wavelength and the frequency of a wave determine its energy? There are two kinds of waves. One is mechanical for which the energy is given by the expression 2 m pi 2 a 2 nu 2 . a is the amplitude of vibration and nu is the frequency of vibration. Hence in case of mechanical waves, energy is proportional to the square of the amplitude. But in case of electromagnetic wave photon concept is introduced as quantum of energy. The energy of thee quantum is given by the expression h nu. Here h is Planck's constant and nu the frequency of radiation. Hence the energy content of each photon is proportional to the frequency of radiation. But the intensity of radiation is computed by the number of photons per second. How does activation energy determine if a reaction will release or absorb energy? Activation energy is really just the minimum amount of energy needed for a chemical reaction to occur. Without it, the energy will stay the same and the substance cannot undergo a chemical change. The thing to look at, I think, is the product of the reaction. For example, in a graph, two substances could have the same activation energy, but after the reaction the amount of energy in substance 1 could be extremely low and the amount of energy in substance 2 could be higher than the activation energy. In substance 1, evidence of an EXOTHERMIC reaction has occurred because the amount of energy in the original substance was lost indicating that it has released energy. Whereas in substance 2, when the amount of energy was higher than the activation energy, it is evident that an ENDOTHERMIC reaction has occurred because the amount of energy after the reaction is higher than it was before the reaction. This shows that substance 2 absorbed energy making it endothermic. hope this helps! What is the formula used to determine the gravitational potenital energy of an object? PE = mgh Potential energy = mass x gravity x height In SI units: Joules = kilograms x meters/second 2 x meters Standard Earth gravity is about 9.8 meters/second 2 . What determines an ecosystems energy budget? Ecosystem energy budget`s depend primarily of autotroph`s such as photoautotrophic organisms. The budget (energy that can be used by energy flux) depends on these primary producers for the rest of the food webs. 
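Several of the answers above (Planck's equation and the photon-energy relation E = hc/wavelength) lend themselves to a quick numerical check. The sketch below simply evaluates those formulas for a visible-light wavelength; the wavelength chosen is an arbitrary example.

# Photon energy from the relations quoted above: E = h*f and f = c / wavelength.
H = 6.626e-34  # Planck's constant, J*s
C = 3.0e8      # speed of light, m/s

def photon_energy_joules(wavelength_m):
    frequency = C / wavelength_m
    return H * frequency

green = 550e-9  # a green photon, 550 nm (example value)
print(f"frequency: {C / green:.3e} Hz")
print(f"energy:    {photon_energy_joules(green):.3e} J")
# Shorter wavelengths give higher photon energies, as noted above.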
What determines electomagnetic wave energy? The energy of a photon is the frequency times Planck's constant, hν . Gamma ray photons have the highest energy, then x-rays, ultraviolet, visible light (from violet to red), infrared, microwaves & radio waves. Then there's intensity whuch depends on how many photons you have. The brighter the light the more energy you get per unit time per unit area. How do you determine how many energy levels an atom has? To determine the no. of energy levels present in an atom, it's proper electronic configuration must be written. The highest principle quantum number tells that the atom consists of that much shells including all the sub-shells and the orbitals. Example:- Elec. config. of oxygen is 1s 2 2s 2 2p 4 . It shows that highest principle quantum number is 2 which means it has 2 energy levels. What formula do you use to determine the Gravitational potential energy of an object? Gravitational potential energy = (weight of the object) x (height) or Potential energy = (mass) x (acceleration of gravity) x (height) What determines kinetic energy of gases? Heat and pressure. the more pressure we put on a gaseous substance the greater the heat. the greater the heat the more kinetic energy it has. How can you determine if your appliances are energy efficient? The "Energy Star" logo is an easy way to see if an appliance is energy efficient. You should also read the instruction manual, and know your product well enough to observe that it is working properly. What determines the rate of energy delivered by current? The rate at which energy is delivered by an electric current is equal to (Magnitude of the current) 2 multiplied by (total resistance through which the current flows) How do you determine frequency and energy in a wave? To determine frequency, you can use the formula f = 1 / T, where T is the period of the wave. Period refers to the time taken for one complete wave cycle or a complete wavelength. Energy can be calculated by the sum of potential energy and kinetic energy at any point, since the total energy remains constant, assuming that there is no damping. What do nutritionists use to determine how much energy you get from food? they look at how many poopie fibers are in it then they add their own and feed it to your moms vagina. How is the thermal energy of an object determined? Just using H = m x s x @ m= mass of the object in kg s= specific heat capacity in J/kg/K @ temperature in K So H will be in J How can the activation energy of a reaction be determined experimentally? To ascertain activation energies experimentally you must measurethe reaction rate k on the basis of varying temperatures T, youshould plot the logarithm of k against 1/ T on a graph. How can the number of electrons in the outer main energy level of phosphorous be determined using the successive ionization energies? As each electron is removed, the successive ionization energy values increase. However, the ionization energy increases a lot when the sixth electron is removed. This suggests that the sixth electron is removed from a shell which is closer to the nucleus.. How is energy efficiency determined? You divide useful output energy by the input energy. Or equivalently, useful output power by input power. How do you determine the amount of electrical energy used by a device? Measure the current flowing through the device and the voltage across it; multiply the current by the voltage to get the power in Watts. 
Watts = Volts x Amps ; Watts = Amps 2 x Resistance (steady state answers using friendly units) How can you determined how much kinetic energy an object has? If you know the mass and velocity you can calculate it from theformula K.E. = ½m·V² where K.E. is the kinetic energy, m is the mass, and V is thevelocity. What determines the loudness or energy of sound? The source of the sound determines how loud it is. Sound is apressure wave transmitted through a material medium the energybeing released at the point of origin of the sound determines theenergy in that pressure wave and the more the energy the louder thesound.
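The answer above on finding activation energies experimentally, measuring the rate constant k at several temperatures and plotting ln k against 1/T, translates directly into a small least-squares fit. The rate data below are invented for the demonstration.

import math

# Arrhenius analysis as described above: k = A * exp(-Ea / (R*T)), so a plot
# of ln(k) versus 1/T is a straight line with slope -Ea/R.  Data are made up.
R = 8.314  # J/(mol*K)

temps_K = [300.0, 320.0, 340.0, 360.0]
ks      = [2.1e-4, 1.5e-3, 8.3e-3, 3.7e-2]   # hypothetical rate constants

xs = [1.0 / T for T in temps_K]
ys = [math.log(k) for k in ks]

# Ordinary least-squares slope of ln(k) against 1/T.
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)

activation_energy = -slope * R
print(f"estimated Ea ~ {activation_energy / 1000:.0f} kJ/mol")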
<urn:uuid:35a45326-c840-47a6-bc36-a986f29bbfb5>
3.515625
3,989
Q&A Forum
Science & Tech.
53.509564
95,543,909
F or the good of the climate, the time has come for a major initiative to reunite climate change mitigation efforts with biodiversity conservation and wilderness protection. Recent scientific research has shown clearly that protecting primary ecosystems such as forests, wetlands, and peatlands (whether they be tropical, temperate, or boreal) keeps their carbon stocks intact, avoids emissions from deforestation and degradation, and is a necessary part of solving the climate change problem (Lyssaert et al. 2008; Lewis et al. 2009; Phillips et al. 2008; Keith et al. 2009). This new understanding provides a way to make important advances to mitigate both climate change and the biodiver-sity extinction crisis. Climate change has emerged as the leading environ-mental issue of our time with good reason (IPCC 2007a). The rapid rise in Earth's temperature threatens human well-being in several ways: rising sea levels will render millions homeless, populations of malaria-bearing mosquitoes will reach millions of African people who live in areas that were once too cool for these insects, and there will be an increase in the frequency of extreme climatic events such as droughts, fires, floods, and hurricanes. Freshwater will get scarcer in some areas, which will lead to increasing tensions and poten-tially armed conflict about access to this basic resource. It is even possible that we could experience " climate surprises " — rapid, large-scale, and difficult-to-predict changes in the climate system that we know have occurred in the geological past. For example, ocean currents such as the North Atlantic Gulf Stream could change, rendering the climate of western Europe cooler and less agriculturally productive. Climate change also threatens other forms of life with which we share Earth. Coral reefs are bleaching, thus destroying critical fish habitat; climate shifts will result in the extinction of populations of many temperature-sensitive species such as mountain-dwelling pikas; and the habitats of other species such as cold-water trout and polar bears will shift or disappear. These changes are already underway, and they threaten many wildlife species. Carbon Dioxide The general problem that has led to rapid climate change is that we humans are releasing carbon dioxide (and other greenhouse gases) into the atmosphere faster than natural processes can remove it. A certain amount of heat in the atmosphere is good and gives us a livable climate, but now the increasing concentration of carbon dioxide in the atmo-sphere is causing a rise in global temperature with disastrous consequences. The cause of the rapid climate change we are now expe-riencing is primarily the result of two main kinds of human actions: burning fossil fuels and clearing or degrading nat-ural ecosystems. These activities release carbon dioxide into the atmosphere from places on or under the Earth's surface where it was previously stored harmlessly or sequestered as one of a number of forms of carbon we call fossil fuels. The burning of carbon-dense oil, coal, and gas stocks is widely known as the primary source of carbon dioxide. Figure 1—Boreal forest in the Nahanni, Canada. Photo by Harvey Locke. Mendeley saves you time finding and organizing research Choose a citation style from the tabs below
<urn:uuid:bcdf38d8-42ef-4639-8492-03df0cf767f1>
4.0625
651
Academic Writing
Science & Tech.
30.343371
95,543,928
The oldest sections of transform faults, such as the North Anatolian Fault Zone (NAFZ) and the San Andreas Fault, produce the largest earthquakes, putting important limits on the potential seismic hazard for less mature parts of fault zones, according to a new study to be presented today at the Seismological Society of America (SSA) 2014 Annual Meeting in Anchorage, Alaska. The finding suggests that maximum earthquake magnitude scales with the maturity of the fault. Identifying the likely maximum magnitude for the NAFZ is critical for seismic hazard assessments, particularly given its proximity to Istanbul. "It has been argued for decades that fault systems evolving over geological time may unify smaller fault segments, forming mature rupture zones with a potential for larger earthquake," said Marco Bohnhoff, professor of geophysics at the German Research Center for Geosciences in Potsdam, Germany, who sought to clarify the seismic hazard potential from the NAFZ. "With the outcome of this study it would in principal be possible to improve the seismic hazard estimates for any transform fault near a population center, once its maturity can be quantified," said Bohnhoff. Bohnhoff and colleagues investigated the maximum magnitude of historic earthquakes along the NAFZ, which poses significant seismic hazard to northwest Turkey and, specifically, Istanbul. Relying on the region's extensive literary sources that date back more than 2000 years, Bohnhoff and colleagues used catalogues of historical earthquakes in the region, analyzing the earthquake magnitude in relation to the fault-zone age and cumulative offset across the fault, including recent findings on fault-zone segmentation along the NAFZ. "What we know of the fault zone is that it originated approximately 12 million years ago in the east and migrated to the west," said Bohnhoff. "In the eastern portion of the fault zone, individual fault segments are longer and the offsets are larger." The largest earthquakes of approximately M 8.0 are exclusively observed along the older eastern section of the fault zone, says Bohnhoff. The younger western sections, in contrast, have historically produced earthquakes of magnitude no larger than 7.4. "While a 7.4 earthquake is significant, this study puts a limit on the current seismic hazard to northwest Turkey and its largest regional population and economical center Istanbul," said Bohnhoff. Bohnhoff compared the study of the NAFZ to the San Andreas and the Dead Sea Transform Fault systems. While the earlier is well studied instrumentally with few historic records, the latter has an extensive record of historical earthquakes but few available modern fault-zone investigations. Both of these major transform fault systems support the findings for the NAFZ that were derived based on a unique combination of long historical earthquake records and in-depth fault-zone studies. Bohnhoff will present his study, "Fault-Zone Maturity Defines Maximum Earthquake Magnitude," today at the SSA Annual Meeting. SSA is an international scientific society devoted to the advancement of seismology and the understanding of earthquakes for the benefit of society. Its 2014 Annual Meeting will be held Anchorage, Alaska on April 30 – May 2, 2014. Nan Broadbent | Eurek Alert! 
<urn:uuid:739ac939-dfec-4961-8102-a356065553f9>
3.109375
1,273
Content Listing
Science & Tech.
36.271521
95,543,937
Fundamentals of Remote Sensing Methodology

New components and methods in the fields of microwave technique, electro-optics, computer technology, statistics and electromagnetics open new and very interesting possibilities for detection and identification problems in the surveillance of environmental pollution and resources by electromagnetic waves. The current contribution introduces general remote sensing concepts and theories which conceivably may have an impact on several application areas (environmental surveillance, detection/identification of specific objects).

The basic principle is the following: most existing detection/identification systems do not make optimum use of all the a priori information that is generally available about the object of interest. Knowing the geometrical shape of the object of interest and its molecular surface structure (e.g., the structure of paint), an illumination function can be constructed (the matched-filter concept) which gives optimum system sensitivity (minimum receiver bandwidth) with respect to the object of interest at the expense of sensitivity to background objects (interferents). Theoretical results are given in the paper for a limited number of geometrical objects and for two different molecular surface compositions. It is shown that the system sensitivity and identification capability can be improved considerably using optimum methods based on fundamental principles from radio science and information theory.

Keywords: Remote Sensing; Electromagnetic Wave; Correlation Property; Antenna Element; Delay Function
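As an aside for readers less familiar with the matched-filter concept invoked above, the following minimal sketch (in Python, with an invented template, delay, and noise level; it is not code from the paper) shows the basic mechanism: correlating the received signal against the expected response of the object of interest concentrates sensitivity on that object and suppresses everything else.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template: the echo shape expected from the object of interest.
# Any other echo shape then acts as an interferent and is suppressed.
t = np.linspace(0.0, 1.0, 500)
template = np.exp(-((t - 0.5) ** 2) / 0.002) * np.cos(2 * np.pi * 40 * t)

# Received signal: the target echo buried in noise at an unknown delay.
true_delay = 120
received = np.zeros(2000)
received[true_delay:true_delay + template.size] += template
received += 0.5 * rng.standard_normal(received.size)

# Matched filtering = cross-correlating the received signal with the template.
output = np.correlate(received, template, mode="valid")
estimated_delay = int(np.argmax(output))

print("true delay:", true_delay, "estimated delay:", estimated_delay)
```

The same principle carries over to the spatial and spectral illumination functions discussed in the contribution: sensitivity is concentrated on the a priori known signature of the target.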
<urn:uuid:b521ace4-d2f1-4cb5-85e0-1eefa0b4f3be>
2.671875
533
Truncated
Science & Tech.
22.535413
95,543,951
Toward an age of genome design

Toyoda believes that in the future, our key source of resources will shift from oilfields to genomes. For this reason, he calls genomes—the complete genetic blueprints for organisms—the “second oilfields”. In the 1990s, Toyoda worked for a private research institute to develop anti-malarial drugs. “In drug development, the key is how to design compound shapes so that the compound can combine perfectly with disease-related proteins and control their functions. The structures of the proteins, however, are so complex that trial-and-error-based design approaches often end in failure. ‘Rational’ design is therefore needed, which involves creating logical programs and designing drugs on the basis of the computed three-dimensional structure of the relevant proteins.” However, even if a compound is designed perfectly, it may still not be creatable using the techniques currently available in organic chemistry. This is one of the technological barriers Toyoda has encountered.

In the 2000s, scientists embarked on the sequencing of genomes from a range of organisms, including humans. “The genome carries genetic information in the arrangement of four bases in the gene region, and proteins are produced according to this arrangement. If the arrangement can be ‘designed’, we would be able to design organisms with new functions more reliably. The information resources necessary for such an exercise are now becoming available,” says Toyoda.

The concept of rational design has also recently begun to draw attention in the field of medicine, where medical scientists are working toward a personalized medicine approach in which drugs and treatments are designed according to the genetics of the patient. Database-supported rational approaches to design are therefore finding applications in various fields and beginning to displace the previous ‘blind’ approaches.

Believing that the age of rational design would soon come to genomics, Toyoda initiated the Genomic Knowledge Base Research Team in the RIKEN Genomic Sciences Center (GSC) in 2001. Around that time, the GSC was working on ambitious genomics projects under the leadership of Akiyoshi Wada, the first director of the GSC. The projects were generating vast amounts of data for the purpose of establishing complete databases for certain organisms in order to construct a comprehensive picture of each organism. Toyoda saw these databases as a place in which the evolution of new life could occur.

Databases as a realm for the evolution of life

Organisms evolve through the repeated process of replication of genetic information and natural selection. Genetic information is naturally recorded in the structure of DNA and RNA, but the same information can now be recorded in databases. “Recording media have expanded from the natural physical world to the information world. We are at the point where the medium of life evolution has changed significantly.” Genetic information recorded in databases has been replicated on a global scale through the internet, allowing useful gene information to be plucked from a database. On this basis, could it be possible to create new and useful biological resources by using selected gene information to rationally design a genome that could then be transferred to an organism using DNA-synthesis technology? “A database can be regarded as a place for the replication and selection of genetic information—or a place where the evolution of life occurs,” says Toyoda.
Toyoda’s ideas are not just dreams—life is actually beginning to evolve with the help of databases. “We selected the genetic information of groups of enzymes that produce γ-PGA, the sticky component of fermented soybeans, from a database, and rationally designed the genetic information, which was then transferred to the genome of a plant, thus successfully creating a new plant that is drought tolerant.”

Induced pluripotent stem (iPS) cells, which are now expected to be applied in regenerative medicine, are also created using the same concept. The original creation of iPS cells was based on the complete record for a particular cDNA, a DNA created from a template mRNA into which a gene region of DNA is transcribed, stored in a RIKEN database. Shinya Yamanaka and his laboratory staff at Kyoto University used the cDNA record to select special genes expressed in embryonic stem (ES) cells and transferred those special genes into grown human skin cells. This process resulted in iPS cells, which, like ES cells, are able to differentiate into any cell type coded for in the organism’s genome.

“Designing a database is equivalent to designing a place suitable for the evolution of life. To create new biological resources that can support the Earth and society from information resources, we need databases, and we will find it very interesting if a database is designed from the perspective that it is a place for the evolution of life.”

Establishing a data-sharing infrastructure—a global issue

The BASE was inaugurated in April 2008 following a reorganization of the GSC based on advice that came out of the 2006 RIKEN Advisory Council (RAC) meeting. The RAC is an external advisory body consisting of world-leading scientists and eminent individuals from outside of RIKEN. The RAC evaluates the overall activities of RIKEN and delivers its recommendations to the RIKEN president. “In 2006, the RAC pointed out that although high-quality data were provided in each of the 100 or so data-release web sites operated by RIKEN, the data were not presented in an effective way. Everybody was really surprised because they believed their data was disclosed properly.”

Most data-release web sites operated by RIKEN were designed for people who wanted to view data directly; no connections could be made with the database to allow automated data analysis. In that regard, the databases were not being used effectively because there was no systematic mechanism provided to standardize and share the various data sets. They were also insufficient from the perspective of displaying study results. These were the problems that the RAC identified and asked database experts to address, but they were also problems that were common among databases around the world. “The RAC asked RIKEN to solve a problem that had not been solved before, and I happened to be selected as the director responsible for solving the database-related problems.”

At that time, most data-release web sites failed to keep pace with fast-changing web standards. As the number of disorganized web sites increased, information management was quickly spiraling out of control. Database maintenance costs were also becoming a heavy burden. “We needed to integrate our databases, but that was the most difficult issue,” says Toyoda. “I have seen many failures with respect to integration approaches. There are already hundreds of databases, and it is impossible to standardize all of them.
So I adopted a new concept and started to develop an integration database consisting of a versatile database container that is compatible with international standards. This container automatically enables the standardization, collection and disclosure of data, and also facilitates data sharing when data is moved into it.”

SciNeS captures the world’s attention

Toyoda started by developing a ‘total incubation infrastructure system’ for life science-related databases called the Scientists’ Networking System, or SciNeS (Fig. 3). “The greatest advantage of SciNeS is the adoption of the semantic web, a next-generation international web standard, and cloud computing.” The semantic web is an extension of the widely used world wide web (WWW). The WWW is suitable for use by people, who read, understand and search for information by following hyperlinks to documents. However, automated computer-based data analysis is ineffective using hyperlinks because there are no relationships defined among documents. In the semantic web, all data has meaning, and every link refers to the relationships between the data, enabling computers to search data effectively for automated data analysis. Cloud computing is a complementary technology that provides a new way of using computer applications through web browsers. Researchers do not need to maintain their own servers; they instead prepare a virtual laboratory in SciNeS and enter their data, which are then processed automatically and disclosed as a database that meets international standards.

“Papers are published through the medium of academic journals, but no dedicated medium has been established in the world of databases. SciNeS thus became the world’s first academic medium for databases,” says Toyoda. As soon as SciNeS was made operational in March 2009, it attracted attention from around the world. “The semantic web has been known for many years, but building large-scale databases for the semantic web was said to be difficult. We succeeded in building such large-scale databases for the first time by adding a new function that enabled security management on a per-item basis.” Database sharing and the financing of maintenance costs are universal issues, but each research body conducts its own activities and deals with its own field-specific characteristic data, so there has been little organization until now. “The world’s attention is now focusing on SciNeS because it is a total incubation infrastructure system for databases that can be used by all fields, based on the semantic web and cloud computing.”

International Rational Genome Design Contest

The virtual laboratories in SciNeS can be used for many purposes: as a substitute for personal databases, a repository for electronic laboratory notes of unreported data, or for joint research or ‘medical clouds’—an electronic health chart network among medical specialists and clinicians. Such uses are supported by the per-item security function. Another use for SciNeS virtual laboratories is the International Rational Genome Design Contest, or GenoCon. “RoboCon is a well-known robot contest in which individually developed robots compete on the basis of excellence in certain skills. GenoCon is the life-science version, where researchers are expected to compete on the excellence of their rational skills in designing genome base sequences.” GenoCon has been running since the end of May 2010 and will continue through to the end of September 2010.
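Before turning to the contest assignment itself, the semantic-web idea described above can be made concrete with a small sketch: data are stored as subject-predicate-object triples that a program can traverse and query. The gene names, relations, and query helper below are invented placeholders for illustration; they are not SciNeS records or its actual interface.

```python
# Each fact is a (subject, predicate, object) triple; the names are placeholders.
triples = {
    ("geneA", "expressed_in", "embryonic_stem_cell"),
    ("geneA", "encodes", "factorX"),
    ("geneB", "expressed_in", "skin_cell"),
    ("factorX", "regulates", "geneB"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Which genes are expressed in embryonic stem cells?"
print(query(predicate="expressed_in", obj="embryonic_stem_cell"))

# Follow a chain of relations: what do the factors encoded by geneA regulate?
for _, _, factor in query(subject="geneA", predicate="encodes"):
    print(query(subject=factor, predicate="regulates"))
```

Because every relationship is named explicitly, a machine can follow chains of facts across databases without a human reading and interpreting each page, which is the point of building SciNeS on the semantic web.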
The assignment: to design a DNA sequence conferring to the model organism Arabidopsis thaliana the functionality to effectively eliminate and detoxify airborne formaldehyde, which causes sick building syndrome. Participants need to take advantage of genome and protein databases in SciNeS to find out which genes should be optimized to enhance functionality for eliminating and detoxifying airborne formaldehyde. They also need to program via a web browser in order to rationally design part of the genome. The best design results will be used by RIKEN and other research institutes and actually transferred into a plant for functional verification under proper statutory safety control standards.

The invitation to participate has been extended not only to researchers and university students in Japan and around the world, but also to high-school students. “I will be pleased if GenoCon could give high-school students with good programming skills the opportunity to become interested in life science and join the world of life sciences to become ‘genome designers’. Many useful genes have been patented, but all current patents will expire by 2030, and this will bring about a genome design boom. Genome designer will become a glamorous job in the near future.”

GenoCon will provide participants with the opportunity to enjoy the most advanced science and take on an open optimization challenge. Although genome designs for conferring the functionality to effectively eliminate and detoxify airborne formaldehyde to a plant have been published and some even patented, there may be better embodiments of the technology. The contest aims to search for better and more suitable embodiments with easier practical applications.

Toyoda is also a member of the RIKEN Biomass Engineering Program, which was initiated in April 2010. Through the program, Toyoda aims to improve the efficiency of producing bioplastic materials based on rational genome design methods for plants. Genome design methods and programs collected through GenoCon would also be used for that purpose. “We intend to establish infrastructure for synthetic biology,” says Toyoda. “Synthetic biology is a newly emerging field of science in which bioinformatics and biology are combined, and deals with the whole range of information and biological resources. We are now required to use the collaboration network of SciNeS to connect groups at RIKEN’s technical bases, and to establish a structure that enables the creation of useful biological resources as a social asset from information resources. To begin with, I want to create an easily grown plant that can yield environmentally friendly bioplastic materials.”

More information can be found at the SciNeS and GenoCon websites.

About the Researcher

Tetsuro Toyoda was born in Tokyo, Japan, in 1968. He graduated from the Faculty of Pharmaceutical Sciences at The University of Tokyo in 1992, and obtained his PhD in 1997 from the same university. He started as a researcher at the Institute of Medical Molecular Design in 1997, and joined RIKEN as team leader in the Genomic Sciences Center in 2001. He became director of the RIKEN Bioinformatics And Systems Engineering Division when it was established in 2008. His expertise is in bioinformatics and computer-aided rational design of biomolecules, including rational database-supported drug design based on protein structural information and rational genome design in synthetic biology for biomass engineering.
He promotes Japan’s database integration projects as a member of several national database committees.

gro-pr | Research asia research news
<urn:uuid:121313fe-2b2d-4c7a-a692-a7f8abe944d1>
3.25
3,527
Knowledge Article
Science & Tech.
26.062264
95,543,953
In the scientific naming of organisms, groups of organisms are classified based on their characteristics. Humans, for example, have the name Homo sapiens, of which Homo is the genus and sapiens the species. The species name is always written with a lowercase letter. It is the only taxonomic level that can be defined in a more or less concrete way: the species has been described as a group of individuals that among themselves produce fertile offspring. This is not a black-and-white definition, however. There are 'species' which produce individuals with impaired fertility or infertility, and numerous forms of hybridization are possible, so the definition of species has arbitrary boundaries. Within the level of species, further divisions are subspecies and varieties (described with var.). See our special page about Scientific naming for more information.
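As a small illustration of the capitalization convention described above, a helper like the following could format names consistently. This is only an illustrative sketch, not code from Fossiel.net, and the names in the second call are invented.

```python
def format_binomial(genus: str, species: str, variety: str = "") -> str:
    """Format a scientific name: genus capitalized, species epithet lowercase."""
    name = f"{genus.capitalize()} {species.lower()}"
    if variety:
        name += f" var. {variety.lower()}"
    return name

print(format_binomial("homo", "Sapiens"))             # -> Homo sapiens
print(format_binomial("genusx", "epithety", "Varz"))  # invented names, for illustration only
```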
<urn:uuid:98049f5c-05cd-4519-b08e-4848333144ce>
3.59375
230
Knowledge Article
Science & Tech.
39.65066
95,543,980
See attached file for more details.

1. A plane wave is incident upon a single-atom regular lattice. Calculate the scattered amplitude from all the atoms.
2. For the unit cell vectors given, find the values of alpha, beta and gamma where maximal diffraction will be observed.
3. Define the Brillouin zone and demonstrate that all wave vectors which start from the origin and satisfy the condition for maximum will terminate on the Brillouin zone boundary.
4. For a two-dimensional crystal with its cell vectors given, find the reciprocal lattice and draw it.

Hello and thank you for posting your question to BrainMass! The solution is attached below in two files. The files are identical in content and differ only in format: the first is in MS Word format and the other in Adobe PDF format, so you can choose whichever is most suitable for you.

The incoming plane wave is scattered by the atom at its location in the lattice. Due to symmetry we assume that the scattered amplitude from each lattice point is the same and proportional to A. When a plane wave hits the scattering point, it is reflected with a different wave vector k, so from the atom to point rD, ...

The 7-page file shows in great detail how to apply first principles to the problems and how to solve them.
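For problem 4, the construction of the reciprocal lattice is easy to check numerically. The sketch below is my own illustration with arbitrarily chosen cell vectors (the actual vectors are only given in the attachment); it builds b1 and b2 from a1 and a2 via the condition a_i · b_j = 2π δ_ij.

```python
import numpy as np

# Arbitrary example 2D cell vectors (the actual vectors are given in the attachment).
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])  # hexagonal-like cell, for illustration

# Want b_j such that a_i . b_j = 2*pi*delta_ij, i.e. A @ B.T = 2*pi*I,
# where the rows of A are a1, a2 and the rows of B are b1, b2.
A = np.vstack([a1, a2])
B = 2.0 * np.pi * np.linalg.inv(A).T
b1, b2 = B

print("b1 =", b1)
print("b2 =", b2)
print("a_i . b_j / (2*pi):\n", A @ B.T / (2.0 * np.pi))  # should print the identity matrix
```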
<urn:uuid:f14f248b-0373-4f2f-9388-65823cc9cc19>
3.53125
303
Q&A Forum
Science & Tech.
62.515327
95,544,036
Long-lived sub-structures exist in liquid water as discovered using novel ultrafast vibrational spectroscopies

A team of scientists from the Max Planck Institute for Polymer Research (MPI-P) in Mainz, Germany and FOM Institute AMOLF in the Netherlands have characterized the local structural dynamics of liquid water, i.e. how quickly water molecules change their binding state.

The lifetime of local water structures is probed using ultrafast laser pulses. Credit: © Yuki Nagata / MPI-P

Using innovative ultrafast vibrational spectroscopies, the researchers show why liquid water is so unique compared to other molecular liquids. This study has recently been published in the scientific journal Nature Communications. With the help of a novel combination of ultrafast laser experiments, the scientists found that local structures persist in water for longer than a picosecond, a picosecond (ps) being one thousandth of one billionth of a second (10⁻¹² s). This observation changes the general perception of water as a solvent.

"71% of the earth's surface is covered with water. As most chemical and biological reactions on earth occur in water or at the air-water interface in oceans or in clouds, the details of how water behaves at the molecular level are crucial. Our results show that water cannot be treated as a continuum, but that specific local structures exist and are likely very important," says Mischa Bonn, director at the MPI-P.

Water is a very special liquid with extremely fast dynamics. Water molecules wiggle and jiggle on sub-picosecond timescales, which makes them indistinguishable on this timescale. While the existence of very short-lived local structures - e.g. two water molecules that are very close to one another, or are very far apart from each other - is known to occur, it was commonly believed that they lose the memory of their local structure within less than 0.1 picoseconds.

The proof for relatively long-lived local structures in liquid water was obtained by measuring the vibrations of the oxygen-hydrogen (O-H) bonds in water. For this purpose the team of scientists used ultrafast infrared spectroscopy, particularly focusing on water molecules that are weakly (or strongly) hydrogen-bonded to their neighboring water molecules. The scientists found that the vibrations live much longer (up to about 1 ps) for water molecules with a large separation than for those that are very close (down to 0.2 ps). In other words, the weakly bound water molecules remain weakly bound for a remarkably long time.

Johannes Hunger | Max Planck Institute for Polymer Research
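The lifetimes quoted above are obtained from how quickly a vibrational signal decays with pump-probe delay. As an illustration only (the data below are synthetic, and the 1 ps time constant is chosen to echo the article's numbers rather than taken from the measurement), a single-exponential lifetime can be extracted with a least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic pump-probe trace: single-exponential decay with tau = 1.0 ps plus noise.
delays_ps = np.linspace(0.0, 4.0, 80)
signal = np.exp(-delays_ps / 1.0) + 0.03 * rng.standard_normal(delays_ps.size)

def decay(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

(amplitude, tau), _ = curve_fit(decay, delays_ps, signal, p0=(1.0, 0.5))
print(f"fitted vibrational lifetime: {tau:.2f} ps")
```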
<urn:uuid:ba6c9bc2-e650-4fac-8f7a-09e2ea536c93>
3.453125
1,123
Content Listing
Science & Tech.
36.924061
95,544,039
“We’ve known for a while that the protein coding genes of humans and chimpanzees are about 99 percent the same,” said senior author Michael Snyder, the Cullman Professor of Molecular, Cellular and Developmental Biology at Yale. “The challenge for biologists is accounting for what causes the substantial difference between the person and the chimp.”

Conventional wisdom has been that if the difference is not the gene content, the difference must be in the way regulation of genes produces their protein products. Comparing gene regulation across similar organisms has been difficult because the nucleotide sequences of DNA regulatory regions, or promoters, are more variable than the sequences of their corresponding protein-coding regions, making them harder to identify by standard computer comparisons.

“While many molecules that bind DNA regulatory regions have been identified as transcription factors mediating gene regulation, we have now shown that we can functionally map these interactions and identify the specific targeted promoters,” said Snyder. “We were startled to find that even the closely related species of yeast had extensively differing patterns of regulation.”

In this study, the authors found the DNA binding sites by aiming at their function, rather than their sequence. First, they isolated transcription factors that were specifically bound to DNA at their promoter sites. Then, they analyzed the sequences that were isolated to determine the similarities and differences in regulatory regions between the different species.

“By using a group of closely and more distantly related yeast whose sequences were well documented, we were able to see functional differences that had been invisible to researchers before,” said Snyder. “We expect that this approach will get us closer to understanding the balance between gene content and gene regulation in the question of human-chimp diversity.”

Janet Rettig Emanuel | EurekAlert!
<urn:uuid:4be6ab24-3143-4b3e-b3b4-0b41e5089de7>
3.53125
951
Content Listing
Science & Tech.
31.698841
95,544,040
This Kata is included in my book "The Coding Dojo Handbook".

- In the folder for each language are skeleton classes for the domain objects, to get you started on the kata.
- In the folder "Refactoring" is a refactoring version of this kata.
- In the folder "SampleVisualization" is some Python code that can create a graph to visualize a medicine clash.

Kata: Medicine Clash

As a Health Insurer,
I want to be able to search for patients who have a medicine clash,
So that I can alert their doctors and get their prescriptions changed.

Health Insurance companies don't always get such good press, but in this case, they actually do have your best interests at heart. Some medicines interact in unfortunate ways when they get into your body at the same time, and your doctor isn't always alert enough to spot the clash when writing your prescriptions. Sometimes, medicine interactions are only identified years after the medicines become widely used, and your doctor might not be completely up to date. Your Health Insurer certainly wants you to stay healthy, so discovering that a customer has a medicine clash and getting it corrected is good for business, and good for you!

For this Kata, you have a recently discovered medicine clash, and you want to look through a database of patient medicine and prescription records, to find out whether any patients need to be alerted to the problem. Create a "Patient" class, with a method "Clash" that takes as arguments a list of medicine names, and how many days before today to consider (defaults to the last 90 days). It should return a collection of days on which all the medicines were being taken during this time.

Doing this kata on cyber-dojo

As an alternative to downloading the code, click one of the links below to create a new cyber-dojo to work in, then press "start" to get going coding.

You can assume the data is in a database, which is accessed in the code via an object oriented domain model. The domain model is large and complex, but for this problem you can ignore all but the following entities and attributes:

Patient
+ medicines

Medicine
+ name
+ prescriptions

Prescription
+ dispense date
+ days supply

So each Patient has a list of Medicines. Medicines have a unique name. Each Medicine has a list of Prescriptions. Each Prescription has a dispense date and a number of days supply. You can assume:

- patients start taking the medicine on the dispense date.
- the "days supply" tells you how many days they continue to take the medicine after the dispense date.
- if they have two overlapping prescriptions for the same medicine, they stop taking the earlier one. Imagine they have mislaid the medicine they got from the first prescription when they start on the second prescription.
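One possible reading of the kata in Python is sketched below. The class and attribute names follow the entities listed above, and the 90-day default comes from the task description; the handling of overlapping prescriptions is deliberately simplified and flagged in a comment, so treat this as a starting point rather than the book's reference solution.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Prescription:
    dispense_date: date
    days_supply: int

    def days_taken(self):
        """Days on which medicine from this prescription is being taken."""
        return {self.dispense_date + timedelta(days=d) for d in range(self.days_supply)}

@dataclass
class Medicine:
    name: str
    prescriptions: list = field(default_factory=list)

    def days_taken(self):
        # Simplification: the union of all prescriptions. The rule that an
        # overlapping earlier prescription is abandoned when the next one
        # starts (and never resumed) is NOT modelled here.
        days = set()
        for prescription in self.prescriptions:
            days |= prescription.days_taken()
        return days

@dataclass
class Patient:
    medicines: list = field(default_factory=list)

    def clash(self, medicine_names, days_back=90, today=None):
        """Days in the window on which ALL of the named medicines were taken."""
        today = today or date.today()
        window = {today - timedelta(days=d) for d in range(days_back)}
        taken = [m.days_taken() for m in self.medicines if m.name in medicine_names]
        if len(taken) < len(set(medicine_names)):
            return set()  # the patient is not even taking all of the medicines
        return set.intersection(window, *taken)
```

Called as patient.clash(["aspirin", "codeine"]), for example, it returns the dates in the last 90 days on which both of those (example) medicines were being taken.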
<urn:uuid:031c47e3-4206-485f-a836-4ecb44949fd2>
2.625
593
Tutorial
Software Dev.
42.130385
95,544,052
Snowmageddon Redux

January 2016

Six years after the Snowmageddon of 2010 buried the Washington DC area in up to 30 inches of snow, another major winter storm unloaded record amounts of snow in the region, while flooding parts of the Atlantic seaboard with hurricane-level storm surge. Three climate signals are linked to the destructive power of the "Snowmageddon Redux" storm. Global warming increases ocean heat content, which increases the energy and moisture available to storms. It also increases the heat in the atmosphere, allowing the air to hold and dump more precipitation, and it causes sea level rise, which allows storm surge to ride on higher seas.

Snowmageddon in the record books, fueled by climate change

A massive snowstorm blanketed the East Coast with record-breaking amounts of snow on January 22 and 23, causing severe coastal flooding and 36 fatalities. As cities dug themselves out, scientists assessed how climate change would make these blizzard events more likely. "Snowmageddon" was fueled by Atlantic waters 5°F to 7°F (2.8°C to 3.8°C) above average for the time of year, much of which can be attributed directly to global warming. Extreme precipitation events like this one have become increasingly common in recent decades as storms become loaded with extra moisture due to above-average water and atmospheric temperatures. Coastal flooding during storms is also increased by climate change.

Global warming responsible for up to half of near-record ocean heat in the Mid-Atlantic

Mid-Atlantic sea surface temperatures at the time of the event were near-record high at 5°F to 7°F (2.8°C to 3.8°C) above average. NCAR scientist Kevin Trenberth attributes up to half of this extra heat to global warming. Extra heat in the ocean and atmosphere fuels storms with more energy and moisture. Global warming—working in tandem with large-scale patterns like El Niño and the warm phase of the Atlantic Multidecadal Oscillation—is linked to the warm ocean temperatures. This storm is being compared to 2010's "Snowmageddon," which was also strengthened by warmer waters in the Atlantic (1.5°F/ 3°C) and high sea surface temperature (SST) anomalies in the Pacific, which modified the general flow of the atmosphere and affected weather conditions in many parts of the world.

Warmer atmosphere that dumps more snow gives rise to more extreme snowstorms

Of the top 30 snowfall events in the Washington DC and Baltimore area since 1880, 11 have occurred in just the last 15 years. NOAA scientists, examining 120 years of data, found that there were twice as many extreme regional snowstorms in the US between 1961 and 2010, compared to 1900 to 1960. Extreme precipitation is linked to the global warming of the atmosphere, because a warmer atmosphere holds and dumps more precipitation. When storms pass through a warmer atmosphere containing more moisture, this results in extreme precipitation: torrential rainstorms when temperatures are above freezing, and blizzards when they are not. In addition, the unusually warm sea surface temperatures off the mid-Atlantic coast (see above) are helping to load this storm with extra moisture. In the Washington, DC metro area, the most extreme events can have snowfall totals that nearly double the annual average of 14.5 inches of snow. The current storm is forecast to dump upwards of 30 inches of snow in the capital city. In 2010, Snowmageddon dropped 20 to 30 inches of snow throughout the area with some locations recording more than 3 feet of snow.
These events and trends are consistent with the findings of many leading climate change reports and organizations. The US National Climate Assessment shows that there has already been an increase in extreme precipitation in the Northeast, with such precipitation rising by 71 percent between 1958 and 2012. Projections from the UN Intergovernmental Panel on Climate Change (IPCC) indicate that precipitation in the Northeast will continue to increase. The IPCC explains how there will be significant increases in either the frequency or intensity of heavy precipitation over the 21st century. Its models show that some of the largest changes are expected to occur over North America, particularly during the winter. The increase in atmospheric moisture content explains most of the projected increase.

Climate change contribution to sea level rise can top coastal defenses

There are three primary contributors to elevated sea levels that increase coastal flooding risk: astronomical tide, storm surge and sea level rise. Due to climate change, the global ocean has already risen eight to ten inches over the last century as warmer ocean waters expand and ice sheets and glaciers melt. Storms often lead to record flooding events when the astronomical tide—a cyclical pattern of sea level rise and fall based on gravitational forces—combines with long-term sea level rise to raise sea levels to their highest. It is when storm surge—the rise of water and waves generated by a storm—rides atop these higher seas that storms often do the most damage. In low-lying areas, a small increase in sea level translates into much greater inundation as storm surge travels much further inland.

In addition, even a small increase in surge can top coastal defenses, and disaster often strikes when thresholds are crossed. While climate change may be responsible for only part of any particular climate event, it may be largely responsible for most of the damage in that event, such as when flooding defenses are breached. New, more intense extremes can overwhelm and collapse existing human systems and structures. Human infrastructure and natural systems have developed to cope with a range of historical extremes, such as 100-year events, and they often collapse when events exceed this range. When storm surge rides on top of sea level rise and high tides, it can be responsible for a disproportionate amount of damage. Sea level rise and coastal storms have increased the risk of erosion, storm-surge damage, and flooding for many coastal communities, especially along the Atlantic seaboard.
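The threshold argument in the last two paragraphs can be made concrete with a toy calculation. Every number below is invented for illustration (none are measurements from this storm); the point is only that a modest sea level rise term can be the difference between a given defense holding and being overtopped.

```python
def water_level(tide_ft, surge_ft, sea_level_rise_ft):
    """Total still-water level relative to a historical datum, in feet."""
    return tide_ft + surge_ft + sea_level_rise_ft

DEFENSE_HEIGHT_FT = 9.0        # hypothetical seawall height
tide_ft, surge_ft = 4.5, 4.0   # hypothetical high tide and storm surge

for slr_ft in (0.0, 0.8):      # without vs. with roughly ten inches of sea level rise
    level = water_level(tide_ft, surge_ft, slr_ft)
    status = "overtopped" if level > DEFENSE_HEIGHT_FT else "holds"
    print(f"sea level rise {slr_ft:.1f} ft -> water level {level:.1f} ft, defense {status}")
```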
<urn:uuid:dec5ccd1-309f-434d-9c9e-a3471afe4fa1>
3.53125
1,217
Knowledge Article
Science & Tech.
37.436749
95,544,072
In a general sense, glaciology is a field of study which looks at any natural phenomena involving ice, but more specifically, this field considers glaciers. The study of glaciers is conducted in many different areas of study including geophysics, geology, physical geography, geomorphology, climatology, meteorology, hydrology, biology and ecology. The study of ice on the Moon, Mars and Europa falls under astroglaciology, another branch of this academic discipline.

A glacier is a body of dense ice with a surface area exceeding 0.1 km². Glaciers are constantly moving: a glacier forms when the accumulation of snow exceeds its melting over many years, and it then moves slowly under its own weight. Glaciers can slowly deform and flow due to stresses induced by their weight, which causes crevasses and seracs in the glacier. Glacial ice is the largest reservoir of freshwater on Earth.

Furthermore, there are two categories of glaciation. The first is alpine glaciation, where the accumulation of rivers of ice is confined to a valley. The second is continental glaciation, where there is unrestricted accumulation covering the northern continents.

Title Image: Wikimedia Commons
<urn:uuid:950103b7-4def-4940-ab0a-2ccff907f461>
3.5625
271
Knowledge Article
Science & Tech.
35.398179
95,544,074
A weather or sounding balloon is a balloon (specifically a type of high-altitude balloon) that carries instruments aloft to send back information on atmospheric pressure, temperature, humidity and wind speed by means of a small, expendable measuring device called a radiosonde. To obtain wind data, they can be tracked by radar, radio direction finding, or navigation systems (such as the satellite-based Global Positioning System, GPS). Balloons meant to stay at a constant altitude for long periods of time are known as transosondes. Weather balloons that do not carry an instrument pack are used to determine upper-level winds and the height of cloud layers. For such balloons, a theodolite or total station is used to track the balloon's azimuth and elevation, which are then converted to estimated wind speed and direction and/or cloud height, as applicable.

One of the first people to use weather balloons was Léon Teisserenc de Bort, the French meteorologist. Starting in 1896 he launched hundreds of weather balloons from his observatory in Trappes, France. These experiments led to his discovery of the tropopause and stratosphere. Transosondes, weather balloons with instrumentation meant to stay at a constant altitude for long periods of time to help diagnose radioactive debris from atomic fallout, were experimented with in 1958.

Materials and equipment

The balloon itself produces the lift, and is usually made of a highly flexible latex material, though chloroprene may also be used. The unit that performs the actual measurements and radio transmissions hangs at the lower end of the string, and is called a radiosonde. Specialized radiosondes are used for measuring particular parameters, such as determining the ozone concentration. The balloon is usually filled with hydrogen due to lower cost, though helium can also be used. The ascent rate can be controlled by the amount of gas with which the balloon is filled. Weather balloons may reach altitudes of 40 km (25 miles) or more, limited by diminishing pressures causing the balloon to expand to such a degree (typically by a 100:1 factor) that it disintegrates. In this instance the instrument package is usually lost. Above that altitude sounding rockets are used, and for even higher altitudes satellites are used.

Launch time, location, and uses

Weather balloons are launched around the world for observations used to diagnose current conditions as well as by human forecasters and computer models for weather forecasting. About 800 locations around the globe do routine releases, twice daily, usually at 0000 UTC and 1200 UTC. Some facilities will also do occasional supplementary "special" releases when meteorologists determine there is a need for additional data between the 12-hour routine launches, in which time much can change in the atmosphere. Military and civilian government meteorological agencies such as the National Weather Service in the US typically launch balloons, and by international agreements almost all the data are shared with all nations.

Specialized uses also exist, such as for aviation interests, pollution monitoring, photography or videography and research. Examples include pilot balloons (pibals). Field research programs often use mobile launchers from land vehicles as well as ships and aircraft (usually dropsondes in this case). In recent years weather balloons have also been used for scattering human ashes at high altitude by companies such as Stardust Ashes, founded by Chester Mojay-Sinclare.
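For the single-theodolite pilot balloon (pibal) technique mentioned above, winds are estimated by assuming a constant ascent rate and converting successive azimuth and elevation readings into horizontal positions. The sketch below is a simplified, flat-Earth illustration with invented readings and an assumed 5 m/s ascent rate; it is not an operational reduction procedure.

```python
import math

ASCENT_RATE_MS = 5.0  # assumed constant ascent rate, set by how much gas is used

def horizontal_position(time_s, azimuth_deg, elevation_deg):
    """Project one theodolite reading to (east, north) ground coordinates in metres."""
    height = ASCENT_RATE_MS * time_s
    horizontal_range = height / math.tan(math.radians(elevation_deg))
    az = math.radians(azimuth_deg)
    return horizontal_range * math.sin(az), horizontal_range * math.cos(az)

# Two invented readings taken 60 seconds apart.
x1, y1 = horizontal_position(60.0, azimuth_deg=45.0, elevation_deg=40.0)
x2, y2 = horizontal_position(120.0, azimuth_deg=50.0, elevation_deg=35.0)

dt = 60.0
u, v = (x2 - x1) / dt, (y2 - y1) / dt  # mean east/north wind components over the layer
speed = math.hypot(u, v)
direction_from = (math.degrees(math.atan2(u, v)) + 180.0) % 360.0  # meteorological convention
print(f"layer-mean wind: {speed:.1f} m/s from {direction_from:.0f} degrees")
```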
The weather balloon was also used to create the fictional entity 'Rover' during production of the 1960s TV series 'The Prisoner' in Portmeirion, Gwynedd, North Wales, UK in September 1966. This was retained in further scenes shot at MGM Borehamwood UK during 1966-67.
<urn:uuid:9edd1bfb-0b77-46ce-a498-4c1ca2ff029e>
3.984375
1,104
Knowledge Article
Science & Tech.
35.588674
95,544,106
Winter Snow Cover in the Northern Hemisphere

This page contains archived content and is no longer being updated. At the time of publication, it represented the best available science. However, more recent observations and studies may have rendered some content obsolete.

The Moderate Resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra and Aqua satellites, measures snow cover over the entire globe every day, cloud cover permitting. At spatial resolutions of up to 500 meters per pixel, MODIS allows scientists to distinguish between snow and clouds, both of which appear bright white when seen from above at true-color wavelengths.

Snow cover is a critical water resource in many areas. More than 75 percent of the water for human consumption and irrigation in the western United States comes from snowmelt runoff. Scientists use satellite data on the extent of snow cover, along with other information, to estimate how much water is contained within a given snowpack. This information is essential for managing reservoirs more efficiently as well as predicting where and when floods are likely to occur. Snowmelt runoff from the Sierra Nevada Mountains constitutes a large component of the water supply for California.

The image above shows snow cover (white pixels) across North America from February 2-9, 2002. Click the links above to access time series animations showing changes in snow cover during the winter of 2001-02. As you play the movies, notice how snow cover begins to build up in the early fall and melts by late spring in the Northern Hemisphere.

Animations by Cindy Starr, NASA GSFC Science Visualizations Studio, based upon images and data courtesy MODIS Snow and Ice Science Team
<urn:uuid:c8102356-33a3-4ef5-8fc7-9d7b8056a62d>
3.921875
350
Knowledge Article
Science & Tech.
33.204667
95,544,138
Two Studies Reveal Amazing Resilience of Older Forests

Maybe you can't teach an old dog new tricks, but two recent studies revealed that old forests around the world are full of surprises. In Europe, scientists working to complete the first ever map of the continent's old-growth forests discovered there were more of them than previously believed. And in South America, a study of trees in the Amazon rainforest found that taller, older groups of trees are more resilient to drought.

The map of Europe's last wild forests was published in Diversity and Distributions May 24 and located more than 3.4 million acres within 34 countries. "What we've shown in this study is that, even though the total area of forest is not large in Europe, there are considerably more of these virgin or primary forests left than previously thought—and they are widely distributed throughout many parts of Europe," University of Vermont (UVM) forest ecologist and study co-author Bill Keeton said in a UVM press release.

Even though there are more of them than expected, the old-growth forests are still rare, and often small and isolated. But they are extremely rich in biodiversity. "Although such forests only correspond to a tiny fraction of the total forest area in Europe, they are absolutely outstanding in terms of their ecological and conservation value," senior study author and director of the Conservation Biogeography Lab at Humboldt University in Berlin Tobias Kuemmerle said in the release. He added that these forests are often the only habitat left for certain endangered species.

The map further found that 89 percent of the primary forests were in protected areas, but that protections were only strict on 46 percent of that land, meaning that some of these forests are at risk from human activities. "Wide patches of primary forest are being currently logged in many mountain areas, for instance in Romania and Slovakia and in some Balkan countries," study co-author and University of Life Sciences in Prague researcher Miroslav Svoboda said. "A soaring demand for bioenergy coupled with high rates of illegal logging are leading to the destruction of this irreplaceable natural heritage, often without even understanding that the forest being cut is primary."

However, researchers hope the new map will help protect Europe's old-growth forests, since they have used it to assess where land use is low and therefore predict where other primary forests might be discovered. "We may find areas that are good to include in an expanded World Heritage Network or given other conservation status," Keeton said.

In addition to promoting biodiversity, forests also absorb carbon dioxide from the atmosphere and are therefore important to mitigating climate change. Tropical rainforests in particular are the world's largest carbon sink on land. This is what concerned Pierre Gentine at the Columbia University School of Engineering and Applied Science. Gentine led a team of researchers in an attempt to discover how climate change would impact the ability of the Amazon rainforest to absorb carbon dioxide. The results, published May 28 in Nature, found that the ability of forests taller than 30 meters (approximately 98.4 feet) to photosynthesize was three times less impacted by drought than that of forests less than 20 meters tall (approximately 65.6 feet).
The taller forests were also more drought resistant because they were older, with greater biomass and deeper roots that could suck more moisture from the soil. "Our findings suggest that forest height and age are an important regulator of photosynthesis in response to droughts," Gentine said in a Columbia University press release published by Phys.org. However, while the older forests resisted drought, they were more sensitive to dry air and heat. The study gives yet another reason to halt deforestation in the Amazon, the release pointed out, since cutting trees risks eliminating irreplaceable older trees that would be more resilient to future droughts, which are projected to increase with climate change.
Original article: EcoWatch
<urn:uuid:0348eafe-9a82-4444-b5fe-f2146a825640>
3.265625
931
Truncated
Science & Tech.
21.833047
95,544,140
Green Chemistry and Clean Technology
001 Introduction to Clean Technology and Catalysis, James H. Clark
Traditional chemical manufacturing is resource demanding and wasteful, and often involves the use of hazardous substances. Resources are used throughout production, including the treatment of waste streams and emissions (Figure 1.1). Green chemistry focuses on resource efficiency and on the design of chemical products and processes that are more environmentally benign. If green chemistry is used in a process, the process should become simpler, the inputs and outputs should be safer and more sustainable, energy consumption should be reduced, and costs should fall as yields increase, separations become simpler and less waste is generated. Green chemistry moves the trend toward new, clean technologies such as flow reactors and microwave reactors, as well as clean synthesis. For instance, a lower temperature, a shorter reaction time, the choice of an alternative route, an increased yield, or fewer washings at workup improve the "cleanness" of a reaction by saving energy and process time and reducing waste. At present, there is more emphasis on the use of renewable feedstocks and on the design of safer products, including an increasing trend toward recovering resources or "closed-loop manufacturing." Green chemistry research and application now encompass the use of biomass as a source of organic carbon and the design of new, greener products, for example, to replace existing products that are unacceptable in the light of new legislation (e.g., REACH) or consumer perception. Green chemistry can be seen as a tool by which sustainable development can be achieved: the application of green chemistry is relevant to social, environmental, and economic aspects. To achieve sustainable development will require action by the international community, national governments, commercial and noncommercial organizations, and individual action by citizens from a wide variety of disciplines. Acknowledgment of sustainable development has been taken forward into policy by many governments, including most world powers, notably in Europe, China, and the United States.
Source: Heterogeneous Catalysts for Clean Technology: Spectroscopy, Design, and Monitoring, First Edition. Edited by Karen Wilson and Adam F. Lee. © 2014 Wiley-VCH Verlag GmbH & Co. KGaA. Published 2014 by Wiley-VCH Verlag GmbH & Co. KGaA.
<urn:uuid:ca30bfdb-aeb3-4978-8bce-9c3d0c98f4d0>
2.625
654
Content Listing
Science & Tech.
24.185498
95,544,175
The acronym ΛCDM (Lambda-cold dark matter) is shorthand for our current best cosmological model describing the early beginnings, evolution until now, and future development of our entire Universe. It posits a cosmos dominated by a cosmological constant (denoted by Λ, the Greek letter capital lambda), our best guess for the phenomenon of dark energy, and a type of slow-moving, non-interactive matter called cold dark matter that outweighs the ordinary matter making up stars and planets—and us—by more than five to one. ΛCDM does well enough explaining the majority of our astrophysical observations that it is the standard paradigm for most people working in the field.

Some things just go together. Hot dogs and mustard, smart phones and selfies, school and summer vacation. But science is a year-round proposition, and several undergrads didn't seem to mind forgoing their summer vacations to pursue a variety of research opportunities with members of KIPAC. (Protip: it's never too soon to start thinking about next summer!)

By modeling the warped images of a gravitational lens observed with one of the most powerful telescopes in the world, KIPAC scientists have made the dramatic discovery that there is a clump of dark matter with no currently visible normal-matter counterpart in a far-away galaxy. Such unaccompanied clumps are incredibly difficult to detect and only a small handful of them have ever been discovered, but a concerted effort to find them and determine how and why they form could pay off significantly in the long term by giving us new insights as to the nature of dark matter.

The first direct detection in 2015 of a gravitational wave event (GW) by the recently upgraded Laser Interferometer Gravitational-Wave Observatory, known as Advanced LIGO, ushered in with a mighty bang a completely new era in astronomy. The first science run with the Advanced LIGO detector started in September 2015, and two high-significance events (GW150914 and GW151226) and one sub-threshold event (LVT151012) were reported. These three events were compatible with signals expected from the mergers of two black holes.

The Crab Nebula, our old friend, has continued giving us big surprises in the past few years, as we recently saw in this KIPAC blog post (from April 2015, by Jeff Scargle and Roger Blandford). We have been gaining glimpses into these surprises thanks to the excellent performance of the orbiting gamma-ray telescopes, Fermi and AGILE, which have been able to peer into the hidden secrets kept mum for so long in other wavelengths by this old stalwart.

Jul 10, 2016 | Ripples in spacetime - from 1.3 billion light years away
By now most of you who are “astro-enthusiasts” have already heard the news originally announced in February 2016 of the gravitational wave event observed by Advanced LIGO in September 2015, and perhaps also heard a bit about how excited astrophysicists were about it. As for why we were all so enthused, maybe the simplest explanation is this: we have grown a new sense and have for the first time heard the ripples in spacetime emanating from two colliding black holes, spreading out throughout the Universe, and gently jiggling the Earth as they pass us by, in a way humans have never been able to before.

Jul 2, 2016 | Swift GRBs: a 3D step toward standard candles
Gamma-ray bursts (GRBs) are some of the most energetic events known in astrophysics.
In just a few seconds, a typical burst can release as much energy as our sun will emit over its entire 10 billion-year lifetime, so it is not surprising that GRBs have been detected billions of light years away. If the intrinsic brightness of GRBs were known, a comparison with their detected brightness would yield their effective distance, and given their observed recession velocity or redshift, GRBs could then be used as accurate distance estimators for cosmology. This would enable researchers to arrive at solid estimates for the distances of all manner of extremely faint, old objects, such as very early galaxies.

Last fall, KIPAC professor Bruce Macintosh managed to make time in his busy schedule of teaching and sleuthing for extrasolar planets orbiting around distant stars to help put together a progress report for a mid-decadal review of what is arguably the most important exercise in his entire field: the Astronomy and Astrophysics Decadal Survey. For the past 60 years, once each decade the astronomy and astrophysics community in the US takes a good, long look in the mirror. During this comprehensive self-assessment, scientists from across the country and around the world come together to hash out issues of scientific priorities and resource allocations, enabling the field as a whole to face the future together. "This is a good thing," Macintosh says—the democratic process results in a community that is more supportive of the resulting priorities.

The Dark Energy Camera (DECam) is a 570-megapixel camera installed on the 4-meter Victor Blanco telescope atop Cerro Tololo, a mountain in the Chilean Andes. The science mission for the Dark Energy Survey, of which I'm a member, is nothing less than to use this camera to understand what Dark Energy is. Which is a tall challenge, since the phrase "Dark Energy" itself is, as some cosmologists say, simply words we use to describe our profound ignorance about the current-day accelerating expansion of the universe.

By Lori Ann White
In the series, "Where are they now?" we check in with KIPAC alumni: where they are now, how they've fared since their days exploring particle astrophysics and cosmology at the Institute, and how their KIPAC experiences have shaped their journeys.

Apr 18, 2016 | First Baby Photos of an Infant Planet
For the first time we've managed to take a baby picture of a planet still in the process of growing. Our team was able to image this so-called "proto"-planet with the Magellan telescope in Chile, taking advantage of the high-speed adaptive optics of the telescope to correct for blurring by the Earth's atmosphere. This allowed us to take a super high-resolution image of the system and, after subtracting the light from the central star, isolate light coming directly from the protoplanet. More specifically, we isolated light emitted by ultra-hot hydrogen gas falling onto the protoplanet, which is named (systematically, if not super-creatively) LkCa 15 b after its star, LkCa 15 A.

Apr 13, 2016 | Where are they now? -- Yvonne Edmonds
In the series, "Where are they now?" we check in with KIPAC alumni: where they are now, how they've fared since their days exploring particle astrophysics and cosmology at the Institute, and how their KIPAC experiences have shaped their journeys. Next up is Yvonne Edmonds, who spent her time at KIPAC searching for signs of dark matter in the data gathered by the Fermi Gamma-ray Space Telescope (FGST).
<urn:uuid:14591091-892c-4575-afee-1152b7125dc8>
3.1875
1,493
Content Listing
Science & Tech.
31.683662
95,544,203
Modeling the benthic loading of particulate wastes from a gilthead seabream (Sparus aurata) farm during a complete rearing cycle
Document type: Article
Publication date: 2016-09
Access conditions: Open access
The relationship between aquaculture activities and water quality plays an important role in determining the volume and quality of production as well as the impact of aquaculture on the aquatic environment. The natural assimilation capacity of a waterbody is often insufficient to cope with the continuous input of wastes from fish farms. Their impact depends not only on the waste itself, but also on other factors such as the local hydrodynamics at the farm site or the cultured species' feeding pattern. We combine a hydrodynamic model with a transport code to simulate the deposition of fecal pellets and uneaten feed from a gilthead seabream farm in the western Mediterranean Sea. The simulation spans a full 17-month rearing cycle and considers the natural variations in the feed demand of the fish during this period. The results indicated that the maximum benthic concentrations of organic matter and nutrients were found within a 100–500 m distance from the cages, depending on the type of particulate waste and on the local current characteristics. The combination of a long simulation period and realistic hydrodynamic patterns highlights the importance of considering long-term current variability when assessing the environmental impact of fish farms, and stresses the limitations of current simplistic assessment practices.
Citation: Mestres, M., Chaperón, W., Sierra, J.P. Modeling the benthic loading of particulate wastes from a gilthead seabream (Sparus aurata) farm during a complete rearing cycle. "Ciencias marinas", September 2016, vol. 42, no. 3, p. 179-194.
Publisher's version: http://cienciasmarinas.com.mx/index.php/cmarinas/article/view/2594
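The deposition modelling summarized in this abstract can be illustrated with a minimal Lagrangian particle sketch. This is not the authors' coupled hydrodynamic-transport code; the current field, settling speeds, release depth and seabed depth below are hypothetical placeholders chosen only to show how particle type and local currents shift the deposition footprint.

```python
import numpy as np

def deposit_particles(n, release_depth, seabed_depth, w_settle, current, dt=60.0, seed=0):
    """Advect sinking particles with a prescribed horizontal current until they
    reach the seabed; return their deposition coordinates (m, cage at the origin)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 10.0, n)            # horizontal release scatter around the cage
    y = rng.normal(0.0, 10.0, n)
    z = np.full(n, float(release_depth))     # depth below the surface (m), positive down
    active = np.ones(n, dtype=bool)
    t = 0.0
    while active.any():
        u, v = current(t)                    # horizontal current (m/s) at time t
        x[active] += u * dt
        y[active] += v * dt
        z[active] += w_settle * dt
        active &= z < seabed_depth           # freeze particles once they reach the bed
        t += dt
    return x, y

# Hypothetical rotary (tidal-like) current; uneaten feed pellets sink much faster
# than faecal pellets, so they land closer to the cage.
tide = lambda t: (0.05 * np.cos(2 * np.pi * t / 44700.0),
                  0.05 * np.sin(2 * np.pi * t / 44700.0))
for label, ws in (("feed", 0.10), ("faeces", 0.01)):
    x, y = deposit_particles(2000, release_depth=5.0, seabed_depth=30.0,
                             w_settle=ws, current=tide)
    print(f"{label}: median deposition distance {np.median(np.hypot(x, y)):.0f} m")
```

Swapping the analytic current for output from a hydrodynamic model, and drawing settling speeds and release rates from feeding data over a full rearing cycle, is essentially what separates a toy like this from the long-term simulation the paper describes.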
<urn:uuid:92c2c842-4969-41db-a511-c18d71a52fc3>
2.5625
438
Academic Writing
Science & Tech.
32.030684
95,544,204
Light can be used to modify and control properties of media, as in the case of electromagnetically induced transparency or, more recently, for the generation of slow light or bright coherent extreme ultraviolet and X-ray radiation. Particularly unusual states of matter can be created by light fields with strengths comparable to the Coulomb field that binds valence electrons in atoms, leading to nearly free electrons oscillating in the laser field and yet still loosely bound to the core1,2. These are known as Kramers–Henneberger states3, a specific example of laser-dressed states2. Here, we demonstrate that these states arise not only in isolated atoms4,5, but also in rare gases, at and above atmospheric pressure, where they can act as a gain medium during laser filamentation. Using shaped laser pulses, gain in these states is achieved within just a few cycles of the guided field. The corresponding lasing emission is a signature of population inversion in these states and of their stability against ionization. Our work demonstrates that these unusual states of neutral atoms can be exploited to create a general ultrafast gain mechanism during laser filamentation. It is often assumed that photo-ionization happens faster in more intense fields. Yet, since the late 1980s, theorists have speculated that atomic states become more stable when the strength of the laser field substantially exceeds the Coulomb attraction to the ionic core1,6,7,8,9,10,11,12,13,14. The electron becomes nearly but not completely free: rapidly oscillating in the laser field, it still feels residual attraction to the core, which keeps it bound. The effective binding potential, averaged over the electron oscillations, is sketched in Fig. 1a. It has a characteristic double-well structure; the wells occur when the oscillating electron turns around near the core. The laser-modified potential also modifies the spectrum, with laser-induced shifts adding to the familiar ponderomotive shift associated with nearly free electron oscillations. We refer to these states as ‘strongly driven laser-dressed states’. In spite of many theoretical predictions, it took three decades before their existence was inferred in experiments2,4,5, showing neutral atoms surviving laser intensities as high as I ≈ 10¹⁵–10¹⁶ W cm⁻². But are such unusual states really exotic? Can they also form in gases at ambient conditions, at intensities well below 10¹⁵–10¹⁶ W cm⁻²? After all, for excited electronic states bound by a few electronvolts, the laser field overpowers the Coulomb attraction to the core at I ≈ 10¹³–10¹⁴ W cm⁻². If so, would these states manifest inside laser filaments, the self-guiding light structures created by the nonlinear medium response at I ≈ 10¹⁴ W cm⁻² (ref. 15)? The formation of the Kramers–Henneberger (KH) states should modify both the real16 and imaginary17 parts of the medium's refractive index. While their response is almost free-electron-like, they do form discrete states and lead to new resonances. Crucially, at sufficiently high intensities, theory predicts the emergence of population inversion in these states, relative to the lowest excited states5,18. If the inversion is created inside a laser filament18, it would lead to amplification of the filament spectrum at the transition frequencies between the stabilized states. We first confirm these expectations by directly solving the time-dependent Schrödinger equation (TDSE). 
We then observe these states experimentally via the emergence of absorption and stimulated emission peaks at transition wavelengths not present in the field-free atom or ion. Notably the gain takes place in neutral atoms, and we are able to achieve gain only by using shaped laser pulses, tailored to a few-cycle resolution. We also confirm theoretically that for our experimental conditions such resonances do not appear in standard filamentation models. At present, lasing during laser filamentation in atmospheric gases is an active research field15,19,20,21,22,23,24,25,26,27,28. Recent work was also performed in low-pressure argon and krypton29,30, while stimulated X-ray emission has been observed from rare gas plasmas31. Our mechanism is novel and general for any gas: it relies on laser-dressed states in neutral atoms and uses pulse shaping to control their population and seed gain. First, we solve the TDSE for an argon atom interacting with an intense, 800 nm laser field (see Methods). We use a shaped infrared (IR) pulse with a sharp (~5 fs) front, which optimizes the population of the ‘nearly free’ laser-dressed states. Indeed, in IR fields their ionization rate maximizes at I ≈ 10¹³ W cm⁻² (known as ‘death valley’), before decreasing again at higher intensities1,8,11,12,13. Thus, the ‘death valley’ should be crossed quickly. The sharp front is followed by a flat top, so that the laser-dressed states are better defined. Next, we compute the linear response of the dressed atom in the visible frequency range to identify gain lines. To this end, the dressed atom is probed by a weak broadband (~5 fs) probe, carried at wavelength λ = 600 nm and centred in the middle of the pump pulse (t = 0). The time-dependent response to the probe, ∆d(t), is extracted from the full polarization d(t) = ⟨Ψ(t)|d|Ψ(t)⟩ as described in ref. 17: ∆d(t) = d(t)−dIR(t). Here dIR(t) = ⟨ΨIR(t)|d|ΨIR(t)⟩, where d is the dipole operator in the acceleration form, and Ψ(t) and ΨIR(t) are the time-dependent wavefunctions computed with both fields or the strong IR pump only, respectively. The key quantity is Im[∆D(ω)], the imaginary part of the Fourier transform of ∆d(t): a negative imaginary part signifies gain, whereas a positive imaginary part signifies loss. Figure 1b,c shows a window Fourier transform of ∆d(t), using the sliding Gabor window, G2(t,t0) = exp[−(t−t0)²/T²] (where T = 500 atomic units (a.u.)), which allows us to time-resolve the emission. Below I = 10¹⁴ W cm⁻², the time-dependent gain is offset by the loss, but the situation changes radically above this intensity: at I = 1.4 × 10¹⁴ W cm⁻² gain dominates and amplification lines arise around 550–570 nm and 630–650 nm (Fig. 1c,d). The lines are asymmetric, more Fano-like than Lorentzian (Fig. 1d), as expected in the presence of a strong driving field32. Importantly, the gain has a threshold nature and occurs intra-pulse. Thus, theory predicts the emergence of gain at intensities I ≈ 10¹⁴ W cm⁻², which will manifest in the forward spectrum from only shaped (that is, sharp rise time) laser pulses. Experimentally, we look for three signatures. First, new, atypical absorption and emission structures with asymmetric Fano-like shapes should appear between 400 nm and 700 nm. Second, the population inversion should arise intra-pulse and depend on the pulse shape (rise time and duration). Third, the emission should have lasing characteristics and occur at transitions absent in the field-free atom or ion. To test these predictions we employ a pulse-shaping set-up33 with a resolution down to two cycles. 
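The gain criterion just described, the sign of Im[∆D(ω)], maps directly onto a few lines of analysis code. The sketch below assumes the dipole responses d(t) and dIR(t) have already been produced by a TDSE solver; synthetic arrays stand in for them here, and the time step and signal shapes are illustrative only, not values from the paper.

```python
import numpy as np

def gain_spectrum(d_both, d_ir_only, dt):
    """Im[Delta D(omega)] from the dipole with pump+probe and with pump alone.
    Negative values mark gain at that frequency, positive values mark loss."""
    delta_d = d_both - d_ir_only                    # part of the response due to the probe
    omega = 2 * np.pi * np.fft.rfftfreq(delta_d.size, d=dt)   # angular frequency grid
    return omega, np.fft.rfft(delta_d).imag

def gabor_column(signal, times, t0, T):
    """One time slice of the sliding-window (Gabor) transform, window exp[-(t-t0)^2/T^2]."""
    return np.fft.rfft(signal * np.exp(-((times - t0) / T) ** 2))

# Synthetic stand-in for TDSE output: a pump-only response plus a weak, decaying
# probe-induced oscillation near 0.076 a.u.
dt = 0.25                                           # time step in a.u. (assumed)
t = np.arange(40000) * dt
d_ir = 1e-3 * np.sin(0.0569 * t)
d_tot = d_ir + 1e-4 * np.sin(0.076 * t) * np.exp(-t / 2000.0)
omega, im_dD = gain_spectrum(d_tot, d_ir, dt)
print("largest |Im[Delta D]| near omega =", omega[np.argmax(np.abs(im_dD))], "a.u.")
spectrogram_slice = gabor_column(d_tot - d_ir, t, t0=t[-1] / 2, T=500.0)
```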
We use a self-phase-modulated broadened and compressed chirped pulse amplified (CPA) Ti:Sapphire laser in combination with a 640-pixel spatial light modulator (SLM), providing 50 μJ pulses centred at 800 nm34 (Supplementary Fig. 1b, Methods). The pulses are focused into a chamber by a 300 mm off-axis spherical mirror, leading to a short filament (4 mm, see Supplementary Fig. 1a and Methods) in Ar or Kr (2–9 bar). The pulse is shaped such that it acquires the required sharp rise at the beginning of the filament, maximizing the population of the stabilized, strongly driven laser-dressed states. The pre-compensation of the desired pulse shape is achieved by acoustic shock wave optimization at the focus (see Methods). Pulse fronts of ~5 fs are generated, as measured using a spectral phase interferometry for direct electric field reconstruction (SPIDER). Figures 2, 3 show the experimental results. The strongly driven laser-dressed states are best accessed using pulses with a sharp rise time. Thus we can compare the forward emission from pulses with the same spectra, but different temporal shapes. The red line in Fig. 2a shows the supercontinuum generated inside the filament, for a smooth, 40 fs, broad Gaussian laser pulse. This standard pulse yields a typical supercontinuum spectrum in the forward direction, with no resonant lines attributable to atoms or ions. In contrast, when the pulse rise is fast (that is, a 7 fs pulse), we observe dramatically different spectra with distinct asymmetric (Fano-like) amplification lines at 530 nm, 550 nm, 570 nm and 625 nm (Fig. 2a), as predicted by the theory. The Gaussian pulse has the seed radiation for gain or loss, but the slow rise time cannot populate the laser-driven states efficiently. Pulse-shaping control of gain is demonstrated when comparing an asymmetric triangular-like pulse (5 fs rise, 20 fs decay) against the reverse shape (20 fs rise, 5 fs decay). They have identical spectra but opposite spectral phase. The pulse with the fast rise generates strong gain lines, while the pulse with the slow rise leads to absorption at the same wavelengths. The gain lines are absent at wavelengths where no supercontinuum is present (that is, below 450 nm), as the supercontinuum acts as the lasing seed. All the emission lines are observed only in the forward direction, indicative of emission coherent with the dressing pulse. Their divergence, measured from lateral photographs using spectral filtering, is 39 mrad in the 600 nm region, below that of the 800 nm driving pulse (50 mrad). Their polarization is coincident with that of the driving pulse, as expected of stimulated, rather than amplified spontaneous emission. The side spectra (Fig. 2b) do not exhibit lines at these wavelengths, but instead show well-known argon plasma incoherent recombination lines around 350 nm and 800 nm (taken from the National Institute of Standards and Technology (NIST) database), thus the emission is not amplification of fluorescence. Above a certain threshold, the output emission intensity grows roughly linearly with the intensity of the seeding spectrum contained in the supercontinuum tail of the pulse, as expected for stimulated emission (see Supplementary Fig. 2). We now examine the dependence of gain on power and identify the lasing threshold. We use trapezoid-like pulse shapes (10 fs rise, 5 fs plateau, 10 fs decay; Fig. 2c), increasing the input laser energy. In Fig. 
2c, the emission lines at 557 nm and 625 nm undergo absorption at lower pulse energies, but show gain when the pulse energy exceeds ~28 μJ. (For a 10 fs rise, 10 fs plateau, 10 fs decay, lasing commences at 33 µJ; Fig. 2d). The lasing output power versus the input power, in Supplementary Fig. 3, yields a lasing threshold of 1.5 GW (I ≈ 10¹⁴ W cm⁻²). We stress that gain lines in the 610–690 nm region (highlighted resonances near 625 nm and 675 nm; Fig. 2d) have no counterpart in the field-free spectrum of argon, and cannot be explained by emission after the pulse. The key role of the laser-dressed (KH) states is confirmed by the theoretical results in Fig. 3. We cross-check the shape and spectrum of the trapezoidal input pulse (10 fs rise, 10 fs plateau, 10 fs decay) at the onset of the filament, using numerical pulse propagation simulations (see Methods). We then use the experimental pulse in the TDSE simulations to calculate the intensity of the emitted radiation. The simulated output spectrum is normalized to the input spectrum at the 800 nm carrier wavelength, as in the experiment. Figure 3b shows the emergence of strong emission lines, as in the experiment (Fig. 3a). Note these peaks emerge where Fig. 1d shows gain. Figure 3b also shows that the observed lines cannot be attributed to standard nonlinear effects during propagation: a simulation of laser filamentation using standard propagation models (see Methods) does not lead to any peaks in the spectral region of interest. Finally, we focus on the spectral region between 610 nm and 690 nm. There are no field-free lines in the argon spectrum that coincide with the observed strong amplification lines at 625 nm and near 675 nm. However, Fig. 3c shows that transitions between the laser-dressed states (calculated in the KH frame, see Methods) do move into this region at I ≈ 0.9 × 10¹⁴ W cm⁻². Note that Fig. 3c does not show the overall ponderomotive shift of the excited states and demonstrates only the additional shift. This shift is small compared to the ponderomotive shift, which reaches 6 eV at 10¹⁴ W cm⁻² (for λ = 800 nm). Figure 3d shows the population difference between the field-free states that move into this region at intensities around 10¹⁴ W cm⁻². These are the states with field-free transition frequencies between 500 nm and 600 nm, which acquire population inversion at intensities around 10¹⁴ W cm⁻². The lasing mechanism is not specific to argon. Similar results were found in krypton (see Fig. 4 and Supplementary Fig. 7). The lasing transitions are at different energies than in argon, reflecting the different atom, but also exhibit both broad and narrow gain features and asymmetric Fano-like lineshapes. There is no direct connection between the observed resonant widths of laser-dressed states, their lifetime and pulse duration. Indeed, the laser-dressed states undergo ultrafast dynamics intra-pulse and their positions are intensity-dependent, leading to ‘inhomogeneous’ broadening due to the spatial and temporal intensity distributions. In a 7 fs pulse, the dressed states shift rapidly with changing pulse intensity, so that resonances should broaden with increasing peak intensity (Fig. 4a, from 3 nm to 7 nm at 617 nm). For a long ‘trapezoidal’ pulse (10 fs rise, 40 fs plateau, 10 fs decay), transition lines shift with intensity but keep their widths (~7 nm at 624 nm and ~12 nm at 613 nm; Fig. 4b). 
The observation of gain lines specific to the atom dressed by an intense, I > 1014 W cm−2, laser field, and absent in the spectrum of field-free transitions, shows that the seemingly exotic KH states are ubiquitous even in dense (1–9 bar) gases interacting with strong laser fields. At high intensities, the laser-driven atom can become an inverted medium, inside the laser pulse, where electrons respond almost as free, yet remain bound and can be used as a multi-photon pumped gain medium during laser filamentation. Amplification at the inverted transitions between the dressed states, resulting in the emergence of gain lines during the pump pulse, can trigger additional wave-mixing processes with the strong pump, possibly leading to additional parametric gain lines in the spectrum. After the end of the pulse, coherent free induction decay can also seed lasing between the field-free states carrying population inversion. Our findings illustrate new opportunities for enhancing and controlling lasing inside laser filaments by optimizing the shape of the input laser pulse. To synthesize laser waveforms with pulse-shape control down to the few-cycle level, a CPA Ti:Sapphire laser, (780 nm, 1.5mJ, 40 fs, 1 kHz; details and a diagram can be found in Supplementary Fig. 1b) undergoes two-stage filamentation in air, through loose focusing with 2 m and 1.25 m focal length mirrors. The pulse, broadened (700–900 nm) by the first filamentation stage, is re-collimated and recompressed with a pair of chirped mirrors before refocusing for the second filamentation stage, with a pair of spherical mirrors. At the exit of this second stage, the pulse spectrum spans more than one octave (450 nm–1 µm) and is recompressed by a chirp mirror arrangement34. The final compression of higher spectral phase orders and the pulse shape control are achieved using a 4 f all-reflective pulse shaper with a dual mask, 640-pixel, liquid crystal modulator. In this configuration, few-cycle 5 fs pulses of up to 50 µJ can be produced, in addition to flat top, or sawtooth with sharp rise times. These are optimized using a pulse-shape optimization algorithm explained below. Pulse-shape optimization and diagnostics In order to compensate for dispersion arising from the chamber window and the propagation in the pressured gas before the focal point, we apply a phase detection algorithm35, onto the SLM, to get the shortest pulse (Fourier transform-limited) at the focus. The signal used for the optimization loop was the acoustic shock wave released by the plasma, representative of the free-carrier density produced by the laser. Using subsequent measurements we verify this procedure leads to the desired pulse shape, at the onset of the filament (Fourier transform-limited, sawtooth, flat top trapezoids). The pulse shapes are measured using a transient-grating frequency-resolved optical grating (FROG)34, as well as a SPIDER (Venteon), at pulse positions before and after filamentation. To measure the pulse shape within the filament, a 100 µm Al foil is placed in the filament path. The filament drills a self-adapted iris, arresting further filamentation and nonlinear propagation36, but preserving the temporal pulse shape at this distance. The remaining beam was analysed by a SPIDER. The SPIDER traces are shown in Supplementary Figs. 4 and 5. 
Pulse propagation simulations Numerical simulations, based on a unidirectional pulse propagation equation (UPPE)37, are used to simulate the laser filamentation process and cross-check the pulse-shape optimization routine detailed above. The propagation simulations are first carried out up to the onset of filamentation for sample pulses, and confirmed the desired experimental pulse shape. Next the same simulations were carried out throughout the full filamentation region to obtain the spectra both at the input and at the output of the filament. The numerical method and the code verification are described in detail elsewhere38. Briefly, the simulations are performed in a cylindrically symmetric geometry, reducing the dimensionality of the problem to two spatial dimensions plus one temporal dimension. The ionization model uses the standard Perelomov–Popov–Terent’ev ionization rates. All standard nonlinear effects, such as self-focusing, self-phase modulation and self-steepening, are included (see Supplementary Fig. 8). Filamentation in pressured argon and krypton cells A schematic of the experimental set-up can be found in Supplementary Fig. 1. The shaped pulses enter a pressurized chamber (2–9 bar) containing Ar or Kr via 5 mm ultraviolet fused silica windows, where a 300 mm off-axis gold spherical mirror generates a filament ~4–5 mm in length, before exiting the chamber through a 5 mm ultraviolet fused silica window. Spectra from the filament and its plasma are focused in the forward and transverse directions onto Ocean Optics fibre spectrometers (ultraviolet–visible and near-infrared). An image of the filament in the transverse direction is taken by a digital camera, and the acoustic shock wave is recorded with a microphone. The theoretical results in Figs.1, 3 have been obtained by propagating the TDSE numerically, using the code described in ref. 39. We have used a radial box of 200.0 a.u., with a total number of radial points nr = 4000, and a radial grid spacing of 0.05 a.u. The maximum angular momentum included in the spherical harmonics expansion was Lmax = 50. The time grid had a spacing of Δt = 0.0025 a.u. In order to remove unwanted reflections from the border of the radial box, we have placed a complex absorbing potential40 at 32.7 a.u. before the end of the radial box. The argon potential used was fitted to reproduce energies and dipoles of the first few one-particle states of argon, as described in equation (22) of ref. 41: where Eprobe(Ω) is the spectral amplitude of the XUV probe pulse and Dprobe is the frequency-resolved linear response of the IR-dressed system to the probe pulse: therefore removing the contribution of the standard nonlinear response induced by the IR. Here, d(t) and dIR(t) are the dipole responses calculated with both fields and with only the IR field present, respectively (see main text). The infrared field used in the calculations consisted of a 4-cycle cos2 turn on, followed by a 32-cycle flat top part and a 4-cycle cos2 turn off. The carrier frequency of the dressing IR pulse is ω = 0.0569 a.u. (λ = 800 nm). The probe pulse used for extraction of the absorption spectrum of the dressed system consisted of a Gaussian pulse with central frequency Ω = 0.075942 a.u. (λ = 600 nm) and a full-width at half-maximum (FWHM) of 164 a.u.. The pulse is timed at the middle of the infrared pulse. Prior to the Fourier transform, the calculated time-dependent dipole was multiplied by a temporal mask with a flat top ending at 500 a.u. 
and followed by an exponential turn off with a time-constant of 200 a.u., so that the response is effectively turned off when the dressing IR pulse is over. This was done to ensure that only the dressed atom response is tracked, and that the coherent beating between the field-free states after the end of the dressing laser pulse is removed in this calculation. For the window Fourier transform with the Gabor window in Fig. 1b,c, only the Gabor window was applied, without additional exponential damping. To obtain the laser-dressed (KH) states shown in Fig. 3c, the model argon potential was adapted to a different solver for the stationary Schrödinger equation written in cylindrical (rather than spherical) coordinates specifically for the diagonalization of the KH Hamiltonian. The approach is described elsewhere13. For better numerical convergence, the model potential was modified slightly while keeping the energies and the transition dipoles for all relevant states unchanged. The data that supports the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. The authors acknowledge the valuable contributions of M. Moret, for advanced technical assistance with the experimental set-up, S. Courvoisier, for technical assistance with graphical formatting, and L. Woeste, for constructive advice. J.P, J.G. and S.H. acknowledge funding from SNF NCCR MUST grant. J.P and J.K acknowledge funding from ERC grant Filatmo. M.M. acknowledges funding from MHV fellowship grant number: PMPDP2-145444 and NCCR MUST Women's Postdoc Awards. M.I. acknowledges the support of the DFG QUTIF grant number IV 152/7-1. Supplementary notes, supplementary figures, supplementary references
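For orientation, here is a minimal sketch of the dressing and probe fields parameterized as in the Methods above (cos²-shaped ramps, flat top, ω = 0.0569 a.u.; Gaussian probe at Ω = 0.075942 a.u.). The peak amplitudes are placeholders, the probe FWHM is taken to be that of the field envelope (an assumption), and the arrays are meant only as inputs one might feed to a TDSE propagator, not as a substitute for the authors' code.

```python
import numpy as np

def dressing_field(omega=0.0569, n_on=4, n_flat=32, n_off=4, e0=0.05, dt=0.0025):
    """IR field with a cos^2-shaped turn-on, flat top and cos^2-shaped turn-off (a.u.)."""
    period = 2 * np.pi / omega
    t = np.arange(0.0, (n_on + n_flat + n_off) * period, dt)
    env = np.ones_like(t)
    on = t < n_on * period
    off = t > (n_on + n_flat) * period
    env[on] = np.sin(0.5 * np.pi * t[on] / (n_on * period)) ** 2
    env[off] = np.cos(0.5 * np.pi * (t[off] - (n_on + n_flat) * period) / (n_off * period)) ** 2
    return t, e0 * env * np.cos(omega * t)

def gaussian_probe(t, t_center, omega=0.075942, fwhm=164.0, e0=1e-4):
    """Weak Gaussian probe centred at t_center; FWHM of the field envelope assumed."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return e0 * np.exp(-0.5 * ((t - t_center) / sigma) ** 2) * np.cos(omega * (t - t_center))

t, e_ir = dressing_field()
e_total = e_ir + gaussian_probe(t, t_center=t[-1] / 2.0)   # probe timed mid-pulse
```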
<urn:uuid:3f860f91-db9a-435c-b38b-c056c20151d9>
2.734375
5,125
Academic Writing
Science & Tech.
50.144706
95,544,206
Radial collimators have been recently introduced to define the sampling volume during neutron diffraction stress and texture mapping experiments. This paper presents both analytical and Monte Carlo numerical models for the calculation of the spatial distribution of neutron transmission through a radial collimator. It is shown that the effective size of the scattered neutron beam as seen by detectors behind the collimator is quite sensitive to the collimator length and the number of blades. For a given radius of a collimator, the effective beam width increases sharply as the length is shortened. Due to the finite blade thickness, the center of gravity of the sampling volume is shifted away from the collimator. In contrast, attenuation of the neutron beam by the sample brings the center of gravity of the sampling volume closer to the collimator.
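As a rough illustration of the Monte Carlo approach mentioned in the abstract, the sketch below traces straight neutron paths from scattering points near the gauge volume through an idealized 2D radial collimator (zero-thickness blades, isotropic emission, no sample attenuation). The blade count, radii and offsets are arbitrary placeholders, so it reproduces only the qualitative fall-off of transmission with offset, not the paper's results.

```python
import numpy as np

def segments_cross(p, r, q, s):
    """True if segment p -> p+r intersects segment q -> q+s (2D)."""
    rxs = r[0] * s[1] - r[1] * s[0]
    if abs(rxs) < 1e-12:
        return False                      # parallel; collinear overlap ignored in this sketch
    qp = q - p
    t = (qp[0] * s[1] - qp[1] * s[0]) / rxs
    u = (qp[0] * r[1] - qp[1] * r[0]) / rxs
    return 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0

def transmission(x_offset, n_blades=30, r_in=0.2, length=0.4, n_rays=2000, seed=1):
    """Fraction of rays from a point offset from the collimator axis that clear the blades."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(n_blades) / n_blades
    blades = [(r_in * np.array([np.cos(a), np.sin(a)]),
               length * np.array([np.cos(a), np.sin(a)])) for a in angles]
    p = np.array([x_offset, 0.0])
    passed = 0
    for theta in rng.uniform(0.0, 2 * np.pi, n_rays):
        ray = 3.0 * (r_in + length) * np.array([np.cos(theta), np.sin(theta)])
        if not any(segments_cross(p, ray, q, s) for q, s in blades):
            passed += 1
    return passed / n_rays

# Transmission drops as the scattering point moves off-axis, which is how the
# collimator defines the effective width of the sampling volume.
for dx in (0.0, 0.005, 0.01, 0.02):
    print(f"offset {dx * 1000:4.0f} mm -> transmission {transmission(dx):.2f}")
```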
<urn:uuid:15b62ad6-b1e2-4708-b3c9-5696d208fdcf>
2.703125
181
Academic Writing
Science & Tech.
23.296131
95,544,236
Isolation of Coding Sequences from Yeast Artificial Chromosome (YAC) Clones by Exon Amplification
Part of the Methods in Molecular Biology™ book series (MIMB, volume 67)
Exon amplification is a technique designed to address a central problem in mammalian molecular genetics—how to extract coding sequences from large tracts of genomic DNA. As shown in Fig. 1, the technique (also known as exon trapping) exploits the ability of the eukaryotic splicing machinery to detect splice sites flanking exon sequences in pre-mRNA molecules. The original exon trapping vector pSPL1 developed by Buckler et al. (1) and its subsequently improved derivative pSPL3 (2) allow segments of genomic DNA to be cloned into an HIV-tat intron that is flanked by the 5′ and 3′ splice sites and exons of the viral gene. Recombinant clones are transfected into COS-7 cells, which support high levels of transcription driven by the SV40 early promoter of the vector. During in vivo splicing, the 5′ and 3′ splice sites flanking an exon contained within the genomic insert are paired with the HIV-tat splice sites, with the result that the genomic exon is retained in the mature cytoplasmic poly A+ RNA. Reverse transcription of the cytoplasmic RNA is followed by PCR using primers specific for the HIV-tat exons to amplify the "trapped exon." After a secondary (nested) PCR amplification, the PCR products are cloned into a suitable plasmid vector.
© Humana Press Inc. 1997
<urn:uuid:007e0bb5-0b8b-416d-92ba-b2d3dd8de521>
2.671875
465
Academic Writing
Science & Tech.
41.153553
95,544,246
Unanticipated Response to Intense Laser Light Has Broad Implications for Ultrafast X-ray Science. Researchers assumed that tiny objects would instantly blow up when hit by extremely intense light from the world's most powerful X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory in the United States. But to their astonishment, these nanoparticles initially shrank instead - a finding that provides a glimpse of the unusual world of superheated nanomaterials that could eventually also help scientists further develop X-ray techniques for taking atomic images of individual molecules. The experiments took place at the Linac Coherent Light Source (LCLS) X-ray laser, a DOE Office of Science User Facility. Its pulses are so bright that they can be used to turn solids into highly ionized gases, or plasmas, that blow up within a fraction of a second. Fortunately, for many samples researchers can take the data they need before the damage sets in - an approach that has been used to reveal never-before-seen details of a variety of samples relevant to chemistry, materials science, biology and energy research. The ultimate limits of this approach are, however, not well understood. One of the key visions for X-ray laser science is to image individual, one-of-a-kind particles with single X-ray pulses. To do so in a quantitative manner, researchers need to understand precisely how each molecule responds to the intense X-ray light. The new study, published in Science Advances, provides an unexpected insight into this aspect. "So far, all models have assumed that a very small system would immediately explode when we pump a lot of energy into it with the X-ray laser," says former LCLS researcher Christoph Bostedt, who is now at Argonne National Laboratory and Northwestern University. "But our experiments showed otherwise." At LCLS, Bostedt and his fellow researchers exposed minuscule clusters of xenon atoms to two consecutive X-ray pulses. The clusters, which were merely three millionths of an inch across, were heated by the first pulse for 10 quadrillionths of a second, or 10 femtoseconds. The second pulse then probed the clusters' atomic structures over the next 80 femtoseconds. "The unique nature of the LCLS X-ray pulse allowed us to create a freeze-frame movie of the response, with a resolution of about a tenth of the width of a single xenon atom," says LCLS and Stanford University graduate student Ken Ferguson, who led the data analysis. The researchers believe that the effect is a result of how electrons, which were initially localized around individual xenon atoms, redistribute over the entire cluster after the first X-ray pulse. "This phenomenon had never been observed before, nor had it been predicted by any of the existing theories," he says. "We expect it to have implications for many ultrafast X-ray laser experiments, especially those geared toward single-particle imaging with very intense X-ray pulses." The research could benefit studies in other areas as well, such as investigations of warm dense matter - a state of matter between a solid and a plasma that exists in the cores of certain planets and is also important in the pursuit of nuclear fusion with high-power lasers. Other institutions involved in the study were Technical University of Berlin, Germany; Tohoku University, Japan; National Science Foundation BioXFEL Science and Technology Center, Buffalo; and Kyoto University, Japan. 
This text and images for this release were provided by the SLAC National Accelerator Laboratory.
Authors: Ken R. Ferguson, Maximilian Bucher, Tais Gorkhover, Sébastien Boutet, Hironobu Fukuzawa, Jason E. Koglin, Yoshiaki Kumagai, Alberto Lutman, Agostino Marinelli, Marc Messerschmidt, Kiyonobu Nagaya, Jim Turner, Kiyoshi Ueda, Garth J. Williams, Philip H. Bucksbaum and Christoph Bostedt
Title: Transient lattice contraction in the solid-to-plasma transition
Journal: Science Advances, 29 Jan 2016
About SLAC National Accelerator Laboratory: SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, Calif., SLAC is operated by Stanford University for the U.S. Department of Energy's Office of Science. SLAC National Accelerator Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 18.07.2018 | Materials Sciences 18.07.2018 | Life Sciences 18.07.2018 | Health and Medicine
<urn:uuid:c9291623-3fa3-4c49-b05c-4b3c6149000d>
3.34375
1,688
Content Listing
Science & Tech.
33.95365
95,544,300
Through a series of laser-heated diamond anvil cell experiments at high pressure (350,000 to 700,000 atmospheres of pressure) and temperatures (5,120 to 7,460 degrees Fahrenheit), the team demonstrated that the depletion of siderophile (also known as "iron loving") elements can be produced by core formation under more oxidizing conditions than earlier predictions. An artist's conception of Earth's inner and outer core. "We found that planet accretion (growth) under oxidizing conditions is similar to those of the most common meteorites," said LLNL geophysicist Rick Ryerson. The research appears in the Jan. 10 edition of Science Express. While scientists know that the Earth accreted from some mixture of meteoritic material, there is no simple way to quantify precisely the proportions of these various materials. The new research defines how various materials may have been distributed and transported in the early solar system. As core formation and accretion are closely linked, constraining the process of core formation allows researchers to place limits on the range of materials that formed our planet, and determine whether the composition of those materials changed with time. (Was accretion heterogeneous or homogeneous?) "A model in which a relatively oxidized Earth is progressively reduced by oxygen transfer to the core-forming metal is capable of reconciling both the need for light elements in the core and the concentration of siderophile elements in the silicate mantle, and suggests that oxygen is an important constituent in the core," Ryerson said. The experiments demonstrated that a slight reduction of such siderophile elements as vanadium (V) and chromium (Cr) and moderate depletion of nickel (Ni) and cobalt (Co) can be produced during core formation, allowing for oxygen to play a more prominent role. Planetary core formation is one of the final stages of the dust-to-meteorite-to-planet formation continuum. Meteorites are the raw materials for planetary formation and core formation is a process that leads to chemical differentiation of the planet. But meteorite formation and core formation are very different processes, driven by different heat sources and occurring in very different pressure and temperature ranges. "Our ability to match the siderophile element signature under more oxidizing conditions allows us to accrete the Earth from more common, oxidized meteoritic materials, such as carbonaceous and ordinary chondrites," Ryerson said. The Earth's magnetic field is generated in the core, and protects the Earth from the solar wind and associated erosion of the atmosphere. While the inner core of the Earth is solid, the outer core is still liquid. The ability to preserve a liquid outer core and the associated magnetic field are dependent on the composition of the core and the concentration of light elements that may reduce the melting temperature. "By characterizing the chemical interactions that accompany separation of core-forming melts from the silicate magma ocean, we can hope to provide additional constraints on the nature of light elements in the present-day core and its melting/freezing behavior," Ryerson said. Other team members include Julien Siebert and Daniele Antonangeli (former LLNL postdocs) from the Université Pierre et Marie Curie, and James Badro (a faculty scholar at LLNL) from the Institut de Physique du Globe de Paris.
Anne Stark | EurekAlert!
<urn:uuid:30b4cf8f-4797-4ff0-93b7-47919c94899d>
3.6875
1,275
Content Listing
Science & Tech.
31.856846
95,544,329
Edited by Jamie (ScienceAid Editor)
What is Pressure? Pressure is a measure of the amount of force in a given area. For example, the pressure of a finger pushing a wall is fairly small. However, the pressure of a finger pressing a pin against a wall is very high, even though the force is the same. This is because we have reduced the area by a lot. In order to calculate pressure, we use the equation below, which should be very easy to use:
pressure = force / area (P = F / A, giving pressure in pascals when force is in newtons and area is in square metres)
Example: Standing On One Foot
To help you understand how to calculate pressure, we have a fun little example for you below. Simon is standing on one leg; you are given his mass and the rough size of his feet (he has quite large feet). Firstly, we need to convert his mass into a weight. Using a value of g = 10, he weighs 700 N: this is the force. The area of the foot is 20 cm x 8 cm = 160 cm2. Since there are 10 000 cm2 in 1 m2, this means the area of one foot is 0.016 m2. Now, to calculate the pressure in pascals, the force is divided by the area: 700 / 0.016 = 43 750 Pa, or 43.75 kPa.
Gas particles are always moving, and in a completely random fashion. When in a container they will collide with its walls. These collisions are what cause gas pressure. This can be increased by increasing the number of particles, decreasing the volume of the container, or increasing the temperature in the container. The mathematical relationship between volume and pressure change (at a constant temperature) is as follows: P1 x V1 = P2 x V2
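A minimal sketch of the two calculations above, using the worked example's figures (a 700 N weight on one 20 cm x 8 cm foot) and, for Boyle's law, made-up illustrative values:

```python
def pressure(force_n, area_m2):
    """P = F / A, in pascals when force is in newtons and area in square metres."""
    return force_n / area_m2

# Standing on one foot: 700 N over 20 cm x 8 cm.
area = 0.20 * 0.08                    # 160 cm^2 = 0.016 m^2
print(pressure(700, area))            # 43750.0 Pa, i.e. 43.75 kPa

# Boyle's law at constant temperature: P1 * V1 = P2 * V2.
# Hypothetical values: halving the volume of a fixed amount of gas doubles its pressure.
p1, v1 = 100_000.0, 0.002             # 100 kPa and 2 litres, expressed in Pa and m^3
v2 = 0.001                            # compressed to 1 litre
p2 = p1 * v1 / v2
print(p2)                             # 200000.0 Pa = 200 kPa
```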
<urn:uuid:44fc1cd4-15b1-443a-8993-62c39721b736>
4.09375
495
Knowledge Article
Science & Tech.
64.139265
95,544,345
Growing evidence shows that the dinosaurs and their contemporaries were not wiped out by the famed Chicxulub meteor impact alone, according to a paleontologist who says multiple meteor impacts, massive volcanism in India and climate changes culminated in the end of the Cretaceous Period. The Chicxulub impact may have been the lesser and earlier of a series of meteor impacts and volcanic eruptions that pounded life on Earth for more than 500,000 years, say Princeton University paleontologist Gerta Keller and her collaborators Thierry Adatte from the University of Neuchatel, Switzerland, and Zsolt Berner and Doris Stueben from Karlsruhe University in Germany. A final, much larger and still unidentified impact 65.5 million years ago appears to have been the last straw, said Keller, exterminating two-thirds of all species in one of the largest mass extinction events in the history of life. It's that impact - not Chicxulub - that left the famous extraterrestrial iridium layer found in rocks worldwide that marks the impact that finally ended the Age of Reptiles, Keller believes. "The Chicxulub impact alone could not have caused the mass extinction," said Keller, "because this impact predates the mass extinction." Keller is scheduled to present that evidence at the annual meeting of the Geological Society of America (GSA) in Philadelphia, on Tuesday, October 24, 2006. "Chicxulub is one of thousands of impact craters on Earth's surface and in its subsurface," said H. Richard Lane, program director in the National Science Foundation (NSF) Division of Earth Sciences, which funded the research. "The evidence found by Keller and colleagues suggests that there is more to learn about what caused the major extinction event millions of years ago, and the demise of the dinosaurs at the end of the Cretaceous." Marine sediments drilled from the Chicxulub crater itself, as well as from a site in Texas along the Brazos River and from outcrops in northeastern Mexico, reveal that Chicxulub hit Earth 300,000 years before the mass extinction. Microscopic marine animals were left virtually unscathed, said Keller. "In all these localities we can analyze their microfossils in the sediments directly above and below the Chicxulub impact layer, and cannot find any significant biotic effect," said Keller. "We cannot attribute any specific extinctions to this impact." The story that seems to be taking shape, according to Keller, is that Chicxulub, though violent, actually conspired with the prolonged and gigantic volcanic eruptions of the Deccan Flood Basalts in India, as well as with climate change, to nudge species towards the brink. They were then pushed over with a second large meteor impact. The Deccan volcanism released vast amount of greenhouse gases into the atmosphere over a period of more than a million years leading up to the mass extinction. By the time Chicxulub struck, the oceans were already 3-4 degrees warmer, even at the bottom, Keller said. "On land it must have been 7-8 degrees warmer," she said. "This greenhouse warming is well-documented. The temperature rise was rapid over about 20,000 years, and it stayed warm for about 100,000 years, then cooled back to normal well before the mass extinction." Where's the crater? "I wish I knew," said Keller. Cheryl Dybas | EurekAlert! 
<urn:uuid:579830cc-d23c-49bb-a5aa-a41de130b885>
3.890625
1,303
Content Listing
Science & Tech.
40.677795
95,544,347
If tuning into the news every day for the past year has made you want to give up on Earth entirely, this might just be the thing for you. A study has confirmed the existence of a potential habitat on our planet's moon, meaning humans might be able to stay there for long stretches of time.

The study, published in the journal Geophysical Research Letters, reports the discovery of a large open lava tube in a particular section of the moon's surface – the Marius Hills region – which could be used to protect astronauts during space walks. No one has ever been on the moon for longer than three days, because space suits alone are not enough to shield astronauts from the elements. Extreme temperature variation, cosmic ray radiation and meteorite impacts all pose real threats to humans during space walks. But according to researchers from Japan and the US, the intact lava tube could provide enough protection that astronauts would be able to seek shelter from the hazardous moon surface.

Junichi Haruyama, a senior researcher at JAXA, Japan's space agency, said: 'It's important to know where and how big lunar lava tubes are if we're ever going to construct a lunar base. But knowing these things is also important for basic science. We might get new types of rock samples, heat flow data and lunar quake observation data.'

What exactly is a lava tube?

Lava tubes are naturally occurring channels formed when a lava flow develops a hard crust, which thickens and forms a roof above a still-flowing lava stream. Once the lava stops flowing, the tunnel sometimes drains, forming a hollow void. Lunar lava tubes are formed on surfaces that have a slope and may be as wide as 1,600 ft. The existence of a lava tube is sometimes revealed by the presence of a 'skylight', in which the roof of the tube has collapsed, leaving a circular hole. However, stable lava tubes may still be disrupted by seismic events or meteoroid bombardment.

Haruyama and other scientists analysed radar data from Japan's Selene spacecraft, which was used to bounce radar bursts off the moon's surface. The data revealed a distinctive pattern suggesting the presence of a floor and a ceiling of a lava tube. The scientists said that for a lava tube to be detectable, it would need to extend several kilometers in length and at least one kilometer in height and width. That means the lava tube they discovered near Marius Hills could be big enough to house an entire city.
<urn:uuid:cc2ca78b-8f65-4d66-abe6-0e652f001abe>
3.421875
552
News Article
Science & Tech.
44.672011
95,544,368
In mice, prolonged exposure to sound altered cells connected to the brain.

Cells that relay information from the ear to the brain can change in significant ways in response to the noise level in the environment.

Image caption: A synapse formed by the auditory nerve in a normal mouse (blue, left), and in a mouse exposed to noise for a week (blue, right). Researchers say it's likely that synapses become enlarged in noise-exposed mice to create space for more vesicles -- small round structures that store chemicals used to deliver messages to the brain. Credit: Amanda M. Lauer, Johns Hopkins University School of Medicine

That's one major finding of a study out today in the Early Edition of the Proceedings of the National Academy of Sciences. Expose the cells to loud sounds for a prolonged period of time, and they alter their behavior and even their structure in a manner that may aid hearing in the midst of noise. End the ruckus, and the cells change again to accommodate for quieter environs.

"The brain is amazingly adaptable: The way it receives information can change to accommodate for different conditions, and this is what we see in our research," said Matthew Xu-Friedman, PhD, an associate professor of biological sciences at the University at Buffalo. "What we see is that the cells in the auditory nerve adjust. They change themselves so they can respond to a different, heightened level of activity."

The research, conducted on mice, was led by Xu-Friedman and completed by a team at UB and the Johns Hopkins University School of Medicine. The study tested how the animals responded to living in a noisy habitat for a week, and how the noise-exposed animals then reacted to a quiet environment.

Auditory nerve cells as frugal savers

When the ear is exposed to a noise, cells in the auditory nerve release chemicals called neurotransmitters. These chemicals tell the brain that the ear has been stimulated. That's how we hear. One problem that can complicate this process is depletion of neurotransmitter: Each cell has a limited supply of the chemicals, and when a cell runs out, it loses its ability to contact the brain. This becomes an issue in noisy environments, where constant stimulation puts cells at risk for using up neurotransmitter too fast.

Xu-Friedman believes that the adaptations that mice exhibited in the new study are geared toward addressing this complication. For a week, the animals were exposed to noise akin to that made by a lawn mower or hair dryer. In response to this provocation, the animals' auditory nerve cells became more frugal, discharging a smaller proportion of their neurotransmitter reserves in response to stimuli than comparable cells in animals reared in quieter habitats. This meant the noise-exposed mice would be less likely to deplete neurotransmitters while processing background noise, and more likely to have chemicals available for signaling the brain when new sounds appeared.

"The changes could help the animals deal with loud conditions and not go deaf," Xu-Friedman said. "Instead of draining your limited supply, you save some of it so you can continue processing new stimuli."

In addition to altering their behavior, the animals' auditory nerve cells also changed their structure, enlarging their synaptic endings. This is the region of the cells where neurotransmitters are stored, and the increase in size implies that the cells were upping their inventories of the chemicals, Xu-Friedman said.
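As an aside, the "frugal saver" idea described above - releasing a smaller fraction of a finite neurotransmitter pool on each stimulus so that background noise does not exhaust it - can be illustrated with a toy depletion model. The Python sketch below is not the study's model; the pool size, release fractions and refill rate are arbitrary illustrative numbers chosen only to show the qualitative effect.

# Toy model of a synapse with a finite neurotransmitter pool (illustrative only,
# not the study's model). Each stimulus releases a fixed fraction of the pool,
# and the pool partially refills before the next stimulus arrives.

def remaining_pool(release_fraction, n_stimuli=500, pool=1.0, refill=0.01):
    """Fraction of the pool left after a long train of background stimuli."""
    for _ in range(n_stimuli):
        pool -= pool * release_fraction      # amount released to signal the brain
        pool = min(1.0, pool + refill)       # partial recovery between stimuli
    return pool

# A "spendthrift" synapse releases half its pool per stimulus; a "frugal" one
# releases only 5 percent. After sustained background noise, the frugal synapse
# retains roughly ten times more of its reserve for signaling new sounds.
print(remaining_pool(release_fraction=0.5))    # ~0.02 of the pool remains
print(remaining_pool(release_fraction=0.05))   # ~0.2 of the pool remains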
Reverting to normal behavior

When the researchers placed the noise-exposed mice into a quiet habitat, the animals' auditory nerve cells adapted, once again, to new conditions. The cells released neurotransmitters at a level akin to mice that had never been in loud environs. These results demonstrate the brain's adaptability, said Xu-Friedman. "We think we may have found another form of homeostasis," he said. "If the brain needs to process information under many different conditions, it's helpful if there's a set of rules to follow, ways to behave when activity is high and when activity is low. That appears to be happening with regard to these cells in the auditory nerve."

The study was funded by grants from the National Institutes of Health, National Science Foundation and Dalai Lama Trust Fund. Xu-Friedman's co-authors on the paper were Tenzin Ngodup and Jack A. Goetz in the Department of Biological Sciences at UB; Brian C. McGuire and Amanda M. Lauer in the Center for Hearing and Balance and Department of Otolaryngology-Head and Neck Surgery at Johns Hopkins University School of Medicine; and Wei Sun in the Center for Hearing and Deafness and Department of Communicative Disorders and Sciences at UB.

Charlotte Hsu | EurekAlert!
<urn:uuid:7cad8f82-b303-43a1-93b0-c5f74283a5fa>
3.578125
1,660
Content Listing
Science & Tech.
41.281599
95,544,400