Superclusters – regions of space that are densely packed with galaxies – are the biggest structures in the Universe. But scientists have struggled to define exactly where one supercluster ends and another begins. Now, a team based in Hawaii has come up with a new technique that maps the Universe according to the flow of galaxies across space. Redrawing the boundaries of the cosmic map, they redefine our home supercluster and name it Laniakea, which means ‘immeasurable heaven’ in Hawaiian.
Read the research paper: http://dx.doi.org/10.1038/nature13674
Creates a URL cache object with the specified values.
- iOS 2.0+
- macOS 10.2+
- tvOS 9.0+
- watchOS 2.0+
The memory capacity of the cache, in bytes.
The disk capacity of the cache, in bytes.
path is the location at which to store the on-disk cache.
path is the name of a subdirectory of the application’s default cache directory in which to store the on-disk cache (the subdirectory is created if it does not exist).
The initialized cache object.
The returned cache instance is backed by disk, so you have more leeway when choosing the capacity for this kind of cache. A disk cache measured in the tens of megabytes should be acceptable in most cases.
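A minimal usage sketch in Swift for the `URLCache(memoryCapacity:diskCapacity:diskPath:)` initializer described above; the capacities and subdirectory name are illustrative choices, not recommendations from the documentation.

```swift
import Foundation

// Capacities are in bytes, per the parameter descriptions above.
let cache = URLCache(
    memoryCapacity: 4 * 1024 * 1024,   // 4 MB in-memory cache
    diskCapacity: 20 * 1024 * 1024,    // 20 MB on disk ("tens of megabytes")
    diskPath: "MyAppURLCache"          // subdirectory of the default cache directory
)
URLCache.shared = cache
```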
Named after Henri Lebesgue, a French mathematician.
- (analysis, singular only, definite and countable) An integral which has more general application than that of the Riemann integral, because it allows the region of integration to be partitioned into not just intervals but any measurable sets for which the function to be integrated has a sufficiently narrow range. (Formal definitions can be found at PlanetMath).
- The Lebesgue integral is learned in a first-year real-analysis course.
- Compute the Lebesgue integral of f over E.
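A compact form of the standard construction (a sketch of the usual textbook definition, not taken from the entry itself), for a measure space (E, 𝒜, μ):

```latex
% Step 1: for a non-negative simple function s = \sum_{i=1}^{n} a_i \chi_{A_i},
% with a_i \ge 0 and A_i \in \mathcal{A} measurable:
\int_E s \, d\mu = \sum_{i=1}^{n} a_i \, \mu(A_i)
% Step 2: for a non-negative measurable f, take the supremum over simple minorants:
\int_E f \, d\mu = \sup \Bigl\{ \int_E s \, d\mu \;:\; 0 \le s \le f,\ s \text{ simple} \Bigr\}
```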
Goddard Space Flight Center, Greenbelt, Md.
GREENBELT, Md. -- The sea ice covering the Arctic Ocean has shrunk to its smallest extent ever observed in the three decades since consistent satellite observations of the polar cap began, according to scientists from NASA and the NASA-supported National Snow and Ice Data Center (NSIDC) in Boulder, Colo.
NASA and NSIDC scientists will host a media teleconference at 3 p.m. EDT, today, to discuss this new record low for summertime Arctic sea ice cover.
The extent of Arctic sea ice on Aug. 26, as measured by the Special Sensor Microwave/Imager on the U.S. Defense Meteorological Satellite Program spacecraft and analyzed by NASA and NSIDC scientists, was 1.58 million square miles (4.10 million square kilometers), or 27,000 square miles (70,000 square kilometers) below the Sept. 18, 2007, daily extent of 1.61 million square miles (4.17 million square kilometers).
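The figures quoted above can be cross-checked with a quick unit conversion. The only assumption is the standard square-mile to square-kilometer factor; the 4.09 vs. 4.10 discrepancy reflects the release's rounding of the underlying measurement.

```python
# Cross-check the extents quoted in the release.
SQ_MI_TO_SQ_KM = 2.589988  # square kilometers per square mile

extent_2012 = 1.58e6 * SQ_MI_TO_SQ_KM  # Aug 26, 2012 extent in km^2
extent_2007 = 1.61e6 * SQ_MI_TO_SQ_KM  # Sept 18, 2007 record in km^2
gap = 27_000 * SQ_MI_TO_SQ_KM          # quoted difference, converted to km^2

print(round(extent_2012 / 1e6, 2), round(extent_2007 / 1e6, 2), round(gap, -3))
# → 4.09 4.17 70000.0
```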
The sea ice cap naturally grows during the cold Arctic winters and shrinks when temperatures climb in the spring and summer. But over the last three decades, satellites have observed a 13 percent decline per decade in the minimum summertime extent of the sea ice. The thickness of the sea ice cover also continues to decline.
"The persistent loss of perennial ice cover -- ice that survives the melt season -- led to this year's record summertime retreat," said Joey Comiso, senior research scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. "Unlike 2007, temperatures were not unusually warm in the Arctic this summer."
The new record was reached before the end of the melt season in the Arctic, which usually takes place in mid- to late-September. Scientists expect to see an even larger loss of sea ice in the coming weeks.
"In 2007, it was actually much warmer," Comiso said. "We are losing the thick component of the ice cover. And if you lose the thick component of the ice cover, the ice in the summer becomes very vulnerable."
"By itself it's just a number, and occasionally records are going to get set," NSIDC research scientist Walt Meier said about the new record. "But in the context of what's happened in the last several years and throughout the satellite record, it's an indication that the Arctic sea ice cover is fundamentally changing."
The panelists for the briefing are:
-- Joey Comiso, senior research scientist, Goddard
-- Walt Meier, research scientist, NSDIC
To participate in the teleconference and obtain dial-in information, reporters must contact Maria-Jose Vinas at email@example.com or Natasha Vizcarra at firstname.lastname@example.org by 3 p.m. EDT today.
For more information and supporting images, go to http://go.nasa.gov/PmOyHo.
- end -
The magnetic connectivity of coronal shocks from behind-the-limb flares to the visible solar surface during γ-ray events
Key words: Sun: flares – Sun: X-rays, gamma rays – Sun: coronal mass ejections (CMEs) – Sun: magnetic fields
Context: The observation of >100 MeV γ-rays in the minutes to hours following solar flares suggests that high-energy particles interacting in the solar atmosphere can be stored and/or accelerated for long time periods. The occasions when γ-rays are detected even though the solar eruption occurred beyond the solar limb as viewed from Earth provide favorable viewing conditions for studying the role of coronal shocks driven by coronal mass ejections (CMEs) in the acceleration of these particles.
Aims: In this paper, we investigate the spatial and temporal evolution of the coronal shocks inferred from stereoscopic observations of behind-the-limb flares to determine whether they could be the source of the particles producing the γ-rays.
Methods: We analyzed the CMEs and the early formation of coronal shocks associated with γ-ray events measured by the Fermi Large Area Telescope (LAT) from three eruptions behind the solar limb as viewed from Earth, on 2013 Oct 11, 2014 Jan 6, and 2014 Sept 1. We used a 3D triangulation technique, based on remote-sensing observations, to model the expansion of the CME shocks from above the solar surface to the upper corona. Coupling the expansion model to various models of the coronal magnetic field allowed us to derive the time-dependent distribution of shock Mach numbers and the magnetic connection of particles produced by the shock to the solar surface visible from Earth.
Results: The reconstructed shock fronts for the three events became magnetically connected to the visible solar surface after the start of the flare and just before the onset of the >100 MeV γ-ray emission. The shock surface at these connections also exhibited the supercritical Mach numbers required for significant particle energization. The strongest γ-ray emission occurred when the flanks of the shocks were connected in a quasi-perpendicular geometry to the field lines reaching the visible surface. Multipoint in situ measurements of solar energetic particles (SEPs) were consistent with the production of these SEPs by the same shock processes responsible for the γ-rays. The fluxes of protons in space and at the Sun were highest for the 2014 Sept 1 event, which had the fastest-moving shock.
Conclusions: This study provides further evidence that the high-energy protons producing time-extended high-energy γ-ray emission likely have the same CME-shock origin as the solar energetic particles measured in interplanetary space.
Observations of γ-rays and neutrons during flares provide the only information on the properties of very energetic ions accelerated in the corona and impacting the solar atmosphere (Vilmer et al., 2011). Energetic ions interacting with the solar atmosphere produce a wealth of γ-ray emissions. A γ-ray line spectrum is produced through interactions of ions in the range 1–100 MeV/nucleon and consists of several nuclear de-excitation lines, the neutron capture line, and positron annihilation lines. If more energetic ions are present, over a few hundred MeV/nucleon, nuclear interactions with the ambient medium produce secondary pions whose decay products lead to a broadband continuum detectable at photon energies above 10 MeV. Such high-energy γ-ray emission in solar flares has been widely studied since the early 1980s (e.g., Forrest et al., 1981; Murphy et al., 1987; Ramaty et al., 1987). The first γ-ray line images of a solar flare were obtained with the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Lin et al., 2002) for the X4.8 flare of 2002 July 23 (Hurford et al., 2003).
For some of the events, high-energy emission has been observed for hours after the impulsive phase of the flare, revealing that high-energy ions must be continuously accelerated or stored for tens of minutes to hours (e.g., Kanbach et al., 1993; Akimov et al., 1996). These high-energy events were named long-duration gamma-ray flares (LDGRFs; Ryan, 2000). If the particles responsible for this γ-ray emission were accelerated in the flare region (the flare scenario; e.g., Aschwanden, 2012), then the emission should be confined to the proximity of the eruptive loop. Competing particle acceleration mechanisms in flares include acceleration in the reconnection current sheet (Litvinenko, 1996), stochastic acceleration by turbulence (Petrosian, 2012), and flare termination shocks (Ellison & Ramaty, 1985; Chen et al., 2015). The LDGRFs lasting several hours challenge the flare hypothesis, since the emission can outlive the solar flare and trapping of >100 MeV protons on coronal loops seems theoretically unlikely. This class of time-extended γ-ray events often shows pion-decay radiation over tens of minutes to hours after the impulsive phase, while other common flare emissions (e.g., X-rays) are absent or greatly diminished (e.g., Benz, 2008; Hudson, 2011). Share et al. (2017, to be submitted) have completed a study of 29 of these time-extended γ-ray events observed by LAT from 2008 to 2016. As the events are distinct from the accompanying solar flares and have varying durations, these authors have named them sustained γ-ray emission (SGRE) events. We use this nomenclature in the remainder of this paper.
The hadronic processes can result from the interaction of very energetic protons with the collisional chromospheric region situated between the photosphere and the weakly collisional corona, where efficient particle acceleration occurs in either solar flares or coronal shocks. In this interpretation, γ-ray events detected near Earth require that energetic protons have propagated to and impacted the chromospheric regions visible from Earth (the so-called visible disk). Indeed, an additional constraint on the size of the accelerator and on particle transport is imposed by the detection of SGRE from events launched behind the limb. For instance, an accelerator limited to the vicinity of the flare loop top is expected to influence a smaller region of the corona than a shock wave generated along a CME front.
With the launch of the Fermi satellite, many more long-duration >100 MeV events have been detected (Ackermann et al., 2012), including events associated with much weaker X-ray flares. The Fermi Large Area Telescope (LAT) is a pair-conversion telescope (Atwood et al., 2009) that is sensitive to γ-rays from about 20 MeV to 300 GeV. In the standard Fermi sky-survey mode the detector axis rocks 50° from the zenith each orbit; therefore, at most times during the year the Sun can be observed for 20–40 min every two orbits. The large aperture (2.4 sr) and effective area of the LAT provide the capability to sensitively monitor the Sun with a duty cycle of 15–20%. Ackermann et al. (2012) presented a detailed description of the solar flare analysis procedures and instrument response functions for the telemetered LAT data. A comprehensive analysis of LAT solar events is provided in a paper in preparation (Share et al. 2017, to be submitted). In the present study, we use the publicly available Pass 7 and Pass 8 source-class data. The source-class data are used for most celestial source analyses for which long-time exposures are required. Our method for analyzing the LAT data is best described as a ‘light bucket’ approach; it is less sophisticated and faster than the maximum-likelihood approach used by Ackermann et al. (2014) and Ajello et al. (2014), but it is almost as sensitive for detecting long-duration high-energy solar γ-ray events. With this approach we reduced the γ-ray background from the atmosphere of the Earth by restricting the allowable events to those with zenith angles <100°. We also restricted our studies to γ-rays with energies >100 MeV. With these restrictions we accumulated all photons that have measured locations within about 10° of the Sun, thus basically a ‘light bucket’. About 95% of all >200 MeV solar γ-rays have measured locations within this 10° region of interest (ROI).
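The ‘light bucket’ selection described above (energies >100 MeV, zenith angles <100°, within 10° of the Sun) can be sketched as a simple event filter. This is an illustrative reconstruction, not the LAT pipeline; the photon record fields (`energy`, `zenith`, `ra`, `dec`) are hypothetical names.

```python
from math import radians, degrees, sin, cos, acos

def ang_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    c = sin(dec1) * sin(dec2) + cos(dec1) * cos(dec2) * cos(ra1 - ra2)
    return degrees(acos(max(-1.0, min(1.0, c))))  # clamp against rounding

def light_bucket(photons, sun_ra, sun_dec, e_min=100.0, zmax=100.0, roi=10.0):
    """Keep photons with E >= e_min MeV, zenith angle < zmax degrees,
    and measured location within roi degrees of the Sun."""
    return [p for p in photons
            if p["energy"] >= e_min
            and p["zenith"] < zmax
            and ang_sep(p["ra"], p["dec"], sun_ra, sun_dec) <= roi]
```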
Pesce-Rollins et al. (2015a, b) and Ackermann et al. (2017) have identified three γ-ray events observed by LAT that occurred during behind-the-limb flares. Because the 2014 Sept 01 flare occurred well beyond the limb (36 degrees), the observed γ-ray emission could not have come from the flare footpoints or loop top. Thus one needs to consider an acceleration process that can explain such an observation. Acceleration at a coronal shock is a natural candidate for broadly distributed particle acceleration.
During its propagation out into the high corona and further into interplanetary space, the CME shock transfers a part of its kinetic energy, typically around 10% (Mewaldt et al., 2008), to energetic particles. These particles can eventually reach, in number (flux greater than 10 particles cm⁻² s⁻¹ sr⁻¹) and energy (>10 MeV), the levels required to be identified as a solar energetic particle (SEP) event at 1 AU (Gopalswamy, 2003). It has been argued that most gradual SEP events observed in interplanetary space are accelerated at the shock driven by the CME (Reames, 1999; Cliver, 2016). The highest particle energization (beyond GeV energies) occurs during the acceleration phase of the shock in the low corona (Zank et al., 2000; Berezhko & Taneev, 2003; Lee, 2005; Afanasiev et al., 2015), when the shock can reach speeds of several thousand km/s and the high Alfvénic Mach numbers necessary for efficient shock acceleration (e.g., Blandford & Eichler, 1987; Marcowith et al., 2016). In the extreme example of astrophysical shocks, where the Mach number reaches very high values (i.e., shocks of supernova remnants), there is evidence that protons are accelerated up to the knee of the cosmic-ray spectrum, i.e., ∼10¹⁵ eV (Morlino & Caprioli, 2012). In coronal shocks the Mach numbers are more modest, typically 2–4, and probably do not exceed ∼10 (Rouillard et al., 2016), and the available acceleration time is bound by the shock transit time of several hours to days. When the shock reaches the upper corona it decelerates and passes through more tenuous regions of the corona, diminishing the number and maximum energy of the accelerated particles. It was demonstrated by Ng & Reames (2008) that a 2500 km/s shock in typical coronal conditions can accelerate protons up to GeV energies in 10 minutes.
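The dependence of the Alfvénic Mach number on shock speed and upstream plasma conditions can be sketched as follows. This is a minimal estimate assuming a pure proton plasma; the coronal values used in the note below are illustrative, not the reconstructed shock parameters of section 3.

```python
from math import sqrt, pi

MU0 = 4e-7 * pi      # vacuum permeability, T m/A
M_P = 1.6726e-27     # proton mass, kg

def alfven_speed(B_gauss, n_cm3):
    """Alfven speed in km/s for a field B (gauss) and proton density n (cm^-3)."""
    B = B_gauss * 1e-4             # gauss -> tesla
    rho = n_cm3 * 1e6 * M_P        # cm^-3 -> m^-3; pure proton plasma assumed
    return B / sqrt(MU0 * rho) / 1e3

def alfven_mach(v_shock_kms, v_wind_kms, B_gauss, n_cm3):
    """Alfvenic Mach number of a shock moving at v_shock through a
    radially flowing wind of speed v_wind (both in km/s)."""
    return (v_shock_kms - v_wind_kms) / alfven_speed(B_gauss, n_cm3)
```

For example, with B = 0.3 G and n = 10⁶ cm⁻³ (plausible a few solar radii out), a 2000 km/s shock in a 300 km/s outflow gives M_A ≈ 2.6, inside the quoted 2–4 range.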
Also, the shock acceleration scenario is supported by in situ energetic particle observations (albeit at large heliocentric distances) when the shock of a CME or a corotating interaction region impacts an observing spacecraft.
Other researchers have used SDO observations to model the expanding coronal shock and its interaction with open field lines onto which particles would be released. These studies allowed comparisons with SEPs observed in interplanetary space. Kozarev et al. (2015) developed methods similar to those in the present study and combined a geometric model of the expanding shock front with the PFSS model of the coronal magnetic field. Lario et al. (2014, 2017) used a slightly different multipoint 3D reconstruction method than ours and used both PFSS and MHD MAST models to derive magnetic connectivity between the Sun and 1 AU to understand the longitudinal spread of SEPs at 1 AU; both models are presented in detail in Sections 3.1 and 3.2 of this work. Neither of these studies addressed the acceleration of particles that return to interact in the atmosphere.
The SEPs measured near 1 AU could be the interplanetary counterpart, i.e., the anti-sunward rather than the sunward-propagating particles, of the SGRE events. Therefore the flux and the maximal energy of SEPs at 1 AU (discussed in section 2.2) could provide additional clues regarding the acceleration efficiency of the candidate accelerator and may also provide an estimate of the energetic particle flux precipitating on the solar surface. For this purpose we would need to know the properties of the shock interaction with the magnetic field lines leaving the Sun and with those returning to it. Because the solar corona is not yet accessible to in situ measurements, we rely here on remote sensing and high-energy radiation diagnostics to infer the conditions in which particles are accelerated.
In this study we concentrate on deriving the shock properties of three far-side CME events that were associated with γ-ray emissions of very different intensities and durations. Section 2 presents an observational overview and analysis of the three events measured in radio, visible, extreme ultraviolet (EUV), and high-energy X-ray and γ-ray radiation. The measurements of energetic particles in situ by distant spacecraft are reviewed in section 2.2 and in appendix A. We use a combination of reconstruction techniques and numerical models to derive the time-dependent properties of the expanding shock wave (velocity, Mach number, and geometry) in section 3. In section 4 we test the idea that long-duration γ-ray emissions result from energetic particles produced by the coronal shock region that is magnetically connected with the visible solar disk. We discuss the main findings of the study and draw conclusions in section 6.
The three far-side eruptions on 2013 Oct 11, 2014 Jan 06, and 2014 Sept 01 were observed by a number of ground- and space-based observatories. We first discuss the observations of the flares and the time-extended γ-ray measurements. In the next section we discuss the CME observations, the evolution of the shock front, and magnetic connections to spacecraft that measure the SEPs discussed in the last section.
2.1 Overview of flares and γ-ray observations
In Figures 1–3 we plot the ‘light-bucket’ flux time histories of >100 MeV γ-rays observed by LAT in the three events discussed in this paper. In the main figure, we plot the fluxes within 10° of the Sun in the hours around the flare. Discrete LAT observing intervals with significant solar exposure are typically about 20–40 min in duration and occur every other orbit. The background fluxes are predominately comprised of Galactic, extragalactic, and quiescent solar photons. Details of these and 26 other sustained-emission γ-ray events observed by Fermi are discussed in Share et al. (2017, to be submitted).
2013 Oct 11 event
On 2013 Oct 11 at 07:01 UT a flare occurred from the active region (AR) located at N21E106 and was observed in soft X-rays by the Geostationary Operational Environmental Satellite network (GOES) and the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft. In addition, a wealth of EUV, visible, radio, X-ray, and γ-ray radiation was observed by the Solar-Terrestrial Relations Observatory (STEREO; Kaiser et al., 2008), the Solar and Heliospheric Observatory (SoHO), the Solar Dynamics Observatory (SDO; Lemen et al., 2012), and the RHESSI, GOES, and Fermi spacecraft (see, e.g., Pesce-Rollins et al., 2015a; Ackermann et al., 2017). The AR was 10 degrees behind the east limb as viewed from Earth at the time of the flare. A drifting metric type II radio burst was measured between 07:10 and 07:20 UT by the Culgoora spectrograph, with a drift speed of 924 km/s, close to the CME/shock speeds derived in Figure 6(d) during that time interval, as shown later in this work.
As the flare was located beyond the east limb of the Sun, the soft X-ray emission recorded by GOES came from the upper corona. The emission from this vantage point lasted 44 min and is shown by the dashed vertical lines in Figure 1 and by the dashed curve in its inset. The Solar Assembly for X-rays (SAX) instrument on MESSENGER (Schlemm et al., 2007) directly observed soft X-rays from the flare site (red dots) and showed that the 1–4 keV emission from the flare peaked more rapidly than that observed by GOES. The onset of the CME occurred about 7 min after the start of the soft X-ray emission and about 3 min before type II metric radio emission was detected. The green dots represent the derivative of the 1–4 keV emission observed by SAX and provide a surrogate for the time profile of hard X-ray emission from the flare site. Hard X-ray emission from the flare began at about 07:00 UT and peaked 7 min before the peak in soft X-rays. Fermi/GBM observed a precipitous rise in 50–100 keV X-ray emission at about 07:08 UT, which appears to reflect the time when flare hard X-rays in the corona became visible above the limb of the Sun as viewed from Earth. The RHESSI hard X-ray images up to 50 keV indicate that the source of the emission was above the limb of the Sun (Pesce-Rollins et al., 2015a). We fit the background-subtracted hard X-ray spectrum observed from 15–200 keV between 07:09 and 07:12 UT; the emission appears to extend up to 100 keV and can be fit equally well by a spectrum of electrons with a power-law index of –4.8 interacting in a thick target, or a spectrum with a power-law index of –3 interacting in a thin target.
The LAT had good solar exposures every other orbit on 2013 Oct 11. The exposure from 06:58–07:40 UT captured all of the time in which hard X-ray emission was observed. As shown in the figure, >100 MeV emission was observed only during this first orbit. There was no evidence for emission in the next exposure three hours later, between 10:16–10:50 UT. The source-class data plotted at 1 min resolution in the inset reveal an increase in >100 MeV emission beginning at 07:15 UT that peaks in 5 min and falls back to background by 07:40 UT. The centroid of the >100 MeV emission observed by LAT is consistent with a location near the east limb of the Sun at N03E62, with a 1σ range in longitude from E39 to just above the eastern limb (Pesce-Rollins et al., 2015a). The background-subtracted γ-ray spectrum from 07:14–07:30 UT is consistent with emission from pion decay produced by a proton spectrum >300 MeV following a differential power law with an index of –3.7 ± 0.2, with no evidence for spectral variation during the rising and falling phases. A better fit might be achieved with a proton spectrum rolling over at energies above 500 MeV. These measurements indicate that the protons producing the γ-ray emission interacted deep in the solar atmosphere on the visible disk as viewed from Earth.
2014 Jan 06 event
On 2014 Jan 06 a C2.2 flare occurred at about 07:30 UT (Thakur et al., 2014) from an AR located at S8W110, and a large filament eruption was observed at 07:44 UT. The GOES soft X-ray flare class does not represent the real magnitude of the event because the AR was about 20 degrees behind the solar limb. Metric type II emission was observed between 07:45 and 08:05 UT by the Learmonth spectrograph, but no drift speed could be derived from the spectrograph data. Type III radio emission started at 07:45 UT, as seen by the radio and plasma wave receivers on board both the STEREO and Wind spacecraft. The γ-ray emission from this behind-the-limb flare was reported by Pesce-Rollins et al. (2015b) and Ackermann et al. (2017). However, as can be seen in Figure 2, the excess between 07:55 and 08:30 UT was not significant in our light-bucket analysis and did not meet our criteria for detection above background. We find no evidence for variation in the >100 MeV flux in that time. This solar event was associated with energetic particles measured near Earth and deserves a thorough analysis in order to understand why the γ-ray fluxes were so weak.
2014 Sept 01 event
On 2014 Sept 01 a solar flare occurred at about 10:54 UT and lasted 40 minutes, as deduced from the on-disk observations made by the MESSENGER/SAX soft X-ray monitor. The AR was located at N14E126, corresponding to 36 degrees behind the east limb as viewed from Earth. Metric type II emission was observed with a drift speed of 2079 km/s by the Nançay ORFEES spectrograph between 11:00 and 11:50 UT (Pesce-Rollins et al., 2015b). In addition, the Nançay radioheliograph imaged the type II burst from 11:00 UT in a region situated near the forming CME. A type III burst was detected at 11:02 ± 00:02 UT by the STEREO and Wind radio and plasma wave receivers.
The flare associated with this event occurred behind the solar limb at about E126. At this location the coronal region above the flare was not visible at Earth and no soft X-ray emission was observed by GOES. However, the flare was observed by the Solar Assembly for X-rays (SAX) instrument on MESSENGER (Schlemm et al., 2007). The soft X-ray time profile observed by SAX is plotted as the dashed curve in the inset of Figure 3; this emission had a duration of about 40 min. From SDO 193Å, 211Å images, we estimate that the onset of the associated CME occurred about 10:57 UT, at the rise of the inferred flare hard X-ray emission. The latter was computed by taking the derivative of the SAX soft X-ray time profile plotted as a solid curve in the inset.
Fermi/LAT had good solar exposures every other orbit and exposures about four times smaller in the intervening orbits. Such a 25% exposure occurred during the behind-the-limb flare, from about 11:06–11:20 UT, when the Sun was at a large angle with respect to the LAT telescope axis. Source-class LAT data could be used to study the >100 MeV γ-ray emission because the intense hard X-ray emission from the flare did not reach Fermi. At these large solar viewing angles, the detector response is small and not accurately determined. There were also two good exposures, 12:26–12:58 and 15:36–16:08 UT, during which LAT had significantly higher sensitivity to search for delayed high-energy emission. The >100 MeV flux plotted in the main figure just after the flare was one of the largest observed by LAT during its eight-year mission. The hourly fluxes are plotted logarithmically to reveal the sustained emission, which lasted about 3 hours. In the inset we plot >100 MeV fluxes (individual data points) derived from the Pass 8 source-class data at 1-min resolution, along with arbitrarily scaled 100–300 keV rates (solid curve) observed by GBM. Both the γ-ray and the 100–300 keV X-ray emission, as viewed from Earth, appear to rise at 11:04 UT, about 7 min after the onset of the hard X-ray emission observed from the flare as viewed by MESSENGER. The hard X-rays observed by GBM peaked by 11:08 UT and fell exponentially, while the >100 MeV γ-ray flux peaked about 5 min later and appeared to fall as rapidly as it increased. The sustained γ-ray emission observed between 12:26 and 12:58 UT is likely to be the tail of this earlier post-impulsive-phase emission.
We have fit the background-subtracted >100 MeV γ-ray data with a pion-decay spectrum produced by >300 MeV protons following a differential power-law spectrum and interacting in a thick target at a heliocentric angle of 85°. Our fits suggest that the spectrum hardened during the event, with spectral indices of –4.25 ± 0.15, –3.85 ± 0.1, and –3.45 ± 0.35 at 11:06–11:12, 11:12–11:20, and 12:26–12:58 UT, respectively. We searched for the presence of the 2.223 MeV neutron capture line in the GBM BGO spectrum between 11:04 and 11:30 UT and set a 95% confidence upper limit of 0.016 γ cm⁻² s⁻¹ on its flux. Comparing the >100 MeV γ-ray fluence with the upper limit on the 2.223 MeV line fluence, we estimate that the proton power-law spectral index between 30 and 300 MeV was harder than –3.4; this was estimated assuming that the protons interacted at a heliocentric angle of 85°. The index would be even harder for smaller heliocentric angles. This suggests that the spectrum of protons producing the pion-decay emission steepened above 300 MeV.
The NaI detectors on GBM observed a hard X-ray spectrum extending up to at least 1 MeV. Our fits to the 20–900 keV spectrum between 11:06 and 11:16 UT indicate that the hard X-rays were produced by a power-law electron spectrum with index –3.1 and a low-energy cutoff of 130 keV for a thick-target model, with a 5% confidence in the quality of the fit. With this high low-energy cutoff in the electron energy we would not expect to observe significant soft X-ray emission from chromospheric evaporation on the visible disk, and such emission was not detected by GOES. The fit to the spectrum for a thin-target model is significantly worse than for a thick target. There is no evidence for variation of the electron spectrum over the event. We also fit the GBM BGO spectrum up to 40 MeV with an electron spectrum having a power-law index of –3.1 interacting in a thick target. Thus the results indicate that the bremsstrahlung originated from a single spectrum of electrons with a high low-energy cutoff and a power-law index of –3.1, extending up to tens of MeV and interacting in a thick target.
The fact that both the protons producing the γ-ray emission and the electrons producing bremsstrahlung up to about 40 MeV interacted in a thick target on the visible disk, and that both emissions commenced at the same time, suggests that they had a common origin. In the next sections, we investigate whether this common origin could be the CME shock.
2.2 Solar energetic particles measured in situ
Strong solar energetic particle fluxes were also measured in situ at widely separated vantage points during the three γ-ray events by STEREO, by spacecraft situated along the Sun-Earth line, i.e., SoHO, Wind, and the Advanced Composition Explorer (ACE), and at Mars. Particle flux measurements provide relevant diagnostics of the particle acceleration in the low corona, which are exploited later in this article to reach a comprehensive interpretation of the SGRE and SEPs in terms of the expanding coronal shock. The in situ measurements are highly complementary for the following reasons.
- The onset of the relativistic electron flux and its intensity measured in situ at the spacecraft, along with the magnetic connectivity of the spacecraft to the Sun, provide a direct measurement of the level of energization of particles along a given magnetic field line and information on the energetic electron solar release times.
- The peak level of the particle flux and its integrated value over the event provide information on the number of accelerated particles that propagate outward from the Sun along specific field lines. With certain assumptions, the particle flux can also be used to estimate the number and energy of particles propagating sunward (e.g., Cliver et al., 1993).
- Solar energetic particle spectra can be compared with the particle spectra at the Sun derived from hard X-ray and γ-ray emission, although the in situ spectra are likely modified by particle propagation and diffusion in the heliosphere.
- When measurements from several well-separated spacecraft are available, the rapid increase of the SEP flux at these probes provides information on particles accelerated across a broad range of heliographic longitudes (Reames et al., 1996; Lario et al., 1998; Dresing et al., 2012; Rouillard et al., 2012; Lario et al., 2014).
Several studies have jointly discussed the sustained γ-ray emission and SEPs (e.g., Ramaty et al., 1987; Ryan, 2000; Chupp & Ryan, 2009; Ackermann et al., 2017). Some recent works studied SEPs jointly with the expanding coronal shock using methods similar to those developed in the present study. Because the main focus of this paper is the relation between the SGRE and the expansion of coronal shocks, we placed our analysis of the SEPs measured in situ in the appendix. The conclusions drawn from this analysis are that, considering all spacecraft, the strongest SEPs were measured during the 2014 Sep 01 event. The peak SEP fluxes measured on 2013 Oct 11 and 2014 Jan 06 were one order and three orders of magnitude lower than for the 2014 Sep 01 SEP event, respectively. This hierarchy follows that of the SGRE peak intensities for the three events. Probes well connected to the CME source region measured a significant increase of the highest SEP flux between 10 MeV and 100 MeV. Probes connected to the solar surface far from the CME source longitude measured much lower levels of SEP flux. The STEREO electron flux was 50-100 times higher during the 2014 Sep 01 event than during the 2013 Oct 11 event; we use this observation in the discussion section to interpret the relative strength of any associated sustained electron emission as inferred from 100 keV bremsstrahlung. As shown in the following, the in situ measurements suggest that a common accelerator is acting in the corona to produce the γ-ray and the SEP events. The onset times of the particle flux increases measured in the in situ data are included in Table 1.
2.3 Coronal mass ejection propagation and 3D determination of the pressure wave fronts
In our study of the three CMEs associated with the behind-the-limb events, we used complementary data sets from STEREO, SoHO, and SDO. For each event, the CME eruption and propagation were observed in EUV (SDO and STEREO) and in white-light images (SoHO and STEREO) from at least two of the three available vantage points.
Figure 4 shows the positions of the four observatories used to triangulate the CME extent in three dimensions. The SDO location is the same as Earth and SoHO on these scales. Also shown are the fields of view of the instruments and the locations of the four inner planets. A first rough estimate of the directions of propagation for the 2013 Oct 11 and 2014 Jan 06 events was deduced from solar wind imaging and is given in the catalog of CMEs produced by the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) project.
To derive the 3D extent of the CME low in the corona we used EUV observations (SDO and STEREO) and white-light observations (SoHO and STEREO). Our requirement was that imaging data were acquired from at least two of the three available vantage points. The 3D triangulation technique employed is described in detail in Rouillard et al. (2016). In a nutshell, the technique tracks in 3D the region perturbed by the pressure front generated by the expansion of the CME. This region usually consists of a bubble visible off-limb in combined EUV and white-light images, as well as a wave front visible on disk in EUV images (Vourlidas & Ontiveros, 2009; Patsourakos & Vourlidas, 2009). In this reconstruction technique the wave front is considered to be the intersection between the 3D bubble, modeled as an ellipsoid, and the coronal region located just above the solar surface imaged in EUV. The bubble-EUV wave system is considered to be the outermost extent of the region perturbed by the CME as it forms and expands in the corona. Areas of the bubble surface moving at speeds exceeding the local characteristic speed of the coronal medium (through which the bubble is propagating) are the locations where a shock wave is likely to form.
We show a selection of images for the three events from the three instruments in Figure 5a,c,e. We manually extracted the location of the EUV wave and the CME bubble at all available times and from all available viewpoints (EUVI and coronagraphs on board the SDO, SoHO, and STEREO spacecraft). We used EUV and white-light images from the three vantage points for the 2013 Oct 11 and 2014 Jan 06 events, but only from two vantage points for the 2014 Sep 01 event because no STEREO-A data were available. The manually selected points are plotted as red crosses on the images of the CMEs. The CME events erupted on the far side of the Sun, and it was essential that the STEREO EUV instruments provided on-disk images of the source regions situated on the far side. This allowed us to track the expansion of the CMEs during the early stages of their formation; in particular, we tracked the EUV wave expansion. The manually selected points plotted as red crosses on the images are also plotted on the cartoons of the CMEs in rows (b), (d), and (f) of Figure 5. We assumed that a 3D ellipsoid could model the pressure front and chose the dimensions of the ellipsoid (defined by its three axis values and its central position) such that the locations of the red crosses, as viewed from the different vantage points, best matched the rendered ellipsoid. This included the two constraints that the ellipsoid, as viewed off the solar limb, passed through the outline of the bubble and that the intersection of the ellipsoid with the solar surface passed through the EUV front. We obtained a set of fitted ellipsoids covering the first hour of the CME expansion as viewed from Earth and from at least one of the STEREO spacecraft.
We follow the method outlined in Rouillard et al. (2016) to compute the 3D expansion speed of the shock surface. This procedure first involves selecting a grid of points on the fitted ellipsoid at a given time step and then finding the locations of the closest respective points on the ellipsoid at the previous time step. The distance traveled between these points was divided by the time interval to obtain an estimate of the 3D speed of the shock surface as a function of time. We list the maximum CME speeds for the three events in Table 2.
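The speed-extraction step described above can be sketched as follows, under simplifying assumptions (axis-aligned ellipsoids and a brute-force nearest-neighbor search; all names and the toy geometry are ours, not the authors' code):

```python
import numpy as np

def ellipsoid_points(center, axes, n_theta=20, n_phi=40):
    """Sample points on an axis-aligned ellipsoid. The real fits also
    carry an orientation, ignored here for brevity."""
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    a, b, c = axes
    pts = np.stack([a * np.sin(TH) * np.cos(PH),
                    b * np.sin(TH) * np.sin(PH),
                    c * np.cos(TH)], axis=-1).reshape(-1, 3)
    return pts + np.asarray(center, dtype=float)

def surface_speeds(pts_prev, pts_now, dt):
    """For each point on the current ellipsoid, find the distance to the
    closest point on the previous one and divide by dt: the local 3D
    expansion speed of the front."""
    d = np.linalg.norm(pts_now[:, None, :] - pts_prev[None, :, :], axis=-1)
    return d.min(axis=1) / dt

# Toy usage: a front growing from a 1 R_sun sphere to a (1.6, 1.3, 1.3)
# R_sun ellipsoid in 300 s; the fastest expansion is at the 'nose'.
RSUN_KM = 6.957e5
prev = ellipsoid_points((0, 0, 0), (1.0, 1.0, 1.0))
now = ellipsoid_points((0, 0, 0), (1.6, 1.3, 1.3))
v = surface_speeds(prev * RSUN_KM, now * RSUN_KM, 300.0)
v_max = v.max()   # nose speed, ~1400 km/s for this toy geometry
```

The maximum of `v` lands on the long axis of the second ellipsoid, reproducing the behavior described in the text where the largest speeds are found at the apex of the structure.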
The results of our analysis are provided in Figure 6 for the three events. In panels (a-c) we show the extent of the pressure waves in the ecliptic plane 30 minutes after the onset of the radio type III bursts. The speeds at different points of the shock-front ellipsoids are color coded. The maximum speed at any one time is typically found at the nose (apex) of the structure. For example, the nose of the 2014 Sep 01 event reached a speed of 2140 km/s; this is by far the fastest of the three events considered in this study. The maximum speeds reached at any time during the three CME expansions are given in Table 2. In panels (d-f) we plot the time evolution of the radial extent and the maximal speed of the shock front for each event. The 2013 Oct 11 event accelerated rapidly in the low corona from 07:10 to 07:20 UT up to 1100 km/s. The CME then continued its acceleration much more gradually, reaching 1500 km/s by 08:45 UT. The 2014 Jan 06 pressure wave was fitted over a 1-hour time interval. Its initial acceleration phase started at 07:40 UT and continued up to 08:10 UT, when the maximum speed of 1880 km/s was reached. After this phase the pressure wave started decelerating. The acceleration phase was more abrupt and shorter lived during the 2014 Sep 01 event; it reached 2630 km/s between 10:55 and 11:05 UT. The deceleration phase started just after, and the front speed was down to 1500 km/s by 12:30 UT. These different expansion profiles are likely the result of different properties of the magnetic piston driving the eruption of the CME.
Also shown in panels (a-c) of Figure 6 are representative magnetic field lines connecting the different spacecraft considered in this study to the expanding pressure front. We followed the procedure outlined in Lario et al. (2014) and Rouillard et al. (2016), which assumes the interplanetary medium consists of Parker spirals defined by the solar wind speeds measured at the connected spacecraft; these speeds are given in Table 2. This spiral is traced from each spacecraft down to 15 R☉, below which the coronal magnetic field is obtained from a 3D MHD model. This Magneto-Hydrodynamic Around a Sphere Thermodynamic (MAST) model is described in Section 3.2 (Lionello et al., 2009). This mapping of magnetic connectivity is exploited to discuss the complementary SEP measurements acquired in situ during the three SGRE events analyzed here.
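The Parker-spiral part of this mapping can be illustrated with a short sketch (equatorial plane only; the constants and the 400 km/s example wind are illustrative, not values from the paper):

```python
import numpy as np

RSUN_KM = 6.957e5     # solar radius [km]
AU_KM = 1.496e8       # astronomical unit [km]
OMEGA_SUN = 2.66e-6   # sidereal solar rotation rate [rad/s]

def parker_spiral_longitude(r_km, r_ref_km, lon_ref_rad, v_sw_kms):
    """Heliographic longitude of the Parker-spiral field line passing
    through (r_ref, lon_ref), evaluated at radius r (equatorial plane):
    lon(r) = lon_ref + (Omega / v_sw) * (r_ref - r).
    The solar footpoint therefore lies at larger longitude than the
    spacecraft (the field line curls westward going inward)."""
    return lon_ref_rad + (OMEGA_SUN / v_sw_kms) * (r_ref_km - r_km)

# Toy usage: map a spacecraft at 1 AU and longitude 0, in a 400 km/s
# wind, down to 15 Rsun; below that the text switches to the MAST field.
lon15 = parker_spiral_longitude(15 * RSUN_KM, AU_KM, 0.0, 400.0)
lon15_deg = np.degrees(lon15)   # ~50 deg of spiral winding
```

For a nominal 400 km/s wind the spiral winds through roughly 50 degrees of longitude between 1 AU and 15 R☉, which is why the measured wind speed at each spacecraft matters for the connectivity estimate.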
3 Magnetic connectivity of the shock front to the Sun and space
We developed a model to test whether particles accelerated at the shock front of a CME can explain the sustained >100 MeV γ-ray emission observed by Fermi. This is based on our ellipsoid approximation for the pressure wave and knowledge of the coronal magnetic field lines that this pressure wave intersects as a function of time, some of which can reach the visible disk of the Sun as viewed from Earth. We derive the magnetic connectivity using two different methods in the next two sections. In the last section, for the three flares, we project the velocities of the shock as it crosses these field lines onto the locations where they intersect the solar photosphere at different times in its expansion. A similar process allowed us to project these velocities onto field lines reaching the spacecraft making in situ particle measurements.
3.1 The Potential Field Source Surface (PFSS) model
The PFSS reconstruction technique (e.g., Wang & Sheeley, 1992) can be used to model the coronal magnetic field between the photosphere and a source surface, typically placed at 2.5 R☉. This model only requires a magnetogram of the Sun on a given date. The two main assumptions of this technique are the absence of currents in the corona and the presence of a spherically uniform source surface. The latter assumption forces a rapid expansion of the magnetic flux tubes between the solar surface and the source surface, where the field lines are forced to be radial. While this model provides fairly accurate estimates of the global magnetostatic topology of the corona, it does not give information about other required plasma quantities, such as local density and temperature, and therefore cannot be used alone to provide the Mach number of the shock. The version of the PFSS model used here comes from the Lockheed Martin Solar And Astrophysics Laboratory (LMSAL).
3.2 MAST model
The Magneto-Hydrodynamic Around a Sphere Thermodynamic (MAST) model is an MHD model (Lionello et al., 2009) that includes detailed thermodynamics, with realistic energy equations accounting for thermal conduction parallel to the magnetic field, radiative losses, and coronal heating. The effect of Alfvén waves on the expanding coronal plasma is also included via the so-called Wentzel-Kramers-Brillouin approximation. The MAST model provides the distribution of coronal density and temperature that PFSS cannot. The temperature at the lower boundary in this model is 20,000 K (approximately the upper chromosphere) and the transition region is included in the model. Special techniques are used to broaden the extent of the transition region so that it is resolvable on 3D meshes while still giving accurate results for the coronal part of the solution. The coronal heating description is empirical, and the coronal densities arise entirely from the heating and its interaction with the other terms in the fluid set of mass, momentum, and energy equations. The simulation results used here employ SDO HMI magnetograms on a given date as input at the inner boundary of the simulation grid. The simulation grid extends from 1 to 30 R☉. The weak points of these MHD simulations are a lower grid resolution than in the PFSS model, magnetogram smoothing, and numerical diffusion. The latter can cause magnetic field lines to display unphysical behavior at large heliocentric distances. This does not affect our estimates of the connectivity since we take the intersection point between the Parker spiral and the MHD-integrated field line at lower radii.
3.3 Shock front connectivity with the solar surface
We combined our reconstruction of the CME pressure front with the magnetic field models to determine the CME velocities where the shock front passes field lines that reach the solar photosphere visible from Earth. The results of the MAST and PFSS models were provided on a 3D discrete numerical grid in heliocentric coordinates. We then interpolated the coronal parameters from the model grids to the grid of points on the shock surface that we approximated with an ellipsoid. In doing this we followed the procedure described in Rouillard et al. (2016), which uses trilinear interpolation between two grids in spherical coordinates (a volume-weighting technique).
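A minimal sketch of the volume-weighting (trilinear) interpolation step, for a scalar defined on a regular spherical grid (our own implementation, not the one used in the paper):

```python
import numpy as np

def trilinear_spherical(grid_vals, r_ax, th_ax, ph_ax, r, th, ph):
    """Trilinear (volume-weighted) interpolation of a scalar defined on
    a regular (r, theta, phi) grid, at a single query point."""
    def bracket(ax, x):
        # index of the lower grid node and the fractional weight toward
        # the upper node, clipped at the grid edges
        i = int(np.clip(np.searchsorted(ax, x) - 1, 0, len(ax) - 2))
        w = (x - ax[i]) / (ax[i + 1] - ax[i])
        return i, float(np.clip(w, 0.0, 1.0))
    i, wr = bracket(r_ax, r)
    j, wt = bracket(th_ax, th)
    k, wp = bracket(ph_ax, ph)
    v = 0.0
    for di, wi in ((0, 1 - wr), (1, wr)):
        for dj, wj in ((0, 1 - wt), (1, wt)):
            for dk, wk in ((0, 1 - wp), (1, wp)):
                v += wi * wj * wk * grid_vals[i + di, j + dj, k + dk]
    return v

# Toy usage: a density falling as 1/r^2 on a coarse grid, interpolated
# at r = 2.5 (value lands between the r = 2 and r = 3 grid nodes)
r_ax = np.linspace(1.0, 10.0, 10)
th_ax = np.linspace(0.0, np.pi, 7)
ph_ax = np.linspace(0.0, 2 * np.pi, 13)
R = r_ax[:, None, None] * np.ones((10, 7, 13))
dens = 1.0 / R**2
val = trilinear_spherical(dens, r_ax, th_ax, ph_ax, 2.5, 1.0, 1.0)
```

In practice a library routine (e.g., a regular-grid interpolator) would replace the hand-rolled version; the point is only that each shock-surface point receives a weighted average of its eight surrounding grid nodes.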
This procedure involves certain approximations. A coronal shock marks the outer boundary of the region influenced by an erupting CME; therefore, upstream of the shock location we enter the background solar corona. We modeled this background corona using the MAST MHD model or PFSS magnetostatic reconstructions, along with some assumptions about the structure of the interplanetary magnetic field. In this way we were able to estimate the magnetic link between the coronal shock and the spacecraft making in situ measurements. Downstream of the shock, however, two types of regions are encountered: first, the magnetic piston that produces the coronal shock (the so-called CME core), with its often complex topology involving magnetic flux ropes; and, second, the background corona disturbed by the expanding shock wave. The magnetic piston has an important effect on the post-shock coronal plasma. It exerts a significant dynamical pressure and compresses the downstream fluid toward the outer parts of the expanding pressure front. Inclusion of the magnetic piston in our model would reconfigure the magnetic field lines that currently connect the nose of the pressure wave to the vicinity of the source region, shifting them toward the flanks of the eruptive structure. Other processes become viable with the inclusion of a piston, including magnetic reconnection, which may also reconfigure the magnetic field topology. Our understanding of the detailed topology of CME cores is not sufficiently advanced at this stage to model this region with enough detail for the purpose of tracking particles. We therefore ignored the presence of the piston and modeled the magnetic connectivity as if the ellipsoidal shock were in an unperturbed background corona both upstream and downstream of the shock.
We considered both open and closed magnetic field lines and traced these lines from the shock sunward and anti-sunward until reaching a boundary, either at the photosphere or near 1 AU. For magnetic loops, a shock therefore connects to two distinct locations on the photosphere. In Figure 7 we present 10 percent of the 5041 open and closed magnetic field lines that we traced through the simulation grids for the PFSS (rows a and c) and MAST (rows b and d) coronal models for the three events. The plotted field lines intersect the pressure front (shown in gray) some 10 (a,b) and 30 (c,d) minutes after the onset of the type III radio burst. The color coding of the field lines is defined by the value of the shock speed at the point of intersection of the line with the shock surface, with the red lines mostly associated with the fast noses of the CMEs.
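Field-line tracing of this kind amounts to integrating along the local field direction until a boundary is reached; a sketch using an illustrative dipole field in place of the MAST or PFSS solution (all names and parameters are ours):

```python
import numpy as np

def trace_field_line(b_func, x0, ds=0.01, r_min=1.0, r_max=5.0,
                     max_steps=5000):
    """Trace a field line from seed point x0 (in solar radii) by
    stepping along the local unit field vector with a midpoint (RK2)
    scheme, until it hits the inner boundary (photosphere, r_min) or
    leaves the domain (r_max). b_func returns B at a 3D position."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(max_steps):
        b1 = b_func(x)
        k1 = b1 / np.linalg.norm(b1)
        b2 = b_func(x + 0.5 * ds * k1)
        x = x + ds * b2 / np.linalg.norm(b2)
        path.append(x.copy())
        r = np.linalg.norm(x)
        if r <= r_min or r >= r_max:
            break
    return np.array(path)

def dipole_b(x):
    """Point dipole with moment along z (arbitrary units); a stand-in
    for the model coronal field."""
    m = np.array([0.0, 0.0, 1.0])
    r = np.linalg.norm(x)
    return 3.0 * x * np.dot(m, x) / r**5 - m / r**3

# Seed on a closed dipole line above the surface: the trace should
# return to the inner boundary, i.e., a loop footpoint.
line = trace_field_line(dipole_b, np.array([1.5, 0.0, 0.5]))
end_r = np.linalg.norm(line[-1])   # ~1 solar radius (photosphere)
```

For an open-field model the same routine exits at `r_max` instead, which is how open and closed connectivity are distinguished in this kind of tracing.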
Overall, the PFSS and MAST models provide very similar field-line connectivities between the shock front and the solar surface; only occasionally does a point on the pressure front connect to solar surface points that are considerably apart in the two models. This difference may be due to the differing magnetogram resolutions, which are higher for PFSS. In addition, the presence of a source surface in the PFSS model can force a stronger expansion of magnetic flux tubes between the photosphere and the source surface than modeled in MAST. Both effects are important, and it is not obvious which one is more realistic. We typically find a discrepancy in the field-line mapping between the two models of 10 degrees in heliocentric coordinates at the photosphere. We consider this difference acceptable for this work. We used the MAST model, which provides the necessary coronal parameters to derive the properties of the pressure wave, in our ensuing analyses.
For both numerical techniques, we typically find that high-speed shock fronts are connected to the visible solar disk within 10 minutes after the first appearance of the CME front surface in EUV images. As discussed later, the γ-ray emission observed by Fermi-LAT also begins about 10 minutes after the CME onsets for the 2013 Oct and 2014 Sep events.
A clearer way to present the access of shock-accelerated particles to the solar atmosphere is to plot the results shown in Figure 7 on Carrington maps. As the MAST and PFSS models give comparable results, and because the MAST model allows us to study more physical aspects of the shock front, we only plot the MAST results in Figure 8. The figure presents six Carrington maps showing the locations of the photospheric footpoints of magnetic field lines connected to the shock surface 10 and 30 min after the CME onsets for the three events. The visible solar disk is delimited by dashed vertical lines and its center is indicated by a blue cross. For each footpoint we compute the average value of the shock-front speed magnetically connected to that point of the solar surface. The highest front speeds mapping to the visible disk occurred during the 2014 Sep 01 event, even though its flare location is the furthest beyond the limb of the three events. Once again it is clear that shock fronts with speeds exceeding 1000 km/s were connected to the visible solar disk within 10 minutes of the CME onsets.
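Building such Carrington maps reduces to converting each footpoint position to heliographic longitude/latitude and flagging the footpoints on the Earth-facing hemisphere; a minimal sketch with simplified frame conventions (all names are ours):

```python
import numpy as np

def footpoint_lonlat(xyz):
    """Cartesian footpoint positions (in a Carrington-aligned frame) to
    longitude/latitude in degrees."""
    arr = np.asarray(xyz, dtype=float)
    x, y, z = arr.T
    lon = np.degrees(np.arctan2(y, x)) % 360.0
    lat = np.degrees(np.arcsin(z / np.linalg.norm(arr, axis=-1)))
    return lon, lat

def on_visible_disk(lon_deg, lat_deg, lon_earth_deg):
    """True where a footpoint lies on the hemisphere facing Earth,
    i.e., within 90 deg of the sub-Earth Carrington longitude (the
    region between the dashed lines in Figure-8-style maps)."""
    d = (np.asarray(lon_deg) - lon_earth_deg + 180.0) % 360.0 - 180.0
    return np.abs(d) < 90.0

# Toy usage: three unit-sphere footpoints; with Earth at longitude 0,
# the first and third fall on the visible hemisphere
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
lon, lat = footpoint_lonlat(pts)
vis = on_visible_disk(lon, lat, lon_earth_deg=0.0)
```

The per-footpoint averaging of the connected shock speed (the color scale of Figure 8) would then be a simple binned mean over these longitude/latitude coordinates.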
During the first hour of the CME eruption, the distribution of footpoint locations is associated both with coronal holes, where open field lines are rooted, and with coronal loops that extend from the solar surface to the shock. As the shock expands, the size of this pattern increases and large front speeds connect to an increasing surface area. Even one hour after onset, when the front is connected to field lines intersecting the entire visible Sun, 30% of the field lines are closed magnetic loops and are not connected to the interplanetary medium. Most interestingly, within 10-15 minutes of the flare onset, the solar disk visible from Earth is magnetically connected to fast shocks for all three events, even though there was a large disparity in speed between the different events, reflecting the different kinematic properties already observed in Figure 6.
4 Comparing physical parameters of the shock – magnetic field-line model with γ-ray observations
In addition to the magnetic connectivity, it is possible to derive the shock-normal angle with respect to the local magnetic field orientation and different Mach numbers (the latter only for the MHD model). For the 2012 May 17 CME, it was shown that regions of high fast-magnetosonic Mach number can develop on the shock front (Rouillard et al., 2016).
As we discussed earlier, the MAST model provides the capability of including physical quantities other than the shock-front speed in the magnetic connection model. These parameters include the magnetic field strength and orientation, plasma temperature, and density at the front location, which enable us to estimate the magnetic field obliquity with respect to the shock normal and the Mach numbers. These parameters are defined by the formulae

θ_Bn = cos⁻¹( |n̂ · B| / |B| ),  M_A = |(V_sh − V_sw) · n̂| / V_A,  M_fm = |(V_sh − V_sw) · n̂| / V_fm,

where n̂ is the local shock-normal vector, V_sh is the local shock velocity, V_A is the Alfvén speed, V_fm is the fast-magnetosonic speed, and V_sw is the background solar wind speed vector. The latter enters the Mach number estimation because the appropriate frame for our calculations is the local fluid frame, i.e., the background solar wind. The use of the fast-magnetosonic Mach number M_fm instead of the Alfvénic Mach number M_A is more relevant here because the magnetic field intensity can drop to very low levels in the coronal neutral region, where magnetic flux tubes expand greatly and can even annihilate owing to the presence of current sheets. The sound speed remains at about the same value in the quasi-isothermal upper corona. Since V_fm retains the contribution from both speeds, we thereby avoid excessively high values of the Mach number in such regions.
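These definitions can be evaluated directly from the upstream plasma state; a sketch in SI units (the toy upstream values are illustrative, and the perpendicular-propagation fast speed sqrt(V_A² + c_s²) is used, consistent with the text):

```python
import numpy as np

def shock_parameters(n_hat, v_sh, v_sw, B, rho, T, mu=0.6):
    """Shock-normal angle (deg) and Alfvenic / fast-magnetosonic Mach
    numbers in the upstream fluid frame, following the definitions in
    the text. SI units; ideal-gas sound speed with gamma = 5/3."""
    mp, kB, mu0, g = 1.6726e-27, 1.3807e-23, 4e-7 * np.pi, 5.0 / 3.0
    n_hat = np.asarray(n_hat, dtype=float)
    n_hat = n_hat / np.linalg.norm(n_hat)
    B = np.asarray(B, dtype=float)
    # obliquity: angle between upstream B and the shock normal
    cos_th = abs(np.dot(n_hat, B)) / np.linalg.norm(B)
    theta_bn = np.degrees(np.arccos(np.clip(cos_th, 0.0, 1.0)))
    # characteristic upstream speeds
    v_a = np.linalg.norm(B) / np.sqrt(mu0 * rho)  # Alfven speed
    c_s = np.sqrt(g * kB * T / (mu * mp))         # sound speed
    v_fm = np.hypot(v_a, c_s)                     # fast-mode speed
    # shock speed relative to the upstream wind, along the normal
    u_n = abs(np.dot(np.asarray(v_sh, float) - np.asarray(v_sw, float),
                     n_hat))
    return theta_bn, u_n / v_a, u_n / v_fm

# Toy upstream corona (all values illustrative): 1 G radial field,
# n = 1e8 cm^-3 hydrogen plasma, T = 1.5 MK, and a 1500 km/s radial
# shock running into a 200 km/s radial wind.
theta, m_a, m_fm = shock_parameters(
    n_hat=[1.0, 0.0, 0.0],
    v_sh=[1.5e6, 0.0, 0.0],    # m/s
    v_sw=[2.0e5, 0.0, 0.0],    # m/s
    B=[1.0e-4, 0.0, 0.0],      # tesla (1 G)
    rho=1e14 * 1.6726e-27,     # kg/m^3
    T=1.5e6)                   # K
```

For these toy values the shock is parallel (θ_Bn = 0) and supercritical (M_fm between 4 and 5), comparable to the supercritical values discussed below.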
Shock regions with Mach numbers larger than the critical Mach number are usually considered efficient particle accelerators. Self-consistent but still idealized simulations indicate that the particle acceleration efficiency of such shocks is high enough that more than 10% of the shock kinetic energy can be transferred to energetic ions (Caprioli & Spitkovsky, 2014). In the specific context of coronal shocks, Ng & Reames (2008) demonstrated that a 2500 km/s quasi-parallel shock propagating through coronal media with typical magnetic fields and sound speeds can accelerate seed protons from several keV to GeV energies in 10 minutes. These authors also showed that both the shock speed and the Mach number are important parameters to consider.
We computed M_fm over the shock-front surface by dividing our estimated speeds over the 3D ellipsoid by the fast-mode speeds obtained from the plasma parameters modeled by MAST. In Figure 9 we project these local M_fm values onto the solar surface by tracing the magnetic field lines connecting the shock to the solar photosphere (as illustrated in Figure 7). The values are plotted onto Carrington maps in the same way as we plotted the CME speeds in Figure 8. We find that the visible disk connects to high M_fm values during all three events as early as 10-15 minutes after the CME onset. Unlike the projected shock-front speed, there is no striking difference between the three events with regard to the M_fm values projected on the visible disk. Each event has a supercritical contribution on the visible disk from early post-flare times. This contribution is clearest for the 2014 Jan 06 event and more marginal for the other two. Assuming that a fast shock with a high Mach number is an efficient particle accelerator, the magnetic connection established early in the event with the visible solar disk may allow these particles to propagate to, and impact, the surface and produce γ-rays visible from Earth. As previously reported by Rouillard et al. (2016) and confirmed for the three events studied here, the highest M_fm occurs in the regions where the magnetic field values are very low and the densities are high. This typically corresponds to the location of the coronal neutral current sheet, which is the source region of the heliospheric current sheet and its associated heliospheric plasma sheet. These regions could be efficient particle accelerators for a number of reasons discussed in Rouillard et al. (2016).
There is also an interesting feature when we visually compare Figures 8 and 9. For regions with roughly equal shock speeds, the highest M_fm is reached at the boundaries of the corresponding coronal holes. This feature is logical because magnetic flux tubes expand more rapidly near the coronal hole boundaries than in the central regions. This effect is most striking for the 2014 Sep 01 event at 11:30 UT.
We now look in more detail at the early temporal evolution of the shock connectivity and magnetic geometry for the 2014 Sep 01 event, which had the fastest CME and a source region situated furthest behind the limb. In Figure 10 we present the first 15 minutes of shock evolution, in 5-minute snapshots, as seen from Earth. The left-hand column (panels (a)-(d)) presents the 3D location of the pressure front at 10:55, 11:00, 11:05, and 11:10 UT, respectively. Magnetic field lines, color coded by shock-front speed, are plotted in the central column (panels (e)-(h)). The same lines, color coded by the inclination of the magnetic field to the shock normal, are plotted in the right column (panels (i)-(l)). As can be seen, there was no clear magnetic connection of any fast-moving shock regions to the visible disk before 11:05 UT, and no magnetic connection at all at 10:55 UT. This also excludes any magnetic connection of the flare region itself to the visible disk, since the shock represents the outermost region disturbed by the CME eruption. The figure also clearly illustrates the quasi-perpendicular nature of the shock that is magnetically connected to the visible disk between 11:05 and 11:10 UT. The associated front speeds for these times range from 500 km/s to 1700 km/s, with a mean value of 800 km/s. This time range also corresponds to the onset and increase of the γ-ray flux observed by Fermi-LAT.
The overall evolution is similar for the two other events: the regions of the early emerging shock front that connected to the visible disk corresponded to the flanks of the pressure wave. These regions propagated transversely to the approximately radial coronal field lines, and therefore the interaction was predominantly quasi-perpendicular. As shown by Rouillard et al. (2016) and recently by Lario et al. (2017), the mostly parallel region of the CME shock relative to the field lines, at least during the early post-flare evolution, corresponds to its 'nose'. The anchor points of the nose-connected field lines for all three events are occulted for an observer at Earth during the early post-flare times. At later times, when the shock front engulfs a significant part of the Sun, the geometry evolves gradually toward a quasi-parallel configuration.
Figure 11 compares the maximum Mach number, the average Mach number, and the average shock geometry for the shock front crossing all the magnetic field lines connected with the visible disk, as a function of time, for the three events using the MAST model. The three panels present the evolution of these quantities for 2013 Oct 11, 2014 Jan 06, and 2014 Sep 01 from top to bottom, respectively. The temporal resolution for determining these parameters is 5 min. The figure also shows the fraction of the visible disk connected to the shock, the flux of >100 MeV γ-rays measured by Fermi-LAT, and the 100–300 keV hard X-rays for the 2014 Sep 01 event. The periods when Fermi-LAT did not observe the Sun (South Atlantic Anomaly (SAA) passages or Fermi night, as described in Ackermann et al. (2017)) are delimited by gray-shaded areas. The start times of the 2013 Oct 11 and 2014 Sep 01 soft X-ray flares were determined using the SAX instrument on MESSENGER (Sections 2.1.1 and 2.1.3).
The expansion of the coronal pressure wave started at 07:10 UT on 2013 Oct 11, 10 minutes after the X-ray flare. Hence our speed measurements started around 07:15 UT; this explains why the shock-related quantities are delayed compared with the X-ray flare for this event (top panel of the figure). This delay is less pronounced for the two other events. As shown in this figure, the parts of the shock connected early to the visible disk are quasi-perpendicular and super-magnetosonic in each case. In each event, some parts reach supercritical values rapidly after connecting to the visible disk. The values of these quantities reached early on remain roughly constant at later times. The mean and maximal values of the Mach number remain stable, around 2 and 5 respectively, up to 40 minutes after the start of the soft X-ray flare, and the geometry evolves slightly from quasi-perpendicular toward quasi-parallel configurations at much later times. The fraction of the visible disk covered by shock-connected field lines is very small during the early post-flare times and increases rapidly, up to 5%, during the bulk of the γ-ray emission.
Even though the time resolution for determining the various shock parameters is only 5 min, we see that the γ-ray flux increases when the fast shock regions form around the CME for both the 2013 Oct 11 and 2014 Sep 01 events. This also corresponds to the time when parts of the shock with high Mach numbers and quasi-perpendicular geometry become magnetically connected with the solar disk visible from Earth, as shown by the colored curves reaching the visible disk in Figure 10.
The LAT detected γ-rays near 08:00 UT (18 minutes after the soft X-ray flare onset) on 2014 Jan 06, but the signal is very weak, as discussed in Section 2. Fermi was in the SAA during the first 15 minutes of the event, when the CME began. The type II radio burst and our reconstruction suggest that the shock formed during the SAA transit and that a supercritical section of the shock was connected to the visible disk at that time. These are favorable conditions for the production of γ-rays. A weak ground-level enhancement was also detected by the South Pole neutron monitor (Thakur et al., 2014). Thakur et al. (2014) estimated that the solar release of relativistic protons occurred at 07:55 UT, just before LAT detected γ-rays.
5 Comparing physical parameters of the shock – magnetic field-line model with SEP observations
It is possible that shock acceleration of particles onto magnetic field lines is responsible both for the protons interacting on the visible solar disk that produced the γ-ray emission and for the SEPs observed by spacecraft at 1 AU. In the previous section we showed that the onset times of the γ-rays in two of the events occurred after the high-Mach-number shock crossed field lines connected to the visible disk. In this section we investigate whether the magnetic connection of the same CME shock fronts to the STEREO spacecraft preceded the observed onset times of the SEPs discussed in Section 2.2 and in Appendix A.
On the right side of Figure 12 we plot the field lines connecting the Sun to the STEREO spacecraft and to L1 at about the times of the onset of the type II radio bursts. Also shown are the ellipsoid models of the developing shock fronts. The field lines are assumed to follow a Parker spiral from the spacecraft making in situ measurements down to 15 R☉. This spiral is defined by the solar wind speed measured in situ, given in Table 1. Below this height, down to the solar surface, we used the MAST MHD model to derive the topology of the field lines.
On the left side of the figure we plot the time series of the shock Mach numbers at the points where the shock crosses the field lines connected to STEREO-A (solid red line), STEREO-B (solid green line), and L1 (solid blue line). These are compared with the time profiles of the flux of MeV electrons, plotted in panel (a) for the 2013 Oct 11 event and panel (c) for the 2014 Sep 01 event. We chose the MeV energy band because these electrons are relativistic, and the most energetic of them propagate almost scatter-free from the accelerator to the in situ probe. If the high-Mach-number shocks accelerated the electrons, we would expect the intersection times of the shocks with the magnetic field lines to agree with the electron onset times, after taking into account the 11-min electron propagation time along the Parker spiral.
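The quoted ~11 min propagation time can be checked with a back-of-the-envelope calculation: the arc length of a nominal Parker spiral divided by the speed of a relativistic electron (the constants and the 400 km/s example wind below are illustrative):

```python
import numpy as np

AU_M = 1.496e11      # astronomical unit [m]
C = 2.998e8          # speed of light [m/s]
OMEGA = 2.66e-6      # sidereal solar rotation rate [rad/s]
MC2_MEV = 0.511      # electron rest energy [MeV]

def spiral_length_m(r_out_m, v_sw_ms, n=20000):
    """Arc length of an equatorial Parker spiral from the Sun to r_out:
    L = integral_0^r_out sqrt(1 + (Omega r / v_sw)^2) dr,
    evaluated with a simple trapezoidal rule."""
    r = np.linspace(0.0, r_out_m, n)
    f = np.sqrt(1.0 + (OMEGA * r / v_sw_ms) ** 2)
    return float(np.sum((f[:-1] + f[1:]) * np.diff(r)) / 2.0)

def travel_time_min(kinetic_mev, path_m):
    """Scatter-free travel time of an electron of the given kinetic
    energy along a path of the given length."""
    gamma = 1.0 + kinetic_mev / MC2_MEV
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return path_m / (beta * C) / 60.0

length = spiral_length_m(AU_M, 4.0e5)   # ~1.15 AU for a 400 km/s wind
t_1mev = travel_time_min(1.0, length)   # ~10 min for a 1 MeV electron
```

A ~1 MeV electron (beta ≈ 0.94) along a ~1.15 AU spiral takes of order ten minutes, consistent with the ~11 min correction applied in the comparison of onset times.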
For the 2013 Oct 11 (a) and 2014 Sep 01 (c) events, where direct comparisons were possible, we find that the earliest intersection times of the shock regions with field lines connected to STEREO-A occur several minutes before the onset times of the SEP electrons at the spacecraft. In addition, the high-Mach-number shock connectivity model also predicts onset times several minutes earlier for SEP electrons at STEREO-A than at STEREO-B. Unfortunately, no energetic electron measurements were available for the 2014 Jan 06 event from the Wind or SoHO spacecraft, located at the L1 point that was well connected to the early shock front.
We therefore conclude that the CME shock-acceleration model accounts both for the onset times of SEP electrons at 1 AU and for the onset times of the γ-rays produced by protons propagating to the solar atmosphere visible from Earth.
6 Summary and discussion
We have analyzed three CME events on 2013 Oct 11, 2014 Jan 6, and 2014 Sep 1, which (1) erupted behind the solar limb as viewed from Earth, (2) were all associated with the formation of shocks low in the corona, as confirmed by the detection of metric type II radio bursts, (3) were associated with detections of 100 MeV γ-rays by the Fermi-LAT instrument, and (4) were associated with detections of solar energetic particles in the interplanetary medium. Our goal was to test the hypothesis that the shock that formed around the CME could have accelerated the particles responsible for both the γ-ray emission and the SEPs.
In order to achieve this goal we used coronagraph images of the CMEs from SoHO-LASCO and at least one of the STEREO spacecraft (Figure 4). From these images (Figure 5), we reconstructed the expansion of the 3D shape of the CME shock and determined the shock front velocities (Figure 6). By employing a global simulation of the coronal magnetic field (here MAST), we determined how and when the shock front connected to the solar disk visible from Earth and to the spacecraft measuring SEPs. We then calculated the times when fast and supercritical parts of the shock, capable of accelerating protons to energies sufficient to produce pion-decay γ-rays deep in the chromosphere, were magnetically connected to the visible disk. Our 3D reconstruction analysis showed that the shock fronts of the three events were already visible around the CMEs at the onset times of the metric type II bursts (Figure 5). In the 2014 Jan 6 and Sep 1 events we confirmed the presence of the shock at type-II onset by showing that the Mach number reached super-magnetosonic values at that time; unfortunately we lacked sufficient observations to infer the shock speed at the time of type-II onset for the 2013 Oct 11 event.
In the three studied cases we demonstrated that the shock Mach number underwent a rapid increase to supercritical values within 15 minutes after the type-II onsets. The highest Mach numbers were found at the shock-front nose and near the coronal neutral sheet, just as for the event analyzed in Rouillard et al. (2016). The latter is a region where, in MHD simulations, the magnetic field is weak and the density high. Some parts of the shock became magnetically connected to the visible disk before the onset of the MeV γ-ray emission, thus providing a path between the accelerator and the visible disk (Figure 11). We also showed that these shocks had a quasi-perpendicular geometry during the bulk of the γ-ray emission. This is because the flanks of a coronal shock from a far-side solar eruption, which connect to the visible disk early in the event, propagate across quasi-radial coronal field. Strong quasi-perpendicular shocks can be efficient particle accelerators as long as the upstream coronal magnetic field is sufficiently turbulent and the acceleration time is shorter than in the quasi-parallel configuration (e.g., Giacalone, 2005).
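The supercriticality discussion rests on the fast-magnetosonic Mach number, M = v_sh/√(c_s² + v_A²) in the perpendicular limit. A minimal sketch with illustrative coronal values — not the MAST-model numbers used in the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [SI]
KB = 1.381e-23        # Boltzmann constant [J/K]
MP = 1.673e-27        # proton mass [kg]

def fast_mach(v_sh_kms, n_cm3, B_gauss, T_K, gamma=5.0 / 3.0):
    """Fast-magnetosonic Mach number of a shock moving at v_sh_kms
    [km/s] through plasma of density n_cm3 [cm^-3], field B_gauss [G]
    and temperature T_K, using the perpendicular limit
    v_ms^2 = c_s^2 + v_A^2."""
    rho = n_cm3 * 1e6 * MP                          # mass density [kg/m^3]
    v_a = B_gauss * 1e-4 / math.sqrt(MU0 * rho)     # Alfven speed [m/s]
    c_s = math.sqrt(gamma * KB * T_K / MP)          # sound speed [m/s]
    return v_sh_kms * 1e3 / math.sqrt(v_a ** 2 + c_s ** 2)

# Illustrative: a 2000 km/s shock in a 1e6 K, 1e6 cm^-3, 0.5 G corona
# is super-magnetosonic, M ~ 1.8.
print(round(fast_mach(2000.0, 1e6, 0.5, 1e6), 1))
```

The sensitivity of M to the field strength is why, as noted above, the highest Mach numbers appear near the neutral sheet, where B is weak and v_A correspondingly low.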
The γ-ray and X-ray observations provide information about energetic protons and electrons in the low corona and chromosphere. The in situ SEP measurements at 1 AU and at Mars provide information about the proton and electron populations escaping from the solar corona. In the strongest event, on 2014 Sep 01, the 0.7–4 MeV electron and 13–100 MeV proton SEP fluxes rose to peaks just after the flare. The electron-to-proton flux ratio was about 4.5 as measured in these energy bands. In Sect. 2.1.3 we presented evidence that GBM detected sustained electron bremsstrahlung emission in addition to the SGRE observed by LAT. If the electrons and protons producing the observed bremsstrahlung and 100 MeV γ-ray emission came from the same CME-shock source as the SEPs, we would expect their relative numbers to be comparable. From our fits to the bremsstrahlung spectrum, we estimate that about 3 electrons with energies between 0.7–4.0 MeV interacted in the solar atmosphere. Our fits to the pion-decay γ-ray spectrum provided us with the spectral index and number of protons above a few hundred MeV. Our comparison of this number with the limit on the number of protons above 40 MeV from neutron-capture line measurements, discussed in Sect. 2.1.3, constrained the proton spectral index at lower energies. For a proton spectral index of −3.0 from 10 MeV to 200 MeV that steepened to −4.0 above 200 MeV, we estimate that there were about 5 protons with energies between 13–100 MeV that interacted at the Sun. The fact that the electron-to-proton ratios at the Sun and in space agree to within a factor 5 suggests a common origin for the particles.
The same analysis of the 2013 Oct 11 event shows that the peak pion-decay γ-ray flux was about 20 times lower than that in the 2014 Sep 01 event. This is consistent with the 6 to 10 times lower proton flux measured in the Oct 11 event by STEREO (e.g., Appendix A) relative to Sep 01. The 0.7–4.0 MeV electron data from the STEREO-A/B spacecraft for both events (Figure 13) indicate that the SEP electron flux was between 50 and 100 times higher in the 2014 Sep 01 event than it was in the 2013 Oct 11 event. Using this ratio and the measured peak hard X-ray flux in the October event, we estimate that the 100–300 keV peak flux from electron bremsstrahlung would have been too small to have been observed by the GBM.
We have modeled the CME and shock parameters of the 2014 Jan 06 event even though it was barely detected in γ-rays. The event is also of interest because the coronal shock wave was fast and there is evidence in neutron-monitor data for production of protons up to 700 MeV (Thakur et al., 2014). One possible reason for the weak γ-ray emission is that Fermi was not observing the Sun during the early phases of the coronal shock evolution between 07:45 and 07:55 UT, when the shock reached high Mach numbers and could accelerate particles in a quasi-perpendicular geometry onto field lines reaching the visible disk. Although the eruption accelerated particles to high energies, it did not produce high levels of SEP fluxes (see Figure 13). The peak flux of MeV protons was about 10 times less than for the 2013 Oct 11 event and several orders of magnitude lower than in the 2014 Sep 01 event. Hence, even if Fermi had fully observed the Sun during this event, we would have expected the γ-ray flux to be weak.
We conclude that acceleration of protons by a common coronal shock can account for both the SEPs observed in interplanetary space and the sustained γ-ray emission observed by LAT in the 2013 Oct 11 and 2014 Sep 01 behind-the-limb events. This conclusion is supported by our detailed modeling of the CME shock and of particle acceleration onto magnetic field lines reaching both the visible solar disk and the interplanetary SEP detectors. We found that supercritical shocks were capable of accelerating high-energy protons onto the visible disk fast enough to produce the SGRE observed by LAT. However, we found no clear correlation between the shock Mach number levels and the intensity of the γ-ray flux measured by LAT, suggesting that the physics is more complicated. We find that averaging a parameter combining the fluid density and the local shock-front speed over the shock parts that are connected to the visible disk can roughly reproduce the time-evolution profile of the MeV γ-ray flux if only certain shock parts are selected. This apparent empirical relationship needs to be substantiated by physically grounded models. This will potentially allow us to accurately estimate the time-dependent flux and energy of particles precipitating on the solar surface and escaping into the inner heliosphere.
Acknowledgements. I.P. kindly acknowledges Rui Pinto for providing the PFSS model grid data and for the help with the MSL RAD data extraction. I.P. acknowledges financial support from the HELCATS project under the FP7 EU contract number 606692. We thank Brian Dennis and Kim Tolbert for providing MESSENGER/SAX data. G. Share acknowledges support from NSF Grant 1156092, NASA Fermi/GI grant GSFC #71080, and the EU’s Horizon 2020 research and innovation program under grant agreement No 637324 (HESPERIA). We acknowledge usage of the tools made available by the plasma physics data center (Centre de Données de la Physique des Plasmas; CDPP; http://cdpp.eu/), the Virtual Solar Observatory (VSO; http://sdac.virtualsolar.org), the Multi Experiment Data Operation Center (MEDOC; https://idoc.ias.u-psud.fr/MEDOC), the French space agency (Centre National des Etudes Spatiales; CNES; https://cnes.fr/fr), and the space weather team in Toulouse (Solar-Terrestrial Observations and Modelling Service; STORMS; https://stormsweb.irap.omp.eu/). This includes the data mining tools AMDA (http://amda.cdpp.eu/) and CLWEB (clweb.cesr.fr/) and the propagation tool (http://propagationtool.cdpp.eu). The STEREO SECCHI data are produced by a consortium of RAL (UK), NRL (USA), LMSAL (USA), GSFC (USA), MPS (Germany), CSL (Belgium), IOTA (France), and IAS (France). The ACE data were obtained from the ACE science center. The WIND data were obtained from the Space Physics Data Facility. The MSL RAD data were obtained through the Planetary Data System (PDS).
Appendix A In situ SEPs
A.1 In situ measurements near 1 AU
Figure 13 presents the time evolution of the MeV proton flux measurements at STEREO (HET instruments) and the MeV proton flux measurements by the Wind spacecraft (EPACT instrument; panels a–c), and the time evolution of the MeV electron flux at STEREO (HET) and the MeV electron flux by the SoHO spacecraft (EPHIN instrument; panels d–f). The time series are presented over a four-day interval starting before the eruption and ending several days after. The onsets of energetic protons for the well-connected vantage points typically occur within 30 minutes of the CME onset. This results either from close connectivity to the flare site or from the shock crossing the probe-connected field line (e.g., Reames, 1999; Rouillard et al., 2012; Lario et al., 2014). All events last several days and appear to be gradual SEP events. A small ground-level enhancement was reported by Thakur et al. (2014) during the 2014 Jan 06 event (panels (b) and (e)), which was well connected with the Earth, even though the event was barely detected in γ-rays, probably because most of the proton interactions occurred on the far side of the Sun. The largest MeV proton flux increase (almost 5 orders of magnitude at STEREO-B) was observed on 2014 Sep 01 (panels c,f). The peak flux on 2013 Oct 11 (a,d) was one order of magnitude lower (less than 4 orders of magnitude increase at STEREO-A), while the peak flux for 2014 Jan 06 was only two orders of magnitude above background. This hierarchy seems to follow the observed peak intensities in high-energy γ-rays observed by Fermi-LAT.
The temporal evolution of the energetic particle flux from different vantage points, as illustrated in Figure 13, can be summarized as follows. On 2013 Oct 11, STEREO-A measured a rapid flux increase of relativistic electrons in the MeV band at 07:29 UT, and STEREO-B did so 10 minutes later (at 07:40 UT). The peak intensity rose to higher values at STEREO-A than at STEREO-B. At L1 only a very gradual increase was measured, starting at 13:00 UT. On 2014 Jan 06, Earth had good connectivity close to the source region. Hence, energetic proton and electron intensities probably increased within 30 minutes after the flare. However, the electron detectors on the SoHO and Wind spacecraft do not provide measurements during this early post-flare phase (data gap). Measurements from GOES (Thakur et al., 2014; Ackermann et al., 2017) indicate that MeV protons were detected starting from 07:57 UT. STEREO-A and STEREO-B were probably poorly connected and displayed late increases in the energetic proton flux, while no significant enhancements are seen in MeV electrons. The SEP event on 2014 Jan 07, visible in (b) and (e), is due to an unrelated flare and CME that occurred close to the center of the visible solar disk (Möstl et al., 2015). Finally, on 2014 Sep 01 both STEREO-A and STEREO-B measured a rapid flux increase in MeV protons and MeV electrons. For electrons the onset was at 11:11 UT on STEREO-A and 11:28 UT on STEREO-B. The proton onset was measured at 11:48 UT on STEREO-B, while no data point was available in the level 2 STEREO-A data. Indeed, the public STEREO level 2 data exclude some measurements during this period, while beacon plots show that not only did the onset occur earlier at STEREO-A, but the peak intensity was also higher than at STEREO-B. The latter is not seen in Figure 13(c,f), where level 2 data were used. As advised, we used level 2 data here as more appropriate for scientific purposes.
Concerning the L1 measurements, similarly to the 2013 Oct 11 event, a delayed and gradual increase in energetic particles is seen. Following this analysis, Table 1 gives the particle onset times from the different vantage points for the three events.
The broad longitude distribution of the SEPs in space can be tested because at least one of the in situ instruments was magnetically connected to the Sun at a location far from the site of the flare; these are the instruments on the near-Earth spacecraft for the 2013 Oct 11 event, STEREO-B spacecraft for the 2014 Jan 06 event, and Earth/Mars for the 2014 Sep 01 event. This is shown in the spacecraft connectivity lines in Figure 6. The broad longitude extent is studied later in this work once the temporal evolution of shock magnetic connectivity is established.
In Figure 14 we present the proton spectra at the three vantage points (Earth environment, STEREO-A, and STEREO-B) for the three events, averaged over 24 hours after SEP onset (starting at 08:00 UT, 08:00 UT, and 11:00 UT for 2013 Oct 11 (a), 2014 Jan 06 (b), and 2014 Sep 01 (c), respectively). These spectra were generated using the online OMNIWeb tool, which uses the interface developed by Natalia Papitashvili and Joe King.
A.2 Measurements from the Martian surface
Measurements from the Radiation Assessment Detector (RAD; Hassler et al., 2014) on board the Mars rover Curiosity, i.e., the Mars Science Laboratory (MSL), provide additional information on the longitudinal extent of the events, which our model should be able to explain. Here we report on the detection of the energetic particle events by RAD for the three studied events. Figure 15 shows the energetic particle count rate over the same time window as in Figure 13. The times of the solar eruption and the energetic particle onset are indicated by dashed vertical lines. Mars' heliocentric distance was between 1.45 AU and 1.67 AU on the dates of the events. We note that the RAD detector is sensitive to MeV protons and MeV electrons, but it requires technical expertise to retrieve the counts from different energy bands and particle species from the public data in the Planetary Data System repository. Here, we plot the “all included” energetic particle count rate, which is equivalent to the dose rate received by the detector. Bearing in mind that we do not distinguish between particle species, we assumed that the first arriving particles were either relativistic electrons or the most energetic protons. While this procedure provides only limited information compared to the previously discussed 1 AU measurements, we use it here to illustrate the wide longitudinal spread of high particle fluxes measured in the inner heliosphere rapidly after the onset of the solar eruptions.
All three events were clearly detected on the Martian surface even though Mars was not always well connected magnetically to the flare site. As illustrated in Figure 6, Mars was well connected to the eruption sites of the 2013 Oct 11 and 2014 Jan 06 events and was poorly connected (about 150 degrees away) for the 2014 Sep 01 event (a,b,c). This is reflected in the relatively fast, 3–4 hour, rise times for the two well-connected events and the much longer rise time of the 2014 Sep 01 event. It is interesting that the particle increases for all three events started 30 minutes to 1 hour after the accompanying flares. This suggests that some particles had prompt access to the field lines reaching Mars even though the flare site might be over 100 degrees away.
- Another possible scenario involves long-lasting post-CME coronal energy release (e.g., Akimov et al., 1996; Chertok, 1996; Klein & Trottet, 2001; Klein et al., 2014).
- Project website: https://www.helcats-fp7.eu/
- http://www.lmsal.com/ derosa/pfsspack/
- Tool available at:
- Ackermann, M., Ajello, M., Albert, A., et al. 2012, ApJS, 203, 4
- Ackermann, M., Ajello, M., Albert, A., et al. 2014, ApJ, 787, 15
- Ackermann, M., Allafort, A., Baldini, L., et al. 2017, ApJ, 835, 219
- Afanasiev, A., Battarbee, M., & Vainio, R. 2015, A&A, 584, A81
- Ajello, M., Albert, A., Allafort, A., et al. 2014, ApJ, 789, 20
- Akimov, V. V., Ambrož, P., Belov, A. V., et al. 1996, Sol. Phys., 166, 107
- Aschwanden, M. J. 2012, Space Sci. Rev., 171, 3
- Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009, ApJ, 697, 1071
- Benz, A. O. 2008, Living Reviews in Solar Physics, 5, 1
- Berezhko, E. G., & Taneev, S. N. 2003, Astronomy Letters, 29, 530
- Blandford, R., & Eichler, D. 1987, Phys. Rep, 154, 1
- Caprioli, D., & Spitkovsky, A. 2014, ApJ, 783, 91
- Chen, B., Bastian, T. S., Shen, C., et al. 2015, Science, 350, 1238
- Chertok, I. M. 1996, Radiophysics and Quantum Electronics, 39, 940
- Chupp, E. L., & Ryan, J. M. 2009, Research in Astronomy and Astrophysics, 9, 11
- Cliver, E. W., Kahler, S. W., & Vestrand, W. T. 1993, International Cosmic Ray Conference, 3, 91
- Cliver, E. W. 2016, ApJ, 832, 128
- Dresing, N., Gómez-Herrero, R., Klassen, A., et al. 2012, Sol. Phys., 281, 281
- Ellison, D. C., & Ramaty, R. 1985, ApJ, 298, 400
- Forrest, D. J., Chupp, E. L., Ryan, M. M., et al. 1981, International Cosmic Ray Conference, 10, 5
- Giacalone, J. 2005, ApJ, 624, 765
- Gopalswamy, N. 2003, Geophys. Res. Lett., 30, 8013
- Hassler, D. M., Zeitlin, C., Wimmer-Schweingruber, R. F., et al. 2014, Science, 343, 1244797
- Hudson, H. S. 2011, Space Sci. Rev., 158, 5
- Hurford, G. J., Schwartz, R. A., Krucker, S., et al. 2003, ApJ, 595, L77
- Kaiser, M. L., Kucera, T. A., Davila, J. M., et al. 2008, Space Sci. Rev., 136, 5
- Kanbach, G., Bertsch, D. L., Fichtel, C. E., et al. 1993, A&AS, 97, 349
- Klein, K.-L., Masson, S., Bouratzis, C., et al. 2014, A&A, 572, A4
- Klein, K.-L., & Trottet, G. 2001, Space Sci. Rev., 95, 215
- Kozarev, K. A., Raymond, J. C., Lobzin, V. V., & Hammer, M. 2015, ApJ, 799, 167
- Lario, D., Kwon, R.-Y., Richardson, I. G., et al. 2017, ApJ, 838, 51
- Lario, D., Raouafi, N. E., Kwon, R.-Y., et al. 2014, ApJ, 797, 8
- Lario, D., Sanahuja, B., & Heras, A. M. 1998, ApJ, 509, 415
- Lee, M. A. 2005, ApJS, 158, 38
- Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
- Lin, R. P., Dennis, B. R., Hurford, G. J., et al. 2002, Sol. Phys., 210, 3
- Lionello, R., Linker, J. A., & Mikić, Z. 2009, ApJ, 690, 902
- Litvinenko, Y. E. 1996, ApJ, 462, 997
- Marcowith, A., Bret, A., Bykov, A., et al. 2016, Reports on Progress in Physics, 79, 046901
- Mewaldt, R. A., Cohen, C. M. S., Giacalone, J., et al. 2008, American Institute of Physics Conference Series, 1039, 111
- Morlino, G., & Caprioli, D. 2012, A&A, 538, A81
- Möstl, C., Rollett, T., Frahm, R. A., et al. 2015, Nature Communications, 6, 7135
- Murphy, R. J., Dermer, C. D., & Ramaty, R. 1987, ApJS, 63, 721
- Ng, C. K., & Reames, D. V. 2008, ApJ, 686, L123
- Patsourakos, S., & Vourlidas, A. 2009, ApJ, 700, L182
- Pesce-Rollins, M., Omodei, N., Petrosian, V., et al. 2015, ApJ, 805, L15
- Pesce-Rollins, M., Omodei, N., Petrosian, V., et al. 2015, arXiv:1507.04303
- Petrosian, V. 2012, Space Sci. Rev., 173, 535
- Ramaty, R., Murphy, R. J., & Dermer, C. D. 1987, ApJ, 316, L41
- Reames, D. V. 1999, Space Sci. Rev., 90, 413
- Reames, D. V., Barbier, L. M., & Ng, C. K. 1996, ApJ, 466, 473
- Rouillard, A. P., Sheeley, N. R., Tylka, A., et al. 2012, ApJ, 752, 44
- Rouillard, A. P., Plotnikov, I., Pinto, R. F., et al. 2016, ApJ, 833, 45
- Ryan, J. M. 2000, Space Sci. Rev., 93, 581
- Scherrer, P. H., Schou, J., Bush, R. I., et al. 2012, Sol. Phys., 275, 207
- Schlemm, C. E., Starr, R. D., Ho, G. C., et al. 2007, Space Sci. Rev., 131, 393
- Schrijver, C. J., & De Rosa, M. L. 2003, Sol. Phys., 212, 165
- Thakur, N., Gopalswamy, N., Xie, H., et al. 2014, ApJ, 790, L13
- Vilmer, N., MacKinnon, A. L., & Hurford, G. J. 2011, Space Sci. Rev., 159, 167
- Vourlidas, A., & Ontiveros, V. 2009, American Institute of Physics Conference Series, 1183, 139
- Wang, Y.-M., & Sheeley, N. R., Jr. 1992, ApJ, 392, 310
- Zank, G. P., Rice, W. K. M., & Wu, C. C. 2000, J. Geophys. Res., 105, 25079 | <urn:uuid:512894ae-2ff2-4547-bc3f-b3b40c2acc21> | 2.890625 | 18,383 | Academic Writing | Science & Tech. | 58.684994 | 95,610,223 |
What is a Virus? | Best Biology Review
Viruses are tiny, non-living biological infectious agents. So while they aren’t technically alive, they are biological agents and they are infectious. They infect healthy cells. They consist of either a DNA or RNA genome encased in a protein coat known as a capsid. This capsid protects the viral genome until it can get where it wants to go to infect the host cell. And these viruses can only reproduce by taking over a living host cell. So while they are non-living, they must take over a living host cell to be able to reproduce their viral DNA or RNA. Now, once it’s inside a host cell, the virus may remain dormant for a while, so it doesn’t always immediately attack.
Sometimes it remains dormant for a while, but eventually it is stimulated into the active, or lytic, phase and the virus inserts its own genetic material into that of the host. The host cell is unaware of this – the viral DNA or RNA that is mixed in with the host DNA or RNA – it just doesn’t recognize it right away. So the virus is able to take control of the host DNA by mixing with it and then taking it over. It causes the host cell to produce many viral genes and proteins – once it gets in there it starts saying “reproduce, reproduce,” “make more genes and proteins.” The ones these cells are making are new viral proteins and genes instead of the cell’s normal genes and proteins. And then all these viral genes and proteins combine to form new virions, or virus particles, that destroy the host cell and are released to infect other cells.
So what happens is, once the virus becomes active and is in the lytic phase, it inserts its own genetic material into that of the host cell – probably into the nucleus. The viral DNA or RNA mixes with the host DNA or RNA, takes control of it, and then tells it to produce many more of these viral genes and proteins, which combine to form new virions. More and more are made, so fast that the cell can’t keep up, and the cell bursts, or lyses. That’s where the name lytic cycle, or lytic phase, comes from: the end result is the cell lysing, or bursting open. Once it bursts open, all these virions are released to infect other cells and repeat the process. Something to remember is that the genome of both DNA and RNA viruses can be either single or double stranded.
A lot of times a virus is not going to be very complex – the genome may be only three to a hundred genes. It’s not going to be very big, but it can be double stranded if it is a more complex virus. So viruses are sneaky little things. They are protected by their protein coat – the capsid – and insert their genome, their genetic material, into a host cell, take over, and reproduce more viral genes and proteins, which form so many new virions that the cells burst open and send out more viruses into the host to attack other cells.
Provided by: Mometrix Test Preparation
Last updated: 05/28/2018
Find us on Twitter: Follow @Mometrix | <urn:uuid:92a86bb2-686e-480e-9187-dcd59514a733> | 3.890625 | 715 | Tutorial | Science & Tech. | 53.693168 | 95,610,224 |
NERSC Continues Tradition of Cosmic Microwave Background Data Analysis with the Planck Cluster
October 30, 2009
Contact: Linda Vu, firstname.lastname@example.org, 510-495-2402
More than 95 percent of our universe is made up of mysteriously "dark" materials—approximately 22 percent of it is comprised of invisible dark matter, while another 73 percent is posited to be dark energy, the force that is accelerating universal expansion.
Armed with a new spacecraft called Planck and supercomputers at the Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC), astronomers around the world hope to make tremendous strides toward illuminating the nature and origins of these mystifying materials by creating high-resolution maps of extremely subtle variations in the temperature and polarization of the Cosmic Microwave Background (CMB), which is leftover light from the Big Bang that permeates the universe.
"The CMB is our most valuable resource for understanding fundamental physics and the origins of our universe. The Planck mission will provide the cleanest, deepest and sharpest images of the CMB ever made," says Charles Lawrence, of NASA's Jet Propulsion Laboratory and principal investigator of the Planck Mission in North America.
Equipped with 74 detectors at nine different frequencies, the European Space Agency's Planck observatory will map the entire sky several times in its lifetime, from a perch in space called the second Lagrangian point that is about 1.5 million kilometers (930,000 miles) away from the Earth along a line from the Sun.
NASA has been a significant participant in the Planck program, even initiating the first-ever Interagency Implementation Agreement with DOE to guarantee a minimum annual allocation of NERSC resources to the Planck mission throughout its entire observing lifetime. Additionally, the U.S. Planck team has purchased exceptional levels of service from NERSC, including 32 TB of the NERSC Global Filesystem and technical support for a 256-processor cluster.
The spacecraft officially began taking science observations and funneling data to NERSC on August 13, 2009. NERSC is located at the Lawrence Berkeley National Laboratory's (Berkeley Lab) Oakland Science Facility.
Planck Computing at NERSC
"NERSC has the whole package—the systems are well-balanced and suit our analysis needs, the user support is phenomenal, and the center has a long history of working with our community. This is a unique relationship that we don't have with any other computing facility in the world,"
—Charles Lawrence, NASA's Jet Propulsion Laboratory
According to Julian Borrill, a staff scientist in the Computational Cosmology Center in Berkeley Lab's Computational Research Division (CRD), the process of understanding data from a CMB mission is long and complicated. Each observation of the sky includes three main components—instrument noise, signals from foreground components like dust and the CMB signal itself—that need to be untangled from one another.
"Because each of these components is correlated, we have to look at the entire dataset at once, and some of these analyses simply cannot be done without well-balanced high performance computing systems," says Borrill, who is also a member of the U.S. Planck Collaboration.
He adds that even if a petascale machine can solve quadrillions of calculations per second, if it cannot move data in and out of the machine sufficiently quickly, then it is not going to work for CMB research.
"We looked at some supercomputers elsewhere and they simply did not have the I/O (input/output) capability that we needed," says NASA's Lawrence. "At NERSC we get a team of experts that knows how to optimize and maintain well-balanced high performance computing systems, as well as the essential advantage of having Planck team members like Julian nearby to test and run the systems, and develop and try out codes."
Realizing that not all aspects of CMB analysis are supercomputing jobs, the U.S. Planck Collaboration also bought into NERSC's PDSF (Parallel Distributed Systems Facility) cluster collaboration to complement its allocation of supercomputer time. By installing a medium-sized system at NERSC, the team plans to leverage the center's expertise in setting up and maintaining the hardware. The strategy also allows them to take advantage of the NERSC Global File System, which provides a single disk space common to all NERSC machines. This means users do not have to save and transfer data every time they move between machines, which saves time and allows them to be more productive.
"NERSC has the whole package—the systems are well-balanced and suit our analysis needs, the user support is phenomenal, and the center has a long history of working with our community. This is a unique relationship that we don't have with any other computing facility in the world," says Lawrence.
In addition to teasing out instrument noise from the CMB maps, the U.S. team will also be modifying existing codes, as well as developing new methods for analyzing data.
Evolution of CMB Computing at Berkeley Lab's CRD and NERSC
According to Lawrence, preparations for a Planck-like mission began in the U.S. in the late 1990s when NASA was supporting balloon-borne CMB missions like BOOMERANG and MAXIMA, which scanned small patches of the sky to map the minute temperature variations in the CMB. At the time, both missions were collecting unprecedented volumes of data, and were the first CMB experiments to ever use supercomputing resources for data analysis. They both received allocations to compute on NERSC's 600-processor Cray T3E, called Mcurie, then the fifth fastest supercomputer in the world.
As the CMB community started incorporating supercomputers into their scientific process, they also had to develop new data analysis methods to optimize these emerging resources. That’s when Borrill and his colleagues Andrew Jaffe and Radek Stompor stepped in and developed MADCAP, the Microwave Anisotropy Dataset Computational Analysis Package. The researchers used this software to create the maps and angular power spectra of the BOOMERANG and MAXIMA data. These analyses would eventually confirm that the universe is approximately geometrically flat.
Berkeley Lab has been at the forefront of this research ever since. Borrill now co-leads the Computational Cosmology Center (C3), which is a collaboration between CRD and the Lab's Physics Division to pioneer algorithms and methods for optimizing CMB research on cutting-edge supercomputer technology.
"BOOMERANG and MAXIMA were essentially what got the CMB community to start developing supercomputing methods for analyzing data; the Planck computing methods have grown with the experiment and supercomputer technologies," says Lawrence.
"When NERSC first started allocating supercomputing resources to the CMB research community in 1997, it supported half a dozen users and two experiments. Almost all CMB experiments launched since then have used the center for data analysis in some capacity, and today NERSC supports around 100 researchers from a dozen experiments," says Borrill.
He says that advancements in CMB detectors have led to a trend where the volume of data collected doubles about every 18 months, with upcoming ground-based experiments such as PolarBeaR and QUIET-II set to exceed the Planck data volume by 2 to 3 orders of magnitude. "Coincidentally, this trend mirrors Moore's Law and means that we have to stay on the leading edge of computation just to keep up with CMB data," says Borrill.
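The quoted growth trend is simple exponential arithmetic; a sketch of what doubling every 18 months implies (the baseline volume of 1.0 is arbitrary, not a real experiment's data volume):

```python
def data_volume(years, v0=1.0, doubling_months=18.0):
    """Projected data volume after `years`, assuming the volume
    doubles every `doubling_months` months, starting from v0."""
    return v0 * 2.0 ** (years * 12.0 / doubling_months)

# Ten years of 18-month doublings is roughly a 100-fold growth, which
# is how "2 to 3 orders of magnitude" accumulates over a decade or so.
print(round(data_volume(10)))
```

Because this rate matches Moore's Law, the per-byte compute available stays roughly flat, which is Borrill's point: CMB analysis must stay on the leading edge of computing just to keep pace with the detectors.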
For more on the history of Computational Cosmology research at the Berkeley Lab, please visit:
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. | <urn:uuid:98dc4c94-45f0-49f2-9c7f-daf9de5ca805> | 2.96875 | 1,934 | News (Org.) | Science & Tech. | 26.117298 | 95,610,228 |
Shenandoah Salamander: Climate Change Casualty or Survivor?
- Grade Level:
- Seventh Grade-College Undergraduate Level
- Biology: Animals, Climate Change, Conservation, Ecology, Environment, Wildlife Biology, Wildlife Management
- Pre-visit lessons: 90 minutes; Park Field Trip: 2 hours on site in Shenandoah National Park; Post-visit lessons: 90 minutes
- Group Size:
- Up to 24 (4-8 breakout groups)
- National/State Standards:
- Virginia Life Science Standards of Learning: LS.11 and LS.13
Overview
The Shenandoah salamander is an endangered species found only on a few rocky slopes within Shenandoah National Park. Its survival is threatened by a changing climate and by habitat competition from the more common red-backed salamander. Students will conduct field research on the red-backed salamander to compare the two salamander species' habitat requirements and determine how climate change and habitat competition are impacting the survival of the Shenandoah and red-backed salamanders.
Following the park experience and classroom activities, the students will be able to:
1. Define climate change and list examples of natural and human-influenced contributors to climate change.
2. Conduct a salamander population study to determine habitat preferences and environmental conditions in Shenandoah National Park.
3. Assess/predict the potential impact of climate change and species competition on the survival of the Shenandoah and red-backed salamanders.
4. Determine ways people can reduce contributions to climate change.
5. Create a persuasive media message to educate others on the impacts of climate change and ways people can reduce their carbon footprint.
Climate change is any significant change in the climate lasting for decades or longer. Climate patterns (e.g. temperature, rain, snow) can vary naturally, but modern climate changes are accelerated by human activity. Although scientists cannot yet predict with certainty what the long-term impacts from climate change will be, there is ample evidence of climate change effects already being felt within national parks.
Shenandoah National Park is a refuge for many species of animals otherwise pressured by human activities such as development and other land uses. The Shenandoah salamander (Plethodon shenandoah) lives nowhere else on the planet except a few rocky mountaintops in the park and it is the only federally endangered animal species found in the park.
Worldwide, there are many species like the Shenandoah salamander living in the microclimates of higher mountaintop elevations that are at risk of extinction. One factor contributing to this risk could be increases in temperature due to climate change. Locally, scientists are predicting dramatic alterations in temperature, humidity and precipitation in the Appalachian Mountains in the future.
Scientists are studying potential impacts of a warming climate. Shenandoah National Park is collaborating with the Smithsonian Institution, University of Virginia, and the U.S. Geological Survey to assess potential climate change impacts on its high elevation species. Research and experiments focusing on the Shenandoah salamander are investigating how climate change might affect the species' use of habitat, feeding success, growth, and competition for habitat with red-backed salamanders. This research will help resource managers understand the habitat needs of these and other species that are highly adapted to mountaintop living and to develop strategies that will help protect these species.
By doing this lesson, students will understand the plight of the Shenandoah salamander, will be able to educate others about the Shenandoah salamander and climate change, and will be able to make educated lifestyle choices that reduce their "carbon footprint."
This program was funded in part by a generous donation from the Shenandoah National Park Trust.
Assessment
Education Program Evaluation Form (77kb pdf)
Shenandoah National Park (SNP) serves as a refuge for many species of animals otherwise pressured by human activities such as development and other land uses. There are over 200 resident and transient bird species, over 50 mammal species, 51 reptile and amphibian species, and over 35 fish species found in the park. Shenandoah is home to 14 species of salamanders. The Shenandoah salamander (Plethodon shenandoah) lives nowhere else on the planet except a few rocky mountaintops in the park and it is the only federally endangered animal species found in the park.
1. Use this lesson's research procedures to sample for red-backed salamanders on the school site or nearby park area and compare with their park sample. Be sure to get necessary permission to collect data at that site and return all collected animals.
2. Research other resource issues in Shenandoah National Park, such as air quality and invasive species.
3. Investigate "success stories" of other imperiled species: Peregrine Falcon, Bald Eagle.
4. Investigate the research on whether the lead-backed color morph of the red-backed salamander is more likely to survive than the striped morph in a warmer climate.
5. Research other national parks that have serious resource management challenges and report on those to the class.
Additional Resources
Two short videos on climate change and the Shenandoah Salamander will help prepare students for their research and field study.
Our Changing World: Climate Change: A three-minute video that introduces climate change and how people can work together to correct the problem.
The Shenandoah Salamander: A seven-minute video describing the Shenandoah Salamander and its importance to the park ecosystem.
Jessica Rowbury investigates the iSpex-EU project, a study underway that enlists members of the public to take air pollution readings via their mobile phones
A Europe-wide photonics experiment has begun that crowdsources data from members of public to further understand how air pollution affects the environment and human health.
The experiment will see thousands of people across Europe transform their mobile phones into a scientific tool – through use of an attachment and a mobile app – allowing them to measure pollutant particles in locations that have not yet been monitored with existing technologies.
The iSpex-EU project was organised by two EU organisations, Light2015 and iSpex, following an initial campaign in the Netherlands in 2013, where thousands of participants monitored Dutch air pollution using a similar method. Now Europe-wide, the experiment will run from 1 September to 15 October 2015, in major cities including Athens, Barcelona, Belgrade, Berlin, Copenhagen, London, Manchester, Milan, and Rome.
The impact of atmospheric aerosols on both human health and the environment is still poorly understood, but they can contribute to heart and respiratory disease, as well as posing a danger to air traffic and forming one of the largest uncertainties in current estimates of climate change.
Ninety per cent of atmospheric aerosols are of natural origin, which can include sea salt, mineral dust or tiny sand particles, and volcanic ash. However, the remaining 10 per cent are anthropogenic, or human-made, mainly caused by traffic, industry, and biomass burning.
For the iSpex-EU project, participants will use a device called iSpex, which consists of an add-on and corresponding app for the iPhone.
The iSpex mobile app will instruct participants to take several photos of the cloud-free sky, which will allow the iSpex add-on to obtain information on both the spectrum and the degree of linear polarisation of visible light.
The corresponding mobile app will then send all of the observations to a central database, where measurements from all over the continent will be analysed. A live map showing the results obtained from across Europe will also be constructed.
The add-on is essentially a slit spectrograph that uses a transmission grating foil and a plastic lens in addition to the lens inside the smartphone camera. It also contains a combination of stretched plastic sheets and Polaroid film to modulate every spectrum by a sine curve. The relative amplitude of this sine curve directly scales with the degree of linear polarisation, and its phase is determined by the polarisation angle. As such, all the information on both the spectrum and the linear polarisation of the light entering the slit is obtained in a single shot. The iSpex app then extracts the polarisation information from the modulated spectrum.
Through this method, the degree of linear polarisation (DoLP) of the cloud-free sky can be measured as a function of wavelength and, by pointing the phone at different directions in the sky, as a function of scattering angle. This DoLP as a function of both wavelength and scattering angle yields unique information on fundamental aerosol properties, including the quantity of aerosol, as well as the particle size distribution and the chemical composition (through the refractive index).
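In standard Stokes-parameter terms (a general polarimetry convention, not something specific to iSpex), the degree and angle of linear polarisation follow directly from the measured I, Q and U components. The numbers below are illustrative, not real sky data:

```python
import math

def dolp(i, q, u):
    """Degree of linear polarisation from the Stokes parameters I, Q, U."""
    return math.hypot(q, u) / i

def aolp(q, u):
    """Angle of linear polarisation, in radians."""
    return 0.5 * math.atan2(u, q)

print(dolp(1.0, 1.0, 0.0))            # fully linearly polarised light: DoLP = 1
print(dolp(1.0, 0.3, 0.4))            # partially polarised skylight: DoLP about 0.5
print(math.degrees(aolp(0.0, 1.0)))   # polarisation angle of about 45 degrees
```

Repeating this for each wavelength bin of the modulated spectrum, and for each pointing direction, yields the DoLP as a function of wavelength and scattering angle described above.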
Its measurement principles are based on the Spectropolarimeter for Planetary EXploration (SPEX), an instrument currently being developed by a Dutch team of engineers to measure aerosol and cloud particles in atmospheres of planets within our solar system.
The measurements that the public will obtain using their smartphones will be crucial in assessing the impacts of atmospheric aerosols on human health, the environment and air traffic.
The experiment also represents a new kind of science investigation, whereby citizens can contribute to answering important scientific questions by delivering measurements from almost anywhere, without the need for specialised equipment or training.
Other research institutions have proposed similar methods for involving the public in measuring air quality. In 2014, researchers from the Karlsruhe Institute of Technology (KIT) in Germany announced that they were developing optical sensors that can be connected to a smartphone to measure the levels of dust pollution in the air.
Like the iSpex add-on, the device also works in conjunction with a mobile app, allowing users to share measurements with other participants in the same city, helping to create real-time fine dust pollution maps for different locations. At the time, the researchers proposed that the invention could be ready for use by the public in 2015.
According to the KIT researchers, the device uses the flashlight of the smartphone camera to emit light into the measurement area, which is then scattered by dust or smoke. The camera serves as a receptor and takes a picture representing the measurement result. The brightness of the pixels can then be converted into the dust concentration.
It seems as though the decrease in cost and subsequent miniaturisation of spectral technologies are enabling a new type of ‘citizen-science’, whereby the public are able to carry out complex, simultaneous mass-measurements, which could potentially play an important role in scientific investigations in the future. | <urn:uuid:81390a4f-3963-472f-8a02-4d28e2ece371> | 3.265625 | 1,030 | Truncated | Science & Tech. | 19.820107 | 95,610,255 |
Zsh 20 Completion System - The term "shell scripting" gets mentioned often in Linux forums, but many users aren't familiar with it. Styles determine such things as how the matches are generated, similarly to shell options but with much more control. Users may write their own.
Writing Your Own Shell - Cornell University Learning this easy and powerful programming method can help you save time, learn the command-line better, and banish tedious file management tasks. Writing Your Own Shell. The object of this assignment is to gain experience with some advanced programming techniques like process creation and control, file.
Bash - Quick-and-dirty way to ensure only one instance of a shell. The object of this assignment is to gain experience with some advanced programming techniques like process creation and control, file descriptors, signals and possibly pipes. The first argument should be a pointer to the command string and the second argument should be a pointer to an array which contains the command string as arg[0] and the other arguments as arg[1] through arg[n]. /dev/null; then ACQUIRED="TRUE" return 0 else echo "Cannot write to $LOCKFILE. Error." >&2 return 1 fi fi else echo "Do. How to run shell script.
How to Write a Shell Script Using Bash Shell in Ubuntu - How There are many, many text editors available for your Linux system, both for the command line environment and the GUI environment. How to Write a Shell Script Using Bash Shell in Ubuntu. Ever wanted to automate operations in your OS? Ever wanted to write a program that could create a file, copy.
A text editor is a program, like a word processor, that reads and writes ASCII text files.
Spark Programming Guide - Spark 2.0.2 Documentation The first step is often the hardest, but don't let that stop you. To write applications in Scala, you. The first thing a Spark program must do is to create a JavaSparkContext object, which tells Spark how to access a.
Tutorial - Write a Shell in C - Stephen Brennan Ever wanted to write a program that could create a file and copy that file to a directory? Tutorial - Write a Shell in C Stephen Brennan • 16 January 2015. It’s easy to view yourself as “not a real programmer.” There are programs out there that.
BASH Programming - Introduction HOW-TO To do this, you will be writing your own command shell - much like csh, bsh or the DOS command shell. First open the file (use open or creat; open read-only for infiles and creat writable for outfiles) and then use ... Any advanced shell feature is likely to earn you some extra credit, but you should do it only if you've finished the required functions, are having fun and would like to learn more. This article intends to help you to start programming basic-intermediate shell scripts. I decided to write this because I'll learn a lot and it might.
PHP shell_exec - Manual For my class I have to create a basic shell similar to bash that will allow the user to run commands like ls, sleep, etc. To add built-in commands like exit or cd you would have to tokenize the line using strtok() and look at the first token. Error checking, the reason for that is I do not know how to cd to a directory and then execute another command unless they are in the same shell.
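Several of these snippets describe the same core loop: read a line, tokenize it, fork a child, exec the tokens, and wait. A minimal sketch of that loop, written in Python rather than C for brevity (os.fork and os.execvp mirror the fork() and execvp() calls the C tutorials use, and str.split stands in for strtok()):

```python
import os
import sys

def run_command(line):
    """Tokenize one command line, run it in a child process, return its exit status."""
    argv = line.split()               # crude whitespace tokenizer (strtok-style)
    if not argv:
        return 0
    if argv[0] == "cd":               # built-ins must run in the shell process itself:
        os.chdir(argv[1])             # a child's working-directory change dies with it
        return 0
    pid = os.fork()                   # create the child process
    if pid == 0:
        try:
            os.execvp(argv[0], argv)  # replace the child's image with the command
        except OSError:
            os._exit(127)             # conventional "command not found" status
    _, status = os.waitpid(pid, 0)    # parent waits for the child to finish
    return os.waitstatus_to_exitcode(status)

if sys.stdin.isatty():                # interactive read-eval loop (skipped when piped)
    while True:
        try:
            run_command(input("mysh> "))
        except EOFError:
            break
```

This also shows why cd has to be a built-in, as the PHP forum snippet above puzzles over: changing directory in a forked child affects only that child, so the shell itself must handle it.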
Tan - problems
- Right triangle
Calculate the lengths of the remaining two sides and the angles in the right triangle ABC if a = 10 cm and angle alpha = 18°40'.
- Angle of diagonal
The angle between the body diagonal of a regular quadrilateral prism and its base is 60°. The edge of the base has a length of 10 cm. Calculate the volume of the body.
- Trapezoid MO
The rectangular trapezoid ABCD with right angle at point B, |AC| = 12, |CD| = 8, diagonals are perpendicular to each other. Calculate the perimeter and area of the trapezoid.
- Trigonometric functions
In right triangle is: ? Determine the value of s and c: ? ?
From an observatory 14 m high and 32 m from the river bank, the river width appears at a visual angle φ = 20°. Calculate the width of the river.
- Regular 5-gon
Calculate area of the regular pentagon with side 7 cm.
Determine angles of the right triangle with the hypotenuse c and legs a, b, if: ?
- Cuboid diagonal
Calculate the volume and surface area of the cuboid ABCDEFGH, whose sides a, b, c are in the ratio 9:3:8, if you know that the wall diagonal AC is 86 cm and the angle between AC and the body diagonal AG is 25 degrees.
Calculate the volume and surface area of the cone with base diameter d = 15 cm, if the side of the cone makes an angle of 52° with the base.
A mast casts a 13 m long shadow on a slope that rises at an angle of 15° from the mast's foot in the direction of the shadow. Determine the height of the mast if the sun is 33° above the horizon.
Calculate the length of the side GN and diagonal QN of rectangle QGNH when given: |HN| = 25 cm and angle ∠ QGH = 28 degrees.
A road sign indicates a climb of 8.7%. A car travels 5 km along this road. What height difference does the car climb?
- Rotary cone
The volume of a cone of revolution is 472 cm³ and the angle between the side of the cone and the base is 70°. Calculate the lateral surface area of this cone.
- Triangular prism
A plane passing through the edge AB and the center of segment CC' of the regular triangular prism ABCA'B'C' makes an angle of 39 degrees with the base; |AB| = 3 cm. Calculate the volume of the prism.
- Regular quadrangular pyramid
How many square meters are needed to cover a tower in the shape of a regular quadrangular pyramid with base edge 10 meters, if the lateral edges deviate from the base plane by 68°? Allow 10% of the coverage for waste.
- Pyramid - angle
Calculate the surface area of a regular quadrangular pyramid whose base edge measures 6 cm and whose side walls deviate from the base plane by 50 degrees.
How high is a building that casts a horizontal shadow 95.4 m long when the sun is at an angle of 50°?
- 4-sided pyramid
Calculate the volume and surface area of a regular four-sided pyramid whose base edge is 4 cm long, if the angle between the plane of a side wall and the base plane is 60 degrees.
- n-gon II
What is the side length of a regular 5-gon whose circumscribed circle has a radius of 11 cm?
At what angle does a stairway rise if the step height is 17 cm and the step width is 27 cm?
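Most of these problems reduce to one or two inverse-tangent evaluations. As a worked check of two of them (assuming the 8.7% grade means 0.087 m of rise per metre of horizontal run, and that the 5 km is measured along the road surface):

```python
import math

# Road sign: 8.7% climb, 5 km driven along the road
angle = math.atan(0.087)              # angle of the climb
height = 5000 * math.sin(angle)       # rise over 5 km of road surface
print(f"climb angle {math.degrees(angle):.2f} deg, height difference {height:.0f} m")

# Stairway: step height 17 cm, step width 27 cm
stair = math.degrees(math.atan(17 / 27))
print(f"stairway rises at {stair:.1f} deg")
```

The road-sign answer comes out near 433 m and the stairway near 32 degrees; interpreting the 5 km as horizontal distance instead would change the first result only slightly.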
Scientists have generated and begun to analyze the rat genome, paving the way for comparisons with the two other mammalian genomes sequenced so far -- human, and mouse. The primary results of the Rat Genome Sequencing Project Consortium (RGSPC) are presented in the April 1 issue of Nature, and an additional thirty manuscripts describing further detailed analyses are contained in the April issue of the journal Genome Research.
The cover image of Genome Research (see end of release) was produced by University of California, San Diego professors Pavel Pevzner and Glenn Tesler and their co-author on the journal paper, Guillaume Bourque of the University of Montreal. It depicts the course of evolution for the X chromosome in humans, rats and mice from a common ancestor over 80 million years ago and, for the first time, reconstructs the genomic architecture of mammalian ancestors. "It contributes to the solution of the so-called original synteny problem in biology," said Pevzner, the Ronald R. Taylor Professor of Computer Science at UCSD's Jacobs School of Engineering. "While scientists routinely find bones that lead to often unrealistic reconstructions of dinosaurs and other prehistoric animals on movie screens and in toy stores, this is the first rigorous reconstruction of the genomic makeup of our mammalian ancestors."
Pevzner and Tesler are among the more than 200 co-authors of the Nature article, and expanded on their part of the research in a Genome Research paper with Bourque titled “Reconstructing the Genomic Architecture of Ancestral Mammals: Lessons from Human, Mouse and Rat Genomes.” “Having the third genome allows us to reconstruct the putative genomic architecture of our mammalian ancestor,” said Pevzner. “Our contribution has been to demonstrate how to look at the human, mouse and rat genomes -- each roughly three billion letters in length -- and then infer the evolutionary earthquakes that shaped their genomic architectures.”
Doug Ramsey | UCSD
Scientists have moved a step closer to understanding what happens when cells receive a faulty signal that is known to be a cause of cancer.
Many different types of signal control normal cell development but when some of these signals are ‘mis-activated’ they can result in the formation of tumours. Now, a team of researchers at The University of Manchester has discovered that the way cells communicate with each other is often more complicated than previously thought.
The breakthrough should help in the fight against cancer as understanding how these signals work in healthy cells means scientists can better investigate what happens when the signal goes wrong. Dr Martin Baron, who led the research, said the discovery could take scientists down a new route in their battle against the disease.
Aeron Haworth | alfa
Pacific Northwest National Laboratory has produced more proof that science can make beautiful art.
The public voted on its favorite scientific images, with a tie declared from votes cast via Facebook for the most artistic image of the annual competition.
In one image, custom-made carbon materials caught by a sophisticated scientific microscope seem to take on the shape of a sunflower lying in a tangle of grass.
Researchers are synthesizing new materials within the pores of commercially available carbon felt, creating smaller and smaller pores to increase surface area. The material could have applications in cooling, energy storage or transportation.
The other top winner was another microscopic image, this one looking like coral against a dark background.
It is actually a battery electrode surface using certain molecules from a complex mixture of raw components.
A third winner was picked by PNNL director Steven Ashby.
The Director’s Choice Award went to a brightly colored image of amorphous gel at the molecular scale, caught as it transformed from a liquid to a stable nanoparticle solid. The photo was from research to develop a material to treat harmful emissions before they pass through the exhaust pipe of vehicles.
It’s not just the local community that has recognized the beauty in images created as part of research at the Department of Energy national lab in Richland.
An image included among the 94 submitted for the PNNL online contest was picked as a finalist in the Federation of American Societies for Experimental Biology’s BioArt contest.
The image shows a surprise found by scientists studying how ancient glass has aged. They found the remains of bacteria that once lived on the glass.
The research is being done to help determine the long-term stability of glassified waste to be made at the Hanford vitrification plant. | <urn:uuid:f16b7961-dc59-482d-95e8-1ad21ea6e922> | 2.78125 | 364 | News Article | Science & Tech. | 32.091965 | 95,610,318 |
An HTML web page consists of elements and tags. Let us understand them in detail.
A < tag-name > makes an HTML tag. For example, <p> is a paragraph tag.
An opening < tag-name >, some content, followed by a closing </ tag-name > make up an HTML element. For example, <h1>Hello World!</h1> is an element consisting of an opening heading tag, then the text "Hello World!", followed by a closing heading tag.
In HTML code you have to open and close tags to make an HTML element. That is, the content must be enclosed within a matching pair of opening and closing tags to make up an HTML element.
Consider the following HTML code.
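A minimal example matching the description below (any equivalent markup would do):

```html
<!DOCTYPE html>
<html>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>
```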
The opening and closing body tags enclose the h1 tag, and the opening and closing h1 tags enclose the text "Hello World!".
If the tags are not properly matched and closed, the result is unexpected or invalid.
For example, consider the following invalid HTML code.
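One possible illustration, with the closing tags crossed so the h1 element is not properly nested inside body:

```html
<body>
  <h1>Hello World!
</body>
  </h1>
```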
Empty elements are HTML elements without any content. For example, the line break tag <br> has no closing tag and no content, so it is an empty element.
Have fun learning :-)
Copyright © 2014 - 2018 DYclassroom. All rights reserved.
Parandroid Messaging uses ‘Diffie-Hellman key exchange’ to establish a ‘secret key’ which we use to encrypt the SMS message. The actual encryption used is 192 bit AES. The image below sums it all up.
When a user first launches Parandroid Messaging, he is guided through some information about how Parandroid Messaging works, and is then prompted to enter a password. When a user successfully enters a password, Parandroid Messaging generates a 1024 bit Diffie-Hellman public and private key, storing the public key in the clear, and the private key 256 bit AES encrypted using the password just provided, on the phone's internal memory. A user can always generate a new keypair from the menu.
Parandroid Messaging users will have to send their public key to the users they want to securely communicate with. When both users have each other’s public key, they can both generate a shared ‘secret key’ used to encrypt and decrypt the messages. This secret key is generated using the ‘other’ user’s public key and your own private key. The secret key is created on-the-fly when needed, so it’s never stored locally.
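The key idea can be sketched in a few lines (a toy modulus for illustration; Parandroid uses 1024-bit parameters, and this is not the app's actual code):

```python
# Illustrative Diffie-Hellman exchange with toy-sized public parameters.
P = 23   # public prime modulus (far too small for real use)
G = 5    # public generator

def make_keypair(private):
    """Return (private, public) where public = G^private mod P."""
    return private, pow(G, private, P)

def shared_secret(my_private, their_public):
    """Both sides compute the same value, G^(a*b) mod P."""
    return pow(their_public, my_private, P)

a_priv, a_pub = make_keypair(6)    # Alice keeps a_priv, sends a_pub
b_priv, b_pub = make_keypair(15)   # Bob keeps b_priv, sends b_pub

# Each side combines its own private key with the other's public key.
assert shared_secret(a_priv, b_pub) == shared_secret(b_priv, a_pub)
```

That shared value is what the app then feeds into the symmetric (AES) encryption of each message.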
Since the private key is stored encrypted on the phone’s internal memory, a password needs to be provided before accessing the private key. The encryption we use on the private key is 256 bit AES (bouncycastle’s ‘PBEWithSHA256And256BitAES-CBC-BC’ provider, to be precise). In the preferences menu, a user can specify if the password needs to be ‘forgotten’ when the phone enters sleep-mode. In other words, when the screen goes black, you’ll have to re-enter your password when you do an action that needs the private key, like reading encrypted messages or composing a new one. Alternatively, users can choose to keep the password in memory during the lifetime of the Parandroid Messaging instance, which can be up to many hours. As this can be a security risk, the default setting is ‘forget on sleep’.
The password used to encrypt the private key is sometimes also used in some other places, like ‘manage public keys’ and ‘generate new keypair’ for obvious reasons. | <urn:uuid:f120953c-328c-498a-a448-1b18d6f79ce0> | 2.53125 | 531 | Documentation | Software Dev. | 44.973637 | 95,610,351 |
Lorenzo Leva (11,708 points)
When adding strings to a page with this function, what is the meaning and use of this: "<h1>...</h1>" and the other ones?
Steven Parker (131,121 points)
When you write strings to a web page, they can be more than just text. They can include HTML code, and those are "tags" that are used in HTML.
Those in particular define a "heading type 1", and by default the text between them will be displayed by the browser in a much larger font than normal text.
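For example, the same sort of text renders very differently inside the two tags:

```html
<h1>This renders large, as a top-level heading</h1>
<p>This renders as normal body text.</p>
```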
Tales from the Tides - FOR BIRD'S SAKES! (Science Stories for Kids!)
Why go out in one of these boats (see image on right) to a remote coast in Alaska…in the winter, you ask? (I mean, it’s cold there even in the SUMMER, right?) Well, just like you check on your gerbils, fish, or any living thing you are responsible for, parks do too. We want to know who’s out there, where they are, and how they’re doing. We go out in winter and in summer because the types of birds that live along the coast change with the seasons.
Now, remember those pet fish we referred to earlier? Imagine you came home one day only to discover that they had all died! You’d probably want to know why and keep a close eye on the next batch. Something very similar happened in Katmai in 2016. The researchers found dead birds on EVERY SINGLE beach they went to. A LOT of them.
Now this is a little more like it! This is the Dream Catcher, the team’s home base during the four days of the survey. Boy were they glad to see this site at the end of each day!
So, what exactly did they see out there? Lots of beautiful and sometimes even odd-looking birds, sea otters, wolves, whales and a few other inquisitive visitors... Watch the video below to see who accompanied one of the skiffs!
Watch this video to see what happened! It was quite an experience for the research team to have so many sea lions come right up to their boat and follow them! How would you have felt if you had been on that skiff? A little nervous? Excited? Probably both. This is a great display of sea lions' playful and curious nature. (Be warned though - they can be aggressive. Do NOT approach them.) Videos like this reveal marine mammals' character and intelligence and encourage us to protect them.
Have you got the codes figured out yet? Take the first two letters of each word in the bird's name and you've got it's code name. Get it? "Em" for emperor and "go" for goose. Try it yourself on the next bird!
(If you guessed "HADU", you're right!)
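In fact, the naming rule is simple enough to write as a tiny program (a sketch of the rule as described here; real banding codes have extra rules for tricky names):

```python
def bird_code(name):
    """First two letters of each word, uppercased: 'Emperor Goose' -> 'EMGO'."""
    return "".join(word[:2].upper() for word in name.split())

print(bird_code("Emperor Goose"))   # EMGO
print(bird_code("Harlequin Duck"))  # HADU
```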
That makes these surveys really valuable. Remember when we said that bird species can recover if conditions are right? These surveys tell park managers where the popular wildlife hangouts are. Once they know this, they can study the conditions in these areas and make regulations, like setting speed limits on boats, to protect the birds and marine mammals in these areas.
But they can’t do it all. After all, warm water, oil and plastic trash don’t respect park boundaries. You can help by learning how to protect wildlife where you live! Volunteer during beach and stream cleanups in your area! Even using less plastic (straws are one of the most common types of trash found on beaches) and riding your bike or walking (when it’s safe and you have permission) instead of driving can make a difference. You don't have to be a wildlife biologist to help save wild places and protect the amazing animals we share this planet with. | <urn:uuid:627291a7-d1a5-4baf-a643-442fc97a827e> | 3.078125 | 694 | Personal Blog | Science & Tech. | 71.622377 | 95,610,395 |
The Question: What is the role of beaver dams on hydrological processes in montane riparian areas?
Understanding hydrological processes such as inundation and recharge of alluvial aquifers in riparian areas is key to proper management of rivers and watersheds. For example, these processes can influence biodiversity by providing habitat for a disproportionately large number of wildlife species (e.g., birds, butterflies, small mammals, insects, and amphibians). Biologists have long assumed that beaver (Castor canadensis) may influence hydrologic processes in riparian areas of rivers through the building of dams. Researchers conducted this study in order to test the assumption that beaver dams play an integral role in creating and maintaining healthy montane riparian areas.
The Project: Measure ground water flow patterns and levels before and after the breach/construction of two beaver dams.
During the summers of 2002-2004, Cherie Westbrook and David Cooper (Colorado State University) and Bruce Baker (USGS) used 95 pipe wells to measure subsurface water flow patterns and water table fluctuations along a one-mile reach of the Colorado River containing two beaver dams. One of the dams was constructed in October 2003, and the other dam breached in May 2004, allowing researchers to take surface and subsurface hydrologic data in the study area in the presence and absence of beaver dams.
This study found that beaver dams strongly affect the hydrologic processes of the Colorado River and its floodplain and terraces near its headwaters. The beaver dams and ponds greatly enhance the depth, extent, and duration of inundation associated with floods. Additionally, the investigators found that beaver dams elevate the water table during both high and low river flows and slow the decline of the water table during dry months. Unlike previous studies, the researchers found that the main effects of beaver dams occur below the dam and not just at the pond created by the dam. Overall this study confirms that beavers and their associated dams play an important role in the formation, function, and persistence of riparian wetlands.
The Results: Beaver can influence hydrological processes in streams and valleys and thus create flow patterns suitable for the formation and persistence of wetlands. | <urn:uuid:6b2f9041-19ac-4219-ab0e-9486147f506f> | 3.9375 | 463 | Knowledge Article | Science & Tech. | 31.277816 | 95,610,410 |
Bottom friction and bedload sediment transport caused by boundary layer streaming beneath random waves
- Dag Myrhaug, Lars Erik Holmedal, Håvard Rue
- Department of Marine Technology and Department of Mathematical Sciences, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
- Applied Ocean Research, Elsevier, 2005
The effect of boundary layer streaming on sea bed shear stresses, as well as on the mean bedload sediment transport rate, beneath random waves, is investigated. Formulas for the bottom friction and bedload sediment transport under regular waves have been applied to obtain the mean bedload sediment transport rate caused by steady streaming under linear random waves. Friction factors for steady streaming under random waves are also provided. The effect of streaming and second order wave asymmetry on the mean bedload sediment transport rate is discussed.
So I understood the basic idea of DNA sequencing: it's the process of working out the order of the four bases A, C, G, T in a strand. And the given examples on the challenge also make sense, except the second one.
I don't understand: can we output a base from the four DNA bases even though that base is not present in the input sequence? Clearly
TTGAG doesn't have
C in it, but the output still includes
["G","C"]. How? Some explanation would be great.
[I have not started coding for the algorithm yet though; I'm just trying to understand the problem first, so I didn't provide any link]
@randelldawson wanna take a look mate ! | <urn:uuid:2b9381bc-2e89-4c0d-bf61-1ea3d01a4554> | 3.625 | 152 | Comment Section | Science & Tech. | 76.161208 | 95,610,425 |
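For reference, as I understand the challenge, each input base is returned together with its complement (A pairs with T, C pairs with G), which is why C shows up for TTGAG even though the input has no C. A sketch:

```python
# Each base is paired with its complementary base: A<->T, C<->G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def pair_element(strand):
    """Return each input base together with its complement."""
    return [[base, COMPLEMENT[base]] for base in strand]

print(pair_element("GCG"))  # [['G', 'C'], ['C', 'G'], ['G', 'C']]
```

So every G in TTGAG contributes a ["G","C"] pair; the C comes from the pairing rule, not from the input.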
Hi, I've finished the code tutorial for the pyg latin translator. It is telling me it should work now, however when I run it, I get the word output that I enter, but it is still as I entered it. I followed the instructions and tried to find others with the same issue, but I just cannot work out why it isn't working. Any help would be greatly appreciated. Below is my code.
pyg = 'ay'
original = raw_input('Enter a word:')
if len(original) > 0 and original.isalpha():
    word = original.lower()
    first = word
    new_word = word + first + pyg
    new_word = new_word[1:]
    print original
else:
    print 'empty'
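Two small slips explain the behavior: `first = word` stores the whole word rather than its first letter (it should be `word[0]`), and the final `print original` prints the untouched input instead of `new_word`. A corrected version, rewritten as a Python 3 function for clarity:

```python
PYG = 'ay'

def pyg_latin(original):
    """Return the Pig Latin form of a single alphabetic word."""
    if len(original) > 0 and original.isalpha():
        word = original.lower()
        first = word[0]                 # first LETTER, not the whole word
        new_word = word + first + PYG   # e.g. 'python' -> 'pythonpay'
        return new_word[1:]             # drop the leading letter
    return 'empty'

print(pyg_latin('python'))  # ythonpay
```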
University of Leeds research shows that porous sandstone, drained of oil by the energy giants, could provide a safe reservoir for carbon dioxide. The study found that sandstone reacts with injected fluids more quickly than had been predicted - such reactions are essential if the captured CO2 is not to leak back to the surface.
The study looked at data from the Miller oilfield in the North Sea, where BP had been pumping seawater into the oil reservoir to enhance the flow of oil. As oil was extracted, the water that was pumped out with it was analysed and this showed that minerals had grown and dissolved as the water travelled through the field. (1)
Significantly, PhD student Stephanie Houston found that water pumped out with the oil was especially rich in silica. This showed that silicates, usually thought of as very slow to react, had dissolved in the newly-injected seawater over less than a year. This is the type of reaction that would be needed to make carbon dioxide stable in the pore waters, rather like the dissolved carbonate found in still mineral water. (2)
The study gives a clear indication that carbon dioxide sequestered deep underground could also react quickly with ordinary rocks to become assimilated into the deep formation water.
The work was supervised by Bruce Yardley, Professor in the School of Earth and Environment at the University, who explained: “If CO2 is injected underground we hope that it will react with the water and minerals there in order to be stabilized. That way it spreads into its local environment rather than remaining as a giant gas bubble which might ultimately seep to the surface.
“It had been thought that reaction might take place over hundreds or thousands of years, but there’s a clear implication in this study that if we inject carbon dioxide into rocks, these reactions will happen quite quickly making it far less likely to escape.”
Although extracting CO2 from power stations and storing it underground has been suggested as a long-term measure for tackling climate change, it has not yet been put to work for this purpose on a large scale. “There is one storage project in place at Sleipner, in the Norwegian sector of the North Sea, and some oil companies have actually used CO2 sequestration as a means of pushing out more oil from existing oilfields,” said Prof Yardley.
In the UK the Prime Minister has recently announced a major expansion of energy from renewable sources and the launch of a competition to build one of the world's first carbon capture and storage plants. (3) The Leeds study suggests the technique has long-term potential for safely storing this major by-product of our power stations, rather than allowing it to escape and further contribute to global warming.
(1) The study covered samples of water pumped out from the Miller oilfield over a seven-year period. The data is routinely collected by BP to assess whether water-borne chemicals are liable to cause costly problems of scale to the drilling equipment. The Leeds scientists compared these with the composition of the water that was there before and the water that was injected. This showed that minerals had grown and dissolved as the water travelled through the field.
(2) Stephanie Houston worked on the project as part of an Industrial Case Studentship, funded by the Natural Environment Research Council and BP. Her work was supervised by Professor Bruce Yardley, who is based in the Institute of Geological Sciences within the School of Earth and Environment at the University of Leeds.
On my website there are sample Excel files, many of which use event programming to make things happen. Typing something in a cell, selecting a different cell, activating a worksheet, and many other actions, are Events. Add a bit of code in the background, and things can happen automatically if an event occurs. No need to click a button to run a macro, just change a cell, or refresh a pivot table, and the code for that event will run.
For example, on this worksheet there’s a data validation drop down list in cell B2, where a day of the week can be selected.
When you select a weekday, that changes the worksheet, and could trigger a Worksheet_Change event.
In this example, the Worksheet_Change event code checks the address of the cell that was changed, to see if it was cell B2. If it was, a message is displayed.
View the Worksheet Code
Multiple Worksheet Events
That code works so well, that you’d like to add another Worksheet_Change event. If you make a change in cell B4, you’d like the date automatically entered in cell B5. Unfortunately, if you add another Worksheet_Change event, you’ll see an error message when you change a cell, because you’ve used the same procedure name twice on the worksheet.
Combine the Code
Instead of creating separate events with the same name, combine both pieces of code into one. For example, you could use Select Case, and specify what should happen when specific cells are changed.
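A sketch of such a combined handler (the cell addresses follow the two examples above; the message wording, and the `EnableEvents` guard that stops the date write from re-triggering the handler, are my additions):

```vb
Private Sub Worksheet_Change(ByVal Target As Range)
    Select Case Target.Address
        Case "$B$2"
            'the drop down cell changed - show a message
            MsgBox "You picked " & Target.Value
        Case "$B$4"
            'stamp today's date in B5, suppressing events so that
            'writing to B5 does not fire Worksheet_Change again
            Application.EnableEvents = False
            Me.Range("B5").Value = Date
            Application.EnableEvents = True
    End Select
End Sub
```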
Some situations will require a more complex solution, and if you experiment a bit, you should be able to include multiple outcomes within a single worksheet event’s code. | <urn:uuid:effd3eec-b5c8-4786-ac28-f98ee985de1a> | 2.75 | 370 | Tutorial | Software Dev. | 49.812606 | 95,610,448 |
From the dawn of civilization, humans have dreamed of exploring the cosmos. To date, we have launched over 60 successful missions to the Moon (including six that landed on the Moon with humans), 17 successful missions to Mars, 13 missions to the outer solar system, and five that have left the solar system.
However, many have been concerned lately that the glory days of space exploration are behind us. The Apollo missions ended 44 years ago, and still we have not returned to the Moon. Our current Mars missions are only modestly more sophisticated than earlier missions. And futuristic dreams of humans traveling to the planets and to the cosmos have remained decades if not centuries away.
But within the past year or so, this situation seems to be changing. Perhaps it has been inspired by a string of highly successful Hollywood movies, including Gravity, Interstellar, The Martian and Star Wars: The Force Awakens. Perhaps it stems in part from the surprising success of private firms such as SpaceX and Blue Origin. Or perhaps it is simply the insatiable curiosity and wanderlust that is so deeply ingrained into our species via evolution.
Yuri Milner's plan to explore Alpha Centauri
Arguably the most daring plan to date is the Breakthrough Starshot project that was announced on 12 April 2016 by Russian billionaire Yuri Milner, with backing from physicist Stephen Hawking and Facebook founder Mark Zuckerberg.
Milner proposes to send a fleet of "nanocraft" to explore Alpha Centauri and its planets -- thousands of credit-card-sized spacecraft (to increase the chances that at least some will survive the journey), quickly accelerated to 20 percent of the speed of light by giant light sails powered by laser beams from a kilometer-square array of earth-bound lasers. Upon arrival in the Alpha Centauri system in 20 years or so, the nanocraft will send back photos and other data via laser beams, which will arrive at Earth four years later. Needless to say, this vision presents daunting technical hurdles, including:
- Fabricating diode lasers, megapixel cameras, computer processors and batteries for the nanocraft, together weighing less than one gram and able to survive 20 years of exposure to interstellar dust and cosmic rays.
- Maintaining integrity of the light sail while it and its nanocraft are being accelerated by lasers.
- Producing sufficient laser power and maintaining the focus of the laser array.
- Detecting the images and data that are sent back to Earth.
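The quoted timeline is straightforward arithmetic (4.37 light-years is the accepted distance to Alpha Centauri):

```python
DISTANCE_LY = 4.37      # Earth to Alpha Centauri, in light-years
CRUISE_FRACTION = 0.20  # cruise speed as a fraction of the speed of light

travel_years = DISTANCE_LY / CRUISE_FRACTION  # just under 22 years
signal_years = DISTANCE_LY                    # laser data returns at light speed
```

That gives roughly 22 years in transit and a bit over 4 years for the data to come home, matching the article's "20 years or so" plus "four years later".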
For additional details, see the Scientific American report.
An entirely different, but comparably ambitious, proposal to study extraterrestrial civilizations is to use the Sun as a gravitational lens. SETI pioneer Frank Drake, among others, proposes sending spacecraft outside the solar system to the focal point of the Sun's gravitational field, which, by principles of General Relativity, can then see enormously magnified images and even microwave transmissions coming from a distant star system.
Renewed interest in humans on Mars
An equally significant development is the resurgence in interest for humans not only to visit Mars but also to take up residence and ultimately form an independent colony.
The Mars Society observes that a round-trip journey to Mars is possible by manufacturing fuel for the return trip in situ on Mars (otherwise transporting fuel to Mars for the return trip is 90 percent of the outbound payload). In particular, they note that CO2 extracted from Martian atmosphere and H2 produced from Martian ice by electrolysis can be combined to form fuel by the exothermic reaction 3 CO2 + 6 H2 → CH4 + 2 CO + 4 H2O. A fully fueled and tested lift-off vehicle could be ready and waiting on Mars before the astronauts leave Earth.
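As a sanity check, the reaction as printed is atom-balanced:

```python
# 3 CO2 + 6 H2 -> CH4 + 2 CO + 4 H2O
left  = {"C": 3,     "O": 3 * 2,  "H": 6 * 2}      # reactants
right = {"C": 1 + 2, "O": 2 + 4,  "H": 4 + 4 * 2}  # products
assert left == right  # balanced: 3 C, 6 O, 12 H on each side
```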
Mars One, an organization founded in 2012, proposes to send humans to Mars by 2027 and establish a permanent colony there, to be funded in part by a reality TV show. Over 200,000 persons responded to their 2013 call for interest; this list has now been narrowed down to 100.
Tesla and SpaceX founder Elon Musk has also been formulating plans for a Mars colony, which he has promised to announce soon. His project reportedly will be known as the Mars Colonial Transporter, to be powered by a large version of the Raptor rocket engine, specifically designed for the exploration and colonization of Mars.
Advanced propulsion technology
Looking a bit further to the future, advanced propulsion systems will be required if more than a handful of people are to reach Mars or beyond. NASA has been exploring several concepts in this direction, including:
- Ion propulsion: A high-energy electron collides with a xenon atom, releasing electrons, and the charged atom is then discharged at high speed (up to 150,000 kph).
- High-power electric propulsion: This is like ion propulsion, except that the xenon ions are produced by a combination of microwave and magnetic fields, using a process called electron cyclotron resonance.
- Fusion-driven rocket: A fusion energy source releases its energy directly into the propellant, without converting to electricity; the propellant is rapidly heated and accelerated to high exhaust velocity (roughly 100,000 kph) with no physical interaction with the spacecraft, thus avoiding deterioration.
Space travel and Fermi's paradox
These developments have clear implications for Fermi's paradox, that decades-old unsolved conundrum of why, given that an extraterrestrial civilization could explore the Milky Way in a million years or so (an eyeblink in cosmic time), we do not see evidence of even a single society. Numerous scientists have examined Fermi's paradox in detail and, as we wrote earlier, have proposed various explanations, such as:
- They exist, but are too far away.
- They are under strict orders not to disclose their existence.
- They exist, but have lost interest in communication and exploration.
- They are calling, but we do not recognize the signal.
- Civilizations like us invariably self-destruct.
- We are alone, at least within the Milky Way if not beyond.
All of these explanations have very reasonable rejoinders. Items 2 and 3 (and several other similar proposed explanations) fall prey to a diversity argument -- in a vast galactic ensemble it is hardly credible that every individual in every civilization forever lacks interest in communication and exploration, nor is it credible that some galactic society ban is absolutely 100 percent effective (note that once a signal has been sent, it cannot be called back by any known law of physics). Item 4 does not seem credible, since it is very reasonable to assume that at least some communications are being sent to planets such as Earth in a form that we could readily recognize, and, as before, it is not credible that a ban on such targeted communication could be absolutely 100 percent effective. Item No. 6 (we are alone) seems incredible in light of the thousands of recently discovered extrasolar planets, many in the habitable zone.
With regard to Item 1 (they exist, but are too far away), it is clear that the many exciting new developments in space exploration seriously draw into question the presumed technical impossibility of exploring the cosmos. For example, a fleet of "von Neumann probes" could travel to distant stars, make additional copies of themselves (using the latest software beamed from the home planet) and launch to yet more distant stars. Analyses of this scheme show the entire Milky Way could be explored in a million years or so. And keep in mind that any other society is, almost certainly, many thousands or millions of years more advanced, so cost and distance cannot be insuperable obstacles.
These developments also draw Item 5 into question (civilizations like us invariably self-destruct). After all, we have survived 200 years of technological adolescence and have not yet destroyed ourselves. And if any of the current exploration and colonization plans work out, then the long-term survival of our species will be immune to possible calamities on Earth. Within a decade, we will become a multi-planet species, and within a century we very likely will be a multi-solar-system species.
So what is the answer to Fermi's paradox? Good question! We humans don't know. | <urn:uuid:3e2facaf-0091-4f99-94b4-c4b16e7c160e> | 3.1875 | 1,667 | Nonfiction Writing | Science & Tech. | 32.858168 | 95,610,450 |
Subject: simplified coefficient Posted: 3/3/2018 Viewed: 226 times
I want to simulate the flows of a river in Colombia, and I am using the simplified coefficient method as a test before doing it with the soil moisture method. However, I do not know how to compare the simulated flows with the observed ones. Can someone help me, please?
To compare simulated flows with the observed flows, you must do the following in your model:
1) Find the location of the gauge stations in your model - ideally have them in a GIS file.
2) Use the gauge locations to calculate the areas of the catchments that you will use for your simplified coefficient models.
3) Find the precipitation data for the catchment areas that you have defined. Each catchment will need its own precipitation data, averaged over the entire area of the catchment.
4) Build your WEAP model, and plug in your historic climate and streamflow information. (see www.WEAP21.org/tutorial chapter of Data, Results and Formatting for more detailed instructions)
5) Provided that the historic climate and streamflow data overlap for some period of years, compare the simulated streamflow in the WEAP river with the historic data entered in your streamflow gauge in the WEAP model. You can do this by looking at the results for "Selected River Reaches" and selecting just the reach with the relevant river flow volume and the stream gauge; they should share the same reach number.
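The comparison in step 5 boils down to turning catchment rainfall into a flow volume and measuring the mismatch against the gauge. As a toy illustration (the runoff-coefficient formula here is my simplification of the method's essence, not WEAP's actual internals):

```python
def simulated_runoff_m3(precip_mm, area_km2, coeff):
    """Runoff volume per time step; 1 mm over 1 km^2 equals 1,000 m^3."""
    return [coeff * p * area_km2 * 1000.0 for p in precip_mm]

def percent_bias(simulated, observed):
    """Positive means the model overestimates the gauged flow."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)

sim = simulated_runoff_m3([10.0, 25.0], area_km2=2.0, coeff=0.5)
print(sim)                                    # [10000.0, 25000.0]
print(percent_bias(sim, [9000.0, 26000.0]))   # 0.0
```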
Feb 16, 2017 06:58 AM EST
In its current plans, NASA's first launch is scheduled for late 2018. This particular launch does not include a crew; it will test the rocket's systems and the capsule, Orion. However, NASA is looking into speeding up its plans to take the heavy-lift rocket into space with a crew on board.
The acting NASA administrator, Robert M. Lightfoot Jr., says that the agency is currently studying everything involved in carrying a crew on board the mammoth rocket. And that includes a life support system capable of supporting deep space missions.
Originally, the second mission of the mammoth rocket would carry the crew and lift off in 2021, as reported by The New York Times. Now, NASA is reviewing the feasibility of such a mission with an earlier launch date.
In the quest to send humans into deep space, NASA is taking a huge risk. The plan was originally to send a crewless mission to gain confidence before sending a crewed one, enabling the agency to test the environment of a prolonged flight.
Which is why NASA is looking to develop oxygen recovery technologies for deep space missions, as reported by Space Daily. The agency is now looking into two proposals that could help astronauts breathe easier during the long journey. With a team of engineers and experts working from different research groups, the agency is confident that they can supply the needed systems for Orion.
Honeywell Aerospace in Phoenix has proposed a soot-free recovery of oxygen from carbon dioxide. While the UMPQUA Research Co. in Myrtle Creek, Oregon proposed a continuous Bosch reactor. Both proposals have been selected and funded by NASA.
It is still unclear if NASA can make the deadline and Orion capable of supporting a whole crew on board given the short time table. But Lightfoot expresses that current United States President Donald Trump has their goals in mind.
So before NASA goes to Mars, they need to test Orion first. Watch the PBS clip below for more details:
See Now: Facebook will use AI to detect users with suicidal thoughts and prevent suicide© 2017 University Herald, All rights reserved. Do not reproduce without permission. | <urn:uuid:bf4f754d-d367-43e1-bed5-10efa16e2486> | 2.71875 | 435 | Truncated | Science & Tech. | 54.097352 | 95,610,485 |
Solar storms may have torn away Mars water
WASHINGTON - Solar storms, like a big one that affected Earth last year, might have torn away the water that used to cover parts of Mars, NASA scientists said Thursday.
Astronomers believe Mars once had oceans of surface water, enough to support long-ago life, but they have not determined where that water went some 3.5 billion years ago.
Now researchers monitoring the after-effects of a monster solar storm that hit Earth in October and November 2003 said they think repeated buffeting by this kind of space weather could have ripped away Mars' watery veil.
Latest Houston & Texas News
- George H. W. Bush's Heart Doctor Fatally Shot While Cycling GeoBeats
- STEM celebration goers beat the North Texas heat Fox4
- Weekend Greeting Fox 26 Houston
- Keeshea Pratt Band performs 'Have a Good Time Y'all' Fox 26 Houston
- Investigation into death of boy in hot child care center bus Fox 26 Houston
- Threesome the top American sexual fantasy Fox 26 Houston
- Outrage after north Houston mosque set on fire Fox 26 Houston
- Apps to help prevent hot vehicle deaths Fox 26 Houston
- Heat advisory Fox7
- Family of Raymond Pryer meets for balloon release, boy died after being left in daycare van Fox 26 Houston
- Brother, sister wanted on kidnapping, assault charges Fox 26 Houston
- Will charges follow after death of 3-year-old boy? Fox 26 Houston
- Mother in critical condition after car hits her then crashes into a home Fox 26 Houston
- American detained in Vietnam for 41 days released after trial, returning home to Houston Fox 26 Houston
- 5 p.m. July 20 FOXRAD Forecast Fox 26 Houston
- Outdoor jobs mean some can't escape record North Texas heat Fox4
Unlike Earth, which has a protective magnetosphere that guards the planet against bombardment by high-energy particles during a solar storm, Mars has only isolated zones of protection, astronomers said in a telephone-and-Internet briefing.
"Where did it go?" Zurbuchen asked rhetorically about Mars' water. "One of the key ideas that people are talking about is the connection to these space storms ... Over 3.5 billion years, there's kind of a gradual erosion of this water."
The astronomers referred to a video simulation of what might have happened on Mars, available online at http://www.gsfc.nasa.gov/topstore/2004/0708flare1.htm.
The brief video, Movie 4 at the Web site, showed water seemingly blowing away from the planet.
Scientists worked with a small fleet of robotic spacecraft to watch the impact of last year's "Halloween" solar storm, the most powerful ever monitored.
Starting with the SOHO spacecraft which monitors the sun from its vantage point near Earth, the astronomers also tracked the solar blast wave with the Ulysses craft near Jupiter and the Cassini craft that just began orbiting Saturn.
They followed the wave all the way out to the fringes of the solar system, where the two relatively ancient Voyager probes, launched in 1977, are aiming for interstellar space.
On Earth, the storm caused the rerouting of aircraft and the disruption of some long-distance radio communications and satellite operations. In space, astronauts aboard the International Space Station had to move periodically into the Russian-supplied Service Module, which offered better shielding from solar storms. | <urn:uuid:aea6a456-b14c-4231-bcae-2f91e34b9be5> | 2.796875 | 714 | News Article | Science & Tech. | 49.750905 | 95,610,494 |
The Expedition 56 crew members explored a variety of microgravity science today potentially improving the lives of people on Earth and astronauts in space. The orbital residents are also unpacking a new resupply ship and getting ready for the departure of another.
Cancer research is taking place aboard the International Space Station possibly leading to safer, more effective therapies. Flight Engineer Serena Auñón-Chancellor contributed to that research today by examining endothelial cells through a microscope for the AngieX Cancer Therapy study. AngieX is seeking a better model in space to test a treatment that targets tumor cells and blood vessels.
She also teamed up with Commander Drew Feustel imaging biological samples in a microscope for the Micro-11 fertility study. The experiment is researching whether successful reproduction is possible off the Earth.
The Northrop Grumman Cygnus space freighter has been packed full of trash and is due to leave the space station Sunday morning. Flight Engineer Alexander Gerst will command the Canadarm2 robotic arm to release Cygnus at 8:35 a.m. EDT as Auñón-Chancellor backs him up. It will orbit Earth until July 30 for engineering studies before burning up harmlessly over the Pacific Ocean.
Expedition 56 Commander Drew Feustel and Flight Engineer Ricky Arnold of NASA completed the sixth spacewalk at the International Space Station this year at 2:55 p.m. EDT, lasting 6 hours, 49 minutes. The two astronauts installed new high-definition cameras that will provide enhanced views during the final phase of approach and docking of the SpaceX Crew Dragon and Boeing Starliner commercial crew spacecraft that will soon begin launching from American soil.
They also swapped a camera assembly on the starboard truss of the station, closed an aperture door on an external environmental imaging experiment outside the Japanese Kibo module, and completed two additional tasks to relocate a grapple bar to aid future spacewalkers and secured some gear associated with a spare cooling unit housed on the station’s truss.
This was the 211th spacewalk in support of assembly and maintenance of the unique orbiting laboratory where humans have been living and working continuously for nearly 18 years. Spacewalkers have now spent a total of 54 days, 23 hours and 29 minutes working outside the station.
During the ninth spacewalk of Feustel’s career, he moved into third place for total cumulative time spent spacewalking with a total of 61 hours and 48 minutes. It was Arnold’s fifth spacewalk with a total time of 32 hours and 4 minutes.
NASA Television and the agency’s website have begun the broadcast of today’s spacewalk.
Expedition 56 Commander Drew Feustel and Flight Engineer Ricky Arnold of NASA are preparing to exit the International Space Station to make improvements and repairs to the orbiting laboratory. The spacewalk is scheduled to begin about 8:10 a.m. EDT and last about six-and-a-half hours.
Feustel and Arnold will install new high-definition cameras near an international docking adapter mated to the front end of the station’s Harmony module. The additions will provide enhanced views during the final phase of approach and docking of the SpaceX Crew Dragon and Boeing Starliner commercial crew spacecraft that will soon begin launching from American soil.
The astronauts also will swap out a camera assembly on the starboard truss of the station and close an aperture door on an external environmental imaging experiment outside the Japanese Kibo module.
Arnold and Feustel will begin Thursday’s spacewalk at 8:10 a.m. to install new high definition cameras to support upcoming commercial crew missions from SpaceX and Boeing to the orbital laboratory. The first uncrewed test missions are planned to begin later this year. The cameras will provide improved views of the commercial crew vehicles as they approach and dock to the station. NASA TV will provide complete live coverage of the 211th space station spacewalk starting at 6:30 a.m.
Auñón-Chancellor and Gerst, who just arrived at the station on Friday, will assist the spacewalkers on Thursday. Gerst will help the spacewalkers in and out of their spacesuits. Auñón-Chancellor will operate the Canadarm2 robotic arm. The duo practiced today on a computer the robotics procedures necessary to maneuver a spacewalker to and from the worksite on the starboard side of the station’s truss structure.
Arnold and Feustel had some extra time today to work on science and maintenance activities. Arnold worked with the Microgravity Science Glovebox to troubleshoot a semiconductor crystal growth experiment. Feustel performed some plumbing work in the Tranquility module before relocating a pair of incubator units to support new experiments being delivered on the next SpaceX Dragon cargo mission. Finally, the duo readied the Quest airlock and their spacesuits for Thursday morning’s spacewalk.
The Expedition 55 crew is unloading the Orbital ATK Cygnus space freighter today ahead of next week’s crew swap at the International Space Station. On top of the cargo transfers and crew departure activities, the orbital residents are also running space experiments to benefit humans on Earth and astronauts in space.
NASA Flight Engineer Scott Tingle has been working inside Cygnus today unpacking station hardware and research gear delivered just last week. He removed science kits and spacewalking gear and stowed them throughout the orbital lab.
Tingle finally wrapped up his workday with his homebound crewmates Commander Anton Shkaplerov and Flight Engineer Norishige Kanai preparing for their June 3 return to Earth. The trio packed personal items and other gear inside the Soyuz MS-07 spacecraft that will parachute the crew to a landing in Kazakhstan after 168 days in space.
Back on Earth, Soyuz MS-09 Commander Sergey Prokopyev and Flight Engineers Serena Auñón-Chancellor and Alexander Gerst are in final training in Kazakhstan ahead of their June 6 launch to the space station. The Expedition 56-57 trio will orbit Earth for two days before docking to the Rassvet module to begin a six-month stay in space.
NASA astronauts Ricky Arnold and Drew Feustel, who are staying in space until Oct. 4, familiarized themselves today with the new Cold Atom Lab’s hardware and installation procedures. The device, delivered last week on Cygnus, will research what happens to atoms exposed to temperatures less than a billionth of a degree above absolute zero.
The two later split up as Arnold set up thermal hardware that will help scientists understand the processes involved in semiconductor crystal growth. Feustel moved on and began uninstalling a plant biology facility, the European Modular Cultivation System (EMCS), which has finalized its research operations. The EMCS will now be readied for return to Earth aboard the next SpaceX Dragon cargo craft.
The Cygnus resupply ship from Orbital ATK is now open for business and the Expedition 55 crew has begun unloading the 7,400 pounds of cargo it delivered Thursday morning. The orbital residents are also conducting space research and preparing for a crew swap in early June.
There are now four spaceships parked at the International Space Station, the newest one having arrived to resupply the crew early Thursday morning. Astronauts Drew Feustel and Norishige Kanai opened Cygnus’ hatches shortly after it was installed to the Unity module. The cargo carrier will remain attached to the station until July so the astronauts can offload new supplies and repack Cygnus with trash.
NASA astronaut Scott Tingle, who caught Cygnus with the Canadarm2 robotic arm, swapped out gear inside a small life science research facility today called TangoLab-1. Tingle also joined Kanai later in the day transferring frozen biological samples from the Destiny lab module to the Kibo lab module.
The duo also joined Commander Anton Shkaplerov and continued to pack gear and check spacesuits ahead of their return to Earth on June 3 inside the Soyuz MS-07 spaceship. When the three crewmates land in Kazakhstan, about three and a half hours after undocking, the trio will have spent 168 days in space and conducted one spacewalk each.
Three new Expedition 56-57 crew members, waiting to replace the homebound station crew, are counting down to a June 6 launch to space. Astronauts Serena Auñón-Chancellor and Alexander Gerst will take a two-day ride to the space station with cosmonaut Sergey Prokopyev inside the Soyuz MS-09 spacecraft for a six-month mission aboard the orbital laboratory.
The Orbital ATK Cygnus cargo ship was bolted into place on the International Space Station’s Earth-facing port of the Unity module at 8:13 a.m. EDT. The spacecraft will spend about seven weeks attached to the space station before departing in July. After it leaves the station, the uncrewed spacecraft will deploy several CubeSats before its fiery re-entry into Earth’s atmosphere as it disposes of several tons of trash.
Orbital ATK’s Cygnus was launched on the company’s Antares rocket Monday, May 21, from the Mid-Atlantic Regional Spaceport Pad 0A at NASA’s Wallops Flight Facility in Virginia. The spacecraft’s arrival brings about 7,400 pounds of research and supplies to support Expedition 55 and 56. Highlights include:
The Ice Cubes Facility, the first commercial European opportunity to conduct research in space, made possible through an agreement with ESA (European Space Agency) and Space Applications Services.
The Microgravity Investigation of Cement Solidification (MICS) experiment is to investigate and understand the complex process of cement solidification in microgravity with the intent of improving Earth-based cement and concrete processing and as the first steps toward making and using concrete on extraterrestrial bodies.
Three Earth science CubeSats
RainCube (Radar in a CubeSat) will be NASA’s first active sensing instrument on a CubeSat that could enable future rainfall profiling missions on low-cost, quick-turnaround platforms.
TEMPEST-D (Temporal Experiment for Storms and Tropical Systems Demonstration) is mission to validate technology that could improve our understanding of cloud processes.
CubeRRT (CubeSat Radiometer Radio Frequency Interference Technology) will seek to demonstrate a new technology that can identify and filter radio frequency interference, which is a growing problem that negatively affects the data quality collected by radiometers, instruments used in space for critical weather data and climate studies.
For more information about newly arrived science investigations aboard the Cygnus, visit:
The Cygnus space freighter from Orbital ATK is closing in on the International Space Station ready to deliver 7,400 pounds of cargo Thursday morning. The Expedition 55 crew members are getting ready for Cygnus’ arrival while also helping researchers understand what living in space does to the human body.
NASA TV is set to begin its live coverage of Cygnus’ arrival at the orbital lab Thursday at 3:45 a.m. EDT. Flight Engineer Scott Tingle will be inside the Cupola and command the Canadarm2 robotic arm to reach out and capture Cygnus at 5:20 a.m. Robotics engineers at Mission Control will then take over and remotely install Cygnus to the Earth-facing port of the Unity module later Thursday morning.
The crew started its day collecting blood and urine samples for a pair of experiments, Biochemical Profile and Repository, looking at the physiological changes taking place in astronauts. Those samples are stowed in science freezers for return to Earth so scientists can later analyze the proteins and chemicals for indicators of crew health.
Another pair of experiments taking place today is looking at bone marrow, blood cells and the cardiovascular system. The Marrow study, which looks at white and red blood cells in bone marrow, may benefit astronaut health as well as people on Earth with reduced mobility or aging conditions. The Vascular Echo experiment is observing stiffening arteries in astronauts that resembles accelerated aging.
Two Expedition 55 Flight Engineers are using virtual reality and computer training today to prepare for next week’s spacewalk at the International Space Station. Robotics controllers from Houston and Japan are also maneuvering a pair of robotic arms for the upcoming spacewalk and satellite deployments.
NASA astronauts Ricky Arnold and Drew Feustel will conduct the 210th spacewalk at the space station beginning Wednesday, May 16 at 8:10 a.m. EDT. The veteran spacewalkers will work for about 6.5 hours swapping thermal control gear that controls the circulation of ammonia to keep external station systems cool. NASA TV begins its live coverage at 6:30 a.m.
The veteran spacewalkers checked the functionality a pair of jet packs that will be attached to their U.S. spacesuits next week. The jet packs, known as Simplified Aid For EVA Rescue (SAFER), provide mobility for spacewalkers in the unlikely event they become untethered from the station. The duo also wore virtual reality goggles to practice maneuvering their SAFER jet packs and reviewed their spacewalk procedures.
Robotics controllers from opposite sides of the world maneuvered a pair of robotic arms independently of each other today. Canada’s 57.7-foot-long robotic arm, nicknamed Canadarm2, was remotely positioned today by engineers in Houston in advance of next week’s spacewalk activities. Controllers from the Japan Aerospace Exploration Agency remotely operated the Kibo laboratory module’s robotic arm to prepare for the deployment of small satellites Friday morning.
Robotics controllers and Expedition 55 crew members are getting ready for the departure of the SpaceX Dragon resupply ship next week. The commercial space freighter will leave the International Space Station and splashdown in the Pacific Ocean on Wednesday loaded with cargo for retrieval and analysis.
Flight Engineer Ricky Arnold powered up command and communications gear today that will aid the crew when Dragon departs the station on Wednesday at 10:22 a.m. EDT. NASA TV will begin its live coverage of the departure activities at 10 a.m. Dragon will splashdown in the Pacific Ocean about six hours later to be recovered by SpaceX and NASA personnel. The splashdown off the southern coast of California will not be seen on NASA TV.
The Canadarm2 will be remotely maneuvered today to grapple Dragon today while it is still attached to the Harmony module. In the meantime the 57.7-foot-long robotic arm and its fine-tuned robotic hand, also known as Dextre, are completing the installation of an external materials exposure experiment outside of Japan’s Kibo laboratory module.
Astronauts Drew Feustel and Scott Tingle are still packing Dragon today with a variety of cargo including space station hardware and research samples. The STaARS-1 experiment facility has completed a year of operations at the station and is being readied for its return aboard Dragon next week. The research device supported observations of living systems exposed to simulated gravity such as Earth, the Moon and Mars. Feustel also stowed faulty life support gear in Dragon for refurbishment back on Earth. | <urn:uuid:ebd2ac3f-e02c-4469-a30a-8f26587369b9> | 3 | 3,152 | Content Listing | Science & Tech. | 37.205376 | 95,610,524 |
The singleton pattern is one of the best-known patterns in software engineering. Essentially, a singleton is a class which only allows a single instance of itself to be created, and usually gives simple access to that instance. Most commonly, singletons don’t allow any parameters to be specified when creating the instance – as otherwise, the second request for an instance but with a different parameter could be problematic! (If the same instance should be accessed for all requests with the same parameter, the factory pattern is more appropriate.) This article deals only with the situation where no parameters are required. Typically a requirement of singletons is that they are created lazily – i.e. that the instance isn’t created until it is first needed.
There are various ways of implementing the singleton pattern in C#. I shall present them here in reverse order of elegance, starting with the most commonly seen, which is not thread-safe, and working up to an entirely lazily-loaded, thread-safe, simple and highly performant version.
All these implementations share four common characteristics, however:
- A single constructor, which is private and parameterless. This prevents other classes from instantiating it (which would be a violation of the pattern). Note that it also prevents subclassing – if a singleton can be subclassed once, it can be subclassed twice, and if each of those subclasses can create an instance, the pattern is violated. The factory pattern can be used if you need a single instance of a base type, but the exact type isn’t known until runtime.
- A static variable which holds a reference to the single created instance, if any.
- A public static means of getting the reference to the single created instance, creating one if necessary.
Let’s start with a quick example of a singleton class.
- Deependra is a Senior Developer with Microsoft technologies, currently working with Opteamix India business private solution. In My Free time, I write blogs and make technical youtube videos. Having the good understanding of Service-oriented architect, Designing microservices using domain driven design. | <urn:uuid:f5616ac8-7115-42d4-8557-2b64634c73a7> | 3.296875 | 441 | Personal Blog | Software Dev. | 35.670605 | 95,610,551 |
C4 carbon fixation
C4 carbon fixation or the Hatch-Slack pathway is a photosynthetic process in some plants. It is the first step in extracting carbon from carbon dioxide to be able to use it in sugar and other biomolecules. It is one of three known processes for carbon fixation. The C4 in one of the names refers to the 4-carbon molecule that is the first product of this type of carbon fixation.
C4 fixation is an elaboration of the more common C3 carbon fixation and is believed to have evolved more recently. C4 overcomes the tendency of the enzyme RuBisCO to wastefully fix oxygen rather than carbon dioxide in the process of photorespiration. This is achieved by ensuring that RuBisCO works in an environment where there is a lot of carbon dioxide and very little oxygen. In the mesophyll cells, the enzyme PEP carboxylase fixes CO2 onto phosphoenolpyruvate (PEP, a 3-carbon molecule) to form oxaloacetate (OAA, a 4-carbon molecule), which is then converted to malate. The malate (or, in some species, aspartate) is shuttled from the mesophyll cells to the bundle-sheath cells, where CO2 is released by decarboxylation and delivered to RuBisCO. These additional steps, however, require more energy in the form of ATP. Using this extra energy, C4 plants are able to fix carbon more efficiently under drought, high temperatures, and limitation of nitrogen or CO2. Since the more common C3 pathway does not require this extra energy, it is more efficient under other conditions.
The first experiments indicating that some plants do not use C3 carbon fixation but instead produce malate and aspartate in the first step of carbon fixation were done in the 1950s and early 1960s by Hugo P. Kortschak and Yuri Karpilov. The C4 pathway was elucidated by Marshall Davidson Hatch and C. R. Slack, in Australia, in 1966; it is sometimes called the Hatch-Slack pathway.
In C3 plants, the first step in the light-independent reactions of photosynthesis involves the fixation of CO2 by the enzyme RuBisCO into 3-phosphoglycerate. However, due to the dual carboxylase and oxygenase activity of RuBisCO, some part of the substrate is oxidized rather than carboxylated, resulting in loss of substrate and consumption of energy, in what is known as photorespiration. In order to bypass the photorespiration pathway, C4 plants have developed a mechanism to efficiently deliver CO2 to the RuBisCO enzyme. They utilize their specific leaf anatomy where chloroplasts exist not only in the mesophyll cells in the outer part of their leaves but in the bundle sheath cells as well. Instead of direct fixation to RuBisCO in the Calvin cycle, CO2 is incorporated into a 4-carbon organic acid, which has the ability to regenerate CO2 in the chloroplasts of the bundle sheath cells. Bundle sheath cells can then utilize this CO2 to generate carbohydrates by the conventional C3 pathway.
The first step in the pathway is the conversion of pyruvate to phosphoenolpyruvate (PEP), by the enzyme pyruvate orthophosphate dikinase. This reaction requires inorganic phosphate and ATP plus pyruvate, producing phosphoenolpyruvate, AMP, and inorganic pyrophosphate (PPi). The next step is the fixation of CO2 into oxaloacetate by the enzyme PEP carboxylase. Both of these steps occur in the mesophyll cells:
- pyruvate + Pi + ATP → PEP + AMP + PPi
- PEP + CO2 → oxaloacetate
PEP carboxylase has a lower Km for HCO3− (and, hence, a higher affinity) than RuBisCO. Furthermore, O2 is a very poor substrate for this enzyme. Thus, at relatively low concentrations of CO2, most CO2 will be fixed by this pathway.
The product is usually converted to malate, a simple organic compound, which is transported to the bundle-sheath cells surrounding a nearby vein. Here, it is decarboxylated to produce CO2 and pyruvate. The CO2 now enters the Calvin cycle and the pyruvate is transported back to the mesophyll cell.
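The carbon bookkeeping of one turn of this shuttle can be checked with a short sketch; the molecule names and carbon counts are those given in the text, and the script is an illustrative balance check, not a kinetic model:

```python
# Carbon atoms per molecule, as described in the pathway above.
carbons = {"pyruvate": 3, "PEP": 3, "oxaloacetate": 4, "malate": 4, "CO2": 1}

# Mesophyll cell: PEP + CO2 -> oxaloacetate (catalysed by PEP carboxylase)
assert carbons["PEP"] + carbons["CO2"] == carbons["oxaloacetate"]

# Bundle-sheath cell: malate -> pyruvate + CO2 (decarboxylation)
assert carbons["malate"] == carbons["pyruvate"] + carbons["CO2"]

print("carbon balance holds for one shuttle cycle")
```

Every carbon entering the shuttle as CO2 in the mesophyll leaves it as CO2 in the bundle sheath, so the shuttle concentrates CO2 without consuming carbon.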
Since every CO2 molecule has to be fixed twice, first by the 4-carbon organic acid and then by RuBisCO, the C4 pathway uses more energy than the C3 pathway. The C3 pathway requires 18 molecules of ATP for the synthesis of one molecule of glucose, whereas the C4 pathway requires 30. This energy debt is more than repaid by avoiding the loss of more than half of photosynthetic carbon to photorespiration, as occurs in some tropical plants, making C4 an adaptive mechanism for minimizing that loss.
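The ATP figures quoted above can be reduced to a per-CO2 overhead; the numbers are those stated in the text, and attributing the whole difference to the shuttle is an illustrative assumption:

```python
# ATP cost per glucose molecule, as quoted in the text.
ATP_C3 = 18  # Calvin cycle alone (C3 pathway)
ATP_C4 = 30  # Calvin cycle plus the C4 CO2-concentrating shuttle

CO2_PER_GLUCOSE = 6  # six CO2 molecules are fixed per glucose synthesized

# Extra ATP the C4 shuttle spends per CO2 delivered to RuBisCO:
extra_per_co2 = (ATP_C4 - ATP_C3) / CO2_PER_GLUCOSE
print(extra_per_co2)  # 2.0
```

The result, 2 ATP equivalents per CO2, is consistent with the PEP-regeneration step above, in which pyruvate orthophosphate dikinase converts ATP to AMP and pyrophosphate.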
There are several variants of this pathway:
- The 4-carbon acid transported from mesophyll cells may be malate, as above, or aspartate.
- The 3-carbon acid transported back from bundle-sheath cells may be pyruvate, as above, or alanine.
- The enzyme that catalyses decarboxylation in bundle-sheath cells differs. In maize and sugarcane, the enzyme is NADP-malic enzyme; in millet, it is NAD-malic enzyme; and, in Panicum maximum, it is PEP carboxykinase.
C4 Kranz leaf anatomy
The C4 plants often possess a characteristic leaf anatomy called Kranz anatomy, from the German word for wreath. Their vascular bundles are surrounded by two rings of cells: the inner ring, called bundle sheath cells, contains starch-rich chloroplasts lacking grana, which differ from the chloroplasts of the mesophyll cells that form the outer ring. Hence, the chloroplasts are called dimorphic. The primary function of Kranz anatomy is to provide a site in which CO2 can be concentrated around RuBisCO, thereby avoiding photorespiration. In order to maintain a significantly higher CO2 concentration in the bundle sheath compared to the mesophyll, the boundary layer of the Kranz has a low conductance to CO2, a property that may be enhanced by the presence of suberin. The carbon concentration mechanism in C4 plants distinguishes their isotopic signature from other photosynthetic organisms.
Although most C4 plants exhibit kranz anatomy, there are, however, a few species that operate a limited C4 cycle without any distinct bundle sheath tissue. Suaeda aralocaspica, Bienertia cycloptera, Bienertia sinuspersici and Bienertia kavirense (all chenopods) are terrestrial plants that inhabit dry, salty depressions in the deserts of the Middle East. These plants have been shown to operate single-cell C4 CO2-concentrating mechanisms, which are unique among the known C4 mechanisms. Although the cytology of both genera differs slightly, the basic principle is that fluid-filled vacuoles are employed to divide the cell into two separate areas. Carboxylation enzymes in the cytosol can, therefore, be kept separate from decarboxylase enzymes and RuBisCO in the chloroplasts, and a diffusive barrier can be established between the chloroplasts (which contain RuBisCO) and the cytosol. This enables a bundle-sheath-type area and a mesophyll-type area to be established within a single cell. Although this does allow a limited C4 cycle to operate, it is relatively inefficient, with the occurrence of much leakage of CO2 from around RuBisCO. There is also evidence for the exhibiting of inducible C4 photosynthesis by non-kranz aquatic macrophyte Hydrilla verticillata under warm conditions, although the mechanism by which CO2 leakage from around RuBisCO is minimised is currently uncertain.
The evolution and advantages of the C4 pathway
C4 plants have a competitive advantage over plants possessing the more common C3 carbon fixation pathway under conditions of drought, high temperatures, and nitrogen or CO2 limitation. When grown in the same environment, at 30 °C, C3 grasses lose approximately 833 molecules of water per CO2 molecule that is fixed, whereas C4 grasses lose only 277. This increased water use efficiency of C4 grasses means that soil moisture is conserved, allowing them to grow for longer in arid environments.
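The water-use figures quoted above imply a roughly threefold efficiency advantage; the sketch below simply restates the numbers given in the text (water molecules transpired per CO2 fixed at 30 °C):

```python
# Water molecules lost per CO2 molecule fixed at 30 °C, as quoted in the text.
water_per_co2_c3 = 833  # C3 grasses
water_per_co2_c4 = 277  # C4 grasses

# Relative water-use efficiency of C4 versus C3 grasses:
ratio = water_per_co2_c3 / water_per_co2_c4
print(f"C4 grasses fix about {ratio:.1f}x more carbon per unit of water")
```

This roughly 3x advantage is what allows C4 grasses to conserve soil moisture and keep growing longer in arid environments.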
C4 carbon fixation has evolved on up to 61 independent occasions in 19 different families of plants, making it a prime example of convergent evolution. This convergence may have been facilitated by the fact that many potential evolutionary pathways to a C4 phenotype exist, many of which involve initial evolutionary steps not directly related to photosynthesis. C4 plants arose during the Oligocene (precisely when is difficult to determine) and did not become ecologically significant until the Miocene. C4 metabolism originated when grasses migrated from the shady forest undercanopy to more open environments, where the high sunlight gave it an advantage over the C3 pathway. Drought was not necessary for its innovation; rather, the increased resistance to water stress was a by-product of the pathway and allowed C4 plants to more readily colonise arid environments.
Today, C4 plants represent about 5% of Earth's plant biomass and 3% of its known plant species. Despite this scarcity, they account for about 23% of terrestrial carbon fixation. Increasing the proportion of C4 plants on Earth could assist biosequestration of CO2 and represent an important climate change avoidance strategy. Present-day C4 plants are concentrated in the tropics and subtropics (below latitudes of 45°), where the high air temperature contributes to higher possible levels of oxygenase activity by RuBisCO, which increases rates of photorespiration in C3 plants.
Plants that use C4 carbon fixation
About 8,100 plant species use C4 carbon fixation, which represents about 3% of all terrestrial species of plants. All these 8,100 species are angiosperms. C4 carbon fixation is more common in monocots than in dicots, with 40% of monocots using the C4 pathway, compared with only 4.5% of dicots. Despite this, only three families of monocots utilise C4 carbon fixation compared to 15 dicot families. Of the monocot clades containing C4 plants, the grass (Poaceae) species use the C4 photosynthetic pathway most. Forty-six percent of grasses are C4 and together account for 61% of C4 species. These include the food crops maize, sugar cane, millet, and sorghum. Of the dicot clades containing C4 species, the order Caryophyllales contains the most species. Of the families in the Caryophyllales, the Chenopodiaceae use C4 carbon fixation the most, with 550 out of 1,400 species using it. About 250 of the 1,000 species of the related Amaranthaceae also use C4.
Members of the sedge family Cyperaceae, and members of numerous families of eudicots, including Asteraceae (the daisy family), Brassicaceae (the cabbage family), and Euphorbiaceae (the spurge family), also use C4.
Trees which use C4 include Paulownia.
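As a rough consistency check of the figures quoted above (an illustrative calculation; the arithmetic is ours, not the source's): if 61% of the roughly 8,100 C4 species are grasses, and those grasses make up 46% of all grass species, then the grass family should contain on the order of 10,700 species, broadly in line with the usual size estimates for Poaceae.

```python
c4_species = 8_100          # approximate number of C4 plant species
grass_share_of_c4 = 0.61    # 61% of C4 species are grasses
c4_share_of_grasses = 0.46  # 46% of grass species are C4

c4_grasses = c4_species * grass_share_of_c4        # C4 grass species implied
total_grasses = c4_grasses / c4_share_of_grasses   # total grass species implied

print(round(c4_grasses))     # 4941
print(round(total_grasses))  # 10741
```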
Converting C3 plants to C4
Given the advantages of C4, a group of scientists from institutions around the world are working on the C4 Rice Project to turn rice, a C3 plant, into a C4 plant by studying the C4 plants maize and Brachypodium. As rice is the world's most important human food—it is the staple food for more than half the planet—having rice that is more efficient at converting sunlight into grain could have significant global benefits towards improving food security. The team claim C4 rice could produce up to 50% more grain—and be able to do it with less water and nutrients.
The researchers have already identified genes needed for C4 photosynthesis in rice and are now looking towards developing a prototype C4 rice plant. In 2012, the Government of the United Kingdom along with the Bill & Melinda Gates Foundation provided $14 million over 3 years towards the C4 Rice Project at the International Rice Research Institute.
Neuronal Cell Cultures Kept on the Straight and Narrow
News May 29, 2006
An improved technique for culturing cells, developed at the National Institute of Standards and Technology (NIST), may enable fundamental insights into the behavior of neuronal cells.
Culturing particular types of cells in isolation is a basic technique for measuring how they respond to various stimuli, testing new drugs, and similar cell biology tasks.
Neuronal cells, which make up the central nervous system in mammals, are both particularly important and particularly hard to culture.
They are highly specialized and choosy about their environment: normally they only survive and develop when cultured on a layer of non-neuronal "glial" cells that provide cellular support services.
There are usually far more glial cells than neuronal cells, which makes it hard to image neuronal cells and measure their activity against the glial background.
In a paper in the American Chemical Society's journal Langmuir, NIST researchers detail a microfluidics technique to culture neuronal cells in relative isolation on a variety of cell-culture surfaces, and to pattern the cells on the surface to study the effects of geometry on cell development.
The trick is to mask the substrate with multiple alternating layers of positively and negatively charged polymers, building up a so-called polyelectrolyte multilayer (PEM).
Properly selected, the PEM coating convinces the neuronal cells that they're in a good environment to attach, develop and produce the characteristic neuron projections and synapses, all without a glial layer.
Even better, according to the NIST team, microfluidic channels can be used to lay down the PEM coating in patterned lines just a few micrometers wide.
Neuronal cells will largely confine themselves to the pattern, enabling a variety of cell-geometry experiments, such as measuring the maximum gap between lines that can be bridged by neural axons and dendrites.
The research is part of a multidisciplinary NIST program to develop biochemical measurement technologies based on microfluidics.
In physics, in particular in special relativity and general relativity, a four-velocity is a four-vector in four-dimensional spacetime[nb 1] that represents the relativistic counterpart of velocity, which is a three-dimensional vector in space.
Physical events correspond to mathematical points in time and space, the set of all of them together forming a mathematical model of physical four-dimensional spacetime. The history of an object traces a curve in spacetime, called its world line. If the object is massive, so that its speed is less than the speed of light, the world line may be parametrized by the proper time of the object. The four-velocity is the rate of change of four-position with respect to the proper time along the curve. The velocity, in contrast, is the rate of change of the position in (three-dimensional) space of the object, as seen by an observer, with respect to the observer's time.
The value of the magnitude of an object's four-velocity, i.e. the quantity obtained by applying the metric tensor g to the four-velocity u, that is ||u||2 = u ⋅ u = gμνuνuμ, is always equal to ±c2, where c is the speed of light. Whether the plus or minus sign applies depends on the choice of metric signature. For an object at rest its four-velocity is parallel to the direction of the time coordinate with u0 = c. A four-velocity is thus the normalized future-directed timelike tangent vector to a world line, and is a contravariant vector. Though it is a vector, addition of two four-velocities does not yield a four-velocity: the space of four-velocities is not itself a vector space.[nb 2]
The path of an object in three-dimensional space (in an inertial frame) may be expressed in terms of three spatial coordinate functions xi(t) of time t, where i is an index which takes values 1, 2, 3.
The three coordinates form the 3d position vector, written as a column vector x(t) = (x1(t), x2(t), x3(t))T.
The components of the velocity u (tangent to the curve) at any point on the world line are u = dx/dt.
Each component is simply written ui = dxi/dt.
Theory of relativity
In Einstein's theory of relativity, the path of an object moving relative to a particular frame of reference is defined by four coordinate functions xμ(τ), where μ is a spacetime index which takes the value 0 for the timelike component, and 1, 2, 3 for the spacelike coordinates. The zeroth component is defined as the time coordinate multiplied by c, that is, x0 = ct.
Each function depends on one parameter τ called its proper time. As a column vector, x(τ) = (x0(τ), x1(τ), x2(τ), x3(τ))T.
From time dilation, the differentials in coordinate time t and proper time τ are related by dt = γ(u) dτ,
where the Lorentz factor, γ(u) = 1 / √(1 − u2/c2),
is a function of the Euclidean norm u = ||u|| of the 3d velocity vector u.
Definition of the four-velocity
The four-velocity is the tangent four-vector of a timelike world line. The four-velocity U at any point of the world line is defined as U = dx/dτ,
where x is the four-position and τ is the proper time.
The four-velocity defined here using the proper time of an object does not exist for world lines for objects such as photons travelling at the speed of light; nor is it defined for tachyonic world lines, where the tangent vector is spacelike.
Components of the four-velocity
The coordinate time t and the time coordinate x0 are related by x0 = ct.
Taking the derivative of this with respect to the proper time τ, we find the Uμ velocity component for μ = 0: U0 = dx0/dτ = c dt/dτ = γ(u)c,
and, taking the derivative with respect to proper time for the other 3 components, we get the Uμ velocity component for μ = 1, 2, 3: Ui = dxi/dτ = (dxi/dt)(dt/dτ) = γ(u)ui,
where we have used the chain rule and the relationships dxi/dt = ui and dt/dτ = γ(u).
Thus, we find for the four-velocity U = γ(u)(c, u)T.
Written in standard four-vector notation this is U = (U0, U1, U2, U3) = (γc, γu1, γu2, γu3),
where γc is the temporal component and (γu1, γu2, γu3) is the spatial component.
In terms of the synchronized clocks and rulers associated with a particular slice of flat spacetime, the three spacelike components of four-velocity define a traveling object's proper velocity i.e. the rate at which distance is covered in the reference map frame per unit proper time elapsed on clocks traveling with the object.
Unlike most other four-vectors, the four-velocity has only 3 independent components instead of 4. The factor γ is a function of the three-dimensional velocity u.
When certain Lorentz scalars are multiplied by the four-velocity, one then gets new physical four-vectors that have 4 independent components. For example:
- Four-momentum: P = mU, where m is the (rest) mass
- Four-current density: J = ρ0U, where ρ0 is the (proper) charge density
Effectively, the factor γ combines with the Lorentz scalar term to make the 4th independent component.
Using the differential of the four-position, the magnitude of the four-velocity can be obtained: ||U||2 = gμνUμUν;
in short, the magnitude of the four-velocity for any object is always a fixed constant: ||U||2 = ±c2.
The norm can also be written as ||U||2 = ±(c2(dt/dτ)2 − u2(dt/dτ)2);
setting this equal to ±c2 and solving for dt/dτ reduces to the definition of the Lorentz factor.
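As a quick numerical sanity check (a sketch, not part of the article): for an object moving at u = 0.6c along the x-axis, the components derived above give U = (γc, γu, 0, 0) with γ = 1.25, and the Minkowski norm with signature (+, −, −, −) comes out to c2.

```python
import math

c = 299_792_458.0   # speed of light, m/s
u = 0.6 * c         # 3-velocity along the x-axis
gamma = 1.0 / math.sqrt(1.0 - (u / c) ** 2)  # Lorentz factor

# Four-velocity components U^mu = (gamma*c, gamma*u, 0, 0)
U = (gamma * c, gamma * u, 0.0, 0.0)

# Minkowski norm with signature (+, -, -, -)
norm2 = U[0] ** 2 - U[1] ** 2 - U[2] ** 2 - U[3] ** 2

print(round(gamma, 6))                             # 1.25
print(math.isclose(norm2, c ** 2, rel_tol=1e-12))  # True
```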
- ^ Technically, the four-vector should be thought of as residing in the tangent space of a point in spacetime, spacetime itself being modeled as a smooth manifold. This distinction is significant in general relativity.
- ^ The set of four-velocities is a subset of the tangent space (which is a vector space) at an event. The label four-vector stems from the behavior under Lorentz transformations, namely under which particular representation they transform.
- Einstein, Albert; translated by Robert W. Lawson (1920). Relativity: The Special and General Theory. New York: Original: Henry Holt, 1920; Reprinted: Prometheus Books, 1995.
- Rindler, Wolfgang (1991). Introduction to Special Relativity (2nd). Oxford: Oxford University Press. ISBN 0-19-853952-5.
- ^ McComb, W. D. (1999). Dynamics and relativity. Oxford [etc.]: Oxford University Press. p. 230. ISBN 0-19-850112-9.
Berkeley Lab researchers link rising CO2 levels from fossil fuels to an upward trend in radiative forcing at two locations
Scientists have observed an increase in carbon dioxide’s greenhouse effect at the Earth’s surface for the first time. The researchers, led by scientists from the US Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), measured atmospheric carbon dioxide’s increasing capacity to absorb thermal radiation emitted from the Earth’s surface over an eleven-year period at two locations in North America. They attributed this upward trend to rising CO2 levels from fossil fuel emissions.
Credit: Jonathan Gero
The scientists used incredibly precise spectroscopic instruments at two sites operated by the Department of Energy’s Atmospheric Radiation Measurement (ARM) Climate Research Facility. This research site is on the North Slope of Alaska near the town of Barrow. They also collected data from a site in Oklahoma.
The influence of atmospheric CO2 on the balance between incoming energy from the Sun and outgoing heat from the Earth (also called the planet’s energy balance) is well established. But this effect has not been experimentally confirmed outside the laboratory until now. The research is reported Wednesday, Feb. 25, in the advance online publication of the journal Nature.
The results agree with theoretical predictions of the greenhouse effect due to human activity. The research also provides further confirmation that the calculations used in today’s climate models are on track when it comes to representing the impact of CO2.
The scientists measured atmospheric carbon dioxide’s contribution to radiative forcing at two sites, one in Oklahoma and one on the North Slope of Alaska, from 2000 to the end of 2010. Radiative forcing is a measure of how much the planet’s energy balance is perturbed by atmospheric changes. Positive radiative forcing occurs when the Earth absorbs more energy from solar radiation than it emits as thermal radiation back to space. It can be measured at the Earth’s surface or high in the atmosphere. In this research, the scientists focused on the surface.
They found that CO2 was responsible for a significant uptick in radiative forcing at both locations, about two-tenths of a Watt per square meter per decade. They linked this trend to the 22 parts-per-million increase in atmospheric CO2 between 2000 and 2010. Much of this CO2 is from the burning of fossil fuels, according to a modeling system that tracks CO2 sources around the world.
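For a back-of-the-envelope comparison (an illustrative estimate, not part of the study, and assuming a year-2000 baseline of roughly 370 ppm): the widely used simplified expression for top-of-atmosphere CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W per square meter, applied to the reported 22 ppm rise gives a value of the same order as the measured surface trend of about 0.2 W per square meter per decade. The two quantities are not identical (top-of-atmosphere versus surface forcing), but the orders of magnitude agree.

```python
import math

C0 = 370.0      # assumed CO2 concentration in 2000, ppm (illustrative baseline)
C = C0 + 22.0   # after the 22 ppm rise reported for 2000-2010

# Simplified top-of-atmosphere forcing: Delta F = 5.35 * ln(C / C0)  [W/m^2]
delta_F = 5.35 * math.log(C / C0)

print(round(delta_F, 2))  # 0.31
```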
“We see, for the first time in the field, the amplification of the greenhouse effect because there’s more CO2 in the atmosphere to absorb what the Earth emits in response to incoming solar radiation,” says Daniel Feldman, a scientist in Berkeley Lab’s Earth Sciences Division and lead author of the Nature paper.
“Numerous studies show rising atmospheric CO2 concentrations, but our study provides the critical link between those concentrations and the addition of energy to the system, or the greenhouse effect,” Feldman adds.
He conducted the research with fellow Berkeley Lab scientists Bill Collins and Margaret Torn, as well as Jonathan Gero of the University of Wisconsin-Madison, Timothy Shippert of Pacific Northwest National Laboratory, and Eli Mlawer of Atmospheric and Environmental Research.
The scientists used incredibly precise spectroscopic instruments operated by the Atmospheric Radiation Measurement (ARM) Climate Research Facility, a DOE Office of Science User Facility. These instruments, located at ARM research sites in Oklahoma and Alaska, measure thermal infrared energy that travels down through the atmosphere to the surface. They can detect the unique spectral signature of infrared energy from CO2.
Other instruments at the two locations detect the unique signatures of phenomena that can also emit infrared energy, such as clouds and water vapor. The combination of these measurements enabled the scientists to isolate the signals attributed solely to CO2.
“We measured radiation in the form of infrared energy. Then we controlled for other factors that would impact our measurements, such as a weather system moving through the area,” says Feldman.
The result is two time-series from two very different locations. Each series spans from 2000 to the end of 2010, and includes 3300 measurements from Alaska and 8300 measurements from Oklahoma obtained on a near-daily basis.
Both series showed the same trend: atmospheric CO2 emitted an increasing amount of infrared energy, to the tune of 0.2 Watts per square meter per decade. This increase is about ten percent of the trend from all sources of infrared energy such as clouds and water vapor.
Based on an analysis of data from the National Oceanic and Atmospheric Administration’s CarbonTracker system, the scientists linked this upswing in CO2-attributed radiative forcing to fossil fuel emissions and fires.
The measurements also enabled the scientists to detect, for the first time, the influence of photosynthesis on the balance of energy at the surface. They found that CO2-attributed radiative forcing dipped in the spring as flourishing photosynthetic activity pulled more of the greenhouse gas from the air.
The scientists used the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility located at Berkeley Lab, to conduct some of the research.
The research was supported by the Department of Energy’s Office of Science.
Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit www.lbl.gov
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at science.energy.gov/
Dan Krotz | newswise
A Newbie's Guide to Distances in Space
Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.
When we talk about distance in astronomy, we are usually dealing with very, very large numbers, far too large to describe in terms of miles or kilometres. When we realised just how big space was, we needed some new units. In modern astronomy, we often use the Astronomical Unit or the light year.
The Astronomical Unit is the average distance from the Earth to the Sun. We say average because the Earth's orbit is elliptical, varying from a maximum (aphelion) to a minimum (perihelion) and back again once a year.
Due to this variation, the Astronomical Unit is now defined as exactly 149,597,870,700 metres (about 150 million kilometres, or 93 million miles). You can see why we don't express this in kilometres! For objects in the solar system, orbits are typically given in terms of the Astronomical Unit (AU). Earth is at 1 AU, Venus at 0.72 AU, Jupiter at 5.2 AU. These values are much easier to work with. If you want to convert AU to kilometres or miles, simply multiply the AU value by the Earth's average orbital radius.
For distances outside of the solar system, the light year is often used. A light year is defined as the distance light travels in a year. Since the speed of light is constant, this distance is also constant. Light travels at around 300,000 kilometres per second, so these numbers get very big, very fast. In one year, light travels about 9,500,000,000,000 kilometres (roughly 10 trillion km), or 63,241 AU.
For large numbers like this, we often use scientific notation: we write a light year as 9.5 × 10^12 km. To form it, we simply move the decimal place to the left until only the leading digit remains before it, and count the number of times we moved the decimal place. Even using light years as a measure of distance we still deal with very large numbers. The Andromeda galaxy is the nearest major galaxy to the Milky Way, and at a distance of 2.5 million light years, it's quite a bit further than walking down the road to the chemist. The furthest observed galaxy is EGS8p7, which is more than 13.2 billion light years away. Because we know how far away it is, and that the speed of light is constant, we know that the light from that galaxy has travelled for 13.2 billion years to arrive here. We are effectively looking back in time, to a point only a few hundred million years after the big bang. How cool is that?
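The conversions above are easy to reproduce (a small sketch; the Julian-year definition of the light year and the IAU value of the AU are assumed):

```python
AU_KM = 149_597_870.7            # astronomical unit in km (IAU 2012 definition)
C_KM_S = 299_792.458             # speed of light, km/s
JULIAN_YEAR_S = 365.25 * 86_400  # seconds in a Julian year

light_year_km = C_KM_S * JULIAN_YEAR_S   # distance light travels in one year
light_year_au = light_year_km / AU_KM    # the same distance expressed in AU

print(f"{light_year_km:.3e} km")  # 9.461e+12 km
print(round(light_year_au))       # 63241
```

Reassuringly, both numbers match the figures quoted in the text: about 9.5 trillion km and 63,241 AU per light year.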
Last updated on: Wednesday 24th January 2018
Chemical and isotopic studies including analyses of noble gases were comprehensively conducted on the groundwater of the entire Kii Peninsula, which is located in the fore-arc region of southwest Japan. Groundwater of Na–Cl–HCO3, Na–HCO3–Cl, and Na–Cl types was shown to be distributed across the whole area. Groundwater in the inland central part of the peninsula shows relatively low salinity, whereas groundwater from the area along the ENE-trending Median Tectonic Line (MTL), on the north side of the peninsula, shows high salinity (up to 18,800 mg/L of Cl−) and the presence of unusual heavy oxygen isotopes. This trend is similar to that documented in saline waters from the Arima region (the so-called “Arima-type thermal water”). High 3He/4He ratios relative to the atmospheric value (up to 6.7 Ra) were recorded throughout the Kii Peninsula, covering a wider area than documented previously. The saline groundwater is also strongly depleted in 20Ne and heavy noble gases.
From the wide distribution of high 3He/4He values and the associated 20Ne and Cl− concentrations, we infer that aqueous fluids derived from dehydration of the subducting slab are present at depth beneath almost the entire Kii Peninsula. These aqueous fluids may ascend along the major north-dipping boundary faults. The isotopic composition of groundwater from the southern part of the peninsula suggests that the contribution from these dehydration-derived fluids is relatively small in this region. However, volatile components (e.g., noble gases and CO2) in the groundwater of this area may originate from the dehydration-derived fluids. Upwelling of Arima-type thermal water of the Na–Cl–HCO3 type is expected to undergo a phase separation of volatile species due to decompression as the fluid ascends. The variety of water types documented may be due to this water–gas separation and the subsequent incorporation of gaseous species into shallow meteoric groundwater.
The observed high 3He/4He ratios in the absence of a mantle wedge below the southern part of the Kii Peninsula may reflect the oblique ascent of these fluids along north-dipping boundary faults.
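For context on the helium numbers (an illustrative conversion; the commonly quoted atmospheric ratio of about 1.39 × 10⁻⁶ is an assumption here): Ra denotes the atmospheric 3He/4He ratio, so the reported maximum of 6.7 Ra corresponds to an absolute ratio near 9 × 10⁻⁶, far above typical crustal values (of order 0.02 Ra) and approaching the mid-ocean-ridge mantle range of roughly 8 Ra.

```python
RA_ATM = 1.39e-6  # atmospheric 3He/4He ratio (commonly quoted value; an assumption)

sample_r_over_ra = 6.7                      # maximum R/Ra reported for the Kii Peninsula
absolute_ratio = sample_r_over_ra * RA_ATM  # absolute 3He/4He ratio of the sample

print(f"{absolute_ratio:.2e}")  # 9.31e-06
```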
Geochimica et Cosmochimica Acta – Elsevier
Published: Jun 1, 2016
Researchers from Augsburg, Oxford, and Nanjing report in Nature Communications on a neutron experiment exposing experimental signatures of a low-temperature state predicted 44 years ago
Proposed in 1973, Anderson's resonating valence bond model remains a paradigm for the microscopic description of quantum spin liquids in frustrated magnets. It is of fundamental interest as a building unit for more complex quantum-mechanically entangled states that could be used in quantum computing.
Researchers from the Chair of Experimental Physics VI/EKM report in Nature Communications the first experimental signatures of excitations from this fundamental state, exposed by a neutron-scattering study performed in collaboration with the Rutherford Appleton Laboratory in Oxford and Renmin University of China.
Liquids entail haphazardly moving particles that can be correlated on the short-range scale, but lack any long-range order. In contrast to gases, liquids are only weakly compressible, because separations between their particles are small, and inter-particle interactions strong. A liquid-like state can also form in magnets, where electron spins act as individual particles.
Neighboring spins in a spin liquid strongly interact with each other, but evade long-range order, unlike, for example, in ferromagnets, where parallel alignment of spins throughout the crystal generates macroscopic magnetization that can drive rotation of the motor of an electric car or interact with Earth's magnetic field in a compass.
Spins are pairwise correlated, but remain disordered
Back in 1973 American physicist and eventual Nobel prize winner Philip W. Anderson contemplated a model, where spins are arranged on a triangular plane, and only adjacent spins (nearest neighbors) interact. These interactions trigger spins to be mutually antiparallel, but a global antiparallel (antiferromagnetic) configuration is prevented by the triangular arrangement.
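The frustration of a single triangle can be made concrete with a small sketch of our own (not from the press release): enumerating the eight configurations of three Ising spins on a triangle shows that no configuration can make all three neighboring pairs antiparallel at once.

```python
from itertools import product

# Minimal illustration of the geometric frustration described above:
# three Ising spins on a triangle, each bond preferring antiparallel neighbors.
def antiparallel_bonds(spins):
    """Count how many of the three bonds have antiparallel spins."""
    pairs = [(0, 1), (1, 2), (2, 0)]
    return sum(1 for i, j in pairs if spins[i] != spins[j])

best = max(antiparallel_bonds(s) for s in product([+1, -1], repeat=3))
print(best)  # prints 2: at most two of the three bonds can be antiparallel
```

However the spins are chosen, one bond is always left unsatisfied, which is what prevents a simple global antiparallel configuration on the triangular arrangement.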
The quantum-mechanical description proposed by Anderson is based on the idea of pair-wise correlations, where different pairs form, as shown in the Figure. In each pair, spins are opposite to each other forming resonating valence bonds (RVBs), the name used to emphasize close resemblance with chemical bonds between atoms in molecules and crystals.
The RVB state is quantum-mechanically entangled; it cannot be represented by a simple combination of individual spins. Such entanglement opens new possibilities for high-performance calculations in a quantum computer. Despite far-reaching implications for present-day theories, the validity of Anderson's model of the RVB state was questioned in the meantime, and signatures of the RVB state were nowhere to be seen experimentally.
New substance with the triangular spin geometry
"The formation of Anderson's RVB state requires magnetic frustration, the presence of competing interactions between the spins" explains Dr. Alexander Tsirlin, the leader of the young research group at the Center for Electronic Correlations and Magnetism at the Institute of Physics in Augsburg.
This is made possible by a new substance, YbMgGaO4, which was prepared and investigated in collaboration with Renmin University of China and the Rutherford Appleton Lab in Oxford, UK. The compound features a regular triangular arrangement of magnetic moments, which are localized on the ytterbium atoms (see the Figure).
Earlier work by the team confirmed that even at temperatures a few hundredths of a degree above absolute zero, the spins remain dynamic in the form of a spin liquid evading long-range order, a pre-condition for building the long-sought RVB state.
Magnetic excitations follow predictions of Anderson's theory
Neutrons scatter from crystals, changing direction and energy, and thereby provide researchers with a sensitive probe of correlations between the spins. Neutron-scattering experiments on YbMgGaO4 reveal two distinct regimes. At higher transfer energies, where neutrons trigger high-energy excitations, experimental observations are in perfect agreement with Anderson's RVB model.
"After several decades, signatures of the nearest-neighbor RVB state have been finally observed", explains Prof. Dr. Philipp Gegenwart, head of the Chair of Experimental Physics VI / EKM. Less clear remains the experimental response at low energies, where Anderson's RVB picture fails. This part of the spectrum appears to be intertwined with magnetic interactions beyond Anderson's model, and may give researchers further clues as to why the RVB state has formed.
Yuesheng Li, Devashibhai Adroja, David Voneshen, Robert I. Bewley, Qingming Zhang, Alexander A. Tsirlin, and Philipp Gegenwart, Nearest-neighbor resonating valence bonds in YbMgGaO4, Nat. Commun. 8 (2017), 15814.
Prof. Dr. Philipp Gegenwart and Dr. Alexander Tsirlin
Chair of Experimental Physics VI / EKM
Institute of Physics / Center of Electronic Correlations and Magnetism
University of Augsburg
Klaus P. Prem | idw - Informationsdienst Wissenschaft
Diophantine equations, i.e., equations with integer coefficients for which integer solutions are sought, are among the oldest subjects in mathematics. Early historical occurrences often appeared in the guise of puzzles, and perhaps for that reason, Diophantine equations have been largely neglected in our mathematical schooling. Ironically, though, Diophantine equations play an ever-increasing role in modern applications, not to mention the fact that some Diophantine problems, especially the unsolvable ones, have stimulated an enormous amount of mathematical thinking, advancing the subject of number theory in a way that few other stimuli have.
Keywords: Integer Solution, Diophantine Equation, Triangular Number, Room Acoustics, Perfect Number
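As a concrete illustration of the subject (our own sketch, not from the chapter): the simplest nontrivial Diophantine problem, the linear equation a·x + b·y = c, is solvable in integers exactly when gcd(a, b) divides c, and the extended Euclidean algorithm produces an explicit solution.

```python
# Solving the linear Diophantine equation a*x + b*y = c in integers
# via the extended Euclidean algorithm (illustrative sketch).
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None  # solvable iff gcd(a, b) divides c
    k = c // g
    return (x * k, y * k)

# Example: 12*x + 42*y = 30 is solvable because gcd(12, 42) = 6 divides 30.
sol = solve_linear_diophantine(12, 42, 30)
print(sol, 12 * sol[0] + 42 * sol[1])  # the second value is 30
```

All other integer solutions follow from one solution by adding multiples of (b/g, −a/g), which is why such equations have either no solution or infinitely many.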
These shoes, developed by Puma and the MIT Design Lab, use bacteria to improve athletic performance.
Puma and the MIT Design Lab are developing products with a biological makeup. The idea behind this collaboration is that the athletic experience is more complete when humans wear living, adaptable products.
“Deep Learning Insoles” and “Breathing Shoes.”
Bacteria are the secret ingredient of the Deep Learning Insoles. Placed inside discreet crevices on the top layer of the insole, the bacteria detect compounds present in sweat and respond by changing the conductivity of the insole. The next layer registers these changes, and the third and final layer broadcasts the information to the user's smart device. Users can read about their fatigue and performance levels in real time.
The Breathing Shoe has a biologically active shoe material that is home to microorganisms. The material learns a user’s specific heat patterns and opens up ventilation based on those user-specific heat patterns. Every user winds up with a unique shoe.
VR, along with the related technologies AR and MR, shows genuine potential to enhance learning outcomes for students of all ages across a variety of disciplines.
The benefits of VR in particular are based around participatory as opposed to passive learning to drive greater knowledge retention.
The virtual tour has been promoted as the premier application for the utilization of VR in K-12 to date, allowing students to visit locations outside of the classroom without the associated cost of a real life field trip.
The medical sector has been another area of focus with a number of high profile trials taking place including those sponsored by Pearson and Microsoft. Virtual labs to support scientists in conducting otherwise dangerous or costly experiments are also an opportunity for scalable VR deployment going forwards and unlike virtual tour applications, offer potential for monetization. Language learning offers similarities to simulation based training experiences with existing provision to the consumer market expected to translate to institutional sales in the mid to long term.
Creative tools servicing specific vocational subjects like architecture, engineering and product design are also expected to be a key driver for VR adoption in universities but these solutions are typically provided to students without charge by providers looking to seed future users in industry.
Head mounted display (HMD) manufacturers including Oculus, Google and Microsoft are partnering with educational publishers and content providers to develop content for education. Shipments of VR headsets to the higher and further education sector are expected to reach 700,000 units in 2021 accounting for $150 million in revenue. PC based and all in one solutions (the combined purchase of a headset and mobile device) are each forecast to account for a sizeable share of shipments. Sales of higher priced AR headsets are expected to escalate later in the forecast period with major hardware releases slated for the back end of the decade.
In the K-12 market, the number of students accessing HMD based VR/MR/AR content in K-12 institutions is expected to grow from 2.1 million in 2016 to 82.7 million in 2021. The majority of use cases will be supported by all in one headsets.
The world’s first lab-grown burger was unveiled in 2013 carrying a price tag of $330,000, and it was reportedly not all that tasty.
Scientists are busy creating artificial meat. It’s not just cow-free beef burgers on the future menu — several groups around the world are attempting to clone chicken breasts and fish fillets, as well.
One of the big takeaways from the 2013 cultured burger demo was that meat just ain’t right without fat. So, Post’s lab is now culturing fatty tissue in addition to muscle fibers. Working out that process has taken some time. Until now, there hasn’t been a whole lot of scientific interest in culturing fat cells, and methods that did exist used chemicals we don’t really want to be eating.
Post’s lab is culturing beef fat and muscle tissue separately, and mixing the two after the fact. In the future, Post imagines combining the two cell types in a co-culture. But first, there are a couple other burger basics the team is trying to improve on.
“Designer pets” are already within reach: mice have been turned green, beagles have been doubled in muscle mass, and pigs have been shrunk to the size of cocker spaniels with “designer fur.” Woolly mammoths are being attempted.
Illustration: Chelsea Beck/GMG
They are predicting that half of the population with decent health care will have eggs grown from human skin and fertilized with sperm, then have the entire genome of about 100 embryo samples sequenced, peruse the highlights, and pick the best model to implant.
Traits could be changed in a designer baby in several ways.
Embryo screening involves a process called pre-implantation genetic diagnosis (PGD). Embryos are created by in vitro fertilization and grown to the eight-cell stage, at which point one or two cells are removed. Scientists then examine the DNA of these cells for defects, and only normal embryos are replaced in the womb.
Three-parent babies are human offspring with three genetic parents, created through a specialized form of In vitro fertilization in which the future baby’s mitochondrial DNA comes from a third party. The procedure is intended to prevent mitochondrial diseases including muscular dystrophy and some heart and liver conditions.
Pros and Cons of Designer Babies
Pros:
- Reduces risk of genetic diseases
- Reduces risk of inherited medical conditions
- Keeps pace with others doing it
- Better chance the child will succeed in life
- Better understanding of genetics
- Increased life span
- Can give a child genes that the parents do not carry
- Prevents the next generation of the family from inheriting certain characteristics/diseases

Cons:
- Termination of embryos
- Could create a gap in society
- Possibility of damage to the gene pool
- The baby has no choice in the matter
- Genes often have more than one use
- Geneticists are not perfect
- Loss of individuality
- Other children in the family could be affected by the parents' decision
- Only the rich can afford it
Some scientists disagreed over whether certain types of gene-editing would be important for helping patients, with one prominent researcher contending the technology would not often be needed, while another described dire current clinical needs for it.
CRISPR is a powerful technology that allows editing (by way of replacing or repairing) of multiple genes at once in animal, plant, and human cells. This biological tool could help unlock understanding of basic human biology and also help patients in need of medical care. However, the method has also sparked new ethical controversy.
Gene editing could include altering genes in one person, say to treat disease or make a cosmetic change. More controversially, it could also include making changes to the germ line that would then alter the genome for an individual's children, grandchildren, and the following generations, with potentially unknown repercussions.
In a new study, physicists look at the conditions necessary for the formation of carbon and oxygen to form carbon-based life in the universe.
Life as we know it is based upon the elements of carbon and oxygen. Now a team of physicists, including one from North Carolina State University, is looking at the conditions necessary to the formation of those two elements in the universe. They’ve found that when it comes to supporting life, the universe leaves very little margin for error.
Both carbon and oxygen are produced when helium burns inside giant red stars. Carbon-12, the isotope of an element essential to life, can only form when three alpha particles, or helium-4 nuclei, combine in a very specific way. The key to its formation is an excited state of carbon-12 known as the Hoyle state, which has a very specific energy, measured at 379 keV (379,000 electron volts) above the energy of three alpha particles. Oxygen is produced by the combination of another alpha particle and carbon.
NC State physicist Dean Lee and German colleagues Evgeny Epelbaum, Hermann Krebs, Timo Laehde and Ulf-G. Meissner had previously confirmed the existence and structure of the Hoyle state with a numerical lattice that allowed the researchers to simulate how protons and neutrons interact. These protons and neutrons are made up of elementary particles called quarks. The light quark mass is one of the fundamental parameters of nature, and this mass affects particles’ energies.
In new lattice calculations done at the Juelich Supercomputer Center the physicists found that just a slight variation in the light quark mass will change the energy of the Hoyle state, and this in turn would affect the production of carbon and oxygen in such a way that life as we know it wouldn’t exist.
“The Hoyle state of carbon is key,” Lee says. “If the Hoyle state energy was at 479 keV or more above the three alpha particles, then the amount of carbon produced would be too low for carbon-based life.
“The same holds true for oxygen,” he adds. “If the Hoyle state energy were instead within 279 keV of the three alphas, then there would be plenty of carbon. But the stars would burn their helium into carbon much earlier in their life cycle. As a consequence, the stars would not be hot enough to produce sufficient oxygen for life. In our lattice simulations, we find that more than a 2 or 3 percent change in the light quark mass would lead to problems with the abundance of either carbon or oxygen in the universe.”
The work was funded by the U.S. Department of Energy; the Deutsche Forschungsgemeinschaft, Helmholtz-Gemeinschaft Deutscher Forschungszentren and Bundesministerium fuer Bildung und Forschung in Germany; European Union HadronPhysics3 Project and the European Research Council.
Publication: Evgeny Epelbaum, et al., “Viability of Carbon-Based Life as a Function of the Light Quark Mass,” Phys. Rev. Lett. 110, 112502 (2013); DOI:10.1103/PhysRevLett.110.112502
Source: Tracey Peake, North Carolina State University
Image: Dean Lee | <urn:uuid:f2090da9-6650-47ab-a7a8-3dbb04af9abe> | 3.734375 | 717 | News Article | Science & Tech. | 49.032814 | 95,610,691 |
In physics, the Young–Laplace equation is a nonlinear partial differential equation that describes the capillary pressure difference sustained across the interface between two static fluids, such as water and air, due to the phenomenon of surface tension or wall tension, although usage of the latter is only applicable if one assumes that the wall is very thin. The Young–Laplace equation relates the pressure difference to the shape of the surface or wall, and it is fundamentally important in the study of static capillary surfaces. It is a statement of normal stress balance for static fluids meeting at an interface, where the interface is treated as a surface (zero thickness):

Δp = −γ ∇·n̂ = 2γH = γ(1/R₁ + 1/R₂)

where Δp is the pressure difference across the fluid interface, γ is the surface tension (or wall tension), n̂ is the unit normal pointing out of the surface, H is the mean curvature, and R₁ and R₂ are the principal radii of curvature. (Some authors refer, inappropriately, to the factor 2H as the total curvature.) Note that only normal stress is considered; this is because it can be shown that a static interface is possible only in the absence of tangential stress.
The equation is named after Thomas Young, who developed the qualitative theory of surface tension in 1805, and Pierre-Simon Laplace who completed the mathematical description in the following year. It is sometimes also called the Young–Laplace–Gauss equation, as Gauss unified the work of Young and Laplace in 1830, deriving both the differential equation and boundary conditions using Johann Bernoulli's virtual work principles.
If the pressure difference is zero, as in a soap film without gravity, the interface will assume the shape of a minimal surface.
The equation also explains the energy required to create an emulsion. To form the small, highly curved droplets of an emulsion, extra energy is required to overcome the large pressure that results from their small radius.
Capillary pressure in a tube
In a sufficiently narrow (i.e., low Bond number) tube of circular cross-section (radius a), the interface between two fluids forms a meniscus that is a portion of the surface of a sphere with radius R. The pressure jump across this surface is related to the sphere radius and the surface tension γ by

Δp = 2γ/R
This may be shown by writing the Young–Laplace equation in spherical form with a contact angle boundary condition and also a prescribed height boundary condition at, say, the bottom of the meniscus. The solution is a portion of a sphere, and the solution will exist only for the pressure difference shown above. This is significant because there isn't another equation or law to specify the pressure difference; existence of solution for one specific value of the pressure difference prescribes it.
The radius of the sphere will be a function only of the contact angle, θ, which in turn depends on the exact properties of the fluids and the solids with which they are in contact:

R = a / cos θ
so that the pressure difference may be written as:

Δp = 2γ cos θ / a
In order to maintain hydrostatic equilibrium, the induced capillary pressure is balanced by a change in height, h, which can be positive or negative, depending on whether the wetting angle is less than or greater than 90°. For a fluid of density ρ:

h = 2γ cos θ / (ρga)

where g is the gravitational acceleration. This is sometimes known as the Jurin rule or Jurin height, after James Jurin, who studied the effect in 1718.
For a water-filled glass tube in air:

γ = 0.0728 J/m² at 20 °C
θ = 20° (0.35 rad)
ρ = 1000 kg/m³
g = 9.8 m/s²
— and so the height of the water column is given by:

h ≈ 1.4×10⁻⁵ m² / a
Thus for a 2 mm wide (1 mm radius) tube, the water would rise 14 mm. However, for a capillary tube with radius 0.1 mm, the water would rise 14 cm (about 6 inches).
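These figures can be checked numerically. The following sketch of ours evaluates the capillary-rise formula h = 2γ cos θ / (ρga) with the constants quoted above:

```python
import math

# Numeric check of the capillary-rise formula h = 2*gamma*cos(theta)/(rho*g*a),
# using the values quoted in the text (water in a glass tube at 20 deg C).
gamma = 0.0728            # surface tension, J/m^2
theta = math.radians(20)  # contact angle
rho = 1000.0              # density, kg/m^3
g = 9.8                   # gravitational acceleration, m/s^2

def rise_height(a):
    """Capillary rise height (m) in a tube of radius a (m)."""
    return 2 * gamma * math.cos(theta) / (rho * g * a)

print(round(rise_height(1e-3) * 1000))    # 1 mm radius -> 14 (mm)
print(round(rise_height(1e-4) * 100, 1))  # 0.1 mm radius -> 14.0 (cm)
```

The inverse dependence on the radius a is why the narrower tube lifts the water ten times higher.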
Capillary action in general
In the general case, for a free surface and where there is an applied "over-pressure", Δp, at the interface in equilibrium, there is a balance between the applied pressure, the hydrostatic pressure and the effects of surface tension. The Young–Laplace equation becomes:

Δp = ρgh − γ ∇·n̂
The equation can be non-dimensionalised in terms of its characteristic length scale, the capillary length λc = √(γ/(ρg)), and the characteristic pressure pc = γ/λc = √(γρg). The non-dimensional equation then becomes:

h* − Δp* = (d²h*/dx*²) / (1 + (dh*/dx*)²)^(3/2)
Thus, the surface shape is determined by only one parameter, the over pressure of the fluid, Δp* and the scale of the surface is given by the capillary length. The solution of the equation requires an initial condition for position, and the gradient of the surface at the start point.
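For concreteness, a small sketch of ours evaluates the capillary length and characteristic pressure for water, using the same constants as in the tube example above:

```python
import math

# Capillary length lambda_c = sqrt(gamma/(rho*g)) and characteristic
# pressure p_c = gamma/lambda_c = sqrt(gamma*rho*g) for water at 20 deg C.
gamma = 0.0728  # surface tension, J/m^2
rho = 1000.0    # density, kg/m^3
g = 9.8         # gravitational acceleration, m/s^2

lambda_c = math.sqrt(gamma / (rho * g))  # length below which surface tension dominates gravity
p_c = math.sqrt(gamma * rho * g)         # pressure scale of the interface

print(round(lambda_c * 1000, 1))  # ~2.7 (mm)
print(round(p_c, 1))              # ~26.7 (Pa)
```

Menisci and droplets much smaller than λc are shaped almost entirely by surface tension, while much larger ones are flattened by gravity.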
History
Francis Hauksbee performed some of the earliest observations and experiments in 1709 and these were repeated in 1718 by James Jurin who observed that the height of fluid in a capillary column was a function only of the cross-sectional area at the surface, not of any other dimensions of the column.
Thomas Young laid the foundations of the equation in his 1804 paper An Essay on the Cohesion of Fluids where he set out in descriptive terms the principles governing contact between fluids (along with many other aspects of fluid behaviour). Pierre Simon Laplace followed this up in Mécanique Céleste with the formal mathematical description given above, which reproduced in symbolic terms the relationship described earlier by Young.
Laplace accepted the idea propounded by Hauksbee in his book Physico-mechanical Experiments (1709), that the phenomenon was due to a force of attraction that was insensible at sensible distances. The part which deals with the action of a solid on a liquid and the mutual action of two liquids was not worked out thoroughly, but ultimately was completed by Gauss. Franz Ernst Neumann (1798-1895) later filled in a few details.
- Surface Tension Module, by John W. M. Bush, at MIT OCW.
- Robert Finn (1999). "Capillary Surface Interfaces" (PDF). AMS.
- "Jurin rule". McGraw-Hill Dictionary of Scientific and Technical Terms. McGraw-Hill on Answers.com. 2003. Retrieved 2007-09-05.
- James Jurin (1718) "An account of some experiments shown before the Royal Society; with an enquiry into the cause of some of the ascent and suspension of water in capillary tubes," Philosophical Transactions of the Royal Society of London, 30 : 739–747.
- James Jurin (1719) "An account of some new experiments, relating to the action of glass tubes upon water and quicksilver," Philosophical Transactions of the Royal Society of London, 30 : 1083–1096.
- Lamb, H. Statics, Including Hydrostatics and the Elements of the Theory of Elasticity, 3rd ed. Cambridge, England: Cambridge University Press, 1928.
- Basford, Jeffrey R. (2002). "The Law of Laplace and its relevance to contemporary medicine and rehabilitation". Archives of Physical Medicine and Rehabilitation. 83 (8): 1165–1170. doi:10.1053/apmr.2002.33985.
- Francis Hauksbee, Physico-mechanical Experiments on Various Subjects … (London, England: (Self-published by author; printed by R. Brugis), 1709), pages 139–169.
- Francis Hauksbee (1711) "An account of an experiment touching the direction of a drop of oil of oranges, between two glass planes, towards any side of them that is nearest press'd together," Philosophical Transactions of the Royal Society of London, 27 : 374–375.
- Francis Hauksbee (1712) "An account of an experiment touching the ascent of water between two glass planes, in an hyperbolick figure," Philosophical Transactions of the Royal Society of London, 27 : 539–540.
- Maxwell, James Clerk; Strutt, John William (1911). "Capillary Action". Encyclopædia Britannica. 5 (11th ed.). pp. 256–275.
- Thomas Young (1805) "An essay on the cohesion of fluids," Philosophical Transactions of the Royal Society of London, 95 : 65–87.
- Pierre Simon marquis de Laplace, Traité de Mécanique Céleste, volume 4, (Paris, France: Courcier, 1805), Supplément au dixième livre du Traité de Mécanique Céleste, pages 1–79.
- Pierre Simon marquis de Laplace, Traité de Mécanique Céleste, volume 4, (Paris, France: Courcier, 1805), Supplément au dixième livre du Traité de Mécanique Céleste. On page 2 of the Supplément, Laplace states that capillary action is due to "… les lois dans lesquelles l'attraction n'est sensible qu'à des distances insensibles; …" (… the laws in which attraction is sensible [significant] only at insensible [infinitesimal] distances …).
- In 1751, Johann Andreas Segner came to the same conclusion that Hauksbee had reached in 1709: J. A. von Segner (1751) "De figuris superficierum fluidarum" (On the shapes of liquid surfaces), Commentarii Societatis Regiae Scientiarum Gottingensis (Memoirs of the Royal Scientific Society at Göttingen), 1 : 301–372. On page 303, Segner proposes that liquids are held together by an attractive force (vim attractricem) that acts over such short distances "that no one could yet have perceived it with their senses" (… ut nullo adhuc sensu percipi poterit.).
- Carl Friedrich Gauss, Principia generalia Theoriae Figurae Fluidorum in statu Aequilibrii [General principles of the theory of fluid shapes in a state of equilibrium] (Göttingen, (Germany): Dieterichs, 1830). Available on-line at: Hathi Trust.
- Franz Neumann with A. Wangerin, ed., Vorlesungen über die Theorie der Capillarität [Lectures on the theory of capillarity] (Leipzig, Germany: B. G. Teubner, 1894).
- Rouse Ball, W. W. (2003) "Pierre Simon Laplace (1749–1827)", in A Short Account of the History of Mathematics, 4th ed., Dover, ISBN 0-486-20630-0
- Maxwell, James Clerk; Strutt, John William (1911). "Capillary Action". In Chisholm, Hugh. Encyclopædia Britannica. 5 (11th ed.). Cambridge University Press. pp. 256–275.
- Batchelor, G. K. (1967) An Introduction To Fluid Dynamics, Cambridge University Press
- Jurin, J. (1716). "An account of some experiments shown before the Royal Society; with an enquiry into the cause of the ascent and suspension of water in capillary tubes". Philosophical Transactions of the Royal Society. 30 (351–363): 739–747. doi:10.1098/rstl.1717.0026.
- Tadros T. F. (1995) Surfactants in Agrochemicals, Surfactant Science series, vol.54, Dekker | <urn:uuid:55d943a1-cb7e-41a8-a1aa-1a2dad053f4d> | 3.875 | 2,372 | Knowledge Article | Science & Tech. | 51.210941 | 95,610,702 |
JAR stands for Java ARchive, and a JAR file is platform independent. By packing all the classes into one JAR file, we make the application portable so that it can be run on any operating system or platform.
To execute a JAR file, one must have Java installed. The Java version should also be up to date, so before proceeding further, make sure your computer meets this requirement.
The second step is to make the JAR file executable. All class files of the application are collected in one place. The programmer must set an entry point to the main class (the class containing the main method) using the manifest file "MANIFEST.MF", or set the entry point with the jar tool's "e" option.
If you do not specify a main class, your JAR file won't run. To specify one, add the line "Main-Class: [package name].[class name]" to the manifest file, which is stored inside the JAR in a subfolder named META-INF.
.jar files are executed from this entry point by the JVM (Java Virtual Machine). A JAR file itself uses the ZIP archive format, which is identified by a fixed signature at the start of the file.
On Windows, a double-clicked .jar file is executed via javaw.exe.
Now that you have made the JAR file executable, you can distribute it along with any libraries the program uses.
To run your .jar file, type java -jar [jar file name] in the command prompt. To make it execute on double click, associate .jar files with the command "C:\Program Files\Java\j2rex.y.z\bin\javaw.exe" -jar "%1" %* (note the quotes around the path).
Point to remember:
Users do not normally create JAR files by hand, but they execute them quite often. IDEs like NetBeans or Eclipse can also be used to create and execute a .jar file.
In this program, we are going to read and write an Excel sheet using Java.
The class we need to import is org.apache.poi.poifs.filesystem.POIFSFileSystem. This class is used to buffer the Excel file into an object; in other words, it is used to read the Excel file. After reading the Excel file, we use the write() method to write into any Excel file.
We can read any file and write it into other excel file by this way. You can add new cell and rows or you can set any default values for any cells. This means you can read excel file and modify it according to your need.
To run this example, first download the Excel file and the code, then make a folder named excel in the C drive, copy the Excel file into it, copy the code into webapps, and follow the steps as defined previously.
The code of the program is given below:
The output of the program is given below: | <urn:uuid:f21559db-9ad0-4bd0-837e-c7e25604cc08> | 3.28125 | 209 | Tutorial | Software Dev. | 65.374631 | 95,610,709 |
- Name meaning:
- 'before lizard crest [Saurolophus]'
- Dinosaur description:
Prosaurolophus is known from the skeletons of 24-29 individuals, some articulated (the bones were joined as they would have been in life).
- Dinosauria, Ornithischia, Genasauria, Cerapoda, Ornithopoda, Euornithopoda, Iguanodontia, Euiguanodontia, Dryomorpha, Ankylopollexia, Iguanodontoidea, Hadrosauridae, Euhadrosauria, Hadrosaurinae
- Named by:
- Brown (1916)
- Type species: | <urn:uuid:30d10079-ef46-4205-82c2-e5acb4f0bbb0> | 2.828125 | 162 | Structured Data | Science & Tech. | -67.239245 | 95,610,711 |
In physics and chemistry, a selection rule, or transition rule, formally constrains the possible transitions of a system from one quantum state to another. Selection rules have been derived for electromagnetic transitions in molecules, in atoms, in atomic nuclei, and so on. The selection rules may differ according to the technique used to observe the transition. The selection rule also plays a role in chemical reactions, where some are formally spin-forbidden reactions, that is, reactions where the spin state changes at least once from reactants to products.
In the following, mainly atomic and molecular transitions are considered.
The probability of a transition is governed by the transition moment integral, ∫ ψ₁* µ ψ₂ dτ, where ψ₁ and ψ₂ are the wave functions of the two states involved in the transition and µ is the transition moment operator. If the value of this integral is zero the transition is forbidden. In practice, the integral itself does not need to be calculated to determine a selection rule. It is sufficient to determine the symmetry of the transition moment function, ψ₁* µ ψ₂. If the symmetry of this function spans the totally symmetric representation of the point group to which the atom or molecule belongs then its value is (in general) not zero and the transition is allowed. Otherwise, the transition is forbidden.
The transition moment integral is zero if the transition moment function, ψ₁* µ ψ₂, is anti-symmetric or odd, i.e. if y(x) = -y(-x) holds. The symmetry of the transition moment function is the direct product of the parities of its three components. The symmetry characteristics of each component can be obtained from standard character tables. Rules for obtaining the symmetries of a direct product can be found in texts on character tables.
|Transition type||µ transforms as||Note|
|Electric dipole||x, y, z||Optical spectra|
|Electric quadrupole||x², y², z², xy, xz, yz||Constraint x² + y² + z² = 0|
|Electric polarizability||x², y², z², xy, xz, yz||Raman spectra|
|Magnetic dipole||Rx, Ry, Rz||Optical spectra (weak)|
The Laporte rule is a selection rule formally stated as follows: In a centrosymmetric environment, transitions between like atomic orbitals such as s-s,p-p, d-d, or f-f, transitions are forbidden. The Laporte rule applies to electric dipole transitions, so the operator has u symmetry (meaning ungerade, odd). p orbitals also have u symmetry, so the symmetry of the transition moment function is given by the triple product u×u×u, which has u symmetry. The transitions are therefore forbidden. Likewise, d orbitals have g symmetry (meaning gerade, even), so the triple product g×u×g also has u symmetry and the transition is forbidden.
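The parity bookkeeping behind the Laporte rule reduces to multiplying signs (gerade = +1, ungerade = -1): an electric dipole transition is parity-allowed only when the triple product of initial state, operator, and final state is gerade. A toy sketch, not from the original text:

```python
G, U = +1, -1   # gerade (even) and ungerade (odd) parities

def dipole_parity_allowed(initial, final):
    """Electric-dipole operator is ungerade; the triple product must be gerade."""
    return initial * U * final == G

# p-p (u, u) and d-d (g, g) transitions are Laporte-forbidden:
print(dipole_parity_allowed(U, U))   # p -> p: False
print(dipole_parity_allowed(G, G))   # d -> d: False
# s -> p (g, u) is parity-allowed:
print(dipole_parity_allowed(G, U))   # True
```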
The wave function of a single electron is the product of a space-dependent wave function and a spin wave function. Spin is directional and can be said to have odd parity. It follows that transitions in which the spin "direction" changes are forbidden. In formal terms, only states with the same total spin quantum number are "spin-allowed". In crystal field theory, d-d transitions that are spin-forbidden are much weaker than spin-allowed transitions. Both can be observed, in spite of the Laporte rule, because the actual transitions are coupled to vibrations that are anti-symmetric and have the same symmetry as the dipole moment operator.
In vibrational spectroscopy, transitions are observed between different vibrational states. In a fundamental vibration, the molecule is excited from its ground state (v = 0) to the first excited state (v = 1). The symmetry of the ground-state wave function is the same as that of the molecule. It is, therefore, a basis for the totally symmetric representation in the point group of the molecule. It follows that, for a vibrational transition to be allowed, the symmetry of the excited state wave function must be the same as the symmetry of the transition moment operator.
In infrared spectroscopy, the transition moment operator transforms as either x and/or y and/or z. The excited state wave function must also transform as at least one of these vectors. In Raman spectroscopy, the operator transforms as one of the second-order terms in the right-most column of the character table, below.
|Td||E||8 C3||3 C2||6 S4||6 σd|
|A1||1||1||1||1||1||||x² + y² + z²|
|A2||1||1||1||-1||-1|
|E||2||-1||2||0||0||||(2z² - x² - y², x² - y²)|
|T1||3||0||-1||1||-1||(Rx, Ry, Rz)|
|T2||3||0||-1||-1||1||(x, y, z)||(xy, xz, yz)|
The molecule methane, CH4, may be used as an example to illustrate the application of these principles. The molecule is tetrahedral and has Td symmetry. The vibrations of methane span the representations A1 + E + 2T2. Examination of the character table shows that all four vibrations are Raman-active, but only the T2 vibrations can be seen in the infrared spectrum.
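The claim about methane can be checked with the standard reduction formula, n(Γ) = (1/h) Σ g(c) χ(c) χ_Γ(c), applied to the character table above: a vibration is IR-active if its direct product with T2 (the symmetry of x, y, z) contains A1, and Raman-active if its product with one of the quadratic representations does. A short illustrative script, not part of the original article:

```python
# Characters of the Td point group (classes: E, 8C3, 3C2, 6S4, 6 sigma_d).
SIZES = [1, 8, 3, 6, 6]          # class sizes; group order h = 24
CHARS = {
    "A1": [1, 1, 1, 1, 1],
    "A2": [1, 1, 1, -1, -1],
    "E":  [2, -1, 2, 0, 0],
    "T1": [3, 0, -1, 1, -1],
    "T2": [3, 0, -1, -1, 1],
}
H = sum(SIZES)

def n_a1(chars):
    """How many times the totally symmetric rep A1 appears in a representation."""
    return sum(g * c for g, c in zip(SIZES, chars)) // H

def product(a, b):
    """Characters of the direct product of two irreps."""
    return [x * y for x, y in zip(CHARS[a], CHARS[b])]

QUADRATIC = ["A1", "E", "T2"]    # reps spanned by the quadratic functions
for vib in ["A1", "E", "T2"]:    # vibrations of methane: A1 + E + 2 T2
    ir = n_a1(product(vib, "T2")) > 0            # (x, y, z) transform as T2
    raman = any(n_a1(product(vib, q)) > 0 for q in QUADRATIC)
    print(vib, "IR-active:", ir, "Raman-active:", raman)
```

Running it reproduces the statement in the text: all vibrations are Raman-active, but only T2 is IR-active.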
In the harmonic approximation, it can be shown that overtones are forbidden in both infrared and Raman spectra. However, when anharmonicity is taken into account, the transitions are weakly allowed.
There are many types of coupled transition such as are observed in vibration-rotation spectra. The excited-state wave function is the product of two wave functions such as vibrational and rotational. The general principle is that the symmetry of the excited state is obtained as the direct product of the symmetries of the component wave functions. In rovibronic transitions, the excited states involve three wave functions.
The infrared spectrum of hydrogen chloride gas shows rotational fine structure superimposed on the vibrational spectrum. This is typical of the infrared spectra of heteronuclear diatomic molecules. It shows the so-called P and R branches. The Q branch, located at the vibration frequency, is absent. Symmetric top molecules display the Q branch. This follows from the application of selection rules.
Resonance Raman spectroscopy involves a kind of vibronic coupling. It results in much-increased intensity of fundamental and overtone transitions as the vibrations "steal" intensity from an allowed electronic transition. In spite of appearances, the selection rules are the same as in Raman spectroscopy.
- See also angular momentum coupling
In general, electric (charge) radiation or magnetic (current, magnetic moment) radiation can be classified into multipoles Eλ (electric) or Mλ (magnetic) of order 2λ, e.g., E1 for electric dipole, E2 for quadrupole, or E3 for octupole. In transitions where the change in angular momentum between the initial and final states makes several multipole radiations possible, usually the lowest-order multipoles are overwhelmingly more likely, and dominate the transition.
The emitted particle carries away an angular momentum λ, which for the photon must be at least 1, since it is a vector particle (i.e., it has JP = 1−). Thus, there is no E0 (electric monopoles) or M0 (magnetic monopoles, which do not seem to exist) radiation.
Since the total angular momentum has to be conserved during the transition, we have that J_i = J_f + λ,
where λ is the angular momentum carried away by the emitted photon, and its z-projection is given by μ = M_i - M_f; J_i and J_f are, respectively, the initial and final angular momenta of the atom. The corresponding quantum numbers λ and μ (z-axis angular momentum) must satisfy |J_i - J_f| ≤ λ ≤ J_i + J_f and -λ ≤ μ ≤ λ.
Parity is also preserved. For electric multipole transitions the parities of the initial and final states are related by π_f = (-1)^λ π_i,
while for magnetic multipoles π_f = (-1)^(λ+1) π_i.
Thus, parity does not change for E-even or M-odd multipoles, while it changes for E-odd or M-even multipoles.
These considerations generate different sets of transition rules depending on the multipole order and type. The expression forbidden transitions is often used; this does not mean that these transitions cannot occur, only that they are electric-dipole-forbidden. These transitions are perfectly possible; they merely occur at a lower rate. If the rate for an E1 transition is non-zero, the transition is said to be permitted; if it is zero, then M1, E2, etc. transitions can still produce radiation, albeit with much lower transition rates. These are the so-called forbidden transitions. The transition rate decreases by a factor of about 1000 from one multipole to the next one, so the lowest multipole transitions are most likely to occur.
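As a rough illustration of the triangle rule and the parity relations above, the sketch below enumerates the lowest allowed multipole type for each λ. It is not from the original article; it treats J as an integer and ignores further constraints such as spin selection rules:

```python
def allowed_multipoles(j_i, j_f, parity_change, lam_max=3):
    """List multipoles (E-lambda or M-lambda) allowed by angular momentum and parity.

    parity_change: True if the initial and final states have opposite parity.
    The photon carries at least lambda = 1, so 0 -> 0 transitions are forbidden.
    """
    out = []
    lo = abs(j_i - j_f)
    for lam in range(max(lo, 1), min(j_i + j_f, lam_max) + 1):
        # Electric 2^lam-poles change parity iff lam is odd; magnetic iff lam is even.
        if parity_change == (lam % 2 == 1):
            out.append(f"E{lam}")
        else:
            out.append(f"M{lam}")
    return out

print(allowed_multipoles(2, 0, parity_change=False))  # -> ['E2']
print(allowed_multipoles(1, 0, parity_change=True))   # -> ['E1']
print(allowed_multipoles(0, 0, parity_change=False))  # -> [] (0 -> 0 forbidden)
```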
Semi-forbidden transitions (resulting in so-called intercombination lines) are electric dipole (E1) transitions for which the selection rule that the spin does not change is violated. This is a result of the failure of LS coupling.
J is the total angular momentum, L is the azimuthal quantum number, S is the spin quantum number, and MJ is the secondary total angular momentum quantum number. Which transitions are allowed is based on the hydrogen-like atom. The symbol ↛ is used to indicate a forbidden transition.
|Allowed transitions||Electric dipole (E1)||Magnetic dipole (M1)||Electric quadrupole (E2)||Magnetic quadrupole (M2)||Electric octupole (E3)||Magnetic octupole (M3)|
|LS coupling||One electron jump||No electron jump||None or one electron jump||One electron jump||One electron jump||One electron jump|
In surface vibrational spectroscopy, the surface selection rule is applied to identify the peaks observed in vibrational spectra. When a molecule is adsorbed on a substrate, the molecule induces opposite image charges in the substrate. The dipole moment of the molecule and the image charges perpendicular to the surface reinforce each other. In contrast, the dipole moments of the molecule and the image charges parallel to the surface cancel out. Therefore, only molecular vibrational peaks giving rise to a dynamic dipole moment perpendicular to the surface will be observed in the vibrational spectrum.
- Harris & Bertolucci, p. 130
- Salthouse, J.A.; Ware, M.J. (1972). Point group character tables and related data. Cambridge University Press. ISBN 0-521-08139-4.
- Anything with u (German ungerade) symmetry is antisymmetric with respect to the centre of symmetry. g (German gerade) signifies symmetric with respect to the centre of symmetry. If the transition moment function has u symmetry, the positive and negative parts will be equal to each other, so the integral has a value of zero.
- Harris & Bertolucci, p. 330
- Harris & Bertolucci, p. 336
- Cotton Section 9.6, Selection rules and polarizations
- Cotton, Section 10.6 Selection rules for fundamental vibrational transitions
- Cotton, Chapter 10 Molecular Vibrations
- Cotton p. 327
- Califano, S. (1976). Vibrational states. Wiley. ISBN 0-471-12996-8. Chapter 9, Anharmonicity
- Kroto, H.W. (1992). Molecular Rotation Spectra. New York: Dover. ISBN 0-486-49540-X.
- Harris & Bertolucci, p. 339
- Harris & Bertolucci, p. 123
- Long, D.A. (2001). The Raman Effect: A Unified Treatment of the Theory of Raman Scattering by Molecules. Wiley. ISBN 0-471-49028-8. Chapter 7, Vibrational Resonance Raman Scattering
- Harris & Bertolucci, p. 198
- Softley, T.P. (1994). Atomic Spectra. Oxford: Oxford University Press. ISBN 0-19-855688-8.
- Condon, E.V.; Shortley, G.H. (1953). The Theory of Atomic Spectra. Cambridge University Press. ISBN 0-521-09209-4.
Harris, D.C.; Bertolucci, M.D. (1978). Symmetry and Spectroscopy. Oxford University Press. ISBN 0-19-855152-5.
Cotton, F.A. (1990). Chemical Applications of Group Theory (3rd ed.). Wiley. ISBN 978-0-471-51094-9.
- Stanton, L. (1973). "Selection rules for pure rotation and vibration-rotation hyper-Raman spectra". Journal of Raman Spectroscopy. 1 (1): 53–70. Bibcode:1973JRSp....1...53S. doi:10.1002/jrs.1250010105.
- Bower, D.I; Maddams, W.F. (1989). The vibrational spectroscopy of polymers. Cambridge University Press. ISBN 0-521-24633-4. Section 4.1.5: Selection rules for Raman activity.
- Sherwood, P.M.A. (1972). Vibrational Spectroscopy of Solids. Cambridge University Press. ISBN 0-521-08482-2. Chapter 4: The interaction of radiation with a crystal. | <urn:uuid:a9078a51-5d66-435d-bbfa-042d77d14272> | 3.890625 | 2,855 | Knowledge Article | Science & Tech. | 50.24661 | 95,610,714 |
An emotive article on the ice apocalypse by Eric Holthaus describes a terrifying vision of catastrophic sea level rise this century, caused by climate change and the collapse of the Antarctic ice sheet. But how likely is this, and how soon could such a future arrive?
I’ve been gripped by the story of Antarctic ‘ice cliff instability’ ever since Rob DeConto and Dave Pollard published their controversial predictions last year. They suggested that disintegration of ice shelves caused by global warming could leave behind coastal ice cliffs so tall they would be unstable, crumbling endlessly into the ocean and causing rapid, sustained sea level rise.
I’m glad Eric Holthaus is writing about an impact of climate change that is both certain (seas will rise around the world, no matter what we do) and extraordinarily important (we must adapt). I’m sympathetic to his concerns about the future. But I believe his article is too pessimistic: it exaggerates the possibility of disaster. Too soon, too certain.
I’ll admit my fascination is personal. In late 2015, Catherine Ritz and I co-led a study about the future of Antarctica, and our predictions – though definitely bad news – were far less dire. So what evidence do we have that the Antarctic ice sheet could, or would, collapse this century?
First, it’s important to map out what is known. Yes, the Antarctic ice sheet is vulnerable to global warming. Not because it will melt in the warm air like an ice cube, but because the ice shelves around the coastlines act as bracing for the glaciers. When the surface of an ice shelf melts it can disintegrate, and the flow of ice into the oceans can speed up: we saw this for Larsen B in 2002. Our greenhouse gas emissions make this more likely.
Yes, scientists have proposed that disintegration of ice shelves could trigger two types of ‘instability’ that may have caused the ‘marine’ parts of the Antarctic ice sheet (lying on bedrock below sea level) to drastically shrink in the past, raising sea levels by several metres. Confusingly, these are named ‘marine ice sheet instability’ and ‘marine ice cliff instability’. Broadly speaking, both are based around the idea that the thicker the leading edge of the ice sheet, the faster ice can be lost into the oceans. Both may be ‘positive feedbacks’: in other words, self-sustaining. The marine parts of the ice sheet are thicker in the middle than at the edges, so once these proposed instabilities started, they would keep going until they ran out of ice. As Eric says, this would be over three metres’ worth of sea level rise.
Yes, one instability may already be happening. In 2014, new evidence indicated marine ice sheet instability may be underway in the Amundsen Sea area, forty years after it was first predicted. But Eric is wrong to say Antarctica’s ‘ice budget’ has tipped out of balance due to our burning of fossil fuels. Not only has it been out of balance before – such as the ancient West Antarctic collapse that prompts these fears – but the reason for the Amundsen Sea changes, where most ice is being lost, is that the ring of deep warm water around Antarctica has welled up onto the continental shelf and is melting the ice from underneath. We don’t know if human activities made this more likely.
As far as I know, scientists agree the basic hypothesis of marine ice cliff instability by Jeremy Bassis and Catherine Walker is sound (it’s a shame Catherine’s joint contribution to that key 2011 paper wasn’t mentioned). Ice is not strong enough to form a sheer cliff taller than about 100 m above the water line. And indeed, we don’t see any ice cliffs taller than this, which gives us indirect evidence their calculations are correct. We also have the recent survey led by Matthew Wise of marks left by the keels of ancient icebergs, scraping along the bedrock like heavy ships, which point towards this kind of cliff breakdown in the distant past.
But there is little consensus in the scientific community about how this ice cliff instability could behave. That’s because there is a big leap from identifying a potential problem to predicting the real-world consequences. Iceberg scrape marks, though important, can’t tell us how soon, how fast, or for how long. Rob DeConto and Dave Pollard have been the first to put their heads above the parapet, but others may come to different conclusions. Ted Scambos mentioned ‘heaps of icebergs’ acting as new ice shelves. Could cold freshwater from melted icebergs also slow things down? Could cliffs partially, instead of catastrophically, crumble? Would a model with finer detail predict fewer tall cliffs? Eric describes the collapse rate as conservative compared with Jakobshavn in Greenland, but this compares apples with oranges: the rapid ice losses from that great glacier are largely due to its unusually fast flow, rather than the rate of retreat of the ice edge.
Just as important is the initial trigger: how sensitive are the ice shelves to global warming? In Rob and Dave’s study, they disintegrated fast and early; predictions by others are less pessimistic. Eric compounds this by describing their highest scenario as ‘business as usual’, but it’s really business-worse-than-usual. Under current policies we’re headed for less warming than this scenario, and if national pledges under the Paris agreement are carried out this would decrease the warming further (though not meet the two-degree target).
Even Rob and Dave had a huge range of predictions for that very high scenario – anything from a 30 cm sea level fall to nearly two metres of rise – and they originally described their model as ‘speculative’. So it’s not justified to use such strong, certain phrases as ‘the destruction would be unstoppable’.
I was particularly concerned about some of the implied time scales and impacts. That ‘slowly burying every shoreline…creating millions of climate refugees…could play out in a mere 20 to 50 years’ (it could begin then, but would take far longer). That ‘the full 11 feet’ could be unlocked by 2100 (Rob and Dave predicted the middle of next century). That cities will be ‘wiped off the map’ (we will adapt, because the costs of protecting coastlines are predicted to be far less than those of flooding). We absolutely should be concerned about climate dangers, and reduce them. But black-and-white thinking and over-simplification don’t help with risk management, they hinder it.
Is “the entire scientific community [in] emergency mode”? We are cautious, and trying to learn more. Climate prediction is a strange game. It takes decades to test our predictions, so society must make decisions with the best evidence but always under uncertainty. I understand why a US-based climate scientist would feel especially pessimistic. But we have to take care not to talk about the apocalypse as if it were inevitable.
World's longest running synchrotron light experiment reveals long-term behavior of nuclear waste cement
The first experiment placed on Diamond's Long Duration Experimental (LDE) facility, on beamline I11, has now been in place for 1,000 days. The experiment, led by Dr Claire Corkhill from the University of Sheffield, has used the world-leading capabilities of the beamline to investigate the hydration of cements used by the nuclear industry for the storage and disposal of waste.
"Understanding the rate at which hydration occurs in cement, a process that can take anywhere up to 50 years, is very important to help us predict the behaviours of cement in the long term," explained Dr Corkhill.
"These cements are being used to safely lock away the radioactive elements in nuclear waste for timescales of more than 10,000 years, so it is extremely important that we can accurately predict the properties of these materials in the future. The unique facility at Diamond has allowed us to follow this reaction in situ, for 1000 days, and the data is already allowing us to identify particular phases that will safely lock away radioactive elements in 100 years' time, something we would otherwise not have been able to determine."
Dr Corkhill is planning to return to Diamond to investigate the reaction of these phases with uranium, technetium and plutonium on one of Diamond's X-ray absorption spectroscopy beamlines, B18.
"Synchrotron light allows us a window into the chemistry of these materials that have a very important role in the safe disposal of nuclear waste that just isn't available through any other techniques," Corkhill added.
The LDE was commissioned in 2014, with the first experiment being placed on the beamline on 6th October 2014. Since then, the samples have been put into the beamline periodically to capture data on any changes. Other experiments have included looking at power cycling in batteries and at ice crystal formation.
"Seeing long duration experiments in the facility is very pleasing," concludes Professor Chiu Tang, Principal Beamline Scientist. "That we can demonstrate experiments over 1,000 days is a testament to how well we've engaged with our user community, enabling them to use synchrotron light to probe the frontiers of science."
Steve Pritchard | EurekAlert!
This is when the Earth is in between the Sun and the Moon. Think of the moon being in the Earth's shadow.
A lunar eclipse occurs when the moon passes behind the earth such that the earth blocks the sun’s rays from striking the moon. This can occur only when the Sun, Earth and Moon are aligned exactly, or very closely so, with the Earth in the middle. Hence, there is always a full moon the night of a lunar eclipse
by sunforged 7 years ago
Full lunar eclipse tonight.."You have two options: Stay up Late or Get up Early!It's been over thirty months since the continental United States in its entirety has been able to view a total lunar eclipse. Keep your eyes on the sky next Tuesday morning, December 21st. The moon will hit a...
by tvs290 3 years ago
Whats so special about today's Lunar Eclipse?
by sobhy 7 years ago
The Earth casts a long shadow behind the side facing the sun. A lunar eclipse occurs when the moon enters the Earth's shadow. This shadow has two parts: the full shadow, known as the umbra, and the partial shadow, called the penumbra. When the moon is completely immersed in the umbra, a...
by Beth Perry 4 years ago
Who is planning to watch the blood moon lunar eclipse tonight? I'm going to try and watch it (3AM EST), but we have a very cloudy sky tonight, so not sure if the view will be much. Anyone else going to try and see it?
by Rafini 11 months ago
My son thinks so, and now I'm beginning to think it could be true - has this theory already been considered? I mean, it's impossible to know what the sun is really made of...all we know for sure is that it's still burning. It could have a solid mass at its core....I'm thinking that when...
by Stacie L 7 years ago
Amazing spectacle: Total lunar eclipse Monday night. You may be able to see it from your backyard, if weather is favorable. Image: Eclipse chart (Starry Night Software). Seen from Central Park in New York at 4:30 a.m., the eclipse is nearly over, but the moon stands high, flanked by Orion on the left and...
A systematic search of the Cambridge Crystallographic Database has yielded structures of 467 molecules containing, or reported to contain, a molybdenum to molybdenum quadruple bond, Mo[quadruple bond]Mo. We have arranged these data as a histogram, in order to determine the "normal" range of Mo[quadruple bond]Mo lengths, 2.06–2.17 Å, as well as to examine molecules in which the Mo[quadruple bond]Mo length deviates from the norm. We discuss 24 molecules with exceptionally long Mo[quadruple bond]Mo lengths. Another molecule, Mo(2)(pyNC(O)CH(3))(4), was previously reported to have an unusually short Mo[quadruple bond]Mo bond. However, the structure is highly disordered (probably twinned); thus we have prepared and determined the structure of an analogous molecule, Mo(2)(pyNC(O)CH(2)CH(3))(4), which displays a completely normal Mo[quadruple bond]Mo bond length of over 2.08 Å.
You are currently converting Velocity - Angular units from Revolution/Second to Degree/Second
4880 Revolution/Second (rev/s)
1756800 Degree/Second (°/s)
Revolution/Second : The revolution per second is a metric unit of angular velocity (rotational speed). It also is a unit of angular frequency. Its symbols are r•s⁻¹ and r/s. It is equal to 6.28318530718 radian/second.
Degree/Second : The degree per second is a metric unit of angular velocity (rotational speed). It also is a unit of angular frequency. Its symbols are degree•s⁻¹ and degree/s. One degree per second is equal to 0.01745329251994 radian/second.
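The conversion is a single multiplication, since one revolution is 360 degrees; the headline figure above follows directly. A minimal check:

```python
DEGREES_PER_REVOLUTION = 360

def rev_per_s_to_deg_per_s(rev_per_s):
    """Convert an angular velocity from revolutions/second to degrees/second."""
    return rev_per_s * DEGREES_PER_REVOLUTION

print(rev_per_s_to_deg_per_s(4880))   # -> 1756800
```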
12 July 2018
Sifting through cells using heat
Published online 30 November 2016
Laser-beam-induced heat and a nano-wrap reveal heat diffusion patterns in living cells.
A research team from Saudi Arabia has created a technique that reveals the thermal-transport properties of a living cell, which is potentially useful in identifying the different types of cells including cancer cells.
The scientists achieved this by trapping and heating the cell in a nanomembrane-based wrap [1].
Be it a healthy or a diseased cell, living cells have their own distinct patterns of heat diffusion. But this is the first time their thermal characteristics have been used as indicators to separate healthy cells from diseased ones.
“Our ultimate goal is to use this technique to study the effects of diseases on the thermal properties of single living cells, thereby opening new avenues for diagnosing and treating various diseases,” says Rami T. EIAfandy, the lead author of the study.
Boon S. Ooi and his colleagues from the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, shone a pulsed, focused laser beam on a gallium nitride nanomembrane wrapped around a living cell. The beam heated the nanomembrane which transferred the heat to the cell, changing its thermal-transport properties.
The change in the thermal-transport properties of the cell changed the temperature and optical properties of the nanomembrane which, in turn, revealed the thermal properties of the living cell.
When the living cell was swapped with breast and cervical cancer cells in the lab, the nanomembrane similarly revealed their thermal properties. The scientists observed how cervical cancer cells expressed different thermal properties than those of breast cancer cells.
- ElAfandy, R. T. et al. Nanomembrane-based, thermal-transport biosensor for living cells. Small http://dx.doi.org/10.1002/smll.201603080 (2016).
Time-factor. Introduction of the Moderator
The complex phenomena, which are summarized by the still rather ill-defined term "time-factor", are concerned with time and dose-rate influences on measurable biological effects produced by a determined radiation dose. Certainly, irradiation effects must show some form of time dependence, because irradiation on the one hand is a time-consuming procedure, and all possibilities of consideration of living things on the other, between birth and death and even before and afterwards, must be related to time. The results of such considerations are sometimes summarized by terms such as "evolution" or "development", specifically historical terms. What we can really do in the observation of a biological system is to look at its topography or at its history. History, however, never returns exactly to former stages, though some phenomena may appear, to an unskilled observer, to be periodic. Development, including every development after irradiation, is therefore the time sequence of irreversible changes.
Keywords: Dose Rate, Fast Electron, Relative Biological Effectiveness, High Dose Rate, Lower Dose Range
The imaging of living cells at the molecular level was barely a dream twenty years ago. Today, however, this dream is close to becoming reality. In the Max Planck Institute for NanoBiophotonics in Göttingen, Stefan Hell (2009 recipient of the Otto Hahn Prize) has developed fluorescence microscopy methods for observing objects on the nanoscale and with his colleagues Vladimir Belov and Christian Eggeling a new series of photostable dyes that can be used as fluorescent markers has been realised, as reported in a cover story in Chemistry—A European Journal.
Over the last two decades Stefan Hell and his group have revolutionized the art of microscopy beyond limits thought to have been unbreakable. Due to the wave properties (diffraction) of light, the resolution of an optical microscope is limited to object details of about 0.2 micrometers. The laws of physics appeared to prohibit imaging details beyond this limit. Stefan Hell saw beyond this limitation and about fifteen years ago his vision became concrete; he developed a method for observing objects at the nanometer scale by sequentially turning the fluorescence of nearby molecules off by stimulated emission, a technique known as STED nanoscopy.
The sensitivity of this technique depends on the brightness of the applied fluorescence markers, and their photostability is also of great importance. The NanoBiophotonics group has succeeded in synthesizing a series of highly photostable and highly fluorescent dyes. These compounds emit green and orange light and are based on fluorine derivatives of the well-known rhodamine dye. The use of these dyes in STED nanoscopy leads to images of high quality with respect to brightness and signal-to-background ratio; furthermore, the resolution is significantly improved over that of more traditional optical microscopes, giving more detailed structural information.
These rhodamine-based fluorine derivatives are even more special because of their versatility. The compounds are available in hydrophilic and lipophilic forms, and with the inclusion of amino reactive groups, they can be easily attached to antibodies or other biomolecules in the course of standard labeling and immunostaining procedures. The group demonstrate that these new dyes are able to cross cellular membranes and reach the interior of living cells, which could lead to new labeling strategies for biological systems. All eyes are now on Göttingen to see just how far optical nanoscopy can go.
Author: Stefan Hell, Max Planck Institute of NanoBiophotonics, Göttingen (Germany), http://www.mpibpc.mpg.de/groups/hell/
Title: New Fluorinated Rhodamines for Optical Microscopy and Nanoscopy
Chemistry - A European Journal 2010, 16, No. 15, 4477–4488, Permalink to the article: http://dx.doi.org/10.1002/chem.200903272
Ants use sun, memories for ‘backward’ walk home
London: Ants, which are famed for their highly developed work ethic, use the sun and memories of their surroundings to find the way home when they walk backward dragging a heavy load, scientists have found.
A study, published in the journal Current Biology, showed that ants’ navigational skills are very sophisticated as when walking backward, they occasionally look behind them to check their surroundings and use this information to set a course relative to the sun’s position.
“In this way, the insects can maintain their course towards the nest regardless of which way they are facing,” the team of researchers from University of Edinburgh, Scotland, found.
“Ants have a relatively tiny brain, less than the size of a pinhead. Understanding their behaviour gives us new insights into brain function, and has inspired us to build robot systems that mimic their functions,” said Professor Barbara Webb of the University of Edinburgh’s School of Informatics.
Ants usually walk forward when they carry small pieces of food, but walk backwards to drag larger items to their nest.
Researchers observed that ants set off in the wrong direction when a mirror was used to alter their perception of the sun’s location.
To ensure they stay on course, backward-walking ants also routinely drop what they are carrying and turn around.
They do this to compare what they see with their visual memories of the route, and correct their direction of travel if they have wandered off course.
The findings suggest ants can understand spatial relations in the external world, not just relative to themselves. | <urn:uuid:0995e071-1e80-4182-a66d-e7ec4eef202e> | 3.578125 | 339 | News Article | Science & Tech. | 40.818912 | 95,610,811 |
especially in the absence of cross-checks by different methods, or if presented without sufficient information to judge the context in which it was obtained.
Isochron methods avoid the problems which can potentially result from both of the above assumptions.
The half-life of a radioactive isotope describes the amount of time that it takes half of the isotope in a sample to decay.
In the case of radiocarbon dating, the half-life of carbon 14 is 5,730 years.
There are over forty such techniques, each using a different radioactive element or a different way of measuring them.
It has become increasingly clear that these radiometric dating techniques agree with each other and as a whole, present a coherent picture in which the Earth was created a very long time ago.
If a sum invested gains a fixed percentage each year, how long will it be before it has doubled in value?
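The doubling question is just exponential growth run in the other direction: at an annual growth rate r, the doubling time is log 2 / log(1 + r). A sketch (the 5% rate below is an arbitrary illustration, not a figure from the text):

```python
import math

def doubling_time(annual_rate: float) -> float:
    """Years for a sum growing at `annual_rate` (e.g. 0.05 for 5%) to double."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.05), 1))  # 14.2 years at 5% per year
```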
When an organism dies it ceases to replenish carbon in its tissues and the decay of carbon 14 to nitrogen 14 changes the ratio of carbon 12 to carbon 14.
Experts can compare the ratio of carbon 12 to carbon 14 in dead material to the ratio when the organism was alive to estimate the date of its death.
Radiocarbon dating can be used on samples of bone, cloth, wood and plant fibers.
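The half-life relation above can be inverted to estimate an age from a measured carbon-14 fraction; a minimal sketch:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years from the fraction of the original C-14 still present."""
    return -HALF_LIFE_C14 * math.log2(fraction_remaining)

print(radiocarbon_age(0.5))          # 5730.0  (one half-life)
print(round(radiocarbon_age(0.25)))  # 11460   (two half-lives)
```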
His Ph D thesis was on isotope ratios in meteorites, including surface exposure dating.
He was employed at Caltech's Division of Geological & Planetary Sciences at the time of writing the first edition. | <urn:uuid:1f2bde5b-1dc9-4548-af6f-df7e074fd395> | 3 | 307 | Knowledge Article | Science & Tech. | 46.256838 | 95,610,813 |
Saharan Dust over the Atlantic
This page contains archived content and is no longer being updated. At the time of publication, it represented the best available science. However, more recent observations and studies may have rendered some content obsolete.
Fierce winds ripped across the Sahara Desert this past weekend and blew a large plume of dust out over the Atlantic Ocean. This true color image of the dust event was acquired on February 11, 2002, by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra spacecraft. The light brown dust trail can be seen forming an arc a few hundred miles off the coast of Western Sahara and Mauritania. Northeasterly winds blowing across the Atlantic have redirected the dust towards Europe, where it will likely settle.
Imagine a new and improved biorefinery, one that produces advanced biofuels as environmentally sustainable as they are economically viable.
In Wisconsin, as in many other parts of the Midwest, we grow a lot of corn – four million acres of it, in fact. That’s four million acres of corn generating $2 billion in economic benefits to the state. Since roughly a quarter of those corn crops are currently used for ethanol production, some of that Wisconsin corn is also finding its way into our gas tanks.
When Jim Steele thinks back over the last seven years, from the early research on biofuels, followed by a dream of moving Lactic Solutions LLC technology to the marketplace, and now the acquisition of the company by Lallemand Biofuels & Distilled Spirits (a unit of Lallemand Inc.) last month, he is thankful for the network of UW–Madison entrepreneurial experts.
Just as sequencing the human genome has netted major health and medical benefits, switchgrass genomics will pay dividends through the development of advanced liquid biofuels.
Versatile, resilient, and high yielding, sorghum has been giving us good ideas for millennia. People in northern Africa first domesticated this flowering grass some 8,000 years ago for a steady supply of porridge. Ben Franklin called sorghum “broom corn” and extolled its efficiency in sweeping up.
Michigan State University researchers are experimenting with harvesting seed oil to make biofuels that could someday power our jets and cars.
In a recent study published in the journal The Plant Cell, the researchers show that the chloroplast, where plant photosynthesis occurs, also participates in new ways to provide seed oil precursors.
With up to $1.8 million in funding from the U.S. Department of Energy (DOE), scientists affiliated with the Great Lakes Bioenergy Research Center (GLBRC) will conduct research with the potential to turn woody biomass into an economical source of renewable chemicals.
Researchers at the Great Lakes Bioenergy Research Center (GLBRC) recently published a study showing that quality of crop yield is less important than quantity when harvesting for cellulosic biofuels.
Earlier this month, the Great Lakes Bioenergy Research Center (GLBRC) received another major boost from the U.S. Department of Energy, receiving more than $250 million to conduct another five years of groundbreaking work on alternative fuels.
The U.S. Department of Energy (DOE) has selected the Great Lakes Bioenergy Research Center (GLBRC) for an additional five years of funding to develop sustainable alternatives to transportation fuels and products currently derived from petroleum. The past recipient of roughly $267 million in DOE funding, the GLBRC represents the largest federal grant ever awarded to UW–Madison.
Could cellulosic biofuels – or liquid energy derived from grasses and wood – become a green fuel of the future, providing an environmentally sustainable way of meeting energy needs? Writing today (Thursday, June 29) in Science, researchers at the U.S. | <urn:uuid:77da9678-b51a-40ee-a2d1-3e3b42615f59> | 2.828125 | 629 | Content Listing | Science & Tech. | 39.726308 | 95,610,840 |
posted by Arowolo
A is a solution of trioxonitrate(V) acid of unknown concentration. B is a standard solution of sodium hydroxide containing 4.00 g per dm³ of solution. A 25 cm³ portion of solution B requires an average of 24 cm³ of solution A for complete neutralization. Write a balanced equation for the reaction and calculate:
A. the concentration of B in moles per dm³
B. the concentration of A in moles per dm³
C. the concentration in grams per dm³ of the trioxonitrate(V) acid in solution A
See your duplicate post. | <urn:uuid:45bbbcb8-f2eb-49d4-b207-2890279328bd> | 2.734375 | 135 | Q&A Forum | Science & Tech. | 52.247586 | 95,610,841 |
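A worked numeric sketch of the calculation (assuming molar masses NaOH = 40 g/mol and HNO₃ = 63 g/mol, and the 1:1 reaction HNO₃ + NaOH → NaNO₃ + H₂O):

```python
M_NAOH, M_HNO3 = 40.0, 63.0           # g/mol (assumed molar masses)

conc_B_g = 4.00                       # given: g/dm3 of NaOH
conc_B_mol = conc_B_g / M_NAOH        # (a) 0.100 mol/dm3

vol_B, vol_A = 25.0, 24.0             # cm3
moles_B = conc_B_mol * vol_B / 1000   # moles NaOH neutralized
moles_A = moles_B                     # 1:1 stoichiometry
conc_A_mol = moles_A * 1000 / vol_A   # (b) ~0.104 mol/dm3
conc_A_g = conc_A_mol * M_HNO3        # (c) ~6.56 g/dm3

print(conc_B_mol, round(conc_A_mol, 4), round(conc_A_g, 2))
```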
Sensible heat is heat exchanged by a body or thermodynamic system in which the exchange of heat changes the temperature of the body or system, and some macroscopic variables of the body or system, but leaves unchanged certain other macroscopic variables of the body or system, such as volume or pressure.
The term is used in contrast to a latent heat, which is the amount of heat exchanged that is hidden, meaning it occurs without change of temperature. For example, during a phase change such as the melting of ice, the temperature of the system containing the ice and the liquid is constant until all ice has melted. The terms latent and sensible are correlative. That means that they are defined as a pair, depending on which other macroscopic variables are held constant during the process.
Sensible heat and latent heat are not special forms of energy. Rather, they describe exchanges of heat under conditions specified in terms of their effect on a material or a thermodynamic system.
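Quantitatively, the sensible heat exchanged by a body of constant composition follows Q = m·c·ΔT, where c is the specific heat capacity. A minimal sketch (4186 J/(kg·K) for liquid water is a standard textbook value):

```python
def sensible_heat(mass_kg: float, specific_heat: float, delta_t: float) -> float:
    """Heat in joules exchanged when temperature changes by delta_t kelvin."""
    return mass_kg * specific_heat * delta_t

C_WATER = 4186.0  # J/(kg*K), liquid water
print(sensible_heat(1.0, C_WATER, 10.0))  # 41860.0 J to warm 1 kg of water by 10 K
```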
In the writings of the early scientists who provided the foundations of thermodynamics, sensible heat had a clear meaning in calorimetry. James Prescott Joule characterized it in 1847 as an energy that was indicated by the thermometer.
Both sensible and latent heats are observed in many processes while transporting energy in nature. Latent heat is associated with changes of state, measured at constant temperature, especially the phase changes of atmospheric water vapor, mostly vaporization and condensation, whereas sensible heat directly affects the temperature of the atmosphere.
In meteorology, the term 'sensible heat flux' means the conductive heat flux from the Earth's surface to the atmosphere. It is an important component of Earth's surface energy budget. Sensible heat flux is commonly measured with the eddy covariance method.
- Thermodynamic databases for pure substances
- Eddy covariance flux (eddy correlation, eddy flux)
- Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, Volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green, and Co., London, pages 155-157.
- Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, pages 22-23.
- Adkins, C.J. (1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1, Section 3.6, pages 43-46.
- Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford, ISBN 0-19-851142-6, page 11.
- J. P. Joule (1884). The Scientific Papers of James Prescott Joule, The Physical Society of London, p. 274: "I am inclined to believe that both of these hypotheses will be found to hold good,—that in some instances, particularly in the case of sensible heat, or such as is indicated by the thermometer, heat will be found to consist in the living force of the particles of the bodies in which it is induced." Lecture on Matter, Living Force, and Heat, May 5 and 12, 1847.
- Stull, R.B. (2000). Meteorology for Scientists and Engineers, second edition, Brooks/Cole, Belmont CA, ISBN 978-0-534-37214-9, page 57. | <urn:uuid:75dd0f49-5e89-4d24-b517-1465907dce3e> | 3.40625 | 724 | Knowledge Article | Science & Tech. | 50.000784 | 95,610,852 |
The Costs of Climate Impacts
The surest prospect for future world climate patterns is that they will differ from present ones. What is uncertain is how much, and exactly in what way in different geographical regions. The anthropogenic CO2 increase will probably exceed the unknown forcing functions of “natural” climate change within 30 to 60 years. It is not unlikely that by AD 2040 the world’s climate, driven by the CO2 increase, will enter a domain warmer than any within the past few million years.
The costs of averting this climate change or of absorbing its impact are likely to be huge, even though today imponderable. Not least among these are intangible and unquantifiable costs associated with changes in human values and the quality of everyday life for future generations.
As Donald Michael and others have said, we live in an emerging society and the evolution of such a society, on a 25–50 year time scale, cannot be predicted from the characteristics of the many components from which it will be built. Thus, economists are powerless to predict, beyond the short term, fiscal policies to stabilize the cost of gold, to predict future population growth rates, or to assess the costs of accommodating to a warmer climate 60 years hence, when the very nature and dynamic interactions of the emergent society are uncertain.
It is probable, however, that food and agriculture will be the sectors most strongly impacted. Better arid land management, increased use of irrigation, water and soil conservation, and the use of yet-to-be-developed genetic strains of food plants and animals will feature among the costly new strategies most promising of benefits. Moreover, these strategies will bring benefits that are entirely independent of their worth in minimizing the impact of climate changes.
Keywords: Climate Impact, Social Innovation, Fast Breeder Reactor, Genetic Strain, Growing Season Rainfall
Modern astronomical research has accumulated an astonishing wealth of knowledge about the universe despite extreme limitations on observation and data collection. Astronomers routinely report detailed information about objects that are trillions of miles away. One of the essential techniques of astronomical investigation involves measuring electromagnetic radiation and performing detailed calculations to determine the temperature of distant objects.
From Temperature to Color
The color of light radiated by a star reveals its temperature, and the temperature of a star determines the temperature of nearby objects such as planets. Light is produced when charged atomic particles vibrate and release energy as light particles, known as photons. Because temperature corresponds to an object's internal energy, hotter objects will emit photons of higher energy. The energy of photons determines the wavelength, or color, of light; thus, the color of light emitted by an object is an indication of temperature. This phenomenon is not observable, however, until an object becomes extremely hot -- about 3,000 degrees Celsius (5,432 degrees Fahrenheit) -- because lower temperatures radiate in the infrared spectrum rather than the visible spectrum.
The concept of a blackbody is essential to measuring the temperature of astronomical objects. A blackbody is a theoretical object that perfectly absorbs energy from all wavelengths of light. In addition, the emission of light from a blackbody is not influenced by the object's composition. This means that a blackbody radiates light according a certain spectrum of colors that depends solely on the temperature of the object. Stars are not ideal blackbodies, but they are close enough to allow for an accurate approximation of temperature based on emission wavelengths.
Many Wavelengths, One Peak
A simple visual observation does not reveal the temperature of a star because temperature determines the peak emission wavelength, not the only emission wavelength. Stars generally appear whitish because their emission spectra cover a wide range of wavelengths, and the human eye interprets a mixture of all colors as white light. Consequently, astronomers use optical filters that isolate certain colors, then they compare the intensities of these isolated colors to determine the approximate peak of a star's emission spectrum.
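The step from peak emission wavelength to temperature is Wien's displacement law, T = b / λ_peak, with b ≈ 2.898 × 10⁻³ m·K. A sketch:

```python
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def temperature_from_peak(wavelength_m: float) -> float:
    """Blackbody temperature implied by the peak emission wavelength."""
    return WIEN_B / wavelength_m

# The Sun's spectrum peaks near 500 nm:
print(round(temperature_from_peak(500e-9)))  # ~5796 K
```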
Warmed by a Star
Planetary temperatures are more difficult to determine because the absorption and emission characteristics of a planet may not be adequately similar to the absorption and emission characteristics of a blackbody. A planet's atmosphere and surface materials can reflect significant amounts of light, and some of the absorbed light energy is retained by the greenhouse effect. Consequently, astronomers estimate the temperature of a distant planet through complex calculations that account for such variables as the temperature of the nearest star, the planet's distance from the star, the percentage of light that is reflected, the composition of the atmosphere and the planet's rotational characteristics. | <urn:uuid:5ecdcaa8-01bf-43b4-a16f-6af4d6c8ece9> | 4.3125 | 537 | Knowledge Article | Science & Tech. | 8.881182 | 95,610,855 |
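A common simplified version of those calculations is the equilibrium temperature of a rapidly rotating planet with no greenhouse effect: T_eq = T_star · sqrt(R_star / 2d) · (1 − A)^(1/4), where A is the albedo (the reflected fraction). The solar and terrestrial values below are standard approximations, used only for illustration:

```python
def equilibrium_temperature(t_star: float, r_star: float,
                            distance: float, albedo: float) -> float:
    """Blackbody equilibrium temperature of a planet (no greenhouse effect)."""
    return t_star * (r_star / (2.0 * distance)) ** 0.5 * (1.0 - albedo) ** 0.25

# Earth: T_sun ~5778 K, R_sun ~6.957e8 m, d ~1.496e11 m, albedo ~0.3
print(round(equilibrium_temperature(5778, 6.957e8, 1.496e11, 0.3)))  # ~255 K
```

The ~255 K result is well below Earth's observed mean surface temperature, which illustrates why the greenhouse-effect term matters in the fuller calculations the article describes.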
However, understanding—and ultimately controlling—this effect remains a challenge, because much about manganite physics is still not known. A research team led by Maria Baldini from Stanford University and Carnegie Geophysical Laboratory scientists Viktor Struzhkin and Alexander Goncharov has made an important breakthrough in our understanding of the mysterious ways manganites respond when subjected to intense pressure.
At ambient conditions, manganites have insulating properties, meaning they do not conduct electric charges. When pressure of about 340,000 atmospheres is applied, these compounds change from an insulating state to a metallic state, which easily conducts charges. Scientists have long debated about the trigger for this change in conductivity.
The research team's new evidence, to be published online by Physical Review Letters on Friday, shows that for the manganite LaMnO3, this insulator-to-metal transition is strongly linked to a phenomenon called the Jahn-Teller effect. This effect actually causes a unique distortion of the compound's structure. The team's measurements were carried out at the Geophysical Laboratory.
Counter to expectations, the Jahn-Teller distortion is observed until LaMnO3 is in a non-conductive insulating state. Therefore, it is reasonable to believe that the switch from insulator to metal occurs when the distortion is suppressed, settling a longstanding debate about the nature of manganite insulating state. The formation of inhomogeneous domains—some with and some without distortion—was also observed. This evidence suggests that the manganite becomes metallic when the breakdown of undistorted to distorted molecules hits a critical threshold in favor of the undistorted.
"Separation into domains may be a ubiquitous phenomenon at high pressure and opens up the possibility of inducing colossal magnetoresistance by applying pressure" said Baldini, who was with Stanford at the time the research was conducted, but has now joined Carnegie as a research scientist.
Some of the researchers were supported by various grants from the Department of Energy, Office of Science and National Nuclear Security Administration. Some of the experiments were supported by DOE and Carnegie Canada.
The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S. Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.
Maria Baldini | EurekAlert!
March will roar in like a lion today over the Southern U.S., with another volley of strong EF2 and EF3 tornadoes possible over portions of Louisiana, Arkansas, and Mississippi. The Storm Prediction Center has placed this region under its "Moderate Risk" target today--one level below the maximum "High Risk" threat level. The powerful low pressure system responsible for today's severe weather developed over Texas and Oklahoma on Sunday, spawning at least two tornadoes and hail up to 4.25" in diameter. The huge hail was reported in Buffalo, near the Kansas border.

The intensifying low pressure system will move through Louisiana, Arkansas, and Mississippi today, dragging a cold front through those states. This front has already spawned severe thunderstorms across eastern Texas and southern Arkansas this morning, with many reports of damaging thunderstorm winds. The tornado page is a good place to track the tornadoes as they occur today, along with our severe weather page.

2008 sets early tornado season records
The year 2008 smashed the record for most January and February tornadoes, with 368. The previous record was set in 1999, with 235 January/February tornadoes. Reliable tornado records extend back to 1950. The 232 tornadoes reported in February of 2008 was a record for the month of February. Second place goes to 1971, with a relatively paltry 83 tornadoes. Each of the past three years has seen an unusually early start to tornado season (Figure 1). One would expect to see a shift in tornado activity earlier in the year in a warming climate, along with an earlier than usual drop off in activity in late spring. We can see that in both 2005 and 2006 tornado activity dropped off much earlier than usual, and it will be interesting to see if 2008 follows a similar pattern. Note that there is a very high natural variability in tornado numbers, and the record for fewest ever January and February tornadoes was set just six years ago in 2002, when only four twisters occurred. It will be at least ten more years before we can say with any confidence that a warming climate is leading to an earlier peak in tornado season. There does seem to be a tendency for more early season tornadoes during La Niña years--four of the five years with January/February tornado counts of 75 or above were La Niña years (1971, 1975, 1999, and 2008). The only exception was 1998, which was an El Niño year, and had 118 January/February tornadoes.

Figure 1. Tornado reports so far this year have totaled 368 for the months of January and February, by far the greatest number of tornadoes observed so early in the year. Image credit: NOAA Storm Prediction Center.

Potential South Atlantic subtropical storm fizzles

Last Friday, I mentioned the possibility of a rare subtropical storm forming in the South Atlantic off the coast of Brazil. It turned out that the candidate low pressure system wrapped a lot of dry air into its center, killing any chance it had to become a subtropical storm. There are still some vigorous tropical-looking thunderstorms firing up off the coast of Brazil this morning, and it is worth continuing to watch this region for formation of a subtropical storm. A separate low pressure system in southern Brazil has brought a wide variety of severe weather to Brazil, Argentina, and Uruguay over the past few days, as documented in the metsul.com weather blog (for those of you who read Portuguese). For those of you who don't, try the web page translator at http://babelfish.altavista.com/babelfish/tr.
WASHINGTON (AP) -- Everything had to come together just perfectly to create the killer tornado in Moore, Okla.: wind speed, moisture in the air, temperature and timing. And when they did, the awesome energy released over that city dwarfed the power of the atomic bomb that leveled Hiroshima.
On Tuesday, the National Weather Service gave it the top-of-the-scale rating of EF5, based on wind speed, breadth, and severity of damage. Wind speeds were estimated at between 200 and 210 mph. The death count is 24 so far, including at least nine children. The United States averages about one EF5 a year, but this was the first in nearly two years.
To get such an uncommon storm to form is "a bit of a Goldilocks problem," said Pennsylvania State University meteorology professor Paul Markowski. "Everything has to be just right."
For example, there must be humidity for a tornado to form, but too much can cut the storm off. The same goes with the cold air in a downdraft: Too much can be a storm-killer.
But when the ideal conditions do occur, watch out. The power of nature beats out anything man can create.
"Everything was ready for explosive development yesterday," said Colorado State University meteorology professor Russ Schumacher, who was in Oklahoma launching airborne devices that measured the energy, moisture and wind speeds on Monday. "It all just unleashed on that one area."
Several meteorologists contacted by The Associated Press used real time measurements, some made by Schumacher, to calculate the energy released during the storm's 40-minute life span. Their estimates ranged from 8 times to more than 600 times the power of the Hiroshima bomb, with more experts at the high end. Their calculations were based on energy measured in the air and then multiplied over the size and duration of the storm.
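The back-of-the-envelope arithmetic behind such estimates can be sketched as follows. Every storm input below (CAPE, updraft size and speed, air density) is an illustrative assumption rather than a measured value from the Moore storm; the sensitivity of the answer to those assumptions is exactly why the experts' estimates spanned two orders of magnitude.

```python
# Back-of-envelope estimate of the energy released by a supercell,
# compared with the Hiroshima bomb (~15 kilotons of TNT).
# Every storm input here is an illustrative assumption, not a
# measurement from the Moore tornado.
import math

CAPE = 4000.0            # J/kg of moist energy in the inflow air (assumed)
UPDRAFT_RADIUS = 2.0e3   # m, radius of the storm's main updraft (assumed)
W = 50.0                 # m/s, mean updraft speed (assumed)
RHO = 0.5                # kg/m^3, mid-level air density (assumed)
DURATION = 40 * 60.0     # s, the storm's 40-minute life span (from the article)

area = math.pi * UPDRAFT_RADIUS ** 2   # updraft cross-section, m^2
mass_flux = RHO * W * area             # kg of air processed per second
energy = CAPE * mass_flux * DURATION   # J released over the storm's life

HIROSHIMA = 15 * 4.184e12              # J (1 kiloton of TNT = 4.184e12 J)
print(f"storm energy ~ {energy:.1e} J, about {energy / HIROSHIMA:.0f}x Hiroshima")
```

With these particular guesses the result lands a few tens of times the Hiroshima yield, comfortably inside the 8x-600x range the meteorologists quoted; halving or doubling any one input moves the answer proportionally.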
An EF5 tornado has the most violent winds on Earth, more powerful than a hurricane. The strongest winds ever measured came in a 302 mph radar reading during the EF5 tornado that struck Moore on May 3, 1999, according to Jeff Masters, meteorology director at the Weather Underground.
Still, when it comes to weather events, scientists usually know more about and can better predict hurricanes, winter storms, heat waves and other big events.
That's because even though a tornado like the one that struck Moore was 1.3 miles wide, with a path 17 miles long, in meteorological terms it was small, hard to track, rare and even harder to study. So tornadoes are still more of a mystery than their hurricane cousins, even though tropical storms form over ocean areas where no one is, while this tornado formed only miles from the very National Weather Service office that specializes in tornadoes.
"This phenomenon can be so deadly you would think that something that catastrophic, that severe would lend itself to understanding," said Adam Houston, meteorology professor at the University of Nebraska, Lincoln. "But we're fighting the inherent unpredictability of these small-scale phenomena."
Unlike hurricanes, which forecasters can fly through in planes and monitor with buoys and weather stations, usually over a period of days, tornadoes form quickly and normally last only a matter of minutes. While meteorologists and television hosts chase tornadoes and try to get readings, it's not usually enough. This storm lasted 40 minutes -- long for a regular tornado but not too unusual for such a violent one, said research meteorologist Harold Brooks at the National Severe Storms Laboratory in Norman, Okla.
Still, the conditions needed to form such a violent and devastating tornado were there and forecasters knew it, warning five days in advance that something big could happen, Brooks said.
By Monday morning, forecasters at the National Weather Center, home of the storm lab and storm prediction center, knew "that any storm that formed in that environment had the potential to be a strong to violent tornado," he said.
"This is a pretty classic setup," Brooks said.
Tornadoes have two main ingredients: moist energy in the atmosphere and wind shear. Wind shear is the difference between wind at high altitudes and wind near the surface. The more moist energy and the greater the wind shear, the better the chances for tornadoes.
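The wind-shear ingredient has a simple quantitative sketch: "bulk shear" is the magnitude of the vector difference between the near-surface wind and the wind aloft, commonly taken at 6 km. The wind values below are invented examples for illustration, not soundings from this storm.

```python
# Minimal sketch of "bulk wind shear": the magnitude of the vector
# difference between the near-surface wind and the wind aloft
# (commonly taken at 6 km). The winds below are invented examples,
# not soundings from the Moore storm.
import math

def bulk_shear(u_sfc, v_sfc, u_aloft, v_aloft):
    """Magnitude (m/s) of the vector wind difference between two levels."""
    return math.hypot(u_aloft - u_sfc, v_aloft - v_sfc)

# Example: southerly 10 m/s at the surface, a strong westerly jet at 6 km.
shear = bulk_shear(u_sfc=0.0, v_sfc=10.0, u_aloft=30.0, v_aloft=15.0)
print(f"0-6 km bulk shear ~ {shear:.1f} m/s")
# As a rough rule of thumb, 0-6 km shear above ~20 m/s favors supercells.
```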
But just because the conditions are right doesn't mean a violent tornado will form, and scientists still don't know why they occur in certain spots in a storm and not others, and why at certain times and not others.
On Monday, the moist energy came up from the Gulf of Mexico, the wind shear from the jet stream plunging from Canada. "Where they met is where the Moore storm got started," Brooks said.
With the third strong storm hitting Moore in 14 years -- and following roughly the same path as an EF5 that killed 40 people in 1999 and an EF4 that injured 45 others in 2003 -- some people are wondering: why Moore?
It's a combination of geography, meteorology and lots of bad luck, experts said.
If you look at the climate history of tornadoes in May, you will see they cluster in a spot, maybe 100 miles wide, in central Oklahoma, Houston said. That's where the weather conditions of warm, moist air and strong wind shear needed for tornadoes combine, in just the right balance.
"Central Oklahoma is a hot spot and there's a good reason for it," Houston said. "There's this perfect combination where the jet stream is strong, the instability is large and the typical position for this juxtaposition climatologically is central Oklahoma."
And the timing has to be perfect. Earlier in the year, there's not enough warm moist air, but the jet stream is stronger. Later, the jet stream is weaker but the air is moister and warmer.
The hot spot is more than just the city of Moore. Several meteorologists offer the same explanation for why that Oklahoma City suburb seemed to be hit repeatedly by violent tornadoes: Bad luck.
Of the 60 EF5 tornadoes since 1950, Oklahoma and Alabama have been struck the most, seven times each. More than half of these top-of-the-scale twisters are in just five states: Oklahoma, Alabama, Texas, Kansas, and Iowa. Less than 1 percent of all U.S. tornadoes are this violent -- only about 10 a year, Brooks said.
The United States' Great Plains is the "best place on Earth" for the formation of violent tornadoes because of geography, Markowski said. You need the low pressure systems coming down off the Rocky Mountains colliding with the warm moist unstable air coming north from the Gulf of Mexico.
Scientists know the key ingredients that go into a devastating tornado. But they are struggling to figure out why they develop in some big storms and not others. They also are still trying to determine what effects, if any, global warming has on tornadoes. The jet stream can shift to cause a record number of tornadoes -- or an unusually low number of them.
Early research, much of it by Brooks, predicts that as the world warms, the moist energy -- or instability -- will increase, and the U.S. will have more thunderstorms. But at the same time, the needed wind shear -- the difference between wind speed and direction at different altitudes -- will likely decrease.
The two factors go in different directions and it's hard to tell which will win out. Brooks and others think that eventually there may be more thunderstorms and fewer days with tornadoes, but more tornadoes on those days when twisters do strike.
"Tornadoes are perhaps the most difficult things to connect to climate change of any extreme," said NASA climate scientist Tony Del Genio. "Because we still don't understand all the factors required to get a tornado." | <urn:uuid:5c7b72c1-19da-4f2e-af33-f55c4c63e9d5> | 3 | 1,654 | News Article | Science & Tech. | 51.617688 | 95,610,885 |
Accurately forecasting rain will be easier thanks to new insights into clouds from the University of Leeds, UCL (University College London) and others. Details of a new model for predicting cloud and rain-formation are published today in the Proceedings of the Royal Society (10 August 2005).
Existing forecasting models – including ones used by the UK's Meteorological Office – assume rain droplets fall through still air within a cloud. However, there is turbulence within clouds that can speed up droplet settling and increase the likelihood of rain.
The international team developed a new mathematical model and showed for the first time how pockets of whirling air (tiny eddies) encourage collisions between very small droplets (about 1/1000 of a cm) and slightly larger droplets within a cloud. The collisions lead to rapid growth of the larger drops beyond a critical size of 20 microns (1 micron is a millionth of a metre). This size is necessary for rain to form, fall out of the clouds and, when conditions are right, reach the ground.
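To see why droplet size matters so much in the still-air picture that the older models assume, here is a Stokes-law estimate of terminal fall speed for small droplets. This is standard textbook physics, not the team's turbulence model; the droplet radii are examples.

```python
# Stokes-law terminal fall speed of a cloud droplet in still air --
# the "still air" picture the older forecasting models assume.
# Constants are standard textbook values; droplet radii are examples.

G = 9.81         # m/s^2, gravitational acceleration
RHO_W = 1000.0   # kg/m^3, density of water (air density neglected)
MU_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air

def stokes_fall_speed(radius_m):
    """Terminal settling speed (m/s) of a small droplet, Stokes regime."""
    return 2.0 * radius_m ** 2 * G * RHO_W / (9.0 * MU_AIR)

for r_um in (1, 10, 20):  # note: 1/1000 of a cm = 10 micrometres
    v = stokes_fall_speed(r_um * 1e-6)
    print(f"r = {r_um:2d} um -> {100 * v:.3f} cm/s")
# Speed scales with radius squared, so doubling the radius quadruples
# the fall speed -- small size differences are what drive collisions.
```

The quadratic scaling means slightly larger drops overtake and sweep up their smaller neighbors; the new model's contribution is showing how turbulent eddies make those encounters more frequent.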
Dr Sat Ghosh
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
posted by Da Fash
Thanks for checking my question out!
4. Water is a compound because it (1 point)
a) can be broken down into simpler substances.
b) always has two hydrogen atoms for each oxygen atom.
c) is made of water atoms joined together.
d) both the first and second choices above
My Answer: D
Could someone please check if I'm right?
Compound: substance made of two or more simpler substances, which can be broken down into those simpler substances.
- Da Fash | <urn:uuid:c65e70a3-078d-4dd4-9475-e979a05313c7> | 3.1875 | 114 | Q&A Forum | Science & Tech. | 69.769286 | 95,610,906 |
Last Updated: Nov 19, 2016 Views: 45
The Corning Museum of Glass website includes some brief information about telescopes and lenses. http://www.cmog.org/article/hale-reflecting-telescope-palomar
Some other useful websites for studying lenses are:
Fun Science Gallery: FROM LENSES TO OPTICAL INSTRUMENTS
by Giorgio Carboni; English version revised by David W. Walker
How Stuff Works website: How Telescopes Work by Craig C. Freudenrich, Ph.D.
You can search the "How stuff works" website to find additional information about lenses and making telescopes, for example.
The "Try Science" website provides links to telescope sites: http://search.tryscience.org/cgi-bin/query?mss=search&stq=0&pg=aq&kl=en&uil=eniso&q=telescope%20making*&i=allsites&custom2=all
Or search the Try Science... website: http://search.tryscience.org/ and fill in the search with < telescope making > for example.
These books could be useful:
Miller, Robert and Kenneth Wilson. Making & enjoying telescopes : 6 complete projects & a stargazer's guide. New York : Sterling Pub., Co., 1995. 160 p.
Panek, Richard. Seeing and believing : how the telescope opened our eyes and minds to the heavens . New York : Viking, 1998. 198 p.
Villard, Ray. Large telescopes : inside and out / by Ray Villard ; illustrations, Alessandro Bartolozzi, Leonello Calvetti, Lorenzo Cecchi. New York : PowerPlus Books, c2002. 47 p. Includes bibliographical references (p. 46) and index. Technology--blueprints of the future. (Juvenile book)
Bedini, Silvio A. Science and instruments in seventeenth-century Italy. Aldershot, Hampshire, Great Britain ; Brookfield, Vt., USA : Variorum, 1994. 1 v. (various pagings). This book is a reprint of articles and book chapters by various authors. Some of the instruments discussed are telescopes, microscopes, and lenses. Includes bibliographical references and index.
I am sending a brief bibliography of recently published juvenile books -- some include information about lenses and telescopes.
In a search of the "Swan [library] Online Catalog" I found a great variety of juvenile books relating to telescopes. Try searching their catalog to find additional titles: http://swan.sls.lib.il.us/search brings up the search screen; I used < telescopes juvenile > as keywords.
Viral Message: Volcanoes Main Cause Of Global Warming and NOT Man-made Carbon Emissions - NEW Study
eMessage on – Social media and the internet

Fact Check by Ayupp – Fake
Viral Message Example –
Volcanoes Main Cause Of Global Warming and NOT Man-made Carbon Emissions - NEW Study
Climate scientists have discovered that the main cause of global warming is actually volcanoes and not man-made carbon emissions as previously thought.
A new study by the University of Alabama-Huntsville found that studies used by other scientists to justify the man-made global warming theory made a glaring oversight: Their data completely ignored volcanoes.
Conservativetribune.com reports: According to The Daily Caller, when scientists John Christy and Richard McNider re-calibrated satellite temperature data to remove the effects of naturally occurring volcanic eruptions, they found something stunning: The rate of global warming has been nearly unchanged in the last 30 years.
“We indicated 23 years ago — in our 1994 Nature article — that climate models had the atmosphere’s sensitivity to CO2 much too high,” Christy said in a statement. “This recent paper bolsters that conclusion.”
Volcanoes can dramatically influence the earth’s climate, but sometimes in ways you might not expect. We often imagine volcanoes as sources of heat, but when they erupt, the ash they spew can also act as an atmospheric layer that actually helps keep the earth cool.
“Two major volcanoes — El Chichon in 1982 and Pinatubo in 1991 — caused global average temperature to dip as a result of volcanic ash, soot and debris reflecting sunlight back into space,” The Daily Caller noted.
Think of the volcanic ash as a windshield sun reflector that protects a car on a hot day. Particles thrown into the sky by large volcanoes shield the earth from the sun’s rays, and the earth stays cooler as a result.
The University of Alabama scientists believe that this is essentially what happened 30 years ago. Because those two significant 1982 and 1991 volcanoes occurred right around the time a major global warming study began, the eruptions skewed the data and made the result look more dramatic than it truly was.
“Those eruptions happened relatively early in our study period, which pushed down temperatures in the first part of the dataset, which caused the overall record to show an exaggerated warming trend,” Christy said.
“While volcanic eruptions are natural events, it was the timing of these that had such a noticeable effect on the trend.”
“If the same eruptions had happened near the more recent end of the dataset, they could have pushed the overall trend into negative numbers, or a long-term cooling,” he said.
That means that current climate models that predict dramatic global warming are most likely wrong, or at least highly exaggerated. They include temperature spikes that appear higher than they truly are because volcanic eruptions created a false baseline.
As soon as those errors are removed, the global warming rate is suddenly very steady over the last three decades and does not seem to be dramatically climbing. Of course, that doesn’t help push massive economic takeovers or pad Al Gore’s pockets the same way as alarmist panic.
Perhaps the real takeaway of this recent study is that science is constantly changing and being revised.
If the experts didn’t realize that volcanoes were affecting conclusions for 30 years, what other “settled science” is actually based on false conclusions?
In truth, science is never settled. Scientific facts are not a democracy, and a false “consensus” doesn’t actually impact reality any more than hundreds of scientists claiming the sun orbits the earth would somehow make it true.
Good scientists are always questioning and revising. When a group seems openly hostile to questions and angrily bitter against skeptics, the odds are they are on the wrong side of reality.
Viral Message Verification – The article was published by a number of sites that often produce satire and fictional stories. It has appeared on sites like http://www.collective-evolution.com, yournewswire.com and several other websites known for publishing fabricated news. We did a fact-check analysis to determine whether volcanoes are really the major cause of global warming; our research shows the claim published on these sites is wrong.
As per Nasa,
- Most climate scientists agree that the main cause of the current global warming trend is human expansion of the "greenhouse effect" — warming that results when the atmosphere traps heat radiating from Earth toward space.
Gases that contribute to the greenhouse effect include:
- Water vapor. The most abundant greenhouse gas, but importantly, it acts as a feedback to the climate. Water vapor increases as the Earth's atmosphere warms, but so does the possibility of clouds and precipitation, making these some of the most important feedback mechanisms to the greenhouse effect.
- Carbon dioxide (CO2). A minor but very important component of the atmosphere, carbon dioxide is released through natural processes such as respiration and volcano eruptions and through human activities such as deforestation, land use changes, and burning fossil fuels. Humans have increased atmospheric CO2 concentration by more than a third since the Industrial Revolution began. This is the most important long-lived "forcing" of climate change. So, per NASA's own analysis, volcanic eruptions are only a minor contributor to the rise in atmospheric CO2.
- Methane. A hydrocarbon gas produced both through natural sources and human activities, including the decomposition of wastes in landfills, agriculture, and especially rice cultivation, as well as ruminant digestion and manure management associated with domestic livestock. On a molecule-for-molecule basis, methane is a far more active greenhouse gas than carbon dioxide, but also one which is much less abundant in the atmosphere.
- Nitrous oxide. A powerful greenhouse gas produced by soil cultivation practices, especially the use of commercial and organic fertilizers, fossil fuel combustion, nitric acid production, and biomass burning.
- Chlorofluorocarbons (CFCs). Synthetic compounds entirely of industrial origin used in a number of applications, but now largely regulated in production and release to the atmosphere by international agreement for their ability to contribute to destruction of the ozone layer. They are also greenhouse gases.
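As a quick arithmetic check on the "more than a third" figure above: using the commonly cited pre-industrial baseline of about 280 ppm and a recent value of about 405 ppm (both standard reference numbers assumed here for illustration, not stated in the message itself):

```python
# Quick arithmetic check of the "more than a third" CO2 increase.
# 280 ppm (pre-industrial) and ~405 ppm (mid-2010s) are standard
# reference values assumed here for illustration.

pre_industrial = 280.0  # ppm
recent = 405.0          # ppm
increase = (recent - pre_industrial) / pre_industrial
print(f"CO2 concentration up {increase:.0%}")  # comfortably more than a third
assert increase > 1 / 3
```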
On Earth, human activities are changing the natural greenhouse. Over the last century the burning of fossil fuels like coal and oil has increased the concentration of atmospheric carbon dioxide (CO2).
The consequences of changing the natural atmospheric greenhouse are difficult to predict, but certain effects seem likely:
- On average, Earth will become warmer. Some regions may welcome warmer temperatures, but others may not.
- Warmer conditions will probably lead to more evaporation and precipitation overall, but individual regions will vary, some becoming wetter and others dryer.
- Meanwhile, some crops and other plants may respond favorably to increased atmospheric CO2, growing more vigorously and using water more efficiently. At the same time, higher temperatures and shifting climate patterns may change the areas where crops grow best and affect the makeup of natural plant communities.
Source: climate nasa cause
You may also like to read our latest analysed news:
- Was Sare Jahan Se Achha played by Russian band at FIFA World Cup, Moscow
- Fake news check: Kashmiri terrorists said that RSS supplied weapons and money to them
- No it is not Varanasi roads but Nanpu Bridge, Shanghai
- Yes Did Rahul Gandhi said congress is a party of Muslims
Note: If you like our work, don't forget to share and like ayupp.com on facebook, twitter. | <urn:uuid:a40aa1b0-11dd-421d-ae4c-c69b4641ceb6> | 3.0625 | 1,604 | News Article | Science & Tech. | 30.482613 | 95,610,917 |
Temporal range: Ordovician to present (488–0 Ma)
Common brittlestar (Ophiura ophiura)
Brittle stars or ophiuroids are echinoderms in the class Ophiuroidea closely related to starfish. They crawl across the sea floor using their flexible arms for locomotion. The ophiuroids generally have five long, slender, whip-like arms which may reach up to 60 cm (24 in) in length on the largest specimens. They are also known as serpent stars; the New Latin class name Ophiuroidea is derived from the Ancient Greek ὄφις, meaning "serpent".
The Ophiuroidea contain two large clades, Ophiurida (brittle stars) and Euryalida (basket stars). Over 2,000 species of brittle stars live today. More than 1200 of these species are found in deep waters, greater than 200 m deep.
The ophiuroids diverged in the Early Ordovician, about 500 million years ago. Ophiuroids can be found today in all of the major marine provinces, from the poles to the tropics. Basket stars are usually confined to the deeper parts of this range; Ophiuroids are known even from abyssal (>6000 m) depths. However, brittle stars are also common members of reef communities, where they hide under rocks and even within other living organisms. A few ophiuroid species can even tolerate brackish water, an ability otherwise almost unknown among echinoderms. A brittle star's skeleton is made up of embedded ossicles.
The relationships among ophiuroids and all other echinoderms provide an enduring problem in invertebrate evolution. Developmental and other studies based on modern organisms imply that the Asteroidea and ophiuroids are not closely related within the echinoderms. Stenurid morphology, in contrast, suggests a close common ancestry for the two; the nature of the ambulacral plates is important, but even their general form is transitional.
This is a Paleozoic (Ordovician–Devonian) order, bearing a double row of plates (ambulacra) that abut across the arm axis either directly opposite one another or slightly offset. In contrast, modern ophiuroids have a single series of axial arm plates termed vertebrae. In stenurids, as in modern ophiuroids, lateral plates are present at the sides of ambulacrals, and prominent lateral spines are typical. Stenurids lack the dorsal and ventral arm shields that are found in most ophiuroids. Proximal ambulacral pairs can be partially separated, forming a buccal slit, an expansion of the mouth frame. The arms of some stenurids are slender and flexible, but those of others are broad and comparatively stiff. The central disk varies from little larger than the juncture of the arms to an expansion that extends most of the length of the arms. The content of the order is poorly established, and fewer than 10 genera are known.
Of all echinoderms, the Ophiuroidea may have the strongest tendency toward five-segment radial (pentaradial) symmetry. The body outline is similar to that of starfish, in that ophiuroids have five arms joined to a central body disk. However, in ophiuroids, the central body disk is sharply marked off from the arms.
The disk contains all of the viscera. That is, the internal organs of digestion and reproduction never enter the arms, as they do in the Asteroidea. The underside of the disk contains the mouth, which has five toothed jaws formed from skeletal plates. The madreporite is usually located within one of the jaw plates, and not on the upper side of the animal as it is in starfish.
The ophiuroid coelom is strongly reduced, particularly in comparison to other echinoderms.
The vessels of the water vascular system end in tube feet. The water vascular system generally has one madreporite. Others, such as certain Euryalina, have one per arm on the aboral surface. Still other forms have no madreporite at all. Suckers and ampullae are absent from the tube feet.
The nervous system consists of a main nerve ring which runs around the central disk. At the base of each arm, the ring attaches to a radial nerve which runs to the end of the limb. The nerves in each limb run through a canal at the base of the vertebral ossicles.
Most ophiuroids have no eyes or other specialised sense organs. However, they have several types of sensitive nerve endings in the epidermis, and are able to sense chemicals in the water, touch, and even the presence or absence of light. The tube feet, especially those at the ends of the arms, can also sense light as well as odors, allowing the animal to detect light and retreat into crevices.
The mouth is rimmed with five jaws, and serves as an anus (egestion) as well as a mouth (ingestion). Behind the jaws is a short esophagus and a large, blind stomach cavity which occupies much of the dorsal half of the disk. Ophiuroids have neither a head nor an anus. Digestion occurs within 10 pouches or infolds of the stomach, which are essentially ceca, but unlike in sea stars, almost never extend into the arms. The stomach wall contains glandular hepatic cells.
Ophiuroids are generally scavengers or detritivores. Small organic particles are moved into the mouth by the tube feet. Ophiuroids may also prey on small crustaceans or worms. Basket stars in particular may be capable of suspension feeding, using the mucus coating on their arms to trap plankton and bacteria. They extend one arm out and use the other four as anchors. Brittle stars will eat small suspended organisms if available. In large, crowded areas, brittle stars eat suspended matter from prevailing seafloor currents.
In basket stars, the arms are used to rhythmically sweep food to the mouth. Pectinura consumes beech pollen in the New Zealand fjords (since those trees hang over the water). Eurylina clings to coral branches to browse on the polyps.
Gas exchange and excretion occur through cilia-lined sacs called bursae; each opens between the arm bases on the underside of the disk. Typically ten bursae are found, and each fits between two stomach digestive pouches. Water flows through the bursae by means of cilia or muscular contraction. Oxygen is transported through the body by the hemal system, a series of sinuses and vessels distinct from the water vascular system.
Like all echinoderms, the Ophiuroidea possess a skeleton of calcium carbonate in the form of calcite. In ophiuroids, the calcite ossicles are fused to form armor plates which are known collectively as the test. The plates are covered by the epidermis, which consists of a smooth syncytium. In most species, the joints between the ossicles and superficial plates allow the arm to bend to the side, but not to bend upwards. However, in the basket stars, the arms are flexible in all directions.
Both the Ophiurida and Euryalida (the basket stars) have five long, slender, flexible, whip-like arms, up to 60 cm in length. They are supported by an internal skeleton of calcium carbonate plates referred to as vertebral ossicles. These "vertebrae" articulate by means of ball-in-socket joints, and are controlled by muscles. They are essentially fused plates which correspond to the parallel ambulacral plates in sea stars and five Paleozoic families of ophiuroids. In modern forms, the vertebrae occur along the median of the arm.
The ossicles are surrounded by a relatively thin ring of soft tissue, and then by four series of jointed plates, one each on the upper, lower, and lateral surfaces of the arm. The two lateral plates often have a number of elongated spines projecting outwards; these help to provide traction against the substrate while the animal is moving. The spines, in ophiuroids, compose a rigid border to the arm edges, whereas in euryalids they are transformed into downward-facing clubs or hooklets. Euryalids are similar to ophiurids, if larger, but their arms are forked and branched. Ophiuroid podia generally function as sensory organs. They are not usually used for feeding, as in Asteroidea. In the Paleozoic era, brittle stars had open ambulacral grooves, but in modern forms, these are turned inward.
In living ophiuroids, the vertebrae are linked by well-structured longitudinal muscles. Ophiuroida move horizontally, while Euryalina species move vertically. The latter have bigger vertebrae and smaller muscles. They are less spasmodic, but can coil their arms around objects, holding on even after death. These movement patterns are distinct to the taxa, separating them. Ophiuroida move quickly when disturbed. One arm presses ahead, whereas the other four act as two pairs of opposite levers, thrusting the body forward in a series of rapid jerks. Although adults do not use their tube feet for locomotion, very young stages use them as stilts, and the tube feet even serve as an adhesive structure.
The sexes are separate in most species, though a few are hermaphroditic or protandric. The gonads are located in the disk, and open into pouches between the arms, called genital bursae. Fertilisation is external in most species, with the gametes being shed into the surrounding water through the bursal sacs. An exception is the Ophiocanopidae, in which the gonads do not open into bursae and are instead paired in a chain along the basal arm joints.
Many species brood developing larvae in the bursae, effectively giving birth to live young. A few, such as Amphipholis squamata, are truly viviparous, with the embryo receiving nourishment from the mother through the wall of the bursa. However, some species do not brood their young, and instead have a free-swimming larval stage. This larva, referred to as an ophiopluteus, has four pairs of rigid arms lined with cilia. It develops directly into an adult, without the attachment stage found in most starfish larvae. Fewer species exhibit ophiopluteus larvae than develop directly.
In a few species, the female carries a dwarf male, which clings to her with its mouth.
Brittle stars generally sexually mature in two to three years, become full grown in three to four years, and live up to 5 years. Euryalina, such as Gorgonocephalus, may well live much longer.
Ophiuroids can readily regenerate lost arms or arm segments unless all arms are lost. Ophiuroids use this ability to escape predators, in a way similar to lizards which deliberately shed the distal part of their tails to confuse pursuers. Moreover, the Amphiuridae can regenerate gut and gonad fragments lost along with the arms. Discarded arms have not been shown to have the ability to regenerate.
Some brittle stars, such as the six-armed members of the family Ophiactidae, exhibit fissiparity (division through fission), with the disk splitting in half. Regrowth of both the lost part of the disk and the arms occurs, yielding an animal with three large arms and three small arms during the period of growth.
The West Indian brittle star, Ophiocomella ophiactoides, frequently undergoes asexual reproduction by fission of the disk with subsequent regeneration of the arms. In both summer and winter, large numbers of individuals with three long arms and three short arms can be found. Other individuals have half a disk and only three arms. A study of the age range of the population indicates little recruitment and fission is the primary means of reproduction in this species.
In this species, fission appears to start with the softening of one side of the disk and the initiation of a furrow. This deepens and widens until it extends across the disk and the animal splits in two. New arms begin to grow before the fission is complete, thus minimizing the time between possible successive divisions. The plane of fission varies so that some newly formed individuals have existing arms of different lengths. The time period between successive divisions is 89 days, so theoretically, each brittle star can produce 15 new individuals during the course of a year.
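The yearly figure quoted above follows from simple doubling arithmetic: with one fission every 89 days, an individual completes four divisions in a 365-day year, so a line starting from one animal grows to sixteen. A minimal sketch of that calculation:

```python
# Doubling by fission every 89 days, starting from a single individual.
FISSION_PERIOD_DAYS = 89

divisions_per_year = 365 // FISSION_PERIOD_DAYS   # 4 complete divisions
population = 2 ** divisions_per_year              # 1 -> 2 -> 4 -> 8 -> 16
new_individuals = population - 1                  # 15 new brittle stars

print(divisions_per_year, population, new_individuals)  # 4 16 15
```

This assumes every descendant divides on schedule, which matches the "theoretical" framing of the original estimate.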
Brittle stars use their arms for locomotion. Unlike sea stars, they do not depend on tube feet, which in brittle stars are mere sensory tentacles without suction. Brittle stars move fairly rapidly by wriggling their arms, which are highly flexible and enable the animals to make either snake-like or rowing movements. However, they tend to attach themselves to the sea floor or to sponges or cnidarians, such as coral. They move as if they were bilaterally symmetrical, with an arbitrary leg selected as the symmetry axis and the other four used in propulsion. The axial leg may be facing or trailing the direction of motion and, because of the radially symmetrical nervous system, can be changed whenever a change in direction is necessary.
Over 60 species of brittle stars are known to be bioluminescent. Most of these produce light in the green wavelengths, although a few blue-emitting species have also been discovered. Both shallow-water and deep-sea species of brittle stars are known to produce light. Presumably, this light is used to deter predators.
Brittle stars live in areas from the low-tide level downwards. Six families live at least 2 m deep; the genera Ophiura, Amphiophiura, and Ophiacantha range below 4 m. Shallow species live among sponges, stones, or coral, or under the sand or mud, with only their arms protruding. Two of the best-known shallow species are the green brittle star (Ophioderma brevispina), found from Massachusetts to Brazil, and the common European brittle star (Ophiothrix fragilis). Deep-water species tend to live in or on the sea floor or adhere to coral or urchins. The most widespread species is the long-armed brittle star (Amphipholis squamata), a grayish or bluish, strongly luminescent species.
The main parasites entering the digestive tract or genitals are protozoans. Crustaceans, nematodes, trematodes, and polychaete annelids also serve as parasites. Algal parasites such as Coccomyxa ophiurae cause spinal malformation. Unlike in sea stars and sea urchins, annelids are not typical parasites.
Diversity and taxonomy
Between 2,064 and 2,122 species of brittle stars are currently known, but the total number of modern species may be over 3,000. This makes brittle stars the most species-rich group of living echinoderms (ahead of sea stars). Around 270 genera distributed in 16 families are known, which makes them at the same time a structurally rather uniform group compared with the other echinoderms. For example, 467 species belong to the single family Amphiuridae (fragile brittle stars which live buried in the sediment, leaving only their arms in the current to capture plankton). There are also 344 species in the family Ophiuridae.
List of families according to the World Register of Marine Species:
- order Euryalida
- order Ophiurida Müller & Troschel, 1840
- sub-order Ophiomyxina
- family Ophiomyxidae Ljungman, 1867
- sub-order Ophiurina
- infra-order Chilophiurina
- family Ophiuridae Müller & Troschel, 1840
- infra-order Gnathophiurina
- infra-order Hemieuryalina
- family Hemieuryalidae Verrill, 1899
- family Ophiacanthidae Ljungman, 1867
- infra-order Ophiodermatina
- infra-order Ophiolepidina
- family Ophiolepididae Ljungman, 1867
- family Ophiomycetidae Verrill, 1899
Brittle stars are not used as food, though they are not toxic.
Brittle stars are a moderately popular invertebrate in fishkeeping. They can easily thrive in marine tanks; in fact, the micro brittle star is a common "hitchhiker" that will propagate and become common in almost any saltwater tank, if one happens to come along on some live rock.
Larger brittle stars are popular because, unlike Asteroidea, they are not generally seen as a threat to coral, and are also faster-moving and more active than their more archetypical cousins.
Generate three random numbers to determine the side lengths of a triangle. What triangles can you draw?
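This first problem turns on the triangle inequality: three positive lengths form a triangle only when the two shorter sides together exceed the longest. A minimal sketch, assuming die-style rolls from 1 to 6 (the range is an illustrative choice, not part of the problem):

```python
import random

def is_triangle(a, b, c):
    """Three positive lengths form a (non-degenerate) triangle
    iff the sum of the two shorter sides exceeds the longest side."""
    a, b, c = sorted((a, b, c))
    return a > 0 and a + b > c

# Generate three random side lengths and classify each triple.
random.seed(1)
for _ in range(5):
    sides = [random.randint(1, 6) for _ in range(3)]
    print(sides, "triangle" if is_triangle(*sides) else "impossible")
```

Degenerate triples such as (1, 2, 3) fail the strict inequality, which is exactly why some random triples cannot be drawn.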
Can you explain why it is impossible to construct this triangle?
Draw all the possible distinct triangles on a 4 x 4 dotty grid. Convince me that you have all possible triangles.
ABC is an equilateral triangle and P is a point in the interior of the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP must be less than 10 cm.
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
ABCDEFGH is a 3 by 3 by 3 cube. Point P is 1/3 along AB (that is AP : PB = 1 : 2), point Q is 1/3 along GH and point R is 1/3 along ED. What is the area of the triangle PQR?
Imagine an infinitely large sheet of square dotty paper on which you can draw triangles of any size you wish (providing each vertex is on a dot). What areas is it/is it not possible to draw?
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
Triangles are formed by joining the vertices of a skeletal cube. How many different types of triangle are there? How many triangles altogether?
A game in which players take it in turns to try to draw quadrilaterals (or triangles) with particular properties. Is it possible to fill the game grid?
A game in which players take it in turns to turn up two cards. If they can draw a triangle which satisfies both properties they win the pair of cards. And a few challenging questions to follow...
Start with a triangle. Can you cut it up to make a rectangle?
Make an equilateral triangle by folding paper and use it to make patterns of your own.
What is the total area of the first two triangles as a fraction of the original A4 rectangle? What is the total area of the first three triangles as a fraction of the original A4 rectangle? If. . . .
Determine the total shaded area of the 'kissing triangles'.
If you know the sizes of the angles marked with coloured dots in this diagram which angles can you find by calculation?
A very mathematical light - what can you see?
If I tell you two sides of a right-angled triangle, you can easily work out the third. But what if the angle between the two sides is not a right angle?
If the yellow equilateral triangle is taken as the unit for area, what size is the hole ?
Can you describe what happens in this film?
Using LOGO, can you construct elegant procedures that will draw this family of 'floor coverings'?
Liethagoras, Pythagoras' cousin (!), was jealous of Pythagoras and came up with his own theorem. Read this article to find out why other mathematicians laughed at him.
A description of some experiments in which you can make discoveries about triangles.
The centre of the larger circle is at the midpoint of one side of an equilateral triangle and the circle touches the other two sides of the triangle. A smaller circle touches the larger circle and. . . .
A hexagon, with sides alternately a and b units in length, is inscribed in a circle. How big is the radius of the circle?
Triangle ABC is an equilateral triangle with three parallel lines going through the vertices. Calculate the length of the sides of the triangle if the perpendicular distances between the parallel. . . .
Find the missing angle between the two secants to the circle when the two angles at the centre subtended by the arcs created by the intersections of the secants and the circle are 50 and 120 degrees.
Prove that, given any three parallel lines, an equilateral triangle always exists with one vertex on each of the three lines.
Using the interactivity, can you make a regular hexagon from yellow triangles the same size as a regular hexagon made from green triangles ?
Take any point P inside an equilateral triangle. Draw PA, PB and PC from P perpendicular to the sides of the triangle where A, B and C are points on the sides. Prove that PA + PB + PC is a constant.
Prove that a triangle with sides of length 5, 5 and 6 has the same area as a triangle with sides of length 5, 5 and 8. Find other pairs of non-congruent isosceles triangles which have equal areas.
A floor is covered by a tessellation of equilateral triangles, each having three equal arcs inside it. What proportion of the area of the tessellation is shaded?
Jennifer Piggott and Charlie Gilderdale describe a free interactive circular geoboard environment that can lead learners to pose mathematical questions.
You are only given the three midpoints of the sides of a triangle. How can you construct the original triangle?
From the measurements and the clue given find the area of the square that is not covered by the triangle and the circle.
Two right-angled triangles are connected together as part of a structure. An object is dropped from the top of the green triangle where does it pass the base of the blue triangle?
Construct a line parallel to one side of a triangle so that the triangle is divided into two equal areas.
Triangle ABC is equilateral. D, the midpoint of BC, is the centre of the semi-circle whose radius is R which touches AB and AC, as well as a smaller circle with radius r which also touches AB and AC. . . .
The largest square which fits into a circle is ABCD and EFGH is a square with G and H on the line CD and E and F on the circumference of the circle. Show that AB = 5EF. Similarly the largest. . . .
If the altitude of an isosceles triangle is 8 units and the perimeter of the triangle is 32 units.... What is the area of the triangle?
The sides of a triangle are 25, 39 and 40 units of length. Find the diameter of the circumscribed circle.
Given that ABCD is a square, M is the mid point of AD and CP is perpendicular to MB with P on MB, prove DP = DC.
Can you work out the fraction of the original triangle that is covered by the inner triangle?
Find the area of the shaded region created by the two overlapping triangles in terms of a and b?
Find the sides of an equilateral triangle ABC where a trapezium BCPQ is drawn with BP=CQ=2 , PQ=1 and AP+AQ=sqrt7 . Note: there are 2 possible interpretations. | <urn:uuid:ea510801-676a-4d3d-b08f-275dc07e13b8> | 3.03125 | 1,454 | Content Listing | Science & Tech. | 71.134306 | 95,610,932 |
Interactive Java Tutorials
Refraction by an Equilateral Prism
Visible white light passing through an equilateral prism undergoes a phenomenon known as dispersion, which is manifested by wavelength-dependent refraction of the light waves. This interactive tutorial explores how the incident angle of white light entering the prism affects the degree of dispersion and the angles of light exiting the prism.
The tutorial initializes with white light incident on a single face of an equilateral prism at a 40-degree angle with respect to a perpendicular line drawn from the prism face. In order to vary the incident angle, use the mouse cursor to translate the Incident Angle slider, which will also produce a corresponding change in the exit angles (θ(d)) of the light rays dispersed by the prism. The Refractive Index slider can be utilized to vary the prism refractive index between a value of 1.40 and 2.00, increasing the exit angles of light rays refracted by the prism.
The first demonstration of refraction and dispersion in a triangular prism was performed by British physicist Sir Isaac Newton in the late 1600s. Newton showed that white light could be dissected into its component colors by an equilateral prism, having equal sides and angles. In general, a refracting or dispersing prism has two or more plane surfaces that are oriented in a manner favorable to refraction rather than reflection of incident light beams. When a light ray strikes the surface of a dispersing prism, it is refracted upon entering according to Snell's law and then passes through the glass until the second interface is reached. Once again, the light ray is refracted and emerges from the prism along a new path (see Figure 1). Because the prism alters the propagation direction of light, waves passing through a prism are said to be deviated by a specific angle, which can be very precisely determined by applying Snell's law to the geometry of the prism. The deviation angle is minimized when the light wave enters the prism with an angle that allows the beam to traverse through the glass in a direction parallel to the base.
The amount of light deviation by a prism is a function of the incident angle, the prism apex (top) angle, and the refractive index of the material from which the prism is constructed. As prism refractive index values are increased, so is the deviation angle of light passing through the prism. Refractive index is often dependent upon the wavelength of light, with shorter wavelengths (blue light) being refracted at greater angles than longer wavelengths (red light). This variation of the deviation angle with wavelength is referred to as dispersion, and is responsible for the phenomenon that Newton observed over 300 years ago. Dispersion can be fine-tuned by selecting glasses with the appropriate refractive index characteristics for a particular application. In general, the dispersion properties of various glass formulations are compared through Abbe numbers, which are determined by measuring the refractive indices of specific reference wavelengths passed through the glass.
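The relationships described above (Snell's law applied at each face, with deviation minimized for the symmetric passage parallel to the base) can be sketched numerically. The refractive indices used below are illustrative placeholders for "red" and "blue" light, not values taken from the tutorial:

```python
import math

def prism_deviation(theta1_deg, apex_deg=60.0, n=1.52):
    """Total deviation (degrees) of a ray through a triangular prism.

    Applies Snell's law at both faces:
      entry:     sin(theta1) = n * sin(theta1')
      geometry:  theta2' = A - theta1'
      exit:      n * sin(theta2') = sin(theta2)
      deviation: delta = theta1 + theta2 - A
    Returns None if the ray is totally internally reflected at the exit face.
    """
    A = math.radians(apex_deg)
    t1 = math.radians(theta1_deg)
    t1p = math.asin(math.sin(t1) / n)      # refraction at the entry face
    t2p = A - t1p                          # internal angle at the exit face
    s = n * math.sin(t2p)
    if abs(s) > 1.0:                       # total internal reflection
        return None
    t2 = math.asin(s)                      # refraction at the exit face
    return math.degrees(t1 + t2) - apex_deg

def minimum_deviation(apex_deg=60.0, n=1.52):
    """Closed form for the minimum deviation: 2*asin(n*sin(A/2)) - A."""
    A = math.radians(apex_deg)
    return math.degrees(2 * math.asin(n * math.sin(A / 2))) - apex_deg

# Dispersion: a higher index (shorter wavelength) gives a larger deviation.
print(minimum_deviation(n=1.51))  # red-ish light
print(minimum_deviation(n=1.53))  # blue-ish light, deviated more
```

Raising n, as the Refractive Index slider does, increases the deviation, and evaluating at a slightly higher index for blue light reproduces the wavelength-dependent spread Newton observed.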
Matthew J. Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
© 1998-2018 by Michael W. Davidson and The Florida State University. All Rights Reserved. No images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the copyright holders. Use of this website means you agree to all of the Legal Terms and Conditions set forth by the owners.
Pluto (134340 Pluto) is a Dwarf Planet in an eccentric orbit ranging roughly from 30 AU (Neptune's distance) to 50 AU. It has a steeper Orbital Inclination than the eight planets: 17° from the Ecliptic. It and Neptune share an Orbital Resonance, with a ratio of 3:2.
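The 3:2 resonance can be sanity-checked from the two bodies' orbital periods: Pluto completes two orbits in the time Neptune completes three. The period values below are commonly quoted approximations, not figures taken from this text:

```python
# Approximate sidereal orbital periods in years (rounded, commonly
# quoted values -- illustrative assumptions, not from this fact sheet).
P_NEPTUNE = 164.8
P_PLUTO = 247.9

# A 3:2 resonance means the period ratio is close to 1.5.
ratio = P_PLUTO / P_NEPTUNE
print(round(ratio, 3))  # ~1.504
```

The resonance keeps the two bodies from close encounters despite Pluto's orbit crossing Neptune's distance.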
Pluto was discovered in 1930 and was considered the ninth planet until the discovery of similar objects. Five satellites are known: Charon, Styx, Nix, Kerberos, and Hydra.
The New Horizons mission observed Pluto during a 2015 flyby.
Equilibrium Temperature (Teq)
International Astronomical Union (IAU)
Kuiper Airborne Observatory (KAO)
Kuiper Belt (K Belt)
Late Heavy Bombardment (LHB)
New Horizons (NF1)
Trans-Neptunian Object (TNO) | <urn:uuid:1626e890-383c-4df8-99bc-c23076e3e7fd> | 3.4375 | 176 | Knowledge Article | Science & Tech. | 37.666057 | 95,610,984 |
Tooth Enamel Secret To Stronger Aircraft
Aerospace engineers are brushing up on their knowledge of tooth enamel. It turns out that teeth and aircraft materials have something in common. Professor Herzl Chai of Tel Aviv University's School of Mechanical Engineering and his colleagues at National Institute of Standards and Technology and George Washington University have published research on how the structure of teeth holds useful clues for aerospace engineers.
"Teeth are made from an extremely sophisticated composite material which reacts in an extraordinary way under pressure," says Prof. Chai. "Teeth exhibit graded mechanical properties and a cathedral-like geometry, and over time they develop a network of micro-cracks which help diffuse stress. This, and the tooth's built-in ability to heal the micro-cracks over time, prevents it from fracturing into large pieces when we eat hard food, like nuts."
In teeth, fibers aren't arranged in a grid, but are "wavy" in structure. There are hierarchies of fibers and matrices arranged in several layers, unlike the single-thickness layers used in aircrafts. Under mechanical pressure, this architecture presents no clear path for the release of stress. Therefore, "tufts" — built-in micro cracks — absorb pressure in unison to prevent splits and major fractures.
Aerospace engineers already use a variety of composite materials in the construction of aircraft. Composites consist of two or more materials (matrix and reinforcement) combined to give a material with properties distinct from the original constituents. They may be naturally occurring, or they may be synthetic.
(Composite materials in aircraft)
Although the use of composite materials is as old as engineering, science fiction writers have championed unusual ideas about the use of natural materials in the creation of sophisticated technological constructions. For example, in his 1989 novel Hyperion, Dan Simmons writes about mighty treeships that are used to travel between the stars.
The Consul remembered his first glimpse of the kilometer-long treeship as he closed for rendezvous, the treeship's details blurred by the redundant machine and erg-generated containment fields which surrounded it like a spherical mist, but its leafy bulk clearly ablaze with thousands of lights which shone softly through leaves and thin-walled environment pods, or along countless platforms, bridges command decks, stairways and bowers. Around the base of the treeship, engineering and cargo spheres clustered like oversized galls while blue and violet streamers trailed behind like ten kilometer-long roots...
From Flying by the Skin of Our Teeth.
(Story submitted 8/25/2009)
Three huge volcanic eruptions rocked Jupiter's moon Io
- August 07, 2014
Three huge volcanic eruptions rocked Jupiter's moon Io within two weeks last August, astronomers say.
The events are leading scientists to speculate that these outbursts, which can send material hundreds of miles or kilometers above the surface, might be much more common than previously thought.
“We typically expect one huge outburst every one or two years, and they're usually not this bright,” said Imke de Pater, chair of astronomy at the University of California, Berkeley, and lead author of one of two papers describing the eruptions.
“Here we had three extremely bright outbursts, which suggest that if we looked more frequently we might see many more of them on Io.”
Io (pronounced ee-o or eye-o) is about the size of Earth's moon and the most volcanically active planet or moon in our solar system, according to astronomers. It's also the only one with volcanoes erupting extremely hot lava like that seen on Earth. Because of Io's low gravity, large eruptions blast an umbrella of debris high into space.
De Pater's long-time colleague and coauthor Ashley Davies, a volcanologist with NASA's Jet Propulsion Laboratory at the California Institute of Technology in Pasadena, Calif., said the recent eruptions match past events that spewed lava over hundreds of square miles in a short time.
“These new events are in a relatively rare class of eruptions on Io because of their size and astonishingly high thermal [heat] emission,” he said. “The amount of energy being emitted by these eruptions implies lava fountains gushing out of fissures at a very large volume per second, forming lava flows that quickly spread over the surface of Io.”
All three events, including the largest, most powerful eruption of the trio on Aug. 29, were likely characterized by “curtains of fire,” as lava blasted out of fissures perhaps several miles or kilometers long, according to the scientists. The papers have been accepted for publication in the research journal Icarus.
“This will help us understand the processes that helped shape the surfaces of all the terrestrial planets, including Earth, and the moon,” said Davies. | <urn:uuid:75fdb526-511e-4ce9-9935-72a7f596c6bf> | 3.140625 | 494 | News Article | Science & Tech. | 36.377793 | 95,611,005 |
The Periodic Table (Jedediah Mephistophles Soltmann)

Dmitri Mendeleev studied the properties of elements and organized them by similar properties (families) and by increasing atomic mass. He left blanks for elements he knew had to exist, such as:

Ekaaluminum (gallium).
The Answer is Silicon
What defines a metal?
We’ve used words like: luster, ductility, malleability, & conductivity. Why do metals behave this way?
Metals tend to be larger atoms. Since it is easier to remove an electron from a larger atom, it should make sense then that metals tend to form cations.
Conversely we can say that the larger an atom is (or the lower its first ionization is) the more metallic the atom is.
So if we compared O, S, and Se (all nonmetals) we could say that selenium, being the largest atom, is the most metallic - even though it is a nonmetal.
Nonmetals tend to be smaller atoms. Since it is easier to add an electron to a smaller atom, it should make sense then that nonmetals tend to form anions.
Conversely we can say that the smaller an atom is (or the higher its first ionization is) the more nonmetallic the atom is.
So if we compared Li, Na, and K (all metals) we could say that lithium, being the smallest atom, is the most nonmetallic - even though it is a metal. | <urn:uuid:a67cf5d9-9178-434d-9f50-84ba00906ea2> | 3.75 | 327 | Knowledge Article | Science & Tech. | 55.409176 | 95,611,025 |
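The size heuristic in these slides can be captured in a small sketch (illustrative Python, not part of the slides; the radii are approximate textbook values in picometers, and the functions are invented here for demonstration):

```python
# Approximate atomic radii in picometers (textbook values).
RADIUS_PM = {
    "Li": 152, "Na": 186, "K": 227,   # group 1 metals
    "O": 66, "S": 104, "Se": 117,     # group 16 nonmetals
}

def most_metallic(symbols):
    """Largest atom: easiest to ionize, hence most metallic (heuristic)."""
    return max(symbols, key=RADIUS_PM.get)

def most_nonmetallic(symbols):
    """Smallest atom: gains electrons most readily (heuristic)."""
    return min(symbols, key=RADIUS_PM.get)
```

By this rule, selenium ranks as the most metallic of O, S, Se, and lithium as the most nonmetallic of Li, Na, K — exactly the comparisons made above.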
In differential geometry of curves, the osculating circle of a sufficiently smooth plane curve at a given point p on the curve has been traditionally defined as the circle passing through p and a pair of additional points on the curve infinitesimally close to p. Its center lies on the inner normal line, and its curvature is the same as that of the given curve at that point. This circle, which is the one among all tangent circles at the given point that approaches the curve most tightly, was named circulus osculans (Latin for "kissing circle") by Leibniz.
The center and radius of the osculating circle at a given point are called center of curvature and radius of curvature of the curve at that point. A geometric construction was described by Isaac Newton in his Principia:
There being given, in any places, the velocity with which a body describes a given figure, by means of forces directed to some common centre: to find that centre.— Isaac Newton, Principia; PROPOSITION V. PROBLEM I.
Description in lay terms
Imagine a car moving along a curved road on a vast flat plane. Suddenly, at one point along the road, the steering wheel locks in its present position. Thereafter, the car moves in a circle that "kisses" the road at the point of locking. The curvature of the circle is equal to that of the road at that point. That circle is the osculating circle of the road curve at that point.
Let γ(s) be a regular parametric plane curve, where s is the arc length (the natural parameter). This determines the unit tangent vector T(s), the unit normal vector N(s), the signed curvature k(s) and the radius of curvature R(s) at each point where the curvature is nonzero:

T(s) = γ′(s), T′(s) = k(s) N(s), R(s) = 1 / |k(s)|.
Suppose that P is a point on γ where k ≠ 0. The corresponding center of curvature is the point Q at distance R along N, in the same direction if k is positive and in the opposite direction if k is negative. The circle with center at Q and with radius R is called the osculating circle to the curve γ at the point P.
If C is a regular space curve then the osculating circle is defined in a similar way, using the principal normal vector N. It lies in the osculating plane, the plane spanned by the tangent and principal normal vectors T and N at the point P.
The plane curve can also be given in a regular parametrization γ(t) = (x(t), y(t)), where regular means that γ′(t) ≠ (0, 0) for all t. Then the signed curvature k(t), the normal unit vector N(t), the radius of curvature R(t), and the center Q(t) of the osculating circle are given by

k(t) = (x′(t) y″(t) − y′(t) x″(t)) / (x′(t)² + y′(t)²)^(3/2),

N(t) = (−y′(t), x′(t)) / √(x′(t)² + y′(t)²),

R(t) = 1 / |k(t)|,

Q(t) = (x(t) − y′(t) (x′(t)² + y′(t)²) / (x′(t) y″(t) − y′(t) x″(t)), y(t) + x′(t) (x′(t)² + y′(t)²) / (x′(t) y″(t) − y′(t) x″(t))).
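These parametric formulas translate directly into code. A minimal sketch (Python, not part of the original article), taking the curve and its derivatives as callables:

```python
def osculating_circle(x, y, dx, dy, ddx, ddy, t):
    """Signed curvature k, radius R and center Q of the osculating circle
    of the plane curve t -> (x(t), y(t)), using the standard parametric
    formulas; the first and second derivatives are supplied as callables."""
    xp, yp = dx(t), dy(t)
    xpp, ypp = ddx(t), ddy(t)
    speed2 = xp * xp + yp * yp           # |gamma'(t)|^2
    cross = xp * ypp - yp * xpp          # x'y'' - y'x''
    k = cross / speed2 ** 1.5            # signed curvature k(t)
    R = 1.0 / abs(k)                     # radius of curvature
    Q = (x(t) - yp * speed2 / cross,     # center of curvature
         y(t) + xp * speed2 / cross)
    return k, R, Q

# Parabola (t, t^2) at the vertex t = 0: k = 2, R = 0.5, Q = (0, 0.5).
k, R, Q = osculating_circle(lambda t: t, lambda t: t * t,
                            lambda t: 1.0, lambda t: 2 * t,
                            lambda t: 0.0, lambda t: 2.0, 0.0)
```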
For a curve C given by a sufficiently smooth parametric equations (twice continuously differentiable), the osculating circle may be obtained by a limiting procedure: it is the limit of the circles passing through three distinct points on C as these points approach P. This is entirely analogous to the construction of the tangent to a curve as a limit of the secant lines through pairs of distinct points on C approaching P.
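The limiting procedure can be checked numerically: the circle through γ(−h), γ(0) and γ(h) approaches the osculating circle as h → 0. A small sketch (illustrative Python, not from the article), using the parabola y = x², whose osculating circle at the vertex has center (0, 0.5) and radius 0.5:

```python
def circumcircle(p1, p2, p3):
    """Center and radius of the unique circle through three
    non-collinear points (standard circumcenter formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux)**2 + (y1 - uy)**2) ** 0.5
    return (ux, uy), r

gamma = lambda t: (t, t * t)             # the parabola y = x^2
for h in (0.5, 0.1, 0.01):
    # As h -> 0 this circle tends to the osculating circle at the
    # vertex: here its center is (0, (1 + h^2)/2), radius (1 + h^2)/2.
    center, r = circumcircle(gamma(-h), gamma(0.0), gamma(h))
```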
The osculating circle S to a plane curve C at a regular point P can be characterized by the following properties:
- The circle S passes through P.
- The circle S and the curve C have the common tangent line at P, and therefore the common normal line.
- Close to P, the distance between the points of the curve C and the circle S in the normal direction decays as the cube or a higher power of the distance to P in the tangential direction.
This is usually expressed as "the curve and its osculating circle have the second or higher order contact" at P. Loosely speaking, the vector functions representing C and S agree together with their first and second derivatives at P.
If the derivative of the curvature with respect to s is nonzero at P then the osculating circle crosses the curve C at P. Points P at which the derivative of the curvature is zero are called vertices. If P is a vertex then C and its osculating circle have contact of order at least three. If, moreover, the curvature has a non-zero local maximum or minimum at P then the osculating circle touches the curve C at P but does not cross it.
The curve C may be obtained as the envelope of the one-parameter family of its osculating circles. Their centers, i.e. the centers of curvature, form another curve, called the evolute of C. Vertices of C correspond to singular points on its evolute.
For the parabola γ(t) = (t, t²), the radius of curvature is

R(t) = (1 + 4t²)^(3/2) / 2.

At the vertex the radius of curvature equals R(0) = 0.5 (see figure). The parabola has fourth order contact with its osculating circle there. For large t the radius of curvature increases as ~t³, that is, the curve straightens more and more.
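Both claims are easy to verify numerically. A one-function sketch (illustrative Python; R(t) = (1 + 4t²)^(3/2)/2 is the standard radius-of-curvature formula applied to y = x²):

```python
def parabola_radius(t):
    """Radius of curvature of the parabola (t, t^2):
    R(t) = (1 + 4 t^2)^(3/2) / 2."""
    return (1 + 4 * t * t) ** 1.5 / 2

# R(0) = 0.5 at the vertex; for large t, R(t) ~ 4 |t|^3,
# so the curve straightens more and more.
```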
A Lissajous curve with a ratio of frequencies of 3:2 can be parametrized as

γ(t) = (cos 3t, sin 2t).

Applying the parametric formulas above, its signed curvature k(t), normal unit vector N(t) and radius of curvature R(t) are

k(t) = (12 sin 3t sin 2t + 18 cos 2t cos 3t) / (9 sin² 3t + 4 cos² 2t)^(3/2),

N(t) = (−2 cos 2t, −3 sin 3t) / √(9 sin² 3t + 4 cos² 2t),

R(t) = 1 / |k(t)|.
See the figure for an animation. There the "acceleration vector" is the second derivative d²γ/ds² with respect to the arc length s.
- Actually, point P plus two additional points, one on either side of P will do. See Lamb (on line): Horace Lamb (1897). An Elementary Course of Infinitesimal Calculus. University Press. p. 406.
For some historical notes on the study of curvature, see
- Grattan-Guinness & H. J. M. Bos (2000). From the Calculus to Set Theory 1630-1910: An Introductory History. Princeton University Press. p. 72. ISBN 0-691-07082-2.
- Roy Porter, editor (2003). The Cambridge History of Science: v4 - Eighteenth Century Science. Cambridge University Press. p. 313. ISBN 0-521-57243-6.
For application to maneuvering vehicles see
- JC Alexander and JH Maddocks: On the maneuvering of vehicles
- Murray S. Klamkin (1990). Problems in Applied Mathematics: selections from SIAM review. Society for Industrial and Applied Mathematics. p. 1. ISBN 0-89871-259-9.
Real life Star Trek tractor beam developed - but it won't be dragging space ships any time soon
- Researchers have so far managed to move microscopic spheres of silica suspended in water over distances of 30 micrometres
- But Nasa have already been in touch...
Star Date 24102012: We have detected evidence of a working tractor beam.
Two physicists working at New York university have developed a technique for using beams of light to draw a particle toward a source and claim to have demonstrated it experimentally.
Professor David Grier and David Ruffner, a graduate student, working at the Department of Physics and Centre for Soft Matter Research say they have realised the Star Trek-style technology - only on a miniscule scale.
Science fiction technology: The crew of the star ship Enterprise often used their tractor beam to help friendly ships in distress and capture enemy vessels
Whenever a friendly star ship was in distress, it was no problem for the crew of the Enterprise to activate their tractor beam and drag the vessel to safety.
However, until now the technology has remained beyond the reach of real-world physicists, with the best they could manage being laser-based tweezers that can drag particles across microscopic distances in two dimensions.
Now in a paper published in the journal Physical Review Letters - and available in full here - Professor Grier and Mr Ruffner describe a technology that goes one better, by actually pulling particles towards their beam's source.
It is well known that light can push objects - a property that forms the basis for the idea of using solar sails - but getting light to drag something has so far proved to be more difficult.
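As a rough illustration of why pushing with light is straightforward but feeble (a back-of-the-envelope sketch, not a calculation from the article): a beam of power P carries momentum at rate P/c, so it exerts a force P/c on an absorbing target, or 2P/c on a perfectly reflecting one.

```python
# Speed of light in m/s.
C = 299_792_458.0

def radiation_force(power_watts, reflecting=False):
    """Force exerted by a light beam of the given power on a target
    that absorbs it (F = P/c) or reflects it straight back (F = 2P/c)."""
    f = power_watts / C
    return 2 * f if reflecting else f

# Even a 1 kW beam pushes an absorbing target with only ~3.3 micronewtons.
force = radiation_force(1000.0)
```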
The NYU tractor beam is based on Chinese research published last year into a kind of laser called a Bessel beam, which emits light in concentric rings.
That study calculated that such a beam could be designed to make a particle inside it emit photons on the side facing away from the beam's source, forcing the particle to recoil towards that source.
No-one has as yet been able to make such a beam, but the NYU researchers got around the problem by instead projecting two Bessel beams side by side down a microscope and using a lens to angle them so they overlap.
By varying the relative phase of the two beams, this technique traps the particle in a moving hologram they call an 'optical conveyor' which allows 'bi-directional transport in three dimensions'.
New Scientist explains that projecting the beams in this way creates a pattern of alternating bright and dark regions. By fine-tuning the beam, photons in the bright regions which flow past the chosen particle can be made to scatter backwards, hitting the particle and knocking it on towards the next bright region.
However, the beam is not quite ready to snare a starship. Professor Grier and Mr Ruffner demonstrated their technology by moving microscopic spheres of silica suspended in water over distances of 30 micrometres.
'This is still very much in its infancy,' Mr Ruffner told New Scientist. Nevertheless it opens the door to transforming science fiction into science fact - and Nasa has already been in touch.
Victoria’s faunal emblem the Leadbeater’s Possum and other species will become extinct within about 30 years unless clear-fell logging stops in Victoria’s
Mountain Ash forests, new research based on 30 years of monitoring the forests has found.
Professor David Lindenmayer from The Australian National University (ANU) said governments needed to act to protect the remaining forests and to protect the Leadbeater’s possum and other species.
He said the 2009 Black Saturday bushfires had wiped out 42 per cent of habitat and reduced the population from about 5,000 to 2,000 animals.
“Unless conservation areas are expanded to cover almost all remaining Mountain Ash forests, the critically endangered Leadbeater’s Possum will become extinct and other species like the Greater Glider will continue to decline,” said Professor Lindenmayer, from the ANU Fenner School of Environment and Society.
A logging coupe smouldering after being burnt. Logging is the main cause of decline of many species found in Victoria's Mountain Ash forests. Photo Dave Blair.
“Conserving these forests will benefit many species. If logging continues the Leadbeater’s Possum will be the first to become extinct, but it would not be the last. Think of it as the canary in the coal mine.
“When the current Regional Forest Agreements expire at the end of this year, the Victorian and Federal Governments need to act to bring these areas into conservation reserves.”
He said half of the greater glider population in Mountain Ash forests has also been lost in the past decade.
Professor Lindenmayer has studied the Mountain Ash Forests for more than 30 years, published more than 200 scientific studies and written eight books on the topic.
The research is part of the Australian Government’s National Environmental Science Program and was carried out by researchers from The Australian National University and The University of Melbourne.
Dr Chris Taylor from The University of Melbourne led the spatial analysis which identified the important areas for the Leadbeater’s Possum.
“The existing conservation reserves only cover 30 per cent of ideal habitat. On their own they are inadequate to conserve the species,” Dr Taylor said.
“Some important areas currently fall outside of conservation reserves and are open to logging. It is really important that these areas are brought in to conservation reserves.”
The best habitat trees are hundreds of years old. Photo by Dave Blair
Professor Lindenmayer said native animals in the Mountain Ash forest require large old trees over 190 years for tree hollows, which they need to breed and survive. He said only one per cent of Mountain Ash forest was old growth, compared to 60 per cent 150 years ago.
Professor Lindenmayer said studies had found the value of water and tourism from the forests also outweighed the value of logging, while logged forests have more frequent and intense fires, due to the debris and the nature of young forests.
“Every angle you take points to the same conclusion, there is no viable long-term future for logging native Mountain Ash forests,” he said.
“The best option for everyone including forestry workers is to immediately begin transitioning to all timber production coming from plantations.”
The research has been published in the PLOS ONE journal.
Distribution patterns of zooplankton in Tomales Bay, California
W. J. Kimmerer
Estuaries (Coastal and Estuarine Research Federation), 1993
Spatial patterns of abundance of the zooplankton of Tomales Bay, California, were studied over one year from August 1987 to September 1988. Samples were taken on six transects up the long axis of the bay, and the species composition and abundance of common species were determined. Distribution patterns were similar to those observed in other estuaries and bays, with species from nearby neritic waters occurring in the outer bay and a few resident species in the inner bay. This pattern may be best explained by size-selective predation within the bay. Most alternative explanations can be ruled out for Tomales Bay, except for possible temperature effects on cool-temperate neritic species. The four common species ofAcartia in Tomales Bay were in two subgenera, each of which included a neritic species and a smaller inner-bay species. The occurrence of the smaller of each pair in the inner bay, which has been observed forAcartia and other species in other estuaries and bays, may also be a result of size-selective predation.
That's not the case with quantum mechanics.
Each bit in a quantum machine -- known as qubits -- can be both a one and zero. It's about possibilities. When a qubit is constructed, it's built so you don't know if it's a one or a zero. It has the possibility of being both.
The D-Wave system with the 512 qubit chip is being tested by NASA and Google. (Photo: D-Wave)
It's not known what those qubits are until they begin to interact - or entangle - with other qubits. Based on their entanglements, they become a one or a zero. However, just because a qubit acted as a zero during one calculation, doesn't mean it will act as a zero during the next calculation. It goes back to the original possibility.
That's where the quantum computer's power comes into play.
A quantum system doesn't work in an orderly, linear way. Instead, its qubits communicate with each other, through entanglement, and they calculate all the possibilities at the same time.
That means if a quantum machine has 512 qubits, it's calculating at 2 to the 512th power at the same time. That number is so immense that there are not that many atoms in the universe, according to Rupak Biswas, chief of NASA's Advanced Supercomputing Division. Some physicists theorize that all those calculations are being done in different dimensions.
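The scale of that claim is easy to sanity-check (illustrative Python; the 10^80 figure is a commonly quoted rough estimate for atoms in the observable universe, not a number from the article):

```python
# Number of simultaneous basis states for 512 qubits.
states = 2 ** 512

# Commonly quoted rough estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

digits = len(str(states))   # 2**512 has 155 decimal digits
```

So a 512-qubit register spans a state space some 75 orders of magnitude larger than the estimated atom count of the universe.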
"We're so far outside of our everyday experience," said Germano S. Iannacchione, head of the Physics Department at Worcester Polytechnic Institute. "Common sense doesn't guide us here. We're trying to come up with pictures in our heads of how it works. When you're at the hairy edge of the unknown in physics and you don't have experience and common sense to guide you, you have to rely on the math. That's the only thing you can hold on to."
Despite the complexities, D-Wave's Brownell said his company has built quantum computers, using their own quantum processor built with different metals, such as niobium, a soft metal that becomes superconducting when cooled to very low temperatures.
One machine, the D-Wave Two, leased by the Universities Space Research Association, is based at NASA's Ames Research Center in Mountain View, Calif. NASA has use of the machine 40% of the time, Google has another 40% and the research association has 20%.
Google declined to talk about its work with the system. However, its experiments on the computer have led to debate on whether D-Wave's computer performs any better than classic computing or whether it is a quantum computer at all.
NASA, which has had its hands on the D-Wave Two since last September, has only been testing it, Biswas said. His group has been doing high-performance modeling and simulation on problems related to Earth sciences, aeronautics and deep space exploration.
Sign up for MIS Asia eNewsletters. | <urn:uuid:82e5fdb9-fd60-4a77-bdf1-0b88d298d0bd> | 3.65625 | 621 | News Article | Science & Tech. | 55.411136 | 95,611,047 |
Share this article:
The National Snow and Ice Data Center (NSIDC) reported last week that there was an increase in the thicker, multi-year sea ice in the Arctic between the end of February 2013 and 2014.
Imagery from the European Advanced Scatterometer shows the distribution of multiyear ice compared to first year ice for March 28, 2013 (yellow line) and March 2, 2014 (blue line). Image courtesy NOAA, NESDIS and the Canadian Ice Service.
During the summer of 2013, a larger fraction of first-year ice survived compared to recent years. This ice has now become second-year ice, according to the NSIDC.
This multi-year ice is critical to sustaining what is left of the Arctic sea ice later in the summer. Thinner, one-year or less ice is much more susceptible to complete loss by the end of the melt season.
The multi-year ice in the Arctic Basin increased from 2.25 to 3.17 million square kilometers during the year.
Multi-year sea ice made up 30% of the Arctic icepack the previous winter, compared to 43% this winter.
However, the experts at the NSIDC explain that one of the reasons for this increase may be due to the more extensive ice cover in September 2013 compared to the very low extent of September 2012.
Despite the above increase, the longer-term trend since the late 1980s shows a clear decrease in multi-year ice. (see NSIDC image below)
Archive | Climate Change Is A Hoax Alert RSS for this section
If you were to believe the mainstream media, you’d think our world is burning up. But that is not true.
Yes, there were places on our planet where it was warmer than normal today. But many parts of the world displayed normal or even colder than normal temperatures.
Look at all of the white and blue on the above map.
Large portions of the Atlantic Ocean were normal or colder than normal.
It was colder than normal across a lot of Africa.
It was colder than normal across almost all of the Arctic and the Arctic Ocean.
It was colder than normal across almost all of Greenland.
It was colder than normal across almost all of India.
It was colder than normal across almost all of Mexico.
It was colder than normal across central Europe.
It was colder than normal across most of Brazil.
It was colder than normal across Indonesia and the Philippines.
It was colder than normal across all of Antarctica (and remember, Antarctica is twice as big as the contiguous United States).
It was colder than normal across almost all of the Southern Ocean
Please, please remember that the media is not giving you the full story.
Courtesy of iceagenow.info
Two days ago, on July 4th, Chubu University scientist Professor Kunihiko Takeda told a national audience on popular Japanese TV program ‘HONMADEKKA! TV’ that cold will be reported on rather than global warming in the second half of 2018.
Dr. Takeda said we will be reading in the newspapers about global cooling, not global warming, in the second half of this year. ‘HONMADEKKA!? TV’ is broadcast nationwide every Wednesday evening on Fuji TV. And when questioned about a mini ice age, he affirmed it, adding that crops would be adversely affected.
Japanese scientist Dr. Takeda Kunihiko
He also said that sunspots have been decreasing, and so the amount of cloud cover will increase as cosmic rays from space increase and the magnetic field of the sun diminishes. In that case the temperature of the Earth would fall somewhat.
The story that the ice of Antarctica and Arctic is melting is a lie, he stated further.
Screenshot illustration of Dr. Takeda’s appearance on HONMADEKKA!? TV’ with translation. Credit: Kirye.
It’s not the first time Professor Takeda appeared on national television to dismiss global warming. In January 2017, on the popular ‘HONMADEKKA!? TV’ program, Takeda told the audience that it would be exposed that global warming was “a hoax” and that the earth is not warming as claimed.
In the January 2017 show he reminded the audience that the earth in fact currently finds itself in an ice age and that "Antarctic ice is increasing".
Takeda also said “CO2 in the early times of the Earth was 95%; now it’s 0.04%”.
And Dr. Takeda once commented that global warming was “a political vehicle that keeps Europeans in the driver’s seat and developing nations walking barefoot.”
Courtesy of notrickszone.com | <urn:uuid:1844f150-009b-4245-8d34-fd205f20ef35> | 2.96875 | 694 | Personal Blog | Science & Tech. | 58.057702 | 95,611,072 |
Biodiversity refers to the variety among living organisms from all sources, including terrestrial, marine, and other aquatic ecosystems, and the ecological complexes of which they are a part.

Despite long-standing awareness of biodiversity's importance, human activities are causing massive extinctions. According to the Environment News Service (August 1999), the current extinction rate is 1,000 times the background rate and may climb to 10,000 times the background rate during the next century.
The International Union for Conservation of Nature (IUCN) has revealed a fact-
- At threat of extinction are
- 1 out of 4 conifers
- 1 out of 3 amphibians
- 6 out of 7 marine turtles
- 1 out of 4 mammals
- 1 out of 8 birds
- 75% of genetic diversity of agricultural crops has been lost
- 1/3rd of reef-building corals around the world are threatened with extinction
- Over 350 million people suffer from severe water scarcity
- 75% of the world’s fisheries are fully or over exploited
- Up to 70% of the world’s known species risk extinction if the global temperatures rise by more than 3.5°C
That is not the kind of world we wish for: our lives are inextricably linked with biodiversity, and ultimately its protection is essential for our very survival.
As explained in the UN's third Global Biodiversity Outlook report, the rate of biodiversity loss has not been reduced because the five principal pressures on biodiversity are persistent, even intensifying:
- Excessive nutrient load and other forms of pollution
- Over-exploitation and unsustainable use
- Invasive alien species
- Habitat loss and degradation
- Climate change
Extinction risks outpace conservation successes. Amphibians are the most at risk of extinction, while corals have seen a dramatic increase in extinction risk in recent years.
IUCN maintains the Red List to assess the conservation status of species, subspecies, varieties, and even selected sub populations on a large scale.
What is MVC?
Jun 22, 2007, 09:00
(Other stories by Curtis Poe)
"If you read up on the Model-View-Controller (MVC) design
pattern, you might find yourself a bit confused. In fact, I found
myself confused by it when I first started reading about it,
because there are plenty of resources out there to describe it, but
so many of them seem to have different flavors of MVC and different
diagrams explaining how data flows that it's no wonder that
programmers are bewildered about it. Fully believing that I don't
want perfect to be the enemy of the good, I'll show a few practical
implementation details of one way of looking at MVC, primarily
focused on the Web needs.
"The primary thing you need to know about MVC is how logic can | <urn:uuid:7a1e14a3-ffae-432c-baa8-d6290532cc6a> | 2.578125 | 201 | Content Listing | Software Dev. | 48.0765 | 95,611,118 |
Edited By: Brad J Pusey
233 pages, colour photos, illustrations, and maps; colour tables
The aquatic biodiversity of northern Australia is a very rich, highly distinctive and frequently economically important component of the Australian fauna and flora reflecting the distinctive nature of the landscape, soils and climate. More than one million gigalitres of rain falls over northern Australia every year in a dramatic seasonal cycle of short intense humid wet seasons followed by long extended dry seasons that may last for as long as nine months. This vast rainfall creates an equally vast tapestry of aquatic habitats across the landscape. However, the annual water budget of the region (rainfall minus evapotranspiration) is in deficit by more than 1000 mm per year and thus, the aquatic habitats seasonally vary in extent and quality. Vast floodplains dry out and most rivers cease to flow. The region's aquatic biodiversity must deal with this profound seasonal change. Water is a key element in all aspects of human development in northern Australia.
This book provides an entry into the research, both past and present, concerning the aquatic biodiversity of northern Australia and more importantly, will help inform the continuing debate about the future of the region and especially of the distinctive biodiversity of its freshwater ecosystems.
By Daniel Fleisch, Julia Kregenow
Read or Download A Student's Guide to the Mathematics of Astronomy PDF
Similar astronomy books
Data gathered by recent space probes sent to explore the Moon by the United States, the European Space Agency, Japan, China and India has changed our knowledge and understanding of the Moon, particularly its geology, since the Apollo missions. This publication presents those findings in a fashion that will be welcomed by amateur astronomers, students, educators and anyone interested in the Moon.
Charge-Coupled Devices (CCDs) are the state-of-the-art detector in many fields of observational science. Updated to include all the latest developments in CCDs, this second edition of the Handbook of CCD Astronomy is a concise and accessible reference on all practical aspects of using CCDs. Starting with their electronic workings, it discusses their basic characteristics and then gives methods and examples of how to determine these values.
Drawing on the new information about the solar system gained from recent space probes and space telescopes, the experienced science author Dr. John Wilkinson presents state-of-the-art knowledge on the Sun, the solar system planets and small solar system objects like comets and asteroids. He also describes space missions like the New Horizons space probe, which provided never-before-seen images of the Pluto system; the Dawn space probe, which has recently visited the asteroid Vesta and the dwarf planet Ceres; and the Rosetta probe in orbit around comet 67P/Churyumov–Gerasimenko, which has sent back striking and most exciting images.
This fourth volume in the series Physics and Evolution of the Earth's Interior provides a comprehensive overview of the geophysical and geodetic aspects related to gravity and low-frequency geodynamics. Such aspects include the Earth's gravity field, geoid shape theory, and low-frequency phenomena like rotation, oscillations and tides.
- Sun Lore of All Ages: A Collection of Myths and Legends (Dover Books on Astronomy)
- The Path Less Traveled: A Comparison of "Tired Light" and the "Expanding Universe": How the Universe Looks without the Big Bang
- Vom Universum zu den Elementarteilchen: Eine erste Einführung in die Kosmologie und die fundamentalen Wechselwirkungen (German Edition)
- Life in the Universe: Expectations and Constraints (Advances in Astrobiology and Biogeophysics)
- Light Pollution: The Global View (Astrophysics and Space Science Library)
- The Macrocosm and Microcosm, or the Universe Without and the Universe Within : Being an Unfolding of the Plan of Creation and the Correspondence of Truths, ... in the World of Sense and the World of Soul
Extra info for A Student's Guide to the Mathematics of Astronomy | <urn:uuid:fa9901f5-e211-4bf3-bfc6-2a85050968f0> | 2.75 | 614 | Product Page | Science & Tech. | 15.00979 | 95,611,137 |
Column performance of carbon nanotube packed bed for methylene blue and orange red dye removal from waste water
Environmental protection has been a major concern for humankind over the past decades. As time has passed, the field of technology has grown and has helped a great deal to reduce these environmental problems. Industries such as metal plating facilities, mining operations and battery production are a few examples of sources of such pollution. Carbon nanotubes (CNTs) are proven to possess excellent adsorption capacity for the removal of methylene blue and orange red dyes. The effect of process parameters such as pH and contact time was investigated. The results revealed that the optimized conditions for the highest removal of methylene blue (MB) (97%) and orange red (94%) are pH 10, a CNT dosage of 1 gram, and a contact time of 15 minutes for each dye, respectively. The equilibrium adsorption data obtained were best fitted by the Freundlich model, while the kinetic data can be characterized by pseudo-second-order rate kinetics.
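The two models named at the end of the abstract have simple closed forms. The sketch below writes them out; the numeric parameter values (k_f, n, k2, q_e) are made up for illustration and are not the thesis's fitted constants.

```python
# Illustrative forms of the two models named in the abstract; all numeric
# parameter values below are hypothetical, chosen only for demonstration.

def freundlich(c, k_f, n):
    """Freundlich isotherm: adsorbed amount q = k_f * c**(1/n),
    with c the equilibrium dye concentration and k_f, n empirical constants."""
    return k_f * c ** (1.0 / n)

def pseudo_second_order(t, k2, q_e):
    """Integrated pseudo-second-order kinetics:
    q(t) = k2*q_e**2*t / (1 + k2*q_e*t), which tends to q_e as t grows."""
    return (k2 * q_e ** 2 * t) / (1.0 + k2 * q_e * t)

# Demonstration with made-up constants:
print(round(freundlich(10.0, k_f=2.5, n=2.0), 2))             # ~7.91
print(round(pseudo_second_order(1e6, k2=0.05, q_e=40.0), 2))  # approaches q_e = 40.0
```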
Showing items related by title, author, creator and subject.
Size exclusion chromatography as a tool for natural organic matter characterisation in drinking water treatment. Allpike, Bradley (2008). Natural organic matter (NOM), ubiquitous in natural water sources, is generated by biogeochemical processes in both the water body and in the surrounding watershed, as well as from the contribution of organic compounds ...
Khor, Ee Huey (2011). The accuracy of oil-in-water analysis for produced water is increasingly crucial as the regulations for disposal of this water are getting more stringent world wide. Currently, most of the oil producing countries has ...
Incidence, risk factors and estimates of a woman's risk of developing secondary lower limb lymphedema and lymphedema-specific supportive care needs in women treated for endometrial cancer. Beesley, V.; Rowlands, I.; Hayes, S.; Janda, M.; O'Rourke, P.; Marquart, L.; Quinn, M.; Spurdle, A.; Obermair, A.; Brand, A.; Oehler, M.; Leung, Yee-Hong; McQuire, L.; Webb, P. (2015). Objectives: Few studies have assessed the risk and impact of lymphedema among women treated for endometrial cancer. We aimed to quantify cumulative incidence of, and risk factors for developing lymphedema following treatment ...
Configuration infrastructure for RUNC network and hosts
This is a collection of scripts which eases the life of a system administrator. There is a single configuration file for all managed machines and a number of utilities which generate uniform settings for many services, including:
- Networking (DHCP, forward and reverse DNS, iptables port forwarding and access rules, Nginx reverse proxy etc);
- Monitoring (generate Nagios config based on all declared hosts and services);
- Computation services (SLURM configs and distcc machine hosts are dynamically generated with a proper declaration of machine resources, allowing all available computing power to be exploited);
- SSH known hosts for secure access (all machines automatically and securely discover each other's public keys without any administrator intervention and generate a single shell script which manages keys for all hosts);
- Centralised configuration of managed hosts. Puppet is heavily used and manifests are derived from templates and the main configuration file. This ensures consistent system configuration on a number of hosts;
- various useful utilities, e.g. quickly wake up a host with Wake-on-LAN by host name or alias, or build a network map using information from the config file and ARP tables on network devices.
The structure of the project is as follows:
cfg/: this is the directory with the main config file (conf.yaml) and various templates for other files. Templates are written using the Jinja2 templating language. The main configuration file uses very convenient and concise YAML syntax;
nagios/: this is where some files useful for [Nagios](http://www.nagios.org/) reside;
parse/: this directory contains Python code for parsing and validating the main config file and other auxiliary files;
gen/: this directory contains Python code for generating a lot of things (config files, scripts etc.) using templates and the state parsed from the main config file;
puppet/: this directory contains [Puppet](http://puppetlabs.com/puppet/what-is-puppet) modules and other Puppet-specific stuff.
util/: some useful stuff which does not belong to the categories listed above resides here; mainly there are a bunch of Bash scripts.
There are two entry points. Main.py is a Python script for executing the generation process; it only operates on templates and other text and does not touch the system in any way. On the other hand, do.sh is a Bash script (a wrapper around Main.py) which does the actual work of generating the configuration of various services and takes care of reload/restart signaling when it is necessary. It contains a number of shell functions which may be used separately or in batch mode. Every function in this file does some piece of configuration in such a way that a service is touched only if it is necessary. This enables the following scenario for the administrator: edit conf.yaml, check, commit, then run do.sh, and it will automatically apply all relevant changes.
The main configuration file
The main configuration file (conf.yaml) consists of a number of sections, each of which represents a list of entities of some kind:
- People. This is a list of nicknames along with their associated contact information, useful for e-mail notifications from monitoring, automatic updates and other services.
- Defaults. This section contains a bunch of default values for other sections, such as the default monitoring host IP address or the default network prefix in host descriptions (which may be omitted to avoid duplication).
- Networks. This is a list of IPv4 (and, in future, maybe IPv6) networks known to the system. Each network may have some attributes, e.g. rdns means that we should build a reverse DNS zone for this network, while dhcp defines a range of IP addresses that should be used by the DHCP server for the dynamic IP address pool, and so on.
- Groups. Groups define properties common for a number of hosts to avoid duplication and provide more structure. Their second purpose is to add some grouping for hosts to services that support grouping functionality natively (for example, Nagios or SLURM). Every group has unique name, may contain a list of hosts (which are defined using Python regular expression syntax) or other groups, and may define a list of properties which should apply to all contained entities.
- Hosts. The rest of the main config file is a long list of hosts known to the system. Each host has a host name as the only required attribute, which goes first in its description. It may have some shorter names called aliases, which help to identify it with one or two letters in many places. A host may have an IP address, a number of MAC addresses associated with it, and a dictionary of properties. Host properties define some host-specific data, for example who admins that host (see People above), which services this host runs, what UDP ports should be forwarded to that host or what UPS powers it. Properties may have arbitrary structure convenient for a specific purpose (for example, running services are most naturally expressed as a list). The names and semantics of properties are defined by templates and the generation code which uses them. Unsupported properties are quietly skipped.
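As an illustration of the structure described above, a hypothetical conf.yaml fragment might look like the following. Every key name, address and property here is invented for the example and not taken from a real deployment:

```yaml
# Hypothetical sketch only; key names, addresses and properties are
# invented to illustrate the section layout described above.
people:
  - alice: {mail: alice@example.org}

defaults:
  network-prefix: 192.168.1

networks:
  - 192.168.1.0/24:
      rdns: yes
      dhcp: [100, 200]        # dynamic pool range within the network

groups:
  - compute:
      hosts: ["node\\d+"]     # Python regular expression
      properties: {admin: alice}

hosts:
  - node1:
      aliases: [n1]
      ip: 11                  # expands with the default network prefix
      mac: ["00:11:22:33:44:55"]
      properties:
        services: [ssh, slurmd]
        udp-forward: [60000]
```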
As the main config file is read and written entirely by humans, there are two main thoughts which constitute its philosophy:
- All terms should be self-describing. If someone sees something which looks like a MAC address, that should be so. And if there is https in some place, you shouldn't have to look through the documentation to find out what it is.
- Duplication should be avoided as much as possible. Less duplication means less code to read and, consequently, fewer errors. Group functionality, although somewhat odd and complicated, was specifically developed for this purpose. Also there are host aliases, default network prefixes, regular expressions and generally beautiful and clean YAML markup, which all serve this purpose.
|Scientific Name:||Spiranthes brevilabris Lindl.|
|Taxonomic Notes:||Spiranthes brevilabris var. floridana was elevated to the species rank, therefore Spiranthes brevilabris reviewed here excludes S. floridana.|
|Red List Category & Criteria:||Endangered B2ab(ii,iv) ver 3.1|
|Assessor(s):||Treher, A., Sharma, J., Frances, A. & Poff, K.|
Spiranthes brevilabris is listed as Endangered because the species has an area of occupancy (AOO) of 40 km2. Historically, Spiranthes brevilabris occurred along the Atlantic and Gulf Coastal Plain from Texas east to Florida but it is now extant only in Texas and Florida, resulting in extremely fragmented locations. One historical subpopulation is known in Georgia (last seen early 1900s) (J.G. Chafin pers. comm. 2009). The total number of historical or extirpated subpopulations is unknown but it includes at least one each in Georgia and Louisiana, at least three in Texas, a few in Florida, and possibly one in each of Alabama and Mississippi. This indicates a decline in both area of occupancy and number of locations. Recent discoveries (Sharma 2013) expanded the species range in Texas, increasing the handful of previously known subpopulations to between 10 and 25. Subpopulations occur in a county-maintained roadside ditch, National Park land, and on private lands.
|Range Description:||This species is thought to have occurred on the Atlantic and Gulf Coastal Plain from Texas east to Florida. The only known extant subpopulations are in Texas and Florida. The largest remaining subpopulation is in Levy County, Florida.|
Native: United States (Alabama - Possibly Extinct, Florida, Georgia - Possibly Extinct, Louisiana - Possibly Extinct, Mississippi - Possibly Extinct, Texas)
|Range Map:||Click here to open the map viewer and explore range.|
Currently, 10 to 25 subpopulations of Spiranthes brevilabris are believed to be extant. As of 2000, just a single extant subpopulation of this taxon was known, in Levy County, Florida (FNA 2002). In 2007, another extant subpopulation was discovered in Walker County, Texas (Keith 2007). Keith's report (2007) of this discovery notes that "only two other extant sites are known for the species, both occurring in Florida," so it is possible that there is also a second, as-yet unmapped extant subpopulation in Florida. Since 2009, new subpopulations were discovered in San Jacinto, Walker, and Polk Counties in Texas.
One historical subpopulation is known in Georgia (last seen early 1900s) (L.G. Chafin, personal communication). The total number of historical or extirpated subpopulations is unknown but it includes at least three other sites in Texas (Keith 2007), some in Florida, at least one in Louisiana (Kartesz 1999), and possibly one in each of Alabama and Mississippi (FNA 2002). Plant counts at the Florida subpopulation were between 38-127 plants between 1998 and 2002; however, in 2003, another 1,000 plants appeared at the site due to "improvements in mowing schedule." In 2007, 22 plants were counted at a subpopulation in Walker County, Texas (Keith 2007) and 25 plants were counted at the San Jacinto County, Texas site in 2009 (J. Singhurst, personal communication). A subpopulation discovered in Texas in 2013 had thousands of individuals.
|Current Population Trend:||Unknown|
|Habitat and Ecology:||Spiranthes brevilabris occurs in sandy soil in moist prairies including blackland/Fleming prairies in Texas (calcareous prairie pockets surrounded by pines). It is also known from pine-hardwood forest, open pinelands, wetland pine savannahs/flatwoods, and dry to moist fields, meadows and roadsides. It occurs from zero to 100 m asl (FNA 2002).|
|Continuing decline in area, extent and/or quality of habitat:||Yes|
|Use and Trade:||This species is not documented in the horticulture trade but it might be wild collected, illegally, for the niche or specialty market or private collections.|
|Major Threat(s):||Very little information is available on threats for this species. Habitat loss and degradation through development and agriculture are the primary threats. Habitat conversion has also led to changes in land management including changes in disturbance regimes including fire and agricultural practices such as mowing and herbicide use. The species may be wild collected for specialty markets or private collections.|
General conservation actions currently in place across the species range include surveying potential habitat for new subpopulations and monitoring known subpopulations for status of threats, site condition and abundance of plants. One site in Florida has responded well to a planned mowing schedule that works with the life cycle of the plant. Replicating this management plan at other sites might be beneficial if that is a threat. Exposure to herbicides should be eliminated, especially at roadside sites.
Some subpopulations occur on National Forest Land. This species is listed on CITES Appendix II (CITES 2015).
|Citation:||Treher, A., Sharma, J., Frances, A. & Poff, K. 2015. Spiranthes brevilabris. The IUCN Red List of Threatened Species 2015: e.T64176923A64176934. Downloaded on 15 July 2018.|
|Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please provide us with feedback so that we can correct or extend the information provided| | <urn:uuid:46cbdd8e-985b-4ce5-8fd4-f1e847c01c1b> | 2.953125 | 1,235 | Knowledge Article | Science & Tech. | 39.476215 | 95,611,200 |
A rectangular field with dimensions 889 m and 1336 m yielded 6235 q of wheat last year (1 q = 1 quintal = 100 kg). During the year it was necessary to repair a pipe, so a strip 3 m wide was dug up parallel to the 1336 m side of the field, where nothing can be grown. By what percentage can the harvest be expected to decrease next year?
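A quick sketch of the expected calculation, assuming the strip runs the full 1336 m length so that only its area is lost:

```python
# Sketch of the expected answer, assuming the only loss is the 3 m wide
# strip running parallel to (and for the full length of) the 1336 m side.
length, width, strip = 1336.0, 889.0, 3.0

lost_fraction = (strip * length) / (width * length)   # the 1336 m factor cancels
percent_drop = 100.0 * lost_fraction

print(round(percent_drop, 2))   # ~0.34 % less harvest expected
```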
Solving this example requires the following knowledge of mathematics:
Next similar examples:
- Iron fence
One field of an iron fence consists of 20 iron rods with a square cross-section of 1.5 cm side, each 1 meter long. How much does one field weigh if the density of iron is 7800 kg/m3?
- Air mass
What is the weight of the air in a classroom with dimensions 10 m × 0 m × 2.5 m? The air density is 1.293 kg/m3.
A brick has a volume of 2.2 dm3. How many bricks can a truck with a capacity of 23 tonnes carry? The density of brick is 1.6 g/cm3.
In the human body, blood makes up about 7.2% of body weight. How many kilograms of blood are in a human body with a weight of 93 kg?
- Cu thief
A thief stole 122 meters of copper wire with a cross-section area of 95 mm2. Calculate how much money he gets at the scrap redemption if copper is redeemed at 5.5 EUR/kg. The density of copper is 8.96 t/m3.
The lorry was loaded with 18 boxes of 15 kg each. How many boxes weighing 18 kg can be loaded if the total load must be the same?
- DIY press
Under the socialist regime in some socialist countries, owning a typewriter required special permission. That hindered the spread of DIY literature (manually transcribed through carbon copy paper on typewriters). Calculate how many typewriters today.
A grower harvested 190 kg of apples. He harvested 10 times fewer pears. a) How many kg of pears did he harvest? b) How many apples and pears did he harvest together? c) How many fewer kg of pears than apples did he harvest?
Imagine a word solidarity means that salt donation to the needy, who have neither the salt. If we take word solidarity has word base salt + gift (only in Slovakian language). Calculate how many kilos of salt sympathetic citizen "gives" government a year i
In stock are three kinds of branded coffee at these prices: I. kind......6.9 USD/kg II. kind......8.1 USD/kg III. kind.....10 USD/kg Mixing these three kinds in the ratio 8:6:3 creates a mixture. What will be the price of 1100 grams of this mixture?
What output power must a pump have to move 4853 hl of water to a height of 31 m in 8 hours?
One brick weighs 6 kg plus half a brick. What is the weight of one brick?
- The horses
A pair of horses consumes 88 kg of oats in 14 days. How much oats will 7 horses consume in 6 days at the same rate of consumption?
From one tonne of coal, 772 kg of coke is produced for iron production. How many 13-tonne wagons of coal per day are needed for a blast furnace with a daily consumption of 1020 tonnes of coke?
- Weight of air
What is the weight of the air in a living room measuring 5 m in width, 2 m in length and 2.8 m in height? The air density is ρ = 1.2 kg/m3.
- Birthday party
For her youngest son's birthday party, mother bought 6 3/4 kg of hotdogs and 5 1/3 dozen bread rolls. Hotdogs cost 160 per kilogram and a dozen bread rolls cost 25. How much did she spend in all?
- Drunk man
A drunk man has 1.6‰ of alcohol in his blood. How many grams of alcohol does he have in his blood if he has 7 kg of blood?
How to program in CyLogo
CyLogo is a programming language for beginners. It teaches you programming fundamentals. The result of a program is a picture you can see. Knowledge of CyLogo makes learning other programming languages easier.
How to Start
Press the <F4> button on the Shortcut bar above the screen, or open the Applications Desktop and choose the CyLogo icon. Press <Enter>. The CyLogo intro screen will appear. Press any key and the intro screen will disappear. You'll see the dialog box "Open CyLogo File", which allows you to create a new program or to open an existing one (see Fig.1).
Figure 1. Dialog box "Open CyLogo File"
Actions in the "Open CyLogo File" box
- lets you create a new file.
List of files - contains files that were created before in alphabetical order.
Figure 2. Actions list
Upload - sends the file to the selected person.
Rename - changes the filename.
Delete - deletes the file from the list.
View - loads and executes the file.
Copy to - makes a copy of the file.
Move to - moves the file to the desired destination.
Open - opens a file for editing.
Your Cybiko will connect wirelessly to your friend's Cybiko and announce that you want to send the file.
Note: Your CyFriend will find the uploaded file in his/her Uploader&File Manager.
This file will be deleted from the old flash and you will find it on the new one.
Note: All CyLogo files have the extension *.lg.txt. You can view CyLogo files in the Uploader & File Manager application: just select a file with the *.lg.txt extension and press <Enter>.
This is very important! You can perform any operation or action with your CyLogo files which the application Uploader&File Manager allows you to execute.
Figure 3. Action list
Run - executes a part of the program (or the whole program) from the first unexecuted operator to the end.
Note: The first unexecuted operator is marked by inverting its colors. If there are no marked operators, or the first one is marked, it means that no operators have been executed.
Go to - executes a part of the program from the first unexecuted operator up to, but not including, the operator where the pointer is located. If the operator where the pointer is located precedes the first unexecuted one, then the part of the program from the first unexecuted operator to the end will be executed.
Step - executes the first unexecuted operator.
At the very beginning, when no operators have been executed, no operator is marked. The first press of Step marks the first operator. Each subsequent Step executes the following operator.
The first press of <Esc> removes the mark from the operator that is currently marked.
If an erroneous operator is encountered while executing the program, you will see a message describing the error. In that case, the pointer will move to the erroneous operator and the mark will be removed from the currently marked one. The program will then be executed from the very beginning.
Show - shows the result of all executed operators (see Fig.4):
Figure 4. The result of the program
Press any key to move to the CyLogo Screen.
Quit - exits the program.
You can also use the keys for the following actions:
Notes on the OOP features of the Jeroo language
Object oriented programming defines objects
- Each Jeroo you create is an _______________. An object has ________________
- For example a Jeroo has:
, ___________________ , ________________
- An object has _________________ (things it can do)
Classes = definitions of objects. When you create a new object it is _______________.
A class describes 3 things:
- What kind of _______________ is in an object
- How to make a ______ object of that class (constructor)
- Try this Example: Jeroo quentin = new Jeroo(10,7,EAST,1)
- Holding ____ flower/s, Location = row ____, column ____, Direction = ________
- The default constructor creates a Jeroo at location ________ facing ___________ with ____ flowers: Jeroo andy = new Jeroo();
- The _____________ of an object (the actions it can perform)
- Example: andy.hop() Jeroos can hop, pick flowers, plant flowers, toss nets, and turn.
Syntax is important!
Jeroo andy = new Jeroo(2,5,SOUTH,3); andy is an _________ , Jeroo is the _________
Sending messages to objects
We “talk” to objects to tell them what to do.
This is called _________________________to the object
A message looks like this: object . method ( extra information )
- The __________ is the thing we are talking to
- The ____________ is a name of the action we want the object to take
- The ___________________ is anything required by the method in order to do its job. (parameter)
Messages or methods are used to:
- ______ an object some information
- ______ an object to do something
- ______ an object for information (usually about itself)
Using OOP syntax: Rewrite the following English statements as object oriented messages
- Tell the woman named Clarissa to walk 2 steps ____________________________
- Tell the Jeroo named Alan to turn to the right ______________________________
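To tie the constructor and message syntax together, here is a short Jeroo-style fragment. The scenario (starting position, flower count, and the sequence of messages) is invented for illustration; the methods are the ones listed above:

```
// Hypothetical scenario: create a Jeroo and send it three messages.
Jeroo kim = new Jeroo(3, 4, EAST, 2);   // row 3, column 4, facing EAST, 2 flowers

kim.hop();        // tell kim to do something: hop one cell forward
kim.plant();      // plant one of the flowers kim is holding
kim.turn(LEFT);   // kim was facing EAST, so it now faces NORTH
```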
Vocabulary Review (watch out! the order is mixed up!)
- _________: something that you “say” to an object, either telling it something or asking it for information
- _________: an instance, or member, of a class
- _________: the type, or description, of an object
- _________: extra information sometimes needed when sending a message.
- _________: a way to create an object. comes as part of the class definition. | <urn:uuid:9bce89ad-71b2-4b0e-954a-258b9255bf1f> | 3.765625 | 572 | Documentation | Software Dev. | 58.35561 | 95,611,221 |
Interior Atomic Structure Helps Promote Adhesion
The force with which two surfaces adhere to one another can depend on the composition of the material below the surface, as discovered by physicists working with Karin Jacobs and Peter Loskill at Saarland University in Germany and by researchers in Kellar Autumn’s team at Lewis & Clark College in Portland (Oregon, USA), who carried out systematic measurements of adhesive forces.
Geckoes are well known to be the largest animals that can walk across a ceiling. To enable them to do this, the reptiles have under their toes millions of fine hairs, each of which has around one hundred tiny spatulate thickenings at the tips. These spatulate endings come into very close contact with the surface that the gecko is walking on and are attracted by the molecular forces of the surface. The teams of researchers from Saarbrücken and Portland, who previously studied the adhesive forces of bacteria and proteins on surfaces, were able to demonstrate that even an animal as large as a gecko can detect the composition of the material below the surface.
For their experiments into depth sensitivity, the researchers carefully removed the hairs from the toes of a tokay gecko. (The hairs regrow when the gecko next sheds its skin.) They bunched the hairs together and bonded them to the tip of a highly sensitive dynamometer. This was then pulled across the surface of a silicon disk which had a silicon dioxide coating that varied in thickness. The resulting friction and attraction forces were measured with a high level of precision.
New description of adhesive forces
On the basis of these findings, the researchers have developed a new description of the adhesive forces of surfaces, which takes into consideration for the first time the structure of the material under the surface. “Until now, the adhesive forces have always been derived from the surface energy. This is a property of the outermost atomic layers which are close to the surface and reach down to a depth of around one nanometre,” says Karin Jacobs. “But our new description also relates to the molecular van der Waals force from deeper layers.”
The experiments with the hairs from the gecko’s toes have demonstrated that, as a result of the van der Waals force, the atomic structure in the interior of the material also has an effect on the surface in the form of macroscopically detectable differences in the adhesive forces. The scientists in Saarbrücken and Portland have therefore coined the new term “subsurface energy” to describe how the material below the surface contributes to these forces.
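The "subsurface energy" idea can be made quantitative with a common first-order approximation for the van der Waals interaction (per unit area) between a flat contact and a film of thickness T on a bulk substrate: W(D) ≈ -A_film/(12πD²) - (A_sub - A_film)/(12π(D+T)²). The sketch below uses rough, illustrative Hamaker constants for a silica film on silicon; the numbers are ballpark values for demonstration, not data from the study.

```python
import math

# First-order layered-substrate approximation for the vdW adhesion energy
# per unit area between a flat contact and a film of thickness T (m) on a
# bulk substrate, at contact distance D (m).  Hamaker constants (J) are
# rough illustrative values, not measured ones.
A_SIO2 = 6.5e-20   # silica film
A_SI   = 2.0e-19   # silicon substrate (larger -> stronger attraction)

def adhesion_energy(D, T):
    """W(D) ~ -A_film/(12*pi*D^2) - (A_sub - A_film)/(12*pi*(D+T)^2)."""
    return (-A_SIO2 / (12 * math.pi * D ** 2)
            - (A_SI - A_SIO2) / (12 * math.pi * (D + T) ** 2))

D = 0.3e-9  # ~0.3 nm contact separation
thin, thick = adhesion_energy(D, 1e-9), adhesion_energy(D, 100e-9)
print(thin < thick < 0)  # thinner oxide -> more negative W -> stronger adhesion
```

The comparison reproduces the article's point: with a thin coating, the substrate's (larger) Hamaker constant still contributes, so the measured adhesion depends on what lies below the surface.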
The studies have been published in the Journal of the Royal Society Interface.
For more information, please contact: Professor Karin Jacobs, k.jacobs@ physik.uni-saarland.de | <urn:uuid:f0cf0f5a-3b45-4680-a568-74517197c6f1> | 3.75 | 576 | Truncated | Science & Tech. | 32.133955 | 95,611,222 |
A 200 g ball hits a wall perpendicularly with a velocity of 20 m/s. If the collision lasts milliseconds, what is the average force exerted by the ball on the wall? © BrainMass Inc. brainmass.com July 21, 2018, 3:20 pm
Hello and thank you for posting your question to Brainmass
You don't give the actual time, so I use t = 1 ms. If you have a different value for the time of collision, just substitute a different number in the final equation to find the force.
Newton's second law states that the force is the change in momentum ...
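Following the responder's assumption of t = 1 ms, the arithmetic can be sketched as follows. Whether the ball stops or rebounds elastically is not stated in the problem, so both readings are shown:

```python
# Average force = |change in momentum| / collision time (impulse-momentum theorem).
m, v, t = 0.200, 20.0, 1e-3    # kg, m/s, s  (t = 1 ms assumed, as in the answer above)

f_stop    = m * v / t          # if the ball simply stops:    |dp| = m*v
f_rebound = 2 * m * v / t      # if it bounces straight back: |dp| = 2*m*v

print(f_stop, f_rebound)       # roughly 4000 N and 8000 N
```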
Calculation of the force of a collision. | <urn:uuid:529ca6c0-ff1c-4619-89c4-778321d71543> | 2.875 | 150 | Q&A Forum | Science & Tech. | 75.795061 | 95,611,271 |
The huge West Antarctic ice sheet would collapse completely if the comparatively small Amundsen Basin is destabilized, scientists of the Potsdam Institute for Climate Impact Research find. A full discharge of ice into the ocean is calculated to yield about 3 meters of sea-level rise. Recent studies indicated that this area of the ice continent is already losing stability, making it the first element in the climate system about to tip.
The new publication for the first time shows the inevitable consequence of such an event. According to the computer simulations, a few decades of ocean warming can start an ice loss that continues for centuries or even millennia.
“What we call the eternal ice of Antarctica unfortunately turns out not to be eternal at all,” says Johannes Feldmann, lead author of the study to be published in the Proceedings of the National Academy of Sciences (PNAS). “Once the ice masses get perturbed, which is what is happening today, they respond in a non-linear way: there is a relatively sudden breakdown of stability after a long period during which little change can be found.”
“A few decades can kickstart change going on for millennia”
This is what is expressed by the concept of tipping elements: pushed too far, they fall over into another state. This also applies to, for instance, the Amazon rainforest, and the Indian Monsoon system. In parts of Antarctica, the natural ice-flow into the ocean would substantially and permanently increase.
Ocean warming is slowly melting the ice shelves from beneath, those floating extensions of the land ice. Large portions of the West Antarctic ice sheet are grounded on bedrock below sea level and generally slope downwards in an inland direction. Ice loss can make the grounding line retreat, thereby exposing more and more ice to the slightly warmer ocean water – further accelerating the retreat.
“In our simulations 60 years of melting at the presently observed rate are enough to launch a process which is then unstoppable and goes on for thousands of years,” Feldmann says. This would eventually yield at least 3 meters of sea-level rise. “This certainly is a long process,” Feldmann says. “But it’s likely starting right now.”
The greenhouse-gas emission factor
“So far we lack sufficient evidence to tell whether or not the Amundsen ice destabilization is due to greenhouse gases and the resulting global warming,” says co-author and IPCC sea-level expert Anders Levermann, also from the Potsdam Institute. “But it is clear that further greenhouse-gas emission will heighten the risk of an ice collapse in West Antarctica and more unstoppable sea-level rise.”
“That is not something we have to be afraid of, because it develops slowly,” concludes Levermann. “But it might be something to worry about, because it would destroy our future heritage by consuming the cities we live in – unless we reduce carbon emission quickly.”
Article: Feldmann, J., Levermann, A. (2015): Collapse of the West Antarctic Ice Sheet after local destabilization of the Amundsen Basin. Proceedings of the National Academy of Sciences (PNAS, Online Early Edition) [DOI: 10.1073/pnas.1512482112]
Weblink to the article once it is published: www.pnas.org/cgi/doi/10.1073/pnas.1512482112
For further information please contact:
PIK press office
Phone: +49 331 288 25 07
Mareike Schodder | Potsdam-Institut für Klimafolgenforschung
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
13.07.2018 | Event News
13.07.2018 | Materials Sciences
13.07.2018 | Life Sciences | <urn:uuid:9b31fcf0-f9e4-4421-803d-b095cb4d7932> | 3.984375 | 1,381 | Content Listing | Science & Tech. | 42.461302 | 95,611,303 |
Go blueprints: code for common tasks
These code examples are intended to help you quickly solve some common everyday tasks in Go. There are also a few oddities that may be nice to have when writing more exotic code.
2 basic FIFO queue implementations
How to implement a FIFO queue in Go. Use a slice for a temporary queue. For a long-lived queue you should probably use a dynamic data structure, such as a linked list.
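A minimal sketch of the slice-based version (the `Dequeue` helper is an illustrative name, not taken from the linked post):

```go
package main

import "fmt"

// Dequeue removes and returns the front element of q.
// The returned slice shares backing memory with the input, so a
// long-lived queue can retain old elements; container/list avoids that.
func Dequeue(q []string) (string, []string) {
	return q[0], q[1:]
}

func main() {
	queue := []string{"a"}
	queue = append(queue, "b", "c") // enqueue at the back
	front, queue := Dequeue(queue)  // dequeue from the front
	fmt.Println(front, queue)       // a [b c]
}
```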
2 basic set implementations
How to implement a hashset in Go using built-in maps with boolean or empty struct values.
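One possible shape, using empty-struct values (the `Set` type and its methods are illustrative names):

```go
package main

import "fmt"

// A set of strings backed by a map with empty-struct values
// (zero bytes per value; map[string]bool also works and reads nicer).
type Set map[string]struct{}

func (s Set) Add(v string)           { s[v] = struct{}{} }
func (s Set) Contains(v string) bool { _, ok := s[v]; return ok }

func main() {
	s := Set{}
	s.Add("go")
	s.Add("go") // adding a duplicate is a no-op
	fmt.Println(s.Contains("go"), s.Contains("rust"), len(s)) // true false 1
}
```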
A basic stack (LIFO) implementation
How to implement a stack or LIFO queue in Go using a slice and the append function.
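A sketch of the slice-based stack (`Pop` is an illustrative helper):

```go
package main

import "fmt"

// Pop removes and returns the top (last) element of the slice.
func Pop(s []int) (int, []int) {
	n := len(s) - 1
	return s[n], s[:n]
}

func main() {
	var stack []int
	stack = append(stack, 1, 2, 3) // push with append
	top, stack := Pop(stack)       // pop by re-slicing
	fmt.Println(top, stack)        // 3 [1 2]
}
```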
Access environment variables
Use the Setenv, Getenv, Unsetenv and Environ functions to read and write environment variables.
Access private fields with reflection
How to read unexported fields in a struct using reflection in Go.
Bitmasks and flags
A bitmask is a small set of booleans, often called flags, represented by the bits in a single number.
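A sketch of the pattern (the flag names are made up):

```go
package main

import "fmt"

type Flags uint8

const (
	FlagRead  Flags = 1 << iota // 1
	FlagWrite                   // 2
	FlagExec                    // 4
)

func (f Flags) Has(bit Flags) bool { return f&bit != 0 }

func main() {
	f := FlagRead | FlagWrite // set two flags with OR
	f &^= FlagWrite           // clear one with AND NOT
	fmt.Println(f.Has(FlagRead), f.Has(FlagWrite)) // true false
}
```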
Check if a number is prime
To check if a number is prime in Go use the ProbablyPrime function from package math/big.
Go as a scripting language: lightweight, safe and fast
How to write a basic command-line (CLI) application in Go.
You can access command-line arguments passed to a Go program through the os.Args variable.
Compute absolute value of an int/float
Write your own code to compute the absolute value of an integer, but use math.Abs for floating point numbers.
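A sketch; the int version is hand-written because `math.Abs` only takes a float64:

```go
package main

import (
	"fmt"
	"math"
)

// Abs for integers. Going through float64 would lose precision
// above 2^53, so write it directly.
func Abs(x int64) int64 {
	if x < 0 {
		return -x // caveat: overflows for math.MinInt64
	}
	return x
}

func main() {
	fmt.Println(Abs(-7), math.Abs(-2.5)) // 7 2.5
}
```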
Compute max of two ints/floats
Write your own code to compute the minimum and maximum of integers, but use math.Min and math.Max for floating point numbers.
Format byte size as kilobytes, megabytes, gigabytes, ...
Utility functions for converting byte size to human-readable format. The code supports both SI and IEC formats.
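A sketch of the IEC (binary-unit) variant; the function name and exact formatting are illustrative. Swap 1024 for 1000 and drop the "i" to get SI units:

```go
package main

import "fmt"

// ByteCountIEC formats b using binary (IEC) units: B, KiB, MiB, ...
func ByteCountIEC(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(ByteCountIEC(1536))    // 1.5 KiB
	fmt.Println(ByteCountIEC(2 << 30)) // 2.0 GiB
}
```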
3 simple ways to create an error
How to create simple string-based errors and custom error types with data.
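A sketch showing all three: `errors.New`, `fmt.Errorf`, and a custom type carrying data (the `SyntaxError` type is illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// A custom error type carries data; implementing Error() string
// satisfies the built-in error interface.
type SyntaxError struct {
	Line int
	Msg  string
}

func (e *SyntaxError) Error() string {
	return fmt.Sprintf("line %d: %s", e.Line, e.Msg)
}

func main() {
	e1 := errors.New("plain string error")
	e2 := fmt.Errorf("value %d out of range", 42)
	var e3 error = &SyntaxError{Line: 7, Msg: "unexpected '}'"}
	fmt.Println(e1, "|", e2, "|", e3)
}
```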
Create a new image
To generate a png image programmatically in Go use the image, image/color, and image/png packages.
Enum with String function
How to declare an enumeration and give it a string representation.
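For example:

```go
package main

import "fmt"

type Weekday int

const (
	Sunday Weekday = iota
	Monday
	Tuesday
)

// String makes Weekday satisfy fmt.Stringer, so Println prints the name.
func (d Weekday) String() string {
	names := [...]string{"Sunday", "Monday", "Tuesday"}
	if d < Sunday || d > Tuesday {
		return "Unknown"
	}
	return names[d]
}

func main() {
	fmt.Println(Monday) // Monday
}
```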
Find build version
Use runtime.Version to find the current Go build version.
Generate all permutations
How to generate all permutations of a slice or string in Go.
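One way to do it, with swap-based recursion (the `Perms` helper is illustrative): swap each remaining element into position k, recurse, then swap back.

```go
package main

import "fmt"

// Perms returns all permutations of s.
func Perms(s []int) [][]int {
	var res [][]int
	var rec func(k int)
	rec = func(k int) {
		if k == len(s) {
			res = append(res, append([]int(nil), s...)) // copy current order
			return
		}
		for i := k; i < len(s); i++ {
			s[k], s[i] = s[i], s[k]
			rec(k + 1)
			s[k], s[i] = s[i], s[k] // backtrack
		}
	}
	rec(0)
	return res
}

func main() {
	fmt.Println(Perms([]int{1, 2, 3})) // prints all 6 permutations
}
```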
Hash checksums: MD5, SHA-1, SHA-256
To compute the hash value of a string or byte slice, use the Sum function from package crypto/md5, crypto/sha1 or crypto/sha256. For a file or input stream, create a Hash object and write the data to it (a Hash implements io.Writer).
HTTP server example
How to implement a basic HTTP web server in Go with client requests and responses.
How to iterate in Go
Iterator pattern: How to write iterators and generators in Go.
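One common pattern is a closure-based generator (names are illustrative): each call to the returned function yields the next value.

```go
package main

import "fmt"

// countFrom returns a generator that yields n, n+1, n+2, ...
func countFrom(n int) func() int {
	return func() int {
		v := n
		n++
		return v
	}
}

func main() {
	next := countFrom(10)
	fmt.Println(next(), next(), next()) // 10 11 12
}
```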
Maximum value of an int
The max and min values of an int can be computed as untyped constants.
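For example (`^uint(0)` has all bits set; shifting right by one gives the largest int). Newer Go releases (1.17+) also provide `math.MaxInt` directly:

```go
package main

import "fmt"

const (
	MaxUint = ^uint(0)
	MaxInt  = int(MaxUint >> 1)
	MinInt  = -MaxInt - 1
)

func main() {
	fmt.Println(MaxInt, MinInt) // platform-dependent: 32- or 64-bit values
}
```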
Round float to 2 decimal places
How to round a float to a string/float with 2 decimal places in Go.
Round float to integer value
How to round a float64 to the nearest integer: round away from zero, round to even number, convert to an int type.
Table-driven unit tests
How to write a table-driven unit test for binary search in Go.
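A sketch of the table-driven shape, using a plain `main` so it runs standalone; in a real `_test.go` file the loop body would call `t.Errorf` instead of printing (`Abs` is a stand-in function under test):

```go
package main

import "fmt"

func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	// Each row is one test case; adding a case is a one-line change.
	tests := []struct{ in, want int }{
		{-2, 2},
		{0, 0},
		{5, 5},
	}
	failed := 0
	for _, tc := range tests {
		if got := Abs(tc.in); got != tc.want {
			fmt.Printf("Abs(%d) = %d, want %d\n", tc.in, got, tc.want)
			failed++
		}
	}
	fmt.Println(failed, "failures")
}
```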
The 3 ways to sort in Go
How to sort any type of data in Go using the sort package. All algorithms in the package perform O(n log n) comparisons in the worst case.
See the String handling cheat sheet for all about basic string manipulation in Go.
Follow on Twitter
One useful golang fact per day
Share this page: | <urn:uuid:f2b56355-aaaa-45ab-ad05-f70482ff17ac> | 2.953125 | 820 | Content Listing | Software Dev. | 56.4642 | 95,611,307 |
Differing contributions of freshwater from glaciers and streams to the Arctic and Southern oceans appear to be responsible for the fact that the majority of microbial communities that thrive near the surface at the Poles share few common members, according to an international team of researchers, some of whom were supported by the National Science Foundation (NSF).
In a paper published in the Oct. 8 edition of the Proceedings of the National Academy of Sciences (PNAS), the researchers report that only 25 percent of the taxonomic groups identified by genetic sequencing that are found at the surface of these waters are common between the two polar oceans. The differences were not as pronounced among microbes deeper in the oceans, with a 40 percent commonality for those populations.
The findings were produced by research supported by NSF during the International Polar Year 2007-2009 (IPY), a global scientific deployment that involved scientists from more than 60 nations. NSF was the lead U.S. agency for the IPY.
"Some of the DNA samples were collected during "Oden Southern Ocean 2007-2008," a unique collaborative effort between NSF's Office of Polar Programs and the Swedish Polar Research Secretariat to perform oceanographic research in the difficult-to-reach and poorly studied Amundsen Sea," said Patricia Yager, a researcher at the University of Georgia and a co-author on the paper.
The Oden cruise was among the first IPY deployments. In addition, some of the samples used in the research were gathered as part of NSF's Life in Extreme Environments Program.
The Polar regions often are described as being, in many ways, mirror images of one another--the Arctic being a ocean surrounded by continental landmasses, while Antarctica is a continent surrounded by an ocean--but the new findings add a biological nuance to those comparisons.
"We believe that differences in environmental conditions at the poles and different selection mechanisms were at play in controlling surface and deep ocean community structure between polar oceans," said Alison Murray of the Desert Research Institute in Reno, Nev., and a co-author on the PNAS paper. "Not surprisingly, the Southern and Arctic oceans are nearest neighbors to each other when compared with communities from lower latitude oceans."
One of the most notable differences in environmental conditions between the two polar oceans is freshwater input. In the Southern Ocean, glacial melt water accounts for most of the freshwater that flows into the systems. In contrast, the Arctic Ocean receives much bigger pulses of freshwater from several large river systems with huge continental drainage basins, in addition to glacial melt water.
The group found that the differences between the poles were most pronounced in the microbial communities sampled from the coastal regions. "This likely is a result of the significant differences in freshwater sourcing to the two polar oceans," said Jean-François Ghiglione, lead author of the article and professor at the Observatoire Océanologique in Banyuls-Sur-Mer, France.
While the surface microbial communities appear to be dominated by environmental selection, such as through the freshwater inputs, the deep communities are more constrained by historical events and connected through oceanic circulation, providing evidence for biogeographically defined communities in the global ocean, according to the authors.
The team compared 20 samples from the Southern Ocean and 24 from the Arctic from both surface and deep waters. They also included an additional 48 samples from lower latitudes to investigate the polar signal in global marine bacterial biogeography.
The researchers specifically compared samples from coastal and open oceans and between winter and summer, to test whether or how environmental conditions and dispersal patterns shape communities in the polar oceans. Samples were processed and analyzed using an identical approach, based on a special technique of DNA sequencing called pyrosequencing, involving more than 800,000 sequences from the 92 samples.
"Our analyses identified a number of key organisms in both poles in the surface and deep ocean waters that are important in driving the differences between the communities," Murray said. "Further research is needed to address the ecological and evolutionary processes underlying these patterns."
The collaborative research was the result of an international effort coordinated by Murray, that involved national polar research programs from six countries--Canada, France, New Zealand, Spain, Sweden and the United States. Support for the work also came from the Sloan Foundation's Census of Marine Life program, which stimulated field efforts at both poles and a separate program targeting marine microbes, the International Census of Marine Microbes, that developed the approach and conducted the sequencing effort.
"The collective energies required to bring this study to fruition were remarkable," Murray said. "Through using similar strategies and technologies from sample collection through next- generation sequencing, we have a highly comparable, unprecedented dataset that for the first time has really allowed us to look in depth across a relatively large number of samples into the similarities of the microbial communities between the two polar oceans."Media Contacts
Peter West | EurekAlert!
Their detailed maps show the ‘local’ cosmos out to a distance of 600 million light years, identifying all the major superclusters of galaxies and voids. They also provide important clues regarding the distribution of the mysterious ‘dark matter’ and ‘dark energy’ which are thought to account for up to 96% of the apparent mass of the Universe.
Figure: The reconstructed density fields in the supergalactic coordinates (SGX, SGY). In this coordinate system, the equator is aligned with the Virgo Cluster, Great Attractor and Perseus-Pisces superclusters. The main overdensities are Hydra-Centaurus (centre-left), Perseus-Pisces (centre-right), the Shapley Concentration (upper left) and Coma (upper-middle).
Within this vast volume, the most massive galaxy supercluster is 400 million light years away. It was named after its identifier, the American astronomer Harlow Shapley. The Shapley supercluster is so big that it takes light at least 20 million years to travel from its one end to the other. However, Shapley is not the only massive supercluster in our vicinity.
The Great Attractor supercluster, which is three times closer than Shapley, plays a bigger role in the motion of our Galaxy. According to the team, our Milky Way galaxy, its sister galaxy Andromeda and other neighbouring galaxies are moving towards the Great Attractor at an amazing speed of about a million miles per hour. The researchers also established that the Great Attractor is indeed an isolated supercluster and is not part of Shapley.
The new maps are based on the observation that, as the Universe expands, the colours of galaxies change as their emitted light waves are stretched or “redshifted”. By measuring the extent of this redshift, astronomers are able to calculate approximate distances to galaxies.
The new survey, known as the 2MASS Redshift Survey (2MRS), has combined two dimensional positions and colours from the Two Micron All Sky Survey (2MASS), with redshifts of 25,000 galaxies over most of the sky. These redshifts were either measured specifically for the 2MRS or they were obtained from an even deeper survey of the southern sky, the 6dF Galaxy Redshift Survey (6dFGS).
The great advantage of 2MASS is that it detects light in the near-infrared, at wavelengths slightly longer than the visible light. The near-infrared waves are one of the few types of radiation that can penetrate gases and dust and that can be detected on the Earth’s surface. Although the 2MRS does not probe as deeply into space as other recent narrow-angle surveys, it covers the entire sky.
Galaxy redshift surveys are only able to detect luminous matter. This luminous matter accounts for no more than a small fraction of the total matter in the Universe. The remainder is composed of a mysterious substance called ‘dark matter’ and an even more elusive component named ‘dark energy’.
“We need to map the distribution of dark matter rather than luminous matter in order to understand large-scale motions in our Universe,” explained Dr. Pirin Erdogdu (Nottingham University), lead author of the paper. “Fortunately, on large scales, dark matter is distributed almost the same way as luminous matter, so we can use one to help unravel the other.”
Her collaborator, Dr. Thomas Jarrett from Caltech, added, “The other advantage of observing in the near-infrared wavelength is the fact that it traces directly the luminous matter, and thus the dark matter, as well.”
“Our nearly two decade effort has produced the absolute best ever map of the nearby Universe,” said Prof. John Huchra of Harvard University. “With this we hope to elucidate the nature and disposition of dark matter and understand much, much more about our cosmological model and about galaxies themselves.”
In order to map the dark matter probed by the survey, the team used a novel technique borrowed from image processing. The method was partly developed by Prof. Ofer Lahav, a co-author of the paper and head of the astrophysics group at University College London. The technique utilizes the relationship between galaxy velocities and the total distribution of mass.
“It is like reconstructing the true street map of London just from a satellite image of London taken at night. The street lights, like the luminous galaxies, act as beacons of the underlying roads,” said Prof. Lahav.
"This extraordinarily detailed map of the Milky Way’s cosmic neighbourhood provides a benchmark against which theories for the formation of structure in the Universe can be tested,” commented Prof. Matthew Colless, director of the Anglo-Australian Observatory and leader of the 6dF Galaxy Survey.
“In the near future, the predicted motions derived from this map will be confronted with direct measurements of galaxies’ velocities obtained by the 6dF Galaxy Survey, providing a new and stringent test of cosmological models.”
Dr. Pirin Erdogdu | alfa
A few weeks back I was reading a blog post about the concurrency limitations in Ruby (which we have all been aware of for a long time) and how Elixir is evolving. That made me extremely curious about this new dynamic functional programming language "Elixir", the two-decades-old Erlang language, and the Erlang Virtual Machine (VM) known for running low-latency, distributed and fault-tolerant systems. This article is a result of my curiosity about Elixir and Erlang.
This article does not cover (i.e. it is out of scope) the installation steps for Elixir & Erlang on Mac, Ubuntu or Windows machines, as plenty of help is already available online.
What is Elixir?
Elixir is a functional programming language. Functional programming promotes a coding style that helps developers write code that is short, fast, and maintainable. Elixir has been designed to be extensible, letting developers naturally extend the language to particular domains, in order to increase their productivity. Elixir leverages the Erlang VM. José Valim is the creator of the Elixir programming language. His goals were to enable higher extensibility and productivity in the Erlang VM while keeping compatibility with Erlang's tools and ecosystem.
What is Erlang?
The Erlang VM and its standard library were designed by Ericsson in the 80’s for building distributed telecommunication systems. The decisions they have done in the past continue to be relevant to this day. As far as I know, Erlang is the only runtime and Virtual Machine used widely in production designed upfront for distributed systems.
On one of the blog, it was mentioned that
Erlang is a Ferrari wrapped in the sheet metal of an old beater. It has immense power, but to many people, it looks ugly. It has been used by WhatsApp to handle billions of connections, but many people struggle with the unfamiliar syntax and lack of tooling. Elixir fixes that. It is built on top of Erlang, but has a beautiful and enjoyable syntax, with tooling like mix to help build, test and work with applications efficiently.
It's a well-known fact that Ruby is not a good language for building concurrent code. MRI, the main Ruby interpreter, has a global lock that prevents any code from running in parallel. It is true that other implementations like JRuby and Rubinius don't have that global lock, but they are still not optimal for concurrency, because the language itself is not designed for it.
Ruby objects are mutable, so it has state. When the code is running in parallel, this may lead to strange behaviour and bugs due to unexpected race conditions unless we are very careful.
The basic data-types (arrays, strings, hashes) are not prepared for parallel read/write. To solve this problem there is a gem called hamster that provides a set of real immutable structures, but still, is not part of the core. The vast majority of the libraries are not still thread safe.
In summary, people who are serious about concurrency tend to use truly state-less programming languages. Functional programming shines at this, and one of the most popular functional programming languages is Erlang.
Concurrency is Elixir's key to performance. It uses lightweight processes running on all of cores and additionally it has a very nice syntax similar to Ruby.
This is an excellent video of José Valim explaining why Elixir - https://vimeo.com/53221562
It's around an hour long, and José takes us through the journey of why he wrote Elixir and the goals and inspiration behind developing it.
In brief, his video highlights following goals and motivation behind developing Elixir
Goal 1 - Productivity is the top goal for developing Elixir
Goal 2 - Extensibility
Goal 3 - Compatibility
Extras - Use power of concurrency for example: running tests in parallel.
What is Phoenix?
Phoenix is a framework for building HTML5 apps, API backends and distributed systems. Written in Elixir, you get beautiful syntax, productive tooling and a fast runtime. Phoenix builds on top of Elixir to create very low latency web applications, in an environment that is still enjoyable. Response times in Phoenix are often measured in microseconds instead of milliseconds.
What is Functional Programming?
The most popular language for parallel and distributed programming is Erlang -- a functional language. An even better candidate for parallel programming is Haskell, which supports a large variety of parallel paradigms.
In functional programming all data is immutable. Functional programming involves writing code that does not change state. The primary reason for doing so is that successive calls to a function will yield the same result. Why do we want successive calls to a function to yield the same result? First, if a function always returns the same result for the same inputs, then you can cache that result and return it if the function is called again. Second, if a function does not return the same result, that means it relies on some external state of the program, and as such the function can't be easily precomputed or parallelised.
In functional programming languages you have first-class functions (functions that can be passed around just like any other value) and higher-order functions (functions that take other functions as arguments). In first-class functions you can pass functions as parameters to other functions and return them as values
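Go is not a functional language, but it does have first-class and higher-order functions, which makes the two ideas easy to illustrate (the `Map` and `makeAdder` names below are made up for the example):

```go
package main

import "fmt"

// Map is higher-order: it takes a function as an argument.
func Map(xs []int, f func(int) int) []int {
	out := make([]int, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

// makeAdder returns a function as a value: functions are first-class.
func makeAdder(n int) func(int) int {
	return func(x int) int { return x + n }
}

func main() {
	add10 := makeAdder(10)
	fmt.Println(Map([]int{1, 2, 3}, add10)) // [11 12 13]
}
```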
So C is a non-functional programming language? Yes. C is a procedural language. However, you can get some of the benefit of functional programming by using function pointers and void * in C.
Functional languages are of two types, pure and impure. Haskell is considered pure, whereas Erlang and Elixir are considered impure functional languages. Refer to this Wikipedia page listing programming languages by type: https://en.wikipedia.org/wiki/List_of_programming_languages_by_type
A few well-known functional languages are F#, Clojure, Haskell, Scala, Erlang and Elixir.
How hard is functional programming?
It definitely requires some getting used to. We'll have to learn to replace loops with recursion, to map and fold lists, traverse data structures without iterators, do input/output using monads, and many other exciting things. All these techniques have value on their own. As programmers we constantly learn new tricks, and functional programming is just another bag of tricks.
Why am I interested in Elixir?
Here are some key reasons for my interest in Elixir
- Erlang, performance and parallel processing - tried and tested for more than two decades in the telecom domain.
- Concurrency - It can be done in Ruby but its wild west and not that great. Concurrency is basically heart of everything done in Elixir due to Erlang.
- Compiler instead of interpreter - but yes still dynamically typed and not statically typed.
- Since Elixir leverages Erlang, the processes are not OS processes - but instead they are much more lightweight. 100K on a single Erlang instance is not uncommon, millions per instance are doable.
- Defined, well understood pattern for handling state and message passing among processes.
- 100 bytes per process in Erlang / Elixir (2 MB per process in Java or other platforms.)
- José Valim and Rails background - Lot of synergy in developing apps.
- Phoenix - Web framework equivalent to Rails.
- Experts vouching for the language and its power in distributed environment.
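The lightweight-process and message-passing points above can be sketched in a few lines of Elixir; spawn/1, send/2 and receive are the core primitives:

```elixir
# Spawn a lightweight Erlang VM process (not an OS process) and
# exchange a message with it.
parent = self()

child =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(child, {:ping, parent})

receive do
  :pong -> IO.puts("got pong")
after
  1_000 -> IO.puts("timed out")
end
```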
It's standing on the shoulders of giants, in this case the Erlang VM: a battle-proven VM that makes it really easy to build highly scalable, fault-tolerant, distributed systems. A perfect fit for web applications. It already has a great set of libraries for web development: Plug (a spec for composable web app modules, think Rack/WSGI), Ecto (a library to query and interact with databases) and Phoenix (a fully-featured web framework). All of them have reached 1.0 or are about to, and they're incredibly well-designed and powerful.
Great tooling, out of the box: mix (build tool and much more), hex (package manager), ex_unit (unit-testing framework), ex_docs (generate documentation from the code), and others.
Fantastic community: this is a very special attribute of Elixir, and one that the creators and core contributors of the language are putting a lot of effort into. They're extremely available (IRC, Slack, mailing list, SO, etc.), they always respond very promptly to questions, issues and PRs, they give a lot of feedback, and they take a lot of time explaining the concepts of the language to beginners. As a result the whole community behaves in the same way; it's quite impressive what they've achieved.
I have been on the Elixir Slack channel for more than one week now, and I have personally experienced these great things about the Elixir community. I have also posted, at the bottom of this article, one of my chats from that Slack channel.
A few special mentions about Elixir: the very strong metaprogramming capabilities, the |> (pipe) operator, and its pattern matching.
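A quick sketch of the last two of those: the |> operator threads a value through a chain of calls, and pattern matching destructures data directly in bindings:

```elixir
# |> passes the result of each expression as the first argument
# of the next call, so transformations read top to bottom:
result =
  "hello elixir world"
  |> String.split()
  |> Enum.map(&String.capitalize/1)
  |> Enum.join(" ")

IO.puts(result)  # prints "Hello Elixir World"

# Pattern matching destructures data in plain bindings:
{:ok, value} = {:ok, 42}
[head | _tail] = [1, 2, 3]
IO.inspect({value, head})  # {42, 1}
```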
My interest in the compiler and how things work
Here is a quick depiction of how Elixir works internally (taken from one of the videos by José)
Ruby on Rails vs. Elixir ecosystem?
- iex is for Elixir just like irb is for Ruby; iex stands for Interactive Elixir.
- Atoms in Elixir are Symbols in Ruby. (A few other basic data types, like integers and floats, are common to both.)
- In Elixir we write functions that transform data structures and return new copies, instead of objects that hide state behind an implementation.
- I will list down more soon.
Platform & Elixir Language features in brief
Elixir data types are immutable. By being immutable, Elixir code is easier to reason about, as you never need to worry whether a particular piece of code is mutating your data structure in place. Immutability also helps eliminate common cases where concurrent code has race conditions because two different entities try to change a data structure at the same time. Reference to Elixir basic data types - http://elixir-lang.org/getting-started/basic-types.html
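A small sketch of what immutability means in practice: "updating" a map or list returns a new value and leaves the original untouched:

```elixir
map = %{name: "Elixir", year: 2012}
updated = Map.put(map, :year, 2015)

IO.inspect(map.year)      # 2012 - the original map is unchanged
IO.inspect(updated.year)  # 2015 - a new map was returned

list = [1, 2, 3]
longer = [0 | list]  # prepending builds a new list; `list` is untouched
IO.inspect({list, longer})  # {[1, 2, 3], [0, 1, 2, 3]}
```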
Elixir ships with a great set of tools to ease development. Mix is a build tool that allows you to easily create projects, manage tasks, run tests and more. Mix is also able to manage dependencies and integrates nicely with the Hex package manager, which provides dependency resolution and the ability to remotely fetch packages.
Coexistence with the Erlang ecosystem is a huge benefit. Elixir is not trying to re-invent the wheel: it lets you call Erlang modules and functions directly from Elixir code, so there is complete interoperability between Erlang and Elixir and vice versa.
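For example, Erlang modules are referenced as atoms in Elixir, so Erlang's standard library is callable directly with no wrapper:

```elixir
# Erlang's :math and :lists modules, called from Elixir:
IO.inspect(:math.sqrt(16.0))           # 4.0
IO.inspect(:lists.reverse([1, 2, 3]))  # [3, 2, 1]

# VM introspection via :erlang (the count varies per run):
IO.inspect(:erlang.system_info(:process_count))
```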
Extensibility and DSLs
This wikipedia document lists out key features of Elixir very well - https://en.wikipedia.org/wiki/Elixir_(programming_language)
When you come from an object-oriented point of view - as most developers do - picking up OTP (Open Telecom Platform) principles, for all their concurrency and fault-tolerance glory, takes time; at least I needed more than a few introductions. Some things that are no-brainers in OOP need serious thinking in a functional language. But usually, most problems can be tackled with less code in Elixir compared to, say, Java.
Elixir in Production
One of the things that amazed me the most was the range of business domains Elixir is being used in: from web development to game platforms to embedded devices.
Here is a Blog by Plataformatec about Elixir in Production - http://blog.plataformatec.com.br/tag/elixir/
Elixir Slack Channel
Here is a link to Elixir Slack Channel. https://elixir-slackin.herokuapp.com/
It has roughly 1300+ registered users, with usually 70+ active at a given time. I am already part of this Slack channel.
Question: What happened to Dynamo? I was watching an Elixir tutorial on YouTube which mentioned Dynamo was the web framework for Elixir, but I saw that Dynamo was put in maintenance-only mode. Why? Is Phoenix the replacement for Dynamo now?
Answer: Dynamo was José's playground for what a web framework written in Elixir would look like. And Phoenix is now the "blessed" web framework.
Question: Why would you call Elixir from Erlang?
Answer: If you release an Elixir library that does something awesome and an Erlang person wants to use it. It's pretty much the same reason you would call Erlang from Elixir.
Here is a query I posted on the Elixir Slack channel, and the quick and detailed responses I received.
rohan [1:11 PM]
Hello everyone. I come from non FP (Functional Programming) background. I have generally done Procedural coding using C and lot of OOP using C++ and Ruby. So my question is - do we really need to learn Scala or Haskell first to understand FP better or starting with Elixir is also fine? Though I have done some FP using function pointers in C but not much.
gjaldon [1:12 PM]
starting with Elixir is great, I think
tallakt [1:13 PM]
@rohan: From my experience you shoul rather start with Elixir before Scala and Haskell
gjaldon [1:13 PM]
Scala has a mix of FP and OOP so that could get confusing
tallakt [1:13 PM]
They are all interesting, but Elixir is the simpler mental model IMHO
gjaldon [1:13 PM]
I agree with @tallakt ^
rohan [1:14 PM]
sure @gjaldon and @tallakt - thanks :simple_smile: this is helpful. Thanks for your quick response and inputs on my query.
olivermt [1:18 PM]
unless you are interested in getting lost in a maze of types, I also highly suggest starting with elixir :wink:
rohan [1:19 PM]
sure @olivermt I actually started with Elixir few days ago, but while reading few blogs on FP I noticed tips about starting with Haskell and Scala, thus I was curious to know from Elixir community here. :simple_smile:
olivermt [1:20 PM]
the pattern matching of elixir kinda alleviates a lot of the need for strict typing, so in my oppinion elixir is the way best choice for a FP newbie
rohan [1:21 PM]
good to hear that @olivermt thanks for your inputs.
rohan [1:24 PM]
Is this true about FP ?? "Functional programming promotes a coding style that helps developers write code that is short, fast, and maintainable." Read it on one of the blog. Doesn't it depend on a developer how good or bad they write in their language of choice. Or how FP promotes these things ?
olivermt [1:25 PM]
in elixir its promoted by the |> operator
so easy to chain small components
makes the code very readable
and since you implement one function per set of inputs using parameter pattern matching, its all very compmentpartmentalized
god damn, did I actually spell that last word correctly??
anyhow, it makes you work harder to write bad code
because writing it the right way is almost always the easiest way to do it
the huge GOD FUNCTION of java where you send in a map reference and some magic happens, that just isnt possible in elixir
you are force to always consider the boundaries of your data
what goes in, what comes out
I really cant see anyone trying to make a 200 line function in elixir, it just doesnt feel natural
gjaldon [1:27 PM]
and FP code is less complex because you don’t have to deal with state like you do in OOP
there’s no complex class hierarchies
your functions are pure and do not change state in other parts of your application like some methods in OOP do
rohan [1:30 PM]
very interesting. sounds great @olivermt and @gjaldon
tallakt [1:36 PM]
@rohan: believe the hype! in all honesty, I think it is partially true since Elixir has immutability, and that OOP turned out not such a great idea after all... in particular with respect to concurrency. FP will make you rethink the way you program in a big way, and you will be a better programmer for it
rohan [1:53 PM]
Sure, Thanks @tallakt
Elixir and Micro-services - http://blog.plataformatec.com.br/2015/06/elixir-in-times-of-microservices/
Hex packages list (880 in total while writing this article) - https://hex.pm/packages
Should I use Elixir or Go? - http://www.quora.com/Concurrency-computer-science/Should-I-use-Go-Erlang-or-Elixir-for-a-mobile-app-backend
Latest Video on Elixir by José Valim @ Erlang User Conference 2015 - https://www.youtube.com/watch?v=QXcedVc2LQM
The Power of Erlang, the Joy of Ruby by Dave Thomas - https://www.youtube.com/watch?v=lww1aZ-ldz0