CNIDARIA : ACTINIARIA : Haloclavidae | SEA ANEMONES AND HYDROIDS
Description: This burrowing sea anemone has an elongated, sausage-shaped column with a rounded base, sometimes with adherent sand grains. There are 12 tentacles, long in extension (only when the animal is buried). From one angle of the mouth a small lobed projection (conchula) arises. The column is pale flesh-coloured, pink, or buff, mottled with darker buff or brown, with a ring of white markings just below the disc. The disc and tentacles are patterned with cream and brown; specimens with plain white discs are common. Length of column up to 300 mm in full extension; span of tentacles to 120 mm. A true burrowing anemone without an adhesive basal disc; the column has sticky spots which can adhere to solid objects. When not buried it may assume a variety of shapes, from almost spherical to elongate. The tentacles remain short unless the column is buried.
Habitat: Lives buried, unattached, in sand or gravel. Occurs on the shore at LWST or offshore down to at least 100 m.
Distribution: All coasts of the British Isles; fairly common offshore but only occasional between tide-marks. Throughout western Europe, including the Mediterranean.
Similar Species: Halcampa chrysanthellum and Halcampoides elongatus are also burrowing anemones with 12 tentacles (short in Halcampa), but both lack a conchula; the tentacles of Halcampoides elongatus are very long and unpatterned.
Picton, B.E. & Morrow, C.C. (2016). Peachia cylindrica (Reid, 1848). [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=D13190 Accessed on 2018-07-18. Copyright © National Museums of Northern Ireland, 2002-2015.
I am having trouble understanding why, when maleic acid is heated to about 100 °C, it forms maleic anhydride, but fumaric acid requires a much higher temperature (250-300 °C) before it dehydrates. And why does it, too, form only maleic anhydride? © BrainMass Inc. brainmass.com, July 19, 2018. The physical properties of maleic acid are very different from those of fumaric acid. Maleic acid is soluble in water whereas fumaric acid is not, and the melting point of maleic acid (130-139 °C) is also much ... The solution explains the reason for the question asked by the student in detail.
Common name: Legskates
A family of skates found on the lower continental shelf and slopes of the Atlantic, Pacific and Indian oceans. Species were previously included in the family Rajidae. A single genus and three species are known from Australian waters. Cite this page as: Dianne J. Bray, Legskates, ANACANTHOBATIDAE in Fishes of Australia, accessed 18 Jul 2018, http://fishesofaustralia.net.au/Home/family/217
Last, P.R. & Compagno, L.J.V. 1999. Families Rajidae, Arhynchobatidae, Anacanthobatidae. pp. 1452-1466 in Carpenter, K.E. & Niem, V.H. (eds). The Living Marine Resources of the Western Central Pacific. FAO Species Identification Guide for Fisheries Purposes. Rome : FAO Vol. 3 pp. 1397-2068.
Last, P.R. & Séret, B. 2008. Three new legskates of the genus Sinobatis (Rajoidei: Anacanthobatidae) from the Indo-West Pacific. Zootaxa 1671: 33-58.
Last, P.R. & Stevens, J.D. 2009. Sharks and Rays of Australia. Collingwood : CSIRO Publishing Australia 2, 550 pp.
Dinosaur Named for Rock Guitarist / Scientists say his music brought luck on dig Paleontologists have discovered a dinosaur with strange, hooked teeth and named the vicious creature after rocker Mark Knopfler. The team of researchers from Utah and New York chose the guitarist and songwriter from Dire Straits because they listened to a lot of the group's music while digging under the hot tropical sun on the island of Madagascar. The report of the discovery will be published tomorrow in the British science journal Nature. Dire Straits' music seemed to bring the team of paleontologists good luck, said Scott Sampson, who led the researchers. "Paleontology is largely serendipity. Whenever we played the music of Dire Straits, we'd have a good day," he said. The bones of the small dinosaur, about the size of a German shepherd, were discovered by Sampson's team. The island off the southeastern coast of Africa has already yielded some extraordinary fossil finds, but the latest discovery appears unique in the dinosaur world of the late Cretaceous period, about 65 million to 70 million years ago. "We know it had strange and bizarre teeth, the strangest of any dinosaur," Sampson said. "The teeth at front are weird. They're long and conical with hooked tips. They protrude straight forward, so it might be easier to catch fish, but it might be used to spear insects or some other animal." The dinosaur, christened Masiakasaurus knopfleri (pronounced mah-SHEE-kah-sawr-us nawp-FLAIR-ee), appears to be related to other fossils found in Argentina and India, suggesting that Madagascar was connected to the ancient supercontinent of Gondwana for longer than previously believed. The first part of the name for the new dinosaur is derived from "masiaka," the Malagasy word for "vicious." Knopfler said through a spokeswoman that he considered it an honor to have a dinosaur named after him. "I'm really delighted," he said. 
"The fact that it's a dinosaur, is certainly apt, but I'm happy to report that I'm not in the least bit vicious." The dinosaur was probably 5 to 6 feet long -- mostly neck and tail -- and it probably weighed about 80 pounds. It was probably as fast as a dog but ran on its hind legs like other meat-eating dinosaurs, Sampson said. There is evidence its diet covered a broad range of prey, including fish, insects, lizards, snakes and ancient mammals, which were no bigger than a shrew, he said. "The most important aspect of this animal is that it underscores the fact that, contrary to popular opinion, we still don't know everything about dinosaurs," Sampson said. Though the dinosaur is no longer with us, Knopfler still is; his latest solo album, "Sailing to Philadelphia," was released last year to critical acclaim.
Silver Perch (Bidyanus bidyanus) ACTION PLAN No. 26 In accordance with section 21 of the Nature Conservation Act 1980, the Silver Perch (Bidyanus bidyanus) was declared an endangered species on 4 September 2001 (Instrument No. 192 of 2001). Section 23 of the Act requires the Conservator of Flora and Fauna to prepare an Action Plan in response to each declaration. This is the Action Plan for the Silver Perch (Bidyanus bidyanus). The Nature Conservation Act 1980 establishes the ACT Flora and Fauna Committee with responsibilities for assessing the conservation status of the ACT’s flora and fauna and the ecological significance of potentially threatening processes. Where the Committee believes that a species or ecological community is threatened with extinction or a process is an ecological threat, it is required to advise the Minister for the Environment, and recommend that a declaration be made accordingly. Flora and Fauna Committee assessments are made on nature conservation grounds only and are guided by specified criteria as set out in its publication ‘Threatened Species and Communities in the ACT’, July 1995. In making its assessment of the Silver Perch, the Committee concluded that it satisfied the criteria indicated in the adjacent table. An Action Plan is required in response to each declaration. It must include proposals for the identification, protection and survival of a threatened species or ecological community, or, in the case of a threatening process, proposals to minimise its effect. The Flora and Fauna Committee will conduct an evaluation of the progress made in implementing this Action Plan every three years (see page 9 for details). This is due to first take place in 2004, which will bring it into line with the review of progress in implementing Action Plans for other declared aquatic items. While the legal authority of this Action Plan is confined to the Australian Capital Territory, management considerations are addressed in a regional context. 
1.2 The species is observed, estimated, inferred or suspected to be at risk of premature extinction in the ACT region in the near future, as demonstrated by:
1.2.1 Current severe decline in population or distribution from evidence based on:
- Direct observation, including comparison of historical and current records.
- Severe decline in rate of reproduction or recruitment; severe increase in mortality; severe disruption of demographic or social structure.
- Very high actual or potential levels of exploitation or persecution.
- Severe threats from herbivores, predators, parasites, pathogens or competitors.
Links with other Action Plans
Measures proposed in this Action Plan complement those included in the Action Plans for other threatened aquatic species, such as the Two-spined Blackfish (Gadopsis bispinosus), Trout Cod (Maccullochella macquariensis), Macquarie Perch (Macquaria australasica) and Murray River Crayfish (Euastacus armatus). Action Plans are listed at the end of this document.
Species Description and Ecology
The Silver Perch Bidyanus bidyanus is a member of the family Terapontidae, which contains the freshwater grunters or perches. The family contains about 22 species in eight genera in Australian freshwaters, of which one species, the Silver Perch, is found in the ACT and surrounding area. The majority of terapontids occur in northern Australian streams. B. bidyanus is a moderate to large fish (maximum length about 500 mm and maximum weight around 8 kg) which commonly reaches 300-400 mm and 0.5-1.5 kg in rivers (Figure 1). The body is elongate and slender in juvenile and immature fish, becoming deeper and compressed in adults. The head is relatively small, the jaws are equal in length, and the eyes and mouth are small. The scales are thin and small (compared to Macquarie Perch or Golden Perch) and the tail is weakly forked. The lateral line follows the profile of the back. 
Colour is generally silvery grey to black on the body, with the dorsal, anal and caudal fins also grey. The pelvic fins are whitish (Merrick 1996, Merrick & Schmida 1984). B. bidyanus is found over a broad area of the Murray-Darling Basin and is often found in similar habitats to Murray Cod (Maccullochella peelii) and Golden Perch (Macquaria ambigua), i.e. lowland, turbid rivers. Some reports suggest that B. bidyanus prefers faster, open water, but the general scarcity of information on the habitat preferences of the species makes generalisation difficult. The species is not found in the cool, fast-flowing, upland rivers of the Murray-Darling Basin.
BEHAVIOUR AND BIOLOGY
B. bidyanus is slow-growing and long-lived in rivers, with a greatest age of 17 years recorded from the Murray River and 27 years recorded from Cataract Dam. A 1.4 kg fish could be 17 years old (Mallen-Cooper et al. 1995, 1997). Growth rates in dams are much faster, with a 2.3 kg fish from Googong Reservoir being approximately 6 years old (M. Lintermans unpublished data). B. bidyanus matures at 3–5 years and spawns in spring and summer after an upstream migration. Fish school in large numbers during the upstream migration, and research conducted at Torrumbarry Weir demonstrated that large numbers of immature fish were part of this migration (Mallen-Cooper et al. 1997). This species is bred artificially in a number of government and commercial hatcheries and is widely stocked in farm dams and reservoirs; however, it rarely breeds in impoundments. The species is currently the subject of considerable interest in terms of its potential as an aquaculture species (Kibria et al. 1998). B. bidyanus is omnivorous, consuming aquatic plants, snails, shrimps and aquatic insect larvae. Reports that the species becomes mainly herbivorous once it reaches a length of 250 mm are incorrect, at least for lake populations, as its diet in Googong Reservoir shows little change with fish size (M. 
Lintermans unpublished data). Formerly widespread over much of the Murray-Darling Basin (excluding the cooler upper reaches), the species has declined over most of its range. Numbers of B. bidyanus moving through a fishway at Euston Weir on the Murray River declined by 93% between 1940 and 1990 (Mallen-Cooper 1993). The ACT probably represented the upstream limit of the species’ distribution in the Murrumbidgee catchment, but it could not be considered a vagrant because it was a regular component of the recreational fishery. In the Canberra region the species has been recorded from the Murrumbidgee River, where numbers recorded in a fish trap at Casuarina Sands between 1980 and 1991 declined noticeably from the mid 1980s (Lintermans 2000). Monitoring of the Murrumbidgee fishery in the ACT since 1994 has failed to capture any B. bidyanus (Lintermans 1995, 1997, 1998). In the last decade there have been a small number of angler reports of B. bidyanus from the Murrumbidgee River in the ACT. Formerly a ‘run’ of B. bidyanus from Lake Burrinjuck migrated upstream to the lower reaches of the Murrumbidgee River in the ACT in spring/summer, but this migration has not been recorded since the late 1970s/early 1980s (Lintermans 2000). In the ACT, B. bidyanus has not been recorded further upstream than Kambah Pool (Lintermans 2000). There have been occasional angler reports of B. bidyanus from the Murrumbidgee River at Bredbo, but these are thought to have originated from releases into local farm dams. Greenham (1981) reported anecdotal angler records of B. bidyanus from the Molonglo River in the 1940s and 1950s but no contemporary records are known from this river (other than stocked fish). There are no records of the species from the Paddys, Naas, or Gudgenby Rivers. There are occasional angler records of B. bidyanus from the Queanbeyan River below Googong Reservoir but these fish are assumed to be stocked fish displaced downstream from the reservoir. 
In the Canberra region B. bidyanus is also known from four other locations. These are: - a stocked population in Googong Reservoir on the Queanbeyan River; - a stocked population in the Yass weir pool on the Yass River; - a stocked population in Lake George; and - a population of unknown size in Burrinjuck dam (which is supplemented/maintained by stocking by NSW Fisheries). B. bidyanus is also regularly stocked into farm dams by landholders in the Canberra region. B. bidyanus is recognised as a threatened species in the following sources: In August 2000, the Australian Society for Fish Biology Threatened Fishes Subcommittee listed B. bidyanus as nationally ‘vulnerable’ (under ASFB categories) and ‘endangered’ (under IUCN categories). However, there has been no formal nomination of B. bidyanus as a threatened species under the Commonwealth Environment Protection and Biodiversity Conservation Act 1999. A Recovery Plan for the species was prepared by Clunie and Koehn (2001a) for the Murray-Darling Basin Commission. The plan suggests that the species may satisfy the criteria for classification as ‘Critically Endangered’ under the IUCN categories.
Australian Capital Territory: Endangered—Section 21 of the Nature Conservation Act 1980, Disallowable Instrument No. 299 of 2001; Special Protection Status Species—Schedules 6 and 7 of the Nature Conservation Act 1980, Disallowable Instrument No. 42 of 2002.
New South Wales: Vulnerable—Schedule 5 of the Fisheries Management Act 1994.
Victoria: Threatened taxon—Schedule 2 of the Flora and Fauna Guarantee Act 1988. Cadwallader et al. (1984) listed B. bidyanus as ‘Vulnerable’ in Victoria and this categorisation was retained by Koehn and Morison (1990) when they reviewed the conservation status of Victorian fish. The species is currently listed as critically endangered in Victoria (NRE 2000).
Queensland: The species is considered ‘insufficiently known’ in Queensland (Wager 1993). 
Threats to Populations in the ACT Region
Alteration or destruction of fish habitat is widely regarded as one of the most important causes of native fish decline in Australia (Cadwallader 1978; Koehn and O'Connor 1990a,b; Lintermans 1991a; Hancock 1993) and overseas (Moberly 1993; Maitland 1987). Introduced fish species are also considered to have had an impact on populations of B. bidyanus nationally and locally. However, the specific contributions of these factors to the species’ decline are not well understood, as the threats are likely to have acted in concert. In an exercise to rank the threats to B. bidyanus, the members of the national recovery team considered the top three threats to the species to be alteration of flow regimes, barriers to fish movement, and introduced species (Clunie & Koehn 2001b).
ALTERATION OF FLOW REGIMES AND OTHER IMPACTS OF DAMS AND WEIRS
The construction of dams has a severe effect on the quality of fish habitat through the modification of the natural flow regimes and water quality of rivers below impoundments. The effect of some impoundments (e.g. Corin Reservoir and Lake Burrinjuck) on downstream river flows is to partially reverse the seasonal nature of flows, as water from spring and autumn rains is collected and stored for release in summer. Other impoundments, such as Bendora, Cotter and Googong reservoirs and Lake Burley Griffin, have a different impact in that insufficient water is released to maintain suitable environmental conditions in the river downstream. The quality of water released is also a problem in that it may be released from the lower levels of the reservoir and is much colder than the surface waters. The release of a cold slug of water during the breeding season is thought to inhibit spawning behaviour of B. bidyanus and other native fish species. The large areas of still water created by dams may also impact egg and early larval stages of B. bidyanus. 
The drifting semi-buoyant eggs and newly hatched larvae may settle in unfavourable habitats such as the backed-up waters of dams and weir-pools, making them susceptible to sedimentation and low oxygen levels.
BARRIERS TO FISH MOVEMENT
Construction of dams and weirs blocks fish passage and so prevents recolonisation of streams. Consequently, the construction of Burrinjuck dam in the early 1900s effectively isolated the upper Murrumbidgee catchment from downstream B. bidyanus populations. Similarly, the construction of Lake Burley Griffin in 1963 isolated the Molonglo and Queanbeyan rivers from the Murrumbidgee River and has prevented any recolonisation. The establishment of introduced fish species is often cited as a cause of native fish decline in Australia, although much of the evidence is anecdotal. This is because the majority of introduced species became established in the mid to late 1800s, when the distribution and abundance of native fish was poorly known or documented. Introduced fish species such as Carp (Cyprinus carpio) and Redfin Perch (Perca fluviatilis) have only recently become established in the Canberra region (Lintermans et al. 1990, Lintermans 1991b) and may compete for food with B. bidyanus, and P. fluviatilis may prey on juveniles of B. bidyanus. Another potentially serious impact of introduced species is their capacity to introduce or spread foreign diseases and parasites to native fish species. C. carpio or P. fluviatilis is considered to be the source of the Australian populations of the parasitic copepod Lernaea cyprinacea (Langdon 1989a). C. carpio, Goldfish (Carassius auratus) or Eastern Gambusia (Gambusia holbrooki) are implicated as the source of the introduced tapeworm Bothriocephalus acheilognathi, which has recently been recorded in native fish species (Dove et al. 1997). This tapeworm causes widespread mortality in juvenile fish overseas. The most serious threat from introduced fish species to B. 
bidyanus may lie in the impacts of an exotic disease, Epizootic Haematopoietic Necrosis Virus (EHNV). This virus, unique to Australia, was first isolated in 1985 from the introduced fish species P. fluviatilis (Langdon et al. 1986). It is characterised by sudden high mortalities of fish displaying necrosis of the renal haematopoietic tissue, liver, spleen and pancreas (Langdon and Humphrey 1987). Experimental work by Langdon (1989a,b) demonstrated that B. bidyanus was one of several species found to be extremely susceptible to the disease. EHNV was first recorded from the Canberra region in 1986 when an outbreak occurred in Blowering Reservoir near Tumut (Langdon and Humphrey 1987). Subsequent outbreaks have occurred in Lake Burrinjuck in late 1990, Lake Burley Griffin in 1991 and 1994, Lake Ginninderra in 1994 and Googong Reservoir, also in 1994 (Lintermans 2000). Its relative resistance to inactivation and the ease with which it can be transmitted from one geographical location to another on nets, fishing lines, boats and other equipment have aided the spread of EHNV. Langdon (1989b) found that the virus retained its infectivity after being stored dry for 113 days. Once EHNV has been recorded from a water body it is considered impossible to eradicate. The Murrumbidgee and Googong Reservoir populations of B. bidyanus have been exposed to the virus.
Reduction of instream habitat
In the ACT there has been little direct removal of instream habitat (such as the removal of logs from rivers and channelisation) as has occurred in lowland streams. Indirect causes of instream habitat reduction include sedimentation, clearing of riparian vegetation and the narrowing of stream channels below impoundments. Streams are often narrower and shallower below dams because of the storage capacity of the impoundments. 
Reduction in water quality
The major reductions in water quality most likely to have affected the species in the Canberra region are sediment addition and changes to thermal regimes, either from the operation of impoundments or from the clearing of riparian vegetation which shades streams.
Major Conservation Objectives
The major conservation objective of this Action Plan is to maintain, in the long term, viable wild populations of B. bidyanus as a component of the indigenous biological resources of the ACT and as a contribution to regional and national conservation of the species. This includes the maintenance of the species’ potential for evolutionary development in the wild. The objective is to be achieved through the following strategies:
- Improving understanding of the biology and ecology of the species as the basis for managing its habitat.
- Protecting sites and habitats that are critical to the survival of the species.
- Managing activities in the Murrumbidgee catchment in the ACT to minimise or eliminate threats to fish populations.
- Increasing community awareness of the need to protect fish and their habitats.
Conservation Issues and Intended Management Actions
The majority of riverine ecosystems in eastern Australia have been affected by human activity, with a resultant substantial modification of aquatic habitats. Significant effects on the rivers of the ACT region include irrigation extraction, dam construction and agricultural practices. Poor land management practices in the mid to late 1800s in the upper Murrumbidgee catchment resulted in extensive soil erosion and sediment addition to rivers. Clearing of the riparian zone also removed nearly all the large eucalypts which were previously common, so there remains no source of large woody debris (snags) to provide structural complexity and habitat diversity for fish and invertebrate populations.
- Environment ACT will investigate options for rehabilitating critical fish habitats. 
These options include the selective removal of sand to restore critical pool/riffle habitats and provision of additional cover such as snags or boulders. - Environment ACT will investigate mechanisms for rehabilitating and improving the protection of riparian vegetation along the Murrumbidgee River in the ACT. Rehabilitation of fish habitat is costly and therefore requires a significant commitment of funds. Environment ACT will seek opportunities to secure external funding partnerships. Increasing attention worldwide is being focussed on the need to provide water allocations for the environment. When the three impoundments on the Cotter River were constructed, little thought was given to how the abstraction or diversion of water would affect the animals living in the river. It is now known that to stimulate breeding activity, many native fish species require environmental stimuli or triggers such as an increase in water flow and water temperature. Reservoirs have severely disrupted downstream flow and temperature patterns, with consequent deleterious impacts for fish communities. To address these issues, the ACT Government has developed Environmental Flow Guidelines that prescribe minimum flows to be achieved in the Cotter River above and below Bendora Reservoir, and include provisions for baseline flows as well as providing higher flows in spring to encourage fish spawning. ActewAGL is responsible for the operation of ACT water supply reservoirs and the release of water from them. Provision of additional water and a more natural flow regime under the Environmental Flows Guidelines should contribute to enhanced fish habitat in the Cotter and downstream reaches of the Murrumbidgee River. - Environment ACT will liaise with ActewAGL to ensure that the appropriate flows under the Environmental Flows Guidelines are released from storages operated by the company. Knowledge of the distribution of B. bidyanus in the upper Murrumbidgee catchment is largely complete. 
However, the status of the Lake Burrinjuck population has not been assessed since the mid 1980s when concerns were expressed about the impacts of an expanding P. fluviatilis population (Burchmore and Battaglene 1990). As the ACT B. bidyanus population is thought to be largely dependent on the status of the Lake Burrinjuck population, further investigations in Lake Burrinjuck are necessary to place the ACT population into a regional context. - Environment ACT (Wildlife Research and Monitoring (WR&M)) will liaise with NSW Fisheries about the possibility of assessing the status of the Lake Burrinjuck B. bidyanus population. The decline of B. bidyanus in the Murrumbidgee River raises concerns about the long-term viability of this population. A long-term monitoring program capable of detecting changes in distribution and abundance of the species, which are outside the normal variation expected in these parameters in natural populations, is required. - Environment ACT (WR&M) will continue to monitor the fish population in the Murrumbidgee River in the ACT. Monitoring techniques will include those suited to detecting the presence of B. bidyanus. - Environment ACT (WR&M) will liaise with Victorian and NSW fisheries agencies to ensure that there is exchange of relevant information on the species. There is some existing information on the biology and ecology of B. bidyanus, (Mallen-Cooper 1994; Gehrke 1990; Guo et al. 1995; Lake 1967a,b; Reynolds 1983) although much of the information remains unpublished. Diet, movement and reproduction have all been studied to some degree, but many studies are conducted in aquaculture ponds or laboratories, with few ‘wild’ studies available (see Barlow et al. 1987; Rowland et al. 1983; Allan & Rowland 1992). However, there are still some critical knowledge gaps which need addressing. Effects of Introduced Carp and Redfin Perch The effects of introduced C. carpio and P. fluviatilis on B. 
bidyanus (and many other native fish species) are unknown. Increasing C. carpio abundance is often correlated with decreasing aquatic macrophyte abundance and other food chain alterations such as reduced zooplankton and increased phytoplankton. How such ecosystem alterations affect native fish species warrants further investigation.
Effects of EHN Virus in the Wild
P. fluviatilis in the Canberra region is known to be infected with EHN Virus. This virus has been shown to infect B. bidyanus in laboratory experiments but there have been no studies of how it affects wild populations.
- Environment ACT will encourage research into a number of priority areas with key information gaps. These include effects of introduced C. carpio and P. fluviatilis, and effects of EHN Virus in the wild.
EDUCATION AND LIAISON
Large sections of the general community are unaware of the reasons for the decline of native fish, and of the actions that can help to halt this. Provision of such information will enhance community understanding and engender community support for research and management actions. Options for providing this information include the Internet (Environment ACT Website), development of curriculum materials, as well as pamphlets and signs. Some anglers either cannot, or choose not to, discriminate between threatened and non-threatened fish species. Consequently, some individuals of threatened species are not returned unharmed to the water after accidental capture. On-site identification aids at locations where threatened fish are likely to be caught may reduce the incidence of mis-identification of threatened fish species. Environment ACT has provided signage along the Murrumbidgee and Cotter rivers in the ACT to help anglers identify other threatened fish species.
- Environment ACT will investigate options for the provision of information to the public on the reasons for fish declines. The most appropriate and effective measures will be implemented where possible. 
- Environment ACT will investigate how to incorporate information on B. bidyanus into the existing threatened fish signage. The most appropriate and effective measures will be implemented where possible. Overfishing is cited as one of the contributing factors in the decline of several native Murray-Darling fish species such as Trout Cod (M. macquariensis) (Douglas et al. 1994; Berra 1974), Murray Cod (M. peelii peelii) (Rowland 1989; Jackson et al. 1993) and Macquarie Perch (M. australasica) (Cadwallader 1978; Harris and Rowland 1996). Overfishing is unlikely to have played a major initial role in the decline of B. bidyanus, either nationally or locally. However, once a population has declined, even relatively low levels of fishing can pose a threat to recovery of the species. There is anecdotal evidence that local anglers targeted the spawning run of B. bidyanus from Lake Burrinjuck. The current protective management regimes by NSW Fisheries (which prohibits the taking of B. bidyanus in rivers and imposes bag and size limits in dams) and Environment ACT (which prohibits the taking of B. bidyanus in any public waters) are considered appropriate.
- Environment ACT will continue to prohibit the taking of B. bidyanus in public waters until the local population has recovered to levels which are assessed to be capable of sustaining recreational harvest.
- Environment ACT (WR&M) will continue to liaise with NSW Fisheries to ensure that there is consistency in the relevant fishing regulations for B. bidyanus.
STOCKING AND GENETIC INTEGRITY
Hatchery-bred fish used in fish stocking programs are usually derived from a small number of brood fish, and so may lack the normal range of genetic variation present in wild populations. An investigation into the genetic variability of B. bidyanus in rivers and dams within the Murray-Darling Basin has revealed that stocked populations have less genetic diversity than wild populations (Keenan et al. 1996). 
The introduction of hatchery-bred fish into remnant wild populations may lead to reduced genetic variability in the population as a whole, and reduce its adaptive capacity. The remnant population of B. bidyanus in Lake Burrinjuck has been augmented with hatchery-bred fish for many years, and it is unknown whether 'wild' levels of genetic diversity remain in this population. The ACT Government does not stock streams for recreational purposes, preferring to concentrate its stocking program on lakes and dams (ACT Government 2000). There is provision for stocking streams for conservation purposes, but only when strict criteria are satisfied.

- Environment ACT will encourage investigations into the genetic composition of the Lake Burrinjuck population of B. bidyanus.

- Environment ACT will not consider stocking B. bidyanus into the Murrumbidgee River in the ACT until the status and genetic composition of the Lake Burrinjuck population are known.

A recent review of the conservation status of fish in the Murray-Darling Basin has proposed that B. bidyanus be listed as nationally endangered under the Environment Protection and Biodiversity Conservation Act 1999 (Morris et al. 2001). It is likely that the species will be formally nominated for this status in the near future.

- Environment ACT will support the listing of B. bidyanus as endangered under the EPBC Act.

Before its declaration as an endangered species in the ACT, B. bidyanus was unprotected. In a review of recreational fishing in the ACT (ACT Parks and Conservation Service 1995), it was proposed to create a dedicated Fisheries Officer position in an effort to curb illegal fishing and better protect the ACT's fish resources. This proposal received widespread public support (ACT Parks and Conservation Service 1996) and the ACT Government now has a dedicated fisheries officer.

The main social benefit of conserving representative populations of B.
bidyanus is that it addresses community concern that further loss or extinction of native species be prevented.

Management of the Cotter catchment for conservation of threatened fish species, including provision of environmental flows, has previously been of concern to ActewAGL in terms of the security of water supply and pricing of domestic water. Compliance with the Environmental Flow Guidelines may have some impact on the urban water supply potential of the Cotter catchment. This may result in greater use of the higher-cost water from Googong Dam, which currently supplements water supply from the Cotter catchment during periods of high demand.

The following legislation is relevant to conservation of flora and fauna in the ACT region:

AUSTRALIAN CAPITAL TERRITORY

Nature Conservation Act 1980

The Nature Conservation Act provides a mechanism to encourage the protection of native plants and animals (including fish and invertebrates), the identification of threatened species and communities, and the management of Public Land reserved for nature conservation purposes. Specified activities are managed via a licensing system. Native plants and animals may be declared in recognition of a particular conservation concern, and increased controls and penalties apply. Species declared as endangered must be declared as having special protection status (SPS), the highest level of statutory protection that can be conferred. As an endangered species, B. bidyanus must be declared a SPS species, and any activity affecting such a species is subject to special scrutiny. Conservation requirements are a paramount consideration, and only activities related to conservation of the species or serving a special purpose are permissible. The Conservator of Flora and Fauna may only grant a licence for activities affecting a species with SPS where satisfied that the act specified in the licence meets a range of stringent conditions.
Further information can be obtained from the Licensing Officer, Environment Regulation, Environment ACT, telephone (02) 6207 6376.

Fisheries Act 2000

The new Fisheries Act 2000 is consistent with the corresponding NSW fishing legislation. The Act now has adequate provisions to protect native fish species, providing for bag, size and gear limits as well as the power to declare closed seasons or total protection for fish species.

Land (Planning and Environment) Act 1991

The Land (Planning and Environment) Act 1991 is the primary authority for land planning and administration. It establishes the Territory Plan, which identifies nature reserves, national parks and wilderness areas within the Public Land estate. The Territory Plan also provides for flora and fauna guidelines which list criteria for the assessment of the potential impact of a land use proposal. These focus on a range of aspects of the ACT's ecological resources, including the protection of vulnerable and endangered species along with their habitats. The conservation requirements of threatened species and their habitats are considered specifically during this process. The Act also establishes the Heritage Places Register. Places of natural heritage significance may be identified and conservation requirements specified. Environmental Assessments and Inquiries may be initiated in relation to land use and development proposals.

NEW SOUTH WALES

Fisheries Management Act 1994

The Fisheries Management Act 1994 includes provisions covering the identification, assessment and listing of endangered species, populations and ecological communities, vulnerable species and key threatening processes. It also provides for identification of critical habitat, mandatory impact assessment in the land use planning process, and active recovery management.
Consultation and Community Participation

In 1995, a discussion paper on recreational fishing in the ACT was widely circulated for public comment (ACT Parks and Conservation Service 1995). The purpose of the paper was to outline current fisheries management in the ACT and present a series of proposed changes to management practices. A total of 194 submissions, representing the views of 1290 individuals, was received on the discussion paper, with the majority of respondents supporting increased protection of aquatic resources (ACT Parks and Conservation Service 1996).

Representatives from Environment ACT (WR&M; ACT Parks and Conservation Service) maintain regular contact with officers from Planning and Land Management in the Department of Urban Services, fishing clubs and the ACT Sport and Recreational Fishing Council to raise awareness of issues involving fish communities.

A number of land management practices have the capacity to adversely affect fish populations, especially urban development, agricultural pursuits and forestry operations. These can generate soil erosion, which leads to habitat destruction and deterioration in water quality. Environment ACT representation on appropriate intra- and interdepartmental committees and working groups will continue to provide opportunities for liaison on these matters.

- Environment ACT will encourage community groups such as fishing clubs and the Australia New Guinea Fishes Association (ANGFA) to assist in the conservation of ACT fish populations and their habitats. Anglers will be encouraged to report any catches of threatened fish.

Implementation, Evaluation and Review

RESPONSIBILITY FOR IMPLEMENTATION

Environment ACT (WR&M; ACT Parks and Conservation Service; Environment Planning and Legislation) has responsibility for coordinating implementation of this Action Plan. Implementation itself will be a collaborative exercise between government agencies, land-holders and the community generally.
NSW participation will be critical in some situations. Specific actions on Territory Land will be subject to the availability of Government resources. Primary responsibility for conservation and management of the species on Territory Land will rest with Environment ACT.

The Flora and Fauna Committee will review implementation of this Action Plan after three years. The review will comprise an assessment of progress using the following performance indicators:

- completion of commitments that can reasonably be expected to be finalised within the review timeframe (e.g. introduction of a statutory protection measure for a species; development of a management plan);
- completion of a stage in a process with a timeline that exceeds the review period (e.g. design or commencement of a research program);
- commencement of a particular commitment that is of a continuing nature (e.g. design or commencement of a monitoring program for population abundance); and
- achievement of the conservation objectives of the Action Plan.

The review will provide an opportunity for both the Flora and Fauna Committee and Environment ACT to assess progress, take account of developments in nature conservation knowledge, policy and administration, and review directions and priorities for future conservation action. The following conservation actions will be given priority attention:

- establishment of a monitoring program to allow the detection of trends in relative population size at a number of sites; and
- subject to resources, commencement of a research program, especially on priority topics, and encouragement of research by others.

Access to unpublished information was provided by Mark Lintermans, Senior Aquatic Ecologist, Environment ACT. The illustration of the species (Figure 1) was provided by the Murray-Darling Basin Commission.

ACT Government, 2000. Fish Stocking Plan for the Australian Capital Territory 2001-2005. Environment ACT, Canberra.

ACT Parks and Conservation Service, 1995.
A review of recreational fishing in the ACT. Public Discussion Paper, ACT Parks and Conservation Service, Canberra.

ACT Parks and Conservation Service, 1996. Recreational fishing in the ACT: Summary of public responses to a discussion paper. ACT Parks and Conservation Service, Canberra.

Allan, G. and Rowland, S. J., 1992. Development of an experimental diet for silver perch (Bidyanus bidyanus). Austasia Aquaculture 6(3): 39-40.

Barlow, C. C., McLoughlin, R. and Bock, K., 1987. Complementary feeding habits of golden perch Macquaria ambigua (Richardson) (Percichthyidae) and silver perch Bidyanus bidyanus (Mitchell) (Teraponidae) in farm dams. Proceedings of the Linnean Society of New South Wales 109: 143-152.

Burchmore, J. J. & Battaglene, S., 1990. Introduced fishes in Lake Burrinjuck, New South Wales, Australia. In Pollard, D., (ed.) Introduced and translocated fishes and their ecological effects, p. 114. Australian Society for Fish Biology Workshop. Bureau of Rural Resources Proceedings No. 8, Australian Government Publishing Service, Canberra.

Cadwallader, P. L., 1978. Some causes of the decline in range and abundance of native fish in the Murray-Darling River system. Proceedings of the Royal Society of Victoria 90: 211-224.

Cadwallader, P. L., Backhouse, G. N., Beumer, J. P. & Jackson, P. D., 1984. The conservation status of native freshwater fish of Victoria. Victorian Naturalist.

Clunie, P. and Koehn, J., 2001a. Silver Perch: A Recovery Plan. Final Report for Natural Resource Management Strategy Project R7002 to the Murray Darling Basin Commission.

Clunie, P. and Koehn, J., 2001b. Silver Perch: A Resource Document. Final Report for Natural Resource Management Strategy Project R7002 to the Murray Darling Basin Commission.

Dove, A. D. M., Cribb, T. H., Mockler, S. P. & Lintermans, M., 1997. The Asian Fish Tapeworm, Bothriocephalus acheilognathi, in Australian freshwater fishes. Marine and Freshwater Research 48: 181-183.

Gehrke, P. C., 1990.
Clinotactic responses of larval silver perch (Bidyanus bidyanus) and golden perch (Macquaria ambigua) to simulated environmental gradients. Australian Journal of Marine and Freshwater Research 41: 523-528.

Greenham, P., 1981. Murrumbidgee River aquatic ecology study. Report to the National Capital Development Commission and the Department of the Capital Territory, Canberra.

Guo, R., Mather, P. and Capra, M. F., 1995. Salinity tolerance and osmoregulation in silver perch Bidyanus bidyanus Mitchell (Teraponidae), an endemic Australian freshwater teleost. Marine and Freshwater Research 46: 947-952.

Hancock, D. A., (ed.) 1993. Sustainable fisheries through sustaining fish habitat. Australian Society for Fish Biology Workshop, Victor Harbor, South Australia, 12-13 August. Bureau of Resource Sciences Proceedings, AGPS, Canberra.

Jackson, P. D., Koehn, J. D. & Wager, R., 1993. Australia's threatened fishes 1992 listing - Australian Society for Fish Biology. In Hancock, D. A., (ed.) Sustainable fisheries through sustaining fish habitat, pp 213-227. Australian Society for Fish Biology Workshop, Victor Harbor, South Australia, 12-13 August. Bureau of Resource Sciences Proceedings, AGPS, Canberra.

Keenan, C., Watts, R. and Serafini, L., 1996. Population genetics of golden perch, silver perch and eel-tailed catfish within the Murray-Darling Basin. In Banens, R. J. and Lehane, R. (eds) 1995 Riverine Environment Research Forum, pp 17-26. October 1995, Attwood, Victoria. Murray Darling Basin Commission, Canberra.

Kibria, G., Nugegoda, D., Fairclough, R. and Lam, P., 1998. Biology and aquaculture of silver perch, Bidyanus bidyanus (Mitchell 1838) (Terapontidae): A review. Victorian Naturalist 115(2): 56-62.

Koehn, J. D. & Morison, A. K., 1990. A review of the conservation status of native freshwater fish in Victoria. Victorian Naturalist 107: 13-25.

Koehn, J. D. & O'Connor, W. G., 1990a. Biological information for management of native freshwater fish in Victoria.
Department of Conservation and Environment, Victoria.

Koehn, J. D. & O'Connor, W. G., 1990b. Threats to Victorian native freshwater fish. Victorian Naturalist 107: 5-12.

Lake, J. S., 1967a. Rearing experiments with five species of Australian freshwater fishes. I. Inducement to spawning. Australian Journal of Marine and Freshwater Research 18: 137-153.

Lake, J. S., 1967b. Rearing experiments with five species of Australian freshwater fishes. II. Morphogenesis and ontogeny. Australian Journal of Marine and Freshwater Research 18(2): 155-173.

Langdon, J. S., 1989a. Prevention and control of fish diseases in the Murray-Darling Basin. In Proceedings of the workshop on native fish management, Canberra, 16-18 June 1988. Murray-Darling Basin Commission, Canberra.

Langdon, J. S., 1989b. Experimental transmission and pathogenicity of epizootic haematopoietic necrosis virus (EHNV) in Redfin Perch Perca fluviatilis L., and 11 other teleosts. Journal of Fish Diseases.

Langdon, J. S., Humphrey, J. D., Williams, L. M., Hyatt, A. D. & Westbury, H. A., 1986. First virus isolation from Australian fish: An iridovirus-like pathogen from Redfin Perch Perca fluviatilis L. Journal of Fish Diseases 9: 263-268.

Langdon, J. S. & Humphrey, J. D., 1987. Epizootic haematopoietic necrosis, a new viral disease in Redfin Perch Perca fluviatilis L., in Australia. Journal of Fish Diseases 10: 289-297.

Lintermans, M., 1991a. The decline of native fish in the Canberra region: The effects of habitat modification. Bogong 12(3): 4-7.

Lintermans, M., 1991b. The decline of native fish in the Canberra region: The impacts of introduced species. Bogong 12(4): 18-22.

Lintermans, M., 1995. Lower Molonglo Water Quality Control Centre biological monitoring program: 1994 fish monitoring report. Consultancy report to ACT Electricity and Water, Canberra.

Lintermans, M., 1997. Lower Molonglo Water Quality Control Centre Biological Monitoring Program: 1996 Fish Monitoring Report.
Consultancy report to ACTEW Corporation, Canberra.

Lintermans, M., 1998. Lower Molonglo Water Quality Control Centre Biological Monitoring Program: 1997 Fish Monitoring Report. Consultancy report to ACTEW Corporation, Canberra.

Lintermans, M., 2000. The Status of Fish in the Australian Capital Territory: A Review of Current Knowledge and Management Requirements. Technical Report 15, Environment ACT, Canberra.

Lintermans, M., Rutzou, T. & Kukolic, K., 1990. Introduced fish of the Canberra region - recent range expansions. In Pollard, D., (ed.) Australian Society for Fish Biology Workshop: Introduced and translocated fishes and their ecological effects, pp 50-60. Bureau of Rural Resources Proceedings No. 8, Australian Government Publishing Service, Canberra.

Maitland, P. S., 1987. Conserving fish in Australia: An overview of the conference on Australian threatened fishes. In Harris, J. H., (ed.) Proceedings of the Conference on Australian Threatened Fishes, pp 63-67. Australian Society for Fish Biology and NSW Department of Agriculture, Sydney.

Mallen-Cooper, M., 1993. Habitat changes and declines of freshwater fish in Australia: What is the evidence and do we need more? In Hancock, D. A., (ed.) Sustainable fisheries through sustaining fish habitat, pp 118-123. Australian Government Publishing Service, Canberra.

Mallen-Cooper, M., 1994. Swimming ability of adult golden perch, Macquaria ambigua (Percichthyidae), and adult silver perch, Bidyanus bidyanus (Teraponidae), in an experimental vertical-slot fishway. Australian Journal of Marine and Freshwater Research 45: 191-198.

Mallen-Cooper, M., Stuart, I. G., Hides-Pearson, F. and Harris, J. H., 1995. Fish Migration in the Murray River and assessment of the Torrumbarry fishway. Final report to the Murray-Darling Basin Commission, Natural Resources Management Strategy Project N002, NSW Fisheries.

Mallen-Cooper, M., Stuart, I., Hides-Pearson, F. and Harris, J., 1997.
Fish migration in the River Murray and assessment of the Torrumbarry fishway. In Banens, R. J. and Lehane, R. (eds) 1995 Riverine Environment Research Forum, pp 33-37. October 1995, Attwood, Victoria. Murray Darling Basin Commission, Canberra.

Merrick, J. R., 1996. Family Terapontidae: Freshwater grunters or perches. In McDowall, R. M. (ed.) Freshwater Fishes of South-eastern Australia, pp 164-167. Reed Books, Sydney.

Merrick, J. R. & Schmida, G. E., 1984. Australian freshwater fishes: Biology and management. Published by J. Merrick, North Ryde, New South Wales.

Moberly, S. J., 1993. Habitat is where it's at!: "It's more fun to fight over more fish than less fish". In Hancock, D. A., (ed.) Sustainable fisheries through sustaining fish habitat, pp 3-13. Australian Society for Fish Biology Workshop, Victor Harbor, South Australia, 12-13 August 1992. Bureau of Resource Sciences Proceedings, AGPS, Canberra.

Morris, S. A., Pollard, D. A., Gehrke, P. C. and Pogonoski, J. J., 2001. Threatened and potentially threatened freshwater fishes of coastal New South Wales and the Murray-Darling Basin. Report to Fisheries Action Program and World Wide Fund for Nature. NSW Fisheries.

NRE, 2000. Threatened Vertebrate Fauna in Victoria - 2000. Department of Natural Resources and Environment, Victoria.

Reynolds, L. F., 1983. Migration patterns of five fish species in the Murray-Darling River system. Australian Journal of Marine and Freshwater Research 34: 857-871.

Rowland, S., Dirou, J. and Selosse, P., 1983. Production and stocking of golden and silver perch in NSW. Australian Fisheries September 1983: 24-28.

Wager, R. N. E., 1993. The distribution and conservation status of Queensland freshwater fishes. Queensland Department of Primary Industries Information Series.

List of Action Plans - May 2003

In accordance with Section 23 of the Nature Conservation Act 1980, the following Action Plans have been prepared by the Conservator of Flora and Fauna: No.
1: Natural Temperate Grassland - an endangered ecological community.
No. 2: Striped Legless Lizard (Delma impar) - a vulnerable species.
No. 3: Eastern Lined Earless Dragon (Tympanocryptis lineata pinguicolla) - an endangered species.
No. 4: A leek orchid (Prasophyllum petilum) - an endangered species.
No. 5: A subalpine herb (Gentiana baeuerlenii) - an endangered species.
No. 6: Corroboree Frog (Pseudophryne corroboree) - a vulnerable species.
No. 7: Golden Sun Moth (Synemon plana) - an endangered species.
No. 8: Button Wrinklewort (Rutidosis leptorrhynchoides) - an endangered species.
No. 9: Small Purple Pea (Swainsona recta) - an endangered species.
No. 10: Yellow Box-Red Gum Grassy Woodland - an endangered ecological community.
No. 11: Two-spined Blackfish (Gadopsis bispinosus) - a vulnerable species.
No. 12: Trout Cod (Maccullochella macquariensis) - an endangered species.
No. 13: Macquarie Perch (Macquaria australasica) - an endangered species.
No. 14: Murray River Crayfish (Euastacus armatus) - a vulnerable species.
No. 15: Hooded Robin (Melanodryas cucullata) - a vulnerable species.
No. 16: Swift Parrot (Lathamus discolor) - a vulnerable species.
No. 17: Superb Parrot (Polytelis swainsonii) - a vulnerable species.
No. 18: Brown Treecreeper (Climacteris picumnus) - a vulnerable species.
No. 19: Painted Honeyeater (Grantiella picta) - a vulnerable species.
No. 20: Regent Honeyeater (Xanthomyza phrygia) - an endangered species.
No. 21: Perunga Grasshopper (Perunga ochracea) - a vulnerable species.
No. 22: Brush-tailed Rock-wallaby (Petrogale penicillata) - an endangered species.
No. 23: Smoky Mouse (Pseudomys fumeus) - an endangered species.
No. 24: Tuggeranong Lignum (Muehlenbeckia tuggeranong) - an endangered species.
No. 25: Ginninderra Peppercress (Lepidium ginninderrense) - an endangered species.
No. 26: Silver Perch (Bidyanus bidyanus) - an endangered species.
Further information on this Action Plan or other threatened species and ecological communities can be obtained from:

Environment ACT (Wildlife Research and Monitoring)
Phone: (02) 6207 2126
Fax: (02) 6207 2122

This document should be cited as: ACT Government, 2003. Silver Perch (Bidyanus bidyanus) - an endangered species. Action Plan No. 26. Environment ACT, Canberra.
Science & Environmental Health Network - Science, Ethics and Action in the Public Interest

Nuclear energy: assessing the emissions

Kurt Kleiner reports on whether nuclear power deserves its reputation as a low-carbon energy source. Nature, published online 24 September 2008.

Estimates of the emissions associated with producing nuclear energy vary widely. For decades nuclear power has been slated as being environmentally harmful. But with climate change emerging as the world's top environmental problem, the nuclear industry is now starting to enjoy a reputation as a green power provider, capable of producing huge amounts of energy with little or no carbon emissions. As a result, the industry is gaining renewed support. In the United States, both presidential candidates view nuclear power as part of the future energy mix. The US government isn't alone in its support for an expansion of nuclear facilities. Japan announced in August that it would spend $4 billion on green technology, including nuclear plants.

But despite the enthusiasm for nuclear energy's status as a low-carbon technology, the greenhouse gas emissions of nuclear power are still being debated. While it's understood that an operating nuclear power plant has near-zero carbon emissions (the only outputs are heat and radioactive waste), it's the other steps involved in the provision of nuclear energy that can increase its carbon footprint. Nuclear plants have to be constructed, uranium has to be mined, processed and transported, waste has to be stored, and eventually the plant has to be decommissioned. All these actions produce carbon emissions. Critics claim that other technologies would reduce anthropogenic carbon emissions more drastically, and more cost-effectively.

"The fact is, there's no such thing as a carbon-free lunch for any energy source," says Jim Riccio, a nuclear policy analyst for Greenpeace in Washington DC.
"You're better off pursuing renewables like wind and solar if you want to get more bang for your buck." The nuclear industry and many independent analysts respond that the numbers show otherwise. Even taking the entire lifecycle of the plant into account nuclear energy still ranks with other green technologies, like solar panels and wind turbines, they say. "The fact is, there's no such thing as a carbon-free lunch for any energy source." The large variation in emissions estimated from the collection of studies arises from the different methodologies used - those on the low end, says Sovacool, tended to leave parts of the lifecycle out of their analyses, while those on the high end often made unrealistic assumptions about the amount of energy used in some parts of the lifecycle. The largest source of carbon emissions, accounting for 38 per cent of the average total, is the "frontend" of the fuel cycle, which includes mining and milling uranium ore, and the relatively energy-intensive conversion and enrichment process, which boosts the level of uranium-235 in the fuel to useable levels. Construction (12 per cent), operation (17 per cent largely because of backup generators using fossil fuels during downtime), fuel processing and waste disposal (14 per cent) and decommissioning (18 per cent) make up the total mean emissions. According to Sovacool's analysis, nuclear power, at 66 gCO2e/kWh emissions is well below scrubbed coal-fired plants, which emit 960 gCO2e/kWh, and natural gas-fired plants, at 443 gCO2e/kWh. However, nuclear emits twice as much carbon as solar photovoltaic, at 32 gCO2e/kWh, and six times as much as onshore wind farms, at 10 gCO2e/kWh. "A number in the 60s puts it well below natural gas, oil, coal and even clean-coal technologies. On the other hand, things like energy efficiency, and some of the cheaper renewables are a factor of six better. 
So for every dollar you spend on nuclear, you could have saved five or six times as much carbon with efficiency, or wind farms," Sovacool says.

Add to that the high costs and long lead times for building a nuclear plant (about $3 billion for a 1,000-megawatt plant, with planning, licensing and construction times of about 10 years) and nuclear power is even less appealing.

Power games

But, says Paul Genoa, director of policy development for the Nuclear Energy Institute (NEI), a nuclear industry association based in Washington DC, "it's a fallacy to say one energy source is better, and that we should use it everywhere. The reality is that we need a portfolio solution that will include nuclear."

"If you look at lifecycle emissions from renewable technologies, typically they are on the order of only 1 to 5 per cent of a coal plant," says Paul Meier, director of the Energy Institute at the University of Wisconsin-Madison. Looked at as a replacement for fossil fuels, existing nuclear plants prevent 681 million tonnes of carbon from being emitted every year in the United States alone, according to the NEI.

Meier also points out that nuclear energy is capable of providing baseload power - that is, large amounts of power that can run consistently and reliably. Nuclear plants run 90 per cent of the time, while wind and solar power provide electricity only intermittently and have to be backed up, often by fossil fuel plants. "The modern electric grid relies on baseload power," says Genoa. "That's power that's running 24 hours a day, 365 days a year. It's only shut down for maintenance."

Money spent on energy efficiency, however, is equivalent to increasing baseload power, since it reduces the overall power that needs to be generated, says Sovacool. And innovative energy-storage solutions, such as compressed air storage, could provide ways for renewables to provide baseload power.
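The lifecycle figures reported earlier in the article can be cross-checked directly. A minimal sketch, using only the per-kWh estimates and stage percentages quoted from Sovacool's analysis (the dictionary names are illustrative, not from the article):

```python
# Mean lifecycle emission estimates quoted in the article, in g CO2-equivalent per kWh.
EMISSIONS = {"nuclear": 66, "scrubbed coal": 960, "natural gas": 443,
             "solar PV": 32, "onshore wind": 10}

# Per-stage shares of the nuclear mean, as percentages of the total.
NUCLEAR_STAGES = {"frontend (mining, milling, enrichment)": 38, "construction": 12,
                  "operation": 17, "fuel processing and waste": 14, "decommissioning": 18}

def ratio(a: str, b: str) -> float:
    """How many times more carbon-intensive source a is than source b."""
    return EMISSIONS[a] / EMISSIONS[b]

# The article's comparisons follow from the numbers:
assert round(ratio("nuclear", "solar PV"), 1) == 2.1      # "twice as much as solar"
assert round(ratio("nuclear", "onshore wind"), 1) == 6.6  # "six times ... onshore wind"
assert round(ratio("scrubbed coal", "nuclear")) == 15     # coal roughly 15x nuclear

# Stage shares sum to ~100% (99, with rounding) and convert to absolute terms:
print(sum(NUCLEAR_STAGES.values()))  # 99
for stage, pct in NUCLEAR_STAGES.items():
    print(f"{stage}: ~{EMISSIONS['nuclear'] * pct / 100:.0f} g CO2e/kWh")
```

The frontend share alone (~25 g CO2e/kWh of the 66 total) is why ore grade, discussed below, matters so much to nuclear's footprint.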
"For every dollar you spend on nuclear, you could have saved five or six times as much carbon with efficiency, or wind farms." Thomas Cochran, a nuclear physicist and senior scientist at the Natural Resources Defense Council (NRDC), an environmental group in Washington DC, says that although nuclear power has relatively low carbon emissions, it should not be subsidized by governments in the name of combating global warming. He argues that the expense and risk of building nuclear plants makes them uneconomic without large government subsidies, and that similar investment in wind and solar photovoltaic power would pay off sooner. "There are appropriate roles for federal subsidies in energy technologies," he says. "We subsidized heavily nuclear power when it was an emerging technology 30, 40, 50 years ago. Now it's a mature technology." Nevertheless, the Energy Policy Act of 2005 saw the US Congress offer billions of dollars in tax breaks and loan guarantees in an attempt to kickstart construction. Although a number of utilities are pursuing licences for a total of 30 new nuclear plants in the United States, none have been approved yet. Even assuming that new subsidies were to increase US nuclear power by 1.5 times the current capacity, the result would be only an additional 510 megawatts per year from now until the year 2021. Wind power, the NRDC estimates, provides more than 1,000 megawatts a year, and that figure is likely to increase. Another question has to do with the sustainability of the uranium supply itself. According to researchers in Australia at Monash University, Melbourne, and the University of New South Wales, Sydney, good-quality uranium ore is hard to come by. The deposits of rich ores with the highest uranium content are depleting leaving only lower-quality deposits to be exploited.3 As ore quality degrades, more energy is required to mine and mill it, and greenhouse gas emissions rise. "It is clear that there is a strong sensitivity of ... 
greenhouse gas emissions to ore grade, and that ore grades are likely to continue to decline gradually in the medium- to long-term," conclude the researchers.

But the nuclear industry points to technological advances of its own that are likely to make nuclear power less expensive and less carbon intensive. Genoa says that new methods of mining uranium and building reactors designed to run on less uranium-rich fuel could make nuclear power even more attractive. "If we're using the same reactors in two centuries, then we've missed the boat. There are going to be other technologies," Genoa says.

© 2008 Nature Publishing Group - partner of AGORA, HINARI, CrossRef and COUNTER
From the Labs: Materials

New publications, experiments and breakthroughs in materials science - and what they mean.

A polymer implant signals cells to combat cancer.

Source: "Infection-Mimicking Materials to Program Dendritic Cells In Situ", David Mooney et al., Nature Materials 8: 151-158

Results: A new implant attracts immune cells and exposes them to molecules that stimulate them to attack cancerous tumors. When tested in mice that normally die of cancer within 25 days, the implants allowed 90 percent of the mice to survive. Similar experimental therapies based on transplanting immune cells are only about 60 percent effective.

Why it matters: The implants could eventually be used to treat human cancers that don't respond to other therapies, and they could also be used to treat immune disorders such as type 1 diabetes and arthritis. Other approaches that involve stimulating immune cells haven't proved successful in clinical trials. Those techniques require the cells to be removed from the body and then reimplanted; many are damaged in the process and die, while survivors often fail to trigger attacks on cancerous tumors. The new implant stimulates cells inside the body, without subjecting them to stressful procedures.

Methods: The spongelike implant is made of a biodegradable polymer that releases chemical signals called cytokines. In mice with melanoma, these signals attract immune cells called dendritic cells to the nooks and crannies of the implant. There the cells are exposed to a cancer antigen that stimulates them to attack tumors. When tissues from the mice were analyzed, the researchers found that dendritic cells had migrated to the lymph nodes and activated other immune cells, and the animals' tumors had shrunk.

Next steps: Before proceeding to clinical trials, the implants must pass safety tests in large animals. Long-term studies will then establish whether the immune system will attack cancer that may recur years after the implant has degraded.
Ethanol Fuel Cell
A new catalyst could make the technology usable in portable electronics.

Source: "Ternary Pt/Rh/SnO2 Electrocatalysts for Oxidizing Ethanol to CO2" Radoslav Adzic et al. Nature Materials 8: 325-330

Results: A new catalyst efficiently breaks the strong carbon-carbon bond at the center of ethanol molecules, converting ethanol to carbon dioxide in a process that releases protons and electrons. It generates electrical currents 100 times greater than those produced with other catalysts that oxidize ethanol.

Why it matters: Ethanol-powered fuel cells based on the catalyst could open the way for portable electronics that can be refueled faster than battery-powered devices can be recharged. The technology would also be safer than portable fuel cells that use toxic methanol. Previous catalysts used to free electrons from ethanol were inefficient: either they used a great deal of energy to break the carbon-carbon bond or they broke only the molecule's weaker bonds, releasing just a few electrons per molecule. The new catalyst efficiently frees 12 electrons per molecule without requiring much energy.

Methods: To make the catalyst, researchers at Brookhaven National Laboratory in New York deposited tiny clusters of platinum and rhodium on tin oxide nanoparticles. Rhodium had been shown to break bonds between carbon atoms, but only at high temperatures of 200 to 300 °C. Combining the rhodium and platinum with tin oxide allowed the catalyst to break these bonds at room temperature, making it practical for portable fuel cells.

Next steps: The catalyst will be incorporated into fuel cells to determine whether the current produced can be increased from the 7.5 milliamps per square centimeter seen in initial tests to the hundreds of milliamps needed for most applications.
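The "12 electrons per molecule" figure implies a theoretical charge yield that is easy to check with Faraday's constant. A minimal sketch; the constant and ethanol's molar mass are standard values, and the half-reaction shown in the comment is the conventional complete-oxidation stoichiometry rather than a quotation from the paper:

```python
# Theoretical charge released by complete oxidation of ethanol to CO2.
# The article reports 12 electrons freed per ethanol molecule:
#   C2H5OH + 3 H2O -> 2 CO2 + 12 H+ + 12 e-
FARADAY = 96485.33   # coulombs per mole of electrons
M_ETHANOL = 46.07    # g/mol

def charge_per_gram(electrons_per_molecule=12):
    """Coulombs released per gram of ethanol fully oxidized."""
    return electrons_per_molecule * FARADAY / M_ETHANOL

q = charge_per_gram()
print(f"{q:.0f} C/g ≈ {q / 3600:.2f} Ah/g")  # about 25 kC/g, roughly 7 Ah/g
```

This is why breaking the carbon-carbon bond matters: a catalyst that only attacks the weaker bonds harvests a small fraction of this charge per gram of fuel.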
Dendrites are short stout tapering processes that are rich in ribosomes and Golgi elements, whereas axons are long thin processes of uniform diameter that are deficient in these organelles. It has been hypothesized that the unique morphological and compositional features of axons and dendrites result from their distinct patterns of microtubule polarity orientation. The microtubules within axons are uniformly oriented with their plus ends distal to the cell body, whereas microtubules within dendrites are nonuniformly oriented. The minus-end-distal microtubules are thought to arise via their specific transport into dendrites by the motor protein known as CHO1/MKLP1. According to this model, CHO1/MKLP1 transports microtubules with their minus ends leading into dendrites by generating forces against the plus-end-distal microtubules, thus creating drag on the plus-end-distal microtubules. Here we show that depletion of CHO1/MKLP1 from cultured neurons causes a rapid redistribution of microtubules within dendrites such that minus-end-distal microtubules are chased back to the cell body while plus-end-distal microtubules are redistributed forward. The dendrite grows significantly longer and thinner, loses its taper, and acquires a progressively more axon-like organelle composition. These results suggest that the forces generated by CHO1/MKLP1 are necessary for maintaining the minus-end-distal microtubules in the dendrite, for antagonizing the anterograde transport of the plus-end-distal microtubules, and for sustaining a pattern of microtubule organization necessary for the maintenance of dendritic morphology and composition. Thus, we would conclude that dendritic identity is dependent on forces generated by CHO1/MKLP1.
pH-controlled nanofilters
Published online 26 May 2011

Ultra-thin films riddled with nanoscale pores would be useful in applications ranging from water purification to drug delivery, especially if the pores could change size in response to their environment. Researchers from the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, and DESY, Germany's high-energy physics laboratory in Hamburg, have demonstrated a relatively simple technique to manufacture such films, with pores responsive to changes in pH.

Suzana Nunes and colleagues dissolved a polystyrene polymer in a solution also containing ions of a transition metal, such as copper. Immersion in water produced nanoscale polymer micelles: spherical clusters in which the polymer chains arrange themselves to keep their hydrophobic tails away from the water. The copper ions then bond to the micelles, linking them together into a film, with the gaps between the micelles becoming pores. The sizes of these pores depend on the pH of the environment.

When the researchers tested the films by filtering a dilute solution of organic molecules with widely varying molecular weights and sizes, they found that the flux through the film changed by a factor of a hundred as the pH varied from 6 to 4, with the shrinking pores blocking the larger molecules. Various microscopy techniques showed that the pore size is uniform across the film, which has a very high overall porosity. "The highest flux we measured is ten times larger than possible with commercial membranes with similar pore size," says Nunes. "These films have exceptional porosity."

- Nunes, S. et al. Switchable pH-Responsive Polymeric Membranes Prepared via Block Copolymer Micelle Assembly. ACS Nano 5 (5), 3516-3522 (2011)
The study, which was co-authored by Eric Galbraith of McGill's Department of Earth & Planetary Sciences, looked at marine sediment and found that the dissolved oxygen concentrations in large parts of the oceans changed dramatically during the relatively slow natural climate changes at the end of the last Ice Age. This was at a time when the temperature of surface water around the globe increased by approximately 2 °C over a period of 10,000 years. A similar rise in temperature will result from human emissions of heat-trapping gases within the next 100 years, if emissions are not curbed, giving cause for concern.

Most of the animals living in the ocean, from herring to tuna, shrimp to zooplankton, rely on dissolved oxygen to breathe. The amount of oxygen that seawater can soak up from the atmosphere depends on the water temperature at the sea surface. As temperatures at the surface increase, the dissolved oxygen supply below the surface gets used up more quickly. Currently, in about 15 per cent of the oceans - in areas referred to as dead zones - dissolved oxygen concentrations are so low that fish have a hard time breathing at all. The findings from the study show that these dead zones increased significantly at the end of the last Ice Age.

"Given how complex the ocean is, it's been hard to predict how climate change will alter the amount of dissolved oxygen in water. As a result of this research, we can now say unequivocally that the oxygen content of the ocean is sensitive to climate change, confirming the general cause for concern."

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institute for Advanced Research (CIFAR).
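The temperature dependence of oxygen solubility described above can be sketched with a simple van 't Hoff relation. This is only an illustration: the enthalpy of dissolution used below is an assumed round number, not a value from the study, and real seawater solubility also depends on salinity.

```python
import math

# Rough van 't Hoff sketch of how dissolved-O2 solubility falls as
# surface water warms. DH_SOL is an illustrative assumption (O2
# dissolution is exothermic), not a value from the study.
R = 8.314            # J/(mol*K)
DH_SOL = -12_000.0   # J/mol, assumed enthalpy of dissolution

def relative_solubility(t_celsius, t_ref_celsius=18.0):
    """Equilibrium solubility at t relative to a reference temperature."""
    T, T0 = t_celsius + 273.15, t_ref_celsius + 273.15
    return math.exp(-DH_SOL / R * (1.0 / T - 1.0 / T0))

# A ~2 degC warming, comparable to the change at the end of the last Ice Age:
drop = 1.0 - relative_solubility(20.0)
print(f"~{100 * drop:.1f}% less dissolved O2 at equilibrium for a 2 degC rise")
```

A few percent less oxygen at equilibrium may sound small, but as the article notes, warming also speeds up the consumption of the oxygen that is there, so the effect on dead zones compounds.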
The results of this study were published in Nature Geoscience: http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo1352.html

Katherine Gombay | Newswise Science News
Sure beats little green men.

When people imagine Martians, they tend to think of variations on human life. Percival Lowell, expounding the theory that the mythical canals of Mars were created by Martians, even imagined the kind of bureaucracy they'd have to develop for such a public works project. Martians, that is to say, generally reflect life on Earth the way a western tends to show the values and tensions of the year it was made, not only the year in which it was set.

But then along comes this theory of Martians published in The Salt Lake Tribune in 1912. Here we see that Martian life is all vegetation, but it's watched over and controlled by one enormous eye shooting what must be 100 miles into space. What did this massive Martian brain think about? "The vast intellect of Mars is occupied with the problems of gaining subsistence from the dying planet and then with investigations of the boundless universe that lies within its sight," the Tribune said.

The paper attributes the theory to William Campbell, who was the director of the Lick Observatory. But that's a bald fabrication, as Campbell explained in a later letter. Really, it's not obvious who proposed the fantastic and fantastical story of Martians. There's no byline on the story and it resided in a part of the paper that contained other barely believable "weird" news. It grew out of some ideas that Lowell and others had about the canals being lined with vegetation that varied seasonally. But from there, it's pure creativity, weirdness, and pencil sketches.

You can read the whole article here, but here is the best (i.e. most fascinatingly wrong) paragraph:

Before considering this theory further, we must bear in mind a few of the proved facts about Mars. It has atmosphere, seasons, land, water, storms, clouds and mountains. It also rains and snows on Mars, as it does with us. Great white patches appear periodically upon its surface. These may be accumulations of snow and they have also been called "eyes."
Alexis C. Madrigal is a staff writer at The Atlantic. He's the author of Powering the Dream: The History and Promise of Green Technology.
1. Describe how you would prepare 1 liter of each of the following solutions.
a) 1.5 M glycine
b) 0.5 mM glucose
c) 10 mM ethanol
d) 10 mM hemoglobin

2. Describe how you would prepare just 100 ml of each of the solutions in problem #1 above.

3. When preparing a solution, why do you dissolve the component in less diluent than the desired final volume of the solution?

4. If you were given stock solutions of 1 M Tris buffer and 0.5 M EDTA, how would you prepare 10 ml of 10 mM Tris and 10 ml of 1 mM EDTA?

a) Molar mass of glycine = 75.07 g/mol
(1.5 moles/liter of glycine)*(1 liter) = 1.5 moles glycine
(1.5 moles glycine)*(75.07 g/mol) = 112.61 grams of glycine
Dissolve 112.61 grams of glycine in water and bring the final volume to 1 liter. You can convert this mass to a volume through the density of glycine if you prefer.

b) Molar mass of glucose = 180.16 g/mol
(0.5 millimoles/liter of glucose)*(1 liter)*(1 mole / 1000 millimoles) = 0.0005 moles of glucose
(0.0005 moles glucose)*(180.16 g/mol)*(1000 ...

This solution provides assistance with these chemistry questions.
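The arithmetic in the answer generalizes to grams = molarity × volume × molar mass, and question 4 is the standard C1V1 = C2V2 dilution. A minimal sketch using the molar masses given above; the function names are illustrative:

```python
# Mass of solute needed: grams = molarity (mol/L) * volume (L) * molar mass (g/mol).
def grams_needed(molarity_mol_per_l, volume_l, molar_mass_g_per_mol):
    """Grams of solute to weigh out for the target solution."""
    return molarity_mol_per_l * volume_l * molar_mass_g_per_mol

# Question 1a: 1 liter of 1.5 M glycine (75.07 g/mol)
print(f"{grams_needed(1.5, 1.0, 75.07):.1f} g glycine")              # 112.6 g

# Question 1b: 1 liter of 0.5 mM glucose (180.16 g/mol)
print(f"{grams_needed(0.5e-3, 1.0, 180.16) * 1000:.1f} mg glucose")  # 90.1 mg

# Question 4: dilute a stock using C1*V1 = C2*V2, so V1 = C2*V2 / C1.
def stock_volume(c_stock, c_final, v_final):
    """Volume of stock (same units as v_final) to dilute up to v_final."""
    return c_final * v_final / c_stock

# 10 ml of 10 mM Tris from a 1 M stock:
print(f"{stock_volume(1.0, 0.010, 10.0):.2f} ml stock, brought to 10 ml")  # 0.10 ml
```

Note that both helpers return the amount to dissolve or dilute *up to* the final volume, which is exactly the point of question 3: the solute itself occupies volume, so you never start with the full volume of diluent.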
Surrounding the sun is a vast atmosphere of solar particles, through which magnetic fields swarm, solar flares erupt, and gigantic columns of material rise, fall and jostle each other around. Now, using NASA's Solar Terrestrial Relations Observatory, scientists have found that this atmosphere, called the corona, is even larger than thought, extending out some 5 million miles above the sun's surface -- the equivalent of 12 solar radii. This information has implications for NASA's upcoming Solar Probe Plus mission, due to launch in 2018 and go closer to the sun than any man-made technology ever has before.

These STEREO observations provide the first direct measurements of the inner boundary of the heliosphere -- the giant bubble sparsely filled with solar particles that surrounds the sun and all the planets. Combined with measurements from Voyager 1 of the outer boundary of the heliosphere, they define the extent of this entire local bubble.

"We've tracked sound-like waves through the outer corona and used these to map the atmosphere," said Craig DeForest of the Southwest Research Institute in Boulder, Colorado. "We can't hear the sounds directly through the vacuum of space, but with careful analysis we can see them rippling through the corona." The results were published in The Astrophysical Journal on May 12, 2014.

The researchers studied magnetosonic waves, a hybrid of sound waves and magnetic waves called Alfvén waves. Unlike sound waves on Earth, which oscillate several hundred times per second, these waves oscillate about once every four hours -- and are about 10 times the length of Earth. Tracking the magnetosonic waves showed DeForest and his team that the material throughout this extended space remained connected to the solar material much further in. That is to say that even out to 5 million miles from the sun, giant solar storms or coronal mass ejections can create ripple effects felt through the corona.
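The quoted figures (a period of about four hours, a size about "10 times the length of Earth") imply a back-of-envelope propagation speed. Treating that size as the wavelength, and "the length of Earth" as Earth's diameter, are interpretive assumptions for illustration, not statements from the article:

```python
# Back-of-envelope speed of the magnetosonic waves from the figures
# quoted in the article: period ~4 hours, size ~10 Earth diameters.
# Reading that size as the wavelength is an assumption.
EARTH_DIAMETER_KM = 12_742
period_s = 4 * 3600                       # ~4-hour oscillation period
wavelength_km = 10 * EARTH_DIAMETER_KM    # ~10 "lengths of Earth"

speed = wavelength_km / period_s
print(f"~{speed:.0f} km/s")  # about 9 km/s under these assumptions
```

Whatever the exact interpretation, the arithmetic makes the scale mismatch with terrestrial sound vivid: these are planet-sized waves with periods measured in hours, not fractions of a second.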
Beyond that boundary, however, solar material streams away in a steady flow called the solar wind -- out there, the material has separated from the star and its movement can't affect the corona.

Realizing that the corona extends much further than previously thought has important consequences for NASA's Solar Probe Plus, because the mission will travel to within 4 million miles of the sun. Scientists knew the mission would be gathering information closer to the sun than ever before, but couldn't be sure it would travel through the corona proper.

"This research provides confidence that Solar Probe Plus, as designed, will be exploring the inner solar magnetic system," said Marco Velli, a Solar Probe Plus scientist at NASA's Jet Propulsion Laboratory in Pasadena, California. "The mission will directly measure the density, velocity and magnetic field of the solar material there, allowing us to understand how motion and heat in the corona and solar wind are generated."

With direct access to the sun's atmosphere, Solar Probe Plus will provide unprecedented information on how the solar corona is heated and revolutionize our knowledge of the origin and evolution of the solar wind.

Susan Hendrix | EurekAlert!
Climate Science in Focus (Weather versus Climate)
- Grade Level:
- High School: Ninth Grade through Twelfth Grade
- Lesson Duration:
- 60 Minutes
- Common Core Standards:
- 6-8.RST.1, 6-8.RST.2, 6-8.RST.4, 6-8.RST.9, 9-10.RST.1, 9-10.RST.2
- State Standards:
- Next Generation Science Standards

Students will be able to:
1. Compare and contrast weather and climate.
2. Predict the effects of climate change on life.

In "Climate Science in Focus (Weather versus Climate)," students will explore the differences between weather and climate and how changes in the environment can impact the local climate. The unit is broken into eight days, with lessons requiring 58 minutes to complete. Designed around 9th grade Next Generation Science Standards, it is a unit easily adapted down for middle school or up for advanced high school classes. Teach the entire unit or pull out particular activities. This is lesson 2 of the unit.

The Earth consists of four systems: the atmosphere, hydrosphere, geosphere, and biosphere, which are interconnected. Changes to one part of the system can have consequences for the others. Changes to global or regional climate can be caused by changes in the sun's energy output or Earth's orbit, tectonic events, ocean circulation, volcanic activity, glaciers, vegetation, and human activity.

Water is essential for life on Earth. Relative water availability is a major factor in designating habitats for different living organisms. In the United States, issues such as agriculture and water rights are hot topics.

Current models predict that average global temperatures will continue to rise, even as regional climate changes remain complex and varied. These changes will have an impact on all of Earth's systems. Studies have shown that climate change is driven not only by natural effects but also by human activities. Knowledge of the factors that affect climate, coupled with responsible management of natural resources, is required for sustaining these Earth systems.
Long-term change can be anticipated using science-based predictive models, making science and engineering essential to understanding global climate change and its possible impacts. National Parks can serve as benchmarks for climate science trends and effects over time because they are protected areas largely devoid of human influence. Understanding current climate trends will help set students up to be successful in interpreting and engaging in discussions about climate change, which will lead to informed decision making.

Most of the materials for this unit are provided in the Stream Flow River Study Trunk or as downloadable files:
- Video: Earth: Climate and Weather
- Video: Time Lapse, Soda Springs Meadow, October 2012-September 2013
- 3-slide PowerPoint for teacher instruction
- Directions for students, "Carrying Capacity"
- Venn diagram for comparing weather and climate

Distribute the Do Now: How do animals adapt to their climate?

1. Show the video Earth: Climate and Weather. http://video.nationalgeographic.com/video/science/earth-sci/climate-weather-sci/
2. Monitor students as they complete a Venn diagram showing similarities and differences between weather and climate (Worksheet 2.1).

1. Have students brainstorm: What changes can be observed during different seasons? (plants, animals, weather, water, daylight, temperature, …) Guide the brainstorming activity. Record the list or have a volunteer record it.
2. Lecture/Notes: Presentation to define morphological, physiological, and behavioral traits of animals. Students should record notes and participate in class discussion.

1. Carrying capacity bucket demo (Procedure 2.1). Assign task: In three paragraphs, in your own words, describe carrying capacity. (Intro, Body, Concl.)
weather, climate, atmospheric circulation, climate change, feedback loops, physical process, chemical process, carrying capacity, morphological, physiological, behavioral traits, adaptation, redistribution

Distribute exit ticket: Based on what you know about traits of animals, predict some of the effects of climate change on animals.

Supports for Struggling Learners
Give students who are struggling one point in each circle of the Venn diagram. This will allow them to see an example of the types of differences between weather and climate that are desired. Or an alternate, unrelated example can be used to model.

Related Lessons or Education Materials
Other Lessons from this Unit:
Day 1 - Earth as a System
Day 2 - Weather vs Climate
Day 3 - Watershed
Day 5 - Field Trip
Day 6 - NPS Connections
Day 7 - Project Preparation
Day 8 - Evaluations
This is the best tl;dr I could make, original reduced by 86%. (I'm a bot) Climate Change Means 'Virtually No Male Turtles' Born In A Key Nesting Ground : The Two-Way Like many reptiles, the sex of a turtle is determined by how warm the egg is as it's being incubated. Scientists were surprised to find that "Virtually no male turtles" are being hatched in a key breeding ground in the northern Great Barrier Reef. "In the past 20 years since these turtles were hatched, there's been some sort of a drastic change, going from one male to seven females to now - one male for one hundred females." Extended Summary | FAQ | Feedback | Top keywords: turtle#1 female#2 Male#3 population#4 breed#5
Friday, 20 July 2018

ISS - Expedition 56 Mission patch. July 20, 2018

The Expedition 56 crew members continued their work Friday on more fertility research and microbe studies aboard the International Space Station. They also worked on science gear for a study seeking advanced therapies for diseases such as Alzheimer's and diabetes.

Commander Drew Feustel and Flight Engineer Serena Auñón-Chancellor examined biological samples for the Micro-11 fertility study. They looked at the samples through a microscope, and the samples were later stowed in a science freezer. The experiment seeks to determine if human reproduction would be possible off the Earth.

Image above: NASA astronaut Ricky Arnold works on gear inside the International Space Station. Image Credit: NASA.

Feustel also spent some time in the morning working on the Amyloid experiment to help doctors develop advanced treatments for Alzheimer's disease and diabetes. He collected amyloid fibril samples from the Cell Biology Experiment Facility and stowed them in a science freezer for spectroscopy and microscopic analysis back on Earth.

European astronaut Alexander Gerst and NASA astronaut Ricky Arnold were sampling the station's atmosphere and surfaces for a pair of microbe investigations today. Gerst collected microbe samples and stowed them in a freezer for molecular analysis on Earth to identify potential pathogens on the station. Arnold processed microbial DNA using the Biomolecule Sequencer, a device that enables DNA sequencing in microgravity, to identify microbes able to survive in microgravity.
Microbe samples: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=1663
Microbial DNA: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=7687
Biomolecule Sequencer: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=1917
Expedition 56: https://cms.nasa.gov/mission_pages/station/expeditions/expedition56/index.html
Space Station Research and Technology: https://www.nasa.gov/mission_pages/station/research/index.html
International Space Station (ISS): https://www.nasa.gov/mission_pages/station/main/index.html

Image (mentioned), Text, Credits: NASA/Mark Garcia.

Best regards, Orbiter.ch

Posted by Orbiter.ch at 14:22

NASA - JUNO Mission logo. July 20, 2018

This image captures a high-altitude cloud formation surrounded by swirling patterns in the atmosphere of Jupiter's North North Temperate Belt region. The North North Temperate Belt is one of Jupiter's many colorful, swirling cloud bands. Scientists have wondered for decades how deep these bands extend. Gravity measurements collected by Juno during its close flybys of the planet have now provided an answer. Juno discovered that these bands of flowing atmosphere actually penetrate deep into the planet, to a depth of about 1,900 miles (3,000 kilometers).

NASA's Juno spacecraft took this color-enhanced image at 10:11 p.m. PDT on July 15, 2018 (1:11 a.m. EDT on July 16), as the spacecraft performed its 14th close flyby of Jupiter. At the time, Juno was about 3,900 miles (6,200 kilometers) from the planet's cloud tops, above a latitude of 36 degrees.

Citizen scientist Jason Major created this image using data from the spacecraft's JunoCam imager. JunoCam's raw images are available for the public to peruse and process into image products at https://missionjuno.swri.edu/junocam/. More information about Juno is at https://www.nasa.gov/juno and http://missionjuno.swri.edu.
Image, Text, Credits: NASA/Jon Nelson/JPL-Caltech/SwRI/MSSS/Jason Major.

Posted by Orbiter.ch at 14:17

July 20, 2018

Storm chasing takes luck and patience on Earth -- and even more so on Mars.

Image above: Side-by-side movies show how dust has enveloped the Red Planet, courtesy of the Mars Color Imager (MARCI) wide-angle camera onboard NASA's Mars Reconnaissance Orbiter (MRO). Image Credits: NASA/JPL-Caltech/MSSS.

For scientists watching the Red Planet from data gathered by NASA's orbiters, the past month has been a windfall. "Global" dust storms, where a runaway series of storms creates a dust cloud so large it envelops the planet, only appear every six to eight years (that's three to four Mars years). Scientists still don't understand why or how exactly these storms form and evolve.

Mars Before and After Dust Storm

In June, one of these dust events rapidly engulfed the planet. Scientists first observed a smaller-scale dust storm on May 30. By June 20, it had gone global. For the Opportunity rover, that meant a sudden drop in visibility from a clear, sunny day to that of an overcast one. Because Opportunity runs on solar energy, scientists had to suspend science activities to preserve the rover's batteries. As of July 18th, no response has been received from the rover.

Luckily, all that dust acts as an atmospheric insulator, keeping nighttime temperatures from dropping down to lower than what Opportunity can handle. But the nearly 15-year-old rover isn't out of the woods yet: it could take weeks, or even months, for the dust to start settling. Based on the longevity of a 2001 global storm, NASA scientists estimate it may be early September before the haze has cleared enough for Opportunity to power up and call home. When the skies begin to clear, Opportunity's solar panels may be covered by a fine film of dust. That could delay a recovery of the rover as it gathers energy to recharge its batteries.
A gust of wind would help, but isn't a requirement for a full recovery.

Mars Before and After Dust Storm. Animation Credits: NASA/JPL-Caltech/MSSS

While the Opportunity team waits in earnest to hear from the rover, scientists on other Mars missions have gotten a rare chance to study this head-scratching phenomenon. The Mars Reconnaissance Orbiter, Mars Odyssey, and Mars Atmosphere and Volatile EvolutioN (MAVEN) orbiters are all tailoring their observations of the Red Planet to study this global storm and learn more about Mars' weather patterns. Meanwhile, the Curiosity rover is studying the dust storm from the Martian surface. Here's how each mission is currently studying the dust storm, and what we might learn from it:

Mars Odyssey

With the THEMIS instrument (Thermal Emission Imaging System), scientists can track Mars' surface temperature, atmospheric temperature, and the amount of dust in the atmosphere. This allows them to watch the dust storm grow, evolve, and dissipate over time. "This is one of the largest weather events that we've seen on Mars" since spacecraft observations began in the 1960s, said Michael Smith, a scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland who works on the THEMIS instrument. "Having another example of a dust storm really helps us to understand what's going on."

Since the dust storm began, the THEMIS team has increased the frequency of global atmospheric observations from every 10 days to twice per week, Smith said. One mystery they're still trying to solve: How these dust storms go global. "Every Mars year, during the dusty season, there are a lot of local- or regional-scale storms that cover one area of the planet," Smith said. But scientists aren't yet sure how these smaller storms sometimes grow to end up encircling the entire planet.

Mars Reconnaissance Orbiter (MRO)

Mars Reconnaissance Orbiter has two instruments studying the dust storm.
Each day, the Mars Color Imager (MARCI) maps the entire planet in mid-afternoon to track the evolution of the storm. Meanwhile, MRO's Mars Climate Sounder (MCS) instrument measures how the atmosphere's temperature changes with altitude. Since the end of May, the instruments have observed the onset and rapid expansion of a dust storm on Mars. With these data, scientists are studying how the dust storm changes the planet's atmospheric temperatures. Just as in Earth's atmosphere, changing temperature on Mars can affect wind patterns and even the circulation of the entire atmosphere. This provides a powerful feedback: Solar heating of the dust lofted into the atmosphere changes temperatures, which changes winds, which may amplify the storm by lifting more dust from the surface. Scientists want to know the details of the storm -- where is the air rising or falling? How do the atmospheric temperatures now compare to a storm-less year? And as with Mars Odyssey, the MRO team wants to know how these dust storms go global. "The very fact that you can start with something that's a local storm, no bigger than a small [U.S.] state, and then trigger something that raises more dust and produces a haze that covers almost the entire planet is remarkable," said Rich Zurek of NASA's Jet Propulsion Laboratory, Pasadena, California, the project scientist for MRO. Scientists want to find out why these storms arise every few years, which is hard to do without a long record of such events. It'd be as if aliens were observing Earth and seeing the climate effects of El Niño over many years of observations -- they'd wonder why some regions get extra rainy and some areas get extra dry in a seemingly regular pattern. Ever since the MAVEN orbiter entered Mars' orbit, "one of the things we've been waiting for is a global dust storm," said Bruce Jakosky, the MAVEN orbiter's principal investigator. But MAVEN isn't studying the dust storm itself.
Rather, the MAVEN team wants to study how the dust storm affects Mars' upper atmosphere, about 62 miles (more than 100 kilometers) above the surface -- where the dust doesn't even reach. MAVEN's mission is to figure out what happened to Mars' early atmosphere. We know that at some point billions of years ago, liquid water pooled and ran along Mars' surface, which means that its atmosphere must have been thicker and more insulating, similar to Earth's. Since MAVEN arrived at Mars in 2014, its investigations have found that this atmosphere may have been stripped away by a torrent of solar wind over several hundred million years, between 3.5 and 4.0 billion years ago. But there are still nuances to figure out, such as how dust storms like the current one affect how atmospheric molecules escape into space, Jakosky said. For instance, the dust storm acts as an atmospheric insulator, trapping heat from the Sun. Does this heating change the way molecules escape the atmosphere? It is also likely that, as the atmosphere warms, more water vapor rises high enough to be broken down by sunlight, with the solar wind sweeping the hydrogen atoms into space, Jakosky said. The team won't have answers for a while yet, but each of MAVEN's five orbits per day will continue to provide invaluable data. Most of NASA's spacecraft are studying the dust storm from above. The Mars Science Laboratory mission's Curiosity rover has a unique perspective: the nuclear-powered science machine is largely immune to the darkened skies, allowing it to collect science from within the beige veil enveloping the planet. "We're working double-duty right now," said JPL's Ashwin Vasavada, Curiosity's project scientist. "Our newly recommissioned drill is acquiring a fresh rock sample. But we are also using instruments to study how the dust storm evolves." Curiosity has a number of "eyes" that can determine the abundance and size of dust particles based on how they scatter and absorb light. 
That includes its Mastcam, ChemCam, and an ultraviolet sensor on REMS, its suite of weather instruments. REMS can also help study atmospheric tides -- shifts in pressure that move as waves across the entire planet's thin air. These tides change drastically based on where the dust is globally, not just inside Gale crater. The global storm may also reveal secrets about Martian dust devils and winds. Dust devils can occur when the planet's surface is hotter than the air above it. Heating generates whirls of air, some of which pick up dust and become dust devils. During a dust storm, there's less direct sunlight and lower daytime temperatures; this might mean fewer devils swirling across the surface. Even new drilling can advance dust storm science: watching the small piles of loose material created by Curiosity's drill is the best way of monitoring winds. Scientists think the dust storm will last at least a couple of months. Every time you spot Mars in the sky in the weeks ahead, remember how much data scientists are gathering to better understand the mysterious weather of the Red Planet. Martian Dust Storm Grows Global: Curiosity Captures Photos of Thickening Haze Mars Odyssey: https://mars.nasa.gov/odyssey/ Mars Reconnaissance Orbiter (MRO): https://www.nasa.gov/mission_pages/MRO/main/index.html Curiosity (Mars Science Laboratory or MSL): https://www.nasa.gov/mission_pages/msl/index.html Image (mentioned), Animation (mentioned), Video (NASA), Text, Credits: NASA/JoAnna Wendel/JPL/Andrew Good. Published by Orbiter.ch at 13:38 Blue Origin logo. July 20, 2018 New Shepard flew for the ninth time on July 18, 2018. During this mission, known as Mission 9 (M9), the escape motor was fired shortly after booster separation. The Crew Capsule was pushed hard by the escape test and we stressed the rocket to test that astronauts can get away from an anomaly at any time during flight. The mission was a success for both the booster and capsule.
Most importantly, astronauts would have had an exhilarating ride and safe landing. Blue Origin Mission 9 landing This isn’t the first time we’ve done this type of extreme testing on New Shepard. In October of 2012, we simulated a booster failure on the launch pad and had a successful escape. Then in October of 2016, we simulated a booster failure in-flight at Max Q, which is the most physically strenuous point in the flight for the rocket, and had a completely successful escape of the capsule. Replay of Mission 9 Webcast This test on M9 allowed us to finally characterize escape motor performance in the near-vacuum of space and guarantee that we can safely return our astronauts in any phase of flight. Also on M9, New Shepard carried science and research payloads from commercial companies, universities and space agencies. Learn more about the payloads on board: https://www.blueorigin.com/news/news/payload-manifest-on-mission-9 For more information about Blue Origin, visit: https://www.blueorigin.com/ Image, Video, Text, Credit: Blue Origin. Published by Orbiter.ch at 04:46 Wednesday, July 18, 2018 NASA - Chandra X-ray Observatory patch. July 18, 2018 Scientists may have observed, for the first time, the destruction of a young planet or planets around a nearby star. Observations from NASA’s Chandra X-ray Observatory indicate that the parent star is now in the process of devouring the planetary debris. This discovery gives insight into the processes affecting the survival of infant planets. Since 1937, astronomers have puzzled over the curious variability of a young star named RW Aur A, located about 450 light years from Earth. Every few decades, the star’s optical light has faded briefly before brightening again. In recent years, astronomers have observed the star dimming more frequently, and for longer periods.
Image above: This artist’s illustration depicts the destruction of a young planet or planets, which scientists may have witnessed for the first time using data from NASA’s Chandra X-ray Observatory. Image Credits: Illustration: NASA/CXC/M. Weiss; X-ray spectrum: NASA/CXC/MIT/H. M.Günther. Using Chandra, a team of scientists may have uncovered what caused the star's most recent dimming event: a collision of two infant planetary bodies, including at least one object large enough to be a planet. As the resulting planetary debris fell into the star, it would generate a thick veil of dust and gas, temporarily obscuring the star’s light. “Computer simulations have long predicted that planets can fall into a young star, but we have never before observed that,” says Hans Moritz Guenther, a research scientist in MIT’s Kavli Institute for Astrophysics and Space Research who led the study. “If our interpretation of the data is correct, this would be the first time that we directly observe a young star devouring a planet or planets.” The star’s previous dimming events may have been caused by similar smash-ups, of either two planetary bodies or large remnants of past collisions that met head-on and broke apart again. RW Aur A is located in the Taurus-Auriga Dark Clouds, which host stellar nurseries containing thousands of infant stars. Very young stars, unlike our relatively mature sun, are still surrounded by a rotating disk of gas and clumps of material ranging in size from small dust grains to pebbles, and possibly fledgling planets. These disks last for about 5 million to 10 million years. RW Aur A is estimated to be several million years old, and is still surrounded by a disk of dust and gas. This star and its binary companion star, RW Aur B, are both about the same mass as the sun. The noticeable dips in the optical brightness of RW Aur A that occurred every few decades each lasted for about a month. Then, in 2011, the behavior changed. 
The star dimmed again, this time for about six months. The star eventually brightened, only to fade again in mid-2014. In November 2016, the star returned to its full brightness, and then in January 2017 it dimmed again. Chandra was used to observe the star during an optically bright period in 2013, and then dim periods in 2015 and 2017, when a decrease in X-rays was also observed. Because the X-rays come from the hot outer atmosphere of the star, changes in the X-ray spectrum – the intensity of X-rays measured at different wavelengths – over these three observations were used to probe the density and composition of the absorbing material around the star. The team found that the dips in both optical and X-ray light are caused by dense gas obscuring the star’s light. The observation in 2017 showed strong emission from iron atoms, indicating that the disk contained at least 10 times more iron than in the 2013 observation during a bright period. Guenther and colleagues suggest the excess iron was created when two planetesimals, or infant planetary bodies, collided. If one or both planetary bodies are made partly of iron, their smash-up could release a large amount of iron into the star’s disk and temporarily obscure its light as the material falls into the star. Chandra X-ray Observatory. Animation Credits: NASA/CXC A less favored explanation is that small grains or particles such as iron can become trapped in parts of a disk. If the disk’s structure changes suddenly, such as when the star’s partner star passes close by, the resulting tidal forces might release the trapped particles, creating an excess of iron that can fall into the star. The scientists hope to make more observations of the star in the future, to see whether the amount of iron surrounding it has changed – a measure that could help researchers determine the size of the iron’s source. 
For example, if about the same amount of iron appears in a year or two, that may indicate it comes from a relatively massive source. “Much effort currently goes into learning about exoplanets and how they form, so it is obviously very important to see how young planets could be destroyed in interactions with their host stars and other young planets, and what factors determine if they survive,” Guenther says. Guenther is the lead author of a paper detailing the group’s results, which appears today in the Astronomical Journal. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations. Read more from NASA's Chandra X-ray Observatory: http://chandra.harvard.edu/photo/2018/rwaur/ For more Chandra images, multimedia and related materials, visit: http://www.nasa.gov/chandra Image (mentioned), Animation (mentioned), Text, Credits: NASA/Lee Mohon/Marshall Space Flight Center/Molly Porter/Chandra X-ray Center/Megan Watzke. Published by Orbiter.ch at 15:00 ISS - Expedition 56 Mission patch. July 18, 2018 Cancer and rodent studies were on the crew’s timeline today to help doctors and scientists improve the health of humans in space and on Earth. The crew also conducted an emergency drill aboard the International Space Station. Image above: NASA astronauts Serena Auñón-Chancellor and Drew Feustel begin cargo operations shortly after the SpaceX Dragon cargo craft arrived at the International Space Station packed with more than 5,900 pounds of research, crew supplies and hardware. Image Credit: NASA. Flight Engineer Serena Auñón-Chancellor examined endothelial cells through a microscope for the AngieX Cancer Therapy study. The new cancer research seeks to test a safer, more effective treatment that targets tumor cells and blood vessels.
Commander Drew Feustel partnered with astronaut Alexander Gerst and checked on mice being observed for the Rodent Research-7 (RR-7) experiment. RR-7 is exploring how microgravity impacts microbes living inside organisms. Astronaut Ricky Arnold and Gerst collected and stowed their blood samples for a pair of ongoing human research studies. Arnold went on to work a series of student investigations dubbed NanoRacks Module-9 exploring a variety of topics including botany, biology and physics. Image above: Flying over South Pacific Ocean, seen by EarthCam on ISS, speed: 27'571 Km/h, altitude: 421,54 Km, image captured by Roland Berga (on Earth in Switzerland) from International Space Station (ISS) using ISS-HD Live application with EarthCam's from ISS on July 18, 2018 at 21:14 UTC. Image Credits: Orbiter.ch Aerospace/Roland Berga. During the afternoon, all six Expedition 56 crew members joined forces to practice a simulated emergency. The orbital lab residents went over escape routes and safety procedures while coordinating communication and decision-making with mission controllers in Houston and Moscow. AngieX Cancer Therapy: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=7502 Rodent Research-7 (RR-7): https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=7425 NanoRacks Module-9: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/search.html?#q=%22module-9%22&i=&p=&c=&g=&s= Expedition 56: https://www.nasa.gov/mission_pages/station/expeditions/expedition56/index.html Space Station Research and Technology: https://www.nasa.gov/mission_pages/station/research/index.html International Space Station (ISS): https://www.nasa.gov/mission_pages/station/main/index.html Images (mentioned), Text, Credits: NASA/Mark Garcia/Orbiter.ch Aerospace/Roland Berga. Best regards, Orbiter.ch Published by Orbiter.ch at 14:22 NASA - STEREO Mission logo.
July 18, 2018 In 1610, Galileo redesigned the telescope and discovered Jupiter’s four largest moons. Nearly 400 years later, NASA’s Hubble Space Telescope used its powerful optics to look deep into space — enabling scientists to pin down the age of the universe. Suffice it to say that getting a better look at things produces major scientific advances. In a paper published on July 18 in The Astrophysical Journal, a team of scientists led by Craig DeForest — solar physicist at Southwest Research Institute’s branch in Boulder, Colorado — demonstrates that this historical trend still holds. Using advanced algorithms and data-cleaning techniques, the team discovered never-before-detected, fine-grained structures in the outer corona — the Sun’s million-degree atmosphere — by analyzing images taken by NASA’s STEREO spacecraft. The new results also provide foreshadowing of what might be seen by NASA’s Parker Solar Probe, which after its launch in the summer of 2018 will orbit directly through that region. STEREO spacecraft. Image Credit: NASA The outer corona is the source of the solar wind, the stream of charged particles that flow outward from the Sun in all directions. Measured near Earth, the magnetic fields embedded within the solar wind are intertwined and complex, but what causes this complexity remains unclear. “In deep space, the solar wind is turbulent and gusty,” said DeForest. “But how did it get that way? Did it leave the Sun smooth, and become turbulent as it crossed the solar system, or are the gusts telling us about the Sun itself?” Answering this question requires observing the outer corona — the source of the solar wind — in extreme detail. If the Sun itself causes the turbulence in the solar wind, then we should be able to see complex structures right from the beginning of the wind’s journey. But existing data didn’t show such fine-grained structure — at least, until now.
“Previous images of the corona showed the region as a smooth, laminar structure,” said Nicki Viall, solar physicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and coauthor of the study. “It turns out, that apparent smoothness was just due to limitations in our image resolution.” To understand the corona, DeForest and his colleagues started with coronagraph images — pictures of the Sun’s atmosphere produced by a special telescope that blocks out light from the (much brighter) surface. How to Read a NASA STEREO Image Video above: This video shows a coronagraph image taken by the STEREO spacecraft in 2012, highlighting coronal streamers, the solar wind and a coronal mass ejection (CME). Video Credits: NASA's Goddard Space Flight Center/Joy Ng. These images were generated by the COR2 coronagraph aboard NASA’s Solar and Terrestrial Relations Observatory-A, or STEREO-A, spacecraft, which circles the Sun between Earth and Venus. In April 2014, STEREO-A was about to pass behind the Sun, and scientists wanted to get some interesting data before communications were briefly interrupted. So they ran a special three-day data collection campaign during which COR2 took longer and more frequent exposures of the corona than it usually does. These long exposures allow more time for light from faint sources to strike the instrument’s detector — allowing it to see details it would otherwise miss. But the scientists didn’t just want longer-exposure images — they wanted them to be higher resolution. Options were limited. The instrument was already in space; unlike Galileo, they couldn’t tinker with the hardware itself. Instead, they took a software approach, squeezing out the highest quality data possible by improving COR2’s signal-to-noise ratio. What is signal-to-noise ratio? The signal-to-noise ratio is an important concept in all scientific disciplines.
It measures how well you can distinguish the thing you care about measuring — the signal — from the things you don’t — the noise. For example, let’s say that you’re blessed with great hearing. You notice the tiniest of mouse-squeaks late at night; you can eavesdrop on the whispers of huddled schoolchildren twenty feet away. Your hearing is impeccable — when noise is low. But it’s a whole different ball game when you’re standing in the front row of a rock concert. The other sounds in the environment are just too overpowering; no matter how carefully you listen, mouse-squeaks and whispers (the signal, in this case) can’t cut through the music (the noise). The problem isn’t your hearing — it’s the poor signal-to-noise ratio. COR2’s coronagraphs are like your hearing. The instrument is sensitive enough to image the corona in great detail, but in practice its measurements are polluted by noise — from the space environment and even the wiring of the instrument itself. DeForest and his colleagues’ key innovation was in identifying and separating out that noise, boosting the signal-to-noise ratio and revealing the outer corona in unprecedented detail. The first step towards improving signal-to-noise ratio had already been taken: longer-exposure images. Longer exposures allow more light into the detector and reduce the noise level — the team estimates noise reduction by a factor of 2.4 for each image, and a factor of 10 when combining them over a 20-minute period. But the remaining steps were up to sophisticated algorithms, designed and tested to extract out the true corona from the noisy measurements. They filtered out light from background stars (which create bright spots in the image that are not truly part of the corona). They corrected for small (few-millisecond) differences in how long the camera’s shutter was open. They removed the baseline brightness from all the images, and normalized it so brighter regions wouldn’t wash out dimmer ones. 
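The noise reduction from stacking exposures follows the familiar square-root law: averaging N independent exposures cuts random noise by roughly √N. The quick check below uses simulated numbers, not STEREO data, just to make that arithmetic concrete:

```python
import random
import statistics

def noise_after_averaging(n_frames, sigma=1.0, n_trials=2000, seed=42):
    """Estimate the residual noise left after averaging n_frames
    independent noisy exposures of the same (zero) signal."""
    rng = random.Random(seed)
    stacks = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_frames))
        for _ in range(n_trials)
    ]
    return statistics.pstdev(stacks)

single = noise_after_averaging(1)    # one exposure: noise is about sigma
stacked = noise_after_averaging(16)  # sixteen exposures averaged together
print(round(single / stacked, 1))    # close to sqrt(16) = 4
```

The same scaling is why combining many long exposures over a 20-minute window beats down the noise far more than any single frame can.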
But one of the most challenging obstacles is inherent to the corona: motion blur due to the solar wind. To overcome this source of noise, DeForest and colleagues ran a special algorithm to smooth their images in time. Animations above: Views of the solar wind from NASA's STEREO spacecraft (left) and after computer processing (right). Scientists used an algorithm to dim the appearance of bright stars and dust in images of the faint solar wind. Animations Credits: NASA’s Goddard Space Flight Center/Craig DeForest, SwRI. Smoothing in time — with a twist If you’ve ever done a “double-take,” you know a thing or two about smoothing in time. A double-take — taking a second glance, to verify your first one — is just a low-tech way of combining two “measurements” taken at different times, into one measurement that you can be more confident in. Smoothing in time turns this idea into an algorithm. The principle is simple: take two (or more) images, overlap them, and average their pixel values together. Random differences between the images will eventually cancel out, leaving behind only what is consistent between them. But when it comes to the corona, there’s a problem: it’s a dynamic, persistently moving and changing structure. Solar material is always moving away from the Sun to become the solar wind. Smoothing in time would create motion blur — the same kind of blurring you see in photographs of moving objects. That’s a problem if your goal is to see fine detail. To undo motion blur from the solar wind, the scientists used a novel procedure: while they did their smoothing, they estimated the speed of the solar wind and shifted the images along with it. To understand how this approach works, think about taking snapshots of the freeway as cars drive past. If you simply overlapped your images, the result would be a big blurry mess — too much has changed between each snapshot. 
But if you could figure out the speed of traffic and shift your images to follow along with it, suddenly the details of specific cars would become visible. For DeForest and his coauthors, the cars were the fine-scale structures of the corona, and the freeway traffic was the solar wind. Of course there are no speed limit signs in the corona to tell you how fast things are moving. To figure out exactly how much to shift the images before averaging, they scooted the images pixel-by-pixel, correlating them with one another to compute how similar they were. Eventually they found the sweet spot, where the overlapping parts of the images were as similar as possible. The amount of shift corresponded to an average solar wind speed of about 136 miles per second. Shifting each image by that amount, they lined up the images and smoothed, or averaged them together. “We smoothed, not just in space, not just in time, but in a moving coordinate system,” DeForest said. “That allowed us to create motion blur that was determined not by the speed of the wind, but by how rapidly the features changed in the wind.” Now DeForest and his collaborators had high-quality images of the corona — and a way to tell how much it was changing over time. The most surprising finding wasn’t a specific physical structure — it was the simple presence of physical structure in and of itself. Compared with the dynamic, turbulent inner corona, scientists had considered the outer corona to be smooth and homogenous. But that smoothness was just an artifact of poor signal-to-noise ratio: “When we removed as much noise as possible, we realized that the corona is structured, all the way down to the optical resolution of the instrument,” DeForest said. Like the individual blades of grass you see only when you’re up close, the corona’s complex physical structure was revealed in unprecedented detail. And from among that physical detail, three key findings emerged. 
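The shift-then-average procedure described above can be sketched with toy one-dimensional "frames". This is a minimal illustration of the idea, not the team's actual pipeline: candidate integer shifts are scored with a simple overlap product, the best-matching shift is applied, and the aligned frames are averaged so persistent structure survives while uncorrelated differences cancel:

```python
def best_shift(ref, frame, max_shift):
    """Score integer shifts of `frame` against `ref` and return the best."""
    def score(s):
        pairs = [(ref[i], frame[i - s])
                 for i in range(len(ref)) if 0 <= i - s < len(frame)]
        return sum(a * b for a, b in pairs) / len(pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

def shift_and_average(ref, frame, max_shift):
    """Align `frame` to `ref` at the best shift, then average the overlap."""
    s = best_shift(ref, frame, max_shift)
    return [(ref[i] + frame[i - s]) / 2
            for i in range(len(ref)) if 0 <= i - s < len(frame)]

# A bright feature at index 2 drifts to index 5 in the next frame,
# as if carried along by a steady wind:
earlier = [0, 0, 9, 0, 0, 0, 0, 0]
later   = [0, 0, 0, 0, 0, 9, 0, 0]
print(best_shift(later, earlier, max_shift=4))   # the feature moved 3 pixels
print(shift_and_average(later, earlier, max_shift=4))
```

In the real analysis the "frames" are two-dimensional coronagraph images and the recovered shift corresponds to the outward flow speed of the wind, but the correlate-then-average logic is the same.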
The structure of coronal streamers Coronal streamers — also known as helmet streamers, because they resemble a knight’s pointy helmet — are bright structures that develop over regions of the Sun with enhanced magnetic activity. Readily observed during solar eclipses, magnetic loops on the Sun’s surface are stretched out to pointy tips by the solar wind and can erupt into coronal mass ejections, or CMEs, the large explosions of matter that eject parts of the Sun into surrounding space. Image above: Coronal streamers observed by the Solar and Heliospheric Observatory (SOHO) spacecraft on Feb. 14, 2002. DeForest and his coauthors’ work indicates that these structures are actually composed of many individual fine strands. Image Credits: NASA/LASCO. DeForest and his coauthors’ processing of STEREO observations reveals that streamers themselves are far more structured than previously thought. “What we found is that there is no such thing as a single streamer,” DeForest said. “The streamers themselves are composed of myriad fine strands that together average to produce a brighter feature.” The Alfvén zone Where does the corona end and the solar wind begin? One definition points to the Alfvén surface, a theoretical boundary where the solar wind starts moving faster than waves can travel backward through it. At this boundary region, disturbances happening at a point farther away in the traveling solar material can never move backwards fast enough to reach the Sun. “Material that flows out past the Alfvén surface is lost to the Sun forever,” DeForest said. Physicists have long believed the Alfvén surface was just that — a surface, or sheet-like layer where the solar wind suddenly reached a critical speed. But that’s not what DeForest and colleagues found. “What we conclude is that there isn’t a clean Alfvén surface,” DeForest said. 
“There’s a wide ‘no-man’s land’ or ‘Alfvén zone’ where the solar wind gradually disconnects from the Sun, rather than a single clear boundary.” Animation above: A detailed view of the solar corona from the STEREO-A coronagraph after extensive data-cleaning. Animation Credits: Craig DeForest, SwRI. The observations reveal a patchy framework where, at a given distance from the Sun, some plasma is moving fast enough to stop backward communication, and nearby streams are not. The streams are close enough, and fine enough, to jumble the natural boundary of the Alfvén surface to create a wide, partially-disconnected region between the corona and the solar wind. Exploring the Unknown with Parker Solar Probe The newly processed images from STEREO reveal evidence for a new, unsuspected “no-man’s land” between the corona and solar wind: the so-called “Alfvén zone.” This result arrives just in time for Parker Solar Probe, NASA’s mission to touch the Sun, which launches in August 2018. Parker Solar Probe will fly through this newly identified territory and directly explore the environment within it. A mystery at 10 solar radii But the close look at coronal structure also raised new questions. The technique used to estimate the speed of the solar wind pinpointed the altitudes, or distances from the Sun’s surface, where things were changing rapidly. And that’s when the team noticed something funny. “We found that there’s a correlation minimum around 10 solar radii,” DeForest said. At a distance of 10 solar radii, even back-to-back images stopped matching up well. But they became more similar again at greater distances — meaning that it’s not just about getting farther away from the Sun. It’s as if things suddenly change once they hit 10 solar radii. “The fact that the correlation is weaker at 10 solar radii means that some interesting physics is happening around there,” DeForest said.
“We don’t know what it is yet, but we do know that it is going to be interesting.” Where we go from here The findings create headway in a long-standing debate over the source of the solar wind’s complexity. While the STEREO observations don’t settle the question, the team’s methodology opens up a missing link in the Sun-to-solar-wind chain. “We see all of this variability in the solar wind just before it hits the Earth’s magnetosphere, and one of our goals was to ask if it was even possible that the variability was formed at the Sun. It turns out the answer is yes,” Viall said. “It allows us for the first time to really probe the connectivity through the corona and adjust how tangled we think the magnetic field gets in the corona versus the solar wind,” DeForest added. These first observations also provide key insight into what NASA’s upcoming Parker Solar Probe will find, as the first ever mission to gather measurements from within the outer solar corona. That spacecraft will travel to a distance of 8.86 solar radii, right into the region where interesting things may be found. DeForest and colleagues’ results allow them to make predictions of what Parker Solar Probe may observe in this region. “We should expect steep fluctuations in density, magnetic fluctuations and reconnection everywhere, and no well-defined Alfvén surface,” DeForest said. Complemented by Parker Solar Probe’s in situ measurements, long exposure imaging and noise reduction algorithms will become even more valuable to our understanding of our closest star. The study was supported by a grant from NASA’s Living With a Star - Targeted Research and Technology program. 
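For reference, the boundary at the heart of this story is set by the Alfvén wave speed. The standard magnetohydrodynamic expression, v_A = B / √(μ0 ρ), is not spelled out in the article, and the field strength and plasma density below are purely illustrative values, not STEREO measurements:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (SI units)

def alfven_speed(b_tesla, density_kg_m3):
    """Alfven wave speed: v_A = B / sqrt(mu0 * rho)."""
    return b_tesla / math.sqrt(MU0 * density_kg_m3)

def past_alfven_point(wind_speed_m_s, b_tesla, density_kg_m3):
    """True once the wind outruns backward-travelling Alfven waves --
    material past this point is lost to the Sun forever."""
    return wind_speed_m_s > alfven_speed(b_tesla, density_kg_m3)

# Illustrative plasma: a 1e-5 tesla field in material of 1e-15 kg/m^3
# gives an Alfven speed of a few hundred kilometers per second.
v_a = alfven_speed(1e-5, 1e-15)
print(round(v_a / 1000), "km/s")
print(past_alfven_point(4e5, 1e-5, 1e-15))  # a 400 km/s wind has escaped
```

The "Alfvén zone" result says this threshold is not crossed at one clean radius: neighboring streams with different B and ρ cross it at different distances, smearing the surface into a wide region.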
Solar and Terrestrial Relations Observatory-A (STEREO-A): http://nasa.gov/stereo Learn more about NASA’s STEREO mission: https://www.nasa.gov/mission_pages/stereo/mission/index.html Parker Solar Probe: https://www.nasa.gov/content/goddard/parker-solar-probe Images (mentioned), Animations (mentioned), Video (mentioned), Text, Credits: NASA/Rob Garner/Goddard Space Flight Center, by Miles Hatfield. Published by Orbiter.ch at 14:11
Functions and subroutines (subs) are very similar; both enable you to create a reusable block of code that you can call from other locations in your application. The difference between a function and a subroutine is that a function can return data whereas a sub doesn't. Together, functions and subroutines are referred to as methods. They can be parameterized; that is, you can pass in additional information that can be used inside the function or sub.

In VB.NET:

' Define a function
Public Function FunctionName([parameterList]) As DataType

' Define a subroutine
Public Sub SubName([parameterList])

In C#:

// Define a function
public datatype FunctionName([parameterList])

// Define a subroutine
public void SubName([parameterList])

The complete first line, starting with Public, is referred to as the method signature because it defines the look of the method, including its name and its parameters. The Public keyword (public in C#) is called an access modifier and defines to what extent other web pages or code files can see this method. The name of the method is followed by parentheses, which in turn can contain an optional parameter list. Both functions and subroutines can have a parameter list, which enables you to define the name and data type of the variables that are passed to the method. Inside the method you can access these variables as you would normal variables.
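The same function-versus-subroutine distinction exists outside the .NET languages. As a hedged comparison, here is a minimal Python sketch: Python has no separate Sub keyword, so a "subroutine" is simply a function that returns no data (None), and all names below are illustrative only:

```python
# Python has no separate Sub keyword: a "subroutine" is simply a
# function that returns no data (None). All names are illustrative.

def add(a, b):
    """A function: accepts a parameter list and returns data."""
    return a + b

def log_message(message):
    """A subroutine-style method: does its work but returns nothing."""
    print(f"LOG: {message}")

result = add(2, 3)    # a function hands a value back to the caller
log_message("done")   # a sub is called only for its side effect
print(result)         # 5
```

Either way, parameters behave the same: inside the method body they are ordinary variables initialized from the caller's arguments.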
A new radar image compiled from seven Titan fly-bys over the last year and a half shows a north pole pitted with giant lakes and seas, at least one of them larger than Lake Superior in the USA, the largest freshwater lake on Earth. Approximately 60% of Titan's north polar region, above 60° north, has been mapped by Cassini's radar instrument. About 14% of the mapped region is covered by what scientists interpret as liquid hydrocarbon lakes.

[Image: Titan's north polar region]

"This is our version of mapping Alaska, the northern parts of Canada, Greenland, Scandinavia and Northern Russia," said Rosaly Lopes, Cassini radar scientist at the Jet Propulsion Laboratory, USA. "It is like mapping these regions of Earth for the first time." Lakes and seas are very common at the high northern latitudes of Titan, which is in winter now. Scientists say that as methane and ethane rain there, these liquids collect on the surface, filling the lakes and seas. Those lakes and seas then carve meandering rivers and channels on the moon's surface. Now Cassini is moving into unknown territory, down to the south pole of Titan. "We want to see if there are more lakes present there. Titan is indeed the land of lakes and seas, but we want to know if this is true now for the south pole as well," said Lopes. "We know there is at least one large lake near the south pole, but it will be interesting to see if there's a big difference between the north and south polar regions." It is summer at Titan's south pole, but winter should roll over that region in 2017. A season on Titan lasts nearly 7.5 years, one quarter of a Saturn year, which is 29.5 years long. Monitoring seasonal change helps scientists understand the processes at work there. Scientists are making progress in understanding how the lakes may have formed. On Earth, lakes fill low spots or are created when the local topography intersects a groundwater table.
Lopes and her colleagues think that the depressions containing the lakes on Titan may have been formed by volcanism or by a type of erosion (called karstic) of the surface, leaving a depression where liquids can accumulate. This type of lake is common on Earth. "The lakes we are observing on Titan appear to be in varying states of fullness, suggesting their involvement in a complex hydrologic system akin to Earth's water cycle. This makes Titan unique among the extra-terrestrial bodies in our solar system," said Alex Hayes, a graduate student who studies Cassini radar data at the California Institute of Technology in the USA. "The lakes we have seen so far vary in size from the smallest observable, approximately 1 square km, to greater than 100 000 square km, which is slightly larger than the Great Lakes in the midwestern USA," Hayes said. "Of the roughly 400 observed lakes, 70% of their area is taken up by large ‘seas’ greater than 26 000 square km." Jean-Pierre Lebreton | alfa
Permanent Magnet - Magnetic Field. http://www.diracdelta.co.uk/science/source/m/a/magnetic%20field/source.html

Magnets have two poles called North and South. Similar (like) magnetic poles repel; unlike magnetic poles attract. A magnet attracts a piece of iron. The more important of the two properties of attraction and repulsion is repulsion: the only way to tell if an object is magnetised is to see if it repels another magnetised object.

The strength and direction of a magnetic field is represented by magnetic field lines. Field lines by convention go from North to South. A magnetic field is three-dimensional, although this is not often shown on a drawing of magnetic field lines.

A magnetic field exists around all wires carrying a current. When there is no current, the compass needles in the diagram shown line up with the Earth’s magnetic field. A current through the wire produces a circular magnetic field around it.

The magnetic fields from each of the turns in a coil of wire add together, so the total magnetic field is much stronger. This produces a field similar to that of a bar magnet. A coil of wire like this is often called a solenoid.

An electromagnet consists of a coil of wire, through which a current can be passed, wrapped around a soft iron core. This core of magnetic material increases the strength of the field due to the coil. ‘Soft’ iron is easily magnetised and easily demagnetised; it does not retain its magnetism after the current is switched off. Steel, on the other hand, is hard to magnetise and demagnetise, and so it retains its magnetism. It is used for permanent magnets.
The strength of an electromagnet depends on:
- The size of the current flowing through the coil
- The number of turns in the coil
- The material inside the coil

Domains – http://hyperphysics.phy-astr.gsu.edu/hbase/solids/ferro.html#c4

Ferromagnetic materials exhibit a long-range ordering phenomenon at the atomic level which causes the unpaired electron spins to line up parallel with each other in a region called a domain. Within the domain, the magnetic field is intense, but in a bulk sample the material will usually be unmagnetized because the many domains will themselves be randomly oriented with respect to one another. The main implication of the domains is that there is already a high degree of magnetization in ferromagnetic materials within individual domains, but that in the absence of external magnetic fields those domains are randomly oriented. A modest applied magnetic field can cause a larger degree of alignment of the magnetic moments with the external field, giving a large multiplication of the applied field. Iron, nickel, cobalt and some of the rare earths (gadolinium, dysprosium) exhibit a unique magnetic behavior which is called ferromagnetism because iron (ferrum in Latin) is the most common and most dramatic example.
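The three factors above can be made concrete with the textbook formula for the field inside a long solenoid, B = μr · μ0 · (N/L) · I. The following sketch is illustrative only (the numbers, including the relative permeability of 200 for a soft-iron core, are assumed example values, not from the source):

```java
public class Solenoid {
    static final double MU_0 = 4e-7 * Math.PI; // permeability of free space, T*m/A

    // B = mu_r * mu_0 * (N / L) * I for a long solenoid: more current,
    // more turns per metre, or a ferromagnetic core each strengthen the field.
    static double fieldTesla(double turns, double lengthMetres,
                             double currentAmps, double relativePermeability) {
        return relativePermeability * MU_0 * (turns / lengthMetres) * currentAmps;
    }

    public static void main(String[] args) {
        double airCore  = fieldTesla(500, 0.10, 2.0, 1.0);   // ~0.0126 T
        double ironCore = fieldTesla(500, 0.10, 2.0, 200.0); // 200x stronger
        System.out.printf("air core:  %.4f T%n", airCore);
        System.out.printf("iron core: %.4f T%n", ironCore);
    }
}
```

Doubling the current or the number of turns doubles B, while swapping the air core for soft iron multiplies it by the core's relative permeability, matching the three dependencies listed above.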
Too much of a good thing could be harmful to the environment. For years, scientists have known about silver’s ability to kill harmful bacteria and, recently, have used this knowledge to create consumer products containing silver nanoparticles. Now, a University of Missouri researcher has found that silver nanoparticles also may destroy benign bacteria that are used to remove ammonia from wastewater treatment systems. The study was funded by a grant from the National Science Foundation. Several products containing silver nanoparticles already are on the market, including socks containing silver nanoparticles designed to inhibit odor-causing bacteria and high-tech, energy-efficient washing machines that disinfect clothes by generating the tiny particles. The positive effects of that technology may be overshadowed by the potential negative environmental impact. “Because of the increasing use of silver nanoparticles in consumer products, the risk that this material will be released into sewage lines, wastewater treatment facilities, and, eventually, to rivers, streams and lakes is of concern,” said Zhiqiang Hu, assistant professor of civil and environmental engineering in MU’s College of Engineering. “We found that silver nanoparticles are extremely toxic. The nanoparticles destroy the benign species of bacteria that are used for wastewater treatment. It basically halts the reproduction activity of the good bacteria.” Hu said silver nanoparticles generate more unique chemicals, known as highly reactive oxygen species, than do larger forms of silver. These oxygen species chemicals likely inhibit bacterial growth. For example, the use of wastewater treatment “sludge” as land-application fertilizer is a common practice, according to Hu. If high levels of silver nanoparticles are present in the sludge, soil used to grow food crops may be harmed. Hu is launching a second study to determine the levels at which silver nanoparticles become toxic.
He will determine how silver nanoparticles affect wastewater treatment processes by introducing nanomaterial into wastewater and sludge. He will then measure microbial growth to determine the nanosilver levels that harm wastewater treatment and sludge digestion. The Water Environment Research Foundation recently awarded Hu $150,000 to determine when silver nanoparticles start to impair wastewater treatment. Hu said nanoparticles in wastewater can be better managed and regulated. Work on the follow-up research should be completed by 2010. Bryan E. Jones | EurekAlert!
But a new study suggests that, as a way to fight global warming, the effectiveness of this strategy depends heavily on where these trees are planted. In particular, tropical forests are very efficient at keeping the Earth at a happy, healthy temperature. The researchers, including Ken Caldeira of Carnegie’s Department of Global Ecology and Govindasamy Bala at Lawrence Livermore National Laboratory, found that because tropical forests store large amounts of carbon and produce reflective clouds, they are especially good at cooling the planet. In contrast, forests in snowy areas can warm the Earth, because their dark canopy absorbs sunlight that would otherwise be reflected back to space by a bright white covering of snow. The work simulates the effects of large-scale deforestation, and accounts for the positive and negative climate effects of tree cover at different latitudes. The result, which appears in this week’s early online edition of the Proceedings of the National Academy of Sciences, makes a strong case for protecting and restoring tropical forests. "Tropical forests are like Earth’s air conditioner," Caldeira said. "When it comes to rehabilitating forests to fight global warming, carbon dioxide might be only half of the story; we also have to account for whether they help to reflect sunlight by producing clouds, or help to absorb it by shading snowy tundra." Forests in colder, sub-polar latitudes evaporate less water and are less effective at producing clouds. As a result, the main climate effect of these forests is to increase the absorption of sunlight, which can overwhelm the cooling effect of carbon storage. However, Caldeira believes it would be counterproductive to cut down forests in snowy areas, even if it could help to combat global warming. "A primary reason we are trying to slow global warming is to protect nature," he explains. "It just makes no sense to destroy natural ecosystems in the name of saving natural ecosystems." 
Ken Caldeira | EurekAlert!
Java 5 Feature: An intro to Autoboxing

Boxing is placing a primitive data type within an object so that the primitive can be used like an object. For example, a List, prior to JDK 1.5, can't store primitive data types. So, to store int data in a List, manual boxing was required (i.e. from int to Integer). Similarly, to retrieve the data back, an unboxing step was required to convert the Integer back to an int.

Autoboxing is the term for treating a primitive type as an object type; the compiler automatically supplies the extra code needed to perform the type conversion. For example, JDK 1.5 now allows the programmer to create an ArrayList of ints. This does not contradict what was said above, for an ArrayList still only lists objects, and it cannot list primitive types. But now, when Java expects an object but receives a primitive type, it immediately converts that primitive type to an object. This action is called autoboxing, because it is boxing that is done automatically and implicitly instead of requiring the programmer to do so manually. Unboxing is the term for treating an object type as a primitive type without any extra code. For example, in versions of Java prior to 1.5, the following code did not compile:

Integer i = new Integer(9);
Integer j = new Integer(13);
Integer k = i + j; // error prior to JDK 1.5

Originally, the compiler would not accept the last line: as Integers are objects, mathematical operators such as + were not meaningfully defined for them. But the following code would of course be accepted without complaint:

int i = 9;
int j = 13;
int k = i + j;

Since JDK 1.5 the mixed case also works:

int x = 4;
int y = 5;
// Integer qBox = new Integer(x + y);
Integer qBox = x + y; // would have been an error, but is okay now; equivalent to the previous line

The values are added, and then the sum is autoboxed into a new Integer.

Pros and Cons:
- Autoboxing unclutters your code, but comes with important considerations in terms of performance and sometimes unexpected behavior.
- Understand what == means.
For objects, == compares identity; for primitive types, it compares value. In the case of auto-unboxing, the value-based comparison happens.
- If the Integer object is null, assigning it to an int will result in a runtime exception – NullPointerException – being thrown.
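Both pitfalls can be shown in a few lines. A runnable sketch (variable names and values are illustrative; note that by default Java only caches boxed Integers in the range -128 to 127, so 1000 boxes to distinct objects):

```java
public class AutoboxingPitfalls {
    public static void main(String[] args) {
        Integer a = 1000; // autoboxed via Integer.valueOf
        Integer b = 1000; // outside the default -128..127 cache: a distinct object
        System.out.println(a == b);      // false: identity comparison of two objects
        System.out.println(a.equals(b)); // true: value comparison

        int c = 1000;
        System.out.println(a == c);      // true: a is auto-unboxed, values compared

        Integer maybeNull = null;
        try {
            int d = maybeNull;           // auto-unboxing of null...
            System.out.println(d);
        } catch (NullPointerException e) {
            System.out.println("unboxing null threw NullPointerException");
        }
    }
}
```

The mixed `Integer == int` case is safe because unboxing forces a value comparison; it is the `Integer == Integer` case that silently compares references.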
API: How To Make A Call Using PHP - YouTube
Published on Sep 17, 2013
A UCSB geochemist uses helium and lead isotopes to gain insight into the makeup of the planet’s deep interior

A UC Santa Barbara geochemist studying Samoan volcanoes has found evidence of the planet’s early formation still trapped inside the Earth. Known as hotspots, volcanic island chains such as Samoa can preserve ancient primordial signatures from the early solar system that have somehow survived billions of years. Matthew Jackson, an associate professor in UCSB’s Department of Earth Science, and colleagues utilized high-precision lead and helium isotope measurements to unravel the chemical composition and geometry of the deep mantle plume feeding Samoa’s volcanoes. Their findings appear today in the journal Nature. In most cases, volcanoes are located at the point where two tectonic plates meet, and are created when those plates collide or diverge. Hotspot volcanoes, however, are not located at plate boundaries but rather represent anomalous melting in the interior of the plates. Such intraplate volcanoes form above a plume-fed hotspot where the Earth’s mantle is melting. The plate moves over time — at approximately the rate human fingernails grow (3 inches a year) — and eventually the volcano moves off the hotspot and becomes extinct. Another volcano forms in its place over the hotspot and the process repeats itself until a string of volcanoes evolves. “So you end up with this linear trend of age-progressive volcanoes,” Jackson said. “On the Pacific plate, the youngest is in the east and as you go to the west, the volcanoes are older and more deeply eroded. Hawaii has two linear trends of volcanoes — most underwater — which are parallel to each other. There’s a southern trend and a northern trend.” Because the volcanic composition of parallel Hawaiian trends is fundamentally different, Jackson and his team decided to look for evidence of this in other hotspots.
In Samoa, they found three volcanic trends exhibiting three different chemical configurations as well as a fourth group of a late-stage eruption on top of the third trend of volcanoes. These different groups exhibit distinct compositions. “Our goal was to figure out how we could use this distribution of volcano compositions at the surface to reverse-engineer how these components are distributed inside this upwelling mantle plume at depth,” Jackson said. Each of the four distinct geochemical compositions, or endmembers, that the scientists identified in Samoan lavas contained low Helium-3 (He-3) and Helium-4 (He-4) ratios. The surprising discovery was that they all exhibited evidence for mixing with a fifth, rare primordial component consisting of high levels of He-3 and He-4. “We have really strong evidence that the bulk of the plume is made of the high Helium-3, -4 component,” Jackson said. “That tells us that most of this plume is primordial material and there are other materials hosted inside of this plume with low Helium-3, -4, and these are likely crustal materials sent into the mantle at ancient subduction zones.” The unique isotopic topology revealed by the researchers’ analysis showed that the four low-helium endmembers do not mix efficiently with one another. However, each of them mixes with the high He-3 and He-4 component. “This unique set of mixing relationships requires a specific geometry for the four geochemical flavors within the upwelling plume: They must be hosted within a matrix that is composed of the rare fifth component with high He-3,” Jackson explained. “This new constraint on plume structure has important implications for how deep mantle material is entrained in plumes, and it gives us the clearest picture yet for the chemical structure of an upwelling mantle plume.” Co-authors of the paper include Stanley R. Hart, Jerzy S. Blusztajn and Mark D. Kurz of the Woods Hole Oceanographic Institution, Jasper G. 
Konter of the University of Hawaii and Kenneth A. Farley of the California Institute of Technology. This research was funded by the National Science Foundation. Julie Cohen | EurekAlert!
If the icy surface of Pluto's giant moon Charon is cracked, analysis of the fractures could reveal if its interior was warm, perhaps warm enough to have maintained a subterranean ocean of liquid water, according to a new NASA-funded study. Pluto is an extremely distant world, orbiting the sun more than 29 times farther than Earth. With a surface temperature estimated to be about 380 degrees below zero Fahrenheit (around minus 229 degrees Celsius), the environment at Pluto is far too cold to allow liquid water on its surface. Pluto's moons are in the same frigid environment. Pluto's remoteness and small size make it difficult to observe, but in July of 2015, NASA's New Horizons spacecraft will be the first to visit Pluto and Charon, and will provide the most detailed observations to date. "Our model predicts different fracture patterns on the surface of Charon depending on the thickness of its surface ice, the structure of the moon's interior and how easily it deforms, and how its orbit evolved," said Alyssa Rhoden of NASA's Goddard Space Flight Center in Greenbelt, Maryland. "By comparing the actual New Horizons observations of Charon to the various predictions, we can see what fits best and discover if Charon could have had a subsurface ocean in its past, driven by high eccentricity." Rhoden is lead author of a paper on this research now available online in the journal Icarus. Some moons around the gas giant planets in the outer solar system have cracked surfaces with evidence for ocean interiors – Jupiter's moon Europa and Saturn's moon Enceladus are two examples. In Charon's case, this study finds that a past high eccentricity could have generated large tides, causing friction and surface fractures. The moon is unusually massive compared to its planet, about one-eighth of Pluto's mass, a solar system record. It is thought to have formed much closer to Pluto, after a giant impact ejected material off the planet's surface. 
The material went into orbit around Pluto and coalesced under its own gravity to form Charon and several smaller moons. Initially, there would have been strong tides on both worlds as gravity between Pluto and Charon caused their surfaces to bulge toward each other, generating friction in their interiors. This friction would have also caused the tides to slightly lag behind their orbital positions. The lag would act like a brake on Pluto, causing its rotation to slow while transferring that rotational energy to Charon, making it speed up and move farther away from Pluto. "Depending on exactly how Charon's orbit evolved, particularly if it went through a high-eccentricity phase, there may have been enough heat from tidal deformation to maintain liquid water beneath the surface of Charon for some time," said Rhoden. "Using plausible interior structure models that include an ocean, we found it wouldn't have taken much eccentricity (less than 0.01) to generate surface fractures like we are seeing on Europa." "Since it's so easy to get fractures, if we get to Charon and there are none, it puts a very strong constraint on how high the eccentricity could have been and how warm the interior ever could have been," adds Rhoden. "This research gives us a head start on the New Horizons arrival – what should we look for and what can we learn from it. We're going to Pluto and Pluto is fascinating, but Charon is also going to be fascinating." Based on observations from telescopes, Charon's orbit is now in a stable end state: a circular orbit with the rotation of both Pluto and Charon slowed to the point where they always show the same side to each other. Its current orbit is not expected to generate significant tides, so any ancient underground ocean may be frozen by now, according to Rhoden. Since liquid water is a necessary ingredient for known forms of life, the oceans of Europa and Enceladus are considered to be places where extraterrestrial life might be found. 
However, life also requires a usable energy source and an ample supply of many key elements, such as carbon, nitrogen, and phosphorus. It is unknown if those oceans harbor these additional ingredients, or if they have existed long enough for life to form. The same questions would apply to any ancient ocean that may have existed beneath the icy crust of Charon. This research was funded by the NASA Postdoctoral Program at the NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities, and NASA Headquarters through the Science Innovation Fund. Bill Steigerwald | EurekAlert!
Aquifer sensitivity in shallow aquifers: Indiana - Indiana Geological Survey - Aquifer_Sensitivity_Near_Surface_IN is a raster data layer showing aquifer sensitivity rankings derived by reclassifying estimated rates of diffuse groundwater recharge to the water table in shallow aquifers in Indiana (see Aquifer_Recharge_Near_Surface_IN). The goal of this project was to use an objective, reproducible methodology to rank aquifer sensitivity in shallow aquifers based on estimates of diffuse groundwater recharge rates. To achieve the ranking, the groundwater recharge rates were classified by standard deviation, and the classifications were then compared to databases of contaminants in groundwater to validate the classification. Coverage: Indiana, United States. Themes: Geoscientific Information, Hydrology, and Aquifers.
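The standard-deviation classification step can be sketched in a few lines. The class labels, cutoffs, and recharge values below are illustrative assumptions, not the actual IGS scheme:

```python
import statistics

def classify_by_std(values):
    """Assign each recharge value a sensitivity class based on how many
    population standard deviations it lies from the mean (hypothetical
    three-class scheme for illustration)."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    classes = []
    for v in values:
        z = (v - mean) / sd
        if z < -1:
            classes.append("low")
        elif z < 1:
            classes.append("moderate")
        else:
            classes.append("high")
    return classes

recharge = [2.0, 3.5, 4.0, 4.5, 9.0]  # made-up recharge rates, inches/year
print(classify_by_std(recharge))
```

In a real workflow this function would be applied cell-by-cell to the recharge raster, and the resulting class map compared against groundwater contaminant databases for validation.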
An air filter that utilizes a low-temperature zeolite catalyst can convert carbon monoxide to less toxic carbon dioxide at room temperature; it can also remove formaldehyde from the air. Often only tiny amounts of catalyst are required in principle. In general, reactions occur faster with a catalyst because they require less activation energy. In catalyzed mechanisms, the catalyst usually reacts to form a temporary intermediate, which then regenerates the original catalyst in a cyclic process. Catalysts may be classified as either homogeneous or heterogeneous. A heterogeneous catalyst is one whose molecules are not in the same phase as the reactants, which are typically gases or liquids that are adsorbed onto the surface of the solid catalyst. In the presence of a catalyst, less free energy is required to reach the transition state, but the total free energy change from reactants to products does not change. A catalyst may participate in multiple chemical transformations. However, the detailed mechanics of catalysis is complex.
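The effect of lowering the activation energy can be made concrete with the Arrhenius equation, k = A·exp(-Ea/RT). The pre-exponential factor and activation energies below are illustrative numbers, not data for any particular reaction:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """Arrhenius rate constant: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical example: same pre-exponential factor, but a catalyst
# lowers the activation energy from 75 kJ/mol to 50 kJ/mol at 298 K.
k_uncat = arrhenius_rate(1e13, 75e3, 298)
k_cat = arrhenius_rate(1e13, 50e3, 298)
print(f"rate enhancement: {k_cat / k_uncat:.2e}")  # roughly 2.4e4
```

A modest 25 kJ/mol reduction in the barrier speeds the reaction up by four orders of magnitude at room temperature, which is why tiny amounts of catalyst can matter so much.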
Steelhead trout, a member of the salmon family that lives and grows in the Pacific Ocean, genetically adapted to the freshwater environment of Lake Michigan in less than 120 years. Steelhead were intentionally introduced into Lake Michigan in the late 1800s in order to bolster recreational and commercial fisheries. In their native range, which extends from California to Russia, steelhead hatch in freshwater rivers, migrate to the ocean, and return to freshwater to spawn. This migration allows steelhead to feed in the ocean, where they can grow larger and produce more eggs than if they remained in freshwater streams for their entire lives. The steelhead introduced into Lake Michigan continue to spawn in small freshwater tributaries and streams, but now treat the entirely freshwater habitat of the Great Lakes as a surrogate ocean. After their introduction into Lake Michigan, steelhead began to naturally reproduce and established self-sustaining populations throughout the Great Lakes. To examine how these fish adapted to this novel environment, a team led by Mark Christie, an assistant professor of biological sciences at Purdue University, sequenced the complete genomes of 264 fish. The team then compared steelhead from Lake Michigan to those from their ancestral range, searching for outlier regions associated with genetic adaptation. The research, which was published in the journal Molecular Ecology, found that regions of three chromosomes in steelhead evolved after they were introduced in Lake Michigan, offering insight into how this ocean-migrating fish adapted to an entirely freshwater environment. Two of the three regions on chromosomes that experienced genetic changes are critical to the process that maintains salt and ion balance across membranes in the body, known as osmoregulation.
Freshwater fish actively take in ions from their environments to compensate for salts lost via passive diffusion, while saltwater fish expel ions to compensate for the uptake of salts into their bodies. These changes to regions of chromosomes that affect how this process works help explain how steelhead have survived in an entirely freshwater environment. The third region that changed is involved in metabolism and wound-healing. This adaptation might have allowed steelhead to take advantage of alternative prey or allocate additional resources to activity in their new environment, according to the study. Alternatively, this region might have adapted as a response to a novel threat: parasitic sea lamprey. These parasitic creatures were unintentionally introduced to Lake Michigan in the 1930s. They latch onto fish like leeches and leave large wounds, often killing large numbers of the fish they prey on. "If you think about having an open wound in saltwater versus freshwater, the effects are more severe in freshwater because cells can rupture at a faster rate. It makes sense that steelhead might want to counteract those effects more quickly or do it in different ways," said Janna Willoughby, a postdoctoral researcher at Purdue and coauthor on the study. "Furthermore, parasitic lamprey occur in really high densities in the Great Lakes but rarely interact with steelhead in their native range - meaning that they may simply be a strong selective force." The study also found that genetic diversity was much lower in steelhead in the new environment than fish from their native range. This reduced genetic diversity, sometimes called a founder effect, is common when a new colony is started by only a few members of the original population. "Even if you have a reduced population due to an introduction event or founder effect, populations still adapt to changing environmental conditions," said Christie. 
"Figuring out which populations can adapt and why remains a pressing question, particularly in the face of climate change and other conservation issues."
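The idea behind an outlier scan can be sketched as follows; the allele frequencies and the simple absolute-difference threshold are invented for illustration (real scans use statistics such as FST computed across millions of genome-wide variants):

```python
def freq_diff_outliers(pop1, pop2, threshold=0.5):
    """Flag loci whose allele-frequency difference between two
    populations exceeds a threshold -- a crude stand-in for the
    outlier scans used to detect candidate regions under selection."""
    return [i for i, (p, q) in enumerate(zip(pop1, pop2))
            if abs(p - q) > threshold]

# Made-up allele frequencies at five loci
lake   = [0.10, 0.85, 0.50, 0.95, 0.40]  # introduced population
native = [0.15, 0.20, 0.45, 0.30, 0.35]  # ancestral range
print(freq_diff_outliers(lake, native))  # loci 1 and 3 stand out
```

Loci where the introduced population has drifted or been selected far from the ancestral frequencies are the candidates for follow-up, such as the osmoregulation regions described above.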
“Forecasting how an invader will affect the growth and production of a specific native fish species is very relevant to conservation groups and government agencies hoping to conserve those fish,” says Biology graduate student Mike Yuille. Mr. Yuille is the lead author of a study that suggests for the first time that several native fish species have incorporated the bloody red shrimp into their diet over a multi-seasonal period. In addition to using traditional stomach content analyses, researchers measured the carbon and nitrogen signatures of muscle tissues of three potential Hemimysis predators (round goby, yellow perch, and alewife) to get a long-term picture of eating habits. All three predators exhibited increased nitrogen or carbon signatures, suggesting they had been feeding on prey with signatures very similar to Hemimysis. The team found these signatures in sites with dense populations of bloody red shrimp. Like zebra mussels, Hemimysis anomala is native to the Black Sea and Caspian Sea. It probably arrived in the Great Lakes through the ballast water of a transoceanic ship. In 2006 it was discovered in Lake Michigan and has now been found in all of the Great Lakes except Lake Superior. Mr. Yuille co-authored the research with Queen’s associate professor Shelley Arnott, Linda Campbell, and Timothy Johnson at the Ontario Ministry of Natural Resources’ Glenora Fisheries Station in Picton. These findings will be published in the Journal of Great Lakes Research. Anne Craig | EurekAlert!
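The isotope reasoning can be sketched with a two-source linear mixing model, the simplest tool for this kind of inference. All of the δ-values and the trophic fractionation offset below are hypothetical, not measurements from the study:

```python
def mixing_proportion(d_consumer, d_source_a, d_source_b, frac=0.0):
    """Two-source linear mixing model: estimate the fraction of diet
    derived from source A, after subtracting a trophic fractionation
    offset `frac` (per mil) from the consumer's signature."""
    corrected = d_consumer - frac
    return (corrected - d_source_b) / (d_source_a - d_source_b)

# Hypothetical d13C values: Hemimysis at -24, alternative prey at -18,
# a perch muscle sample at -22, with a 0.4 per-mil fractionation offset.
f = mixing_proportion(-22.0, -24.0, -18.0, frac=0.4)
print(f"~{f:.0%} of carbon from Hemimysis")
```

Because muscle tissue turns over slowly, this kind of estimate integrates diet over months, which is why it complements snapshot stomach-content analyses.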
"Somewhere, something incredible is waiting to be known" Carl Sagan Saturday, 29 October 2011 Evolution - "Junk DNA" defines differences between Humans and Chimps For years, scientists believed the vast phenotypic differences between humans and chimpanzees would be easily explained – the two species must have significantly different genetic makeups. However, when their genomes were later sequenced, researchers were surprised to learn that the DNA sequences of human and chimpanzee genes are nearly identical. What then is responsible for the many morphological and behavioral differences between the two species? Researchers at the Georgia Institute of Technology have now determined that the insertion and deletion of large pieces of DNA near genes are highly variable between humans and chimpanzees and may account for major differences between the two species. The research team led by Georgia Tech Professor of Biology John McDonald has verified that while the DNA sequence of genes between humans and chimpanzees is nearly identical, there are large genomic “gaps” in areas adjacent to genes that can affect the extent to which genes are “turned on” and “turned off.” The research shows that these genomic “gaps” between the two species are predominantly due to the insertion or deletion (INDEL) of viral-like sequences called retrotransposons that are known to comprise about half of the genomes of both species. The findings are reported in the most recent issue of the online, open-access journal Mobile DNA. “These genetic gaps have primarily been caused by the activity of retroviral-like transposable element sequences,” said McDonald. “Transposable elements were once considered ‘junk DNA’ with little or no function.
Now it appears that they may be one of the major reasons why we are so different from chimpanzees.” McDonald’s research team, composed of graduate students Nalini Polavarapu, Gaurav Arora and Vinay Mittal, examined the genomic gaps in both species and determined that they are significantly correlated with differences in gene expression reported previously by researchers at the Max Planck Institute for Evolutionary Anthropology in Germany. “Our findings are generally consistent with the notion that the morphological and behavioral differences between humans and chimpanzees are predominately due to differences in the regulation of genes rather than to differences in the sequence of the genes themselves,” said McDonald. The current analysis of the genetic differences between humans and chimpanzees was motivated by the group’s previously published findings (2009) that the higher propensity for cancer in humans vs. chimpanzees may have been a by-product of selection for increased brain size in humans.
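Locating INDELs in a pairwise alignment reduces to scanning for runs of gap characters. The sequences below are invented toy data, far shorter than real genomic alignments:

```python
def indel_spans(aligned_a, aligned_b):
    """Return (start, length) for each gap run in a pairwise alignment,
    i.e. the insertion/deletion (INDEL) events distinguishing the two
    aligned sequences ('-' marks a gap)."""
    spans, start = [], None
    for i, (a, b) in enumerate(zip(aligned_a, aligned_b)):
        if a == '-' or b == '-':
            if start is None:
                start = i
        elif start is not None:
            spans.append((start, i - start))
            start = None
    if start is not None:
        spans.append((start, len(aligned_a) - start))
    return spans

# Toy alignment: a 4-base insertion in one sequence, a 2-base deletion later
human = "ACGT----TTGCA"
chimp = "ACGTAGCTTT--A"
print(indel_spans(human, chimp))
```

Run genome-wide, tallies of spans like these near genes are what the Georgia Tech group correlated with expression differences.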
Sinkholes are cavities that form when water erodes easily dissolved, or soluble, rock located beneath the ground surface. Water moves along joints, or fractures, enlarging them to form a channel that drains sediment and water into the subsurface. As the rock erodes, materials above subside into the openings. At the surface, sinkholes often appear as bowl-shaped depressions. If the drain becomes clogged with rock and soil, the sinkhole may fill with water. Many ponds and small lakes form via sinkholes. Abundant sinkholes, as well as caves, disappearing streams, and springs, characterize a type of landscape known as karst topography. Karst topography forms where groundwater erodes subsurface carbonate rock, such as limestone and dolomite, or evaporite rock, such as gypsum and halite (salt). Carbon dioxide (CO2), when combined with the water in air and soil, acidifies the water. The slight acidity intensifies the corrosive ability of the water percolating into the soil and moving through fractured rock. Geologists classify sinkholes mainly by their means of development. Collapse sinkholes are often funnel shaped. They form when soil or rock material collapses into a cave. Collapse may be sudden and damage is often significant; cars and homes may be swallowed by sinkholes. Solution sinkholes form in rock with multiple vertical joints. Water passing along these joints expands them, allowing cover material to move into the openings. Solution sinkholes usually form slowly and minor damage occurs, such as cracking of building foundations. Alluvial sinkholes are previously exposed sinkholes that, over time, partly or completely filled with Earth material. They can be hard to recognize and some are relatively stable. Rejuvenated sinkholes are alluvial sinkholes in which the cover material once again begins to subside, producing a growing depression. Uvalas are large sinkholes formed by the joining of several smaller sinkholes.
Cockpits are extremely large sinkholes formed in thick limestone; some are more than a kilometer in diameter. Sinkholes occur naturally, but are also induced by human activities. Pumping water from a well can trigger sinkhole collapse by lowering the water table and removing support for a cave's roof. Construction over sinkholes can also cause collapse. Sinkhole development may damage buildings, pipelines and roadways. Damage from the Winter Park sinkhole in Florida is estimated at greater than $2 million. Sinkholes may also serve as routes for the spread of contamination to groundwater when people use them as refuse dumps. In areas where evaporite rock is common, human activities play an especially significant role in the formation of sinkholes. Evaporites dissolve in water much more easily than carbonate rocks. Salt mining and drilling into evaporite deposits allows water that is not already saturated with salt to easily dissolve the rock. These activities have caused the formation of several large sinkholes. Sinkholes occur worldwide, and in the United States are common in southern Indiana, southwestern Illinois, Missouri, Kentucky, Tennessee, and Florida. In areas with known karst topography, subsurface drilling or geophysical remote sensing may be used to pinpoint the location of sinkholes. See also Hydrogeology; Hydrologic cycle; Landscape evolution; Weathering and weathering series. "Sinkholes." World of Earth Science. Encyclopedia.com. (July 22, 2018). http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/sinkholes
posted by Akitsuke

1. How much ice at -10 degrees Celsius is required to cool a mixture of 0.1 kg ethyl alcohol and 0.1 kg water from 20 degrees Celsius to 0 degrees Celsius?
2. Find the heat produced by a 1 kW heating element in one hour.
3. A 3 g lead bullet at 30 degrees Celsius is fired at a speed of 240 m/s into a large block of ice at 0 degrees Celsius, in which it becomes embedded. What quantity of ice melts?

1. Write an energy balance equation: heat absorbed by warming (from -10 degrees Celsius) and melting X g of ice equals heat removed from the liquid water and alcohol as they are cooled to 0 degrees Celsius. Solve for the amount of melting ice, X.
2. Watts are joules per second. Multiply 1000 W by the number of seconds.
3. First compute the number of joules of kinetic energy in the bullet; that energy, plus the heat released as the lead cools to 0 degrees Celsius, is the heat Q available to melt ice. Assume a final equilibrium of 0 degrees Celsius and write an equation saying that Q melts X g of ice. X will be the only unknown in that equation. Solve for it.
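Following that recipe for problem 3, with common textbook constants for lead's specific heat and ice's latent heat of fusion:

```python
# Worked sketch of problem 3 using textbook values:
# c_lead = 128 J/(kg*K), latent heat of fusion of ice Lf = 334 kJ/kg.
m_bullet = 0.003      # kg
v = 240.0             # m/s
c_lead = 128.0        # J/(kg*K)
Lf = 334e3            # J/kg

kinetic = 0.5 * m_bullet * v**2          # 86.4 J of kinetic energy
cooling = m_bullet * c_lead * 30.0       # heat shed cooling lead 30 C -> 0 C
melted = (kinetic + cooling) / Lf        # kg of ice melted
print(f"{melted * 1000:.2f} g of ice melts")  # about 0.29 g
```

Note that the kinetic energy dominates: the bullet's motion contributes 86.4 J versus only about 11.5 J from cooling the lead.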
3 months ago Last week, SpaceX launched a Falcon 9 that was carrying a NASA spacecraft into orbit. The Transiting Exoplanet Survey Satellite (TESS) is on a mission to search for exoplanets. The spacecraft will be using a never-before-used lunar resonant orbit, which will allow TESS to observe both nearby stars and transmit data back to Earth with minimal energy expenditure. According to Ars Technica, science missions often need continuous and unobstructed views of their targets, and it seems like TESS is no different, since it will be monitoring about 200,000 relatively nearby stars for even the smallest changes in their brightness. NASA’s previous planet-hunter, the Kepler mission, observed its targets from an Earth-trailing, heliocentric orbit about 10 million km from our planet. This orbit takes a lot of energy to reach and it has data limitations. But the lunar resonant orbit will bring TESS within as close as 108,000km of Earth’s surface and as far out as 373,000km away from Earth. During a three hour period at its closest approach, TESS will orient itself to send lots of data back to Earth, but will spend most of its time observing stars. It will take a couple of weeks to reach this orbit, and then it will undergo about two months of checkouts before beginning science operations.Read the full story at Arstechnica
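As a sanity check, applying Kepler's third law to the quoted distances gives a period of roughly half the Moon's 27.3-day sidereal month, which is exactly what a 2:1 lunar resonance requires. Treating both quoted figures as altitudes above Earth's surface is an assumption on my part:

```python
import math

MU_EARTH = 3.986004418e5  # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6371.0          # km, mean Earth radius

# Perigee and apogee from the article, assumed to be altitudes (km)
perigee = 108_000 + R_EARTH
apogee = 373_000 + R_EARTH

a = (perigee + apogee) / 2   # semi-major axis of the ellipse, km
period_days = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 86_400
print(f"orbital period: {period_days:.1f} days")  # about 14 days
```

Half of 27.3 days is about 13.7 days, so the quoted geometry is consistent with TESS completing two orbits per lunar month.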
An interpreted language is a type of programming language for which most of its implementations execute instructions directly and freely, without previously compiling a program into machine-language instructions. The interpreter executes the program directly, translating each statement into a sequence of one or more subroutines, and then into another language (often machine code). The terms interpreted language and compiled language are not well defined because, in theory, any programming language can be either interpreted or compiled. In modern programming language implementation, it is increasingly popular for a platform to provide both options. Interpreted languages can also be contrasted with machine languages. Functionally, both execution and interpretation mean the same thing -- fetching the next instruction/statement from the program and executing it. Although interpreted byte code is additionally identical to machine code in form and has an assembler representation, the term "interpreted" is practically reserved for "software processed" languages (by virtual machine or emulator) on top of the native (i.e. hardware) processor. In principle, programs in many languages may be compiled or interpreted, emulated or executed natively, so this designation is applied solely based on common implementation practice, rather than representing an essential property of a language. Many languages have been implemented using both compilers and interpreters, including BASIC, C, Lisp, Pascal, and Python. Java and C# are compiled into bytecode, the virtual-machine-friendly interpreted language. Lisp implementations can freely mix interpreted and compiled code.
In the early days of computing, language design was heavily influenced by the decision to use compiling or interpreting as a mode of execution. For example, Smalltalk (1980), which was designed to be interpreted at run-time, allows generic objects to dynamically interact with each other. Initially, interpreted languages were compiled line-by-line; that is, each line was compiled as it was about to be executed, and if a loop or subroutine caused certain lines to be executed multiple times, they would be recompiled every time. This has become much less common. Most so-called interpreted languages use an intermediate representation, which combines compiling and interpreting. The intermediate representation can be compiled once and for all (as in Java), each time before execution (as in Perl or Ruby), or each time a change in the source is detected before execution (as in Python). Interpreting a language gives implementations some additional flexibility over compiled implementations; several features are often easier to implement in interpreters than in compilers. Furthermore, source code can be read and copied, giving users more freedom. The main disadvantage of interpreted languages is slower execution compared with code compiled to native machine instructions.
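The statement-by-statement dispatch described above can be sketched as a tiny interpreter. The three-command language here is made up purely for illustration:

```python
# A minimal interpreter: fetch each statement, dispatch it to a Python
# subroutine, mutate the environment -- mirroring the description above.
def run(program):
    env = {}
    handlers = {
        "set": lambda env, name, val: env.__setitem__(name, int(val)),
        "add": lambda env, name, val: env.__setitem__(name, env[name] + int(val)),
        "print": lambda env, name: print(env[name]),
    }
    for line in program.splitlines():
        if not line.strip():
            continue
        op, *args = line.split()       # fetch the next statement
        handlers[op](env, *args)       # execute it via a subroutine
    return env

state = run("set x 40\nadd x 2\nprint x")
```

Note that nothing is translated ahead of time: each line is parsed and executed on the spot, which is what makes features like interactive evaluation easy but repeated loops comparatively slow.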
Atoms are the building blocks of matter and account for all structure that can be seen in the observable universe. Atoms consist of a positively charged nucleus that is surrounded by a cloud of negatively charged electrons. In a neutral atom, the number of positively charged protons within the nucleus is equal to the number of negatively charged electrons. However, an atom can gain or lose an electron. Atoms that are not electrically neutral are called ions, and examples of atoms frequently found in their ionic form include sodium, chlorine and magnesium. Electrons surround atoms in discrete shells and each shell type can hold a fixed number of electrons. For example, s-shells can hold 2 electrons, and p-shells can hold 6 electrons. Atoms are most energetically stable when the outer electron shell is full; therefore it is sometimes stabilizing for an electron to be lost, producing a positive ion, or for an electron to be gained, producing a negative ion. Neutral sodium atoms consist of 11 protons and 11 electrons. Sodium has the electron configuration: 1s2 2s2 2p6 3s1 This means that the 1s electron shell is occupied by 2 electrons and is therefore full. The 2s and 2p shells are also full but the 3s shell is occupied by only 1 electron. The loss of the electron in the 3s shell leads to a more stable electronic configuration since the lower 2p shell is full. When a sodium atom loses its outer 3s electron, it becomes positively charged. The symbol for a positively charged sodium ion is Na+. Chlorine atoms consist of 17 protons and 17 electrons. The electron configuration of chlorine is: 1s2 2s2 2p6 3s2 3p5 Since a p-shell can hold six electrons, chlorine is very close to a stable electron configuration. Chlorine's 3p shell can gain the required electron at the expense of the atom becoming negatively charged. The symbol for a chlorine ion is Cl-. Magnesium atoms consist of 12 protons and 12 electrons.
The electron configuration of magnesium is: 1s2 2s2 2p6 3s2 Magnesium can lose one or two electrons in its 3s shell, yielding an ion with a charge of +1 or +2. The symbols for magnesium ions are Mg+ and Mg2+, depending upon the total charge.
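The shell-filling rule can be turned into a short function. It follows the simple aufbau filling order and ignores the well-known exceptions (e.g. chromium and copper), so it is only reliable for light elements like the three discussed here:

```python
# Subshells in aufbau filling order with their electron capacities.
SUBSHELLS = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6),
             ("4s", 2), ("3d", 10), ("4p", 6)]

def electron_configuration(n_electrons):
    """Fill subshells in aufbau order until all electrons are placed."""
    parts = []
    for name, capacity in SUBSHELLS:
        if n_electrons <= 0:
            break
        filled = min(capacity, n_electrons)
        parts.append(f"{name}{filled}")
        n_electrons -= filled
    return " ".join(parts)

print(electron_configuration(11))  # sodium:   1s2 2s2 2p6 3s1
print(electron_configuration(17))  # chlorine: 1s2 2s2 2p6 3s2 3p5
print(electron_configuration(12))  # magnesium: 1s2 2s2 2p6 3s2
```

Running it for 11, 17, and 12 electrons reproduces the sodium, chlorine, and magnesium configurations given in the text.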
Record-high magnetic fields in the lab, almost a Gigagauss in magnitude, have been achieved by aiming intense laser light at a dense plasma, expanding the possibilities for laboratory re-creations of astrophysical events. At last week's APS Division of Plasma Physics Meeting in Orlando, researchers from Imperial College, London, and the Rutherford Appleton Lab in the UK announced evidence of super-strong magnetic fields that are hundreds of times more intense than any previous magnetic field created in an Earth laboratory and up to a billion times stronger than our planet's natural magnetic field. Such intense magnetic fields may soon enable researchers to recreate extreme astrophysical conditions, such as the atmospheres of neutron stars and white dwarfs, in their very own laboratories. At the Rutherford Appleton Laboratory near Oxford in the UK, researchers at the VULCAN facility aimed intense laser pulses, lasting only picoseconds (trillionths of a second), at a dense plasma. The resulting magnetic fields in the plasma were on the order of 400 Megagauss. Phil Schewe | Physics News Update 614
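The "up to a billion times" comparison checks out as rough arithmetic, assuming a typical geomagnetic surface field of about 0.5 gauss (the actual value varies between roughly 0.25 and 0.65 gauss over Earth's surface):

```python
lab_record = 400e6   # gauss: the ~400 Megagauss fields reported at VULCAN
earth_field = 0.5    # gauss: typical geomagnetic field at Earth's surface

ratio = lab_record / earth_field
print(f"{ratio:.1e}")  # 8.0e+08 -- nearly a billion times stronger
```

The same numbers put the result at 0.4 Gigagauss, which is the "almost a Gigagauss" figure in the lead.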
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...

For the first time a team of researchers has discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: when the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...

Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...

Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Electronegativity, symbol χ, is a chemical property that describes the tendency of an atom to attract a shared pair of electrons (or electron density) towards itself. An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity number, the more an element or compound attracts electrons towards it. The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811, though the concept was known even before that and was studied by many chemists including Avogadro. In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale, which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements. The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale (χr), on a relative scale running from around 0.7 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in Pauling units. As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Properties of a free atom include ionization energy and electron affinity.
It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations. On the most basic level, electronegativity is determined by factors like the nuclear charge (the more protons an atom has, the more "pull" it will have on electrons) and the number/location of other electrons present in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result the less positive charge they will experience--both because of their increased distance from the nucleus, and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus). The opposite of electronegativity is electropositivity: a measure of an element's ability to donate electrons. Caesium is the least electronegative element in the periodic table (χ = 0.79), while fluorine is most electronegative (χ = 3.98). Francium and caesium were originally both assigned 0.7; caesium's value was later refined to 0.79, but no experimental data allows a similar refinement for francium. However, francium's ionization energy is known to be slightly higher than caesium's, in accordance with the relativistic stabilization of the 7s orbital, and this in turn implies that francium is in fact more electronegative than caesium. Pauling first proposed the concept of electronegativity in 1932 as an explanation of the fact that the covalent bond between two different atoms (A-B) is stronger than would be expected by taking the average of the strengths of the A-A and B-B bonds. According to valence bond theory, of which Pauling was a notable proponent, this "additional stabilization" of the heteronuclear bond is due to the contribution of ionic canonical forms to the bonding.
The difference in electronegativity between atoms A and B is given by:

|χA − χB| = (eV)^(−1/2) · sqrt( Ed(AB) − [Ed(AA) + Ed(BB)]/2 )

where the dissociation energies, Ed, of the A-B, A-A and B-B bonds are expressed in electronvolts, the factor (eV)^(−1/2) being included to ensure a dimensionless result. Hence, the difference in Pauling electronegativity between hydrogen and bromine is 0.73 (dissociation energies: H-Br, 3.79 eV; H-H, 4.52 eV; Br-Br, 2.00 eV). As only differences in electronegativity are defined, it is necessary to choose an arbitrary reference point in order to construct a scale. Hydrogen was chosen as the reference, as it forms covalent bonds with a large variety of elements: its electronegativity was fixed first at 2.1, later revised to 2.20. It is also necessary to decide which of the two elements is the more electronegative (equivalent to choosing one of the two possible signs for the square root). This is usually done using "chemical intuition": in the above example, hydrogen bromide dissolves in water to form H+ and Br- ions, so it may be assumed that bromine is more electronegative than hydrogen. However, in principle, since the same electronegativities should be obtained for any two bonding compounds, the data are in fact overdetermined, and the signs are unique once a reference point is fixed (usually, for H or F). To calculate Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bond formed by that element. A. L. Allred updated Pauling's original values in 1961 to take account of the greater availability of thermodynamic data, and it is these "revised Pauling" values of the electronegativity that are most often used. The essential point of Pauling electronegativity is that there is an underlying, quite accurate, semi-empirical formula for dissociation energies, namely:

Ed(AB) = [Ed(AA) + Ed(BB)]/2 + (χA − χB)² eV

or sometimes, a more accurate fit,

Ed(AB) = sqrt( Ed(AA) · Ed(BB) ) + 1.3 (χA − χB)² eV

This is an approximate equation, but holds with good accuracy.
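The hydrogen-bromine example can be checked directly from the dissociation energies. A minimal sketch of Pauling's definition:

```python
import math

# Pauling electronegativity difference: the "excess" dissociation energy
# of the A-B bond over the arithmetic mean of the A-A and B-B bond
# energies, square-rooted.  All energies in electronvolts.
def pauling_difference(e_ab, e_aa, e_bb):
    return math.sqrt(e_ab - (e_aa + e_bb) / 2)

# H-Br: 3.79 eV, H-H: 4.52 eV, Br-Br: 2.00 eV
print(round(pauling_difference(3.79, 4.52, 2.00), 2))  # 0.73
```

Which of the two elements gets the larger value is then assigned by chemical intuition, as described above.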
Pauling obtained it by noting that a bond can be approximately represented as a quantum mechanical superposition of a covalent bond and two ionic bond-states. The covalent energy of a bond is approximately, by quantum mechanical calculations, the geometric mean of the two energies of covalent bonds of the same molecules, and there is an additional energy that comes from ionic factors, i.e. the polar character of the bond. The geometric mean is approximately equal to the arithmetic mean - which is applied in the first formula above - when the energies are of a similar value; for the highly electropositive elements, where there is a larger difference between the two dissociation energies, the geometric mean is more accurate and almost always gives a positive excess energy, due to ionic bonding. The square root of this excess energy, Pauling notes, is approximately additive, and hence one can introduce the electronegativity. Thus, it is this semi-empirical formula for bond energy that underlies the Pauling electronegativity concept. The formulas are approximate, but this rough approximation is in fact relatively good and gives the right intuition, with the notion of polarity of the bond and some theoretical grounding in quantum mechanics. The electronegativities are then determined to best fit the data. In more complex compounds, there is additional error since electronegativity depends on the molecular environment of an atom. Also, the energy estimate can only be used for single, not for multiple bonds. The energy of formation of a molecule containing only single bonds can then be approximated from an electronegativity table, and depends on the constituents and the sum of squares of differences of electronegativities of all pairs of bonded atoms. Such a formula for estimating energy typically has a relative error of order 10%, but can be used to get a rough qualitative idea and understanding of a molecule. Robert S.
Mulliken proposed that the arithmetic mean of the first ionization energy (Ei) and the electron affinity (Eea) should be a measure of the tendency of an atom to attract electrons:

χ = (Ei + Eea)/2

As this definition is not dependent on an arbitrary relative scale, it has also been termed absolute electronegativity, with the units of kilojoules per mole or electronvolts. However, it is more usual to use a linear transformation to transform these absolute values into values that resemble the more familiar Pauling values. For ionization energies and electron affinities in electronvolts,

χ = 0.187 (Ei + Eea) + 0.17

and for energies in kilojoules per mole,

χ = (1.97 × 10⁻³) (Ei + Eea) + 0.19

The Mulliken electronegativity can only be calculated for an element for which the electron affinity is known, fifty-seven elements as of 2006. The Mulliken electronegativity of an atom is sometimes said to be the negative of the chemical potential. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is possible to show that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons, i.e.

μ(Mulliken) = −χ(Mulliken) = −(Ei + Eea)/2

A. Louis Allred and Eugene G. Rochow considered that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom: the higher the charge per unit area of atomic surface, the greater the tendency of that atom to attract electrons. The effective nuclear charge, Zeff, experienced by valence electrons can be estimated using Slater's rules, while the surface area of an atom in a molecule can be taken to be proportional to the square of the covalent radius, rcov. When rcov is expressed in picometres,

χ = 3590 · Zeff / rcov² + 0.744

R. T. Sanderson has also noted the relationship between Mulliken electronegativity and atomic size, and has proposed a method of calculation based on the reciprocal of the atomic volume.
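The Mulliken definition is simple enough to compute directly. In the sketch below, the fluorine values (Ei = 17.42 eV, Eea = 3.40 eV) and the 0.187/0.17 rescaling coefficients are the commonly quoted ones and should be treated as approximate:

```python
# Mulliken "absolute" electronegativity: arithmetic mean of the first
# ionization energy Ei and the electron affinity Eea (both in eV).
def mulliken_absolute(ei, eea):
    return (ei + eea) / 2

# Linear rescaling into Pauling-like units (energies in eV); the
# coefficients 0.187 and 0.17 are the commonly quoted fit, not exact.
def mulliken_to_pauling(ei, eea):
    return 0.187 * (ei + eea) + 0.17

# Fluorine: Ei = 17.42 eV, Eea = 3.40 eV
print(mulliken_absolute(17.42, 3.40))              # 10.41 eV, absolute scale
print(round(mulliken_to_pauling(17.42, 3.40), 2))  # close to Pauling's 3.98
```

The rescaled value lands near, but not exactly on, the Pauling-scale figure for fluorine, which illustrates why the transformation is described as giving values that merely "resemble" Pauling values.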
With a knowledge of bond lengths, Sanderson's model allows the estimation of bond energies in a wide range of compounds. Sanderson's model has also been used to calculate molecular geometry, s-electron energies, NMR spin-spin constants and other parameters for organic compounds. This work underlies the concept of electronegativity equalization, which suggests that electrons distribute themselves around a molecule to minimize or to equalize the Mulliken electronegativity. This behavior is analogous to the equalization of chemical potential in macroscopic thermodynamics. The Allen scale relates electronegativity to the average one-electron energy of the valence electrons in a free atom:

χ = (ns · εs + np · εp) / (ns + np)

where εs and εp are the one-electron energies of s- and p-electrons in the free atom and ns and np are the number of s- and p-electrons in the valence shell. It is usual to apply a scaling factor, 1.75 × 10⁻³ for energies expressed in kilojoules per mole or 0.169 for energies measured in electronvolts, to give values that are numerically similar to Pauling electronegativities. The one-electron energies can be determined directly from spectroscopic data, and so electronegativities calculated by this method are sometimes referred to as spectroscopic electronegativities. The necessary data are available for almost all elements, and this method allows the estimation of electronegativities for elements that cannot be treated by the other methods, e.g. francium, which has an Allen electronegativity of 0.67. However, it is not clear what should be considered to be valence electrons for the d- and f-block elements, which leads to an ambiguity for their electronegativities calculated by the Allen method. (See also: Electronegativities of the elements (data page).) The wide variety of methods of calculation of electronegativities, which all give results that correlate well with one another, is one indication of the number of chemical properties which might be affected by electronegativity.
The most obvious application of electronegativities is in the discussion of bond polarity, for which the concept was introduced by Pauling. In general, the greater the difference in electronegativity between two atoms the more polar the bond that will be formed between them, with the atom having the higher electronegativity being at the negative end of the dipole. Pauling proposed an equation to relate "ionic character" of a bond to the difference in electronegativity of the two atoms, although this has fallen somewhat into disuse. Several correlations have been shown between infrared stretching frequencies of certain bonds and the electronegativities of the atoms involved: however, this is not surprising as such stretching frequencies depend in part on bond strength, which enters into the calculation of Pauling electronegativities. More convincing are the correlations between electronegativity and chemical shifts in NMR spectroscopy or isomer shifts in Mössbauer spectroscopy (see figure). Both these measurements depend on the s-electron density at the nucleus, and so are a good indication that the different measures of electronegativity really are describing "the ability of an atom in a molecule to attract electrons to itself". In general, electronegativity increases on passing from left to right along a period, and decreases on descending a group. Hence, fluorine is the most electronegative of the elements (not counting noble gases), whereas caesium is the least electronegative, at least of those elements for which substantial data is available. This would lead one to believe that caesium fluoride is the compound whose bonding features the most ionic character. There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon, respectively, because of the d-block contraction. 
Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity (see Allred-Rochow electronegativity, Sanderson electronegativity above). The anomalously high electronegativity of lead, in particular when compared to thallium and bismuth, appears to be an artifact of data selection (and data availability)--methods of calculation other than the Pauling method show the normal periodic trends for these elements. In inorganic chemistry it is common to consider a single value of the electronegativity to be valid for most "normal" situations. While this approach has the advantage of simplicity, it is clear that the electronegativity of an element is not an invariable atomic property and, in particular, increases with the oxidation state of the element. Allred used the Pauling method to calculate separate electronegativities for different oxidation states of the handful of elements (including tin and lead) for which sufficient data was available. However, for most elements, there are not enough different covalent compounds for which bond dissociation energies are known to make this approach feasible. This is particularly true of the transition elements, where quoted electronegativity values are usually, of necessity, averages over several different oxidation states and where trends in electronegativity are harder to see as a result. The chemical effects of this increase in electronegativity can be seen both in the structures of oxides and halides and in the acidity of oxides and oxoacids. Hence CrO3 and Mn2O7 are acidic oxides with low melting points, while Cr2O3 is amphoteric and Mn2O3 is a completely basic oxide. The effect can also be clearly seen in the dissociation constants of the oxoacids of chlorine. 
The effect is much larger than could be explained by the negative charge being shared among a larger number of oxygen atoms, which would lead to a difference in pKa of log10(1/4) = -0.6 between hypochlorous acid and perchloric acid. As the oxidation state of the central chlorine atom increases, more electron density is drawn from the oxygen atoms onto the chlorine, reducing the partial negative charge on the oxygen atoms and increasing the acidity. In organic chemistry, electronegativity is associated more with different functional groups than with individual atoms. The terms group electronegativity and substituent electronegativity are used synonymously. However, it is common to distinguish between the inductive effect and the resonance effect, which might be described as σ- and π-electronegativities, respectively. There are a number of linear free-energy relationships that have been used to quantify these effects, of which the Hammett equation is the best known. Kabachnik parameters are group electronegativities for use in organophosphorus chemistry. Electropositivity, by contrast, is mainly an attribute of metals, meaning that, in general, the greater the metallic character of an element the greater the electropositivity. Therefore, the alkali metals are the most electropositive of all. This is because they have a single electron in their outer shell and, as this is relatively far from the nucleus of the atom, it is easily lost; in other words, these metals have low ionization energies.
The Pleistocene, the geologic era immediately preceding our own, was an age of giants. North America was home to mastodons and saber-tooth cats; mammoths and wooly rhinos roamed Eurasia; giant lizards and bear-sized wombats strode across the Australian outback. Most of these giants died out by the end of the last Ice Age, some 14,000 years ago. Whether this wave of extinctions was caused by climate change, overhunting by humans, or some combination of both remains a subject of intense debate among scientists. Complicating the picture, though, is the fact that a few Pleistocene giants survived the Quaternary extinction event and nearly made it intact to the present. Most of these survivor species found refuge on islands. Giant sloths were still living on Cuba 6,000 years ago, long after their relatives on the mainland had died out. The last wooly mammoths died out just 4,000 years ago. They lived in a small herd on Wrangel Island north of the Bering Strait between the Chukchi and East Siberian Seas. Two-thousand years ago, gorilla-sized lemurs were still living on Madagascar. A thousand years ago, 12-foot-tall moa birds were still foraging in the forests of New Zealand. Unlike the other long-lived megafauna, Steller’s sea cows, one of the last of the Pleistocene survivors to die out, found their refuge in a remote scrap of the ocean instead of on land. The sea cows were relatives of the manatee and dugong. Unlike those two species, they were adapted to living in frigid Arctic waters. They were also much larger, growing to be as long as 30 feet from tail to snout, versus 10 for a manatee. Before the Ice Age, they seem to have been ubiquitous along the edge of the Pacific, living everywhere from Japan to the Baja Peninsula. By the 18th century, when they were first made known to Western science, the sea cows were confined to waters surrounding two tiny Arctic Islands in the Commander Chain, in between the Aleutians and the Kamchatka Peninsula.
The sea cows were first described by the German naturalist Georg Steller in the 18th century. Steller was part of an expedition organized and led by the Danish explorer Vitus Bering. Financed by the Imperial Russian government, its mission was to chart the waters between Siberia and North America, and find a workable route between the two if possible. The expedition set sail from Kamchatka in June of 1741. A few weeks later, they had reached Alaska. Bering allowed Steller a single day to search for new species. In that brief time, his only visit to the North American continent, Steller managed to name several species of bird, including Steller’s Jay, ubiquitous in the hills behind my Berkeley apartment. By the beginning of winter, the two ships that made up the expedition had become separated, two landing parties had vanished, and so many sailors on Bering’s flagship had scurvy that they could barely man the sails. In November, the St. Peter ran aground on an uninhabited island. Many members of the expedition thought that it was attached to the Siberian mainland and that they would eventually be able to walk to safety, but they were soon proven wrong. A short time after reaching land, the ship broke apart in a storm, and the captain died of scurvy. Steller, who knew how to combat the Vitamin C–deficiency by foraging for herbs, was one of the few crew members still in good health. Steller quickly realized that the landmass they were on was an island, and one that likely had never been visited by human beings before. Everywhere he went, he was followed by foxes, which showed no fear but eagerly stole any implements or food they could grab in their jaws. One day, walking along the beach searching for firewood, he saw a huge, black shape moving slowly about in the shallows like an overturned boat. Every few minutes a snout would surface for a moment and draw breath with a noise like a horse’s snort.
This was the sea cow, seen by human eyes for the first time in thousands of years. Steller was shocked to realize that this creature was a type of manatee, thousands of miles from its nearest relatives in the tropics. He describes the sea cows as gentle giants, whose only real defense against being harpooned was their incredibly thick hides. He also notes that they seem to have been unusually loyal to one another, which proved to be more of a liability than an asset when the Russians began hunting them for food. They had, in his words, “an uncommon love for one another, which even extended so far that, when one of them was hooked, all the others were intent upon saving him.” When the Russians harpooned one of the sea cows, others would come to its defense, making a circle around their wounded comrade. When they killed a female, they were astonished to see its mate visit the beach where its body lay day after day, “as if he would inform himself about her condition.” Weighing close to 10 tons, a single sea cow could feed the surviving crew of the St. Peter for a month. Steller writes that its meat was delicious—far superior to the sea otter they had grown accustomed to eating. He compares the sea cows’ fat to the best Holland butter, and says that it tasted of almond oil when boiled down. While still marooned on what would come to be called Bering Island, Steller already envisioned a future in which the fur trade would flourish in this desolate spot, with Russian hunters amply provisioned by what he thought was a nearly inexhaustible supply of sea-cow meat. The waters around the island were also teeming with sea otters, whose pelts could be sold at a tremendous mark-up to the Chinese market. Steller shared the belief of most 18th-century naturalists that the sea was inexhaustible, and extinction impossible. He would swiftly be proven wrong.
Archaeologists now estimate that it took about a hundred years for the giant moa birds to go extinct after the Maori landed on New Zealand. Steller’s sea cows survived just 27. The last sea cow seen in the wild was spotted by fur hunters in 1768. The apparent disappearance of Steller’s sea cow helped persuade European biologists that extinction was possible (at the time, the dodo was thought to be still alive, or imaginary). In 1812, the German scientist Georg Heinrich von Langsdorff listed it among the beings “lost from the animal kingdom,” along with the mammoth and the “carnivorous elephant of Ohio.” According to the environmental historian Ryan Tucker Jones, the disappearance of the sea cow helped to usher in the modern science of extinction. It may also be the key for understanding how vanished ecosystems functioned, and how the overhunting of one species can lead to the extinction of another. Recently, a team of marine ecologists led by James Estes of the University of California, Santa Cruz have argued that Steller’s sea cows provide a possible “Rosetta Stone” for how megafauna extinctions might have played out in prehistory. Drawing on old archival data and using mathematical simulations to model community interactions, Estes and his co-authors argue that the sea cows weren’t hunted to extinction. Rather, their disappearance was a byproduct of the overexploitation of sea otters by Russian and Aleut hunters. Sea cows were obligate algivores. That means they ate seaweed—mostly kelp—and nothing else. Sea otters also thrive in kelp forests, but their main source of food is sea urchins, which also eat kelp. When sea otters are absent, the urchins go wild. With no predators to limit their numbers, the urchins spread across the ocean floor like a wave of algae-munching tribbles, creating kelp-free dead zones wherever they go.
Estes and his colleagues estimate that the decline in the number of sea otters around the Commander Islands happened so swiftly that it could have rippled through the ecosystem in just three decades, leaving the sea cows with nothing to eat and nowhere to go. In other words, the sea cows weren’t murdered; they were collateral victims in a separate crime. The swift demise of the sea cows is a reminder that the giants of the Ice Age didn’t live alone. They were parts of complex ecologies that have now vanished, intricate webs connecting herbivores to plant communities, and predators to prey. Trophic cascades, in which the elimination of one species leads to a chain reaction that reshapes a whole habitat, have been implicated in the disappearance of a few animals besides the sea cow. Haast’s Eagle—the largest to have ever existed—vanished from New Zealand along with its prey, the giant moa. The decline of the California condor has likewise been linked to the loss of the megafauna carcasses it fed on before the end of the last Ice Age. These are two examples, but there may have been more. Paleo-ecologists have spent decades trying to reconstruct and unravel these relationships, but we still don’t understand all the ways the world in which we live is impoverished by their disappearance. It’s clear that certain species—like the otter in the Commander Islands, or the mammoth in the now-vanished grasslands of the Arctic (the so-called “mammoth steppe”)—played a crucial role in maintaining the balance of their respective ecosystem. But just how bad the damage from losing one of these keystone species could be is still uncertain. Thanks to Steller, the demise of the sea cow was one of the very few megafauna extinctions for which we have eyewitness testimony. His own fate was rather tragic in its way as well. He wrote up his notes from the voyage in a thick Latin volume titled On the Beast of the Sea, but he never made it home to see it published.
He died of fever outside the Siberian town of Tyumen. After he was buried, grave robbers broke into his tomb to steal his fine red cloak. Wolves ate his eyes. He lives on in the names of his eponymous Jay, a species of sea duck, a sea eagle, a sea lion, and of course, the long-vanished sea cow. They are known to us now only in the form of a handful of skeletons and in the words of Steller’s description in which they appear forever the same: placid, loyal, and delicious.
##Python for Exploratory Computing
Lots of books are written on scientific computing, but very few deal with the much more common exploratory computing (a term coined by Fernando Perez), which represents the daily tasks of many scientists and engineers who try to solve problems but are not computer scientists. This set of Notebooks is written for scientists and engineers who want to use Python programming for exploratory computing, scripting, data analysis, and visualization. Python makes many of these programming tasks quick and easy and, probably most importantly, fun. No prior knowledge of computer programming is assumed. Each Notebook covers a specific topic and includes a number of exercises. The exercises should take less than 4 hours to complete for each Notebook. Download the Notebooks and accompanying data files from the github repositories. These Notebooks contain empty output cells. Running the output cells is part of learning Python. Notebooks with output cells have the suffix _sol and can be viewed by clicking on the links to the Notebook Viewer below. The following Notebooks are available (they are under development; more will follow soon):
###Applications to Water Management
Notebook WM1: Time Series Data and Pandas
Notebook WM2: Manning's Equation and Emptying Reservoir
Notebook WM3: Water Distribution Systems
###Applications to Probability and Statistics
Notebook S1: Discrete Random Variables
Notebook S2: Continuous Random Variables
Notebook S3: Distribution of the Mean, Hypothesis Tests, and the Central Limit Theorem
Notebook S4: Linear regression and curve fitting
###Some More Advanced Python Topics
Notebook ADV1: Finding the Zero of a Function
Notebook ADV2: Systems of linear equations
Notebook ADV3: Object oriented programming
Notebook ADV4: Interactive Graphics with Matplotlib Widgets
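To give a flavor of the kind of exploratory task the Notebooks cover, here is a minimal, standard-library-only sketch in the spirit of the Water Management material: summarizing a daily rainfall record. The data values are made up for illustration; the Notebooks themselves use pandas for this kind of work.

```python
from statistics import mean

# A tiny time series of daily rainfall in millimetres (made-up data).
rainfall_mm = {
    "2015-01-01": 2.1, "2015-01-02": 0.0, "2015-01-03": 5.4,
    "2015-01-04": 3.3, "2015-01-05": 0.0, "2015-01-06": 1.2,
}

# Two typical exploratory questions: the average daily rainfall,
# and how many of the days were wet.
print(mean(rainfall_mm.values()))
print(sum(1 for v in rainfall_mm.values() if v > 0))
```

With pandas (as in Notebook WM1) the same questions become one-liners on a `Series` with a `DatetimeIndex`, which is exactly the step up the Notebooks walk through.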
The Laplacian of a Graph
The Laplacian is another important matrix associated with a graph, and the Laplacian spectrum is the spectrum of this matrix. We will consider the relationship between structural properties of a graph and the Laplacian spectrum, in a similar fashion to the spectral graph theory of previous chapters. We will meet Kirchhoff’s expression for the number of spanning trees of a graph as the determinant of the matrix we get by deleting a row and column from the Laplacian. This is one of the oldest results in algebraic graph theory. We will also see how the Laplacian can be used in a number of ways to provide interesting geometric representations of a graph. This is related to work on the Colin de Verdière number of a graph, which is one of the most important recent developments in graph theory.
Keywords: Spanning Tree, Adjacency Matrix, Connected Graph, Regular Graph, Hamilton Cycle
- H. van der Holst, L. Lovász, and A. Schrijver, The Colin de Verdière graph parameter, in Graph Theory and Combinatorial Biology (Balatonlelle, 1996), János Bolyai Math. Soc., Budapest, 1999, 29–85.
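Kirchhoff's expression is easy to check on a small example. The sketch below builds the Laplacian L = D − A of the complete graph K4, deletes one row and the matching column, and takes the determinant; Cayley's formula says K4 should have 4^(4−2) = 16 spanning trees.

```python
# Kirchhoff's matrix-tree theorem: the number of spanning trees equals
# the determinant of the Laplacian with any one row and the matching
# column deleted.

def det(m):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

n = 4
# Adjacency matrix of K4, then Laplacian L = D - A.
A = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
L = [[sum(A[i]) if i == j else -A[i][j] for j in range(n)] for i in range(n)]
reduced = [row[1:] for row in L[1:]]     # delete row 0 and column 0
print(det(reduced))  # 16
```

The answer does not depend on which row/column pair is deleted, which is part of what makes the theorem so convenient in practice.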
Short Report - Open Access

Identification and characterization of microsatellite loci in two socially complex Old World tropical babblers (Family Timaliidae)

© Kaiser et al. 2015. Received: 14 September 2015; Accepted: 10 November 2015; Published: 24 November 2015

Although the highest diversity of birds occurs in tropical regions, little is known about the genetic mating systems of most tropical species. We describe microsatellite markers isolated in the chestnut-crested yuhina (Staphida everetti), endemic to the island of Borneo, and the grey-throated babbler (Stachyris nigriceps), widely distributed across Southeast Asia. Both species belong to the avian family Timaliidae and are highly social, putatively cooperatively breeding birds in which helpers attend the nests of members of their social group. We obtained DNA from individuals in social groups breeding in Kinabalu Park, Malaysian Borneo. We used a shotgun sequencing approach and 454 technology to identify 36 microsatellite loci in the yuhina and 40 in the babbler. We tested 13 primer pairs in yuhinas and 20 in babblers and characterized eight polymorphic loci in 20 unrelated female yuhinas and 21 unrelated female babblers. Polymorphism at the yuhina loci ranged from 3 to 9 alleles, observed heterozygosities from 0.58 to 1.00, and expected heterozygosities from 0.64 to 0.81. Polymorphism at the babbler loci ranged from 3 to 12 alleles, observed heterozygosities from 0.14 to 0.90, and expected heterozygosities from 0.14 to 0.87. One locus in the yuhina deviated significantly from Hardy–Weinberg equilibrium. We detected nonrandom allele associations between two pairs of microsatellite loci in each species. Microsatellite markers will be used to describe the genetic mating system of these socially complex species and to measure genetic parentage and relatedness within social groups.

Tropical regions support a higher diversity of bird species than any other region worldwide.
Yet, we know little about the social behavior and mating systems of tropical birds [2, 3], especially Old World tropical species. The chestnut-crested yuhina (Staphida everetti) is an endemic, resident bird that lives in tropical montane forests on the island of Borneo [4, 5]. The grey-throated babbler (Stachyris nigriceps) is a common resident of the tropical submontane forests of the Northeast Indian subcontinent, southern China, Southeast Asia and Sumatra. Both species belong to the Old World avian family Timaliidae, which comprises oscine passerine birds generally known as babblers. Babblers show striking diversity in their social behaviors and mating systems. The chestnut-crested yuhina (hereafter, yuhina) is a highly social bird that forages throughout the canopy in large single-species flocks of 10–30 birds and the grey-throated babbler (hereafter, babbler) forages in small groups of 5–8 individuals during the breeding months [4, 7]. Both species are putatively cooperative breeders in which helpers attend the nests of their social group members (T. E. Martin unpubl. data). We describe the isolation and characterization of eight polymorphic microsatellite loci in each species that we will use to measure genetic parentage and relatedness between breeders and their offspring and helpers and to investigate the social structure, dispersal, and genetic mating system of these species.

We used 454 GS-Junior shotgun sequencing to develop species-specific microsatellite markers. Genomic DNA was extracted from whole blood stored in lysis buffer with the DNEasy Blood and Tissue DNA Kit (Qiagen, Valencia, CA). Genomic DNA (2 µg) was sheared on the Q800R sonicator (QSonica, Newton, CT) for 2 min into 300–500 bp fragments. The sheared DNA sample was purified with Sera-Mag Speed Beads (2×) (Thermo Fisher Scientific, Waltham, MA) and eluted in 15 μl of ddH2O. Purified samples were prepared for 454 sequencing using a shotgun library preparation protocol.
DNA fragments were blunt-ended and short adapters ligated to the 3′ and 5′ ends of each fragment with the NEB Quick Blunting and Quick Ligase Kits (New England Biolabs, Ipswich, MA). One end of each fragment contained a unique sample-specific 8 bp barcode. Fragments with adapters successfully ligated were reamplified using emulsion PCR (emPCR) primers and libraries were purified with Sera-Mag Speed Beads in PEG solution and size selected by gel extraction from a 1.5 % agarose gel with the MinElute Gel Extraction Kit (Qiagen). Libraries were quantified with the 454 Library Quantification Kit (Kapa). The yuhina single-stranded DNA (ssDNA) library had an average length of 500 bp and the babbler ssDNA library had an average length of 600 bp. We pooled the barcoded ssDNA libraries with one other individually barcoded species and conducted an emPCR at a concentration of 0.6 copies per bead with Lib-L Roche kits and reagents. The emPCR yielded 3 % enriched beads for sequencing a single PicoTiter plate on the 454 Genome Sequencer Junior System (GS-Junior, 454 Life Sciences, a Roche Company, Branford, CT). We used the Roche software shotgun pipeline for quality filtering on the GS-Junior, resulting in a total of 38,619 sequenced fragments for the three species. The 454 datasets were demultiplexed using a MIDconfig.parse file from the sfffile program. The single run yielded 17,645 reads for the yuhina (range 32–580 bp) and 8029 reads (range 32–548 bp) for the babbler, both with an average read length of 350 bp. We filtered the reads (min length = 60, max length = 400, ambiguity max 1 % of N, mean quality score = 15–25) and trimmed the run of low quality sequences (mean quality score: 5′ = 20, 3′ = 20) with PRINSEQ , resulting in 14,492 good reads for the yuhina and 6591 good reads for the babbler. We used 454 sequence data to identify microsatellites and design PCR primer pairs. 
We screened for perfect and imperfect (>85 %) microsatellites (repeats of di-, tri- and tetranucleotides) with minimum repeat lengths of 20, 24, and 28 bp, respectively, with the Phobos plugin in Geneious . We selected unique repeat motifs with adequate length of flanking sequences (~25 bp) to design primer pairs (~18 bp) for amplifying these repeats. We used BLAST to identify and remove microsatellite sequences that matched known bacteria over the length of the read. We aligned all reads with microsatellites to remove exact duplicates with Geneious. This resulted in 36 unique microsatellites in the yuhina and 40 in the babbler for primer design. We designed five primer pairs for each microsatellite locus with the Primer3 plugin and chose the best pair of primers for each microsatellite based on a combination of least dimer (pair, self, and/or hairpin) and matching melting temperatures between primers with a size <300 bp. Thirteen yuhina primer pairs and 20 babbler primer pairs were tested for amplification, optimized, and screened for polymorphism using DNA from unrelated females each from different social groups (yuhinas = 20 females, babblers = 21 females) sampled in Kinabalu Park, Malaysian Borneo. We amplified 1 μL of genomic DNA from each individual at each locus in a 10 μL PCR containing 4.15 μL dH2O, 1 μL 10× PCR buffer, 1.0 μL 25 mM MgCl2 (2.5 mM final concentration), 1.0 μL 10 mM deoxyribonucleotide triphosphates, 0.4 μL 10 μM forward and pigtail reverse primers, 1.0 μL of 2.5× bovine serum albumin, and 0.05 μL 5.0 U μL−1 AmpliTaq Gold DNA polymerase [Applied Biosystems (ABI), Carlsbad, CA]. We initially used touchdown cycling conditions decreasing by 0.5 °C for each cycle (55–65 °C for yuhinas and 50–60 °C for babblers) with unlabeled forward primers to test for amplification and to optimize the annealing temperatures (TA) for each primer pair. 
After determining the optimal range of TA, we added a 5′ fluorescent label (6-FAM, 5-HEX; Eurofins MWG Operon; NED, ABI) to the forward primer and a six base-pair ‘pigtail’ (GTTTCT) to the 5′ end of the reverse primer (babbler primers only) to promote adenylation of the 3′ end of the forward strand to improve genotyping accuracy . We ran PCRs with fluorescently labeled forward primers on a DYAD thermal cycler (MJ Research) under the following conditions for yuhina primers: initial denaturing at 94 °C for 8 min, followed by eight cycles of 94 °C for 30 s, primer-specific upper TA for 30 s and decreasing by 0.5 °C for each cycle, 72 °C for 1 min, followed by 35 cycles of 94 °C for 30 s, primer-specific lower TA for 30 s, and 72 °C for 1 min, then a final extension at 72 °C for 30 min. We ran PCRs under the following conditions for babbler primers: initial denaturing at 94 °C for 8 min, followed by 45 cycles of 92 °C for 30 s, primer-specific TA for 40 s, 72 °C for 40 s and a final extension at 72 °C for 7 min. The labeled PCR products were analyzed on an ABI PRISM 3130 Genetic Analyzer (ABI) and allele sizes were scored with the GeneScan 500 ROX size standard (ABI) in Genemapper v.4.1 (ABI). All primers amplified and eight of these primers were polymorphic in each species. Characteristics of eight microsatellite loci developed and optimized from the chestnut-crested yuhina, Staphida everetti Primer sequence (5′–3′) Size range (bp) GenBank accession no. Characteristics of eight microsatellite loci in the grey-throated babbler, Stachyris nigriceps Primer sequence (5′–3′) Size range (bp) GenBank accession no. Availability of supporting data SK developed grey-throated babbler microsatellite markers and JD and LB developed chestnut-crested yuhina microsatellite markers. SK analyzed the data and drafted the manuscript. RF designed the study and helped to draft the manuscript. All authors read and approved the final manuscript. We thank T. 
Martin and the numerous field technicians for collection of Staphida everetti and Stachyris nigriceps blood samples in Borneo. All research activities were performed under protocols approved by the Animal Care and Use Committees of the authors’ institutions and all federal and international permits were in hand when the research was conducted. We are grateful to Sabah Parks and the Sabah Biodiversity Centre in Malaysia for help in facilitating this study. This research was supported by a National Science Foundation Grant awarded to T. Martin, R. C. Fleischer, and E. Martinsen (DEB 1241041). The authors declare that they have no competing interests. Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. - Stutchbury BJM, Morton ES. Behavioral ecology of tropical birds. San Diego: Academic; 2001.Google Scholar - Macedo RH, Karubian J, Webster MS. Extrapair paternity and sexual selection in socially monogamous birds: are tropical birds different? Auk. 2008;125:769–77.View ArticleGoogle Scholar - Tori WP, Durães R, Ryder TB, Anciães M, Macedo RH, Uy JAC, Parker PG, Smith TB, Stein AC, Webster MS, Blake JG, Loiselle BA. Advances in sexual selection theory: insights from tropical avifauna. Ornitol Neotrop. 2008;19:151–63.Google Scholar - Myers S. Birds of Borneo: Brunei, Sabah, Sarawak, and Kalimantan. Princeton: University Press; 2009.Google Scholar - Collar N, Robson C. Chestnut-crested Yuhina (Staphida everetti). 
In: del Hoyo J, Elliot A, Christie D, editors. Handbook of the birds of the world: Picathartes to tits and chickadees, vol. 12. Barcelona: Lynx Edicions; 2007.Google Scholar - Moyle RG, Andersen MJ, Oliveros CH, Steinheimer FD, Reddy S. Phylogeny and biogeography of the core babblers (Aves: Timaliidae). Syst Biol. 2012;61:631–51.View ArticlePubMedGoogle Scholar - Collar N, Robson C. Grey-throated Babbler (Stachyris nigriceps). In: del Hoyo J, Elliot A, Christie D, editors. Handbook of the birds of the world: Picathartes to tits and chickadees, vol. 12. Barcelona: Lynx Edicions; 2007.Google Scholar - White PS, Densmore LD. Mitochondrial DNA isolation. In: Hoelzel AR, editor. Molecular genetic analysis of populations: a practical approach. New York: University Press; 1992. p. 50–1.Google Scholar - Hofman CA, Rick TC, Hawkins MTR, Funk WC, Ralls K, Boser CL, Collins PW, Coonan T, King JL, Morrison SA, Newsome SD, Sillett TS, Fleischer RC, Maldonado JE. Mitochondrial genomes suggest rapid evolution of dwarf California channel islands foxes (Urocyon littoralis). PLoS One. 2015;10:e0118240.PubMed CentralView ArticlePubMedGoogle Scholar - Schmieder R, Edwards R. Quality control and preprocessing of metagenomic datasets. Bioinformatics. 2011;27:863–4.PubMed CentralView ArticlePubMedGoogle Scholar - Mayer C: Phobos. 3.3.11. 2006–2010. http://www.rub.de/speezoo/cm/cm_phobos.htm. - Kearse M, Moir R, Wilson A, Stones-Havas S, Cheung M, Sturrock S, Buxton S, Cooper A, Markowitz S, Duran C, Thierer T, Ashton B, Mentjies P, Drummond A. Geneious basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data. Bioinformatics. 2012;28:1647–9.PubMed CentralView ArticlePubMedGoogle Scholar - Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic local alignment search tool. J Mol Biol. 1990;215:403–10.View ArticlePubMedGoogle Scholar - Untergasser A, Cutcutache I, Koressaar T, Ye J, Faircloth BC, Remm M, Rozen SG. 
Primer3—new capabilities and interfaces. Nucleic Acids Res. 2012;40:e115.PubMed CentralView ArticlePubMedGoogle Scholar - Brownstein MJ, Carpten JD, Smith JR. Modulation of non-templated nucleotide addition by Taq DNA polymerase: primer modifications that facilitate genotyping. Biotechniques. 1996;20:1004–10.PubMedGoogle Scholar - Excoffier L, Laval G, Schneider S. Arlequin ver. 3.0: an integrated software package for population genetics data analysis. Evol Bioinform. 2005;1:47–50.Google Scholar - Rousset F. GENEPOP’007: a complete re-implementation of the GENEPOP software for Windows and Linux. Mol Ecol Resour. 2008;8:103–6.View ArticlePubMedGoogle Scholar - Van Oosterhout C, Hutchinson WF, Wills D, Shipley P. MICRO-CHECKER: software for identifying and correcting genotyping errors in microsatellite data. Mol Ecol Notes. 2004;4:535–8.View ArticleGoogle Scholar
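The expected heterozygosities reported above are derived from population allele frequencies via the standard formula He = 1 - sum(p_i^2). A minimal illustration with hypothetical allele frequencies (not data from this study):

```python
def expected_heterozygosity(freqs):
    """Expected heterozygosity He = 1 - sum(p_i^2) for allele frequencies p_i."""
    assert abs(sum(freqs) - 1.0) < 1e-9, "allele frequencies must sum to 1"
    return 1.0 - sum(p * p for p in freqs)

# Hypothetical four-allele locus
he = expected_heterozygosity([0.4, 0.3, 0.2, 0.1])
print(round(he, 2))  # 0.7
```

Comparing such expected values against observed heterozygosities is the basis of the Hardy–Weinberg tests reported for these loci.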
Correspondence between Geographic Proximity and Phenetic Similarity among Pinus brutia Ten. Populations in Southern Turkey

Pinus brutia Ten. is distributed mainly in the eastern Mediterranean and Aegean regions, from sea level up to 1400 m, and rarely on the Black Sea coast from 0 up to 500 m, growing under diverse ecological conditions (Critchfield and Little 1966). The species exhibits considerable variation in both form and growth characteristics across its range, suggesting that at least some portion of such variation is under genetic control. Understanding the geographic variation patterns of a tree species is of vital importance in afforestation and reforestation programs, since unknown seed sources often result in great economic losses due to lack of adaptation to a given region (Callaham 1964). Among other things, the degrees of within- and between-population variation, and the distances of seed transfer zones for a given population, both vertically and horizontally in a given space, must first be determined for selection and breeding processes.

Keywords: Geographic Proximity, Aegean Region, Reforestation Program, Great Economic Loss, Seedling Character

- Callaham, R. Z. 1964. Provenance research: Investigation of genetic diversity associated with geography. Unasylva 18 (2–3): 2–12.
- Critchfield, W. B. and E. L. Little. 1966. Geographic Distribution of the Pines of the World. US Dept. of Agric., Forest Service, Washington, D.C. 97 pp.
- Dixon, W. J. 1973. BMD-Biomedical Computer Programs. Univ. of Calif., Berkeley, 600 pp.
- Işık, K. 1980. Kızılçamda (Pinus brutia Ten.) Populasyonlar Arası ve Populasyonlar İçi Genetik Çeşitliliğin Araştırılması. I: Tohum ve Fidan Karakterleri [Investigation of genetic variation among and within populations of Turkish red pine (Pinus brutia Ten.). I: Seed and seedling characters]. TÜBİTAK/TOAG No. 335, Ankara, 149 pp.
- Sneath, P. H. A. and R. R. Sokal. 1973. Numerical Taxonomy. W. H. Freeman and Co., San Francisco, 573 pp.
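Phenetic similarity analyses of the kind cited above (cf. Sneath and Sokal 1973) typically begin with a taxonomic distance matrix computed over population trait means. A minimal sketch with NumPy; the population labels and seedling-character values are hypothetical, not data from this study:

```python
import numpy as np

# Rows: populations; columns: standardized seedling characters (hypothetical)
traits = np.array([
    [0.0, 1.0, 0.5],   # population A
    [0.1, 0.9, 0.6],   # population B (geographically near A)
    [2.0, -1.0, 1.5],  # population C (geographically distant)
])

# Pairwise Euclidean (taxonomic) distances between all populations
diff = traits[:, None, :] - traits[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# If phenetic similarity tracks geographic proximity, nearby populations
# should be phenetically closer than distant ones
print(dist[0, 1] < dist[0, 2])  # True
```

Comparing such a phenetic distance matrix against a geographic distance matrix is the core of testing the correspondence the title describes.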
The term coordination geometry is used in a number of related fields of chemistry and solid state chemistry/physics. The coordination geometry of an atom is the geometrical pattern formed by atoms around the central atom.

Inorganic coordination complexes

In the field of inorganic coordination complexes it is the geometrical pattern formed by the atoms in the ligands that are bonded to the central atom in a molecule or a coordination complex. The geometrical arrangement will vary according to the number and type of ligands bonded to the metal centre, and to the coordination preference of the central atom, typically a metal in a coordination complex. The number of atoms bonded (i.e. the number of σ-bonds between central atom and ligands) is termed the coordination number. The geometrical pattern can be described as a polyhedron where the vertices of the polyhedron are the centres of the coordinating atoms in the ligands.

One of the most common coordination geometries is octahedral, where six ligands are coordinated to the metal in a symmetrical distribution, leading to the formation of an octahedron if lines were drawn between the ligands. Other common coordination geometries are tetrahedral and square planar. Crystal field theory may be used to explain the relative stabilities of transition metal compounds of different coordination geometry, as well as the presence or absence of paramagnetism, whereas VSEPR may be used for complexes of main group elements to predict geometry.

In a crystal structure the coordination geometry of an atom is the geometrical pattern of coordinating atoms, where the definition of coordinating atoms depends on the bonding model used. For example, in the rock salt ionic structure each sodium atom has six near-neighbour chloride ions in an octahedral geometry, and each chloride similarly has six near-neighbour sodium ions in an octahedral geometry.
In metals with the body centred cubic (bcc) structure each atom has eight nearest neighbours in a cubic geometry. In metals with the face centred cubic (fcc) structure each atom has twelve nearest neighbours in a cuboctahedral geometry.

Table of coordination geometries

A table of the coordination geometries encountered is shown below with examples of their occurrence in complexes found as discrete units in compounds and coordination spheres around atoms in crystals (where there is no discrete complex).

| Coordination number | Geometry | Examples of discrete (finite) complex | Examples in crystals |
|---|---|---|---|
| 2 | linear | Ag(CN)2− in KAg(CN)2 | Ag in silver cyanide, Au in AuI |
| 3 | trigonal planar | HgI3− | O in TiO2 rutile structure |
| 4 | tetrahedral | CoCl42− | Zn and S in zinc sulfide, Si in silicon dioxide |
| 5 | square pyramidal | InCl52− in (NEt4)2InCl5 | |
| 6 | octahedral | Fe(H2O)62+ | Na and Cl in NaCl |
| 6 | trigonal prismatic | Mo(SCHCHS)3 | As in NiAs, Mo in MoS2 |
| 7 | pentagonal bipyramidal | ZrF73− in (NH4)3ZrF7 | Pa in PaCl5 |
| 7 | face-capped octahedral | [HoIII(PhCOCHCOPh)3(H2O)] | La in A-La2O3 |
| 7 | trigonal prismatic, square face monocapped | TaF72− in K2TaF7 | |
| 8 | cubic | | Caesium chloride, calcium fluoride |
| 8 | square antiprismatic | TaF83− in Na3TaF8; Zr(H2O)84+ aqua complex | |
| 8 | dodecahedral (note: whilst this is the term generally used, the correct term is "bisdisphenoid" or "snub disphenoid", as this polyhedron is a deltahedron) | Mo(CN)84− in K4[Mo(CN)8].2H2O | Zr in K2ZrF6 |
| 8 | hexagonal bipyramidal | | N in Li3N |
| 8 | octahedral, trans-bicapped | | Ni in nickel arsenide, NiAs; 6 As neighbours + 2 Ni capping |
| 8 | trigonal prismatic, triangular face bicapped | | Ca in CaFe2O4 |
| 8 | trigonal prismatic, square face bicapped | | PuBr3 |
| 9 | tricapped trigonal prismatic (three rectangular faces capped) | [ReH9]2− in potassium nonahydridorhenate; Th(H2O)94+ aqua complex | SrCl2.6H2O, Th in RbTh3F13 |
| 9 | monocapped square antiprismatic | [Th(tropolonate)4(H2O)] | La in LaTe2 |
| 10 | bicapped square antiprismatic | Th(C2O4)42− | |
| 11 | | Th in [ThIV(NO3)4(H2O)3] (NO3− is bidentate) | |
| 12 | icosahedron | Th in Th(NO3)62− ion in Mg[Th(NO3)6].8H2O | |
| 12 | cuboctahedron | ZrIV(η3−(BH4)4) | atoms in fcc metals e.g. Ca |
| 12 | anticuboctahedron (triangular orthobicupola) | | atoms in hcp metals e.g. Sc |
| 14 | bicapped hexagonal antiprismatic | U(BH4)4 | |

Naming of inorganic compounds

IUPAC have introduced the polyhedral symbol as part of their IUPAC nomenclature of inorganic chemistry 2005 recommendations to describe the geometry around an atom in a compound. IUCr have proposed a symbol which is shown as a superscript in square brackets in the chemical formula. For example, CaF2 would be Ca[8cb]F2[4t], where [8cb] means cubic coordination and [4t] means tetrahedral. The equivalent symbols in IUPAC are CU−8 and T−4 respectively. The IUPAC symbol is applicable to complexes and molecules whereas the IUCr proposal applies to crystalline solids.

- J. Lima-de-Faria; E. Hellner; F. Liebau; E. Makovicky; E. Parthé (1990). "Report of the International Union of Crystallography Commission on Crystallographic Nomenclature Subcommittee on the Nomenclature of Inorganic Structure Types". Acta Crystallogr. A. 46: 1–11. doi:10.1107/S0108767389008834.
- Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 0-08-037941-9.
- Wells, A. F. (1984). Structural Inorganic Chemistry (5th ed.). Oxford Science Publications. ISBN 0-19-855370-6.
- Zalkin, A.; Templeton, D. H.; Karraker, D. G. (1969). "Crystal and molecular structure of the heptacoordinate complex tris(diphenylpropanedionato)aquoholmium, Ho(PhCOCHCOPh)3.H2O". Inorganic Chemistry. 8 (12): 2680–2684. doi:10.1021/ic50082a029.
- Persson, Ingmar (2010). "Hydrated metal ions in aqueous solution: How regular are their structures?". Pure and Applied Chemistry. 82 (10): 1901–1917. doi:10.1351/PAC-CON-09-10-22. ISSN 0033-4545.
- Pettifor, David G. (1995). Bonding and Structure of Molecules and Solids. Oxford University Press. ISBN 0-19-851786-6.
- Nomenclature of Inorganic Chemistry: IUPAC Recommendations 2005. Ed. N. G. Connelly et al. RSC Publishing. http://www.chem.qmul.ac.uk/iupac/bioinorg/
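The six-fold octahedral coordination in rock salt described above can be verified by counting nearest neighbours on an idealized lattice. A minimal sketch with NumPy, using a unit-spacing lattice where sites with even coordinate sum are Na and sites with odd coordinate sum are Cl (an illustrative model, not crystallographic data):

```python
import numpy as np
from itertools import product

# Enumerate lattice sites in a small cube around the origin
sites = list(product(range(-2, 3), repeat=3))
cl = [p for p in sites if sum(p) % 2 == 1]  # Cl sites (odd coordinate sum)

# Distances from a Na atom at the origin to every Cl site
d = np.array([np.linalg.norm(np.array(p)) for p in cl])

# Count how many Cl sites sit at the minimum distance
n_nearest = int(np.isclose(d, d.min()).sum())
print(n_nearest)  # 6 nearest Cl neighbours, i.e. octahedral coordination
```

The same counting approach with a bcc or fcc lattice reproduces the coordination numbers 8 and 12 quoted in the text.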
Mechanisms of uranium and thorium transfer to the crust

A variety of transfer mechanisms can be conceived which are compatible with the evidence. It is assumed that the initial condition of all three radioelements was one of more or less homogeneous dispersion. If the earth accreted from solid fragments or dust, homogeneous dispersion is not to be expected; but it would be expected if the earth consolidated from gases. Broadly, matter can be transferred in three states: solid, liquid, or gas. Five mechanisms involving the three states are:

Solid:
1. Bodily transfer of large masses without melting, by creep.
2. Bodily transfer of large masses by total melting.
3. Transfer of elements selected at depth by partial melting and concentrated at depth into a differentiated magma.

Gas and/or liquid:
4. Upward volatile diffusion of selected elements or ions without significant rock melting. These elements would remain dispersed and low in concentration. They would appear in the crust as pervasive, widespread disseminations.
5. Upward transfer of large masses of gases or fluids selectively volatilized and collected and concentrated at depth. These would appear in the crust as concentrated hydrothermal fluids.

At the outset of any of the transfer processes, the three radioelements might be considered to act similarly and together because their dispersions, abundances, and chemistry would be reasonably comparable. However, the fact that this similarity of action has not persisted into the crust is apparent because of the different degrees of enrichment in different rocks. In the crust, uranium and thorium continue to act similarly to produce trace-element disseminations through the magmatic stage, but potassium concentrates sufficiently at an early stage to become a rock-forming constituent. Eventually, uranium and thorium separate completely to produce independent deposits in the low-temperature realm.
Page count: 122 pages

About the e-book Object-Oriented Programming in R

Object-oriented programming is a powerful paradigm for constructing reusable and maintainable code. This book gives an introduction to object-oriented programming in the R programming language. Object-oriented programming is a style of programming that focuses on data as "objects" that have state and can be manipulated by polymorphic or generic methods. In object-oriented programming, you model your programs by describing which states an object can be in and how methods will reveal or modify that state. Object-oriented programming achieves high flexibility through so-called polymorphism, where the concrete methods that are executed depend on the type of data being manipulated. In this book, I teach you how to write object-oriented programs: how to construct classes and class hierarchies in the three object-oriented systems available in the R programming language, and how to exploit polymorphism to write flexible and extendable software.
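The book itself covers R's object systems (S3, S4, and Reference Classes), but the core idea of polymorphic dispatch it describes is language-independent. A minimal sketch of the same concept in Python; the class names and values are hypothetical illustrations, not examples from the book:

```python
# Polymorphism: which concrete method runs depends on the object's type.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side          # object state
    def area(self):               # method revealing that state
        return self.side ** 2

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.141592653589793 * self.r ** 2

# The same call, s.area(), dispatches to different concrete methods
shapes = [Square(2), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # [4, 3.14]
```

In R the dispatch would instead go through a generic function such as a UseMethod-based S3 generic, but the flexibility gained is the same.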
Engineers have found a way to pinpoint and identify the tiny iron oxide particles associated with Alzheimer's and other neurodegenerative diseases in the brain. The technique is likely to accelerate research on the cause of the diseases and could lead to the first diagnostic procedure for Alzheimer's in living patients.

"We're the first to be able to tell you both the location of the particles and what kind of particles they are," said Mark Davidson, a University of Florida engineer in UF's materials science and engineering department.

Mark Davidson | EurekAlert!
April 9, 2009

I know science is amazing, but how can they possibly know this asteroid is 1,837 feet? Guesstimating it to be even 1/4 mile long would be amazing, but to claim the size down to 37 feet seems impossible.

Asteroid Could Threaten the Earth in 2182

Tariq Malik, SPACE.com Managing Editor – Tue Jul 27, 12:45 pm ET

A large asteroid in space that has a remote chance of slamming into the Earth would most likely hit in 2182, if it crashed into our planet at all, a new study suggests. The asteroid, called 1999 RQ36, has about a 1-in-1,000 chance of actually hitting the Earth, but half of that risk corresponds to potential impacts in the year 2182, said study co-author María Eugenia Sansaturio of the Universidad de Valladolid in Spain.

Sansaturio and her colleagues used mathematical models to determine the risk of asteroid 1999 RQ36 impacting the Earth through the year 2200. They found two potential opportunities for the asteroid to hit Earth in 2182. The research is detailed in the science journal Icarus.

The asteroid was discovered in 1999 and is about 1,837 feet (560 meters) across. A space rock this size could cause widespread devastation at an impact site in the remote chance that it hit Earth, according to a recent report by the National Academy of Sciences.

Scientists have tracked asteroid 1999 RQ36's orbit through 290 optical observations and 13 radar surveys, but there is still some uncertainty because of the gentle push it receives from the so-called Yarkovsky effect, researchers said. The Yarkovsky effect, named after the Russian engineer I.O. Yarkovsky, who proposed it around 1900, describes how an asteroid gains momentum from thermal radiation that it emits from its night side.
Over hundreds of years, the effect's influence on an asteroid's orbit could be substantial. Sansaturio and her colleagues found that through 2060, the chances of Earth impacts from 1999 RQ36 are remote, but the odds increase by a magnitude of four by 2080 as the asteroid's orbit brings it closer to the Earth. The odds of impact then dip as the asteroid would move away, and rise in 2162 and 2182, when it swings back near Earth, the researchers found. It's a tricky orbital dance that makes it difficult to pin down the odds of impact, they said. "The consequence of this complex dynamic is not just the likelihood of a comparatively large impact, but also that a realistic deflection procedure (path deviation) could only be made before the impact in 2080, and more easily, before 2060," Sansaturio said in a statement. After 2080, she added, it would be more difficult to deflect the asteroid. "If this object had been discovered after 2080, the deflection would require a technology that is not currently available," Sansaturio said. "Therefore, this example suggests that impact monitoring, which up to date does not cover more than 80 or 100 years, may need to encompass more than one century." By expanding the timeframe for potential impacts, researchers would potentially identify the most threatening space rocks with enough time to mount deflection campaigns that are both technologically and financially feasible, Sansaturio said. Original Story: Asteroid Could Threaten the Earth in 2182
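A quick arithmetic check of the odds quoted in the article: a 1-in-1,000 overall chance with half of that risk concentrated in 2182 implies roughly 1-in-2,000 odds for that year. The numbers below come straight from the text; only the arithmetic is added.

```python
# Quick check of the odds quoted in the article. Both input values are as
# reported in the text; nothing here is an independent estimate.
total_odds = 1.0 / 1000.0   # overall chance of impact through 2200
share_2182 = 0.5            # half of the total risk falls in 2182
odds_2182 = total_odds * share_2182

print(f"odds of a 2182 impact: {odds_2182} (about 1 in {round(1 / odds_2182)})")
```

This works out to 0.0005, i.e., about 1 in 2,000 for the 2182 encounters specifically.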
Snow plays an important role in the climate system of high elevations and cold regions because of its high albedo and low heat conductivity, and it is also an important water resource for these regions.1–4 Snow sublimation has been identified as an important hydrological process and results in a significant loss of snowpack water, which involves complex mass and energy exchanges.4–7 Extreme cases of sublimation have been shown to be very efficient at removing snowpack water, with losses of up to 90% of annual snowpack on preferred alpine crests.5 A wide range of snow sublimation rates have been reported in the literature.7–11 High sublimation rates were observed in the Colorado Front Range with daily values of 2.35 mm and in the alpine region of the Sierra Nevada with daily values of 2.17 mm.8,9 However, sublimation rates were reported with 0.39 and at wind-exposed and wind-sheltered sites, respectively, in the Owyhee Mountains of the USA.7 These various results may be explained by environmental differences among study sites, because snow sublimation is affected by multiple factors such as wind speed, net radiation, air temperature, and vapor pressure deficit,5 and the spatial variability of snow sublimation is dependent on the spatial variability of these factors.5,10,11 Sublimation, as the moisture flux between the snow and the atmosphere, is commonly measured by an evaporation pan. The evaporation pan has been widely used with the advantage of simplicity and economy, but with more manual work.4,12–15 Schmidt et al.13 estimated that sublimation from snowpack was 78-mm water equivalent and represented about 20% of the normal peak water equivalent of the snowpack, measured with a 65-cm diameter pan during a 40-day accumulation period in a Colorado subalpine forest.
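A back-of-envelope check of the Schmidt et al. pan figures quoted above: if 78 mm is about 20% of the peak water equivalent, the implied peak SWE is roughly 390 mm, and the mean pan rate over 40 days is just under 2 mm per day, consistent with the daily rates quoted earlier for Colorado and the Sierra Nevada.

```python
# Back-of-envelope check of the Schmidt et al. pan figures quoted in the text.
sublimation_swe_mm = 78.0    # measured pan sublimation over the period
fraction_of_peak = 0.20      # "about 20% of the normal peak water equivalent"
period_days = 40

implied_peak_swe_mm = sublimation_swe_mm / fraction_of_peak
mean_daily_rate = sublimation_swe_mm / period_days

print(f"implied peak SWE ~= {implied_peak_swe_mm:.0f} mm")
print(f"mean pan sublimation ~= {mean_daily_rate:.2f} mm/day")
```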
However, measurements by an evaporation pan were found to be invalid when precipitation or blowing snow occurred.15 The eddy covariance (EC) method is another approach to measure snow sublimation and is widely used.7,16–21 EC uses high-frequency (10 to 20 Hz) measurements of wind vector and scalar quantities as the basis for flux computations. A detailed description of the EC method and data processing to determine sublimation can be found in Reba et al.17 EC sensors respond poorly during precipitation events, however.7,17 Additionally, the so-called energy balance closure problem is a widespread occurrence in EC measurements, with reported discrepancies of 10% to 30%.22,23 Despite these challenges, EC data can be a very valuable data source for snow research when thoroughly corrected to ensure accuracy.7,17,18,21 In addition to in situ observation, methods to estimate snow sublimation were developed in the past several decades. Initially, empirical formulas were widely adopted, based on the statistical relationship between snow sublimation and meteorological conditions (e.g., wind speed and vapor pressure deficit).24,25 Strasser et al.5 used empirical formulas to estimate sublimation from the snowpack to be about 100-mm snow water equivalent (SWE) during the 2003/2004 winter period in a mountainous region in the southeast of Germany. Empirical methods are simple but have low accuracy and a poor physical basis.8,10 The aerodynamic profile method was developed later based on complex turbulent transfer theory and is more reliable than the empirical methods.8,10,14 Hood et al.8 used the aerodynamic profile method to estimate snow sublimation: total net sublimation was 195-mm water equivalent, or 15% of annual snow accumulation, at Niwot Ridge in the Colorado Front Range during the 1994/1995 snow season.
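The core of the EC flux computation described above is the covariance of vertical wind and humidity fluctuations: LE = ρa · λs · cov(w′, q′). The sketch below applies that to synthetic 10 Hz samples; the air density and latent heat of sublimation are typical constants, not values from the paper, and the data are artificial.

```python
import numpy as np

# Minimal eddy-covariance sketch: LE = rho_a * lambda_s * cov(w', q').
# Synthetic 10 Hz data; rho_a and lambda_s are typical constants (assumed),
# not values taken from the paper.
rng = np.random.default_rng(0)
n = 10 * 60 * 30                        # 30 min of 10 Hz samples
w = rng.normal(0.0, 0.3, n)             # vertical wind fluctuations (m/s)
q = 1.5e-3 + 0.02e-3 * w + rng.normal(0.0, 0.01e-3, n)  # specific humidity (kg/kg)

rho_a = 1.0        # air density at high altitude (kg/m^3), assumed
lambda_s = 2.838e6  # latent heat of sublimation (J/kg)

w_prime = w - w.mean()
q_prime = q - q.mean()
LE = rho_a * lambda_s * float(np.mean(w_prime * q_prime))  # W/m^2
print(f"LE ~= {LE:.1f} W/m^2")
```

With the synthetic correlation built in above, the covariance is positive and the resulting flux is a few W/m², the right order of magnitude for daytime snow sublimation.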
The snowpack is generally assumed to be a saturated surface, which has been proved by many studies,26–30 thus the bulk aerodynamic (BA) method was developed to simplify the aerodynamic profile method and remove the need for multilevel observations. The BA model needs only meteorological measurements at a single height above the snowpack and soon became the most widely used method due to a widespread lack of multilevel meteorological data.7,26–30 Reba et al.7 found that the snow sublimation measured by EC was very similar to that estimated by the BA model, with a correlation coefficient of 0.85 and root mean square difference of . Many physical snow sublimation models have been based upon the energy balance approach.21,29,31,32 The relation between the amount of energy used for snow sublimation and the net radiation and subsurface heat flux can be described by a conceptual model of the energy balance of a control volume, which applies to a single, thin layer of snow placed at the air–snow interface (Fig. 1). The snow surface energy balance equation can be written as Rn − G = H + LE, where Rn is the net radiation, G the snow subsurface heat flux, H the sensible heat flux, and LE the latent heat flux. The method based on the snow surface energy balance also applies one-level meteorological measurements to estimate snow sublimation with the P–M equation, which has been widely adopted in estimating evaporation.33,34 The P–M method for latent heat flux calculation has been extensively used in hydrological, atmospheric, and environmental modeling, and can be used to estimate snow sublimation because air in the vicinity of the snowpack is generally assumed to be vapor saturated.29,31,32 Furthermore, many studies on snow sublimation are constrained by unavailable or very sparse ground observations. At high latitudes, meteorological stations are sparse and located mostly at lower elevations.2 However, snow is more abundant at high latitudes and altitudes.
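The BA parameterization introduced above is commonly written LE = ρ · λs · CE · u · (qsat(Ts) − qa), with a neutral-stability bulk transfer coefficient from the log law. The sketch below uses that common form; the roughness length, measurement height, and all meteorological values are illustrative assumptions, not the paper's.

```python
import math

# Bulk-aerodynamic (BA) latent heat flux over snow in its common form:
# LE = rho * lambda_s * C_E * u * (q_sat(T_s) - q_a).
# Roughness length, height, and met values are illustrative assumptions.
def q_sat_ice(T_c, p=70000.0):
    """Saturation specific humidity over ice (Magnus form); T_c in deg C, p in Pa."""
    e_s = 611.2 * math.exp(22.46 * T_c / (272.62 + T_c))  # Pa
    return 0.622 * e_s / (p - 0.378 * e_s)

k = 0.4             # von Karman constant
z, z0 = 3.0, 1e-3   # measurement height and roughness length (m), assumed
C_E = (k / math.log(z / z0)) ** 2   # neutral bulk transfer coefficient

rho, lam = 1.0, 2.838e6   # air density (kg/m^3, assumed) and latent heat (J/kg)
u, T_s, q_a = 4.5, -5.0, 1.0e-3     # wind speed, surface temp, air humidity

LE = rho * lam * C_E * u * (q_sat_ice(T_s) - q_a)
print(f"C_E = {C_E:.4f}, LE ~= {LE:.1f} W/m^2")
```

Only one measurement level is needed, which is exactly the property the text highlights as the reason the BA method became the most widely used.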
Although many ground-based observations and modeling studies of snow sublimation have provided valuable information (i.e., snow surface temperature and snow depth) at a particular location, it is difficult to extend such information to large scales. The spatial variability of snow sublimation over the vast regions remains largely unknown.2 Satellite observations have long been recognized as effective in providing spatially distributed snow information to estimate snow sublimation.35 Remotely sensed data provide temporally and spatially continuous information over snow surface, i.e., snow surface temperature and fractional snow cover (FSC).36,37 The moderate resolution imaging spectroradiometers (MODIS) onboard NASA’s Terra and Aqua satellites provide unprecedented information regarding snow and surface energies, which can be used for regional- and global-scale snow sublimation estimation in near real-time.38,39 However, to our knowledge, very few studies have been published on the estimation of snow sublimation with satellite observations. The BA method and P–M equation were employed in this study because both methods assume that the snow surface is vapor saturated and use atmospheric measurements at only one level above the snow surface. This is the foundation of calculating the snow sublimation by remote sensing because we cannot get gradient measurements in the surface boundary layer with satellite observations. We present an approach to estimate subpixel snow sublimation using multiple satellite observations by characterizing the subpixel energy balance of the snow cover fraction. We have evaluated our method using both tower and satellite observations as input and compared alternate parameterizations of snow sublimation. 
The objectives of our study were: (1) to evaluate the P–M equation and the BA parameterization using tower observations, (2) to analyze the energy balance of the subpixel snow cover fractional area, and (3) to present and validate the regional snow sublimation estimated using satellite data. Data and Materials Description of Study Area and Measurements The upper reaches of the Heihe River Basin (Fig. 2) are located between 37°41′ N to 39°05′ N and 98°34′ E to 101°11′ E, and cover an area of . Elevation ranges from 1631 to 5245 m, the mean annual precipitation is 350 mm, and the mean annual air temperature is 2.0°C.40 Glaciers covering are distributed above an elevation of 4500 m. At elevations above 4000 m, vegetation is very sparse and dominated by cushion plants, while meadows and shrubs occur below 3300 m.41 Temporary snow usually exists below 2700 m, patchy snowpack from 2700 to 3400 m, and permanent snowpack above 3400 m.42 Snowmelt provides most of the water resources in the Heihe River Basin, which supply agriculture in the middle reaches and the arid ecosystem in the lower reaches.43 The data were collected at two observation stations, Dadongshu mountain pass station (4101 m, 100.23°E, 38.01°N) and Dashalong station (3739 m, 98.94°E, 38.84°N), part of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) project.44,45 These stations were selected because of the locally flat terrain and homogeneous land cover without major obstacles, such as houses and trees. The land cover at these sites is alpine meadow and bare soil. The land surface at these sites is covered by snow most of the time in late autumn, winter, and early spring, but snowfall is larger in spring and autumn than in winter.46 The sensible and latent heat flux data were collected during the period between November 1, 2014, and January 31, 2015, at Dadongshu station, and between October 17, 2014, and December 31, 2014, at Dashalong station.
The fluxes were measured by the EC systems installed at 3 m above the ground at the Dadongshu site and 4.5 m at the Dashalong site. The raw data were sampled at 10 Hz, then 30-min mean values were obtained after a series of corrections, such as data filtering, sonic temperature correction, air density correction, and coordinate rotation.44 The auxiliary data, including the net radiation, air temperature, relative humidity, snow depth, air pressure, wind speed, and snow infrared surface temperature, were also observed by automatic weather stations (AWS) near the EC system in the same period. All the data were processed and recorded at 30-min intervals. There were no snow depth measurements at Dashalong station. The latent heat flux at this station was used to validate the estimates based on satellite data. Satellite and Regional Forcing Data The satellite and regional forcing data used in this study are listed in Table 1. They were acquired from various sources and comprise well-validated products. Remote sensing data and regional atmospheric forcing data used for computing snow sublimation at regional scale in the upstream of the Heihe River Basin from November 1, 2014, to January 31, 2015:

| Variables | Source | Temporal resolution | Spatial resolution |
| --- | --- | --- | --- |
| Downward longwave radiation | HiWATER | Hourly | 5 km |
| Downward shortwave radiation | HiWATER | Hourly | 5 km |
| Air pressure | HiWATER | Hourly | 5 km |
| Air temperature | HiWATER | Hourly | 5 km |
| Wind speed | HiWATER | Hourly | 5 km |
| Specific humidity | HiWATER | Hourly | 5 km |
| Surface temperature | NASA, MOD11A1 | Daily | 1 km |
| FSC | NSIDC, MOD10A1 | Daily | 500 m |
| Land cover type | HiWATER | — | 1 km |

Land surface temperature was retrieved using the radiometric data acquired by MODIS on Terra and downloaded from the National Aeronautics and Space Administration (NASA), which covers the entire upper reach of the Heihe River Basin.
We used the FSC retrieved with the MODIS radiometric data (MOD10A1 product) and downloaded from the National Snow and Ice Data Center (NSIDC). The MOD10A1 FSC is estimated by applying an empirical relationship with the normalized difference snow index (NDSI), established using FSC retrieved at higher spatial resolution with Landsat ETM+ data.37 The original MOD10A1 has a 500-m spatial resolution and was resampled to a 1-km grid. Land cover type is required to unmix the land surface temperature and estimate the radiometric surface temperature of the snow cover fraction. The land cover map at a spatial resolution of 1 km was obtained from the Cold and Arid Regions Sciences Data Center.47,48 The land cover map was generated by combining multisource land cover/land use classification maps, including a 1:1,000,000 scale vegetation map, a 1:100,000 scale land use map for the year 2000, a 1:1,000,000 scale swamp-wetland map, a glacier map, and a MODIS land cover map for China in 2001.48 The dominant land cover types in winter in these regions were forest and bare soil. We aggregated the land cover classes to just two: (a) snow and forest and (b) snow and bare soil. The regional atmospheric forcing data were obtained from the Cold and Arid Regions Sciences Data Center.49–51 Data with a spatial resolution of 5 km were downscaled to 1 km by applying a statistical downscaling approach to hourly downward longwave and shortwave radiation, air temperature, air pressure, and specific humidity.52 Wind speed and precipitation data were downscaled to 1 km using the bilinear resampling method. Two methods to estimate snow sublimation were evaluated: the widely used BA method and the P–M combination equation adapted to snow cover. The two methods were applied assuming the air near the surface to be vapor saturated, and the results were compared with the snow sublimation measured by the EC systems at two experimental stations in the upper reaches of the Heihe River Basin in China.
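The empirical NDSI–FSC relationship behind MOD10A1 mentioned above is, in the widely cited Salomonson and Appel (2004) form, FSC = −0.01 + 1.45 · NDSI. The sketch below uses those published coefficients and illustrative band reflectances; treat both as assumptions external to this paper.

```python
# Sketch of the empirical NDSI -> FSC relation behind MOD10A1, using the
# Salomonson & Appel (2004) coefficients (FSC = -0.01 + 1.45 * NDSI).
# Coefficients and reflectances are assumptions, not values from this paper.
def ndsi(green, swir):
    """Normalized difference snow index from green and SWIR reflectances."""
    return (green - swir) / (green + swir)

def fsc_from_ndsi(n):
    """Fractional snow cover, clipped to [0, 1]."""
    return min(1.0, max(0.0, -0.01 + 1.45 * n))

n = ndsi(green=0.7, swir=0.1)   # illustrative band reflectances
print(f"NDSI = {n:.2f}, FSC = {fsc_from_ndsi(n):.2f}")
```

For strongly snow-covered pixels the linear fit exceeds 1 and is clipped, which is why fully snow-covered terrain saturates at FSC = 1.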
The theoretical background of these two methods is described in the following sections. Penman–Monteith Combination Equation The Penman53 equation was first derived to estimate evaporation from open water and water-saturated surfaces, i.e., surfaces in equilibrium with vapor-saturated air. Later, this equation was extended by Monteith33 to nonsaturated surfaces, i.e., to actual evaporation. The combination equation is obtained by rewriting the surface energy balance equation using the Clausius–Clapeyron equation for (liquid and frozen) water–vapor equilibrium. In Penman and Monteith a liquid water–vapor equilibrium was assumed, while in our case the equilibrium is frozen water (ice)–vapor, i.e., we need to use the appropriate dependence of the saturated water vapor pressure on temperature. The P–M combination equation was applied in the form of Ref. 33 for full snow cover and of Ref. 31 for FSC. Rn (W m−2) is the net radiation flux, G (W m−2) is the snow subsurface heat flux due to the energy transfer in the snowpack, and was estimated using a linear relationship between G and Rn obtained from AWS and EC observations. ρ (kg m−3) is the air density, cp (J kg−1 K−1) is the specific heat capacity of air, es (Pa) is the saturation vapor pressure over ice at the air temperature Ta (in °C), ea (Pa) is the actual water vapor pressure, ra (s m−1) is the aerodynamic resistance, and γ (Pa K−1) is the psychrometric constant. Equation (2) is a particular form of the combination equation. Menenti54 derived a general equation and showed that both the Monteith and Penman equations can be obtained as limiting conditions of this general equation. Both the Monteith and the Penman equations apply to a case where the liquid (frozen) water–vapor phase transition occurs at a surface at which air is vapor saturated at its temperature.
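The explicit form of Eq. (2) was lost in this copy. Given the saturated-surface assumption stated above (i.e., zero surface resistance), the standard combination equation it presumably takes is:

```latex
% Presumable form of the P-M combination equation for a vapor-saturated
% (snow) surface, i.e., with zero surface resistance:
\lambda_s E \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho\, c_p\, \dfrac{e_s(T_a) - e_a}{r_a}}{\Delta + \gamma}
```

where Δ (Pa K−1) is the slope of the saturation vapor pressure curve over ice evaluated at the air temperature, and the remaining symbols are as defined in the surrounding text.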
More in general, the liquid (frozen) water–vapor phase transition occurs at locations underneath the surface of the evaporating (sublimating) body, such as soil, stomata, or the snowpack, particularly when meltwater may be present within the snowpack. Our assumptions, i.e., that the sublimation occurs at the surface of the snowpack and that air in the vicinity of the surface is vapor saturated at the temperature of the snowpack surface, may lead to differences between sublimation estimated with Eq. (2) and the total latent heat flux measured by an eddy covariance device. The latter may include contributions due to subsurface sublimation and evaporation of meltwater. The aerodynamic resistance is calculated from the flux–profile relationships,30 where k is the von Karman constant (0.4), u (m s−1) is the wind speed at height z (m), and the dimensionless flux–profile correction for a stable or unstable boundary layer is taken from Refs. 28 and 30. The MOD11A1 land surface radiometric temperature retrieved from MODIS radiometric data is an average applying to an entire pixel. We assumed that in the case of mixed pixels, e.g., composed of snow and other land cover types such as bare soil or vegetation, the fractional cover of snow FSC is the MODIS data product, while the fractional abundance () is assigned to the land cover type given by a land cover map for the entire pixel. To retrieve the surface temperature of the snow fractional area, we applied the method proposed by Sun et al.55 The component temperatures of a mixed pixel are assumed to include just two end-members, in our case either snow and vegetation (forest) or snow and bare soil.
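A runnable sketch of the P–M sublimation estimate discussed above: a neutral-stability log-law aerodynamic resistance (stability corrections omitted) combined with the combination equation for a saturated ice surface. The Magnus ice formula, roughness length, psychrometric constant, and all input values are illustrative assumptions; only the G = 0.575·Rn ratio echoes the regression coefficient reported later in the paper.

```python
import math

# P-M sublimation sketch: neutral log-law aerodynamic resistance plus the
# combination equation with zero surface resistance. All constants and
# meteorological values are illustrative assumptions.
def esat_ice(T_c):
    """Saturation vapor pressure over ice (Pa), Magnus form; T_c in deg C."""
    return 611.2 * math.exp(22.46 * T_c / (272.62 + T_c))

def aero_resistance(u, z=3.0, z0=1e-3, k=0.4):
    """Neutral-stability aerodynamic resistance (s/m); stability terms omitted."""
    return math.log(z / z0) ** 2 / (k ** 2 * u)

def pm_sublimation(Rn, G, T_a, rh, u, rho=1.0, cp=1004.0, gamma=40.0):
    """Latent heat flux (W/m^2) from the combination equation, r_s = 0.
    gamma is an assumed high-altitude psychrometric constant (Pa/K)."""
    e_s = esat_ice(T_a)
    e_a = rh * e_s
    # slope of the ice saturation curve by centered finite difference (Pa/K)
    delta = esat_ice(T_a + 0.5) - esat_ice(T_a - 0.5)
    r_a = aero_resistance(u)
    return (delta * (Rn - G) + rho * cp * (e_s - e_a) / r_a) / (delta + gamma)

LE = pm_sublimation(Rn=150.0, G=0.575 * 150.0, T_a=-5.0, rh=0.45, u=4.5)
print(f"LE ~= {LE:.1f} W/m^2")
```

With these inputs the estimate lands in the tens of W/m², the right order for daytime sublimation over alpine snow.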
The end-member radiometric temperatures were estimated using MODIS land surface temperature and the spectral emissivities in MODIS bands 31 and 32. Net radiation flux Rn is the principal driver of the snow sublimation and was expressed following Refs. 56–58. In this work, a snowfall exceeding 3-mm SWE is defined as a “new snow” event, and the time between two consecutive “new snow” events is counted as the length of elapsed time after the first snowfall. If the snowfall from the forcing data exceeds this threshold (SWE), then the snow albedo is reset to the maximum value of 0.85. Bulk Aerodynamic Model For full snow cover, the BA flux is computed from single-level measurements [Eq. (13)]. Metrics for Accuracy Assessment We applied multiple metrics to evaluate the accuracy of our estimated sublimation. The mean relative error (MRE) describes the ratio between the deviation of the estimate from the observation and the observed value. The root mean square error (RMSE) measures the absolute difference between estimation and observation. The coefficient of determination is a measure of the consistency of estimated with measured values, e.g., in time. Results and Discussion At Dadongshu station, there were snowfall events from October 10, 2014, as shown in Fig. 3, and the total snowfall was 41.0 mm (SWE). The maximum value occurred on October 10, 2014. After several snowfall events in October 2014, the snow depth on 1 November reached a maximum depth of 0.2 m, and then declined until January 31, 2015. On January 31, 2015, the soil surface was exposed and LE measurements by EC included soil evaporation. We therefore chose the period from November 1, 2014, to January 31, 2015, because there was enough snow on the ground during this period and the contribution of soil evaporation to the LE measured by our EC device could be neglected. Meteorological conditions from November 1, 2014, to January 31, 2015, are shown in Fig. 4.
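The formulas for the three accuracy metrics named above were lost in this copy. The conventional definitions, which match the verbal descriptions in the text, can be sketched as follows (note that some papers use the squared correlation for the coefficient of determination; the 1 − SSres/SStot form below is one common choice):

```python
import numpy as np

# Conventional definitions of the three metrics described in the text.
def mre(est, obs):
    """Mean relative error: mean of (estimate - observation) / observation."""
    return float(np.mean((est - obs) / obs))

def rmse(est, obs):
    """Root mean square error between estimates and observations."""
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def r2(est, obs):
    """Coefficient of determination, 1 - SS_res / SS_tot form."""
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

obs = np.array([10.0, 20.0, 30.0, 40.0])   # illustrative values
est = np.array([12.0, 18.0, 33.0, 39.0])
print(mre(est, obs), rmse(est, obs), r2(est, obs))
```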
The snow depth near the instrument station was 0.2 m and decreased to 0.12 m during the study period. The mean air temperature was , ranging from highs of to lows near . The mean wind speed for the study period was , with a maximum of and a minimum of nearly at some times. The mean relative humidity at the Dadongshu site was 45.2%, ranging from 9.1% to 87.2%. High relative humidity, which means a large amount of water vapor in the air, inhibits snow sublimation. Snow Subsurface Heat Flux The snow subsurface heat flux G was not measured, so we estimated it as the residual of the energy balance at the surface of the snowpack. This was done only at the Dadongshu site, while it was covered by a homogeneous snowpack during the snow season, for all available complete daytime (11:00 to 17:00) measurements between November 1, 2014, and January 31, 2015. Next, we determined a linear relationship between G and Rn, obtaining a regression coefficient of 0.575 (Fig. 5). This relationship [Eq. (19)] is further applied to estimate regional snow sublimation using satellite data. Validation of Estimated Snow Sublimation We first evaluated estimates of sublimation obtained with both Eqs. (2) and (13) using in situ measurements at the Dadongshu site of all the energy balance components, i.e., net radiation, latent heat flux, and sensible heat flux, for the period with complete snow cover (). The coefficients of determination obtained with the P–M equation [Eq. (2)] and the BA parameterization [BA, Eq. (13)] were 0.65 and 0.54, respectively (Fig. 6), while the RMSE was 10.48 and , respectively. The MRE for P–M was 28.1%, while it was 1.12% for BA. The relatively small MRE of BA is due to the offsetting effect of the positive and negative biases. It is clear that the P–M equation overestimates the sublimation of snow more than the BA approach (Fig. 7). Overall, the evaluation against in situ measurements showed that the performance of both P–M and BA is satisfactory, especially taking into account the low RMSE values.
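A single regression coefficient like the 0.575 above can be obtained by least squares through the origin, assuming the fitted form is G = a · Rn. The data below are synthetic; only the method mirrors the procedure described in the text.

```python
import numpy as np

# Least squares through the origin for a fit of the assumed form G = a * Rn.
# Synthetic data; the 0.575 slope is only used to generate the example.
rng = np.random.default_rng(1)
Rn = rng.uniform(50.0, 250.0, 200)            # daytime net radiation (W/m^2)
G = 0.575 * Rn + rng.normal(0.0, 10.0, 200)   # synthetic subsurface heat flux

a = float(np.sum(Rn * G) / np.sum(Rn * Rn))   # closed-form through-origin slope
print(f"fitted coefficient a = {a:.3f}")
```

The closed-form slope Σ(Rn·G)/Σ(Rn²) recovers the generating coefficient to within the noise level.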
An overall impression of the accuracy of both P–M and BA estimated snow sublimation is obtained by considering the entire time series of observations (Fig. 8). The mean daytime snow sublimation estimated with P–M was , while it was with BA, against with the EC measurements. The sensitivity of estimated sublimation to input data and parameters was evaluated to identify the most influential ones and understand how to deal with data of different accuracies. The sensitivity of estimated snow sublimation to net shortwave radiation, snow surface temperature, air temperature, wind speed, and relative humidity was evaluated. The sensitivity analysis (SA) was performed on one variable at a time by assuming a range between and of error for each input variable, while keeping the others constant.59 A nondimensional sensitivity coefficient was calculated as The reference value of each variable was the mean value between 11:00 and 17:00 during the period November 1, 2014, till January 31, 2015, at the Dadongshu site. The reference values of both input variables and estimated sublimation are given in Table 2. The sensitivity results are shown in Fig. 9. The SA indicates that the wind speed is the most influential variable when using either P–M or BA. The P–M equation is less sensitive, i.e., , to wind speed, however, than BA, i.e., . This is a clear advantage of the P–M equation for large area estimates because the regional wind field is less accurate than in situ measurements.52 Another influential variable is relative humidity with and for BA and P–M respectively. Relative humidity has a negative impact on LE for both methods. The influence of net shortwave radiation on P–M estimates of sublimation is comparable to relative humidity, i.e., . Both approaches showed the least sensitivity to air temperature and surface temperature. 
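The one-at-a-time procedure described above can be sketched with a nondimensional coefficient SC = (ΔLE/LE)/(Δx/x): perturb a single input by a fixed fraction, hold the others constant, and normalize the response. The model below is a deliberately simple toy (flux increasing with wind, decreasing with humidity), not the paper's equations; only the SA procedure is illustrated.

```python
# One-at-a-time sensitivity sketch: SC = (dLE / LE) / (dx / x).
# toy_le is an illustrative stand-in for the flux model, not the paper's.
def toy_le(u, rh):
    """Toy flux: grows with wind speed, shrinks with relative humidity."""
    return 50.0 * u * (1.0 - rh)

def sensitivity(f, args, name, frac=0.1):
    """Nondimensional sensitivity of f to the input `name`, perturbed by `frac`."""
    base = f(**args)
    pert = dict(args)
    pert[name] = args[name] * (1.0 + frac)
    return ((f(**pert) - base) / base) / frac

args = {"u": 4.5, "rh": 0.45}
print("SC(u)  =", round(sensitivity(toy_le, args, "u"), 3))
print("SC(rh) =", round(sensitivity(toy_le, args, "rh"), 3))
```

In this toy the wind-speed coefficient is exactly 1 (linear dependence) and the humidity coefficient is negative, mirroring the signs reported in the text.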
These results agree with former studies on the sensitivity of snow sublimation calculations to errors in the input variables, which highlighted that snow sublimation is highly sensitive to errors in wind speed and relative humidity, while not sensitive to errors in temperature.31,60,61 Reference values of input variables and the SC values for the P–M equation (SC_P) and the BA parameterization (SC_B):

| | Wind speed | Relative humidity | Air temperature | Surface temperature | Net shortwave radiation |
| --- | --- | --- | --- | --- | --- |
| Reference values | 4.52 (m s−1) | 44.72 (%) | (°C) | (°C) | 101.67 (W m−2) |

Regional Estimates of Snow Sublimation with Remote Sensing Data The evaluation of the P–M and BA methods with in situ measurements shows that the P–M estimation is more accurate. Since it is derived from the surface energy balance equation, the P–M equation is also most suitable for regional analyses based on the multispectral radiometric data collected by spaceborne imaging spectroradiometers. This notwithstanding, we applied both methods in our regional case study. Regional snow sublimation was estimated for the entire upper reaches of the Heihe River Basin by applying retrievals from satellite data, i.e., subpixel snow surface temperature and FSC, in combination with atmospheric forcing data. The regional analyses were carried out under clear sky conditions within the period from November 1, 2014, to January 31, 2015. The snow component surface temperature was obtained using the method described in Sec. 3.1. The map of estimated sublimation obtained on a fully clear day (i.e., November 8, 2014) over the entire upper reaches of the Heihe River Basin was chosen to demonstrate our method. The FSC was rather high and the subpixel snow surface temperature was in the range of 260 to 273 K (Fig. 10). The spatial distribution of snow sublimation estimated by the P–M model and the BA model is shown in Fig. 10.
The estimated sublimation over the study area ranges between 0 and , with the highest frequency of values around 0 to , when applying the P–M equation [Figs. 11(a) and 11(b)]. The BA estimates were similar, although the highest frequency values were slightly lower, i.e., around 0 to [Figs. 11(c) and 11(d)]. The regional estimates of snow sublimation under clear sky conditions were evaluated against the in situ EC measurements at the Dadongshu and Dashalong sites. In the period November 1, 2014, to January 31, 2015, there were 18 usable remotely sensed measurements at Dadongshu and 17 measurements at Dashalong. The P–M estimated sublimation at Dashalong was in good agreement with the measurements (Fig. 11), with , , and . The performance of the P–M method was poorer at Dadongshu (Fig. 12), with , , and . The BA estimates of sublimation were poorer at both Dashalong and Dadongshu (Fig. 13), with , , and at Dashalong and , , and at Dadongshu. Overall the P–M equation performed better than BA, while both performed better at Dashalong than at Dadongshu. It is important to note here that the EC measurements are essentially local-scale measurements representing only very small spatial scales when compared with the 1-km pixel size of the snow sublimation estimated with satellite data. These observations were, however, considered the best option available for evaluating the model performance because we lacked consistent and long-term snow sublimation data measured by other methods.
The daily mean value of snow sublimation measured by EC from November 1, 2014, to January 31, 2015, at Dadongshu was , compared with in the Rocky Mountains of Colorado by Molotch et al.62 and in Mongolia by Zhang et al.10 The magnitude of sublimation has been shown to vary widely across different land surface environments and elevations.5,10,11,62 The different values of snow sublimation may be explained by environmental differences among study sites.5 Our estimates of daily sublimation are well within the bounds of previous research. Both the P–M and the BA methods performed well at the Dadongshu site. The lower sensitivity to wind speed gives the P–M method a major advantage when applied at regional scales, considering the scarce knowledge of the large-area wind field at the required spatial resolution. Despite the ability of the P–M method to capture the spatial and temporal pattern of the snow sublimation distribution, there are multiple uncertainties to deal with. First, we did not account for sublimation from blowing snow. Pomeroy and Essery63 estimated that blowing snow sublimation accounts for as much as 10% to 50% of snowfall in North American Prairie and Arctic environments. The occurrence of blowing snow in the upper reaches of the Heihe River Basin most likely has significant impacts on the estimates of sublimation during the snow season. The actual amount of sublimation in this catchment could reasonably be higher than what has been reported here. The second source of error is the accuracy of the spatial input data. These errors can result from both inaccurate forcing data and remote sensing data. Data on hourly 2-m air temperature, 2-m relative humidity, and snow surface temperature were in good agreement with tower measurements, while estimates of 10-m wind speed correlated poorly with in situ measurements (Fig. 14).
The latter is a major challenge in regional analyses of sublimation with either method, given the poor accuracy of the large-area wind field.64 Errors in relative humidity, air temperature, and snow surface temperature are smaller but not negligible. Snow sublimation plays a significant role in hydrological processes in cold regions and seasons, and it varies with atmospheric and surface conditions. In this study, we developed a method to estimate snow surface sublimation with satellite observations and validated the results with EC measurements of latent heat flux. We applied the P–M equation and the BA parameterization, which use satellite data products to estimate regional snow sublimation at a 1-km spatial resolution. Model estimates have been compared with ground EC measurements. The P–M estimates were in good agreement with measurements at two sites (Dashalong and Dadongshu), with and , respectively. The BA estimates were less accurate. The encouraging performance of the P–M method supports extending this approach to large-area estimates of snow sublimation based on multiple satellite data. Remote sensing data at moderate spatial resolution enable us to estimate and map snow sublimation over large areas. The applicability and accuracy of the P–M method to estimate snow sublimation rely on the availability and accuracy of a suite of remotely sensed input data products, i.e., snow surface temperature, FSC, and atmospheric forcing data. Remote sensing also has the major advantage of providing snow information with large-scale monitoring capability. However, there are still some challenges. For example, clouds greatly influence the quality of remote sensing data such as snow surface temperature and FSC. The present study is therefore limited to days when clear-sky remote sensing data are available. Higher quality and accuracy of these input data are thus vital.
This work was supported by the National Key Basic Research Program of China (Grant No. 2015CB953702), the National Natural Science Foundation of China (Grant Nos. 91425303 and 91325203), and the SAFEA Long-Term Projects of the 1000 Talent Plan for High-Level Foreign Experts (Grant No. WQ20141100224). We gratefully acknowledge the Cold and Arid Regions Sciences Data Center at Lanzhou for providing EC data and atmospheric forcing data. We are grateful to the anonymous reviewer for the useful suggestions. Ning Wang is a PhD candidate at the University of Chinese Academy of Sciences, China. He received his master’s degree in GIS from Guilin University of Technology in 2012. His research interests include remote sensing of snow and image processing. Li Jia received her PhD in environmental science from Wageningen University, the Netherlands, in 2004. She is a professor at the State Key Laboratory of Remote Sensing Science, jointly sponsored by the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences and Beijing Normal University. Her research interests are the study of earth observation and its applications in hydrometeorology, water resources, agriculture, and climate change. Chaolei Zheng received his PhD in environment and energy systems from Shizuoka University, Japan, in 2013. Currently, he is an assistant professor at the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences. His research focuses on earth observations of the water cycle, climate change, ecohydrology, and remote sensing. Massimo Menenti received his PhD from Wageningen Agricultural University, The Netherlands, in 1984. He is a full professor at Delft University of Technology, The Netherlands. He is also a professor at the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences. His research interests focus on the use of earth observation to study the hydrology and hydrometeorology of the global land surface.
<urn:uuid:575c5022-27ec-4183-be00-d165ce7dd661>
3.71875
6,845
Academic Writing
Science & Tech.
40.376967
95,619,092
When you're first starting out with Android, understanding the different layouts, and especially their attributes, can be tricky. Let's look at the most common and confusing ones here.

One of the most common is the layout_ prefix; it appears in layout_width/height, layout_margin, and many more. What it means is that the attribute is meant for the parent layout, which uses it to size and position this view. Attributes without layout_ are for the view itself, like padding. The layout_ prefix itself doesn't do anything special; Android could have skipped it and just called it android:width. If you want to learn about creating custom views and attributes for them, check this post. Does it make any difference when creating layouts? I don't think so, but it's good to know what it means.

fitsSystemWindows is another very common one; you see it everywhere with CoordinatorLayout. There are many attributes like this that you just use: if they work, great, no need to understand how. In most cases that's fine, since all the templates set this attribute where needed. The default seems to be true for views and false for layouts. When set to true, your view goes below the status bar; by below I mean there's the status bar and then there's your view, with no overlap. The easiest way to see it is to test with the NavigationDrawer template. For example, if you want a translucent status/navigation bar with content drawn underneath (so the bars overlay the content), you need to set fitsSystemWindows to false. Anyway, it's better to read about this right from Google here.

Another very common thing is to see attributes on the wrong layouts, for example orientation on something other than LinearLayout, or layout_gravity in RelativeLayout. It doesn't affect anything, and lint will actually tell you that the attribute is useless there.

P.S. there's also a nice set of tutorials about ConstraintLayout that you can check here.

Formerly an Android developer, lately picked up some Flutter. This blog is everything that I find exciting about Android and Flutter development.
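To make the layout_ distinction concrete, here's a minimal hypothetical layout (names and values are just for illustration). The layout_* attributes on the TextView are read by its parent FrameLayout, padding is applied by the TextView itself, and fitsSystemWindows tells the root view to inset itself below the status bar:

```xml
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true">

    <!-- layout_width/layout_height/layout_gravity are consumed by the
         parent FrameLayout; padding is handled by the TextView itself. -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:padding="16dp"
        android:text="Hello" />
</FrameLayout>
```

Swap the parent for a LinearLayout and layout_gravity would be interpreted differently (or ignored), which is exactly the point: the parent decides what layout_* means.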
Stay tuned and hope to see you again!
<urn:uuid:682537a7-30c2-455f-9fcd-f5896d263823>
2.53125
450
Personal Blog
Software Dev.
58.739138
95,619,094
Scientists at the Swiss Nanoscience Institute at the University of Basel have used resonators made from single-crystalline diamond to develop a novel device in which a quantum system is integrated into a mechanical oscillating system. For the first time, the researchers were able to show that this mechanical system can be used to coherently manipulate an electron spin embedded in the resonator, without external antennas or complex microelectronic structures. The results of this experimental study will be published in Nature Physics.

In previous publications, the research team led by Georg H. Endress Professor Patrick Maletinsky described how resonators made from single-crystalline diamond with individually embedded electrons are highly suited to addressing the spin of these electrons. These diamond resonators were modified so that, at several points in the crystal lattice, a carbon atom was replaced by a nitrogen atom with a vacant site directly adjacent. In these "nitrogen-vacancy centers," individual electrons are trapped. Their "spin," or intrinsic angular momentum, is examined in this research.

When the resonator begins to oscillate, strain develops in the diamond's crystal structure. This, in turn, influences the spin of the electrons, which can take one of two possible directions ("up" or "down") when measured. The direction of the spin can be detected with the aid of fluorescence spectroscopy.

Extremely fast spin oscillation

In this latest publication, the scientists have shaken the resonators in a way that allows them to induce a coherent oscillation of the coupled spin for the first time. This means that the spin of the electrons switches from up to down and vice versa in a controlled and rapid rhythm, and that the scientists can control the spin state at any time. This spin oscillation is fast compared with the frequency of the resonator, which also protects the spin against harmful decoherence mechanisms.
It is conceivable that this diamond resonator could be applied to sensors – potentially in a highly sensitive way – because the oscillation of the resonator can be recorded via the altered spin. These new findings also allow the spin to be coherently rotated over a very long period of close to 100 microseconds, making the measurement more precise. Nitrogen-vacancy centers could potentially also be used to develop a quantum computer. In this case, the quick manipulation of its quantum states demonstrated in this work would be a decisive advantage.

Arne Barfuss, Jean Teissier, Elke Neu, Andreas Nunnenkamp, Patrick Maletinsky
Strong mechanical driving of a single electron spin
Nature Physics (2015), doi: 10.1038/nphys3411

Professor Patrick Maletinsky, University of Basel / Swiss Nanoscience Institute, tel. +41 61 267 37 63, email: firstname.lastname@example.org

http://dx.doi.org/10.1038/nphys3411 - Abstract

Reto Caluori | Universität Basel
<urn:uuid:3b1719c9-4211-4fbb-bc39-5b7c835e2e45>
3.390625
1,204
Content Listing
Science & Tech.
35.604504
95,619,101
Formal Theory of Basic COSY In every expression as described above, the semicolon specifies sequential occurrences of the events named or subexpressions, and the comma specifies a mutually exclusive occurrence of one of the events named or subexpressions. The comma binds more strongly than the semicolon, so that the expression "a;b, c" means "first event a must occur, after which exclusively either event b or event c must occur". An expression may be enclosed in conventional parentheses with a Kleene star appended, as for instance "(d, e)*", which means that the enclosed specification applies zero or more times. In other words, an expression between path and end may be understood as an ordinary regular expression. The only difference is that "∪" is replaced by ",", concatenation is replaced by ";", and mutually exclusive choice binds more strongly than concatenation. Thus for instance "a; b, c" is equivalent to "a(b ∪ c)" in the traditional notation for regular expressions. Moreover, by definition, the parentheses path and end correspond to "(" and ")*" respectively, so that a single path specifies repeated (or cyclic) sequences of event occurrences.

Keywords: Partial Order, Formal Theory, Regular Expression, Vector Sequence, Firing Sequence
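The correspondence with ordinary regular expressions can be made concrete with a small sketch. The translator below handles only flat expressions without nested parentheses, and the function name is invented for illustration: it splits on ";" (the weaker-binding operator) first, then groups ","-alternatives, so "a;b,c" becomes a(b|c), and a whole path corresponds to (…)*.

```python
import re

def cosy_to_regex(expr):
    """Translate a flat basic-COSY expression to standard regex syntax.
    ',' (exclusive choice) binds more strongly than ';' (sequence), so we
    split on ';' at the top level, then turn each ','-group into an
    alternation. Nested parentheses are not handled in this sketch."""
    parts = [p.strip() for p in expr.split(';')]
    out = []
    for part in parts:
        alts = [a.strip() for a in part.split(',')]
        out.append(alts[0] if len(alts) == 1 else '(' + '|'.join(alts) + ')')
    return ''.join(out)

# "path a;b,c end" corresponds to (a(b|c))* in ordinary regex notation
path_regex = '(' + cosy_to_regex('a;b,c') + ')*'
```

Matching against `path_regex` with `re.fullmatch` accepts cyclic firing sequences such as "abac" (two cycles: a then b, a then c) and rejects sequences like "ba" that violate the path.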
<urn:uuid:996c84ee-bb3b-4dbe-8243-ffaf32342784>
3.34375
303
Truncated
Science & Tech.
23.731576
95,619,127
|RHODOPHYTA : NEMALIALES : Liagoraceae||RED ALGAE| Description: Gametangial plants are attached by a discoid holdfast, growing to 25 cm in height, well branched and mucilaginous at least when young. Tetrasporangial plants are known only in culture. Habitat: Epilithic, on Polyides rotundus or on gravel, shells etc. Sublittoral to 30 m. Distribution: Generally south-western in the British Isles, but recorded from Scotland including the Shetlands. Europe: Mediterranean, Atlantic coasts of France and Denmark. Similar Species: Nemalion helminthoides, which has relatively few branches, and Helminthocladia calvadosii. Key Identification Features: Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK. WoRMS: Species record : World Register of Marine Species. |Morton, O. & Picton, B.E. (2016). Helminthora divaricata (C Agardh) J Agardh. [In] Encyclopedia of Marine Life of Britain and Ireland. | http://www.habitas.org.uk/marinelife/species.asp?item=ZM1660 Accessed on 2018-07-19 |Copyright © National Museums of Northern Ireland, 2002-2015|
<urn:uuid:397ea6be-52b8-46ab-9f42-09753b3dfecc>
2.875
301
Knowledge Article
Science & Tech.
32.2785
95,619,154
Red flour beetle

The red flour beetle (Tribolium castaneum) is a species of beetle in the family Tenebrionidae, the darkling beetles. It is a worldwide pest of stored products, particularly food grains, and a model organism for ethological and food safety research. The red flour beetle attacks stored grain and other food products including flour, cereals, pasta, biscuits, beans, and nuts, causing loss and damage. The United Nations, in a recent post-harvest compendium, estimated that Tribolium castaneum and Tribolium confusum, the confused flour beetle, are "the two most common secondary pests of all plant commodities in store throughout the world." The red flour beetle is of Indo-Australian origin and less able to survive outdoors than the closely related species Tribolium confusum. It has, as a consequence, a more southern distribution, though both species are worldwide in heated environments. The adult is long-lived, sometimes living more than three years. Although previously regarded as a relatively sedentary insect, it has been shown in molecular and ecological research to disperse considerable distances by flight. This species closely resembles the confused flour beetle, but its antennae end abruptly in a three-segmented club. Female red flour beetles are polyandrous in mating behavior. Within a single copulation period, a single female will mate with multiple different males. Female red flour beetles engage in polyandrous mating behavior in order to increase their fertility assurance. By mating with an increased number of males, female beetles obtain a greater amount of sperm. Obtaining a greater amount of sperm is especially important since many sexually active male red flour beetles are non-virgins and may be sperm-depleted.
It is important to note that red flour beetles engage in polyandry to obtain a greater amount of sperm from males, not to increase the likelihood of finding genetically compatible sperm.

Patterns of reproductive fitness and variation in the red flour beetle

Polyandry and multiple mating in the red flour beetle

Red flour beetles engage in polyandrous mating behavior. Polyandry specifically refers to when a female mates with multiple males. For females, polyandry can serve as fertility assurance, thereby increasing the number of progeny.

Potential fitness benefits of the polyandrous mating behavior

Multiple mating events can ensure that females obtain a greater net amount of sperm, resulting in an increased likelihood of successful fertilization. In nature, repeated matings could result in males that have a low sperm count. Due to the males' low sperm count, a female may need to mate with several males before being successfully inseminated. Although multiple mating events may result in an increased likelihood of finding genetically compatible sperm, genetic compatibility cannot always be considered a major fitness advantage of polyandrous behavior. The increased viability of embryos, due to increased genetic compatibility, did not significantly increase the number of adult beetles over time, and therefore did not play a significant role in the fitness of the overall population. However, increased genetic compatibility could increase the genetic diversity of the population, which may be useful in various environments.
High genetic diversity within a population can lead to high phenotypic variation, which can subsequently enable some variants to better survive and reproduce given a sudden environmental change.

Potential fitness detriments of polyandrous behavior

Male competition for access to females

The availability of resources and population size can greatly affect how many matings each individual participates in. Increased population size within a given area with fixed resources can limit how many offspring can survive. Therefore, males must often compete with other males to be the last male that mates with the female, to increase their fertilization rate. By being the last male to mate with a female, it is likely that his ejaculate removed the ejaculate of previous males, increasing the chances that his sperm fertilizes the female. In fact, in areas with limited resources, higher rates of cannibalism among competitor males can result in an overall decrease in the fitness of the population, since there is a net decrease in offspring production and survival.

Reduced offspring fitness

Polyandrous behavior may not always result in the propagation of adaptive genes. In red flour beetles, the ability of a male to attract females, through pheromones, is genetically based. Males vary in the ability to attract females. However, offspring fitness is not related to the ability of the males to attract females. In other words, just because a male reproduced more often due to an increased ability to attract females does not necessarily mean the offspring have inherited traits that result in increased fitness.

Variation in polyandrous behavior and mate choice

Variation in mating behavior between different populations

Females of different geographic regions, and subsequently different genetic backgrounds, often show great variation in mating behavior. Certain strains of females avoid multiple mating events while other strains engage in higher degrees of polyandry.
This variation suggests that polyandry can be advantageous in some populations but not in others.

Mate choice in female beetles

Female beetles vary in which males they choose to copulate with. Moreover, female beetles can specifically choose which male's sperm is used for fertilization through cryptic choice: females that have multiple sperm receptacles can store sperm from different males and later choose which sperm is used for fertilization.

Mate choice in male beetles

Male beetles also vary in the females they choose to mate with. Males are extremely selective in their mate choice and prefer to mate with mature, virgin females. If a male mates with a virgin female, his sperm has an extremely high chance of fertilizing her if no other male mates with her. Males are able to differentiate between virgin and non-virgin females through scent; the wax-like secretions of competitor males can be found on the reproductive glands of non-virgin females, but not on virgin females. Males that possess a greater number of odor receptors are better able to choose which females to reproduce with and, subsequently, increase their fitness. Some males possess characteristics better suited to detecting the maturity and reproductive status of a female, and as such will preferentially breed with only those females that will produce the most offspring. Likewise, males that deposit stronger scents have an indirect fitness advantage, because their odor deters other potential mates from an already inseminated female.

Polygamy in red flour beetles

Polygamy in red flour beetles is a behavior common to both males and females of this species. Polyandry is thus polygamy in the female members of a population, as discussed in the section above. On the other hand, polygyny refers to polygamy practiced by males in a population.
Polygamy in populations that lack genetic diversity

In red flour beetles, females that engage in polygamous behavior produce more offspring than those that are less polygamous. Polygamy is mostly seen in populations that lack genetic diversity. Polygamy in less genetically diverse populations is a means of avoiding fertilization between beetles that are closely related, since they may be genetically incompatible. The more partners a male or female has, the higher the chance that at least one of the matings is with an unrelated partner, and the greater the genetic diversity of the offspring. In this way, genetic incompatibility is reduced and diversity is increased in a population. For this reason, females copulate with more males when genetic diversity is low, in order to attain fertilization success and increase the fitness of their subsequent offspring. In some studies, however, it has been noted that fertilization can still occur when related beetles mate. Nonetheless, significantly fewer offspring are produced when inbred beetles mate than when matings are between out-bred partners. Successful fertilization observed in a small portion of research on related beetles has led some biologists to claim that there may be no inbreeding depression in red flour beetles. Even though fertilization is successful, a lower total number of offspring is produced, which can be argued to be a form of inbreeding depression since it lowers reproductive fitness.

Male and female recognition of relatives

During mating, red flour beetles are known to engage in polygamous behavior. Male flour beetles are known to recognize their relatives, while females lack this capability. This lack of kin recognition leads females to mate with any male within the population. Female red flour beetles are also known to store sperm after mating.
More sperm is stored from the first mating, which leads to less sperm being stored in subsequent matings. However, the amount of stored sperm does not stop the last male mate from fertilizing the egg. This is because, with each mating, males can remove previously stored sperm, giving their own sperm an advantage in fertilizing the egg.

Polygyny and fertilization success in red flour beetles

In red flour beetles, males are known to engage in polygamous behavior. Research largely shows that male red flour beetles engage in polygamous behavior to avoid inbreeding depression, especially when there is competition from other males. There is higher fertilization success in out-bred males when they compete with inbred males to fertilize the same female. In polygamous beetles, the male that last fertilizes the female ends up having higher fertilization success. Polygamy can thus be seen as an evolutionary result, as males compete to be the last to fertilize the female's egg and contribute more to the next generation. Sperm precedence is thus a means of evolutionary competition through which males try to achieve greater reproductive success.
<urn:uuid:d12c84ea-b8c5-47b1-a061-a486ec9b2e86>
3.390625
3,041
Knowledge Article
Science & Tech.
42.00717
95,619,159
One company in Tucson, Arizona, earned two grants from NASA to continue its radiation study. According to the Phoenix Business Journal, the company, World View, which "manufactures high-tech balloons that operate high in the earth's atmosphere," received the two grants to take the next step in improving the safety of flight crews from radiation. The project is NASA's Automated Radiation Measurements for Aerospace Safety–High Altitude (ARMAS-Hi) study, whose purpose is "to forecast radiation exposure to flight crews and passengers. Radiation in space can penetrate down to the 35,000- to 37,000-foot level where passenger aircraft fly." The Phoenix Business Journal said the study will last five to 10 years to collect all of the necessary data and to create its forecasting system.
<urn:uuid:46915407-cfb2-43f9-a296-c2a0c95c87e8>
2.640625
245
News Article
Science & Tech.
23.794762
95,619,186
July highlights: Earth's orbit around the sun isn't perfectly circular. It's oval in shape, so that on one date we are closest to the sun and six months later we are farthest from the sun. On Friday, we will be at aphelion, the farthest point from the sun. The sun's center will be 94,508,365 miles from the center of Earth. Perihelion, when we are closest to the sun, occurred back on January 3, when the Earth and the sun were 91,402,705 miles apart. I know, it's counterintuitive that we'd be closer to the sun when it's cold and farthest from the sun when it's hot. But it's just the reverse for the southern hemisphere.

A partial solar eclipse occurs on July 12, with an associated total lunar eclipse following on July 27. To see either, you'll need to travel to the Southern Hemisphere. We'll get none of either in Oklahoma.

Planet visibility report: For the first three weeks of July, both Venus and Mercury grace the western horizon after sunset. Mercury is never very far from the sun, so your best bet to see it is from a location with an unobstructed view of the horizon. Venus is higher and brighter, and you should have no trouble spotting it. Jupiter and Saturn are also up at sunset, but much farther west. Mars rises around 11 p.m. early in the month and hits the eastern horizon around 9 p.m. at month's end. At chart time, 10 p.m. on July 15, four of the five visible planets are up: Venus, Jupiter, Saturn and Mars. New moon occurs on July 12 with the partial solar eclipse, and full moon comes on July 27 with the lunar eclipse.

Wayne Harris-Wyrick is an Oklahoma astronomer and former director of the Kirkpatrick Planetarium at Science Museum Oklahoma. Questions or comments may be emailed to email@example.com.
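As a quick check on those two distances, the orbit's eccentricity follows from aphelion and perihelion as e = (r_a − r_p)/(r_a + r_p), and the column's figures land right on Earth's accepted value of about 0.0167. A tiny illustrative snippet (the function name is made up):

```python
def eccentricity(aphelion, perihelion):
    """Orbital eccentricity from aphelion and perihelion distances.
    The units cancel, so miles work as well as kilometers."""
    return (aphelion - perihelion) / (aphelion + perihelion)

# Center-to-center distances quoted in the column, in miles
e = eccentricity(94_508_365, 91_402_705)  # about 0.0167
```

That small eccentricity is why the 3-million-mile swing barely affects the seasons compared with axial tilt.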
<urn:uuid:d1522b3e-a087-4722-b5e0-f2a2aecbd5f9>
3.359375
430
Personal Blog
Science & Tech.
73.549354
95,619,198
Heavy rainfall recently caused flooding, landslides and power outages in some areas of Peru. NASA's Integrated Multi-satellitE Retrievals for GPM (IMERG) measured that rainfall by using a merged precipitation product from a constellation of satellites. GPM, the Global Precipitation Measurement mission, is a satellite co-managed by NASA and the Japan Aerospace Exploration Agency whose observations are used in NASA's IMERG data. GPM provides next-generation observations of rain and snow worldwide every three hours.

Extremely heavy rainfall was reported in northern Peru on February 26 and 27, 2016. Thousands were made homeless and at least two people were reportedly killed by the severe weather. The strong El Niño was partially blamed for the abnormally high rainfall in that area. NASA's IMERG data collected from February 23-29, 2016 were used to estimate rainfall totals over this area of South America. The highest rainfall total estimates for this period were over 700 mm (27.6 inches). These extreme rainfall totals were estimated east of the Andes in southeastern Peru and Bolivia.

The satellites used in IMERG include DMSPs from the U.S. Department of Defense, GCOM-W from the Japan Aerospace Exploration Agency (JAXA), Megha-Tropiques from the Centre National d'Études Spatiales (CNES) and Indian Space Research Organization (ISRO), the NOAA series from the National Oceanic and Atmospheric Administration (NOAA), Suomi-NPP from NOAA-NASA, and MetOps from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). All of the instruments (radiometers) onboard the constellation partners are intercalibrated with information from the GPM Core Observatory's GPM Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR). On March 3, Peru's National Meteorological and Hydrological Service said that rain was forecast to continue along the north coast.
The service said that in 10 hours the Lancones (Piura) station recorded a total of 4.3 inches (110 mm), while the station in the city of Tumbes recorded 2.4 inches (60 mm).
<urn:uuid:6f1d8fef-b10d-45c3-a611-a3ae3632fde7>
3.265625
500
Truncated
Science & Tech.
34.826266
95,619,210
The theory of relativity encompasses two theories, special relativity and general relativity. The concepts introduced by the theories are:
- Measurements of various quantities are relative to the velocities of observers.
- Spacetime: space and time should be considered together and in relation to each other.
- The speed of light is nonetheless invariant, the same for all observers.

Special relativity is the theory of the structure of spacetime. It is based upon two postulates:
1. The laws of physics are the same for all observers in uniform motion relative to one another (principle of relativity).
2. The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the source of light.

The theory agrees with experiment better than classical mechanics and has many consequences, such as:
- Relativity of simultaneity
- Time dilation
- Length contraction
- Mass–energy equivalence
- A finite maximum speed

The feature that defines special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations.

General relativity is a theory of gravitation. Its development began with the equivalence principle, which states that accelerated motion and being at rest in a gravitational field are physically identical. Some consequences of general relativity are:
- Gravitational time dilation
- Orbits precess in a way unexpected in Newton's theory of gravity
- Rays of light bend in the presence of a gravitational field
- Rotating masses "drag along" the spacetime around them; this is called "frame-dragging"
- The universe is expanding, and its far parts are receding faster than the speed of light

© BrainMass Inc. brainmass.com

General relativity is the theory of gravitation whose defining feature is the Einstein field equations.
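The Lorentz transformations mentioned above can be written down explicitly. For two frames in relative motion at speed $v$ along the $x$-axis, they read:

```latex
x' = \gamma \left( x - v t \right), \qquad
t' = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Time dilation and length contraction follow directly: a moving clock ticks slower by the factor $\gamma$, and a moving rod is contracted along its direction of motion by the same factor. In the limit $v \ll c$, $\gamma \to 1$ and the Galilean transformations $x' = x - vt$, $t' = t$ are recovered.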
<urn:uuid:8ef38508-d656-4f87-8431-446a6e2e13a5>
4.34375
406
Knowledge Article
Science & Tech.
24.292529
95,619,243
Travelling Salesman Uncorks Synthetic Biology Bottleneck News Jan 07, 2016 Researchers have created a computer program that will open a challenging field in synthetic biology to the entire world. In the past decade, billions of dollars have been spent on technology that can quickly and inexpensively read and write DNA to synthesize and manipulate polypeptides and proteins. That technology, however, stumbles when it encounters a repetitive genetic recipe. This includes many natural and synthetic materials used for a range of applications from biological adhesives to synthetic silk. Like someone struggling with an “impossible” jigsaw puzzle, synthesizers have trouble determining which genetic piece goes where when many of the building blocks look the same. Scientists from Duke University have removed this hurdle by developing a freely available computer program based on the “traveling salesman” mathematics problem. Synthetic biologists can now find the least-repetitive genetic code to build the molecule they want to study. The researchers say their program will allow those with limited resources or expertise to easily explore synthetic biomaterials that were once available to only a small fraction of the field. “Synthesizing and working with highly repetitive polypeptides is a very challenging and tedious process, which has long been a barrier to entering the field,” said Ashutosh Chilkoti, the Theo Pilkington Professor of Biomedical Engineering and chair of the biomedical engineering department at Duke. “But with the help of our new tool, what used to take researchers months of work can now be ordered online by anyone for about $100 and the genes received in a few weeks, making repetitive polypeptides much easier to study.” Every protein and polypeptide is based on the sequencing of two or more amino acids. The genetic recipe for an individual amino acid—called a codon—is three letters of DNA long. 
But nature has 61 codons that produce 20 amino acids, meaning there are multiple codons that yield a given amino acid. Because synthetic biologists can get the same amino acid from multiple codons, they can avoid troublesome DNA repeats by swapping in different codons that achieve the same effect. The challenge is finding the least repetitive genetic code that still makes the desired polypeptide or protein. “I always thought there was a potential solution, that there must be a way of mathematically figuring it out,” said Chilkoti. “I had offered this problem to graduate students before, but nobody wanted to tackle it because it requires a particular combination of high-level math, computer science and molecular biology. But Nicholas Tang was the right guy.” After studying the problem in detail, Nicholas Tang, a doctoral candidate in Chilkoti’s laboratory, discovered that the solution is a version of the “traveling salesman” mathematics problem. The classic question is: given a map with a set of cities to visit, what is the shortest route possible that hits every city exactly once before returning to the original city? After writing the algorithm, Tang put it to the test. He created a laundry list of 19 popular repetitive polypeptides that are currently being studied in laboratories around the world. After passing the codes through the program, he sent them for synthesis by commercial biotechnology outfits—a task that would be impossible for any one of the original codes. Without the help of commercial technology, researchers spend months building the DNA that cells use to produce the proteins being studied. It’s a tedious, repetitive task—not the most attractive prospect for a young graduate student. But if the new program worked, the process could be reduced to a few weeks of waiting for machines to deliver the goods instead. When Tang received his DNA sequences, each was introduced into living cells and, as hoped, each produced the desired polypeptide.
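Tang's actual algorithm maps codon choice onto a travelling-salesman formulation; as a much simpler illustration of why synonymous codons give room to avoid repeats, here is a toy greedy sketch. The codon subset, class name, and k-mer scoring are invented for illustration and are not taken from the paper.

```java
import java.util.*;

// Toy illustration of codon swapping: for each amino acid, pick the
// synonymous codon that introduces the fewest already-seen DNA k-mers.
// This greedy heuristic is NOT the published Duke algorithm; it only
// shows how synonymous codons create room to reduce repetition.
public class CodonSketch {
    // Small subset of the standard codon table, for illustration only.
    static final Map<Character, String[]> CODONS = Map.of(
        'G', new String[]{"GGT", "GGC", "GGA", "GGG"},
        'P', new String[]{"CCT", "CCC", "CCA", "CCG"},
        'V', new String[]{"GTT", "GTC", "GTA", "GTG"});

    static String leastRepetitiveDna(String protein, int k) {
        Set<String> seen = new HashSet<>();
        StringBuilder dna = new StringBuilder();
        for (char aa : protein.toCharArray()) {
            String best = null;
            int bestScore = Integer.MAX_VALUE;
            for (String codon : CODONS.get(aa)) {
                String cand = dna + codon;
                int score = 0;  // count new k-mers that were already seen
                for (int i = Math.max(0, dna.length() - k + 1);
                     i + k <= cand.length(); i++) {
                    if (seen.contains(cand.substring(i, i + k))) score++;
                }
                if (score < bestScore) { bestScore = score; best = codon; }
            }
            dna.append(best);
            for (int i = 0; i + k <= dna.length(); i++) {
                seen.add(dna.substring(i, i + k));
            }
        }
        return dna.toString();
    }

    static int repeatedKmers(String s, int k) {
        List<String> all = new ArrayList<>();
        for (int i = 0; i + k <= s.length(); i++) all.add(s.substring(i, i + k));
        return all.size() - new HashSet<>(all).size();
    }

    public static void main(String[] args) {
        String protein = "GPVGPVGPVGPV";       // a repetitive elastin-like toy sequence
        String greedy = leastRepetitiveDna(protein, 6);
        String naive = "GGTCCTGTT".repeat(4);  // always the same codon per amino acid
        // The greedy encoding typically repeats far fewer k-mers than the naive one.
        System.out.println(repeatedKmers(greedy, 6) + " vs " + repeatedKmers(naive, 6));
    }
}
```

A real synthesis-optimizing tool must also respect host codon-usage bias and avoid unwanted motifs, which is part of what makes the published problem hard.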
“He made 19 different polymers from the field in one shot,” said Chilkoti. “What probably took tens of researchers years to create, he was able to reproduce in a single paper in a matter of weeks.” Chilkoti and Tang are now working to make the new computer program available online for anybody to use through a simple web form, opening a new area of synthetic biology for all to explore. “This advance really democratizes the field of synthetic biology and levels the playing field,” said Tang. “Before, you had to have a lot of expertise and patience to work with repetitive sequences, but now anyone can just order them online. We think this could really break open the bottleneck that has held the field back and hopefully recruit more people into the field.”
Dominik Kulakowski -- The consequences of changing disturbance regimes for aspen in the western U.S. Forest ecosystems are being affected by both the indirect and direct results of climate change. Indirect drivers include increasing extent, magnitude and/or frequency of various forest disturbances such as wildfires and insect outbreaks. Direct drivers include droughts and altered temperature and precipitation regimes. Together these drivers are likely to affect the composition of Rocky Mountain forests, including the dominance and extent of quaking aspen (Populus tremuloides). Here I review recent work on these topics and propose possible future consequences for quaking aspen. Warm and dry conditions generally result in increased wildfires and bark beetle outbreaks, particularly in coniferous forests. Both of these disturbances have the potential to increase aspen dominance due to aspen’s ability to regenerate in and dominate post-disturbance environments. Furthermore, compounded disturbances (i.e. two or more disturbances occurring in short succession) also appear to favor regeneration of aspen over conifers and could further increase aspen dominance if compounded disturbances increase with projected shifts in climate regimes. However, aspen demography is contingent on favorable climatic conditions. If the same warm and dry conditions that bring about disturbance regimes potentially favorable to aspen dominance also characterize post-disturbance environments, these climatic conditions may actually inhibit the ability of aspen to regenerate, grow, and survive. If aspen is able to increase dominance due to changing disturbance regimes and a changing climate, this will likely affect forest susceptibility to subsequent disturbances. As aspen stands are generally more mesic than adjacent conifer stands, the former are less likely to burn. Aspen stands are also less susceptible to bark beetle outbreaks that affect conifers and to wind disturbances.
Thus any change in the amount of aspen in the landscape has the potential to feed back to the overall disturbance regime at broad scales. The consequences of changing disturbance regimes for quaking aspen in the western U.S. are likely to be complex and contingent on post-disturbance climatic conditions as well as on feedbacks among climate, disturbances, and forest species composition.
How does energy impact the environment and our lives? Dr. Webber provides a sweeping view of energy’s role in society over hundreds of years with fun facts, myth busting, and an optimistic eye towards future technologies and solutions. Drones have been all over the media as well as our imaginations. What’s real and what’s possible for these remarkable flying machines? Todd Humphreys shares the ways drones can be used in the future versus what is portrayed in the movies. Every day in the news there are more stories about record-breaking weather. Kevin Kloesel talks about the science behind extreme weather events such as tornadoes and super storms, and how meteorologists deal with uncertainty in their forecasts. Chemistry is easy. Dr. Laude demonstrates how a few simple, recurring ideas in chemistry can help you understand concepts like climate change, how batteries work, and how food is made. Humans have long been fascinated with their evolutionary cousins in the primate world, monkeys. Dr. Anthony Di Fiore shares the social behavior of our primate cousins, and how they may be both strikingly similar to, and vastly different from, humans. Space travel and exploration are popular settings for works of science fiction and can be a source of inspiration for space technology. What scientists and engineers can actually do with space technology is different from what is presented in fiction, though the discoveries are no less fascinating. Animals often display fascinating colors to help show or hide themselves. Dr. Molly Cummings examines the visual mechanisms animals use to avoid being eaten or to advertise for mates. Dr. Cummings provides vivid examples from creatures in the ocean and the jungle. NASA’s third Mars rover “Curiosity” is an extraordinary machine that carries the biggest, most advanced suite of instruments for scientific studies ever sent to the Martian surface. Dr. 
John Grotzinger, lead scientist for Curiosity’s mission, shares discoveries about Mars’ past climate and geology. What are the motivations behind mating desires in men and women? To understand many differences and conflicts between the sexes, we must look into our evolutionary past. Dr. David Buss presents a unified theory of human mating strategy using insights from a global study of human mating behavior. Our ability to monitor and maintain our health largely relies on interactions with the medical community. Dr. Andy Ellington discusses exciting advances in low-cost, personalized diagnostics and the promise of creating virtual clinical trials through social networks to improve healthcare on a global scale.
Authors: B.F. Woodfield, S. Liu, J. Boerio-Goates, L. Astle Affiliation: Brigham Young University, United States Pages: 662 - 665 Keywords: synthesis, nanoparticles, metal oxides, industrial In this paper we discuss the materials that can now be produced using a novel method for producing large quantities of high-purity metal, metal oxide, and mixed-metal oxide particles. These nanoparticles are uniform in size, with dimensions that can range from 1 nm to greater than 10 µm. Extraordinarily crystalline particles as well as amorphous materials can be prepared. The metal oxides produced by this method include most of the metals and semi-metals found in the periodic table, including but not limited to the transition metals, rare earth metals, and the Group I, II, and III metals of the Periodic Table. The mixed-metal oxides that can be produced by this method comprise any combination of the aforementioned metals with any stoichiometry. A unique feature and advantage of the present method is the use of a simple process to prepare a vast array of metal oxides, mixed metal oxides, and metals with purity levels as high as 99.999+%, with tight control of the particle size (±10%), and in industrial-size quantities. The present method is also much lower in cost and more environmentally “green” than other techniques currently available. Nanotech Conference Proceedings are now published in the TechConnect Briefs
In a recent paper in Functional Ecology, Leonie Valentine, Richard Hobbs and collaborators investigated how bandicoot digging changed soil properties that subsequently altered seedling growth. Published: Friday, 29 June 2018 01:25 Many marine protected areas are unnecessarily expensive and located in the wrong places, an international study has shown. The University of Queensland was part of research which found that protected areas miss many unique ecosystems and have a greater impact on fisheries than necessary. A collaboration with the University of Hamburg, Wildlife Conservation Society and The Nature Conservancy assessed the efficiency of marine protected areas, which now cover 16 per cent of national waters around the world. Read more: Marine protected areas often expensive and misplaced Published: Thursday, 28 June 2018 05:47 To combat overexploitation in artisanal fisheries, CEED researchers used spatial planning to design an enforcement strategy that considers multiple factors including climate variability, existing seasonal fishing closures, conservation targets, and enforcement costs. Read more: More efficient enforcement of fisheries to protect estuarine biodiversity Published: Thursday, 21 June 2018 06:04 Increasing the amount of private land permanently protected for biodiversity, through agreements like conservation covenants, is an essential part of conservation policies around the world. But finding sustainable ways to increase the total amount of privately protected land is challenging. Read more: The conservation property guide Published: Tuesday, 19 June 2018 04:47 A new study has found marine reserves near heavily polluted areas are largely unable to do their job of protecting marine biodiversity - but are still considered vast improvements compared to having no protection.
Researchers from more than 30 organisations, including CEED, undertook the massive study of almost 1800 tropical coral reefs around the world, examining different conservation strategies. Read more: Marine reserves under too much pressure to function correctly Published: Monday, 18 June 2018 05:18 Animals interact in complex ways and form interconnected ecosystems. Even small changes to those complex systems could have potentially devastating consequences for the entire environmental system. Because of this, predicting outcomes of conservation management actions can be challenging – even more so with poor data. Read more: The fuzzy approach to complex problems Published: Wednesday, 13 June 2018 01:58 Palm plantations with certification did not excel compared to their non-certified equivalents when it came to protecting orangutans, increasing wealth, or improving access to healthcare for local villagers, say researchers. Read more: Sustainable certified palm oil scheme failing to achieve goals
Java Performance Tuning

Tips March 2015

Back to newsletter 172 contents

Using wait(), notify() and notifyAll() in Java: common problems and mistakes (Page last updated August 2012, Added 2015-03-28, Author Neil Coffey, Publisher javamex). Tips:
- wait() should be called in a loop, with a condition test in each loop iteration that evaluates to false if the notifying thread has notified (this catches notifications that trigger before entering the loop even once).
- wait() can exit without a notification having been triggered (another reason it should be called in a loop).
- Do not wait() forever (long forever = 0; object.wait(forever) blocks with no timeout).
- Choosing between notify() and notifyAll() is to some extent a tuning choice; notifyAll() works in all situations but reduces throughput by increasing unnecessary context switching; notify() is generally more efficient unless you actually need to wake up multiple threads to process following a condition, in which case calling notify() can stall (threads which need to be woken up may never be woken up).
- object1.wait() only releases the lock of object1, not other locks.
- There is a non-deterministic delay between notify()ing and the notified thread executing (the lock is released, the waiting thread is set to be runnable, and then run when the OS next schedules it).

Why Non-Blocking? (Page last updated March 2015, Added 2015-03-28, Author Bozhidar Bozhanov, Publisher bozho). Tips:
- Non-blocking applications are written so that threads never block. Instead of blocking, threads get notified when new data is available.
- Implementations of the reactor pattern typically have one thread serving all requests by multiplexing between tasks and never blocking; when something is ready to be processed, it is immediately processed or handed off to a thread.
- There is a trade-off in using non-blocking implementations: you get higher complexity for potentially higher scalability, but latency can suffer, and a blocking implementation is easier to understand and test.
- For thread-safety you just have to follow one simple rule: no mutable state in the code. No instance variables and you are safe, regardless of how many threads execute the same piece of code.
- Use only immutable and concurrent data structures to ensure thread-safety.

Understanding the JVM and Low Latency Applications (Page last updated March 2013, Added 2015-03-28, Author Simon Ritter, Publisher Oracle). Tips:
- If garbage collection kicks in, there is a pause of variable length, meaning you get non-deterministic performance.
- The frequency of minor GC is dependent on the rate of object allocation (how quickly you fill Eden) and the size of Eden.
- The frequency of object promotion to tenured space is dependent on how quickly objects age (in minor GC counts), the size of the survivor spaces, and how many times objects are copied across survivor spaces.
- Object retention (live objects) impacts latency more than object allocation: minor GC time is a function of how many live objects there are (and the complexity of how they are connected).
- Very short-lived objects (never copied out of Eden) are efficient to use, but if you use too many you cycle Eden faster, causing more frequent pauses. Object allocation to Eden is very fast (about 10 cycles, compared with about 30 cycles for the fastest malloc), and such objects are very cheap to reclaim: they are just ignored and their space gets overwritten later!
- The ideal application only experiences small minor GCs and no old generation GCs, so negligible promotion.
- Start with parallel GC (-XX:+UseParallelOldGC and/or -XX:+UseParallelGC) as this provides the fastest minor GCs; move to CMS GC if old generation collection pauses are too long, but this will make minor GCs longer due to promotion into free lists.
- Avoid creating large objects as much as possible (they may not fit into Eden, they must be zeroed, and they can cause fragmentation), or try to keep them initialized during application initialization.
- Avoid resizing collections: try to size them from the start to be as large as they'll need to be.
- Don't implement finalize() methods. Explicitly free resources, or use a Reference if you absolutely have to clear up during GC.
- SoftReference cleanup is up to the garbage collector and is nondeterministic (though you can work it out for particular versions).
- Inner classes have an implicit reference to the outer instance, which increases object connection complexity, which in turn can make GCs longer. (From Java 8, inner classes can be implemented as lambda expressions, avoiding this complexity.)
- CMS has a pause target, but that can only be targeted by changing internal heap sizes, so is very limited.
- The G1 collector is targeted at replacing CMS; it includes compacting (CMS doesn't) and is more predictable than CMS.
- Code in catch blocks is not normally JIT compiled (the JIT compiler assumes it won't be reached, so no need to optimize it).
- JIT deoptimisation causes non-deterministic behaviour.

Tuning Large Scale Java Platforms (Page last updated November 2014, Added 2015-03-28, Author Emad Benjamin, Jamie O'Meara, Publisher InfoQ). Tips:
- Establish your load profile; know or estimate: concurrent requests, requests per second, peak & average response times. Specify the response time SLA.
- To size your system, create benchmark tests based on your known or anticipated load profile, and tune and scale the test systems until you achieve your SLAs: this establishes your production configuration.
- JVM Memory = Max heap + perm heap (if present; removed from Java 8 in HotSpot) + NumberOfConcurrentThreads * -Xss + other memory (nio direct memory, JNI memory, JIT code cache, classloaders, socket buffers, additional GC info).
- The JVM should be sized below the memory available to a socket, e.g. on a 2-socket machine with 96G, each socket has 48G available to it (or half that on AMD), so the JVM heap size needs to be set so that the full JVM process size is less than 48G. Otherwise NUMA interleaving happens and performance can drop by 30%. (-XX:+UseNUMA makes some GCs NUMA aware, but some don't work with it, e.g. parallelold works, CMS doesn't.)
- GC tuning is essentially balancing between latency (pause times) and throughput (after you have eliminated the major inefficiencies).
- The most successful general GC algorithm option is ParNew in the young gen and CMS in the old gen.
- GC tuning: 1. measure minor GCs, and adjust young gen size and/or parallel thread count to minimize either individual pauses or overall stop time, depending on whether latency or throughput is your target; 2. adjust the total heap size in the same way; 3. adjust survivor space size again for the same targets, but be aware that smaller survivor spaces can cause more promotion, which can then cause more frequent old gen GCs.
- An increase in young gen heap size will decrease the frequency of young GCs, but this can make individual pause times suffer (depends on the amount of live objects after a GC).
- Example best tuned config for a 50G JVM: -Xms50g -Xmx50g -Xmn16g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+ScavengeBeforeFullGC -XX:TargetSurvivorRatio=80 -XX:SurvivorRatio=8 -XX:+UseBiasedLocking -XX:MaxTenuringThreshold=15 -XX:ParallelGCThreads=6 -XX:+OptimizeStringConcat -XX:+UseCompressedStrings -XX:+UseStringCache
- For IBM JVMs, the gencon GC algorithm tends to give the best GC performance.
- A microservice architecture should not result in most requests needing to traverse most of your estate; that would imply the microservices are over-fragmented and should be coalesced into not quite so "micro" services.
- More JVMs typically means more overall GC for the same total heap size (e.g. 4 x 1G JVMs use more CPU, because of more GC, than 2 x 2G JVMs).
- 64-bit JVMs compress oops, so a 64-bit JVM should use a similar amount of memory as a 32-bit one up to 4G. From 4G to 32G there is also a memory benefit from compressed oops, but above that there isn't.

You Won't Believe How the Biggest Sites Build Scalable and Resilient Systems! (Page last updated January 2015, Added 2015-03-28, Author Jeremy Edberg, Philip Fisher-Ogden, Publisher InfoQ). Tips:
- Build for at least 3 instances; that ensures you have architected correctly for horizontal scaling.
- Automate as much as possible: config, deployment, monitoring, alerts, etc., together with self-service interfaces for any of these.
- Monitoring should be built in as part of the development.
- Any system that you haven't broken parts of is not a resilient system.
- Disable non-critical features rather than the whole system when parts of the system fail.
- Data best practices: never have a single copy of data; have multiple copies of data; keep the copies in multiple datacentres (or availability zones); avoid keeping state on a single instance; don't keep secret keys on an instance (that's hugely vulnerable).
- Queues help you scale because they buffer work throughout the system. And by monitoring the queues you can see if things get backed up, and by how much.
- Provide cached content rather than immediate content if the cached content is sufficiently recent or acceptable.
- Sharding works well, but you have to be careful about how the data is sharded.
- Lambda/kappa architecture duplicates stream processing into a fast, less accurate stream and a slower, more accurate one; you get quick results initially, then accurate ones later.
- Stateless scales much, much more easily than stateful.

ExecutorService vs ExecutorCompletionService in Java (Page last updated March 2015, Added 2015-03-28, Author Akhil Mittal, Publisher DZone). Tips:
- ExecutorService executes tasks, but doesn't tell you when individual tasks have completed. ExecutorCompletionService allows you to find task results as each task is completed, by asking ExecutorCompletionService.take() for the next completed task (blocking until one becomes available).
- With ExecutorCompletionService you can kick off a set of parallel tasks and, when the first completes, cancel the others (i.e. if you just want the fastest to complete).

Back to newsletter 172 contents

Last Updated: 2018-06-28
Copyright © 2000-2018 Fasterj.com. All Rights Reserved. All trademarks and registered trademarks appearing on JavaPerformanceTuning.com are the property of their respective owners. Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries. JavaPerformanceTuning.com is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
RSS Feed: http://www.JavaPerformanceTuning.com/newsletters.rss
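The "fastest wins" ExecutorCompletionService tip in this newsletter can be sketched as follows (a minimal illustration; the sleeping tasks stand in for real work, and the class name is invented):

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the ExecutorCompletionService tip: take() yields futures in
// COMPLETION order, not submission order, so the fastest result is seen
// first and the slower tasks can then be cancelled.
public class FastestWins {
    public static int firstResult() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            CompletionService<Integer> ecs = new ExecutorCompletionService<>(pool);
            ecs.submit(() -> { Thread.sleep(400); return 1; });  // slow task
            ecs.submit(() -> { Thread.sleep(10);  return 2; });  // fast task
            ecs.submit(() -> { Thread.sleep(400); return 3; });  // slow task
            return ecs.take().get();  // blocks only until the FIRST completion
        } finally {
            pool.shutdownNow();       // interrupts and discards the slower tasks
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstResult());  // prints the fastest task's result
    }
}
```

Had this used a plain ExecutorService, the caller would have had to poll each Future (or call get() in submission order) and could easily block on a slow task while a faster one was already done.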
Published today in the prestigious journal Proceedings of the Royal Society B, the research investigates the genetic and geographical relationships between different forms of crimson rosellas and the possible ways that these forms may have arisen. Dr Gaynor Dolman of CSIRO’s Australian National Wildlife Collection says there are three main colour ‘forms’ of the crimson rosella – crimson, yellow and orange – which originated from the same ancestral population and are now distributed throughout south eastern Australia. “Many evolutionary biologists have argued that the different forms of crimson rosellas arose, or speciated, through ‘ring speciation’,” she says. The ring speciation hypothesis predicts that a species that spreads to new areas may eventually join back up with itself, forming a ring. By that time, the populations at the join in the ring may be two distinct species and unable to interbreed, despite continuous gene flow, or interbreeding, between populations around the ring. “We found that in the case of crimson rosellas, their three separate genetic groups don’t show a simple link to the geographical distribution of the colour forms,” Dr Dolman says. “For example, orange Adelaide and crimson Kangaroo Island rosellas are separated by 15km of ocean but are genetically similar. Conversely, genetic dissimilarity was found in the geographically linked yellow and orange populations in inland south eastern Australia.” Dr Dolman says: “We rejected the ring hypothesis because it predicts only one region of genetic dissimilarity, which should occur at the geographical location of the join in the ring, around the headwaters of the Murray and Murrumbidgee Rivers.” “However, it is possible that crimson rosellas formed a ring at some stage in their evolutionary history, but that the evidence has been lost through climatic or environmental changes,” she says.
Wildlife genetic research of this kind is increasing our understanding of the biogeography and evolution of Australia’s terrestrial vertebrates, helping Australia sustainably manage its biodiversity and ecosystem functions in the face of land use and climate change. This work involved a team of researchers from CSIRO, Deakin University and the South Australian Museum. Andrea Wild | EurekAlert!
A View from Emerging Technology from the arXiv How to Use Pulsars for Interstellar Navigation The signals from pulsars form a natural GPS system that could locate any object in the galaxy to within a metre. The Global Positioning System has revolutionised navigation on Earth. It consists of a network of satellites that each broadcast a time signal. A receiver on Earth can then work out its position in three-dimensional space by comparing the arrival times of the signals from at least three satellites. But the system cannot help with navigation on an interplanetary scale or beyond. Today, Bertolomé Coll at the Observatoire de Paris in France and a friend, Albert Tarantola, propose an interstellar GPS system that has the ability to determine the position of any point in the galaxy to within a metre. Their idea is to tune in to the signals from four pulsars: 0751+1807 (3.5ms), 2322+2057 (4.8ms), 0711-6830 (5.5ms) and 1518+0205B (7.9ms), which each generate regular millisecond radio signals. These form a rough tetrahedron centred on the Solar System. Why four pulsars? Coll points out that on these scales relativity has to be taken into account when processing the signals, and to do this the protocol has to specify a position in space-time, which requires four signals. Coll then defines the origin for this system of co-ordinates as 00:00 on 1 January 2001 at the focal point of the Interplanetary Scintillation Array, the radio telescope near Cambridge in the UK that first observed pulsars. With the co-ordinate system established, any interplanetary spacecraft could then use the signals from these pulsars to determine its position in this co-ordinate system to within a few nanoseconds, which corresponds to about a metre. Ref: http://arxiv.org/abs/0905.4121: Using Pulsars to Define Space-Time Coordinates
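Why four signals are needed can be seen in a simplified flat-space, plane-wave sketch: each pulsar with known unit direction n_i contributes one linear equation, t_i = n_i · r + t0, in the four unknowns (x, y, z, t0), so four pulsars give a solvable 4x4 system. The directions and timings below are invented for illustration (c = 1), and this ignores the relativistic treatment the authors actually require.

```java
// Toy multilateration sketch: with plane waves from four pulsars of known
// unit direction n_i (and c = 1), each measured pulse phase gives
//   t_i = n_i . r + t0,
// one LINEAR equation in the four unknowns (x, y, z, t0). Four pulsars
// therefore pin down both position and the clock offset t0.
public class PulsarFix {
    // Solve a 4x4 linear system A x = b by Gaussian elimination with
    // partial pivoting (A and b are modified in place).
    static double[] solve(double[][] a, double[] b) {
        int n = 4;
        for (int col = 0; col < n; col++) {
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[piv][col])) piv = r;
            double[] tmpRow = a[col]; a[col] = a[piv]; a[piv] = tmpRow;
            double tmpB = b[col]; b[col] = b[piv]; b[piv] = tmpB;
            for (int r = col + 1; r < n; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                b[r] -= f * b[col];
            }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= a[r][c] * x[c];
            x[r] = s / a[r][r];
        }
        return x;
    }

    // Returns (x, y, z, t0) from four pulsar directions and pulse phases.
    static double[] fix(double[][] dirs, double[] times) {
        double[][] a = new double[4][4];
        for (int i = 0; i < 4; i++) {
            a[i][0] = dirs[i][0]; a[i][1] = dirs[i][1];
            a[i][2] = dirs[i][2]; a[i][3] = 1.0;
        }
        return solve(a, times.clone());
    }

    public static void main(String[] args) {
        // Made-up, non-degenerate direction vectors and phases.
        double[][] dirs = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {0.577, 0.577, 0.577}};
        double[] times = {4.5, 1.0, 3.25, 2.85575};
        double[] p = fix(dirs, times);
        System.out.printf("x=%.3f y=%.3f z=%.3f t0=%.3f%n", p[0], p[1], p[2], p[3]);
    }
}
```

With only three signals the system is underdetermined: there is no way to separate the receiver's clock offset t0 from its spatial position, which is the same reason terrestrial GPS receivers also use four satellites.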
ASU Study Reveals Unusual Adaptation In Gila Monster Hatchlings Some days it just doesn't pay to get out of bed. Now, an Arizona State University study shows how newborn Gila monsters (Heloderma suspectum) take that sentiment to a new level. The findings appear in Proceedings of the Royal Society B. Animals are not always born at the optimum time for survival. To cope, nature offers some creatures "snooze buttons" — putting off leaving the nest, delayed egg hatching and embryonic diapause, which "pauses" embryonic development. In many cases, these strategies allow animals to "overwinter" — pass through hard times and emerge when conditions improve and supplies grow more plentiful. Bears hibernate, and many insects overwinter as adults, pupae or eggs. But, among hatchlings, nest overwintering was once found only in aquatic turtles facing frigid winters. The new study suggests baby Gila monsters overwinter and then some, remaining in their nests for 7-10 months after hatching. Lead author Dale DeNardo of Arizona State University's School of Life Sciences says the reason could come down to food. "There are no small lizards laying eggs in the winter or even in the early spring, and so the value of coming to the surface, looking for food — therefore wasting energy and putting yourself vulnerable to predators — there isn't a value of doing that," DeNardo said. The largest lizards in the U.S., Gila monsters typically lay eggs during the onset of the summer monsoon in July, to help keep them from drying out. Based on observed incubation times, these eggs ought to hatch in the autumn. But newborn Gila monsters do not appear until late April through early August — the hottest time of year, when adult Gila monsters reduce their activity. Skipping fall and winter makes sense.
Gila monsters raid nests for a living; they feed on the offspring of other vertebrates, mainly small lizards, which don't breed in the Sonoran Desert at that time. In spring, birds and small mammals breed, and their nests contain prey too large for hatchling Gila monsters to eat. That leaves summer. The paper also suggests night-time temperature could contribute to the late emergence of hatchlings. As summer heats up, adult Gila monsters become chiefly nocturnal, and hatchlings might benefit from the cover of darkness and the warmth of summer nights. Determining the role each factor plays will require further research. Either way, a summer emergence, however hot, might offer Gila monster hatchlings the best chance of survival.
General: Loosely dichotomously branched, with branch axils of 40–50° divergence. The axis is woody in texture and a dark golden brown in color, easily seen through the thin, single layer of translucent coenenchymal scales. Calyces are white and arranged in whorls of 3 or 4; calyces face upward.

Size: 1 m (w)

Ocean range (global): Known only from the type locality off California and, questionably, from off Timor, 520 m (Versluys, 1906).

Published depth range: 1,683 m

Habitat description: Benthic, hard substrate, volcanic talus.

References: Encyclopedia of Life; Tree of Life; World Register of Marine Species; National Center for Biotechnology Information. Cairns, S.D. (2007). Calcaxonian Octocorals (Cnidaria; Anthozoa) from Eastern Pacific Seamounts. Proceedings of the California Academy of Sciences, 58: 511-541.

Citation: Calyptrophora bayeri (Cairns, 2007). Deep-Sea Guide (DSG) at http://dsg.mbari.org/dsg/view/concept/Calyptrophora%20bayeri. Monterey Bay Aquarium Research Institute (MBARI). Consulted on 2018-07-15.
Available in the Kobo app within 15 minutes of payment confirmation. This ebook can be read on several devices: iOS, Android, PC, BlackBerry, Windows Phone, and Kobo e-readers.

Monitoring is integral to all aspects of policy and management for threatened biodiversity. It is fundamental to assessing the conservation status and trends of listed species and ecological communities. Monitoring data can be used to diagnose the causes of decline, to measure management effectiveness and to report on investment. It is also a valuable public engagement tool. Yet in Australia, monitoring threatened biodiversity is not always optimally managed.

Monitoring Threatened Species and Ecological Communities aims to improve the standard of monitoring for Australia's threatened biodiversity. It gathers insights from some of the most experienced managers and scientists involved with monitoring programs for threatened species and ecological communities in Australia, and evaluates current monitoring programs, establishing a baseline against which the quality of future monitoring activity can be managed. Case studies provide examples of practical pathways to improve the quality of biodiversity monitoring, and guidelines to improve future programs are proposed. This book will benefit scientists, conservation managers, policy makers and those with an interest in threatened species monitoring and management.
The hope for restoring these arid environments and preventing further desertification may exist on the surface of the desert itself, according to new research by American Society of Agronomy and Soil Science Society of America member Mandy Williams, a lab manager in the School of Life Sciences at the University of Nevada, Las Vegas. She describes the complex blend of microorganisms carpeting arid environments as biological soil crusts (BSC).

The Soil Science Society of America (SSSA) is a progressive, international scientific society that fosters the transfer of knowledge and practices to sustain global soils. Based in Madison, WI, SSSA is the professional home for 6,000+ members dedicated to advancing the field of soil science. It provides information about soils in relation to crop production, environmental quality, ecosystem sustainability, bioremediation, waste management, recycling, and wise land use. SSSA supports its members by providing quality research-based publications, educational programs, certifications, and science policy initiatives via a Washington, DC, office. Founded in 1936, SSSA proudly celebrated its 75th Anniversary in 2011. For more information, visit www.soils.org or follow @SSSA_soils on Twitter.

Teri Barr | Newswise Science News
posted by Stephen

Context: The function v(t) represents the velocity of a particle moving along a horizontal line at any time, t, greater than or equal to zero. If the velocity is positive, the particle moves to the right. If the velocity is negative, the particle is moving to the left.

The question: A particle has a velocity function v(t) = A sin(B(t-C)) + D, with the minimum velocity at the point (1, -4) and the maximum velocity at the point (4, 2). Evaluate the constants A, B, C, and D and write the complete function.

v = A sin(B(t-C)) + D
max v = D + A = 2
min v = D - A = -4
So, D = -1 and A = 3
v = 3 sin(B(t-C)) - 1

max of sinθ occurs at θ = π/2
min of sinθ occurs at θ = 3π/2

so, see what you can do with that, the way I got A and D above. Recall that the period of sin(kt) is 2π/k

What I ended up getting was v(t) = 3 sin(π/3 (t - (4 - π/2))) - 1, so B = π/3 and C = 4 - π/2

I'll have to see where you went wrong. Too bad you didn't show your work, like me. wolframalpha says you are off:
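Following up numerically (a quick check, assuming B = π/3 from the half period between t = 1 and t = 4; solving B(4 - C) = π/2 for the maximum then gives C = 5/2, not 4 - π/2):

```python
import math

# A = 3 and D = -1 as derived above; B = pi/3 since the half period runs
# from t = 1 to t = 4.  Requiring the maximum at t = 4 means
# B*(4 - C) = pi/2, i.e. C = 5/2 (our value, not the one posted).
A, B, C, D = 3, math.pi / 3, 5 / 2, -1

def v(t):
    return A * math.sin(B * (t - C)) + D

print(v(1), v(4))  # should be the minimum -4 and the maximum 2
```

Running the same check with C = 4 - π/2 misses the required extrema slightly, which is presumably what wolframalpha flagged.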
This means that new brain networks were likely added in the course of evolution from primate ancestor to human. These findings, based on an analysis of functional brain scans, were published in a study by neurophysiologist Wim Vanduffel (KU Leuven and Harvard Medical School) in collaboration with a team of Italian and American researchers.

Our ancestors evolutionarily split from those of rhesus monkeys about 25 million years ago. Since then, brain areas have been added, have disappeared or have changed in function. This raises the question: has evolution given humans unique brain structures? Scientists have entertained the idea before, but conclusive evidence was lacking. By combining different research methods, we now have a first piece of evidence that could prove that humans have unique cortical brain networks.

Professor Vanduffel explains: "We did functional brain scans in humans and rhesus monkeys at rest and while watching a movie to compare both the place and the function of cortical brain networks. Even at rest, the brain is very active. Different brain areas that are active simultaneously during rest form so-called 'resting state' networks. For the most part, these resting state networks in humans and monkeys are surprisingly similar, but we found two networks unique to humans and one unique network in the monkey."

"When watching a movie, the cortex processes an enormous amount of visual and auditory information. The human-specific resting state networks react to this stimulation in a totally different way than any part of the monkey brain. This means that they also have a different function than any of the resting state networks found in the monkey. In other words, brain structures that are unique in humans are anatomically absent in the monkey, and there are no other brain structures in the monkey that have an analogous function.
Our unique brain areas are primarily located high at the back and at the front of the cortex and are probably related to specific human cognitive abilities, such as human-specific intelligence."

The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to a particular task, thus allowing activity to be tracked.

| KU Leuven
SQL statements produce diagnostic information that populates the diagnostics area. Standard SQL has a diagnostics area stack, containing a diagnostics area for each nested execution context. Standard SQL also supports GET STACKED DIAGNOSTICS syntax for referring to the second diagnostics area during condition handler execution; MySQL supports the STACKED keyword as of MySQL 5.7.

This section describes the structure of the diagnostics area in MySQL, the information items recognized by MySQL, how statements clear and set the diagnostics area, and how diagnostics areas are pushed to and popped from the stack.

The diagnostics area contains two kinds of information:

Statement information, such as the number of conditions that occurred or the affected-rows count.

Condition information, such as the error code and message. If a statement raises multiple conditions, this part of the diagnostics area has a condition area for each one. If a statement raises no conditions, this part of the diagnostics area is empty.

For a statement that produces three conditions, the diagnostics area contains statement and condition information like this:

Statement information:
  row count
  ... other statement information items ...
Condition area list:
  Condition area 1:
    error code for condition 1
    error message for condition 1
    ... other condition information items ...
  Condition area 2:
    error code for condition 2
    error message for condition 2
    ... other condition information items ...
  Condition area 3:
    error code for condition 3
    error message for condition 3
    ... other condition information items ...

The diagnostics area contains statement and condition information items. Numeric items are integers. The character set for character items is UTF-8. No item can be NULL.
If a statement or condition item is not set by a statement that populates the diagnostics area, its value is 0 or the empty string, depending on the item data type.

The statement information part of the diagnostics area contains these items: NUMBER (the number of condition areas that have information) and ROW_COUNT (the affected-rows count).

The condition information part of the diagnostics area contains a condition area for each condition. Condition areas are numbered from 1 to the value of the NUMBER statement condition item. If NUMBER is 0, there are no condition areas.

Each condition area contains the items in the following list. All items are standard SQL except MYSQL_ERRNO, which is a MySQL extension. The definitions apply for conditions generated other than by a signal (that is, by a SIGNAL or RESIGNAL statement). For nonsignal conditions, MySQL populates only those condition items not described as always empty. The effects of signals on the condition area are described later.

CLASS_ORIGIN: A string containing the class of the RETURNED_SQLSTATE value. If the RETURNED_SQLSTATE value begins with a class value defined in SQL standards document ISO 9075-2 (section 24.1, SQLSTATE), the value is 'ISO 9075'. Otherwise, the value is 'MySQL'.

SUBCLASS_ORIGIN: A string containing the subclass of the RETURNED_SQLSTATE value. If CLASS_ORIGIN is 'ISO 9075' or RETURNED_SQLSTATE ends with '000', the value is 'ISO 9075'. Otherwise, the value is 'MySQL'.

RETURNED_SQLSTATE: A string that indicates the SQLSTATE value for the condition.

MESSAGE_TEXT: A string that indicates the error message for the condition.

MYSQL_ERRNO: An integer that indicates the MySQL error code for the condition.

CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME: Strings that indicate the catalog, schema, and name for a violated constraint. They are always empty.

CATALOG_NAME, SCHEMA_NAME, TABLE_NAME, COLUMN_NAME: Strings that indicate the catalog, schema, table, and column related to the condition. They are always empty.

CURSOR_NAME: A string that indicates the cursor name. This is always empty.

For the MYSQL_ERRNO values associated with particular errors, see Section B.3, "Server Error Codes and Messages".
When a SIGNAL (or RESIGNAL) statement populates the diagnostics area, its SET clause can assign to any condition information item except RETURNED_SQLSTATE any value that is legal for the item data type. SIGNAL also sets the RETURNED_SQLSTATE value, but not directly in its SET clause; that value comes from the SIGNAL statement SQLSTATE argument.

SIGNAL also sets statement information items. It sets NUMBER to 1. It sets ROW_COUNT to −1 for errors and 0 otherwise.

Nondiagnostic SQL statements populate the diagnostics area automatically, and its contents can be set explicitly with the SIGNAL and RESIGNAL statements. The diagnostics area can be examined with GET DIAGNOSTICS to extract specific items, or with SHOW WARNINGS or SHOW ERRORS to see conditions or errors.

SQL statements clear and set the diagnostics area as follows:

When the server starts executing a statement after parsing it, it clears the diagnostics area for nondiagnostic statements. Diagnostic statements (GET DIAGNOSTICS, SHOW WARNINGS, SHOW ERRORS) do not clear the diagnostics area.

If a statement raises a condition, the diagnostics area is cleared of conditions that belong to earlier statements. The exception is that conditions raised by GET DIAGNOSTICS and RESIGNAL are added to the diagnostics area without clearing it. Thus, even a statement that does not normally clear the diagnostics area when it begins executing clears it if the statement raises a condition.
The following example shows the effect of various statements on the diagnostics area, using SHOW WARNINGS to display information about conditions stored there.

This DROP TABLE statement clears the diagnostics area and populates it when the condition occurs:

mysql> DROP TABLE IF EXISTS test.no_such_table;
Query OK, 0 rows affected, 1 warning (0.01 sec)
mysql> SHOW WARNINGS;
+-------+------+------------------------------------+
| Level | Code | Message                            |
+-------+------+------------------------------------+
| Note  | 1051 | Unknown table 'test.no_such_table' |
+-------+------+------------------------------------+
1 row in set (0.00 sec)

This SET statement generates an error, so it clears and populates the diagnostics area:

mysql> SET @x = @@x;
ERROR 1193 (HY000): Unknown system variable 'x'
mysql> SHOW WARNINGS;
+-------+------+-----------------------------+
| Level | Code | Message                     |
+-------+------+-----------------------------+
| Error | 1193 | Unknown system variable 'x' |
+-------+------+-----------------------------+
1 row in set (0.00 sec)

The SET statement produced a single condition, so 1 is the only valid condition number for GET DIAGNOSTICS at this point.
The following statement uses a condition number of 2, which produces a warning that is added to the diagnostics area without clearing it:

mysql> GET DIAGNOSTICS CONDITION 2 @p = MESSAGE_TEXT;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> SHOW WARNINGS;
+-------+------+-----------------------------+
| Level | Code | Message                     |
+-------+------+-----------------------------+
| Error | 1193 | Unknown system variable 'x' |
| Error | 1753 | Invalid condition number    |
+-------+------+-----------------------------+
2 rows in set (0.00 sec)

Now there are two conditions in the diagnostics area, so the same GET DIAGNOSTICS statement succeeds:

mysql> GET DIAGNOSTICS CONDITION 2 @p = MESSAGE_TEXT;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @p;
+--------------------------+
| @p                       |
+--------------------------+
| Invalid condition number |
+--------------------------+
1 row in set (0.01 sec)

When a push to the diagnostics area stack occurs, the first (current) diagnostics area becomes the second (stacked) diagnostics area and a new current diagnostics area is created as a copy of it. Diagnostics areas are pushed to and popped from the stack under the following circumstances:

Execution of a stored program. A push occurs before the program executes and a pop occurs afterward. If the stored program ends while handlers are executing, there can be more than one diagnostics area to pop; this occurs due to an exception for which there are no appropriate handlers or due to RETURN in the handler. Any warning or error conditions occurring during stored program execution then are added to the current diagnostics area, except that, for triggers, only errors are added. When the stored program ends, the caller sees these conditions in its current diagnostics area.

Execution of a condition handler within a stored program. When a push occurs as a result of condition handler activation, the stacked diagnostics area is the area that was current within the stored program prior to the push.
The new now-current diagnostics area is the handler's current diagnostics area. GET [CURRENT] DIAGNOSTICS and GET STACKED DIAGNOSTICS can be used within the handler to access the contents of the current (handler) and stacked (stored program) diagnostics areas. Initially, they return the same result, but statements executing within the handler modify the current diagnostics area, clearing and setting its contents according to the normal rules (see "How the Diagnostics Area is Populated"). The stacked diagnostics area cannot be modified by statements executing within the handler except RESIGNAL.

If the handler executes successfully, the current (handler) diagnostics area is popped and the stacked (stored program) diagnostics area again becomes the current diagnostics area. Conditions added to the handler diagnostics area during handler execution are added to the current diagnostics area.

The RESIGNAL statement passes on the error condition information that is available during execution of a condition handler within a compound statement inside a stored program. RESIGNAL may change some or all information before passing it on, modifying the diagnostics stack as described in "RESIGNAL Syntax".

Certain system variables control or are related to some aspects of the diagnostics area:

max_error_count controls the number of condition areas in the diagnostics area. If more conditions than this occur, MySQL silently discards information for the excess conditions. (Conditions added by RESIGNAL are always added, with older conditions being discarded as necessary to make room.)

warning_count indicates the number of conditions that occurred. This includes errors, warnings, and notes. Normally, NUMBER and warning_count are the same. However, as the number of conditions generated exceeds max_error_count, the value of warning_count continues to rise whereas NUMBER remains capped at max_error_count because no additional conditions are stored in the diagnostics area.
error_count indicates the number of errors that occurred. This value includes "not found" and exception conditions, but excludes warnings and notes. Like warning_count, its value can exceed max_error_count.

If max_error_count is 10, the diagnostics area can contain a maximum of 10 condition areas. Suppose that a statement raises 20 conditions, 12 of which are errors. In that case, the diagnostics area contains the first 10 conditions, NUMBER is 10, warning_count is 20, and error_count is 12.

Changes to the value of max_error_count have no effect until the next attempt to modify the diagnostics area. If the diagnostics area contains 10 condition areas and max_error_count is set to 5, that has no immediate effect on the size or content of the diagnostics area.
Calculation of the Branching Behavior of Nonlinear Equations

Equipped now with some knowledge about continuation, we assume that we are able to trace branches. We take for granted that the entire branch can be traced, provided one solution on that branch can be found. In this chapter we address problems of locating bifurcation points and switching branches. Essential ideas and methods needed for a practical bifurcation and stability analysis are presented.

Keywords: Turning Point, Indirect Method, Bifurcation Point, Newton Method, Bifurcation Curve
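As a concrete toy illustration of the branch tracing taken for granted here (our own example, not one from the chapter): natural-parameter continuation steps the parameter and corrects each predicted point with Newton's method. The test problem f(x, λ) = λ - x², whose branch x = √λ has a turning point at λ = 0, shows why the naive scheme needs the special treatment the chapter develops, since df/dx = -2x becomes singular there.

```python
# Toy branch tracing: natural-parameter continuation with a Newton corrector.
# Test problem (our choice): f(x, lam) = lam - x**2, solution branch x = sqrt(lam).

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton's method for a scalar equation f(x) = 0."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

branch = []
x = 1.0  # known solution at lam = 1.0
for lam in [1.0, 0.8, 0.6, 0.4, 0.2]:
    # the previous solution serves as the predictor for the next parameter value
    x = newton(lambda x: lam - x * x, lambda x: -2.0 * x, x)
    branch.append((lam, x))
```

Stepping lam toward 0, the corrector degrades as df/dx vanishes at the turning point; tracing past a fold calls for methods such as pseudo-arclength continuation.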
Kolmogorov-Smirnov Two-Sample Tests

We have not previously discussed the use of criteria suggested by direct comparison of empirical (sample) cumulative distribution functions with one another or with hypothetical c.d.f.'s ("goodness of fit"). This important approach leads to a wide variety of procedures which stand apart from the procedures of earlier chapters in several respects. They are expressed in a different form. The relevant statistics are not approximately or asymptotically normally distributed. The theory of their asymptotic behavior is fascinating and raises different kinds of problems requiring different kinds of tools. The mathematical interest of these and other problems has played a larger role than statistical questions in motivating the extensive literature about them, although there is also some excellent work on statistically important questions.

Keywords: Null Hypothesis, Null Distribution, Empirical Distribution Function, Asymptotic Null Distribution, Population Quantity
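For concreteness, the core two-sample statistic is the largest absolute gap between the two empirical distribution functions. A direct O(mn) sketch of this standard definition (not code from the chapter):

```python
def ks_two_sample(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic D = sup_t |F_m(t) - G_n(t)|,
    where F_m and G_n are the empirical c.d.f.'s of the two samples."""
    m, n = len(xs), len(ys)
    d = 0.0
    for t in list(xs) + list(ys):  # the supremum is attained at a sample point
        f_m = sum(x <= t for x in xs) / m
        g_n = sum(y <= t for y in ys) / n
        d = max(d, abs(f_m - g_n))
    return d
```

Under the null hypothesis that both samples come from one continuous distribution, the distribution of D depends only on the sample sizes, which is what makes the test distribution-free; completely separated samples give D = 1.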
An expanded view of lightning around the globe is coming closer for scientists at The University of Alabama in Huntsville (UAH), thanks to a repurposed measuring instrument. UAH researchers have passed NASA qualifying inspections and shipped out a Lightning Imaging Sensor (LIS) in preparation for its planned March 2016 flight to the International Space Station (ISS). The instrument, dubbed ISS LIS, was originally built as a flight spare for a LIS mission that launched in November 1997 aboard NASA’s Tropical Rainfall Measuring Mission (TRMM). That instrument is still in operation today. Like the LIS that flew before it, the current ISS LIS is a space-based instrument used to detect the distribution and variability of total cloud-to-cloud, intracloud and cloud-to-ground lightning that occurs in the tropical regions of the globe. Funded by NASA, ISS LIS is being shipped to the Johnson Space Center (JSC) in Houston, Texas, where it will be integrated onto the Space Test Program H5 spacecraft as one of 10 instruments. The integrated H5 spacecraft will then undergo environmental testing at JSC through August of 2015. The H5 will then be shipped to NASA Kennedy Space Center for integration onto the EXpedite the PRocessing of Experiments to Space Station (EXPRESS) Pallet Adapter (ExPA). The ExPA will in turn be attached to a SpaceX Dragon Capsule for the 2016 launch. “The ISS LIS will be integrated onto the Space Test Program H5 spacecraft at NASA’s Johnson Space Center in Houston, where it will undergo testing through August 2015,” says Mike Stewart, a UAH Earth Systems Science Center (ESSC) principal research engineer. “The ISS LIS will be one of 10 instruments on the H5.” Once on-orbit, the ExPA will be robotically mounted to the EXPRESS Logistics Carrier (ELC), which provides the payload interface to the ISS. The ELC will be attached to the ISS truss structure. 
In less than 16 months, UAH's ESSC and Rotorcraft Systems Engineering and Simulation Center (RSESC) designed, manufactured and space-qualified a new Interface Unit to adapt the legacy TRMM/LIS Electronics Unit and Sensor Unit to the STP H5 spacecraft. The legacy TRMM/LIS Units also required adaptations for the STP H5.

"This development is an excellent follow-on to the original LIS, extending our ability to observe global lightning activity over a longer period of time," says Dr. Hugh Christian, a principal researcher at ESSC and the principal investigator for the ISS LIS instrument. "Further, ISS LIS will be in a higher orbital plane, thus extending our observations to higher latitudes."

ISS LIS is designed to detect lightning during the daytime and nighttime. It takes 560 images per second and transforms those images into lightning events using specialized electronic processors. ISS LIS will be launched at about the same time as the Geostationary Lightning Mapper (GLM), much of which was also designed and developed at UAH. It will provide important validation data for GLM. In addition, there will be important complementary instruments on the space station that will enable researchers to significantly extend knowledge of Terrestrial Gamma ray Flashes (TGF).

"We hope to continue our studies of lightning and severe weather, investigate the relationship between global lightning activity and climate change, provide validation for the GLM, and improve our understanding of TGFs," says Dr. Christian.

UAH's ESSC was the ISS LIS technical and scientific lead. The university's RSESC was the program manager and lead systems engineer. "RSESC supported ESSC by providing the engineering and program management to complete the project," says Sue O'Brien, principal research engineer at RSESC. "UAH worked with NASA's Marshall Space Flight Center and provided the information and analysis to complete the certification and associated processes," O'Brien says.
"We prepared this payload for flight and are ready for delivery to NASA's Space Test Program in just over a year, which was quite an accomplishment for the team," O'Brien says. "We are looking forward to the knowledge gained from ISS LIS and what UAH can accomplish in space in the years to come."

ISS LIS carries forward a long UAH pedigree in space-based lightning research, Dr. Christian says. "I started working on the concept of space-based lightning observations in 1980," he says. "Our first instrument, the Optical Transient Detector (OTD), was launched in April 1995."

Jim Steele | newswise
In the preceding chapter, we focussed on (some of) the kinematic aspects of the motion of a continuous body. In particular, the motion x(X,t) was treated as a given function. However, it is in fact one of the main tasks of continuum mechanics to calculate the motion of the particles forming continuous bodies and, along with it, the evolution of the associated fields such as density and temperature. This can be done once the relevant equations — usually functional differential equations — have been established, together with sufficient initial and/or boundary conditions. These equations comprise two sets of statements: the so-called balance equations of mass, momenta, energy and entropy, and the constitutive relations describing the material behaviour of the body for which the spatial and temporal evolution of the field quantities, such as motion, density and temperature, is sought. The balance equations have a fairly general character and, in particular, contain no material-specific information. The present section is devoted to the derivation of the global forms of the balance laws.

Keywords: Angular Momentum, Balance Equation, Momentum Balance, Reference Configuration, Cauchy Stress Tensor
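The global balance laws referred to above share a single generic structure; the following is a sketch in standard notation (the symbols are generic placeholders chosen for illustration, not taken from this chapter):

```latex
% Generic global balance for a density \psi carried by the material volume \Omega(t):
% rate of change = boundary flux + volume supply + volume production.
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega(t)}\psi\,\mathrm{d}v
  \;=\; -\oint_{\partial\Omega(t)}\boldsymbol{\phi}_{\psi}\cdot\mathbf{n}\,\mathrm{d}a
  \;+\; \int_{\Omega(t)} s_{\psi}\,\mathrm{d}v
  \;+\; \int_{\Omega(t)} p_{\psi}\,\mathrm{d}v
% Mass balance: \psi = \rho with vanishing flux, supply and production.
% Entropy balance: the production term p_{\psi} is required to be non-negative.
```

Each of the balance laws of mass, momenta, energy and entropy is obtained by a particular choice of the density, flux, supply and production terms in this one template.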
Common name: Desert Pupfish (taxonomy available through www.itis.gov)

Identification: Minckley (1973); Moyle (1976a); Page and Burr (1991). There are two subspecies in the United States, a Colorado River form C. m. macularius and a Quitobaquito form C. m. eremus. A third, undescribed subspecies occurs in Mexico (U.S. Fish and Wildlife Service 1993b).

Size: 7.2 cm.

Native Range: This species occurs in the lower Colorado River drainage, including the Gila River system, and south through southern Arizona and California (including the Salton Sea) into northern Mexico (Page and Burr 1991). The Colorado River form exists naturally in California in two streams tributary to, and a few shoreline pools and irrigation drainages of, the Salton Sea, on the Colorado River Delta, and in the Laguna Salada basin. The Quitobaquito form exists only in a single modified spring at Organ Pipe Cactus National Monument, Pima County, Arizona. The Mexican subspecies is found at scattered localities along the Rio Sonoyta (U.S. Fish and Wildlife Service 1993b; Echelle, personal communication).

Interactive maps: Point Distribution Maps

Table 1. States with nonindigenous occurrences, the earliest and latest observations in each state, and the tally and names of HUCs with observations†. Names and dates are hyperlinked to their relevant specimen records. The list of references for all nonindigenous occurrences of Cyprinodon macularius is found here. Table last updated 5/25/2018. † Populations may not be currently present.

Means of Introduction: Most introductions were for the purpose of establishing refuge populations of this imperiled species. Hendrickson and Varela-Romero (1989) reported on the spread of one population via artificial canals. Miller (1968) reported that six individuals escaped from a trap into Dos Palmas Spring, near the northeastern corner of the Salton Sea, in May 1939. Based on the distribution map for this species in Lee et al.
(1980 et seq.), the spring site is apparently in or near its native range. The stocking of at least one site in California was part of a series of experiments to test the effects of a changed environment on meristic and morphometric characters (Miller 1968).

Status: Established in Arizona and California.

Impact of Introduction: Unknown. Walters and Legner (1980) looked at the diets of desert pupfish in experimental ponds. Desert Pupfish ate mostly benthos, especially chironomid midge larvae, detritus, aquatic vegetation, and snails. Pupfish also ate zooplankters in weedy or benthic habitats. Walters and Legner (1980) concluded that, in the Southwest, pupfish may be a better alternative for mosquito control than stocking mosquitofish, because they are less piscivorous than mosquitofish.

References: (click for full references)

Lee, D. S., C. R. Gilbert, C. H. Hocutt, R. E. Jenkins, D. E. McAllister, and J. R. Stauffer, Jr. 1980 et seq. Atlas of North American freshwater fishes. North Carolina State Museum of Natural History, Raleigh, NC.

Page, L. M., and B. M. Burr. 1991. A field guide to freshwater fishes of North America north of Mexico. The Peterson Field Guide Series, volume 42. Houghton Mifflin Company, Boston, MA.

Swift, C. C., T. R. Haglund, M. Ruiz, and R. N. Fisher. 1993. The status and distribution of the freshwater fishes of southern California. Bulletin of the Southern California Academy of Science 92(3):101-167.

U.S. Fish and Wildlife Service. 1993b. Desert pupfish recovery plan. U.S. Fish and Wildlife Service, Phoenix, AZ. 67 pp.

Williams, J. E., D. W. Sada, C. D. Williams, and other members of the Western Division of Endangered Species Committee. 1988. American Fisheries Society guidelines for introductions of threatened and endangered fishes. Fisheries 13(5):5-11.

Pam Fuller and Leo Nico
Revision Date: 12/2/1999
Peer Review Date: 4/1/2016

Pam Fuller and Leo Nico, 2018, Cyprinodon macularius Baird and Girard, 1853: U.S.
Geological Survey, Nonindigenous Aquatic Species Database, Gainesville, FL, https://nas.er.usgs.gov/queries/FactSheet.aspx?SpeciesID=654, Revision Date: 12/2/1999, Peer Review Date: 4/1/2016, Access Date: 7/17/2018 This information is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely best science. The information has not received final approval by the U.S. Geological Survey (USGS) and is provided on the condition that neither the USGS nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the information.
Ophiolites, known to geologists for a long time, have acquired new esteem in view of the fact that they represent oceanic crust which we may visit on dry land. How is it possible that the first 10–15 km of this crust shear off and then creep along the floor of the ocean before climbing onto a continental margin? We shall find an answer when we investigate the best-preserved ophiolite belt, that of Oman on the eastern edge of the Arabian Peninsula. How are we to reconstruct the framework of an active ridge from the debris of this ophiolitic shipwreck? After a general discussion, we shall show that there are two main types of ophiolites, and we shall see in the subsequent chapters that the two types are derived, the one from slow-, the other from fast-spreading ridges.

Keywords: Oceanic Lithosphere, Ocean Floor, Oceanic Ridge, Arabian Shield, Ophiolite Belt
Scientists plumbing the depths of the ocean have made a surprise finding that could change the way we understand supernovae, exploding stars far beyond our solar system. They have analysed extraterrestrial dust, thought to be from supernovae, that has settled on ocean floors, to determine the amount of heavy elements created by the massive explosions. "Small amounts of debris from these distant explosions fall on the earth as it travels through the galaxy," said lead researcher Dr Anton Wallner, from the Research School of Physics and Engineering. "We've analysed galactic dust from the last 25 million years that has settled on the ocean and found there is much less of the heavy elements such as plutonium and uranium than we expected." The findings are at odds with current theories of supernovae, in which some of the materials essential for human life, such as iron, potassium and iodine, are created and distributed throughout space. Supernovae also create lead, silver and gold, and heavier radioactive elements such as uranium and plutonium. Dr Wallner's team studied plutonium-244, which serves as a radioactive clock by the nature of its radioactive decay, with a half-life of 81 million years. "Any plutonium-244 that existed when the earth formed from intergalactic gas and dust over four billion years ago has long since decayed," Dr Wallner said. "So any plutonium-244 that we find on earth must have been created in explosive events that have occurred more recently, in the last few hundred million years." The team analysed a 10 centimetre-thick sample of the earth's crust, representing 25 million years of accretion, as well as deep-sea sediments collected from a very stable area at the bottom of the Pacific Ocean. "We found 100 times less plutonium-244 than we expected," Dr Wallner said. "It seems that these heaviest elements may not be formed in standard supernovae after all. 
It may require rarer and more explosive events such as the merging of two neutron stars to make them." The fact that heavy elements like plutonium were once present, and that uranium and thorium are still present on earth, suggests that such an explosive event must have happened close to the earth around the time it formed, said Dr Wallner. "Radioactive elements in our planet such as uranium and thorium provide much of the heat that drives continental movement; perhaps other planets don't have the same heat engine inside them," he said. The research is published in Nature Communications.
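The "radioactive clock" arithmetic Dr Wallner describes is plain exponential decay; a minimal sketch (the half-life and timescales come from the article, the code itself is illustrative):

```python
# Radioactive decay of plutonium-244, half-life ~81 million years (from the article).
HALF_LIFE_PU244_YEARS = 81e6

def remaining_fraction(t_years, half_life=HALF_LIFE_PU244_YEARS):
    """Fraction of an initial sample still undecayed after t_years."""
    return 0.5 ** (t_years / half_life)

# Primordial Pu-244 from Earth's formation over four billion years ago
# is essentially gone -- "long since decayed":
print(remaining_fraction(4.5e9))   # on the order of 1e-17

# Pu-244 deposited within the last 25 million years, the window the team
# sampled, is still mostly present:
print(remaining_fraction(25e6))    # roughly 0.8
```

This is why any plutonium-244 found in the crust today must come from explosive events well after the Earth formed.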
OSLO – The biggest icebergs breaking off Antarctica unexpectedly help to slow global warming as they melt away into the chill Southern Ocean, scientists said on Monday, Reuters reports. The rare Manhattan-sized icebergs, which may become more frequent in coming decades because of climate change, release a vast trail of iron and other nutrients that act as fertilisers for algae and other tiny plant-like organisms in the ocean. These extract carbon dioxide from the atmosphere as they grow, a natural ally for human efforts to limit the pace of climate change blamed on man-made greenhouse gas emissions. Ocean blooms in the wake of giant icebergs off Antarctica absorbed 10 to 40 million tonnes of carbon a year, the study estimated, roughly equivalent to the annual man-made greenhouse gas emissions of countries such as Sweden or New Zealand. Until now, the impact of ocean fertilisation from the demise of giant icebergs, defined as floating chunks of ice longer than 10 nautical miles (about 18 km), or almost the length of Manhattan, had been judged small and localised. "We were very surprised to find that the impact can extend up to 1,000 km (625 miles) from the icebergs," Professor Grant Bigg of the University of Sheffield, an author of the study published in the journal Nature Geoscience, told Reuters. The scientists studied satellite images of 17 giant icebergs off Antarctica from 2003-2013 and found that algae could turn the water greener for hundreds of kilometres around the icebergs, with nutrients spread by winds and currents. There are typically 30 giant icebergs floating off Antarctica at any one time, and they can linger for years. The study said the giant icebergs had an outsized impact in promoting ocean fertilisation when compared with small icebergs. Bigg noted that global man-made greenhouse gas emissions had been growing at about two percent a year. "If the giant icebergs weren't there, it would be 2.1 to 2.2 percent," he said. 
Ken Smith, an expert at the Monterey Bay Aquarium Research Institute in California who reviewed Monday's study, said in an email that he found the new findings "convincing". The Sheffield University scientists noted other estimates that the amount of ice breaking off Antarctica had grown by five percent in the past two decades, and that it was likely to rise further with warming. That in turn could spur more ocean fertilisation.
A parabolic trough is a type of solar thermal collector that is straight in one dimension and curved as a parabola in the other two, lined with a polished metal mirror. The sunlight which enters the mirror parallel to its plane of symmetry is focused along the focal line, where objects are positioned that are intended to be heated. For example, food may be placed at the focal line of a trough, which causes the food to be cooked when the trough is aimed so the Sun is in its plane of symmetry. Further information on the use of parabolic troughs for cooking can be found in the article about solar cookers. For other purposes, there is often a tube, frequently a Dewar tube, which runs the length of the trough at its focal line. The mirror is oriented so that sunlight which it reflects is concentrated on the tube, which contains a fluid which is heated to a high temperature by the energy of the sunlight. The hot fluid can be used for many purposes. Often, it is piped to a heat engine, which uses the heat energy to drive machinery or to generate electricity. This solar energy collector is the most common and best known type of parabolic trough. The paragraphs below therefore concentrate on this type. The trough is usually aligned on a north-south axis, and rotated to track the sun as it moves across the sky each day. Alternatively, the trough can be aligned on an east-west axis; this reduces the overall efficiency of the collector due to the sunlight striking the collectors at an angle but only requires the trough to be aligned with the change in seasons, avoiding the need for tracking motors. This tracking method approaches theoretical efficiencies at the spring and fall equinoxes with less accurate focusing of the light at other times during the year. The daily motion of the sun across the sky also introduces errors, greatest at the sunrise and sunset and smallest at noon. 
Due to these sources of error, seasonally adjusted parabolic troughs are generally designed with a lower concentration acceptance product. Parabolic trough concentrators have a simple geometry, but their concentration is about 1/3 of the theoretical maximum for the same acceptance angle, that is, for the same overall tolerances of the system to all kinds of errors, including those referenced above. The theoretical maximum is better achieved with more elaborate concentrators based on primary-secondary designs using nonimaging optics, which may nearly double the concentration of conventional parabolic troughs and are used to improve practical designs such as those with fixed receivers. Heat transfer fluid (usually thermal oil) runs through the tube to absorb the concentrated sunlight. This raises the temperature of the fluid to some 400 °C. The heat transfer fluid is then used to heat steam in a standard turbine generator. The process is economical and, for heating the pipe, thermal efficiency ranges from 60-80%. The overall efficiency from collector to grid, i.e. (Electrical Output Power)/(Total Impinging Solar Power), is about 15%, similar to PV (photovoltaic cells) but less than Stirling dish concentrators. A parabolic trough is made of a number of solar collector modules (SCMs) fixed together to move as one solar collector assembly (SCA). An SCM can be up to 15 metres (49 ft 3 in) long or more. A dozen or more SCMs make up each SCA, for a length of up to 200 metres (656 ft 2 in). Each SCA is an independently tracking parabolic trough. An SCM may be made as a single-piece parabolic mirror or assembled from a number of smaller mirrors in parallel rows. Smaller modular mirrors require smaller machines to build them, reducing cost. Cost is also reduced when a damaged mirror needs replacing. Such damage may occur from being hit by an object during bad weather. 
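The geometry and the efficiency figures above combine into a short back-of-the-envelope sketch. Only the ~200 m SCA length and the ~15% collector-to-grid efficiency come from the text; the aperture width and the solar irradiance below are illustrative assumptions:

```python
# Cross-section of a parabolic trough: y = x^2 / (4*f). Rays arriving
# parallel to the axis of symmetry reflect onto the focal line at height f,
# where the absorber tube is mounted.

def focal_length(aperture_width_m, rim_depth_m):
    # At the rim of the trough: rim_depth = (aperture_width/2)**2 / (4*f)
    return aperture_width_m ** 2 / (16.0 * rim_depth_m)

def sca_electrical_output_kw(sca_length_m=200.0,          # from the text
                             aperture_width_m=5.0,        # assumed width
                             dni_w_per_m2=1000.0,         # assumed clear-sky irradiance
                             collector_to_grid_eff=0.15): # ~15%, from the text
    """Rough electrical output of one solar collector assembly (SCA)."""
    aperture_area_m2 = sca_length_m * aperture_width_m
    return aperture_area_m2 * dni_w_per_m2 * collector_to_grid_eff / 1000.0

print(focal_length(5.0, 1.0))      # 1.5625 m for a 5 m wide, 1 m deep trough
print(sca_electrical_output_kw())  # 150.0 kW under these assumptions
```

Under these illustrative numbers, a single 200 m assembly delivers on the order of 150 kW of electrical power at midday; real values depend on the actual aperture, site irradiance, and plant losses.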
In addition, V-type parabolic troughs exist which are made from 2 mirrors and placed at an angle towards each other. In 2009, scientists at the National Renewable Energy Laboratory (NREL) and SkyFuel teamed to develop large curved sheets of metal that have the potential to be 30% less expensive than today's best collectors of concentrated solar power by replacing glass-based models with a silver polymer sheet that has the same performance as the heavy glass mirrors, but at a much lower cost and weight. It also is much easier to move and install. The glossy film uses several layers of polymers, with an inner layer of pure silver. As this renewable source of energy is inconsistent by nature, methods for energy storage have been studied, for instance the single-tank (thermocline) storage technology for large-scale solar thermal power plants. The thermocline tank approach uses a mixture of silica sand and quartzite rock to displace a significant portion of the volume in the tank. Then it is filled with the heat transfer fluid, typically a molten nitrate salt. The enclosed trough architecture encapsulates the solar thermal system within a greenhouse-like glasshouse. The glasshouse creates a protected environment to withstand the elements that can reduce the reliability and efficiency of the solar thermal system. Lightweight curved solar-reflecting mirrors are suspended within the glasshouse. A single-axis tracking system positions the mirrors to track the sun and focus its light onto a network of stationary steel pipes, also suspended from the glasshouse structure. Steam is generated directly using oil field-quality water, as water flows along the length of the pipes, without heat exchangers or intermediate working fluids. The steam produced is then fed directly to the field’s existing steam distribution network, where the steam is continuously injected deep into the oil reservoir. 
Sheltering the mirrors from the wind allows them to achieve higher temperatures and prevents dust from building up as a result of exposure to humidity. GlassPoint Solar, the company that created the Enclosed Trough design, states its technology can produce heat for EOR for about $5 per million British thermal units in sunny regions, compared to between $10 and $12 for other conventional solar thermal technologies. Enclosed troughs are currently being used at the Miraah solar facility in Oman. In November 2017, GlassPoint announced a partnership with Aera Energy that would bring parabolic troughs to the South Belridge Oil Field, near Bakersfield, California.

Early commercial adoption

In 1897, Frank Shuman, a U.S. inventor, engineer and solar energy pioneer, built a small demonstration solar engine that worked by reflecting solar energy onto square boxes filled with ether, which has a lower boiling point than water, and which were fitted internally with black pipes that in turn powered a steam engine. In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys, developed an improved system using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water, enabling him to patent the entire solar engine system by 1912. Shuman built the world’s first solar thermal power station in Maadi, Egypt between 1912 and 1913. Shuman’s plant used parabolic troughs to power a 45-52 kilowatt (60-70 hp) engine that pumped more than 22,000 litres of water per minute from the Nile River to adjacent cotton fields. 
Although the outbreak of World War I and the discovery of cheap oil in the 1930s discouraged the advancement of solar energy, Shuman’s vision and basic design were resurrected in the 1970s with a new wave of interest in solar thermal energy. In 1916 Shuman was quoted in the media advocating solar energy's utilization, saying:

We have proved the commercial profit of sun power in the tropics and have more particularly proved that after our stores of oil and coal are exhausted the human race can receive unlimited power from the rays of the sun. — Frank Shuman, New York Times, July 2, 1916

Commercial plants using parabolic troughs may use thermal storage at night, while some are hybrids and support natural gas as a secondary fuel source. In the US the amount of fossil fuel used is limited to a maximum of 27% of electricity production, allowing the plant to qualify as a renewable energy source. Because they include cooling stations, condensers, accumulators and other things besides the actual solar collectors, the power generated per square meter of area varies enormously. As of 2014, the largest solar thermal power systems using parabolic trough technology include the 354 MW SEGS plants in California, the 280 MW Solana Generating Station that features molten salt heat storage, the 250 MW Genesis Solar Energy Project that came online in 2014, as well as the Spanish 200 MW Solaben Solar Power Station and the Andasol 1 solar power station, using a Eurotrough collector.
4. Learning and Adapting

Software development is a continuous learning activity between the customer, who knows the business domain, and the developers, who know the software. Since the software product needs to combine the two areas of knowledge, the customers must teach the software developers about the business domain and the software developers must teach the customer enough about software that they can better express their requirements. This learning goes on throughout the project, with both the business domain experts and the software developers communicating and collaborating all the time, teaching each other what is needed in order to build the best software.

Circle of life – learning and adapting

In most cases, at the beginning of a software project, the customer doesn't really know what he wants, or cannot express very well what he wants. The intangible, virtual nature of software is often the main factor in this problem. Traditional, predictability-based processes, which need all the customer requirements written down first, are exposed to high risks in projects where the customer cannot express his needs from the beginning. Not knowing what they want at first means that the customer will want changes later, and in waterfall-based processes the cost of change is very high late in development:

[Cost of change in traditional processes, by Scott W. Ambler]

Agile methodologies have acknowledged the problem of customers who cannot express their wishes at first, and have built, at the base of their processes, a way of development that allows continuous learning and adapting throughout the project lifecycle, handling the cost of change much better:

[Cost of change in XP, by Scott W.
Ambler]

In "Extreme Programming: Embrace Change", Kent Beck explains how XP handles the problem of customers who don't know what they want and want changes late in development:

For decades, programmers have been whining, "The customers can't tell us what they want. When we give them what they say they want, they don't like it." This is an absolute truth of software development. The requirements are never clear at first. Customers can never tell you exactly what they want. The development of a piece of software changes its own requirements. As soon as the customers see the first release, they learn what they want in the second release... or what they really wanted in the first. And it's valuable learning, because it couldn't have possibly taken place based on speculation. It is learning that can only come from experience. But customers can't get there alone. They need people who can program, not as guides, but as companions. What if we see the "softness" of requirements as an opportunity, not a problem?

In his book "Agile and Iterative Development: A Manager's Guide", Craig Larman shows a very good example of how, iteration by iteration, requirements evolve:

[Craig Larman, 2004]

The iterative and incremental nature of the agile methodologies allows the customers and the developers to continually adapt the software to their needs. Kent Beck compares software development with driving a car:

We need to control the development of software by making many small adjustments, not by making a few large adjustments, kind of like driving a car. This means that we will need the feedback to know when we are a little off, we will need many opportunities to make corrections, and we will have to be able to make those corrections at a reasonable cost. 
The software process is a continuous cycle following a few simple steps: the developers and the customer decide what to do in the next iteration, the developers do it, and then they show the result to the customer, who can now see the software and learn from it. The customers can now decide what needs to be corrected or improved, and they can give feedback about the product that makes the programmers understand better what the customers really want. After a cycle like this, everything starts from the beginning again. This is a continuous learning activity between the customer and the developers. Ron Jeffries calls this "the circle of life", saying that:

On an XP project, the customer defines business value by writing stories, and the programmers implement those stories, building business value. But there's an important caveat: on an XP project, the programmers do what the customer asks them to do! Every time we go around the circle, we learn. Customers learn how valuable their proposed features really are, while programmers learn how difficult the features really are. We all learn how long it takes to build the features we need.

As important as it is to develop closely with the customer and to learn and adapt after every iteration in order to improve the software, it is even more important that the developers collect feedback from the real users, with every release, when the software is put into production and used daily. Throughout the development of a release, the learning and adapting is done together with the customer, but nothing can give better feedback than real use of the software after a release or delivery has been made to the real end users.

Development and production

Mary and Tom Poppendieck consider that one of the most important principles in lean thinking (adapted to software development) is amplifying learning, and that this is done as described above, using the main tools: iterations and feedback. 
They also manage to show that it is very important to make the distinction between software development and product manufacturing, saying:

Think of development as creating a recipe and production as following the recipe. … Developing a recipe is a learning process involving trial and error. You would not expect an expert chef's first attempt at a new dish to be the last attempt. In fact, the whole idea of developing a recipe is to try many variations on a theme and discover the best dish.

The main differences between development and production:

Development
- Designs the recipe
- Quality is fitness for use
- Variable results are good
- Iteration generates value

Production
- Produces the dish
- Quality is conformance to requirements
- Variable results are bad
- Iteration generates waste (called rework)

[Mary and Tom Poppendieck, 2003]

Not seeing these very important differences means not seeing development as a continuous learning and adapting process, thus working against nature and increasing the risk of failure in software projects.

A game of invention and communication

Alistair Cockburn defines software development in Agile Software Development as:

Software development is a (resource limited) cooperative game of invention and communication. The primary goal is to deliver useful, working software. The secondary goal, the residue of the game, is to set up for the next game. The next game may be to alter and replace the system or to create a neighboring system.

He expresses the need for cooperation between developers and customers as in a group game, and the need for the participants in the game to learn the problem and invent or imagine a solution for it, constantly readapting it to fit the needs.
3.3 Reflective improvement

One of the most important aspects of the agile movement is that its followers constantly need to look back at their activity, learn from what they do what is beneficial and what isn't, and, based on this information, adapt and improve their working process. The need to improve the process is very well described by the last principle of the agile manifesto:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Alistair Cockburn describes a very pragmatic way to put the self-improving technique into practice in the book "Crystal Clear". At the end of each iteration, the team gets together and lists the activities they did in the last iteration, dividing them into two columns: to keep (activities that worked well and should be kept in the process) and to drop (activities and practices that didn't work so well and should no longer be put in practice). Besides the two columns, a third column is added, "to try", where the team adds ideas about different practices that could be tried. Alistair calls this technique reflective improvement. The intervals at which the team reflects should not be very long; Alistair suggests twice per iteration, once in the middle of the iteration, when things can be improved as the iteration progresses, and once at the end of the iteration, to reflect on the entire iteration, including the delivery to the client and his satisfaction, in a so-called "post mortem" reflection workshop. The activities and techniques discussed in a reflection workshop inside the team do not necessarily need to be from the past iteration or related to a project. 
They can be general conventions used across a larger base than the actual team, like code conventions or database versioning, and the people participating do not necessarily need to be from the team, so these kinds of sessions could even be held with the customer, to improve the whole collaboration and development process together.

Eliminating waste and cutting overhead

In order for a team to be able to drop some of the techniques they are using, they first need to be aware that those techniques are not useful. Although this sounds incredibly simple, it proves to be one of the hardest techniques to put into practice. Seeing what generates waste is a very difficult activity and needs to be discussed in detail, starting from what waste is. Mary and Tom Poppendieck have adapted a method for seeing waste in software development from lean manufacturing, based on Taiichi Ohno's original seven wastes:

The 7 wastes of lean manufacturing -> The 7 wastes of software development
- In-process inventory -> Partially done work
- Extra processing -> Extra processes
- Overproduction -> Extra features
- Transportation -> Task switching
- Motion -> Motion
- Waiting -> Waiting
- Defects -> Defects

On many occasions we develop code, especially when designing software, to allow flexibility for future changes. Some of the investment in a flexible design pays off well, but on other occasions those changes never occur, and the investment in the flexibility of the design is waste.

Religiously following a software process does not guarantee success. However, when software development fails, in many cases the developers and the management come to the conclusion that the process wasn't followed, and they decide to make it stricter: more documentation is produced, more meetings are scheduled, the plans and designs go into further detail. This can be good in some cases, but extra process overhead is one of the biggest problems in software development today.
Programmers who spend six hours a day writing documents to comply with the process and two hours actually producing software are one of the best-hidden wastes in software development, especially in large organizations.

Moving programmers from one project to another requires time for them to get used to the new project. If that project is big, then understanding it takes a lot of time, and if the task performed by the moved programmer is small, it does not cover the initial investment in understanding; even worse, because the software wasn't understood properly, the programmers decrease the internal quality of the project, making it harder to extend and maintain.

One of the biggest problems with waste is that most programs have features that are never used. These contribute substantially to the cost, length and complexity of a software project, and not using them is just waste.

Self-improving sessions

Back to work: some time ago, we started to use a new open source ORM on .NET, called NH. It was a fairly small (2.5 months) and non-complex project, built by me and another colleague; let's call him John. I was needed for the first 3 iterations (6 weeks) to help get the project started; then it was continued by John. The project was delivered on time for the customer and it still works. Then John and another colleague, Michael, started a new web project, quite similar to the previous one, and developed it for two months. Then Michael and another colleague, George, started the third project. In one discussion outside the company, Michael and George were complaining about the ORM, saying that in many situations it proved to be an overhead. Suddenly, I detected a potential problem. I had been using it for the same period and my conclusion was exactly the opposite. Listening further, we decided to take a look at the code, and it was then that I realized that some parts of it were completely misused, becoming an overhead.
Going back to the second project, I noticed the same mistake there. And, quite surprisingly, in the first project also. I realized that one mistake made by John in using NH was considered the right way to go by Michael and later by George, resulting in much more code than there was supposed to be, and in the impression that it wasn't a good technology. I decided that I needed to do something about this, so I told the entire team that the next day at 2pm everyone needed to be available for a meeting to sort out the problem; however, I warned everyone that it wasn't going to be a conventional meeting. When everyone came, I gave them a document of about 40 pages that described in detail how NH really worked, and I asked everyone to read it. While they were reading, I made a list of questions to see if they understood the technology, and we were supposed to discuss the responses at the end. When someone finished reading, I sent the questions by email, expecting the answers the same way. Of course, we were all collocated and everyone had his own computer. When all the answers came, I was surprised that they were 99% right. So, after 2.5 hours that might be considered lost, because instead of working we read about a technology that was not yet used by everyone, everyone started to understand it. If these 2.5 hours had been invested at the beginning of the first project, they might have saved days or maybe weeks of wasted resources, because the technology wasn't properly understood by someone and those misunderstandings were taken up by the others as the proper way to use the technology. We decided that this kind of "school-like session" could also have tremendous potential in our attempt to become more agile, so we started with Extreme Programming, reading, answering questions and discussing a few pages every day, spreading information and knowledge much faster than ever before, which later avoided many overheads and wasted resources.
KGDB is an amazing Linux kernel debugging tool. It can debug the kernel while it is running, set breakpoints, and step through the code. Earlier, KGDB used to be a bunch of patches that had to be carefully merged into the mainline kernel. However, since version 2.6.26, KGDB has been merged into the mainline, and only needs to be enabled during kernel compilation. A typical KGDB setup requires two machines connected by a serial cable: one as a source machine on which debugging is done, and the other (destination) which is being debugged. With virtualisation, however, we can do away with that second machine. When we combine VirtualBox with KGDB on a single machine, the host OS is the "source" machine, while the guest OS (a Linux kernel compiled with KGDB enabled) is the "destination". A virtual serial port is enabled between the host and the guest. Figure 1 displays such a setup. For this setup, we'll need:
- The host running a Linux system (you can have a host OS other than Linux, but this article does not cover that). My host system runs Ubuntu Maverick Meerkat 10.10, 64-bit.
- VirtualBox software installed on the host OS. I used the VirtualBox 4.0 distribution-specific binary obtained from the project website.
- The socat binary installed on the host. This is used to link the pipe file (FIFO) that is created by VirtualBox with a pseudo-terminal on the host system. Here's the download link. Normally, GDB takes a physical terminal file (like ttyS0) as the remote target, but in our case, we will instead provide a pseudo-terminal created by socat as the remote target for GDB. Refer to the socat man pages for more information.
- A VM installed with a Linux guest OS (I used Fedora 14). The VirtualBox documentation shows how to create a VM, if you need it, so I won't repeat it here. Help on how to install an OS in a VirtualBox VM is also in the documentation. I downloaded the Fedora 14 ISO from the Fedora site, attached it to the VM, booted the VM and installed Fedora.
- The Linux kernel source, accessible to the VM (and the host, too — see below). This can be picked up from kernel.org; I used version 2.6.37. It is used to recompile the guest OS kernel with KGDB-specific options. How to get the source available in the VM is described in the "File-sharing between machines" subsection below.

Setting up the guest

Configuring the virtual serial port in VirtualBox

Right-click your virtual machine instance, and go to the Settings tab. On the Port 1 tab, choose "Enable Serial Port". Set "Port Mode" to "Host pipe". Enter a pipe file-name in the "Port/File Path" text field, as shown in Figure 2.

File-sharing between machines

I have set up networking for the VM and in my Fedora guest, so that I can easily access files on the host. You could set up an optional NFS server on the host machine, and create an NFS share for the kernel source directory. This share is mounted within the guest, and the kernel is compiled and installed from the guest command prompt. See Figure 3 for an idea of my setup. There are many benefits to such a setup: the shared kernel source can be used to directly do a make install of the kernel within the guest. While debugging, you will need the kernel source files on the host OS, so that they are accessible to GDB — and let's not forget the vmlinux file, which is passed as one of the arguments to the debugger. If you are debugging kernel modules, you can edit the module source (in the NFS-shared folder) from the host, while you debug the guest. There are many other ways of doing this, which you can explore on the Internet.

Preparing and installing the kernel on the guest OS

The kernel source can be compiled either on the guest or the host. Since I have Ubuntu on the host and Fedora in the guest, I preferred to compile the kernel in the guest itself. Wherever you compile it, you will obviously need the build environment set up.
Again, preparing a build environment is a task that is documented very well on the Internet. An important point to note is that compiling the kernel in the VM (guest OS) is extremely time-consuming. During kernel configuration (once you do a make menuconfig), ensure you enable the following options:
- Kernel hacking –>
  - Kernel debugging (features for kernel debugging)
  - Compile the kernel with debug info (the kernel and modules are compiled with debugging information)
  - Compile the kernel with frame pointers (frame-pointer registers are used to keep track of the stack)
  - KGDB: Kernel debugger –> (enable KGDB)
    - KGDB: Use over serial console (enable serial console support) (see Figure 4)

Once the kernel is compiled, if you do a make modules_install and make install from within the guest, it will install the newly compiled kernel. Figure 5 shows the Grub option for the new kernel after a guest reboot.

Editing the bootloader

To enable KGDB serial console support in the guest, we need to append the options kgdboc=ttyS0,115200 kgdbwait to the kernel command line. Here, kgdboc (KGDB over console) uses ttyS0 with the baud rate defined as 115200. The kgdbwait option tells the kernel to wait until we connect to it with GDB. From the Grub boot-loader screen, press e and append the options to the kernel line, as shown in Figure 6. An alternative, if you plan to be debugging frequently, is to edit the bootloader configuration file (/etc/grub.conf, in my case) and update the kernel command line, as shown in Figure 7. This second method requires a reboot of the VM to activate the new kernel options, if you haven't edited at the GRUB prompt.

Booting with KGDB options

Once you have these options in the kernel command line and boot from it, the kernel boots until it gets to the stage where it waits for the remote GDB connection over the (virtual) serial port, as shown in Figure 8.
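In .config terms, the menu entries above correspond to symbols along these lines (a sketch; the symbol names are taken from the 2.6.3x-era Kconfig and may vary slightly between kernel versions):

```
# Fragment of .config produced by make menuconfig
CONFIG_DEBUG_KERNEL=y          # Kernel hacking -> Kernel debugging
CONFIG_DEBUG_INFO=y            # Compile the kernel with debug info
CONFIG_FRAME_POINTER=y         # Compile the kernel with frame pointers
CONFIG_KGDB=y                  # KGDB: Kernel debugger
CONFIG_KGDB_SERIAL_CONSOLE=y   # KGDB: Use over serial console

# Options appended to the kernel line in /etc/grub.conf:
#   kgdboc=ttyS0,115200 kgdbwait
```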
Linking the serial file on the host to the pseudo-terminal

We need to use socat to do this linking. In the socat command, PTY: is the pseudo-terminal, and /code/guest/serial is the virtual serial port pipe file created by VirtualBox on my host machine as per the VM settings done earlier. When this command is run, it returns the pseudo-terminal number that's allocated (in my case, /dev/pts/7), as shown in Figure 9. It's important to remember that you should not terminate the socat command; it needs to be running in the background for us to be able to use the pseudo-terminal, else it breaks the stream.

Firing up GDB

Enter the kernel source directory on the host, and start GDB, telling it to connect to the remote target, which is the pseudo-terminal number returned by socat. This connects us to the waiting Linux kernel session in the VM. If we type continue at the GDB prompt, we will see booting resume in the VM's guest OS. To get back the GDB prompt on the host, you need to trigger a break from within the guest. This will break the running session and give you control in GDB, which can be used to insert breakpoints and do other debugging operations, like those seen in Figure 10. If you need to debug a kernel module, insert the module in the guest, obtain the .text address of the module (/sys/module/<module_name>/sections/.text) and use it as an argument to GDB's add-symbol-file command. As you can see, with the above setup, a great deal of control can be achieved in debugging the live kernel.
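The individual commands can be sketched as a session; the pipe path and pts number follow this article's example, while the exact socat flags, the module name, and the address are illustrative assumptions based on common usage:

```
# Host: link the VirtualBox pipe file to a pseudo-terminal (leave running)
$ socat -d -d /code/guest/serial PTY:
... N PTY is /dev/pts/7

# Host, from the kernel source tree: attach GDB to the waiting kernel
$ gdb ./vmlinux
(gdb) target remote /dev/pts/7
(gdb) continue

# Guest: break back into the debugger (magic SysRq 'g')
# echo g > /proc/sysrq-trigger

# Host: load symbols for a module at its .text address
(gdb) add-symbol-file mymodule.ko 0xffffffffa0010000
```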
Most iterative language compilers have built-in error handling routines (e.g., TRY…CATCH statements) that developers can leverage when designing their code. Although SQL Server 2000 developers don't enjoy the luxury that iterative language developers do when it comes to built-in tools, they can use the @@ERROR system variable to design their own effective error-handling tools. In order to grasp how error handling works in SQL Server 2000, you must first understand the concept of a database transaction. In database terms, a transaction is a series of statements that occur as a single unit of work. To illustrate, suppose you have three statements that you need to execute. The transaction can be designed in such a way so that all three statements occur successfully, or none of them occur at all. When data manipulation operations are performed in SQL Server, the operation takes place in buffer memory and not immediately to the physical table. Later, when the CHECKPOINT process is run by SQL Server, the committed changes are written to disk. This means that when transactions are occurring, the changes are not made to disk during the transaction, and are never written to disk until committed. Long-running transactions require more processing memory and require that the database hold locks for a longer period of time. Thus, you must be careful when designing long running transactions in a production environment. Here's a good example of how using transactions is useful. Withdrawing money from an ATM requires a series of steps which include entering a PIN number, selecting an account type, and entering the amount of funds you wish to withdraw. If you try to withdraw $50 from the ATM and the machine fails thereafter, you do not want to be charged the $50 without receiving the money. Transactions can be used to ensure this consistency. The @@ERROR variable Successful error handling in SQL Server 2000 requires consistently checking the value of the @@ERROR system variable. 
@@ERROR is a variable updated by the SQL Server database engine after each statement is executed on the server for the given connection. This variable contains the corresponding error number, if applicable. You can find a listing of these error numbers in the sysmessages table in the master database. The details of this table are listed on Microsoft's site. Here's an example of how the @@ERROR variable works:

PRINT 'Taking a look at @@ERROR'
PRINT @@ERROR

In these instructions, we print a string to the screen and then print the value of the @@ERROR variable. Because no error is returned from printing to the screen, the value @@ERROR contains is 0. In the next example, we generate a division by zero error, which means that the @@ERROR variable will contain 8134, the error number that Microsoft assigns to this type of error:

SELECT 1/0
PRINT @@ERROR

For most error-handling purposes, you will only be concerned if the value of @@ERROR is non-zero, which indicates that an error occurred. It is a good idea to keep track of the error numbers when recording errors, as they will come in handy during the debugging process.

Error handling at work

Here's a good example of how you can use error handling in stored procedures. The goal of the sample script is to execute a stored procedure that declares a transaction and inserts a record into a table. Because this is for explanation purposes only, we will design the procedure so that we can tell it whether to commit or roll back the transaction. Execute the following statement to create the table that we will use for our example:

CREATE TABLE Transactions
(
TranID SMALLINT IDENTITY(1,1) PRIMARY KEY,
EntryDate SMALLDATETIME DEFAULT(GETDATE()),
ParamValue CHAR(1),
ThrowError BIT
)

The two fields of interest in the script are ParamValue and ThrowError. These fields correspond to the input parameters of the procedure we will create, and we will use them in our logic for committing transactions.
Once our table is in place to keep track of our transactions, we are ready to create our procedure. The procedure will have one parameter used simply to record a character value, and another that gives us the ability to throw an error in the procedure. Run the statement in Listing A to create the procedure. This simple stored procedure exhibits the characteristics we need for effective error handling. First, a transaction is explicitly declared. After a record is inserted into the Transactions table, we check the value of the @ThrowError parameter. This parameter indicates whether to throw an error, using the RAISERROR function to throw a custom error. When the RAISERROR function is called, the value of the @@ERROR variable is populated with the error number that we provide. If an error occurs in the stored procedure, we roll back the transaction. Rolling back the transaction means that the record we attempted to insert into the Transactions table will be removed as if the insert never occurred. The state of the database will be exactly as it was before the transaction began. In this example, you will also notice the use of the GOTO statement and the label ErrorHandler. GOTO statements are typically considered a bad programming practice in iterative programming languages, but they are very useful when handling errors in SQL Server 2000. Don't be afraid to use the GOTO statement to handle errors.

This procedure call will throw an error, and the record will not be inserted into the Transactions table:

DECLARE @ReturnCode INT
EXECUTE @ReturnCode = usp_TestTransaction @ParamValue = 'E', @ThrowError = 1

This procedure call will not throw an error, and the inserted record will be committed to the Transactions table:

DECLARE @ReturnCode INT
EXECUTE @ReturnCode = usp_TestTransaction @ParamValue = 'S', @ThrowError = 0

These procedure calls make use of a return parameter, which indicates the success or failure of the stored procedure.
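The article's actual Listing A is not reproduced here; a minimal sketch of a procedure with the characteristics described above might look like this (the body, column list, parameter types and error text are illustrative assumptions, not the original listing):

```
CREATE PROCEDURE usp_TestTransaction
    @ParamValue CHAR(1),   -- character value to record (assumed type)
    @ThrowError BIT        -- 1 = raise a custom error (assumed type)
AS
BEGIN TRANSACTION

INSERT INTO Transactions (ParamValue, ThrowError)
VALUES (@ParamValue, @ThrowError)
IF @@ERROR <> 0 GOTO ErrorHandler

IF @ThrowError = 1
BEGIN
    RAISERROR('Simulated failure.', 16, 1)   -- populates @@ERROR
    GOTO ErrorHandler
END

COMMIT TRANSACTION
RETURN 0            -- success

ErrorHandler:
ROLLBACK TRANSACTION
RETURN 1            -- failure
```

The explicit GOTO/label pattern and the RETURN codes mirror the practices the article recommends: every statement's @@ERROR is checked, and the return parameter tells the caller whether to commit or roll back any outer transaction.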
It is a good programming practice to explicitly set the Return parameter in your code to indicate success or failure of the procedure; this allows you to know when your stored procedure has failed so you can take the necessary steps to handle the failure. For example, you can nest procedure calls and transactions. Your application could potentially declare a transaction, call a stored procedure, and (depending on the success or failure of the stored procedure) commit or roll back the outside transaction. Looking to the future Careful transaction design and consistently checking the value of the @@ERROR variable is the key to effective error handling in SQL Server 2000. In a future article, I'll show you how to use the new error handling capabilities in SQL Server 2005, which make use of TRY…CATCH statements. Tim Chapman is a SQL Server database administrator who works for a bank in Louisville, KY, and has more than 7 years of IT experience. If you would like to contact Tim, please e-mail him at email@example.com. Tim Chapman is a SQL Server MVP, a database architect, and an administrator who works as an independent consultant in Raleigh, NC, and has more than nine years of IT experience.
What are some of the BILL ADAPTATIONS and LIKELY FOOD SOURCES you might associate with the following habitats, and why would you expect to find each bill type?
1) Wetlands and shore birds
2) Lakes and open waters
3) Grasslands or raptors
4) Oak woodland and/or coniferous forest

This solution discusses how bills in birds are adapted to their sources of food. Specific species are identified, and their bill adaptations are correlated to the likely food sources in habitats of wetlands, shore, lakes, open waters, grasslands, oak woodland and coniferous forest.
Due to its simplicity, the mineral magnesium oxide is a good model for studying the nature of planetary interiors. New work from a team led by Carnegie's Stewart McWilliams studied how magnesium oxide behaves under the extreme conditions deep within planets and found evidence that alters our understanding of planetary evolution. It is published November 22 by Science Express. Magnesium oxide is particularly resistant to changes when under intense pressures and temperatures. Theoretical predictions claim that it has just three unique states with different structures and properties present under planetary conditions: solid under ambient conditions (such as on the Earth's surface), liquid at high temperatures, and another structure of the solid at high pressure. The latter structure has never been observed in nature or in experiments. McWilliams and his team observed magnesium oxide between pressures of about 3 million times normal atmospheric pressure (0.3 terapascals) to 14 million times atmospheric pressure (1.4 terapascals) and at temperatures reaching as high as 90,000 degrees Fahrenheit (50,000 Kelvin), conditions that range from those at the center of our Earth to those of large exo-planet super-Earths. Their observations indicate substantial changes in molecular bonding as the magnesium oxide responds to these various conditions, including a transformation to a new high-pressure solid phase. In fact, when melting, there are signs that magnesium oxide changes from an electrically insulating material like quartz (meaning that electrons do not flow easily) to a metal similar to iron (meaning that electrons do flow easily through the material). Drawing from these and other recent observations, the team concluded that while magnesium oxide is solid and non-conductive under conditions found on Earth in the present day, the early Earth's magma ocean might have been able to generate a magnetic field. 
Likewise, the metallic, liquid phase of magnesium oxide can exist today in the deep mantles of super-Earth planets, as can the newly observed solid phase. "Our findings blur the line between traditional definitions of mantle and core material and provide a path for understanding how young or hot planets can generate and sustain magnetic fields," McWilliams said. "This pioneering study takes advantage of new laser techniques to explore the nature of the materials that comprise the wide array of planets being discovered outside of our Solar System," said Russell Hemley, director of Carnegie's Geophysical Laboratory. "These methods allow investigations of the behavior of these materials at pressures and temperatures never before explored experimentally." The experiments were carried out at the Omega Laser Facility of the University of Rochester, which is supported by DOE/NASA. The research involved a team of scientists from the University of California, Berkeley, and Lawrence Livermore National Laboratory. This work was supported by the Department of Energy, the U.S. Army Research Office, a Krell Institute graduate fellowship, the DOE/NNSA National Laser User Facility Program, the Miller Institute for Basic Research in Science, and the University of California. The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S. Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.
Explores the unusual biology, amazing diversity and ecological importance of lichens, and explains how understanding lichen biodiversity may lead to technological developments in medicine, metal prospecting and pollution control. Also includes new information on economic uses and outlines practical project ideas. What is a lichen?; how lichens grow, multiply and disperse; lichen biodiversity; evolution, classification and naming; ecological role; lichens in forests; lichens in extreme environments; biomonitoring; prospecting and dating; economic uses; practical projects.
When air (or any gas) flows through tubes of different diameters, by the Venturi principle the pressure in thin sections is lower than the pressure in thick sections. If the flow speed is high, the transition between sections can be considered adiabatic. This means that on entering a thick section the temperature of the air increases, due to the increase in pressure. Since the thick and thin sections are in direct thermal contact, air flowing in the thick section heats the air flowing in the thin section. On entering the thick section, that air heats up still more (due to the adiabatic increase in pressure). So every new portion of air entering the thick section has a greater temperature than the previous portion: the process is recurrent. If there are no losses, the tip temperature will rise to infinity. If we use some part of the heat, or withdraw some air from the flow V_outlet, the temperature of the outgoing air will be

T_out = T_in + ΔT_adiabatic * V / V_outlet,

where T_in is the temperature of the incoming air; ΔT_adiabatic is the adiabatic increase of temperature at the transition; V is the volume of air flowing through the device per second; and V_outlet is the volume of air exiting the outlet holes per second.
#energy #heater #stove
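The outlet-temperature relation is easy to evaluate numerically; the figures below are made-up illustrative values, not measurements from any real device:

```python
def outlet_temperature(t_in, dt_adiabatic, v_flow, v_outlet):
    """T_out = T_in + dT_adiabatic * V / V_outlet.

    t_in: incoming air temperature (K)
    dt_adiabatic: adiabatic temperature rise at the transition (K)
    v_flow: volume of air flowing through the device per second
    v_outlet: volume of air exiting the outlet holes per second
    """
    return t_in + dt_adiabatic * v_flow / v_outlet

# 293 K intake, 5 K adiabatic rise, only 1/10 of the flow withdrawn:
print(outlet_temperature(293.0, 5.0, 1.0, 0.1))  # 343.0
```

Note that as v_outlet shrinks relative to v_flow, T_out grows without bound, matching the text's claim that with no losses the tip temperature rises to infinity.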
Rather than using conventional chemical thrusters for a Mars orbiter planned for the 2020s, NASA managers are considering using ion engines instead.

Worried that its fleet of Mars orbiters is aging, NASA intends to dispatch the spacecraft to the red planet in September 2022 to link ground controllers with rovers and extend mapping capabilities expected to be lost when the Mars Reconnaissance Orbiter stops functioning. Engineers also want to add ion engines to the orbiter and fly the efficient electrically-powered thruster system to Mars for the first time, testing out a solar-electric propulsion package that officials say will be needed when astronauts visit the red planet. Ion engines produce just a whisper of thrust, using electric power to ionize atoms of a neutral gas and spit out the particles at high speed. While the drive given by the thrusters is barely noticeable in any one instant, they can operate for months or years, burning scant fuel compared to traditional chemical rockets.

That this decision requires long-winded and extended high-level negotiations at NASA illustrates the slow and lumbering nature of government. Private enterprise is embracing ion engines now, and NASA itself is seeing its own spectacular ion-engine success with Dawn. The decision should be a no-brainer, especially because the benefits of ion engines (low weight, more power, greater flexibility) are so obvious.
<urn:uuid:4358705e-c78c-4851-a5d3-71844ae826bc>
2.96875
330
Truncated
Science & Tech.
27.343571
95,619,687
Having been millions of years in development, the life forms that thrive on Earth have come up with a trick or two for maximizing sustainability. So what better model could we use for sustainable technology than the Earth’s own natural processes? And one of the most impressive of the world’s natural features is the energy storage system of plants. It’s also impossibly difficult to replicate, which is why the recent innovation by MIT’s Dr. Daniel Nocera has the scientific community in a buzz. He has created the world’s first artificial leaf. Not to be confused with the plastic leaves of a fake houseplant, Nocera’s leaf is actually a paper-thin piece of silicon. It absorbs sunlight, which catalysts on either side of the wafer use to split water molecules into hydrogen and oxygen – just like the photosynthesis of a plant. The hydrogen can then be used in fuel cells to create electricity. The essence of the breakthrough lies in its revolutionary capacity to store solar energy. Current solar technologies work well when the sun is shining, but struggle to store energy when it isn’t. Artificial leaves could be a major leap forward for solar power. Ironically, it’s only now, as our actions destabilize the Earth’s ecosystems, that we’ve begun to appreciate the lessons in efficiency and balance of those formerly stable systems. Many of these lessons are about resilience: ecological resilience is the ability of an ecosystem to absorb shocks or disruptions, to bend without breaking. A healthy ecosystem has layers of redundancies that allow it to do this easily. Lately, theorists have begun applying the idea of resilience to human communities and systems in the context of climate change adaptation: a resilient community is one that can adapt quickly and readily to changing circumstances. The artificial leaf mimics nature in its mechanisms and also in its contribution to energy resilience. A solar cell that can store energy is much more dependable than one that cannot.
It also represents a more reliable form of energy production, since it is immune to small-scale shocks, such as a cloudy day. Theoretically, lightweight solar cells could power homes all over the world, since the technology is relatively simple to replicate. “That's why I know this is going to work,” says Nocera. “It's so easy to implement.” Nocera dreams of providing people with “their first 100 watts of energy” in parts of the world where electricity is scarce. 100 watts is enough electricity to power a cell phone with which to receive important updates in an emergency. It’s also sufficient to power a light bulb, so that one can study after work, further one’s education and support one’s family. Along with dozens of other uses, that tiny spark of dependable electricity is enough to make a difference when it’s needed most. Reliable solar power adds another layer of tools and resources that embattled communities can draw on in the face of increasing natural disasters. Resilient technology begets resilient communities. Not bad for a little leaf. 
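The arithmetic behind that "first 100 watts" is worth making concrete; the phone and bulb figures below are rough assumptions, not from the article.

```python
# Rough arithmetic for a steady 100 W supply over one day.
daily_wh = 100 * 24            # 2400 Wh available per day
phone_charge_wh = 12           # assumed energy for one full phone charge
bulb_w = 60                    # assumed incandescent bulb rating
print(daily_wh)                        # 2400
print(daily_wh // phone_charge_wh)     # 200 phone charges' worth of energy
print(daily_wh / bulb_w)               # 40.0 bulb-hours
```

Even a small, dependable supply covers communication and lighting many times over, which is the article's point about resilience.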
<urn:uuid:5e8afca2-f7af-42ed-91c3-ef4e44d7a8b3>
4.1875
972
Nonfiction Writing
Science & Tech.
42.703864
95,619,705
This image was taken by NASA’s Solar Dynamics Observatory on June 20, 2013, at 11:15 p.m. EDT. It shows a solstice flare and an eruption of solar material shooting through the sun’s atmosphere, called a prominence eruption. The flare was a class M-2.9, which is in the low-moderate range. Shortly thereafter, this same region of the sun sent a coronal mass ejection (CME) out into space. Oh, and the night, the night, when the wind full of cosmic space Gnaws at our faces – R. M. Rilke, Duino Elegies (trans. Leslie P. Gartner) IF WE CONSIDER the phenomenon of solar flares, Rilke’s description of how cosmic winds sometimes blow in our faces is surprisingly true. Atmospheric auroras, the merry dancers that illuminate the folklore of all those people familiar with the midnight sun, often reach us here on Earth. Besides producing the beautiful visual displays of the northern lights, these solar winds can have a profound geophysical impact on our planet. The northern lights originate in the potent internal forces of our sun. The aurora borealis visits the skies of every Arctic nation. The lights usually circle the globe in an elliptical band centered on the Earth’s magnetic North Pole and extending 20 degrees or so south of it. Since the magnetic pole is offset from the geographic pole in the direction of Hudson Bay, Canada is prone to more aurora borealis displays than anywhere else on the planet. In fact, 80 to 90 per cent of the most readily accessible landmass under the northern lights’ elliptical band is situated in Canada. Only rarely do they become visible to those who live further south. That happy event is happening right now. Nearly everyone who lives north of the equator should be able to look out a window at the sky and enjoy the pyrotechnic displays of the northern lights for a few weeks this fall.
By the end of August 2013, the increased sun spot activity that is characteristic of the 11-year Solar Maximum cycle will have ejected much more of the sun’s coronal mass into space than in a typical year. These coronal mass ejections (CMEs), many of which begin as solar flares from inside the sun, rip out a chunk of the sun’s atmosphere as they escape its gravity. Comprised mainly of ionized hydrogen, X-rays and gamma rays, CMEs travel more slowly than the speed of light, stretching out like tendrils at a mere 1.4 million kilometers per hour or so. In slow motion, it appears as if the fiery sphere of the sun suddenly grows a tentacle and becomes – temporarily – a living orange man-of-war jellyfish reaching out into deep space for contact. Sometimes, after about a day (or four or five), these CMEs then touch the Earth’s magnetic field. When such geomagnetic storms happen, their beauty can be as immense as their potential for danger. In the past 70 years, there have been five known geomagnetic storms of extreme intensity. An amateur astronomer named Richard Carrington observed the first extreme storm in 1859. The Carrington event shut down much of the world’s telegraph network, making communication impossible and causing telegraph cables worldwide to burn and emit sparks. We have obviously become much more reliant on our electrical grid than we were in Carrington’s lifetime. By 1919, most of North America’s consumed energy had shifted from steam to electricity, and since 1950, continental demand for electricity has increased by nearly 900 per cent. There are now more than 480,000 kilometres of long-range electrical cables in North America. All of them are connected in an overlapping design that is intended to prevent power outages in most regions. In addition, CMEs threaten the satellite network that informs all smart phones and other portable GPS devices. In 2013, nearly a billion new phones entered the worldwide market. 
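The quoted CME speed squares with the "day (or four or five)" travel time; the Sun-Earth distance used below is a standard value not given in the article.

```python
# Sanity check on the figures above: a CME moving ~1.4 million km/h
# across the mean Sun-Earth distance (~149.6 million km, assumed).
SUN_EARTH_KM = 149.6e6
speed_kmh = 1.4e6
hours = SUN_EARTH_KM / speed_kmh
days = hours / 24
print(f"{hours:.0f} hours = {days:.1f} days")   # about 107 hours, ~4.5 days
```

So a CME at the article's quoted speed arrives in roughly four and a half days; faster ejections account for the shorter one-day arrivals.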
This year, the solar flares should be particularly intense because the peak 11-year Solar Maximum also coincides with a longer solar cycle. The Grand Solar Cycle was first observed in May 1921, a year after it began. Michael Lockwood, a professor of Space Environment Physics in the UK, says the coincidence of both cycles (aka a Grand Solar Maximum) means that “the peaks of the 11-year sunspot cycle are” much larger, so the “average number of solar flares and … coronal mass ejections are greater.” The largest, or so-called X-class geomagnetic storms are incredibly powerful – the ground itself becomes electrified. This can cause the same disastrous result as a deliberate terrorist attack against the power grid using an electromagnetic pulse (EMP) weapon. If the May 1921 storm happened today, it would shut down all electrical service to at least 130 million people in North America and burn 350 transformers in the US beyond repair. Sometimes CMEs cause the most damage when a second wave of them reaches the Earth. This is exactly what happened during the Carrington event. The first wave of CMEs in 1859 temporarily disrupted telegraph communications. But within days, solar bombardment wore away the normal level of resistance provided by Earth’s electromagnetic field and the telegraph wires began to burn. When the Earth’s magnetic field is weakened, much more energy touches the surface. It can also cause ground currents to flow into power lines and overload the electrical equipment connected to them, including the heavy transformers on which the grid relies. When these transformers experience such overloads, they explode. Since they take years to build and cost at least a few million apiece, there are not a lot of spares available, and their removal and installation is quite time-consuming. 
Electricity grids can fail following the most powerful CMEs, as they did in 1989 throughout Québec and much of the northeastern US when CMEs entered the Earth’s magnetic field and became a geomagnetic storm. Although the storm lasted only six hours, electrical overloads caused severe damage to the entire grid, leaving six million Québecers without power for at least half a day. The storm was much smaller than the Carrington event. It would cost between $1- and $2-billion to supply surge suppressors to shield America’s electricity grid. But that cost is too much according to the 500 power companies who control the flow of electricity in the United States. Consequently in 2011, the Grid Law, which would have provided such protection, was defeated in the Senate following a powerful initiative by lobbyists. This seems shortsighted since an interruption to the shared North American power grid for only a day or two would bring both the Canadian and US economies to a halt. The daily average GDP in the US in 2012 was slightly more than $41-billion; in Canada, that figure was nearly $5-billion. In other words, we could pay for surge suppressors if we set aside the cash that North Americans will generate during the next hour. What’s more, the dangers of geomagnetic storms are not limited to simple power failures. Unlike 1859, the social impact of a failed power grid would now be crushing because nearly every aspect of modern life depends on electricity. Electric light illuminates our 24-hour culture and keeps our urban spaces safe. Electric heat provides us with comfort in the most austere northern climates or air conditioning in warmer places. We require electricity for cooking, and for freezing and refrigerating our food to store or transport it. Much of our transportation and all of our communications rely on electricity. 
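The "next hour" claim checks out against the article's own GDP figures:

```python
# Checking the article's claim with its own numbers:
us_daily_gdp_b = 41.0   # US daily GDP, $ billions (2012, from the text)
ca_daily_gdp_b = 5.0    # Canadian daily GDP, $ billions (from the text)
hourly_b = (us_daily_gdp_b + ca_daily_gdp_b) / 24
print(f"${hourly_b:.2f} billion per hour")   # ~$1.92B, within the $1-2B cost
```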
And since communication systems (like telephones, radio and the Internet) all rely on the ground-based electrical supply, these are all vulnerable to an intense geomagnetic storm. We have known this since 1972, when AT&T redesigned its trans-Atlantic cable system after a powerful CME shut down the telephone system. In addition to modern communications systems, our vital supply of fresh and potable water is vulnerable to these solar events because it is purified and delivered to our homes by an elaborate network of electrical pumps. Moreover, the vulnerability of these electrical pumps introduces a whole new class of dangers that might result from an extreme X-level geomagnetic storm. Water is far more necessary to the continuation of human life than food. There is another frightening danger represented by the potential failure of electrical pumps as well. In the US, the cooling systems of 104 nuclear reactors depend on electrical power sources. Without water to cool them, the fuel rods within those reactors would soon begin to overheat. If they became so hot that the zirconium in which they are encased began to burn, there would then be little possibility of extinguishing the fire. For this reason, American reactors are required to keep a 30-day supply of diesel fuel on hand at all times to power the nuclear station’s back-up generators. A meltdown like Chernobyl or Fukushima has never occurred in North America, although Three Mile Island was a very close call. Still, a recent report on the nuclear hazards of EMP attacks concludes that only 33 reactors in the US are not vulnerable to total grid failure following an intense geomagnetic storm. The vulnerability is concentrated in two areas: the northwest (Oregon and Washington) and the entire coastal area east of the Mississippi.
Ironically, evacuation plans for both areas assume that the telephones, radio stations and Internet will remain intact, whereas these communication media would shut down in the earliest days of a geomagnetic event. The US emergency response regime would likely deal efficiently with the simultaneous meltdowns of several reactors, but 71 of America’s 104 reactors are now rated as very vulnerable because there is not sufficient shielding or systems redundancy in place to protect them. America’s nuclear vulnerability is shared by Canada, where a central worry is that our nuclear infrastructure is centralized and aging. Two radioactive spills at the Point Lepreau New Brunswick plant in 2011 and 2012 raised security questions at all five of Canada’s nuclear power generating stations, which are mostly located within 200 kilometres of Toronto, the most densely populated region in the country. The Darlington complex in Clarington, Ontario, is still a fairly modern site, completed in 1993. But both nuclear generating stations in Kincardine, Ontario, are now three decades old, and the Pickering plant – just 30 kilometres east of downtown Toronto – was completed in 1971. Pickering is one of the largest nuclear facilities in the world, and generates about 20 per cent of Ontario’s electrical energy. Anxieties about the failure of coolant pumps were heightened in Canada this year after a researcher accidentally shut them down at the Chalk River Nuclear Research Facility northwest of Ottawa on February 27. Canada is currently reviewing its vulnerability to reactor failure from a variety of threats. But in spite of the calculable risks, there is hope that sometime soon we may be able to predict the size and number of solar flares that a sunspot will emit. Probability is on our side; space is vast and only a direct hit by a CME will likely damage the Earth significantly. 
In 2003, the most powerful known CME (rated somewhere between X-28 and X-45) dealt the Earth only a glancing blow, knocking out 14 transformers in South Africa. The sun is eight minutes away at the speed of light, but sunspots form gradually, and CMEs take days to reach Earth. In 1997, NASA launched a satellite called Solar Shield to take on the exclusive task of observing the sun’s surface for sunspot activity. We know that a sunspot first forms as a dark mass on the surface of the sun, perhaps several days before a CME. The size of the darkening spot is a preliminary indication of the future CME’s force. A second sign is an inverse S-curve appearing in the X-ray signature of the sun’s corona. Both are best observed by sensitive X-ray telescopes that are unobstructed by Earth’s atmosphere. However, Solar Shield was built to last only five years, and like many other satellites, it could easily fail during a solar event, when its systems are most subject to heat, particle bombardment and electrical stresses. CMEs often cause satellite failure. During a geomagnetic storm in March 1989, four US navigational satellites had to be shut down for days. Ironically, the Solar Maximum satellite itself fell out of orbit that same year. In 1998, the $250-million Galaxy IV spun out of control 32,000 kilometres above the Earth. Four other satellites failed at the same moment, so the main suspect was a solar flare. In recent years, 12 satellites have definitely been lost to the effects of space weather, and an equal number of CME-related satellite losses are suspected. Much of the contemporary research concerning CMEs and solar flares takes place in Canada. In the 1950s, Canadian scientists fired “sounding rockets” into northern lights displays from Churchill, Manitoba. The Churchill Northern Studies Centre now houses a control centre for research on plasma, magnetospheric physics and CMEs. 
Today a vast network of nearly 60 space-probing instruments stretches across the Canadian North from Goose Bay, Labrador, to Inuvik, NWT. This network will soon be joined by the new $25-million Resolute Scatter Radar-Canada installation in Resolute Bay, Nunavut. All this is part of an emerging transpolar research network focused on the northern skies. If an electromagnetic event like the one Carrington observed in 1859 were to happen this fall, it would scramble radio waves across the planet and disrupt everything from Global Positioning Systems to TVs to mobile devices, without exception. It would damage (in some cases destroy) satellites carrying much of Earth’s radio and television programming, telephone calls, text messages and GPS tracking information. After such an event, even after transmission capability was restored, there might be few satellite transmitters left. In 2012, the US National Academy of Sciences estimated that a severe solar storm might cause $2-trillion damage to the nation’s communications systems alone. We can do a few things to protect the power grid from geomagnetic storms. At $2-billion, the surge suppressor option seems cost-effective and reasonable, although it would take time to complete. So too would the separation of the entire electrical grid into less vulnerable local micro-grids like the ones Thomas Edison first developed to serve individual communities. Finally, the suggestion that we simply ground the world’s electrical grid is eminently doable. We should get to work on that today. Unless there is sufficient warning, it is very difficult to secure the technology we launch into space. For that reason, we should also begin to fund a new generation of solar observation satellites with sufficient redundancy, so there would always be one on the far side of the planet when a CME collided with the Earth’s magnetic field. 
The last task is to find global funding for an accurate method of predicting the size, number and direction of solar flares. Finally, remember that not all solar flares that become CMEs are dangerous. Only when powerful CMEs penetrate the Earth’s magnetic field as a geomagnetic storm is our electricity grid threatened. If that does happen, you can plan ahead to protect your own personal electronics inside a quick-to-assemble Faraday cage, which conducts electrical energy around, rather than into, your device. In the likely event that it doesn’t happen, put your electronics away and enjoy the northern lights show while it lasts.
<urn:uuid:750668d5-bdd9-4daa-b642-f51944e076ad>
3.546875
3,368
Nonfiction Writing
Science & Tech.
42.303583
95,619,706
Introduction to Compartmental Analysis In the context of compartmental analysis, a living organism can be described as an open biological system existing in a steady-state far from thermodynamic equilibrium. Thermodynamic equilibrium is a state in which no biological processes can occur because there are no potential gradients to drive them; no differences in mechanical potential to drive blood flow, in concentrations to drive diffusion, in chemical potentials to drive metabolism, in electrical potentials to drive ions, and in temperature to drive heat flow. Steady-state and thermodynamic equilibrium share the characteristic that they are invariant in time. Thermodynamic equilibrium is also invariant in space. The steady-state variance of constituent chemicals in space is the focus of compartmental analysis. Spatial variance is assigned to the interfaces between abstract compartments rather than to the living system as a whole. As the compartments by this definition are in thermodynamic equilibrium internally, they are incompatible with life but we choose to ignore this fundamental characteristic. Compartmental analysis uses the principles of biophysics and mathematics to determine the velocity of exchanges among the compartments (biochemical processes) and the relative size of the individual compartments (biochemical pools) in vivo, using tracer molecules, defined as markers that do not perturb the system. During a medical study or biological experiment, the tracer and its metabolites assume different states, each of which may be well defined but all of which change and interact as functions of time. Eventually, one or more of these states may reach the steady-state characteristic of the native system, though far from thermodynamic equilibrium. This steady-state can be maintained only in thermodynamically open systems. 
If energy is no longer provided or expended, the potential gradients dissipate and the system decays from its steady-state toward thermodynamic equilibrium.
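As a minimal illustration of the tracer formalism described above (a sketch, not taken from the chapter): a single well-mixed compartment with first-order loss yields a mono-exponential washout curve, whose rate constant links the exchange velocity and pool size that compartmental analysis seeks to estimate in vivo.

```python
import math

def tracer_curve(a0, k, times):
    """Mono-exponential washout from one well-mixed compartment:
    dA/dt = -k*A  =>  A(t) = a0 * exp(-k*t).
    k (1/min) equals the exchange flow divided by the pool size,
    the two quantities compartmental analysis estimates."""
    return [a0 * math.exp(-k * t) for t in times]

# Illustrative: unit tracer dose, k = 0.1 per minute, samples every 10 min.
curve = tracer_curve(1.0, 0.1, range(0, 31, 10))   # t = 0, 10, 20, 30
half_life = math.log(2) / 0.1                      # about 6.9 minutes
```

Fitting such curves to measured tracer data, with more compartments and exchange terms as needed, is the practical core of the method.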
<urn:uuid:db687317-1322-45c6-9c71-fe4e971f187a>
3.171875
383
Truncated
Science & Tech.
3.255217
95,619,710
The new tomographic image of the mantle beneath the French Massif Central reaching a depth of 270 km is interpreted in terms of mantle temperature, considering effects of anharmonicity and anelasticity on seismic velocities as well as effects of mineral reactions, composition and partial melt. For every block of the tomographic model we calculate the absolute temperature required to fit the observed velocity perturbation, the average temperature of the tomographic layer being constrained by P-T estimates from mantle xenoliths and by surface heat flow. From the 3-D temperature distribution we estimate the topography of the thermal lithosphere-asthenosphere boundary as well as 3-D distributions of density, absolute P- and S-velocities and seismic attenuation. The observed velocity perturbations in the mantle beneath the Massif Central can be explained nearly entirely by temperature variations. Temperatures approach the dry peridotite solidus in the depth range from 50 to 90 km just below Cenozoic volcanic areas, but no large-scale partial melting is required to fit the seismic observations. Model temperatures agree well with P-T estimates from mantle xenoliths and measured surface heat flow. Model-predicted seismic velocities, seismic attenuation and density fit well the observations from seismic refractions, surface waves and gravity. The model predicts a broad uplift of the thermal lithosphere-asthenosphere boundary to a depth of 65-70 km with a 50-70 km wide band of stronger lithospheric thinning which crosses the main volcanic fields and strikes parallel to the direction of maximal compression in the crust. The Limagne Graben, which is the major rift structure of the Massif Central, has no clear expression in the topography of the lithosphere-asthenosphere boundary. 
Our interpretation suggests a mantle plume below the central and southern part of the Massif Central with a potential temperature which is about 150-200°C higher than the average potential temperature of the upper mantle. The structure of the lithosphere-asthenosphere boundary provides evidence for a possible thinning of the mantle part of the lithosphere beneath the volcanic fields parallel to the direction of minimal horizontal compression in the crust.
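A heavily simplified sketch of the block-by-block inversion the abstract describes; the linear sensitivity value below is an assumed round number, and the actual mapping in the study also folds in anharmonicity, anelasticity, composition and partial melt.

```python
def delta_t_from_dlnv(dlnv, sensitivity=-1.0e-4):
    """Convert a relative velocity perturbation (d ln V) into a temperature
    anomaly (K) via an assumed constant sensitivity d(lnV)/dT in 1/K.
    Negative sensitivity: hotter rock is seismically slower."""
    return dlnv / sensitivity

# A 1% slow anomaly maps to roughly +100 K with this assumed sensitivity.
print(delta_t_from_dlnv(-0.01))
```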
<urn:uuid:dfa6c415-b936-4939-b7b9-75cb73b31d09>
2.859375
466
Academic Writing
Science & Tech.
11.717507
95,619,712
Parent DisplayObject Library display.* Revision 2018.3332 Keywords group, display group, groups See also display.newGroup() Group Programming (guide) Group objects are a special type of display object. You can add other display objects as children of a group object, and you can also remove them. Even if an object is not visible, it remains in the group object until explicitly removed. Thus, to minimize memory consumption, you should explicitly remove any object that will no longer be used. All objects are implicitly added to the current stage, which is itself a kind of group object. Corona does not have layers or levels, but groups can be used to simulate that functionality. Objects added to groups can be moved and transformed as a unit by controlling the group object. Child objects can also be accessed by numeric index, for example group[1] for the first child. See the Group Programming guide for more information about group objects. Using groups with the physics engine has some limitations — see the Physics Notes/Limitations guide.
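A minimal sketch of the points above (illustrative, not taken from the reference page itself): creating a group, moving its children as a unit, accessing a child by index, and removing an object explicitly to free memory.

```lua
-- Group two display objects and manipulate them together.
local group = display.newGroup()

local rect   = display.newRect( 0, 0, 80, 40 )
local circle = display.newCircle( 0, 60, 20 )
group:insert( rect )
group:insert( circle )

group.x, group.y = 120, 200    -- moves both children at once

print( group.numChildren )     -- 2
print( group[1] == rect )      -- children are accessible by index

circle:removeSelf()            -- explicit removal, as the text advises
circle = nil                   -- drop the reference so it can be collected
```

This snippet assumes the Corona runtime; it will not run in a plain Lua interpreter.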
<urn:uuid:4c4264a0-03d4-4356-a3f2-afdb616e5aa4>
2.96875
204
Documentation
Software Dev.
49.875944
95,619,758
An intermetallic (also called an intermetallic compound, intermetallic alloy, ordered intermetallic alloy, and a long-range-ordered alloy) is a type of metallic alloy that forms a solid-state compound exhibiting defined stoichiometry and ordered crystal structure. Schulze, in 1967, defined intermetallic compounds as solid phases containing two or more metallic elements, with optionally one or more non-metallic elements, whose crystal structure differs from that of the other constituents. Under this definition, the following are included:
- Electron (or Hume-Rothery) compounds
- Size-packing phases, e.g. Laves phases, Frank–Kasper phases and Nowotny phases
- Zintl phases
The definition of a metal is taken to include:
- the so-called post-transition metals, i.e. aluminium, gallium, indium, thallium, tin and lead
- some, if not all, of the metalloids, e.g. silicon, germanium, arsenic, antimony and tellurium
Homogeneous and heterogeneous solid solutions of metals, and interstitial compounds (such as the carbides and nitrides), are excluded under this definition. However, interstitial intermetallic compounds are included, as are alloys of intermetallic compounds with a metal. In common use, the research definition, including post-transition metals and metalloids, is extended to include compounds such as cementite, Fe3C. These compounds, sometimes termed interstitial compounds, can be stoichiometric, and share similar properties to the intermetallic compounds defined above. A B2 intermetallic compound has equal numbers of atoms of two metals, such as aluminum and iron. Intermetallic compounds are generally brittle and have a high melting point. They often offer a compromise between ceramic and metallic properties when hardness and/or resistance to high temperatures is important enough to sacrifice some toughness and ease of processing.
They can also display desirable magnetic, superconducting and chemical properties, due to their strong internal order and mixed (metallic and covalent/ionic) bonding, respectively. Intermetallics have given rise to various novel materials developments. Some examples include alnico and the hydrogen storage materials in nickel metal hydride batteries. Ni3Al, which is the hardening phase in the familiar nickel-base superalloys, and the various titanium aluminides have also attracted interest for turbine blade applications, while the latter is also used in very small quantities for grain refinement of titanium alloys. Silicides, intermetallics involving silicon, are utilized as barrier and contact layers in microelectronics. Properties and examples - Magnetic materials e.g. alnico; sendust; Permendur; FeCo; Terfenol-D - Superconductors e.g. A15 phases; niobium-tin - Hydrogen storage e.g. AB5 compounds (nickel metal hydride batteries) - Shape memory alloys e.g. Cu-Al-Ni (alloys of Cu3Al and nickel); Nitinol (NiTi) - Coating materials e.g. NiAl - High-temperature structural materials e.g. nickel aluminide, Ni3Al - Dental amalgams which are alloys of intermetallics Ag3Sn and Cu3Sn - Gate contact/ barrier layer for microelectronics e.g. TiSi2 - Laves phases (AB2), e.g., MgCu2, MgZn2 and MgNi2. The formation of intermetallics can cause problems. Intermetallics of gold and aluminium can be a significant cause of wire bond failures in semiconductor devices and other microelectronics devices. There are five intermetallic compounds in the binary phase diagram of Al–Au. AuAl2 is known as "purple plague". Au5Al2 is known as "white plague". Intermetallic particles form during solidification of metallic alloys. Examples of intermetallics through history include: German type metal is described as breaking like glass, not bending, softer than copper but more fusible than lead. 
The chemical formula does not agree with the one above; however, the properties match with an intermetallic compound or an alloy of one. - Gerhard Sauthoff: Intermetallics, Wiley-VCH, Weinheim 1995, 165 pages - Intermetallics, Gerhard Sauthoff, Ullmann's Encyclopedia of Industrial Chemistry, Wiley Interscience. (Subscription required) - Electrons, atoms, metals and alloys W. Hume-Rothery Publisher: The Louis Cassier Co. Ltd 1955 - G. E. R. Schulze: Metallphysik, Akademie-Verlag, Berlin 1967 - Cotton, F. Albert; Wilkinson, Geoffrey; Murillo, Carlos A.; Bochmann, Manfred (1999), Advanced Inorganic Chemistry (6th ed.), New York: Wiley-Interscience, ISBN 0-471-19957-5 - "Wings of steel: An alloy of iron and aluminium is as good as titanium, at a tenth of the cost". The Economist. February 7, 2015. Retrieved February 5, 2015. - S.P. Murarka, Metallization Theory and Practice for VLSI and ULSI. Butterworth-Heinemann, Boston, 1993. - Milton Ohring, Materials Science of Thin Films, 2nd Edition, Academic Press, San Diego, CA, 2002, p. 692. - Type-pounding The Penny Cyclopædia of the Society for the Diffusion of Useful Knowledge By Society for the Diffusion of Useful Knowledge (Great Britain), George Long Published 1843 - Intermetallics, scientific journal - Intermetallic Creation and Growth – an article on the Wire Bond Website of the NASA Goddard Space Flight Center. - Intermetallics project (IMPRESS Intermetallics project at the European Space Agency) - Video of an AB5 intermetallic compound solidifying/freezing
<urn:uuid:390120f0-9147-464e-9c2b-5e20689a1095>
3.484375
1,309
Knowledge Article
Science & Tech.
27.820295
95,619,760
Sketching Rupes (Scarps) Part of the Patrick Moore's Practical Astronomy Series book series (PATRICKMOORE) Escarpments on the lunar surface are often labeled with the Latin word “rupes”. A number of processes created scarps on the Moon: © Springer Science+Business Media, LLC 2012
Edited by Jamie (ScienceAid Editor), Taylor (ScienceAid Editor), Jen Moreau

Fermentation is any process in which microorganisms use an external food source for energy. Industrially, the process is carried out in a fermenter, where conditions are closely controlled: paddles mix the contents and a water jacket regulates the temperature. Below is a diagram of a simple fermenter. In industry these would be very large, with many pipes and tubes for various functions.

|Paddles||Rotated inside the fermenter to distribute the mixture evenly.|
|Water Jacket||Cold water is pumped through this to remove heat and maintain the temperature, since respiration by the microorganisms heats the vessel.|
|Data Logger||Measures a range of conditions (temperature, pH, oxygen concentration); the measurements are used to adjust conditions in the fermenter.|
|Products||Removed all at once in batch culture, or bit by bit in continuous culture.|
|Air Supply||Provides oxygen for respiration; it must be sterilized so that no contaminating microorganisms enter the fermenter.|

A fermentation process is used to make yogurt. Milk contains the sugar lactose, and some bacteria ferment lactose to produce lactic acid. Yogurt is produced by batch culture: pasteurized milk is inoculated with the bacteria Lactobacillus bulgaricus and Streptococcus thermophilus, and the mixture is maintained at around 40°C. The bacteria produce lactic acid by respiration, which lowers the pH; when the pH reaches a target value, the product is harvested.

Referencing this Article

If you need to reference this article in your work, you can copy-paste the following depending on your required format:

APA (American Psychological Association)
Fermentation. (2017). In ScienceAid. Retrieved Jul 22, 2018, from https://scienceaid.net/biology/micro/fermentation.html

MLA (Modern Language Association)
"Fermentation." ScienceAid, scienceaid.net/biology/micro/fermentation.html Accessed 22 Jul 2018.

Chicago / Turabian
ScienceAid.net. "Fermentation." Accessed Jul 22, 2018.
https://scienceaid.net/biology/micro/fermentation.html.
Overnight tipping points from a cataclysmic event: impacts, recovery and constraints on rocky reef ecosystems (innovation fund) The uplift of Kaikoura’s coastline due to the November 2016 earthquakes caused an unprecedented loss of kelp forests, which provide habitat and energy for other species. This project is investigating the long-term resilience of kelp thrust into shallow water in the subtidal zone, and the potential mechanisms affecting canopy expansion, colonisation and survival. This will help determine which kelp beds are likely to recover, the environmental conditions likely to promote recovery, and which beds are vulnerable to further decline. Project leader: Leigh Tait, NIWA Investigating the recovery of Kaikoura's coastal kelp forests The November 2016 earthquake lifted around 50km of Kaikoura’s coastline, causing an unprecedented loss of kelp forests that provide habitat and energy for other species. This event will significantly affect nutrient cycling, primary productivity and overall functioning of nearshore ecosystems in what was one of New Zealand’s most productive coastal zones. In New Zealand the recovery of kelp forests can be very slow, especially if disrupted by sediment and/or colonisation by turfing algae. Healthy kelp bed ecosystems maintain themselves through positive-feedback cycles that inhibit the growth of turfing algae and buffer sediment from being resuspended by wave action. However, negative feedback loops associated with turfing algae can develop if environmental conditions degrade, which can be triggered by kelp loss due to natural events such as storms. Because extensive reef areas on Kaikoura’s coast are now devoid of kelp and dominated by turfing species, recovery of the habitat-forming kelp forests is threatened – especially in areas of high sedimentation. 
We are assessing changes in how much light is penetrating the water in three regions along Kaikōura's coast that have differing levels of sedimentation and varying populations of surviving kelp: southern region – Oaro to Hikurangi Marine Reserve; central region – Kaikōura Peninsula; northern region – Waipapa Bay and Okiwi Bay. This work complements what MPI is doing.

To assess how abundant – and healthy – the remaining kelp forests are, we are using novel equipment to measure the metabolism of the kelp forests, and protocols to assess the long-term health and resilience of key species in the affected areas. This will enable us to determine the long-term resilience of kelp thrust into shallow water in the subtidal zone, as well as the potential mechanisms affecting canopy expansion, colonisation and survival. We are:
- Quantifying the effects of kelp loss on carbon cycling
- Assessing differences in water clarity within and between areas, and the implications for kelp bed resilience and recovery
- Estimating critical tipping point thresholds associated with reduced buffering due to kelp loss

This information will help determine which kelp beds are likely to recover, the environmental conditions likely to promote recovery, and which beds are vulnerable to further decline. Where mitigation is possible and practical we will recommend measures to limit further damage (e.g., by restricting or changing human activities that contribute to sedimentation).

Latest news and updates

Improving marine management is critical to New Zealand's future health and wealth, but research in isolation is not enough. Excellent engagement with, and participation from, all users and sectors of society is essential. We therefore invite comment on our draft strategy for Phase II (2019–2024). This strategy has been co-developed with Māori and stakeholders.
During Seaweek, more than 4,600 school pupils joined 6 Sustainable Seas researchers for 3 days of marine science fieldwork in Tasman Bay, as part of the LEARNZ virtual field trip Sustainable seas – essential for New Zealand’s health and wealth.
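Water-clarity assessments like those described in this project are commonly expressed as a diffuse attenuation coefficient, Kd. The sketch below is not the project's actual method — instrument names and depths are assumptions — but shows the standard calculation from irradiance measured at two depths, assuming exponential light decay E(z) = E(0)·exp(−Kd·z).

```python
import math

def diffuse_attenuation(e_shallow, e_deep, z_shallow, z_deep):
    """Diffuse attenuation coefficient Kd (per metre) from irradiance
    readings at two depths, assuming exponential decay with depth."""
    return math.log(e_shallow / e_deep) / (z_deep - z_shallow)

# Example with made-up readings: irradiance drops from 800 to 200 units
# between 1 m and 5 m depth.
kd = diffuse_attenuation(800.0, 200.0, 1.0, 5.0)
print(round(kd, 3))  # ln(4)/4 ≈ 0.347 per metre
```

Higher Kd means murkier water: less light reaches the depths where uplifted kelp now sits, which is why sedimentation bears directly on recovery prospects.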
Best programming languages 2018 →

The web industry has changed greatly in the past two decades. In its early days it was all about having a website, and HTML was the primary language for building one. Static websites present a pre-defined set of information, such as a business profile; dynamic websites interact with the user and show information that depends on the user's data, such as an online railway reservation site. The first question in every developer's mind is which programming language to choose for a project. What programming language should you learn? That's why we're here today with a list of the best programming languages of 2018 for web developers. Check out the list below and leave a comment if you like it!

Best programming languages 2018 List

Java
Java is one of the most popular programming languages on the web. It was developed by Sun Microsystems and is open source, meaning it is available free of cost. It can be used for developing stand-alone programs as well as applets embedded in websites, and it combines object-oriented and functional programming characteristics.

.NET
.NET is a framework developed by Microsoft in 2000, used for many software and web-based applications. It is mainly a Windows-based framework. It supports the Common Language Infrastructure and a range of CLI languages such as C#, F#, J#, and Visual Basic .NET, among others.

PHP
PHP stands for PHP Hypertext Preprocessor, an interpreted scripting language. It is best suited for server-side programming, including the recurring server-side tasks performed while your website runs. As a fast-prototyping language, it is well suited to web applications that need maximum functionality from minimum code.
It's well suited to advertising apps, agencies, media, small software businesses, and startup owners.

Don't miss: Top 10 best PHP development tools

Python
Python is a dynamic language, meaning the developer can write and run code without invoking a separate compile step. It supports several programming models, including object-oriented and structured programming, and functional programming to a certain extent. It is a superb language for scientific, academic, and research applications that need rapid development and accurate mathematical calculation.

Ruby
Developed by Yukihiro Matsumoto starting in 1993, Ruby is a programming language that balances functional programming with imperative programming. It is a dynamic, object-oriented language supporting multiple paradigms (functional, imperative, and object-oriented), with a syntax somewhat similar to that of Python and Perl.

So that is all about the best programming languages of 2018 for web developers. If you liked this post, please share it with others.

Note: we update this article regularly, so please bookmark it in your web browser.
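To illustrate the dynamic, multi-paradigm style the Python entry above describes, here is a minimal sketch (not from the article): a functional-style construct inside a plain function, runnable directly with no separate compilation step.

```python
# Functional style: build the squares with a generator expression
# and fold them with the built-in sum().
def sum_of_squares(n):
    """Sum of k*k for k in 1..n."""
    return sum(k * k for k in range(1, n + 1))

# Dynamic typing: the same body works unchanged for ints and floats.
print(sum_of_squares(4))  # 1 + 4 + 9 + 16 = 30
```

The script runs as-is with `python3 script.py` — the interpreter compiles to bytecode transparently, which is what "no separate compiler" means in practice.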
Mice with deviant internal rhythms due to a genetic mutation have fewer offspring and shorter life spans than normal conspecifics whose rhythms follow the 24-hour cycle of a day more closely. This discovery was made by a team of scientists led by researchers from the Max Planck Institute for Ornithology and Princeton University. Internal clocks that generate daily rhythms in living beings are among the most important evolutionary innovations on earth; they are essential for coordinating the processes of life with the environment. The study on mice shows that a deviation of internal rhythms from the 24-hour rotation of the earth has a direct influence on biological fitness.

Almost all living things possess internal clocks that govern periods of sleep and waking, and ensure that these processes are in synchrony with night and day. This circadian clock evolved to allow the anticipation of regular daily events; sunlight aligns the internal clock with the 24-hour rhythm of the earth's rotation. A fundamental, so far unanswered question has been: is the functioning of the internal clock important for how long an organism lives and how well it reproduces in its natural environment?

Mutations in certain genes can disrupt the internal clock so that it runs out of sync with the day-night cycle. In mice, a mutation called tau is known to alter daily rhythms: mice carrying this mutation run through their day about two hours faster than normal mice. Scientists from the Max Planck Institute for Ornithology in Seewiesen and Radolfzell, together with colleagues from the University of Groningen, the University of Manchester and Princeton University, studied the biological fitness of such mice with deviant circadian rhythms in a large outdoor enclosure for over a year, where they were exposed to natural predators. At the beginning of the study the researchers divided 238 mice into six groups.
For each group they housed an identical mix of mice without the mutation together with mice carrying either one or two copies of the mutation. Each mouse was equipped with a transponder so that the scientists could monitor its activity rhythms at feeders. Mice with one or two copies of the mutation showed aberrant daily rhythms. Mice without the mutation lived longer and produced more offspring than mutation carriers with abnormal rhythms. As a consequence, after more than one year the prevalence of the mutation in the population dropped from an initial 50 percent to only about 20 percent in the last cohort studied. This finding led the researchers to conclude that strong selection pressures must act against the tau mutation in a natural environment. "Our findings highlight the fundamental importance of circadian clocks for the biological fitness of living beings. This has never been shown so clearly", summarizes senior author Michaela Hau. (SL/HR)

Prof. Dr. Michaela Hau
Max-Planck-Institut für Ornithologie
Abteilung Evolutionäre Physiologie
Tel. +49 (0) 8157 932-273

Dr. Kamiel Spoelstra
Netherlands Institute for Ecology, Wageningen
Department of Animal Ecology
Phone +31 (0)317 473 453

Dr. Sabine Spehn | Max-Planck-Institut für Ornithologie
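The reported drop in mutation prevalence — from about 50 percent to about 20 percent over the study — can be turned into a rough per-generation selection estimate. The sketch below is not the authors' analysis: it assumes a simple single-locus selection model, p' = p(1−s)/(1−sp), and an assumed generation count of four, purely for illustration.

```python
# Rough illustration (not the study's statistics): find a constant
# per-generation selection coefficient s consistent with an allele
# falling from 50% to 20% frequency in an assumed 4 generations.

def frequency_after(p0, s, generations):
    """Allele frequency after repeated selection p' = p(1-s)/(1-s*p)."""
    p = p0
    for _ in range(generations):
        p = p * (1 - s) / (1 - s * p)
    return p

def estimate_s(p0=0.5, p_end=0.2, generations=4):
    # Bisection: the final frequency decreases monotonically in s.
    lo, hi = 0.0, 0.99
    for _ in range(60):
        mid = (lo + hi) / 2
        if frequency_after(p0, mid, generations) > p_end:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s = estimate_s()
print(round(s, 2))  # a substantial per-generation fitness cost
```

Even under these toy assumptions the implied selection coefficient is large, which is consistent with the authors' conclusion that selection against the tau mutation in the wild is strong.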