What is it?
It is a project that contains (or should contain) the same samples per computer-language topic across multiple languages. For example, if you look up methods in Java, you should find the same examples in Groovy, LISP, C, C++, etc. Any deviation from the primary examples will be presented with more clarity and explanation.
What will it not do?
It will not cover the multitude of libraries across different languages. This focuses on the languages themselves and the capabilities they provide.
How many languages are there?
Right now the aim is 10. This is a work in progress.
How do I look up material?
Each language has a README.md; click on the language and view the topics. Some topics will take you to the code directly, some will cross-reference you to another topic. Say, as a Java programmer, you know what a List is, but when you look up Ruby there are no lists; the Ruby README will cross-reference you to arrays, which is how Ruby creates list-like collections.
Do you accept pull requests?
Absolutely. Be warned, I am aiming for a certain aesthetic and layout, so it may not happen right away.
Orcaella brevirostris (Mekong River subpopulation)
|Scientific Name:||Orcaella brevirostris (Mekong River subpopulation)|
|Red List Category & Criteria:||Critically Endangered C2a(i,ii); D ver 3.1|
|Assessor(s):||Smith, B.D. & Beasley, I.|
|Reviewer(s):||Reeves, R. & Taylor, B.L. (Cetacean Red List Authority)|
The best estimate of abundance for the Kratie to Khone Falls river segment is 69 individuals, based on the pool-count survey in May 2003. This number is probably close to the actual size of the Mekong subpopulation because of:
1) the low probability that dolphins occur below Kratie and in the Sekong River during the low-water season, and in Tonle Sap (Great Lake) and its connecting channel at any time, as indicated by interview surveys and the observations by researchers conducting water-bird surveys and hydrological investigations;
2) the nearly comprehensive search coverage of navigable channels during vessel-based surveys; and
3) the 100% match in concurrent detections of dolphin groups and close agreement in group size estimates by land-based and boat-based observers.
Guidelines for considering measurement error (Annex 1: Uncertainty, in IUCN 2001) suggest using plausible lower bounds, rather than best estimates, to determine population size. In the case of Mekong dolphins, this implies that it would be appropriate to use 57, the sum of minimum estimates of group size from the May 2003 pool-count survey, as the estimate of abundance for this subpopulation. The threshold of 50 mature individuals for listing a species or population as CR according to Criterion D (and C2a(i)) refers to the number of individuals known, estimated or inferred to be capable of reproduction. Although the proportion of mature individuals typical for this species is unknown, it is reasonable (and certainly precautionary) to infer that the number of mature individuals in the Mekong River is less than 50. Therefore, the subpopulation qualifies for listing as CR based on Criterion D. The subpopulation also qualifies as CR on the basis of Criterion C2a (i,ii) as it certainly has fewer than 250 mature individuals, a continuing decline in the number of mature individuals is projected or inferred based on the continuing known threat of gillnet entanglement and potential threats from water development and navigation improvement projects, and both subcriteria of subcriterion 2a apply (no subpopulation contains more than 50 mature individuals and more than 90% of the total mature individuals are in one subpopulation).
|Range Description:||The Irrawaddy dolphin is patchily distributed in shallow, near-shore tropical and subtropical marine waters of the Indo-Pacific, from northeastern Australia in the south, north to the Philippines (Dolar et al. 2002) and west to northeastern India (Stacey and Leatherwood 1997; Stacey and Arnold 1999). Its marine distribution is concentrated in estuaries and semi-enclosed water bodies (i.e., bays and sounds), generally adjacent to mangrove forests. Freshwater populations occur in three river systems - the Mahakam of Indonesia, the Ayeyarwady (formerly Irrawaddy) of Myanmar (formerly Burma) and the Mekong of southern Laos, Cambodia and Viet Nam. Irrawaddy dolphins also occur in partially isolated brackish or fresh-water bodies, including Chilka Lake in India and Songkhla Lake in Thailand. The first published record of the species in the Mekong was from the diary of a 19th century explorer (Mouhout 1966). |
The effective range of Irrawaddy dolphins in the Mekong River is a 190 km segment from Kratie (about 500 km upstream of the river mouth in Viet Nam) to slightly upstream of the Laos/Cambodia border at Khone Falls (or Lee Pee), which physically obstructs further upstream movement. During the high water season (June-October), anecdotal reports suggested that the dolphins ascended the Sekong River and its tributaries, the Houay Khaliang, Xepian (to Xepha Falls about 50 km above the Sekong confluence), Xenamnoi (to Tatkhek Falls about 8 km above the Sekong confluence), and Xekaman (to about 50 km above the Sekong confluence and including the Houay Twai tributary and possibly the Xepian). In the Sekong, the dolphins were reported to range as far upstream as Kalaum Town, Laos, about 280 km above the Mekong confluence near Stung Treng, Cambodia (Baird and Mounsouphom, 1997). Based on interviews conducted by Baird (1997) and Beasley et al. (2003), dolphins probably now only rarely, if ever, ascend these rivers. During interview surveys downstream from Kratie to Phnom Penh, children were unaware of the existence of dolphins, whereas adults reported that before 1975, dolphins were observed every day during both low and high water seasons (Isabel Beasley, pers. comm.).
Based on visual surveys conducted by Beasley et al. (2003), dolphins are frequently found during the low-water season in nine deep areas in the Kratie to Khone Falls segment. Approximately 2 km below the falls, dolphins regularly occur in a small (ca. 600 m diameter), deep (> 50 m during the high-water season) pool, known locally in Laos as Boong Pa Gooang and in Cambodia as Anlong Chiteal. Dolphins were observed daily in the pool during the dry seasons of 1992-93, generally in groups of 2-10; 17 were seen at least once (Baird et al. 1994). Using visual and acoustic methods, Borsani (1999) estimated that there were 8-10 dolphins present in Boong Pa Gooang in late March/early April 1998. Other pools occupied by dolphins in the Kratie to Khone Falls segment are at Koh Suntuk, Kang Kohn Sat and Tbong Klar in the Stung Treng Province, and Sampan, Khasak Makak, Gopidau, Chroy Bantey and Kampi in the Kratie Province. Kampi pool, located 15 km north of Kratie, is currently considered the most important dolphin habitat in the Mekong, due to the 100% reliability of sightings, as recorded during 165 visits to the pool over three years, and the relatively large number of animals observed during each visit (mean group size = 7, range = 1-19) (Beasley et al. 2003, Beasley, unpublished).
Dolphins previously inhabited Tonle Sap (Great Lake) (Lloze 1973) but apparently have been extirpated there. Fishing is extremely intensive within the lake and in the channel connecting it to the Mekong. Researchers conducting extensive water-bird surveys in the lake from 1999-2003 have not observed any dolphins (Frederic Goes, pers. comm.). Moreover, researchers from the Water Utilization Program - Finnish International Development Aid (WUP-FIN) Tonle Sap Modeling Project visited sampling stations throughout the lake every month from 2001-2003 and never observed dolphins (Juha Sarkkula, pers. comm.).
The only documentation of Irrawaddy dolphins in the Mekong of Viet Nam consists of a few records reported by Lloze (1973), skulls housed in whale temples near the delta and in the mouth of the nearby Dong River (Smith et al. 1997, Beasley et al. 2003) and a single carcass found in a fishing net in the Tien distributary near the Cambodian border in March 2002 (Chung and Ho 2002). During a survey of almost the entire length (224 km) of the two main distributaries of the Mekong, Tien and Hau Giang, in April 1996, Smith et al. (1997) were unable to find a single dolphin.
During March and May 1997, Baird (1997) observed 40 dolphins in the segment of the Mekong from Kratie to the Laos/Cambodia border. He estimated, on the basis of surveys and interviews, that the total population in the Mekong was roughly 100 individuals. Beasley et al. (2003) conducted 11 boat-based direct-count surveys, traveling upstream from Jum Neight (about 30 km downstream of Kratie) to the Laos/Cambodia border during January-May from 2001 to 2003. All navigable channels were surveyed (zigzagging when widths were greater than one km and transiting through the center when less than 1 km). Unsurveyed channels were either too shallow or unsafe to survey due to high-velocity currents, which also meant a low probability of dolphins occurring there; interview surveys of local people living along these channels supported this assumption. The largest number of dolphins observed during an upstream survey conducted at the height of the low-water season in April 2003 was 64. This number was based on the sum of best estimates of group size, with a range of 55-82, according to the sum of low and high estimates of group size, respectively. A slightly different method was used during 2002-2003, traveling downstream in the same river segment but stopping for 10-30 minutes in the nine deep pool areas where dolphins had been observed during previous surveys. The maximum number of animals recorded using the pool-count method was 69 individuals based on the sum of best estimates of group size, with a range of 57-84 individuals based on low and high estimates, during a survey in May 2003. Paired observation experiments, using land-based survey teams and concurrent boat-based observations, were conducted during each of the pool-count surveys in an attempt to assess the proportion of dolphins missed by the boat-based team. This resulted in a 100% match between the two methods in terms of the number of dolphin groups detected in each of the nine pools and very similar group size estimates when experienced observers were present on both teams (Beasley et al. 2003).
Native: Cambodia; Lao People's Democratic Republic; Viet Nam
|Population:||No quantitative estimates of population trends are available, but significant range declines in Tonle Sap (Great Lake) and the Mekong mainstem below Kratie imply that the number of dolphins in the Mekong River has declined substantially over the past several decades. Also, for small cetaceans generally, it is recommended that yearly removals should not exceed 1-2% of the population size (Wade 1998) - the lower bound being more applicable to very small populations that are already vulnerable because of demographic and genetic factors. Four deaths per year (the mean number of carcasses recovered and determined to have died from gillnet entanglement in 2001-2003; Beasley et al. and Beasley [unpublished]) would represent 5.8% of the population, assuming a best estimate of abundance of 69, based on pool count surveys. Considering that the small size of the Mekong population already makes it vulnerable to demographic stochasticity, inbreeding depression and catastrophic environmental and epizootic events, the current rate of incidental mortality in gillnets will almost certainly lead to extirpation; a numerical check of these figures follows below the table.|
|Current Population Trend:||Decreasing|
|Habitat and Ecology:||Irrawaddy dolphins inhabit deep pools of large rivers, sheltered inshore marine environments with substantial freshwater inputs, and partially isolated brackish or freshwater bodies (Stacey and Leatherwood 1997, Stacey and Arnold 1999, Smith and Jefferson 2002).|
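As flagged in the Population entry above, the key percentages can be checked directly from the cited figures (four recovered gillnet deaths per year against the best abundance estimate of 69; the algebra is my own rendering, not from the assessment):

\[ \frac{4}{69} \approx 0.058 = 5.8\%\ \text{per year}, \qquad \text{vs. the Wade (1998) guideline of } 0.01 \times 69 \approx 0.7 \ \text{to}\ 0.02 \times 69 \approx 1.4 \ \text{deaths per year.} \]

On these numbers, the observed mortality is roughly three to eight times what the guideline would permit for a population of this size.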
Anecdotal reports suggest high dolphin mortality from deliberate killing for oil (reportedly for use in the motors of fishing boats and lamps) during the rule of the Khmer Rouge in 1970-1975 and then from target practice and the effects of explosives used for blast fishing during the Vietnamese occupation in 1975-1980. Dedicated studies on the dolphin population in 1990-1996 (Baird and Mounsouphom 1997) and 2000-2003 (Beasley et al. 2002, Beasley, unpublished) also recorded high mortality, with a large proportion of the deaths caused by gillnet entanglement.
Smith et al. (1997) noted the presence of several dozen 200-400 m long stow nets in the Mekong River mouth, followed upstream by more than 10 rows of nylon gillnets stretched across the entire channel, with only small openings to permit vessel traffic. Those authors suggested that dolphin bycatch and displacement caused by the nets could explain the lack of cetacean sightings during their survey of the lower Mekong in Viet Nam during April 1996.
Potential additional threats
Numerous dams have been proposed for the Mekong River system. If built, these would degrade essential habitat features and interrupt the movements of dolphins and their prey. Of greatest concern are the large run-of-the-river dams proposed for the Mekong mainstem at Stung Treng and Sambor (Perrin et al. 1996). In the Sekong River system, at least two dams have been proposed tens of kilometers below the reported upstream limit of the dolphins. Dolphins are also threatened in the Sekong system by the proposed Xekaman and Xepian/Xenamnoi dam projects. This last project would divert almost all of the flow from the Xepian River to a reservoir behind another dam in the Xenamnoi River (Baird and Mounsouphom 1997).
Proposed navigation improvement schemes, which entail blasting the pool-riffle sequences that compose dolphin habitat, would probably lead to a dramatic decline, if not extinction, of the Mekong dolphin population due to the direct effects of the explosions and the indirect effects from eliminating or severely degrading their deep pool habitat. Prey declines from overfishing (particularly from the use of explosives and electricity) and unregulated dolphin-watching tourism may also be affecting the population.
Dolphins in the Mekong River receive some degree of protection from the traditional respect afforded by local fishermen (Baird et al. 1994, Beasley et al. 2003). Fishermen in Viet Nam worship whales and dolphins because they believe that the animals will aid them if they are in distress (Smith et al. 1997). Most Cambodians and Laotians say that they do not hunt dolphins and believe that bad luck will result from killing them (Baird et al. 1994). The Lao Community Fisheries and Dolphin Protection Project was working with local fishermen at Chiteal Pool to reduce incidental catches of dolphins in gillnets, stop explosive fishing and manage aquatic resources in a sustainable manner (Perrin et al. 1996). One practical measure was the establishment of a fund so that fishermen who found dolphins entangled in their nets and cut them free would be compensated for damages (Baird et al. 1994). However, this project has now stopped. Small-scale dolphin watching operations were established at Chiteal Pool (Laos/Cambodia border) in 1997, and this provided substantial income to a few local boat owners. However, due to the decline in dolphin numbers at this pool, the tour operations are now on the verge of collapse. Dolphin-watching also occurs at Kampi Pool and a project partially funded by the Whale and Dolphin Conservation Society, UK, is planned for 2004 to manage the operations so that they provide maximum benefits to the local community (and thereby increase the value of the dolphins as living resources) while not adversely affecting the animals.
In Laos, dolphins are legally protected from hunting, capture and trade, with fines of US$ 65-650 and imprisonment for three months to one year. In Viet Nam, all cetaceans are protected by a decree of the national assembly but this is not generally enforced. During the last three years, the Vietnamese government has been drafting a new law that will give authorities greater power to enforce fishery regulations (Perrin et al., in press). Approval by the national assembly is expected in the near future. No legal protection for cetaceans currently exists in Cambodia. However, a fisheries law is being drafted that includes specific regulations pertaining to marine mammals, and the Cambodian Department of Fisheries has proposed to formulate a Royal Decree for protecting the Mekong River dolphin population.
The Mekong Dolphin Conservation Project (currently supported by the Mekong River Commission and Ocean Park Conservation Foundation) was initiated in January 2001. The aims of the project are to assess the Mekong Irrawaddy dolphin population, initiate conservation and management efforts, and build capacity among local government officials. The project works in cooperation with the Cambodian Department of Fisheries, the Wildlife Conservation Society - Cambodia Program and Community Aid Abroad.
|Citation:||Smith, B.D. & Beasley, I. 2004. Orcaella brevirostris (Mekong River subpopulation). The IUCN Red List of Threatened Species 2004: e.T44555A10919444. Downloaded on 20 July 2018.|
An important paper by Colin Studds and colleagues shines a spotlight on the Yellow Sea, where waders/shorebirds have lost vast areas of feeding habitat during China’s economic boom.
Waders make some of the most remarkable migratory journeys in the bird world and many rely on a few key estuaries to refuel, especially as they head north to breed. For hundreds of thousands of waders on the East Asian-Australasian Flyway, from tiny Red-necked Stints to Far Eastern Curlews, the Yellow Sea is absolutely crucial. A new paper by Colin Studds and sixteen colleagues collates the available information on current population trends of waders using this flyway and shows how these relate to the reliance of each species on the Yellow Sea. The more a species relies on disappearing mudflats, between China and the Korean peninsula, the faster it is declining.
As Colin Studds says: “Scientists have long believed that loss of these rest stops could be related to the declines, but there was no smoking gun.” Now there is. The new paper is published in Nature Communications.
Establishing the routes
Over the last twenty years, satellite tracking has revealed the amazing migratory journeys of shorebirds on the East Asian-Australasian Flyway. The most famous wader ever must be E7, the first Bar-tailed Godwit to be tracked from Alaska to New Zealand in one continuous flight, covering 11,600 km in 9 days. When E7 flew from New Zealand to the Yellow Sea in the next spring, on the first leg of its return journey, that was another flight of 10,200 km in 7 days.
It’s not just Bar-tailed Godwits that link New Zealand and Australia to the Yellow Sea. Colour-ringing has established that at least 10 wader species use this staging area during their northward migration in spring.
Counts by Birdwatchers
Annual counts of waders have been taking place in sites across Australia and New Zealand since the early 1980s. Colin Studds and his colleagues use data collected during the non-breeding seasons between 1993 and 2012 from 43 of these key locations. The analysis relies on the work of scores of volunteer birdwatchers who undertake these counts during the months from October through to March. The count data used in this paper focused on December and January, when there is least likelihood of within-season movement. Some of the declines have been dramatic; in twenty years, the number of Far Eastern Curlew fell by about 60%, with a 75% drop in Curlew Sandpipers, just to give two examples.
If numbers are going down, then that suggests that these waders are failing to breed as successfully as they once did or that the adults themselves are dying in larger numbers than used to be the case – or both. The fact that no changes have been observed in the proportion of juveniles in flocks strongly suggests that survival rate is the key demographic parameter upon which to focus when trying to understand population declines.
Declining survival rates
Colour-ring observations not only establish migratory links, they also provide the raw data from which annual survival rates can be estimated. A typical annual survival rate for an adult wader is between around 70% and 90%. If the survival rate is 90%, and 50 female waders lay an average of 4 eggs during a breeding season, then only 10 of the chicks need to hatch and reach breeding age for the population to remain stable. If that same level of productivity occurred but the survival rate for adults changes to 80%, then the chance of an adult dying in any given year doubles and the population will drop by half in just six years. This illustrates that a fall in survival of just 10% can have serious implications for the population trajectory.
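As a check on that claim, here is the arithmetic implied by the paragraph above, assuming 50 males alongside the 50 females (i.e., 100 adults) and 10 recruits per year; the symbols are mine, not from the paper:

\[ \lambda = s + r, \qquad r = \tfrac{10}{100} = 0.1 \]
\[ s = 0.9:\ \lambda = 1.0 \ (\text{stable}); \qquad s = 0.8:\ \lambda = 0.9, \quad 0.9^{6} \approx 0.53 \ (\text{about half in six years}). \]

So the same productivity that exactly balances 90% survival leaves a 10% annual deficit at 80% survival, which compounds to a halving in roughly six years.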
The counts of non-breeding waders in Australia suggested that there were major changes in numbers for several species between 2010 and 2012. When Theunis Piersma and colleagues analysed the colour-ring sightings for populations of three species that spend the non-breeding season in Australia and breed in eastern Siberia – Bar-tailed Godwit, Great Knot and Red Knot – they discovered a decline in annual survival of between 18% and 19% in just two years. Their paper raised serious alarm bells. All three populations spend time in the Yellow Sea on their spring migration and Theunis argued that rapid habitat loss in the Yellow Sea was the most likely explanation of reduced summer survival, with dire (but uncertain) forecasts for the future of these flyway populations.
There is a global review of survival rates in this paper: Méndez, V., Alves, J. A., Gill, J. A. and Gunnarsson, T. G. (2018), Patterns and processes in shorebird survival rates: a global review. Ibis. doi:10.1111/ibi.12586. The paper is summarised in this WaderTales blog: Measuring shorebird survival.
Reliance on the Yellow Sea
It is estimated that nearly 30% of Yellow Sea tidal mudflats have been lost to coastal development in the last 30 years and China is forecast to undergo up to 14% expansion in urban development over the next 15 years, much of it concentrated on the margins of the Yellow Sea. Within the remaining mudflats, there have been increases in algal blooms, heavy metal deposits and areas of invasive Spartina alterniflora, the last of which reduces mudflat availability. All of these changes have the potential to put huge pressures on waders that are fattening up for the last leg of their migratory journey to arctic breeding grounds.
Previous work on waders in Japan, by Tatsuya Amano and colleagues, had shown that wader species relying on the Yellow Sea while on migration are declining more quickly than those that are not; but Japan is itself on the migratory flyway, so this result could have been confounded by changes in migratory route. By using data from the non-breeding season and looking at a wider range of species, Colin Studds and his colleagues have been able to link reliance on the Yellow Sea with the magnitude of population changes.
A key element of the new paper is the compilation of available data on flyway population sizes, migratory connectivity and Yellow Sea count data, in order to estimate the proportion of each species that rely upon the Yellow Sea. At the lowest end is the Grey-tailed Tattler, only 3% of which use the area, whilst 100% of the menzbieri subspecies of Bar-tailed Godwit rely on the Yellow Sea. When reliance is plotted against annual population trend the fit is remarkable. Interestingly, there are two very similar subspecies in the analysis; whilst the menzbieri Bar-tailed Godwits are estimated to have been declining by 6.1% per annum, the baueri subspecies, which is only 50% reliant on the Yellow Sea, has ‘only’ been declining at 1.4% per annum.
Emerging Conservation Action
This paper provides further evidence of the huge importance of the Yellow Sea. To quote Richard Fuller, the team leader of this research: “Every country along the migration route of these birds must protect habitat and reduce hunting to prevent the birds declining further or even going extinct.” Issues facing birds that use the flyway are being successfully highlighted by the East Asian-Australasian Flyway Partnership. Australia has signed agreements with China, Korea and Japan to protect migratory birds, and China and South Korea have recently begun the process of listing parts of the Yellow Sea as World Heritage Sites. As well as development controls, a range of mitigation actions are discussed in the paper – let’s hope that they are pursued with enthusiasm.
Update – January 2018
“Great news for shorebirds! China to halt coastal land reclamation”. Read more in this BirdLife International article.
The paper is free to download
The results of this study have been published as Rapid population decline in migratory shorebirds relying on Yellow Sea tidal mudflats as stopover sites (Nature Communications 8:14895 | DOI: 10.1038/ncomms14895)
The authors are: Colin E. Studds, Bruce E. Kendall, Nicholas J. Murray, Howard B. Wilson, Danny I. Rogers, Robert S. Clemens, Ken Gosbell, Chris J. Hassell, Rosalind Jessop, David S. Melville, David A. Milton, Clive D.T. Minton, Hugh P. Possingham, Adrian C. Riegen, Phil Straw, Eric J.Woehler & Richard A. Fuller.
WaderTales blogs are written by Graham Appleton, to celebrate waders and wader research. Many of the articles are based on previously published papers, with the aim of making wader science available to a broader audience.
Climate bulletin - July 2017
Summary of the world's climate in July 2017
The global average temperature for July 2017 as estimated from the HadCRUT4 data set was 0.65±0.18°C above the 1961-1990 average. Globally, July 2017 was one of the seventeen warmest Julys on record but most likely the fourth warmest. Global temperature data sets maintained by NASA GISS, NOAA NCEI, Berkeley Earth and C3S also show that July was a very warm month globally. July was nominally between 1st and 4th warmest in these data sets. Sea-surface temperatures across the breadth of the tropical Pacific were near average, indicating neutral ENSO conditions.
The global average land temperature for July 2017 was 1.08 ± 0.28°C above the 1961-1990 average. For global land areas July 2017 was nominally the warmest July on record and very likely one of the top thirteen. Unusual warmth – temperatures exceeding the 90th percentile for the month – was recorded across western parts of the US and Canada, areas of Central America, Alaska, many areas of Africa where observations are available, Madagascar, Australia (where average daily maximum temperatures for July beat the previous record by a clear margin), southern India, Iceland and in a band running from the Mediterranean, through the Middle East to China and Japan. Few areas were unusually cold with temperatures below the 10th percentile. These were small areas in the southeast of Brazil and in southern China. An unusual cold spell from 14-21 July affected Argentina and neighbouring countries.
The global average sea-surface temperature for July 2017 was 0.54 ± 0.08°C above the 1961-1990 average, nominally the fourth warmest on record and very likely between second and tenth warmest. Sea surface temperatures in the tropical Pacific were near average, indicating neutral ENSO conditions. Areas of unusually warm SST included: the western Pacific, two bands around 30°N and 30°S in the Pacific, the Mediterranean and large areas of the Indian Ocean (except an area just west of Australia). The tropical Atlantic, Gulf of Mexico and western north Atlantic were much warmer than average in July. SSTs in the Atlantic hurricane Main Development Region (MDR) were warmer than average, with SSTs in some areas exceeding the 98th percentile. Areas of unusually low SST included limited regions of the southeast Pacific, south Atlantic and eastern Indian Ocean. Another area of below average SST – where historical coverage is too low to accurately assess the significance of current anomalies – was an extended area in the Atlantic sector of the Southern Ocean which has persisted for several months and has been spreading slowly east.
As in May, April and June, there was a band of cooler-than-average waters in the North Pacific at around 45°N surrounded by areas of warmer-than-average waters. This pattern is characteristic of the positive phase of the Pacific Decadal Oscillation (PDO). Some measures of the PDO have been indicating a shift to its positive phase since the start of 2014. However, on short time scales, SST patterns associated with the PDO look very similar to those associated with El Niño. As ENSO conditions are currently neutral, this suggests a more persistent shift to the positive phase of the PDO. The negative phase of the PDO has been associated with a reduction in the rate of global temperature increase since the start of the millennium.
Higher than average precipitation totals (based on the monthly first-guess analysis by the Global Precipitation Climatology Centre, GPCC) were recorded in a band running from Ireland (it was the wettest July at Shannon airport since 1946) and the UK, across Germany into eastern Europe and Russia. Parts of Norway and Sweden and southeast Europe were also wetter than average. Northwest Canada saw higher than average rainfall totals in July as did parts of the southwest and northeast US, eastern Brazil, New Zealand, areas of the Arabian Peninsula, western China, areas of Southeast Asia, including Vietnam and parts of Indonesia. Above average rainfall caused flooding in Gujarat in western India and heavy rain caused flooding in northeast India early in the month. In Japan, heavy rains led to flooding in the north of the main island, Honshu.
Drier than average conditions were recorded: along the west coast of the US and areas of southern Canada; large areas of Brazil, Bolivia and Paraguay; Portugal; Italy; southern France (which has also been affected by wildfires) and parts of the Balkans. Austria found itself on the boundary between wet conditions to the north and dry conditions in the south. Australia was drier than average in the south, and much wetter than average in parts of the Northern Territory.
Based on data from the HadISST2 data set and from the National Snow and Ice Data Center, the Northern Hemisphere (Arctic) sea ice extent in July 2017 was likely to have been between the 3rd and 7th least extensive in the satellite record for July. There is some uncertainty in the ranking as a number of years have very similar extents. Southern Hemisphere (Antarctic) sea ice extent was nominally the least extensive or 2nd least extensive July on record in both data sets. Sea-ice extent in the Antarctic has been unusually low since late last year, likely due to high remnant sea-surface temperatures following the El Niño and unusual atmospheric circulation. For more details and analysis of the ice extents including updates throughout the summer, see the sea-ice monitoring brief.
NASA: Hubble Telescope Detects Mysterious Dark Spot on Neptune
NASA has detected a strange dark vortex on Neptune's surface. Images sent from the Hubble Space Telescope have revealed that a dark spot suddenly opened on the planet's surface and it's perplexing astronomers.
According to NASA, this is the first observation of a dark vortex on Neptune in the 21st century. Past observations occurred in 1989 via Voyager 2 and in 1994 via the Hubble Telescope. The most famous dark spot discovery on Neptune was in 1989. Tagged as the Great Dark Spot, it was located in Neptune's southern atmosphere and was approximately as large as the Earth, The Verge reports.
Dark vortices form in Neptune's atmosphere when clouds of air and gas swirl and freeze up. This creates a single solid mass that moves through the planet's atmosphere.
Mike Wong, leader of the study from the University of California, Berkeley, described these dark vortices as “huge, lens-shaped gaseous mountains” that move through the atmosphere. Gizmodo notes that the dark vortices' appearances, such as their size, shape and lifespan, all differ. However, what's always constant is the stream of pancake-shaped bright clouds accompanying them, called orographic clouds.
The Outer Planet Atmospheres Legacy (OPAL) program captured the dark spot on Neptune in September 2015. To better observe the phenomenon, the team created a higher-quality map of the dark vortex and its surroundings using the new images from Hubble. Wong and his team announced the discovery on May 17, 2016 in a Central Bureau for Astronomical Telegrams (CBAT) electronic telegram.
There is still limited data about dark vortices. Through this third sighting, scientists at NASA hope to learn more about the origin and behavior of dark vortices, as well as how they interact with Neptune's surroundings, through continued observation.
OCaml Introduction: Tuples and Lists Jeff Meister CSE 130, Winter 2011 So far, we have only dealt with expressions of single values of a single type, like 5 : int or 9.7 : float or "cse130" : string (note that an OCaml string is not made of multiple chars, but has its own built-in type). There are plenty of combinations left to consider. What about compound values? The language has a few built-in constructs for these, including tuples and lists. The simple overview: a tuple can hold values of different types, but only a fixed number of them; a list can hold an unlimited number of values, but only of the same type. You might well ask, what about holding an unlimited number of values of different types? I would reply that such a collection is nearly useless. Because every function has exactly one input type, you can’t call any function on every element of the collection, so why collect them together? Counting up how many you have is about all you can do. The fixed-length restriction of the tuple allows us to pair up each contained value with its corresponding type, and then we know which functions are OK to call on which values. Of course, it would be acceptable to form a list of values of different types if you defined a new type that is the logical disjunction of each different one you need (say, int or float or int -> string), along with a tag so that every value of this new type is labeled to distinguish which case of the or it is. OCaml provides a mechanism to do exactly this, which we will see later on.
For now, back to tuples and lists. Here is a 4-tuple of someone’s personal information: # let person = ("larry", 47, 165, ’M’);; val person : string * int * int * char = ("larry", 47, 165, ’M’) In general, the type of an n-tuple is written as an ordered sequence of n types, separated by *. The * is pronounced “cross” (think cross product), but more intuitively it means and. So, this personal information contains a string (name), an int (age), another int (weight), and a char (sex). However, the tuple itself is not written using *, because in expression-land that is the integer multiplication operator. To write down an n-tuple, we write the n values in order between parentheses and separated by commas. Any tuple that has a different number of values inside will have a different type than any other tuple, and these types are not compatible. If I add a new element for height, I will have a 5-tuple; functions written for 4-tuples will not work on 5-tuples without modification, and vice versa. The way to get values back out of a tuple is by pattern matching. This is an important concept that you’ll see throughout the course. A pattern is like an expression, but it appears on the left side of an = or -> instead of on the right side. The pattern “looks like” the value of the expression it’s matching, but if you put names where the sub-expressions would go, OCaml will bind the names to the corresponding values for you. Like so: # let (name, age, _, sex) = person;; val name : string = "larry" val age : int = 47 val sex : char = ’M’
Because person is a 4-tuple, I need a pattern that looks like 4 things inside parens separated by commas. I’d like to extract the name, age, and sex, so I put those names into the appropriate spots in the pattern. I don’t care about the person’s weight for whatever reason, but because I’m writing a 4-tuple pattern, I must fill in that spot; for this purpose, OCaml provides the dummy pattern name _. Notice that all three values are extracted in one shot; conceptually, the pattern matches them all simultaneously.
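To round this out, the same tuple pattern can be written directly in a function's parameter, so destructuring happens at each call. (This sketch is mine, not from the original notes; the function name is arbitrary.)

(* the parameter pattern destructures the 4-tuple; weight is ignored with _ *)
let describe (name, age, _, sex) =
  name ^ " is " ^ string_of_int age ^ " (" ^ String.make 1 sex ^ ")"

(* describe person evaluates to "larry is 47 (M)" *)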
The unit type
Besides n-tuples for n ≥ 1 (a 1-tuple is just a single, i.e., a plain type with no *s like we had before), there is also a 0-tuple. To understand it, think of types as sets of values. The type int represents the set of all signed 31-bit integers; char represents the set of all unsigned 8-bit integers; the type float * int represents the set of all float-int pairs; and so on. The 0-tuple has type unit, and this type represents the set containing exactly one element, namely () (also pronounced “unit”). This unit value conveys no information. It is useful only when that’s precisely what you want: to indicate that there is no value, and that’s OK. For example, the function print_string takes a string as input, causes the external effect of writing it to stdout, and then has nothing to return. Because all functions in OCaml must take an argument and return a value, print_string instead returns the () value. Moreover, the function print_newline simply terminates the current line on stdout; it doesn’t even require any input. Again, unit is employed. # print_string;; - : string -> unit = <fun> # print_newline;; - : unit -> unit = <fun> # print_string "the objective is caml"; print_newline ();; the objective is caml - : unit = () Notice (and yet also ignore) the single semicolon sequencing the two print function calls. We have not seen this yet. It works simply by evaluating the expression on the left, throwing the value away, and then evaluating the expression on the right. The only reason you would evaluate an expression with the intent to discard its value is if the expression has some other side effect (like printing to the screen). Usually, in this case, the value discarded will be (). OCaml will warn if you try to discard a value of any other type. In general, unless you’re inserting debugging print calls into your code, you should not be doing imperative-style statement sequencing with the semicolon. The following code is no good, and trying to write this will produce horrible results, either ugly compiler errors or many points deducted on exams:

let x = 1;                (* WRONG *)
let y = 2;                (* BAD IDEA *)
return (x + y, x - y);    (* DON'T DO THIS *)
The proper way to sequence code like this is with let-in: let x = 1 in let y = 2 in (x + y, x - y)
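For contrast, here is the one legitimate semicolon use mentioned above: sequencing unit-valued debugging prints before returning a value (a sketch of my own, not from the notes).

(* each print_* call returns (), which the semicolon discards *)
let add_traced x y =
  print_string "add_traced called"; print_newline ();
  x + y  (* the last expression is the value of the whole body *)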
We do have a use for the semicolon, though. Whereas tuples were written with their elements inside parentheses and separated by commas, lists are written with their elements inside square brackets and separated by semicolons. Here’s one way to write the list of integers from 1 to 5:
# [1; 2; 3; 4; 5];; - : int list = [1; 2; 3; 4; 5] Unlike tuples, we’re not listing the type of every element in the type of the collection, because they are all required to have the same type, which in this case is int. Any list containing any number of int elements has type int list, even the empty list, which contains 0 ints and is written [] (pronounced “nil”). The key operation on lists is the infix operator ::, pronounced “cons” (think construct a new list). It takes an element and sticks it at the front of an existing list, returning a new list as a result (the existing list is not modified and cannot ever be modified). The expression 1 :: [2; 3; 4; 5] takes the element 1 and puts it at the front of [2; 3; 4; 5] to yield the new list [1; 2; 3; 4; 5]. In fact, the bracket-and-semicolon way of writing things is just syntactic sugar, a notational convenience. That means the following expressions are equivalent ways of writing the same list value:

[1; 2; 3; 4; 5]
1 :: [2; 3; 4; 5]
1 :: 2 :: [3; 4; 5]
1 :: 2 :: 3 :: [4; 5]
1 :: 2 :: 3 :: 4 :: [5]
1 :: 2 :: 3 :: 4 :: 5 :: []
1 :: (2 :: (3 :: (4 :: (5 :: []))))

The parentheses are only included in the last case to show that cons is right-associative, unlike most binary operators like +. Because the syntactic sugar is based on (and rewritten to) the more fundamental cons notation, I will not consider it for the moment. That leaves us with precisely two ways of creating lists. We can either build the empty list [], or we can build a list by cons-ing an item to an existing list, which must contain items of the same type. In fact, that is exactly the definition of a list, as built into OCaml: type ’a list = [] | :: of ’a * ’a list The ’a, pronounced “alpha”, is a type variable. The definition expresses the following facts: [] is an ’a list; also, x :: y is an ’a list provided that x is a single ’a and y is an ’a list. This is just what I said above. To put it another way, the type system enforces that only ints can be “consed” onto an int list with ::. Trying to do otherwise will elicit a type error: # 1 :: ["foo"; "bar"; "baz"];; Error: This expression has type string but an expression was expected of type int OCaml complains: you are trying to cons 1, an int, onto a list, so the list had better contain ints; but "foo" is a string. Type variables such as ’a are OCaml’s way of doing parametric polymorphism, like Java generics. Java has classes like ArrayList<E> that take a type parameter for the elements of the list; you can instantiate ArrayList<Integer> or ArrayList<String>, and so on. Similarly, you can have int list or string list in OCaml, by instantiating the ’a type parameter. However, you do not have to write these instantiations, because OCaml will figure them out for you! We’ll see later on how it does that.
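To see that inference in action, here is roughly what the toplevel reports for a few literal lists (a sketch of expected output):

# [1; 2; 3];;
- : int list = [1; 2; 3]
# ["foo"; "bar"];;
- : string list = ["foo"; "bar"]
# [];;
- : 'a list = []  (* stays polymorphic until an element pins down 'a *)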
List pattern matching
As with tuples, lists are not very useful unless we can extract and operate on the items inside them. But unlike tuples, we can’t tell from the type of a list how many items there are. It could be the empty list [] with no items at all; or it could be a nonempty list, with one item like 1 :: [], with two items like 1 :: 2 :: [], or with any unlimited positive number of items. Destructuring a list with let will cause a
warning to this effect: # let x = [1; 2; 3; 4; 5];; val x : int list = [1; 2; 3; 4; 5] # let head :: tail = x in head + 10;; Warning 8: this pattern-matching is not exhaustive. Here is an example of a value that is not matched: [] - : int = 11 The proper way to get values out of a list is by using match-with, a more powerful construct that allows you to test whether your list matches empty or nonempty patterns, in addition to extracting and binding the head and tail values in the nonempty case. Using it makes the warning go away: # match x with | [] -> 0 | head :: tail -> head + 10;; - : int = 11 An important point: the entire match-with construct is an expression that returns a value, like if-then-else and most other things in OCaml (and unlike the if and switch statements you know from C or Java). That means I can write something like:
answer = pattern1 pattern2 ... patternN
match something with -> expr1 -> expr2 -> exprN
Like all values in OCaml, the one we just named answer must have a single type. However, which value we get depends on which expr we evaluate, which itself depends on what pattern matches the value of something. The type system cannot predict this; therefore, every expr1 through exprN must have the same type. As a consequence, unlike similar constructions in imperative languages, you cannot have a case of a match-with that “does nothing”. Using the dummy unit value introduced earlier for one case would only be acceptable if every other case also had type unit. Thus, I must provide an expression for the empty list case, even though I know it’s not going to be evaluated at run time. All cases (introduced by |) must have the same type, so I arbitrarily chose the int 0. If you need to fill in a match case and you really cannot produce a sensible value, you can write assert false, which indicates to OCaml that your program is never intended to reach that point at run time. If it does, you’ll get an exception with the location of the failure. I could have written that instead of 0, but only because I am sure that the value of x is not (I just bound it to a nonempty list). As we’ll see below, and in your assignments, you’ll be writing recursive list functions where that assumption no longer holds. Anyway, in both of these pattern-matchings, we have extracted just one item from the front of the list and added 10 to it, producing a single int. That’s not very interesting. How can we access all the other items? Well, our pattern bound the name head to the first int 1, and it bound tail to the rest of the int list [2; 3; 4; 5]. If we could just repeat the same process on the tail, of binding its head, adding 10, then proceeding on its tail, and so on, we would eventually add 10 to all the numbers. Let’s write a recursive function to do exactly that, and keep track of the answers it produces in a list:
# let rec add10 nums = match nums with | -> | head :: tail -> head + 10 :: add10 tail;; val add10 : int list -> int list = # add10 x;; - : int list = [11; 12; 13; 14; 15] Think about how this function works. Here, the empty list case is important: not only could someone pass an empty list to this function, but the empty list is also the base case of its recursion. Fortunately, we have a sensible value of type int list to return: if you want a list that’s just like but with 10 added to all its numbers, then you just want , because there are no numbers in there to begin with. In the nonempty list case, we add 10 to the head as before, but additionally, we stick the result on the front of the list that we get from recursively adding 10 to all the numbers in the tail, using the same function add10. Make sure you are convinced that this works, and that you’re not just writing such functions down without knowing why you wrote them and believing they are true, or this class will quickly become very difficult to understand! It’s not so bad though, just think about what expression the function is building up, and trace through its evaluation. Each time, it does head + 10 :: then makes a recursive call using tail. The first time we call the function on x, the parameter nums matches the pattern head :: tail, with head = 1 and tail = [2; 3; 4; 5]. So we have 11 ::, and now we have to call the function again on tail. This time, parameter nums matches the pattern head :: tail with head = 2 and tail = [3; 4; 5], so we have 11 :: (12 ::, and we call the function again. This process continues until we reach the base case of , at which point we have built the expression 11 :: (12 :: (13 :: (14 :: (15 :: )))) Again, the parens are just for clarity; because of operator precedence I didn’t need to write them in the function. Now we have an expression we know how to evaluate! Of course, it results in the list value [11; 12; 13; 14; 15]. You’ll see plenty more functions on lists very soon, but I want to reinforce one more thing: using a pattern like h :: t to bind names to the components of a list does not modify the original list. Lists are immutable; there is no way to modify them. | <urn:uuid:5c8efdb2-2dff-4950-86e0-4a90039c5106> | 3.671875 | 3,606 | Documentation | Software Dev. | 60.585839 | 95,568,378 |
Protoplast-Independent Production of Transgenic Plants
Problems encountered with protoplast-based methods for the generation of transgenic plants have prompted the development of alternative techniques for gene transfer in grasses. These problems relate mainly to plant regeneration from protoplasts and do not reflect specific barriers to the uptake of foreign DNA by isolated protoplasts. Examples of these difficulties are relatively low plating efficiencies and low plant regeneration frequencies from protoplasts, species and genotype dependence often observed in the regeneration process, and albinism and somaclonal variation revealed in protoplast-derived plants (Potrykus 1990). Plant regeneration from protoplasts is thus a delicate process depending upon parameters that are not under experimental control, such as wound response and genotype-dependent competence for regeneration (Vasil 1988; Potrykus 1990). Furthermore, transgenic plants recovered from protoplasts may show serious fertility constraints and undesired integration of multiple and rearranged transgene copies (Spangenberg et al. 1995a).
Keywords: Transgenic Plant, Tall Fescue, Perennial Ryegrass, Immature Zygotic Embryo, Italian Ryegrass
- Dalton SJ, Bettany AJE, Timms E, Morris P (1998) Transgenic plants of Lolium multiflorum, Lolium perenne, Festuca arundinacea and Agrostis stolonifera by silicon carbide fibre-mediated transformation of cell suspension cultures. Plant Sci (in press)
- Dunder E, Dawson J, Suttie J, Pace G (1995) Maize transformation by microprojectile bombardment of immature embryos. In: Potrykus I, Spangenberg G (eds) Gene transfer to plants. Springer, Berlin Heidelberg New York, pp 127–138
- Hensgens LAM, de Bakker EPHM, van Os-Ruygrok EP, Rueb S, van de Mark F, van der Maas HM, van der Veen S, Kooman-Gersmann M, Hart L, Schilperoort RA (1993) Transient and stable expression of gusA fusions with rice genes in rice, barley and perennial ryegrass. Plant Mol Biol 22: 1101–1127
- Klein TM (1995) The biolistic transformation system. In: Potrykus I, Spangenberg G (eds) Gene transfer to plants. Springer, Berlin Heidelberg New York, pp 115–117
- Pérez-Vicente R, Wen XD, Wang ZY, Leduc N, Sautter C, Wehrli E, Potrykus I, Spangenberg G (1993) Culture of vegetative and floral meristems in ryegrasses: potential targets for microballistic transformation. J Plant Physiol 142: 610–617
- Sanford JC, Klein TM, Wolf ED, Allen NJ (1987) Delivery of substances into cells and tissues using a particle bombardment process. J Particulate Sci Technol 6: 559–563
- Spangenberg G, Wang ZY, Wu XL, Nagel J, Iglesias VA, Potrykus I (1995a) Transgenic tall fescue (Festuca arundinacea) and red fescue (F. rubra) plants from microprojectile bombardment of embryogenic suspension cells. J Plant Physiol 145: 693–701
- Spangenberg G, Wang ZY, Potrykus I (1998) Biotechnology in forage and turf grass improvement. In: Frankel R, Grossman M, Linskens HF, Maliga P, Riley R (eds) Monographs on theoretical and applied genetics, this volume. Springer, Berlin Heidelberg New York
- Wang K, Frame BR, Drayton PR, Thompson JA (1995) Silicon carbide whisker-mediated transformation: regeneration of transgenic maize plants. In: Potrykus I, Spangenberg G (eds) Gene transfer to plants. Springer, Berlin Heidelberg New York, pp 186–192
- Ye XD (1997) Gene transfer to ryegrasses (Lolium spp.): modification of fructan metabolism in transgenic plants. PhD Diss, Swiss Federal Institute of Technology, Zürich
- Ye XD, Wang ZY, Wu XL, Potrykus I, Spangenberg G (1997) Transgenic Italian ryegrass (Lolium multiflorum) plants from microprojectile bombardment of embryogenic suspension cells. Plant Cell Rep 16: 379–384
There are a number of derivations that solve for the potentials and currents involved in an EDT system numerically. In addition to this theory, which has been derived for a non-flowing plasma, current collection in space occurs in a flowing plasma, which introduces another collection effect.
Potential tether applications can be seen below. In field emission, electrons tunnel through the potential barrier, rather than escaping over it as in thermionic emission or photoemission.
This is, however, an extreme oversimplification of the concept.
Over the years, numerous applications for electrodynamic tethers have been identified for potential use in industry, government, and scientific exploration. Thomas Edison's electric lighting discoveries were first shown in September. In this regard, see also the homopolar generator.
Electrons are accelerated from the insert region, through the orifice, to the keeper, which is always at a more positive bias. As the beam energy is increased, the total number of escaping electrons can be seen to increase.
This book requires that you first read Calculus Special Relativity. An electric meter to measure the amount electrodynamic electricity used in electrodynamic office or house for electrodynamic purposes was also demonstrated.
As such, an electrodynamic tether can be completely self-powered, apart from the initial charge in the batteries used to supply electrical power for the deployment and startup procedure.
The orbital-motion-limit (OML) regime is attained when the cylinder radius is small enough that all incoming particle trajectories terminating on the cylinder's surface connect to the background plasma, regardless of their initial angular momentum.
Work has also been accomplished on current collection using porous spheres, by Stone et al. All of the theory presented is used towards developing a current collection model to account for all conditions encountered during an EDT mission.
TC electron emission will occur in one of two different regimes: temperature-limited or space-charge-limited current flow. This figure is symmetrically set up, so either end can be used as the anode.
This effect then has a chance to cause the libration amplitudes to grow and eventually cause wild oscillations, such as the 'skip-rope effect', but that is beyond the scope of this derivation.
Wires came from the electric lights and went to an adjacent room where there was a generator set up to produce electricity. The wires passed through keyholes to the adjoining room. When a tether is moved at a velocity v at right angles to the Earth's magnetic field B, an electric field is observed in the tether's frame of reference. The z-axis is defined as up-down from the Earth's surface, as seen in the figure below.
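In standard textbook notation (my rendering, not taken verbatim from this page), the motional field seen in the tether frame and the resulting EMF along a straight tether of length L are:

\[ \vec{E}_m = \vec{v} \times \vec{B}, \qquad V_{\mathrm{emf}} = \int_0^{L} (\vec{v} \times \vec{B}) \cdot d\vec{\ell} \]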
The Electro-Dynamic Light Company was the first company organized specifically to manufacture and sell incandescent electric light bulbs.
An electrodynamic tether can be described as a type of thermodynamically "open system".
Monomolecular wire is a type of wire consisting of a single strand of strongly bonded atoms or molecules, like carbon nanotubes.
Monomolecular wire is often used as a weapon in fiction. It has applications in cutting objects and severing adjacent molecules. A similar or identical concept may be called a microfilament wire or, as a weapon, a microfilament whip.
One of the first references in fiction to a monofilament is in John Brunner's Stand on Zanzibar (1968), where hobby terrorists deploy this off-the-shelf General Technics product across roads to kill or injure the people passing there. According to Brunner, the monofilament will easily cut through glass, metal and flesh, but in any non-strained structure the molecules will immediately rebond. No harm is done if the cut object is not under mechanical stress.
An early example of a substance similar to monomolecular wire is the 'borazon-tungsten filament' from G. Randall Garrett's "Thin Edge" (Analog, Dec 1963). The main character uses a strand from an asteroid towing-cable to cut jail bars and to booby-trap the door of his room. Frank Herbert later described shigawire in his Dune novels. First making its appearance in Dune (1965), shigawire is a metallic extrusion produced naturally from a ground vine found on the planets Salusa Secundus and III Delta Kaising. It varies in diameter from approximately 1.5 cm down to monomolecular (micronic) diameters, and is notable for its incredible tensile and mechanical strength. Shigawire is able to cut through almost any material cleanly, possessing edges that are incredibly sharp. It is a weapon of choice for assassins.
Monomolecular wire is a plot element in the short story "Johnny Mnemonic" by William Gibson. The assassin following the protagonist has a diamond spindle of monomolecular wire (or filament) implanted in his thumb, the idea being that diamond is also made of a single molecule and thus hard enough to not be cut by a monomolecular wire. The top of a prosthesis, attached to the other side of the wire, was used as a weight and the wire could be used as a whip-like weapon or a garotte.
Monomolecular wire (in the form of wide 'tapes' of a "pseudo-one-dimensional modified diamond crystal") is used as the basic building material of the space elevator in Arthur C. Clarke's novel The Fountains of Paradise.
Monomolecular wires are seen in the Star Wars expanded universe, Cyber City Oedo 808, Hyperion Cantos, Robert J. Sawyer's Illegal Alien, Battle Angel Alita, Naruto, Akame ga Kill, Hellsing, Trinity Blood, My-Hime, Vampire Knight, Simon R. Green's Deathstalker series, Alastair Reynolds's Revelation Space universe, One Piece as Doflamingo's String-String devil fruit, and the roleplaying games Shadowrun and Cyberpunk 2020. Monomolecular wires are also seen in Larry Niven's "Known Space" universe as human-produced "Sinclair Molecule Chain".
In the One Piece manga, the character Donquixote Doflamingo ate the Ito Ito no Mi, a devil fruit that grants the user the ability to create and manipulate strings. He is capable of creating strings so thin that they cannot be seen, and he can use this ability to ensnare people and control them like a puppet. His strings are also incredibly strong, being able to cut through stone with ease.
Various Imperial and alien technologies in the Warhammer 40,000 universe use monomolecular blades or wire offensively. Possibly the most notable example are Eldar Warp Spiders, whose Deathspinner weaponry traps targets in a mesh of such filaments or the Dark Eldar Shredder weapon which shoots meshes of it.
The game Chaos Overlords featured a weapon 'monom rod' which used this technology.
Sion Eltnam Atlasia wields a monofilament whip called the Etherlite in Melty Blood.
In the 2000 film XChange, the main character acquires an Urban survival Kit which includes a monomolecular wire.
| <urn:uuid:a8b6e9d5-be7a-44fb-89c4-1f46e77bf925> | 3.046875 | 1,096 | Knowledge Article | Science & Tech. | 39.374693 | 95,568,405 |
Datasets for evolutionary comparative genomics
© BioMed Central Ltd 2005
Published: 28 July 2005
Many decisions about genome sequencing projects are directed by perceived gaps in the tree of life, or towards model organisms. With the goal of a better understanding of biology through the lens of evolution, however, there are additional genomes that are worth sequencing. One such rationale for whole-genome sequencing is discussed here, along with other important strategies for understanding the phenotypic divergence of species.
Bioinformaticists and computational biologists working in the field of comparative genomics are largely dependent on datasets generated by others. Working with available data quickly creates the desire for complementary datasets to fill knowledge gaps. In addition to writing grants for experimental laboratories and molecular biology supplies, one can also write an opinion piece to convince others to do some of the dirty work for you; this is what I am attempting to do here. Comparative genomics starts with sequencing. Many have suggested gaps in the tree of life where additional genome projects would augment current knowledge, either to shorten long 'branches' on the tree of sequenced genomes or to complement existing genome projects. For example, there remain huge gaps in our knowledge of archaea. But with the faith that these gaps will ultimately be filled in, in this article I focus on alternative strategies for directing genomic resources so as to answer fundamental questions in evolution.
The tape of life
A whole class of genomic experiments can be hypothesized through what can be called the 'tape of life' question. Stephen J. Gould wrote in his book Wonderful Life, "Wind back the tape of life to the early days of the Burgess shale; let it play again from an identical starting point, and the chance becomes vanishingly small that anything like human intelligence would grace the replay". At the molecular level, the tape of life has been played in parallel. Different species have gone from a similar ancestral point to a similar derived phenotype. In these cases, are the same molecules and pathways driving the phenotypic evolution? Comparative genomics gives us unprecedented opportunities to answer such questions.
A few studies have tried to address the tape-of-life question through analysis of a single gene, such as the melanocortin-1 receptor (MC1R). This receptor plays a role in pigmentation and body/hair color, representing an obvious link between selectable genotype and phenotype. MC1R has been demonstrated to be under such selective pressure in various birds and mammals. In another set of studies, the transcription factor Pitx1, involved in hindlimb formation, has been implicated in parallel evolution of morphologically very distinct types of stickleback fish. At a genomic level, there are whole classes of experiments that can be proposed where phenotypic evolution is the driving force.
Rapid phenotypic evolution
Examination of the tape-of-life question or rapid phenotypic evolution does not need to involve entire genome sequencing. Large-scale full-length cDNA [12, 13] and upstream promoter sequences can be generated more cheaply but contain much of the relevant functional information. The molecular basis for changes in coding sequence function, gene expression, and possibly alternative splicing is likely to be contained within such data. Ultimately, population-level data in the form of single nucleotide polymorphisms (SNPs) linked to biogeography will also be desirable, to shed light on the process of speciation.
In addition to coding-sequence evolution, changes in alternative splicing patterns and gene-expression levels and patterns can also contribute to lineage-specific diversification. Large-scale inter-specific datasets that characterize relative splice-site usage or splice-variant frequencies would be valuable. An initial study comparing alternative splicing patterns in mouse, rat, and human led to the conclusion that alternative splice variants, like gene duplicates, have been used as a testbed for evolutionary novelty.
Changes in gene expression have become the leading candidates as drivers of evolutionary novelty, dating back to Allan Wilson's attempt to explain the phenotypic divergence between human and chimpanzee. Pioneering work on the evolution of regulatory networks in echinoderms has pointed to a major role for changes in the expression of key regulatory proteins during development in driving morphological change. A systematic examination of gene-expression changes in higher primates has also been presented. The molecular variation in the human population that affects gene expression and that is subject to the diversifying selection and fixation seen in inter-specific studies is also being characterized, and can be related to chimpanzee sequences in a bid to understand lineage-specific evolution. Extending this in a well-controlled study across larger portions of the tree of life (initially at the inter-specific level) is warranted.
Both relative gene-expression levels and relative alternative splicing levels are continuous variables, unlike sequences that are discretely A, C, G or T. There are methods for reconstructing such values over a phylogenetic tree and parsing changes onto branches, coupled to a reconstruction of the regulatory sequences that govern such processes (see, for example, [19]). The power of harnessing phylogenetic information not only provides an understanding of the molecular basis for organismal phenotypic divergence but can also be used to reduce the background 'noise' in attempts to understand basic principles of transcriptional regulation, mRNA splicing, and protein folding and function [19, 20].
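To make the tree-based reconstruction idea concrete, here is a minimal sketch of estimating a continuous character (such as a relative expression level) at the root of a phylogeny, using Felsenstein-style pruning under a Brownian-motion model. The tree encoding and the numbers are invented for illustration and are not taken from the cited methods.

```python
# Leaves are (value, branch_length) pairs; internal nodes are
# (left_subtree, right_subtree, branch_length) triples.

def prune(node):
    """Return (estimated character value, effective branch length)."""
    if len(node) == 2:          # leaf
        return node
    left, right, branch = node
    x_l, v_l = prune(left)
    x_r, v_r = prune(right)
    # Weight each child inversely to its accumulated branch-length variance.
    x = (x_l / v_l + x_r / v_r) / (1.0 / v_l + 1.0 / v_r)
    v = branch + (v_l * v_r) / (v_l + v_r)   # variance passed up the tree
    return x, v

# ((A, B), C): two similar ingroup values and one more distant outgroup.
tree = (((2.0, 1.0), (2.4, 1.0), 1.0), (1.1, 2.0), 0.0)
root_estimate, _ = prune(tree)   # about 1.7 for these numbers
```

The same recursion, applied at every internal node, yields ancestral estimates along each branch, which is what lets changes be parsed onto particular lineages.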
Even within the completed genomes that we already have, there are many unknown genes. Phylogenetic focusing (systematically attempting to sequence such genes from closely related species) will help us understand how they evolved, their function, and the evolution of novel genes in general. This can also be applied to rare protein structures, in order to understand the process of neostructuralization by searching for phylogenetic intermediates that provide a 'missing link' sequence. Phylogenetic focusing will be greatly aided by the establishment of local DNA banks containing genomic DNA from regionally specific species. This will also aid nations and their regions in understanding local biodiversity.
Ohno [21], and subsequently Lynch and Conery [22], proposed a major role for gene duplication in the generation of evolutionary novelty. Wilson and Davidson and colleagues have done the same for gene expression [15, 16]; the Lee lab has done the same for alternative splicing [14]. All are probably right to some degree, as evolution is opportunistic and different regulatory mechanisms have potentially different selectable outcomes. Generating datasets that enable us to integrate such knowledge and output better models (also drawing on work in population genetics, structural genomics, and systems biology) will allow a better understanding of biology, with evolution at its core. This article aims to continue a dialog between experimental and computational researchers towards the aim of a better understanding of genomes, and to encourage experimentalists to provide the community with even more varieties of genomic data.
- Gould SJ: Wonderful Life: The Burgess Shale and the Nature of History. 1989, New York: W.W. Norton & Company
- Mundy NI, Badcock NS, Hart T, Scribner K, Janssen K, Nadeau NJ: Conserved genetic basis of a quantitative plumage trait involved in mate choice. Science. 2004, 303: 1870-1873. 10.1126/science.1093834.
- Nachman MW, Hoekstra HE, D'Agostino SL: The genetic basis of adaptive melanism in pocket mice. Proc Natl Acad Sci USA. 2003, 100: 5268-5273. 10.1073/pnas.0431157100.
- Shapiro MD, Marks ME, Peichel CL, Blackman BK, Nereng KS, Jonsson B, Schluter D, Kingsley DM: Genetic and developmental basis of evolutionary pelvic reduction in threespine sticklebacks. Nature. 2004, 428: 717-723. 10.1038/nature02415.
- Liu FG, Miyamoto MM, Freire NP, Ong PQ, Tennant MR, Young TS, Gugel KF: Molecular and morphological supertrees for eutherian (placental) mammals. Science. 2001, 291: 1786-1789. 10.1126/science.1056346.
- Hatfield JR, Samuelson DA, Lewis PA, Chisholm M: Structure and presumptive function of the iridocorneal angle of the West Indian manatee (Trichechus manatus), short-finned pilot whale (Globicephala macrorhynchus), hippopotamus (Hippopotamus amphibius), and African elephant (Loxodonta africana). Vet Ophthalmol. 2003, 6: 35-43. 10.1046/j.1463-5224.2003.00262.x.
- Salzburger W, Meyer A: The species flocks of East African cichlid fishes: recent advances in molecular phylogenetics and population genetics. Naturwissenschaften. 2004, 91: 277-290. 10.1007/s00114-004-0528-6.
- Stiassny MLJ, Meyer A: Cichlids of the rift lakes. Sci Am. 1999, 64-69.
- DOE Joint Genome Institute - Why Sequence Cichlid Fish?. [http://www.jgi.doe.gov/sequencing/why/CSP2006/cichlids.html]
- Kurten B: The evolution of the polar bear, Ursus maritimus. Acta Zoologica Fennica. 1964, 108: 1-26.
- Talbot SL, Shields GF: A phylogeny of the bears (Ursidae) inferred from complete sequences of three mitochondrial genes. Mol Phylogenet Evol. 1996, 5: 567-575. 10.1006/mpev.1996.0051.
- Okazaki Y, Furuno M, Kasukawa T, Adachi J, Bono H, Kondo S, Nikaido I, Osato N, Saito R, Suzuki H, et al: Analysis of the mouse transcriptome based on functional annotation of 60,770 full-length cDNAs. Nature. 2002, 420: 563-573. 10.1038/nature01266.
- Crawford DL: Functional genomics does not have to be limited to a few select organisms. Genome Biol. 2001, 2: interactions1001.1-1001.2. 10.1186/gb-2001-2-1-interactions1001.
- Modrek B, Lee CJ: Alternative splicing in the human, mouse and rat genomes is associated with an increased frequency of exon creation and/or loss. Nature Genet. 2003, 34: 177-180. 10.1038/ng1159.
- King MC, Wilson AC: Evolution at two levels in humans and chimpanzees. Science. 1975, 188: 107-116.
- Hinman VF, Nguyen AT, Cameron RA, Davidson EH: Developmental gene regulatory network architecture across 500 million years of echinoderm evolution. Proc Natl Acad Sci USA. 2003, 100: 13356-13361. 10.1073/pnas.2235868100.
- Enard W, Khaitovich P, Klose J, Zollner S, Heissig F, Giavalisco P, Nieselt-Struwe K, Muchmore E, Varki A, Ravid R, et al: Intra- and interspecific variation in primate gene expression patterns. Science. 2002, 296: 340-343. 10.1126/science.1068996.
- Rockman MV, Wray GA: Abundant raw material for cis-regulatory evolution in humans. Mol Biol Evol. 2002, 19: 1991-2004.
- Rossnes R, Eidhammer I, Liberles DA: Phylogenetic reconstruction of ancestral character states for gene expression and mRNA splicing data. BMC Bioinformatics. 2005, 6: 127. 10.1186/1471-2105-6-127.
- Fukami-Kobayashi K, Schreiber DR, Benner SA: Detecting compensatory covariation signals in protein evolution using reconstructed ancestral sequences. J Mol Biol. 2002, 319: 729-743. 10.1016/S0022-2836(02)00239-5.
- Ohno S: Evolution by Gene Duplication. 1970, Berlin: Springer
- Lynch M, Conery JS: The origins of genome complexity. Science. 2003, 302: 1401-1404. 10.1126/science.1089370. | <urn:uuid:5d1ce073-0585-419a-a543-6d3a88380ab6> | 3.0625 | 2,761 | Academic Writing | Science & Tech. | 38.41875 | 95,568,414 |
The physical processes in the western Baltic are characterized by a large amount of mesoscale variability, which is also reflected in the distribution patterns of the biological variables. To gain a better understanding of the effects of circulation and variable forcing on the ecosystem, the non-linear interactions of physical and chemical-biological processes were incorporated in a model. The coupled 3-D model consists of a highly resolved circulation model of the southwestern Baltic (GFDL model MOM1) and a chemical-biological model. A high spatial resolution of 1 nautical mile in the horizontal direction and a vertical spacing of 2 m is used in order to resolve the mesoscale dynamics. The chemical-biological model is a simple Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) model which describes the lower trophic levels. Experience has shown that it is essential to include the temperature dependence of the growth rate of phytoplankton and the remineralization rate of detritus, as well as the variable sinking velocity of phytoplankton. Model simulations from spring to autumn of the years 1994 and 1995 were performed to investigate the effect of external forcing on the chemical-biological dynamics. A simulation with a riverine nutrient source was done to study the transport of riverine material.
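For readers unfamiliar with NPZD models, the sketch below shows the kind of zero-dimensional (single-box) system that such a coupled model embeds in every grid cell of the circulation model. All parameter values and functional forms here are generic illustrations, not the ones used in this study.

```python
import numpy as np

def npzd_rates(N, P, Z, D, T):
    """Rates of change for a minimal NPZD box model (units: mmol N/m^3/day)."""
    mu = 0.6 * 1.066 ** T              # temperature-dependent phytoplankton growth
    uptake  = mu * N / (N + 0.5) * P   # Michaelis-Menten nutrient limitation
    grazing = 0.4 * P / (P + 1.0) * Z
    p_mort  = 0.05 * P
    z_mort  = 0.05 * Z
    remin   = 0.02 * 1.066 ** T * D    # temperature-dependent remineralization
    dN = -uptake + remin
    dP = uptake - grazing - p_mort
    dZ = 0.75 * grazing - z_mort       # 75% assimilation efficiency
    dD = 0.25 * grazing + p_mort + z_mort - remin
    return np.array([dN, dP, dZ, dD])  # sums to zero: nitrogen is conserved

# Forward-Euler integration from spring to autumn (about 180 days).
state = np.array([8.0, 0.1, 0.05, 0.0])   # N, P, Z, D in mmol N/m^3
dt = 0.1                                   # days
for step in range(1800):
    T = 5.0 + 10.0 * np.sin(np.pi * step * dt / 180.0)  # crude seasonal cycle
    state = state + dt * npzd_rates(*state, T)
```

In the full 3-D model these local rates are additionally advected and mixed by the circulation, which is where the mesoscale patterns in the biological fields come from.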
| <urn:uuid:e02ddd20-d4ff-46d5-96df-62b6aaa06812> | 2.578125 | 293 | Academic Writing | Science & Tech. | 13.806485 | 95,568,429 |
Global climate change - rapid, substantial and human-induced - may have radical consequences for life on earth. The problem is a complex one, however, demanding a multi-disciplinary approach. A simple cost-benefit analysis cannot capture the essentials, nor can the issue be reduced to an emissions reduction game, as the Kyoto process tries to do. This text considers an integrative approach, which reveals that global climate change needs to be considered as a spider in a web, a triggering factor for a range of other, related problems - land use changes, water supply and demand, food supply, energy supply, human health, and air pollution. But an approach like this, which takes account of all items of knowledge, known and uncertain, does not produce clear-cut, final and popular answers. It does provide useful insights, however, which will allow comprehensive and effective long-term climate strategies to be put into effect.
| <urn:uuid:e740e1a9-d695-4381-b313-8f77a65a4a15> | 2.59375 | 251 | Product Page | Science & Tech. | 35.386818 | 95,568,447 |
Coordinated by professors Julián Carrera, Maria Eugenia Suárez, Julio Pérez and Francisco Javier Lafuente from the GENOCOV group of the UAB Department of Chemical, Biological and Environmental Engineering, and with the participation of Depuración de Aguas del Mediterráneo (DAM), the Catalan Water Agency (ACA) and the European Technology Platform for Water (WssTP), the SAVING-E project from the EU LIFE Programme aims to verify whether the wastewater treatment process can at the same time generate energy.
The aim of the project is to re-design wastewater treatment plants radically so that they can produce energy with no loss — or even with a gain — in performance. Present-day wastewater treatment plants require a minimum energy consumption of 8-15 kWh/inhabitant/year to meet the legal requirements on effluent discharge in terms of organic matter, nitrogen and phosphorus. This means considerable greenhouse gas emissions and high costs. Eliminating these costs would mean a saving of 500 to 1000 million Euros per year in EU countries. This new treatment plant will use all organic matter in the wastewater to produce biogas, a combustible gas made up principally of methane, which can be used to obtain heat and electricity. In addition, the nitrogen in the wastewater will be eliminated autotrophically, i.e. without the need for organic matter, by means of a new technology based on two biological stages: an aerobic partial-nitritation reactor and an anammox reactor.
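The quoted savings range is easy to sanity-check. The back-of-the-envelope sketch below multiplies an assumed EU population, per-capita treatment energy, and electricity price; all three numbers are illustrative assumptions rather than figures from the project.

```python
# Rough sanity check of the quoted EU-wide savings figure.
eu_population = 510e6      # people (assumed)
energy_per_capita = 10.0   # kWh per inhabitant per year (within the 8-15 quoted)
electricity_price = 0.15   # euros per kWh (assumed)

annual_savings = eu_population * energy_per_capita * electricity_price
print(f"~{annual_savings / 1e6:.0f} million euros per year")  # ~765 million EUR
```

With these assumptions the estimate lands comfortably inside the 500 to 1000 million euro range quoted above.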
The pilot plant will have a total volume of two cubic metres and will treat three cubic metres of wastewater a day.
Compared with current urban wastewater treatment systems, the researchers predict a 40% reduction in total energy consumption, a 10% reduction in nitrogen compounds discharged, a 20% reduction in greenhouse gas emissions, and a 50% increase in biogas production.
The first step, after the inauguration, will be to bring the pilot plant into operation; this step will be completed during the first quarter of 2017. Following that, the researchers hope to obtain the first experimental results at the end of 2017 and the definitive validation at the end of 2018.
The LIFE Programme is the European Union’s only funding instrument devoted exclusively to the environment. Its general objective up to the year 2020 is to contribute to sustainable development and to the objectives of the Europe 2020 strategy, along with other important strategies on climate and the environment. | <urn:uuid:ea9a92da-0793-4726-9c9b-14f85a0d6d2e> | 3.21875 | 497 | Knowledge Article | Science & Tech. | 28.284137 | 95,568,476 |
Finite Element Method
Reference work entry
A numerical method used to approximate the solution of boundary- and initial-value problems characterized by partial differential equations.
The finite element method is a systematic procedure for approximating continuous functions as discrete models. This discretization involves a finite number of points and subdomains in the problem's domain. The values of the given function are held at the points, the so-called nodes. The non-overlapping subdomains, the so-called finite elements, are connected together at nodes on their boundaries and hold piecewise, local approximations of the function, which are uniquely defined in terms of the values held at their nodes. The collection of discretized elements and nodes is called the mesh, and the process of its construction is called meshing. A typical finite element partition of a two-dimensional domain with triangular finite elements is given in Fig. 1.
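The sketch below makes the procedure concrete for the simplest possible case: a one-dimensional Poisson problem discretized with piecewise-linear elements on a uniform mesh. It is a teaching toy, not a production FEM code; the mesh, quadrature, and boundary handling are deliberately minimal.

```python
import numpy as np

def solve_poisson_1d(f, n_elements=10):
    """Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0, linear elements."""
    n_nodes = n_elements + 1
    x = np.linspace(0.0, 1.0, n_nodes)   # node coordinates (the mesh)
    h = x[1] - x[0]                      # uniform element size

    K = np.zeros((n_nodes, n_nodes))     # global stiffness matrix
    b = np.zeros(n_nodes)                # global load vector

    # Assemble element by element: each element contributes a 2x2 block.
    k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elements):
        nodes = [e, e + 1]
        K[np.ix_(nodes, nodes)] += k_local
        x_mid = 0.5 * (x[e] + x[e + 1])      # one-point quadrature
        b[nodes] += f(x_mid) * h / 2.0

    # Impose u = 0 at both ends by solving only for the interior nodes.
    u = np.zeros(n_nodes)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    return x, u

x, u = solve_poisson_1d(lambda s: np.pi**2 * np.sin(np.pi * s), n_elements=20)
# The exact solution is sin(pi*x); nodal values converge as the mesh is refined.
```

The assembly loop over elements, the global stiffness matrix, and the nodal solution vector are exactly the mesh-nodes-elements structure described above, just in their simplest form.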
© CIRP 2014 | <urn:uuid:a165533a-835c-4e15-8d4c-219abb2c8c8a> | 2.84375 | 569 | Knowledge Article | Science & Tech. | 34.285734 | 95,568,495 |
Stardust, the NASA spacecraft mission, was given that name in hopes that the seven-year journey to capture comet samples would bring back to Earth, well, stardust.
In an article coming out in the Dec. 15, 2006, issue of the journal Science, researchers at Washington University in St. Louis are the first to report that a sample they received from the mission actually does contain stardust — particles that are older than the sun.
"When the comet samples became available to analyze, one of the key scientific questions was to see whether this material also contained ‘real stardust’ particles," said Frank J. Stadermann, Ph.D., senior research scientist in physics in Arts & Sciences at Washington University and a co-author of the article.
"As it turned out, the one and only stardust particle in all of the analyzed comet samples was found right here in the St. Louis lab."
The findings appear in the Science article titled "Isotopic Compositions of Cometary Matter Returned by Stardust." Stadermann, who is a sample adviser for the Stardust mission, is also a co-author on six other reports about the mission’s initial findings that are in this special edition of Science.
Launched Feb. 7, 1999, the Stardust spacecraft sped through the tail of comet Wild 2 at 15,000 miles per hour on Jan. 2, 2004. For 15 minutes, the spacecraft extended the honeycomb-like collector, capturing cometary dust grains in 132 ice-cube-sized cells made of aerogel, a silicon-based solid that is 99.8 percent air and resembles frozen pale-blue smoke.
After the sample return capsule's safe landing on the Utah salt flats on Jan. 15, 2006, particles — each much smaller than a grain of sand — from several of the collector's cells were extracted, sliced up and distributed to 50 labs around the world for analysis. Of those 50 labs, which are called "preliminary examination groups," two are at Washington University.
In late February, Stadermann received his team's first cometary material — three slices of one particle. Not wasting any time, Stadermann and his Washington University team — Ernst K. Zinner, Ph.D., research professor of physics and of earth and planetary sciences; Christine Floss, Ph.D., research associate professor of physics; and Kuljeet Kaur Marhas, Ph.D., postdoctoral research associate in physics — got right to work on it, and then eventually 10 other Stardust samples.
Kevin D. McKeegan, Ph.D., professor of geochemistry at UCLA, is first author on the Science paper. McKeegan received his doctorate in physics from WUSTL in 1987, with Zinner serving as his advisor.
Using Washington University’s state-of-the-art ion probe — the NanoSIMS (SIMS is short for Secondary Ion Mass Spectrometer) — the team analyzed the particles’ elemental and isotopic composition.
The NanoSIMS, which Stadermann and Zinner helped design and test, can resolve objects as small as 50 nanometers — or one thousand times smaller than the diameter of a human hair.
The first NanoSIMS instrument in the world was purchased by WUSTL in 2000 for $2 million, with partial support from the university’s McDonnell Center for the Space Sciences, NASA and the National Science Foundation.
‘One of the most important findings’ from mission
The measurements at Washington University yielded a unique result providing a key component for our understanding of the composition and origin of comets, said Stadermann.
"When we made the discovery of the stardust grain in the comet sample we were very excited, and we immediately knew that this little particle, although it is only 1/100,000 of an inch in diameter, would be one of the most important findings of the comet dust analysis," said Stadermann.
"This discovery proves that comets comprise dust grains from outside the solar system in addition to the many other components that were formed inside the solar system," he continued.
"The fact that these very different ingredients survived side-by-side in the comet shows how well the material was preserved in this ‘cosmic freezer’ for the last 4.5 billion years.
"NASA picked the name ‘Stardust’ for this mission many years ago," Stadermann noted. "Only because of our measurement here at Washington University we now know that the comet really does contain true stardust."
Scientists hope the Stardust findings will provide answers to fundamental questions about comets, the origin of the solar system and possibly even the origin of life itself.
This discovery complements ongoing research in Washington University’s Laboratory for Space Sciences, which is part of the departments of Physics and of Earth and Planetary Sciences and the McDonnell Center for the Space Sciences, all in Arts & Sciences.
"We certainly have a lot of expertise in analyzing small grains," said Zinner about the Laboratory for Space Sciences research group. "We have worked on interplanetary dust particles since the late seventies and have been involved in the discovery of many types of presolar grains — ‘stardust’ in the literal sense — since 1987."
In 1987, Zinner and colleagues at Washington University and a group of scientists at the University of Chicago found the first stardust in a meteorite. Those presolar grains were specks of diamond and silicon carbide.
Since then, members of WUSTL's space sciences lab have played leading roles in analyzing these grains in the laboratory and interpreting the results.
"With the NanoSIMS, we have an instrument that is ideally suited to the analysis of such grains," Zinner added.
"The finding of stardust in meteorites and now comets gives us information about the early solar system," Zinner continued. "The parent bodies of primitive meteorites (asteroids) formed in different places, closer to the sun, than comets, which formed farther away. The preservation of stardust in both types of solar system bodies tells us something about their formation history. However, at present we have evidence for only one stardust grain in cometary material, making it a little early to make far-reaching conclusions."
"The preliminary examination of the comet samples is only the first step and it is clear that we will continue to study such samples for years to come," added Floss. "There are so many questions about the early solar system for which the answers are still hidden in these tiny dust particles."
Sue Killenberg McGinn | EurekAlert!
23.07.2018 | Science Education | <urn:uuid:20a4a5e5-8452-4c8d-a7c4-0cce81cb7dfc> | 3.65625 | 1,974 | Content Listing | Science & Tech. | 44.330074 | 95,568,507 |
For decades, science popularizers have said humans are made of stardust, and now, a new survey of 150,000 stars shows just how true the old cliché is: Humans and their galaxy have about 97 percent of the same kind of atoms, and the elements of life appear to be more prevalent toward the galaxy’s center, the research found.
The crucial elements for life on Earth, often called the building blocks of life, can be abbreviated as CHNOPS: carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur. For the first time, astronomers have cataloged the abundance of these elements in a huge sample of stars.
The astronomers evaluated each element's abundance through a method called spectroscopy; each element emits distinct wavelengths of light from within a star, and they measured the depth of the dark and bright patches in each star's light spectrum to determine what it was made of.
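To make the line-depth idea concrete, the toy sketch below builds a continuum-normalized spectrum with one synthetic Gaussian absorption line and measures its depth and equivalent width. The wavelength grid and line parameters are invented for the example; real abundance pipelines such as APOGEE's fit full synthetic spectra instead.

```python
import numpy as np

wave = np.linspace(6700.0, 6720.0, 500)        # wavelengths in angstroms (assumed)
line_center, width, depth = 6707.8, 0.4, 0.3   # a made-up absorption line
flux = 1.0 - depth * np.exp(-0.5 * ((wave - line_center) / width) ** 2)

# Equivalent width: the area "missing" from the continuum, in angstroms.
# Deeper/wider lines mean more absorbing atoms of that element.
ew = np.trapz(1.0 - flux, wave)
print(f"line depth = {1.0 - flux.min():.2f}, equivalent width = {ew:.3f} A")
```

The measured depth and equivalent width are what get translated, via stellar-atmosphere models, into an abundance for the element that produces the line.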
The researchers used stellar measurements from the Sloan Digital Sky Survey’s (SDSS) Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectrograph in New Mexico. APOGEE can peer through the dust in the Milky Way because it uses infrared wavelengths, which pass through dust.
“This instrument collects light in the near-infrared part of the electromagnetic spectrum and disperses it, like a prism, to reveal signatures of different elements in the atmospheres of stars,” Sloan representatives said in a statement.
“A fraction of the almost 200,000 stars surveyed by APOGEE overlap with the sample of stars targeted by the NASA Kepler mission, which was designed to find potentially Earth-like planets,” the statement added. “The work presented today focuses on ninety Kepler stars that show evidence of hosting rocky planets, and which have also been surveyed by APOGEE.”
Although humans share most elements with the stars, the proportions of those elements differ between humans and stars. For example, humans are about 65 percent oxygen by mass, whereas oxygen makes up less than 1 percent of all elements measured in space (such as in the spectra of stars).
The proportion of each element of life differed depending on the region of the galaxy in which it was found. For example, the sun resides on the outskirts of one of the Milky Way’s spiral arms. Stars on the outskirts of the galaxy have fewer heavy elements required for life’s building blocks, such as oxygen, than those in more central regions of the galaxy.
“It’s a great human-interest story that we are now able to map the abundance of all of the major elements found in the human body across hundreds of thousands of stars in our Milky Way,” Jennifer Johnson, the science team chair of the SDSS-III APOGEE survey and a professor at The Ohio State University, said in the statement. “This allows us to place constraints on when and where in our galaxy life had the required elements to evolve, a sort of ‘temporal galactic habitable zone.'”
Tesla was undoubtedly the greatest genius of the twentieth century. Our lifestyle nowadays, the technology that we take for granted, … | <urn:uuid:72096d91-3f0d-4c94-aa31-64f8173b033a> | 3.96875 | 689 | News Article | Science & Tech. | 35.228333 | 95,568,509 |
Fun facts on pollution for kids. Clean air kids activity book. Light pollution and its effects on the night sky. Infographic: what is light pollution?
5 ways you can reduce light pollution mnn mother nature network. What is light pollution pdf bittorrenthidden. Prevent light pollution, save a star, save a life. Nasa climate kids :: play games. States with laws in place to reduce outdoor light pollution table of laws included in comments. Singapore has most light pollution in the world the new paper. Light pollution. | <urn:uuid:565247e5-01c2-482e-b909-06b047529991> | 2.75 | 109 | Spam / Ads | Science & Tech. | 62.294828 | 95,568,514 |
Earlier this year Arctic sea ice sank to a record low wintertime extent for the third straight year. Now NASA is flying a set of instruments north of Greenland to observe the impact of the melt season on the Arctic's oldest and thickest sea ice.
Operation IceBridge, NASA's airborne survey of polar ice, launched a short campaign on July 17 from Thule Air Base, in northwest Greenland. Weather permitting, the IceBridge scientists are expecting to complete six, 4-hour-long flights focusing on sea ice that has survived at least one summer.
This older multiyear ice, once the bulwark of the Arctic sea ice pack, has dramatically thinned and shrunk in extent along with the warming climate: in the mid-1980s, multi-year ice accounted for 70 percent of total winter Arctic sea ice extent; by the end of 2012, this percentage had dropped to less than 20 percent.
"Most of the central Arctic Ocean used to be covered with thick multiyear ice that would not completely melt during the summer and reflect back sunshine," said Nathan Kurtz, IceBridge's project scientist and a sea ice researcher at NASA's Goddard Space Flight Center in Greenbelt, Maryland.
"But we have now lost most of this old ice and exposed the open ocean below, which absorbs most of the sun's energy. That's one reason the Arctic warming has increased nearly twice the global average-- when we lose the reflecting cover of the Arctic Ocean, we lose a mechanism to cool the planet."
The sea ice flights will survey melt ponds, the pools of melt water on the ice surface that may contribute to the accelerated retreat of sea ice. Last summer, IceBridge carried a short campaign from Barrow, Alaska, to study young sea ice, which tends to be thinner and flatter than multiyear ice and thus has shallower melt ponds on its surface.
"The ice we're flying over this summer is much more deformed, with a much rougher topography, so the melt ponds that form on it are quite different," Kurtz said.
IceBridge is also flying a set of tracks to locate areas of sea ice that the mission already flew over in March and April, during its regular springtime campaign, to measure how the ice has melted since then.
"The sea ice can easily have drifted hundreds of miles between the spring and now, so we're tracking the ice as it's moving from satellite data," Kurtz said.
The summer research flights are aboard an HU-25C Guardian Falcon aircraft from NASA's Langley Research Center in Hampton, Virginia. The plane is carrying a laser instrument that measures changes in ice elevation and a high-resolution camera system to map land ice, as well as two experimental instruments.
IceBridge's main instrument, the Airborne Topographic Mapper laser altimeter, was recently upgraded to transmit 10,000 pulses every second, over three times more than the previous laser versions and with a shorter pulse than previous generations. The upgrade will allow the mission to measure ice elevation more precisely as well as try out new uses on land ice.
During this campaign, IceBridge researchers want to test whether the laser is able to measure the depth of the aquamarine lakes of melt water that form on the surface of the Greenland Ice Sheet in the summer. Large meltwater lakes are visible from space, but depth estimates from satellite imagery -- and thus the volume of water they contain -- have large uncertainties. Those depth estimates are key to calculating how much ice melts on Greenland's ice sheet surface during the summer.
"Scientists have measured the depth of these lakes directly by collecting data from Zodiacs," said Michael Studinger, principal investigator for the laser instrument team. "It's very dangerous to do this, because these lakes can drain without warning and you don't want to be on a lake collecting data when that happens. Collecting data from an airborne platform is safer and more efficient."
Researchers have used lasers to map the bottom of the sea in coastal areas, so Studinger is optimistic that the instrument will be able to see the bottom of the meltwater lakes and that possibly IceBridge will expand this new capability in the future. A mission that IceBridge flew on July 19 over a dozen supraglacial lakes in northwest Greenland gathered a set of measurements that Studinger's team will analyze over the following weeks and months.
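The physics behind such an airborne measurement is simple to sketch: if the laser records separate returns from the lake surface and the lake bottom, the depth follows from the extra two-way travel time, slowed by the refractive index of water. The numbers below are invented, and whether a given lidar actually penetrates a particular lake depends on wavelength and water clarity.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # refractive index of water (approximate)

def lake_depth(dt_seconds):
    """Depth from the delay between the surface and bottom laser returns."""
    # Light travels down and back up, and moves slower in water than in air.
    return (C / N_WATER) * dt_seconds / 2.0

print(f"{lake_depth(8.9e-8):.1f} m")  # ~10 m for an ~89 ns surface-to-bottom delay
```

The hard part in practice is not this conversion but getting a clean bottom return at all, which is what the July 19 flight over a dozen supraglacial lakes was meant to test.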
The goal of Operation IceBridge is to collect data on changing polar land and sea ice and maintain continuity of measurements between NASA's Ice, Cloud, and land Elevation Satellite (ICESat) missions. The original ICESat mission ended in 2009, and its successor, ICESat-2, is scheduled for launch in the fall of 2018. For more about Operation IceBridge and to follow the summer Arctic sea ice campaign, visit: http://www.
Robert Gutro | EurekAlert!
16.07.2018 | Earth Sciences | <urn:uuid:094b3159-23c0-4619-92f3-f4f66c04c2f5> | 3.8125 | 1,619 | Content Listing | Science & Tech. | 43.313353 | 95,568,517 |
The programs to develop these cells have been passed on ever since. The study, which is published online in Nature Genetics, has been supported by the GEN-AU Programme of the Austrian Ministry for Science and Research.
During the development of an embryo, a large number of different, specialised cell-types arise from the fertilised egg. The genetic information is identical in all cells of an organism. Different properties of cells arise because the activity of genes is controlled and regulated by so called transcription factors. By switching genes on or off, the body makes muscle cells, bone cells, liver cells and many more.
Scientists have been puzzling over the question whether the gene regulatory programs that control this development have been “invented” only once during evolution or whether they might have arisen anew in different species. Previous studies supported both theories to a certain extent.
A team of researchers in Austria and the United States has now looked at key regulatory proteins in six different species of the fruit fly Drosophila. They studied the development of the mesoderm, one of the three primary germ cell layers in the early embryo of all higher organisms. Mesodermal cells differentiate into muscle cells, heart cells, connecting tissue and bone, among others.
Evolution with a Twist
“Some of the fly species that we looked at are as closely related as humans are to other primates. Others are as distant as humans and birds”, explains Alexander Stark, a systems biologist at the Research Institute of Molecular Pathology (IMP) in Vienna and one of the authors of the study. The team focussed on the transcription factor Twist and looked at the binding sites for Twist on the DNA of the different species. It turned out that these binding sites are very similar in all the flies, suggesting that the program that regulates mesodermal development has been “recycled” rather than invented independently in these animals.
In addition to these results, the study also found that Twist interacts with partner transcription factors to specifically bind to DNA at the correct positions. A deeper understanding of these mechanisms will help understand how higher organisms such as humans develop and how flaws in the regulation of genes may lead to diseases such as cancer.
A network of collaborations
The study is the result of a fruitful cooperation between two former MIT-colleagues: Julia Zeitlinger, who is now at the Stowers Institute for Medical Research and the University of Kansas School of Medicine, identified the binding sites of transcription factors. Alexander Stark, who is now a Group Leader at the IMP and head of a sub-project of the Bioinformatics Integration Network III, was in charge of prediction and analysis of the data.
The Bioinformatics Integration Network (BIN), also sponsored under the Austrian GEN-AU Programme, develops bioinformatic solutions and offers them to other research groups. The network is led by Zlatko Trajanoski of the Medical University in Innsbruck.
Other partners of BIN are the Institute for Genomics and Bioinformatics of the University of Technology in Graz, the Center of Integrative Bioinformatics at the Max F. Perutz Laboratories in Vienna, and the Research Institute of Molecular Pathology in Vienna.
Collaborations were also entered with the Institute for Theoretical Chemistry and the Department of Structural and Computational Biology, both at the University of Vienna, UMIT – the Health and Life Sciences University Hall/Tyrol, and CeMM, the Research Center for Molecular Medicine of the Austrian Academy of Sciences in Vienna.
The publication is the result of the sub-project "Cis-acting regulatory motifs", led by Maria Novatchkova and Alexander Stark (both IMP), one of ten sub-projects of BIN. The Austrian Genome Research Programme was initiated by the Austrian Federal Ministry for Science and Research in 2001. The paper "High conservation of transcription factor binding and evidence for combinatorial regulation across six Drosophila species" was published online in Nature Genetics on April 10, 2011.
Dr. Heidemarie Hurtl | idw
16.07.2018 | Earth Sciences | <urn:uuid:d082a139-3bd9-49be-a8f0-238d44abcb77> | 3.515625 | 1,486 | Content Listing | Science & Tech. | 35.04498 | 95,568,518 |
On July 5, 2016, the moon passed between NOAA's DSCOVR satellite and Earth. NASA's EPIC camera aboard DSCOVR snapped these images over a period of about four hours. In this set, the far side of the moon, which is never seen from Earth, passes by. In the backdrop, Earth rotates, starting with Australia and the Pacific and gradually revealing Asia and Africa. Credit: NASA/NOAA
For only the second time in a year, a NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a view of the moon as it moved in front of the sunlit side of Earth. "For the second time in the life of DSCOVR, the moon moved between the spacecraft and Earth," said Adam Szabo, DSCOVR project scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The project recorded this event on July 5 with the same cadence and spatial resolution as the first 'lunar photobomb' of last year."
The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
EPIC maintains a constant view of the fully illuminated Earth as it rotates, providing scientific observations of ozone, vegetation, cloud height and aerosols in the atmosphere. The EPIC camera is providing a series of Earth images allowing study of daily variations over the entire globe.
These images were taken between July 4 at 11:50 p.m. EDT and July 5 at 3:18 a.m. EDT (0350 UTC and 0718 UTC on July 5), showing the moon moving over the Indian and Pacific oceans. The North Pole is at the top of the images.
DSCOVR is orbiting around the sun-Earth first Lagrange point (where the gravitational pull of Earth is equal and opposite of that of the sun) in a complex, non-recurring orbit that changes from an ellipse to a circle and back (called a Lissajous orbit), taking the spacecraft between 4 and 12 degrees from the sun-Earth line. This orbit intersects the lunar orbit about four times a year. However, depending on the relative orbital phases of the moon and DSCOVR, the moon appears between the spacecraft and Earth once or twice a year.
The last time EPIC captured this event was between 3:50 p.m. and 8:45 p.m. EDT on July 16, 2015.
EPIC's "natural color" images of Earth are generated by combining three separate monochrome exposures taken by the camera in quick succession. EPIC takes a series of 10 images using different narrowband spectral filters—from ultraviolet to near infrared—to produce a variety of science products. The red, green and blue channel images are used in these color images.
Combining three images taken about 30 seconds apart as the moon moves produces a slight but noticeable camera artifact on the right side of the moon. Because the moon has moved in relation to Earth between the time the first (red) and last (green) exposures were made, a thin green offset appears on the right side of the moon when the three exposures are combined. This natural lunar movement also produces a slight red and blue offset on the left side of the moon in these unaltered images.
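A toy reconstruction of that artifact is sketched below: three synthetic monochrome "exposures" of a moving disk are stacked into an RGB image, producing colored fringes on the disk's edges. The arrays are stand-ins for the EPIC channel images; this is not the actual EPIC processing pipeline.

```python
import numpy as np

def disk(shape, center, radius):
    """Binary image of a filled disk, as a float array."""
    yy, xx = np.indices(shape)
    return ((yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2) * 1.0

shape = (200, 200)
red   = disk(shape, (100,  96), 30)   # first exposure: moon slightly left
blue  = disk(shape, (100, 100), 30)   # middle exposure
green = disk(shape, (100, 104), 30)   # last exposure: moon slightly right

rgb = np.stack([red, green, blue], axis=-1)
# Pixels covered only by the late (green) exposure form a green fringe on
# one side of the disk; pixels covered only by the earlier red and blue
# exposures form a red/blue fringe on the other side, as in the images.
```

The same geometry explains why the artifact appears only on the moon and not on Earth: Earth barely moves against the frame between exposures, while the moon does.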
DSCOVR is a partnership between NASA, NOAA and the U.S. Air Force with the primary objective of maintaining the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA.
Provided by: NASA’s Goddard Space Flight Center
Validating events

Consistent use of validation techniques will help make your applications more robust and reliable. When an application encounters an unexpected situation (such as a missing file or input parameter) or a logical error (a division-by-zero operation, for example), by default the application terminates and generates an error display like the one shown in Figure 3.1: .NET displays an error message and ends the application when any unhandled error occurs. Use this event to perform validation on all the values of a row; to validate a user's input in a grid, handle the grid view's ValidatingEditor event. A good user interface validates user input to ensure it is in the correct format. This may be as simple as checking that a field such as a name has an entry, or a more complex task such as validating an email address. Use the Validating event to provide custom validation for any input control, such as a text box. While validation fails, the validating control retains the focus, even if the user clicks another control such as a button or another text box.
Each control has events and properties that are used to validate a form. Validation on a control is triggered when the control loses focus. The Validating event receives a System.ComponentModel.CancelEventArgs object as a parameter; when you set its Cancel property in the Validating event, all events that would usually occur after the Validating event are suppressed, so the control keeps the focus until the input is corrected.
Important: never attempt to set the focus of a control inside this event, because doing so can cause the application to hang.
The Validated event occurs when the control has finished validating.
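The keep-focus-until-valid behaviour described above is not unique to .NET. As an illustrative analogue only (this is Java Swing's InputVerifier mechanism, not the Windows Forms API), returning false from verify keeps the focus on the offending control, much as cancelling the Validating event does; the class name and regex below are made up for the example:

```java
import javax.swing.InputVerifier;
import javax.swing.JComponent;
import javax.swing.JTextField;

// Illustrative verifier: the field keeps the focus until its text looks
// vaguely like an email address. The regex is a deliberately crude
// placeholder, not a complete email validator.
class EmailVerifier extends InputVerifier {
    @Override
    public boolean verify(JComponent input) {
        String text = ((JTextField) input).getText();
        return text.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
    }
}

// Usage sketch: Swing calls verify() whenever the field is about to lose
// focus; a false result keeps the focus there, mirroring e.Cancel = true.
//   JTextField emailField = new JTextField(20);
//   emailField.setInputVerifier(new EmailVerifier());
```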
A network link to a server might fail just as you're transferring data.
Or perhaps you simply didn't allow for a particular rare circumstance in your code. The .NET Framework offers a robust set of tools for dealing with these unexpected problems.
Do hummingbirds have taste buds? What, if anything, do we know (and how can we, if at all) about how they experience the sweetness of the nectar they imbibe? -Sarah Rabkin, Soquel, CA
Great question! The short answer is yes, hummingbirds have taste buds — just not the ones you think.
It is well known that hummingbirds prefer more concentrated nectar, but only very recently have we discovered how they can tell if a flower or feeder has the good stuff (i.e. sucrose, a.k.a. sugar) or just water. Flowers visited by hummingbirds in the wild contain sucrose concentrations ranging from 7 percent to 60 percent, but they most commonly contain about 24 percent sucrose, which is also the concentration suggested for filling artificial feeders (one-quarter cup white granulated sugar thoroughly mixed into 1 cup of water, no dye or coloring needed). At this concentration, hummingbirds can tell the difference between sucrose concentrations differing by just 1 percent. So they have taste buds, and their taste buds work!
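For the curious, the 1:4 volume recipe can be converted to an approximate mass concentration. The sketch below assumes 1 US cup ≈ 236.6 mL and a bulk density of about 0.85 g/mL for granulated sugar (real densities vary with packing); depending on whether you express the result per mass of solution or per mass of water, it lands roughly in the 17-21 percent range, in the same ballpark as the figures above:

```java
public class NectarMix {
    public static void main(String[] args) {
        // Assumptions (illustrative only): 1 US cup ~ 236.6 mL, and a bulk
        // density of ~0.85 g/mL for granulated sugar; densities vary with
        // how the sugar is packed, so treat the output as approximate.
        double cupMl = 236.6;
        double sugarGrams = 0.25 * cupMl * 0.85; // quarter cup of sugar
        double waterGrams = 1.0 * cupMl * 1.00;  // one cup of water
        double pctOfSolution = 100.0 * sugarGrams / (sugarGrams + waterGrams);
        double gPer100gWater = 100.0 * sugarGrams / waterGrams;
        System.out.printf("~%.0f g sugar in %.0f g water%n", sugarGrams, waterGrams);
        System.out.printf("~%.1f%% of solution mass, ~%.1f g per 100 g of water%n",
                pctOfSolution, gPer100gWater);
    }
}
```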
Before I get to the mechanism by which hummingbirds detect sucrose, I should tell you that all birds are missing the only known gene responsible for tasting sweetness (the one that is used by most other vertebrates). Birds have been missing this gene since they diverged from the non-avian dinosaurs (even hummingbirds are in fact dinosaurs). The sweet taste receptor produced by this gene is called T1R2 and no birds tested to date have it. So if hummingbirds are missing this receptor, how in the world do they detect the presence of sugar? The answer has to do with umami.
Umami is the savory taste found in fish, mushrooms, tomatoes, cheese, and soy sauce, and compounds making up these flavors are detected collectively by two taste receptors called T1R1/T1R3. By sequencing the DNA of hummingbirds, swifts (close relatives of hummingbirds), and chickens, Maude Baldwin and her colleagues found in 2014 that contrary to its use for detecting savory amino acid compounds in swifts and chickens, the hummingbird version of T1R1/T1R3 has evolved to respond instead to sucrose and other carbohydrates. So although hummingbirds have taste buds, and use them to detect the presence of sugars in the nectar they drink, they use different ones than the ones we use.
Your second question was about how they experience the sweetness of the nectar they imbibe. The only potentially relevant way I know to talk about perceptual or other “internal experiences” is to look at patterns of brain activity while the corresponding stimuli are occurring.
So it will be helpful to first describe what happens in humans when we ingest sugar. When sugar molecules bind to T1R2 sugar receptors in the taste buds on your tongue, the receptors send electrical impulses to the primary taste cortex in your brain. The primary taste cortex then relays the message to the primary reward pathway and ultimately results in your brain being bathed in the cozy pleasure of the neurotransmitter dopamine.
(As an interesting aside: in people who don't regularly consume artificial sweeteners, both sucrose and artificial sweeteners activate the primary taste pathway, giving you the "Hey, that tastes great!" feeling, but only sucrose goes on to activate the primary reward pathway, giving you the "Ahhhh…" shortly after the first swig or bite. See the 2013 Scientific American article by Caitlin Kirkwood for the details.)
The brains of birds and humans have been diverging from those of our common ancestors for about 200 million years, so they are structurally very different. Remarkably though, certain regions of bird brains, such as the nidopallium caudolaterale, show similar responses to gambling as do the prefrontal cortex and reward pathways in humans. For example, certain cells in the brains of pigeons taught to gamble at miniature slot machines begin to fire and release dopamine as anticipation of a payout builds, and others fire upon the inevitable loss-but-almost-won. Presumably, hummingbird brains are also bathed in dopamine when they taste and ingest sugars, which we can perhaps interpret as being a “pleasurable experience” for the bird.
So what does it mean that hummingbirds use the T1R1/T1R3 umami taste receptor to detect sugar? Does the sugar water hummingbirds drink taste sweet or savory? Personally, I’ve had a lot of fun thinking about hummingbirds loving what we think of as the umami flavor.
As a final example, you can try the following visual analogy yourself: Assume your favorite color is red (flavor sucrose) and that you cannot physically survive without eating red bell peppers (drinking sugar water). But through some evolutionary quirk, your eyes are now playing a trick on you whereby your red receptors now respond to green and your green receptors now respond to red. Now stare at the black dot in the middle of the two bell peppers in the figure below for 30 seconds and then look at the second black dot near the bottom.
You should see a lovely, life-saving red bell pepper (the one on the right), but to you it now looks green! Your body still needs you to eat what are in reality red bell peppers, so you end up really enjoying peppers that look green to your new senses.
Regardless of whether sugar tastes sweet or savory to hummingbirds, their brains probably get a similar reward to ours as the sucrose is consumed. So in some sense they may just experience the standard, “Ahhhhh.”
Marc Badger is a biologist and Postdoctoral Scholar in the Combes Lab at UC Davis, where he studies the biomechanics and behavior of maneuvering flight and obstacle avoidance in hummingbirds and bees. He received his PhD from UC Berkeley and his work was recently featured in National Geographic.
Ask the Naturalist is a reader-funded bimonthly column with the California Center for Natural History that answers your questions about the natural world of the San Francisco Bay Area. Have a question for the naturalist? Fill out our question form or email us at atn at baynature.org!
In synthetic chemistry, so-called element-element bonding can be systematically exploited to assemble small building blocks to obtain structures that are more complex than the “starting material” and can be used for the resource-saving production of more precious materials.
In the newly discovered coupling reaction, molecule A is transformed into four-atom boron chain B
Scientists at Heidelberg University’s Institute of Inorganic Chemistry have discovered a hitherto unknown coupling reaction. Two positively charged compounds of the element boron join to form a new molecule with a chain of four boron atoms. The team headed by Prof. Dr. Hans-Jörg Himmel now intends to investigate the further implications of this unexpected bond formation.
In carbon chemistry, element-element coupling reactions play a crucial role. For example, small building blocks with very few carbon atoms of the kind produced by the steam cracking of crude oil are assembled to generate a broad range of products, including plastics, fuels, lipids and detergents, as well as more complex substances like pharmaceutical agents.
Due to this great significance, a large number of synthesis variants have been developed. In their research work the Heidelberg scientists focus on coupling reactions of this kind with compounds involving the element boron, which are similar in structure to the corresponding carbon compounds.
As Professor Himmel explains, the new element-element combinations normally result from a reaction between two electrically neutral or differently polarised atoms, not between two positively or two negatively polarised ones. But now the Heidelberg researchers have discovered a coupling reaction in which two positively charged molecules bond together. This is made possible by so-called multi-centre bonding, which plays a significant role in boron chemistry. “The product of this reaction is a compound with four boron atoms,” says Prof. Himmel. “This in its turn is a promising precursor on the route toward the making of complex boron chains.”
Such compounds of the element boron were unknown so far, says the Heidelberg chemist. He and his team are now investigating the further combination of the four-atom boron chain to form boron chain polymers expected to possess high electrical conductivity and other useful material properties. Such materials would be of interest for electronic and optoelectronic applications, Prof. Himmel concludes. The research results have now been published in “Nature Chemistry”.
Marietta Fuhrmann-Koch | idw
Study on a microfabricated stainless steel surface for anti-biofouling using electrochemical fabrication
Biofilms formed on the surfaces of objects by microorganisms result in biofouling. This causes many problems in daily life, medicine, public health and industry. In this study, we attempted to prevent biofilm formation on stainless steel (SS304) sheet surfaces with microfabricated structures. After forming microscale colloid patterns on the surface of the stainless steel by electrochemical etching, the surface roughness was further increased by etching the pattern with an FeCl3 solution. By culturing Pseudomonas aeruginosa on stainless steel with microstructured surfaces, we observed the relationship between the surface roughness of the microstructure and biofouling. As a result, we confirmed that less biofouling occurs on stainless steel surfaces with a microstructure. We expect this research to help solve problems caused by biofouling in various fields such as medicine and engineering.
Keywords: Anti-biofouling, Micro pattern, Stainless, Electrochemical
Biofilm is a thin film formed by microorganisms: a three-dimensional structure embedded in a self-secreted polymeric matrix on a surface. Biofilms can form on almost any type of solid surface and on the tissues of living organisms. In particular, biofilms formed in water pipes, water purifiers and water-quality monitoring sensors can damage industry and daily life. Biofilm is difficult to remove: it attaches strongly to the surface, and it continuously releases microorganisms from the surface [1, 2]. Biofilms cause serious public-health problems because they act as reservoirs for microorganisms. A biofilm formed on the detection area of a sensor requiring high sensitivity and accuracy degrades the sensor's performance.
Because of these problems, methods for preventing or removing biofilm have been developed. To date, biofilm-preventing coatings and removal methods have had the drawback of affecting not only the biofilm but also the device or surface itself. Physical methods such as sand-blasting, used to remove biofilm from vessel or instrument surfaces, thin protective coatings such as paint and therefore require constant maintenance. On the other hand, wrinkle-like micropatterns, such as those on the skin of whales, the shells of many shellfish and the leaves of the lotus, are effective in preventing the biofilm formation that occurs readily in underwater environments [3, 4, 5, 6, 7, 8].
In this work, microstructures were formed on the surface of stainless steel (SS304), which is widely used for medical and industrial purposes, using electrochemical fabrication (ECF) and an FeCl3 etching solution [9, 10, 11]. After the microstructures were formed, Pseudomonas aeruginosa PA14 was cultured on the surfaces, and the tendency toward biofilm formation was evaluated by staining with crystal violet dye (Gram staining).
Fabrication of micro structure
Surface properties of microscale structure formed surface
FeCl3 etching time of the contact angle and surface roughness
Microbial culture in a stainless steel surface
Biofilm formation tendency of the microstructure fabricated surface
Biofilm formation tendency of the roughness and the contact angle of the surface
In this study, after fabricating microstructures on stainless steel, biofilm formation was analyzed as a function of changes in contact angle and surface roughness. The microstructures were formed using photolithography and electrochemical etching. The contact angle and surface roughness were adjusted through an FeCl3 etching step.
Biofilm formation on the stainless steel surfaces with pore-type microstructures formed through ECF was considerably lower than on unpatterned stainless steel. However, forming smaller pores on the surface by increasing the FeCl3 etching time tended to increase biofilm formation again. Producing a smaller pore pattern increases the roughness of the surface and thus the hydrophilicity of the interface between the surface and the culture fluid. An environment in which cells can attach easily because of the increased hydrophilicity promotes biofilm formation, and some structures appear to act as a scaffold for the formation of thicker biofilm.
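The roughness-hydrophilicity link invoked here is consistent with the classical Wenzel model of wetting, which is not part of this paper's analysis but captures the same idea: roughness amplifies the intrinsic wetting tendency of the smooth surface.

```latex
% Wenzel relation: theta* is the apparent contact angle on the rough
% surface, theta the intrinsic angle on the smooth material, and
% r >= 1 the ratio of actual to projected surface area.
\cos\theta^{*} = r\,\cos\theta
```

Because r ≥ 1, a surface that is already hydrophilic (θ < 90°) becomes effectively more hydrophilic as roughness increases, which matches the observation above that finer FeCl3-etched pores promoted attachment and renewed biofilm growth.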
Research on controlling biofilm formation has been carried out continuously to clarify the mechanism of biofilm formation. The differences in biofilm formation depending on the liquid-surface interface are expected to inform future research on fouling-resistant surfaces and on highly efficient culture media for bacteria, and could be used to reduce the damage caused by fouling organisms.
BJ carried out the experiment and drafted the manuscript. SH participated in the design of the study and performed the analysis. BJ and SH conceived of the study, and participated in its design and coordination. Both authors read and approved the final manuscript.
The authors declare that they have no competing interests.
We would like to acknowledge the financial support of the R&D Convergence Program of the Ministry of Science, ICT and Future Planning (MSIP) and the National Research Council of Science & Technology (NST) of the Republic of Korea (Grant B551179-12-04-00).
- 2. Kim SY, Rhee JI (2008) A study on microorganisms antifouling and optical properties of the sensing membrane surface modified by hydrophobic sol–gels. J Korean Ind Eng Chem 19(2):222–227
- 5. Ha SW, Lee SM, Jeong ID, Jung PG, Ko JS (2007) Surface wettability in terms of prominence and depression of diverse microstructures and their sizes. KSME(A) 31(6):679–685
- 9. Cho MS, Cha SH, Lim NG, Park HW, Cho MS, Cho SH, Cha NG, Lim HW, Park JK, Jo JS (2007) Characterization of SUS molds for light guide plates by electro chemical fabrication (ECF) method. Electron Mater Lett 3(2):93–96
- 11. Mathilde C, Yong C, Frederic M, Anne P, David Q (2005) Microfabricated textured surfaces for super hydrophobicity investigations. Microelectron Eng 78–79:100–105
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
BLOOD STAR of the NEANDERTHALS passed close to our Sun
Small red/brown binary came within 0.8 light years, 70kya
Back when Homo sapiens neanderthalensis strode or shambled or knuckled along the earth, about 70,000 years ago, the prehuman simians may have been able to see a small star passing through the fringes of our solar system.
So say the authors of a new paper, The Closest Known Flyby of a Star to the Solar System, in The Astrophysical Journal Letters.
The star in question is known as Scholz's Star, a binary system comprising a red dwarf and brown dwarf companion first observed in 2013. Boffins have since analysed the star's trajectory and, as the paper explains, concluded that it grazed our Oort cloud.
Lead author Eric Mamajek is associate professor of physics and astronomy at the University of Rochester, which says he became intrigued by the star's unusual path, as it displayed “very slow tangential motion, that is, motion across the sky” but “radial velocity measurements … showed the star moving almost directly away from the solar system at considerable speed.”
That combination's an oddity, so Mamajek's co-authors calculated 10,000 possible paths for the star, 98 per cent of which predicted an Oort encounter.
Artist's impression of Scholz's Star. Michael Osadciw/University of Rochester
The star's passage through the Oort is significant because it is widely believed that region of space is filled with objects that may one day dip into our Sun's gravity well to become comets. The paper says Scholz's star probably passed us at about 52,000 Astronomical Units (a single AU measures the distance between Earth and Sol, just a tick under 150 million kilometres), or 0.8 light years. At that distance, any objects the star disturbed are not going to make themselves known for many millennia.
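Converting the quoted closest approach into light years is a one-liner (using the standard conversion of about 63,240 AU per light year):

```java
public class FlybyDistance {
    public static void main(String[] args) {
        double kmPerLightYear = 9.4607e12;
        double kmPerAu = 1.496e8;
        double auPerLightYear = kmPerLightYear / kmPerAu; // ~63,240
        double approachAu = 52_000.0;
        System.out.printf("1 light year ~ %.0f AU%n", auPerLightYear);
        System.out.printf("52,000 AU ~ %.2f light years%n",
                approachAu / auPerLightYear);
    }
}
```

The answer, about 0.82 light years, agrees with the paper's rounded figure.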
Scholz's star is now about 20 light years away in the constellation Monoceros, and is tricky to spot because it is very dim. But the star is magnetically active and such suns sometimes flare and become much brighter, meaning there's a chance early humans saw something startling as it passed.
Boffins know of no imminent stellar visitor to rival its close encounter, as Mamajek and his colleagues also investigated a star called HIP 85605 that has in the past been thought to be a likely visitor some 250,000 years from now. The team's efforts suggest those assumptions were wrong and that HIP 85605 will overshoot us handily. ®
Experimental approach to suffusion and backward erosion
Internal erosion in dams is viewed by engineers as being of particular concern with regard to safety, as there is a danger that there may be no external evidence, or only subtle evidence, that the erosion is taking place. A dam may breach within just a few hours of internal erosion becoming apparent. In order to assist in finding a solution to the lack of external evidence, a series of experimental tests was developed. The tests consisted of applying hydraulic stresses to reconstructed consolidated cohesive soils to evaluate different types of internal erosion (i.e. suffusion and backward erosion). Different parameters such as hydraulic gradient, confining pressure and clay content were examined. When the hydraulic gradient was small, it was concluded that the erosion of the structure's clay fraction was due to suffusion. When the hydraulic gradient increased, it was concluded that the sand fraction erosion commencement was due to backward erosion. Moreover, the clay content was found to be an important parameter leading directly to internal erosion. The effects of confinement on internal erosion, unlike suffusion, increased backward erosion.
Authors: Raymond HV Gallucci
Expansion Tectonics postulates that the Earth has grown from a much smaller orb over its history, the size once being as low as 1/4 of today's radius at the extreme (more common estimates are around 60% of today's radius). Scientists have also long speculated as to the source of Earth's water, estimated today to be anywhere from 1.5 to 11 times the current volume of the oceans, with a rough agreement around three times today's ocean volume (implying 2/3 today is "within" the Earth). This paper examines some of the implications of Expansion Tectonics in conjunction with a total water volume three times today's ocean volume, with neither an increase in Earth's mass nor in water volume over time. Speculation at the two extremes ranges from an early "saturated sponge" Earth, where all this water was retained within the Earth, with essentially none at the surface (the more common postulate of Expansion Tectonics), to a complete "waterworld," where all of the water was atop the surface, i.e., none within.
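To put rough numbers on the "waterworld" extreme, here is a purely illustrative calculation, not one from the paper: take today's ocean volume as about 1.34 × 10^9 km³, triple it per the paper's working assumption, and spread it uniformly over a proto-Earth of 60% of the present 6,371 km radius. The thin-shell approximation gives a global ocean a couple of dozen kilometres deep:

```java
public class WaterworldDepth {
    public static void main(String[] args) {
        // Round reference values, not figures from the paper:
        double oceanVolumeToday = 1.34e9;            // km^3, present oceans
        double totalWater = 3.0 * oceanVolumeToday;  // the "3x today" assumption
        double radius = 0.60 * 6371.0;               // km, 60%-radius proto-Earth
        double surfaceArea = 4.0 * Math.PI * radius * radius;
        // Thin-shell approximation (valid because depth << radius):
        double depth = totalWater / surfaceArea;
        System.out.printf("Global ocean depth ~ %.0f km%n", depth); // about 22 km
    }
}
```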
Comments: 6 Pages.
[v1] 2018-04-12 13:45:09
Species Detail - Shaded Broad-bar (Scotopteryx chenopodiata) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
2 June (recorded in 2008)
19 September (recorded in 2013)
National Biodiversity Data Centre, Ireland, Shaded Broad-bar (Scotopteryx chenopodiata), accessed 18 July 2018, <https://maps.biodiversityireland.ie/Species/78712> | <urn:uuid:858a1023-4a8a-46e8-8492-ea446e89bc83> | 2.640625 | 152 | Structured Data | Science & Tech. | 35.188416 | 95,568,687 |
Researchers at the Department of Cell Biology, Physiology and Immunology at Universitat Autònoma de Barcelona (UAB) are the first to have cloned mice in Spain.
Cloe, Cleo and Clona are three female brown-coloured mice, born on 12 May, 3 June and 10 June respectively. By means of nuclear transfer techniques, scientists collected mature oocytes, removed their chromosomes and replaced them with the nucleus of an adult somatic cell.
The cloning of mice is part of a research being carried out to study new ways to improve the efficiency of the cloning process.
All three mice were, or are being, suckled alongside non-cloned mice, and their growth parameters are within the normal range, say the researchers in charge of cloning the mice: Nuno Costa-Borges, Josep Santaló and Elena Ibáñez from the Department of Cell Biology, Physiology and Immunology at UAB.
In order to clone the animals, researchers collected oocytes and surrounding cumulus cells from several female mice. The chromosomes were extracted from each of the oocytes and substituted with a cell from the cumulus by cytoplasm injection. Once the oocytes had been reconstructed, they were activated by simulating the stimuli occurring during fecundation so as to induce embryonic development. The cloned embryos were later transferred to receptor females.
The mice obtained by researchers at UAB, in addition to being the first of their species cloned in Spain, are the first cloned animals in Spain to survive birth and develop correctly. In 2003, Spanish scientists cloned a female Pyrenean mountain goat using a cell from the last animal of this species, which became extinct in 2000. The cloned animal, however, died 10 minutes after it was born due to a severe lung defect.
Increase in the efficiency of the cloning process
The cloning of the mice forms part of a research which scientists at UAB are carrying out to discover new ways of improving the efficiency of the cloning process. Nuno Costa-Borges, Josep Santaló and Elena Ibáñez are studying whether the use of valproic acid could contribute to an increase in the success rate of nuclear transfer cloning, currently situated at approximately 1% for mice using standard procedures.
Valproic acid is an inhibitor of the enzyme histone deacetylase, located at the cell nucleus where the DNA is found. Research carried out until now has shown that histone deacetylase inhibitors seem to contribute to an increase in levels of gene expression, which would favour the reprogramming of the somatic cell nucleus transferred to the oocyte cytoplasm. Its use in nuclear transfer processes however is very recent. It was first used two years ago and research until now has focused on trichostatin, an inhibitor which has significantly improved the efficiency of mouse cloning, raising it to 5%.
Studies carried out by researchers at UAB can not only be applied to reproductive cloning of animal models; they can also be used for the reprogramming of cells for therapeutic aims.
Costa-Borges, Santaló and Ibáñez are comparing three groups of cloned embryos in their research: valproic acid in the first group, trichostatin in the second and no inhibiting substance in the third group. The three mice in this case were cloned using the first (Cloe and Clona) and second (Cleo) inhibitors. In vitro experiments already pointed to improvements in the development of cloned embryos using inhibitors. However, scientists must wait until the end of the in vivo test period in July to obtain more conclusive data.
Maria Jesus Delgado | EurekAlert!
I am new to Java programming and I am learning it with some online tutorials. I have a few questions
Are instance initialization and an initialization block the same thing? Similarly, what are static blocks? Could you give me an example that illustrates both?
I want to replace a constructor with an initialization block. Is it possible to do so, which initialization block do I have to use, and what is the basic difference between them?
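A short sketch answers the first two questions (the class name is made up for illustration). A static block runs once, when the class is initialized; an instance initializer block runs on every instantiation, just before the constructor body, so it can hold code shared by several constructors, although unlike a constructor it cannot take parameters:

```java
public class InitDemo {
    static { System.out.println("static block: runs once, when the class is initialized"); }

    // Instance initializer: the compiler copies this into the start of
    // every constructor, right after the superclass constructor call.
    { System.out.println("instance initializer: runs on every new InitDemo()"); }

    InitDemo() { System.out.println("constructor body: runs after the initializer"); }

    public static void main(String[] args) {
        // The static block has already run by now (class initialization
        // happens before main is entered). Each construction below prints
        // the initializer line followed by the constructor line.
        new InitDemo();
        new InitDemo();
    }
}
```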
I don’t understand the difference between class initialization and class instantiation. I tried reading about it but could not understand what they mean. It would help to have some examples that show the differences and where class instantiation and class initialization occur.
Why do we need class initialization and instantiation? And what are the steps taken during initialization and instantiation?
Can we produce different outputs where the only difference is the order of declarations of static variables in Java? Can you show me an example, how we do that, and why that happens?
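On the last question: yes. Static fields are initialized in the textual order of their declarations, so reordering them can change a program's output. One classic illustration (hypothetical values) reads one static field from another's initializer via a helper method:

```java
public class OrderDemo {
    static int a = bPlusOne(); // runs first; b still holds its default value 0
    static int b = 5;          // runs second

    static int bPlusOne() { return b + 1; }

    public static void main(String[] args) {
        System.out.println(a + " " + b); // prints "1 5"
        // Declare b above a instead, and the same line prints "6 5":
        // static initializers execute strictly in textual order.
    }
}
```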
Scientists from the Faculty of Biology and Biotechnology at the RUB have published a report in the Journal of Biological Chemistry explaining why enzymes used for the production of hydrogen are so sensitive to oxygen.
Synchrotron radiation source: The researchers from Bochum and Berlin investigated the hydrogenase protein using the Swiss Light Source at the Paul Scheerer Institute near Zurich. The figure also shows the 3-D structure of the protein. Photo: Camilla Lambertz
New model for enzyme inactivation: Oxygen inactivates the hydrogenase in three phases (left). The longer the enzyme is exposed to oxygen, the greater the number of oxygen particles that bind to the iron atoms of the hydrogenase (blue). This leads to a reduction in the number of bonds between the iron atoms and other atoms (green, black). They are thus no longer able to fulfill their function. The right-hand section of the illustration shows the hypothetical mechanism of the inactivation. Oxygen (O=O) binds to the di-iron center which leads to the development of an aggressive oxygen species. This attacks the four-iron center [4Fe4S], which suppresses its ability to generate hydrogen.
In collaboration with researchers from Berlin, they used spectroscopic methods to investigate the time course of the processes that lead to the inactivation of the enzyme’s iron center. “Such enzymes, the so-called hydrogenases, could be extremely significant for the production of hydrogen with the help of biological or chemical catalysts”, explains Camilla Lambertz from the RUB study group for photobiotechnology. “Their extreme sensitivity to oxygen is however a major problem. In future, our results could help to develop enzymes that are more robust.”
Oxygen as a friend and as an enemy
Oxygen is crucial for the survival of most animals and plants. It is however toxic for many living creatures if the concentration thereof is too high, and some organisms can even only exist entirely without oxygen. Sensitivity to oxygen is also present at the protein level. A large number of enzymes, for example, hydrogenases are known to be irreversibly destroyed by oxygen. Hydrogenases are biological catalysts that convert protons and electrons into technically usable hydrogen. The RUB team of Prof. Thomas Happe is doing research on so-called [FeFe]-hydrogenases which are capable of producing particularly large amounts of hydrogen. The generation of hydrogen takes place at the H-cluster, consisting of a di-iron and four-iron subcluster which, together with other ligands, form the reactive center.
Oxygen attacks the iron centers
The researchers, working in collaboration with Dr. Michael Haumann’s team in Berlin, discovered that oxygen binds to the di-iron center of the hydrogenase, which initiates the inactivation of another part of the enzyme consisting of four further iron atoms. In this project, sponsored by the BMBF, it was possible to show the various phases of the inactivation process for the first time using X-ray absorption spectroscopy. The researchers used the synchrotron radiation source Swiss Light Source in Switzerland for this specific type of measurement. It generates particularly intense radiation, enabling the characterization of metal centers in proteins. Among other things, the scientists determined the chemical nature of the iron centers and their distances from surrounding atoms at atomic resolution.
Inactivation in three phases
The team of researchers from Bochum and Berlin used a new experimental procedure. They initially brought the hydrogenase sample into contact with oxygen for a few seconds to minutes and finally for a couple of hours and then suppressed all proceeding reactions by deep-freezing it in liquid nitrogen. The subsequently gained spectroscopic data was used for the development of a model for a three-phase inactivation process. According to this model, an oxygen molecule initially binds to the di-iron center of the hydrogenase, which leads to the development of an aggressive oxygen species. In the subsequent phase, this attacks and modifies the four-iron center. During the final phase, further oxygen molecules bind and the entire complex disintegrates. ”The entire process thus consists of a number of consecutive reactions that are distinctly separated in time”, says Lambertz. “The velocity of the entire process is possibly dependent on the phase during which the aggressive oxygen species moves from the di-iron to the four-iron center. We are currently elaborating further experiments to investigate this.”
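The "consecutive reactions, distinctly separated in time" picture is exactly what a chain of sequential first-order reactions produces. As a toy sketch only, with arbitrary rate constants rather than measured ones, an Euler integration of intact enzyme → O2-bound di-iron centre → damaged four-iron centre → disintegrated cluster shows each population peaking and decaying in turn:

```java
public class SequentialKinetics {
    public static void main(String[] args) {
        // Populations: a = intact enzyme, b = O2 bound at the di-iron centre,
        // c = damaged four-iron centre, d = fully disintegrated cluster.
        double a = 1.0, b = 0.0, c = 0.0, d = 0.0;
        // Arbitrary illustrative rate constants (not measured values).
        double k1 = 1.0, k2 = 0.2, k3 = 0.05;
        double dt = 0.01; // Euler time step
        for (int step = 0; step <= 6000; step++) {
            if (step % 500 == 0)
                System.out.printf("t=%5.1f  A=%.3f  B=%.3f  C=%.3f  D=%.3f%n",
                        step * dt, a, b, c, d);
            double f1 = k1 * a * dt, f2 = k2 * b * dt, f3 = k3 * c * dt;
            a -= f1;
            b += f1 - f2;
            c += f2 - f3;
            d += f3;
        }
    }
}
```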
C. Lambertz, N. Leidel, K.G.V. Havelius, J. Noth, P. Chernev, M. Winkler, T. Happe, M. Haumann (2011) O2-reactions at the six-iron active site (H-cluster) in [FeFe]-hydrogenase, Journal of Biological Chemistry, doi: 10.1074/jbc.M111.283648
Further Information:
Camilla Lambertz, Photobiotechnology Group, Faculty of Biology and Biotechnology, Ruhr-Universität Bochum, Tel. +49 234 32 24496, Camilla.Lambertz@rub.de
Thomas Happe, Photobiotechnology Group, Faculty of Biology and Biotechnology, Ruhr-Universität Bochum, Tel. +49 234 32 27026
Editor: Dr. Julia Weiler
Dr. Josef König | idw
Pioneering new research from the University of Exeter could have a major impact on climate and environmental science by drastically transforming the perceived reliability of key observations of precipitation, which includes rain, sleet and snow.
The groundbreaking study examines the effect that increased aerosol concentrations in the atmosphere, emitted as a result of burning fossil fuels, had on regional temperature and precipitation levels.
Scientists from Exeter's Mathematics department compared observed regional temperature and precipitation changes throughout the 20th century with results produced by the latest complex climate models over the same period.
The study showed that the observed regional temperature changes, as well as observed precipitation levels in the tropics, were in agreement with the range of the modelled responses given current best estimates of the influence of aerosols on the Earth's energy budget.
However, when looking at geographical areas within the Northern Hemisphere mid-latitudes – which includes Europe, much of North Asia and North America – the study showed a significant discrepancy between observed precipitation levels and those predicted from the models.
This new analysis could transform our understanding of observed changes in the local hydrological cycle and offer a unique opportunity to correct for potential biases in measurements.
The new study, published in leading scientific journal Nature Climate Change, was produced by Joe Osborne and Dr Hugo Lambert, from Exeter's College of Engineering, Mathematics and Physical Sciences.
Dr Lambert explained: "Scientists have known that observed mid-latitude precipitation trends may be in error for many years. Our new physical framework fits together temperature changes, aerosol changes and other precipitation changes to show by how much. We now have the opportunity to correct 20th century precipitation trends."
The concentration of human-made aerosols in the atmosphere increased rapidly in the decades following the Second World War. Although aerosols interact with clouds and precipitation in complex ways, the primary effect is to reflect sunlight and cool the planet's surface. Hence, physical theory and modelling suggest there are robust expectations for regional temperature and precipitation change.
The study showed that climate models replicate the mid-twentieth-century fall in temperature linked to increased aerosol concentrations that is seen in observations.
It also showed that models and observations were in agreement over a reduction in rainfall in the Northern Hemisphere tropics around the same time, which is associated with the severe Sahel drought of the 1970s.
However, there was a dramatic discrepancy between the expected change in precipitation across the Northern Hemisphere mid-latitudes – where industrialisation occurred most heavily in the 20th century – and observations. While the modelling and physical theory suggested precipitation should fall, observations suggest that it increased.
Joe, a PhD student and lead author, explained the significance of the study. He said: "The study shows that precipitation in two key regions alters in line with mid-20th Century changes in aerosol across a number of the latest climate models. This can be understood in terms of the aerosol influence on the amount of energy received at the Earth's surface and consequent changes in atmospheric circulation.
"However, we also show that the response of precipitation observations in the mid-latitudes is not as we might have expected, given the models and our understanding."
The missing aerosol response in twentieth-century mid-latitude precipitation observations by Joe Osborne and Dr Hugo Lambert is published in Nature Climate Change.
For further information please contact:
+44 (0) 1392 722391/ 722062
About the University of Exeter
The University of Exeter is a Russell Group university and in the top one percent of institutions globally. It combines world-class research with very high levels of student satisfaction. Exeter has over 18,000 students and is ranked 8th in The Times and The Sunday Times Good University Guide league table, 10th in The Complete University Guide and 12th in the Guardian University Guide 2014. In the 2008 Research Assessment Exercise (RAE) 90% of the University's research was rated as being at internationally recognised levels and 16 of its 31 subjects are ranked in the top 10, with 27 subjects ranked in the top 20. Exeter was The Sunday Times University of the Year 2012-13.
The University has invested strategically to deliver more than £350 million worth of new facilities across its campuses in the last few years; including landmark new student services centres - the Forum in Exeter and The Exchange on the Penryn Campus in Cornwall, together with world-class new facilities for Biosciences, the Business School and the Environment and Sustainability Institute. There are plans for another £330 million of investment between now and 2016.
Duncan Sandes | EurekAlert!
In a novel integrative study led by Dr. Sébastien Puechmaille, University of Greifswald, Germany, Prof. Emma Teeling, University College Dublin, Ireland and PD Dr. Björn Siemers, Max Planck Institute for Ornithology, Germany to be published in PLOS ONE, the researchers tested the role of echolocation in mate choice.
They showed for the first time that bats are indeed ‘listening’ for a good mate rather than ‘looking for one’. Combining ecology, genetics and behavioural experiments of wild bats, this study showed that female Rhinolophus mehelyi horseshoe bats are using echolocation to choose a mate.
Probably the most important decision in any animal’s life, including our own, is finding the best mate! If you make the wrong decision then this can have repercussions for generations. Your choice of mate enables the transfer and continuation of your genes and indeed can drive the evolution of a species.
For humans, typically the first things that attract you to a potential mate are visual cues: how good-looking or attractive a person is. ‘Good looks’ have been shown to be correlated with ‘good genes’, or better fitness, so this makes sense. Well, imagine having to find a mate in total darkness. This is the ultimate challenge that bats face.
Of all mammal species, bats are the hearing specialists. They use echolocation or sonar, to orient and detect prey in complete darkness, relying on the echo of their ultrasound calls to develop an acoustic image of their environment. Bat echolocation is considered to be one of the most fascinating yet least well understood modes of sensory perception.
Unlike bird song, the primary role of echolocation in bats is for orientation and finding food in complete darkness. Little is known about its use in communication and mate choice. Indeed, only in the past decade has it been suggested that echolocation calls can actually encode information on sex, body size, age and its role in mate choice has never been tested.
To test this hypothesis, the researchers designed a suite of experiments and analyses on a wild population of Rhinolophus mehelyi near the Tabachka research station in Bulgaria. The results of the experiments were pretty clear. The higher the frequency of the male’s call, the more attractive the male is to the females. The higher the frequency of the male’s call, the more offspring he will sire. Indeed, the male’s echolocation call seems to act like a ‘peacock’s tail’: the higher the frequency, the more attractive the call.
However, despite this sexual advantage, these higher frequency calls are considered to be less efficient for foraging, and ultimately are an ‘attractive’ handicap. The male might ‘sound’ better to females, but he can’t hunt as optimally. It appears that the females’ preference for males with higher echolocation call frequency may explain why this species echolocates about 30 kHz higher than expected, something that had puzzled scientists until now. This evolutionary ‘trade-off’ between efficiency and attractiveness moulds and constrains bat echolocation.
This study is the first to show the role of echolocation in mate choice and will become a reference for future studies. Its power comes from the integration of three different types of data, ecological, behavioural and genetics. These results highlight bats as a novel system in which to explore the role of sound in mate choice and the potential conflict between natural and sexual selection on specific traits in evolution.
This paper has just been published with Open Access in PLOS ONE (http://dx.plos.org/10.1371/journal.pone.0103452) on 30th July 2014. [Female mate choice can drive the evolution of high frequency echolocation in bats: a case study with Rhinolophus mehelyi. DOI: 10.1371/journal.pone.0103452]. It represents an Irish-German project, funded by an IRCSET-Marie Curie International Mobility Fellowship in Science, Engineering and Technology awarded to S.J. Puechmaille.
Contact details of authors for correspondence
Dr. Sébastien Puechmaille
Ernst-Moritz-Arndt-University of Greifswald
17489 Greifswald, GERMANY
Phone +49 3834 86-4068
Prof. Emma Teeling
School of Biology and Environmental Science
University College Dublin, IRELAND
Phone +35 31 7162263
Jan Meßerschmidt | idw - Informationsdienst Wissenschaft
Zombie Math - Modeling The Attack
Finally, the zombie apocalypse has been quantified. A group of Canadian researchers have presented a sophisticated mathematical analysis of typical zombie attack scenarios.
Why would professionals attempt a treatment of this (fictional) problem?
This is, perhaps unsurprisingly, the first mathematical analysis of an outbreak of zombie infection. While the scenarios considered are obviously not realistic, it is nevertheless instructive to develop mathematical models for an unusual outbreak. This demonstrates the flexibility of mathematical modelling and shows how modelling can respond to a wide variety of challenges in ‘biology’.
First, of course, you must define your terms. What kind of zombies are we concerned about? Not those merely historical zombies.
Modern zombies (the ones illustrated in books, films and games) are very different from the voodoo and the folklore zombies. Modern zombies follow a standard, as set in the movie Night of the Living Dead. The ghouls are portrayed as being mindless monsters who do not feel pain and who have an immense appetite for human flesh. Their aim is to kill, eat or infect people. The ‘undead’ move in small, irregular steps, and show signs of physical decomposition such as rotting flesh, discoloured eyes and open wounds. Modern zombies are often related to an apocalypse, where civilization could collapse due to a plague of the undead.
I think that the zombies in the recent film Shaun of the Dead meet the criteria.
(Video: Shaun of the Dead zombie attack)
They examine all of the usual scenarios: latent infection, forced quarantine of infected individuals, impulsive eradication models, and the creation of a treatment or cure. Their conclusion?
In summary, a zombie outbreak is likely to lead to the collapse of civilisation, unless it is dealt with quickly. While aggressive quarantine may contain the epidemic, or a cure may lead to coexistence of humans and zombies, the most effective way to contain the rise of the undead is to hit hard and hit often. As seen in the movies, it is imperative that zombies are dealt with quickly, or else we are all in a great deal of trouble.
From When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection via Wired. Thanks to Moira for pointing this one out.
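For readers curious what the underlying model looks like, here is a minimal numerical sketch of the paper's basic SZR system (susceptibles S, zombies Z, removed R). The parameter values are illustrative choices in the spirit of the paper's short-outbreak example, not fitted numbers:

```python
from scipy.integrate import solve_ivp

# Basic SZR model (Munz et al.):
#   S' = Pi - beta*S*Z - delta*S           births, transmission, natural death
#   Z' = beta*S*Z + zeta*R - alpha*S*Z     new zombies, resurrection, destruction
#   R' = delta*S + alpha*S*Z - zeta*R      removed pool that can resurrect
Pi = 0.0                      # births negligible on an outbreak timescale
beta, alpha = 0.0095, 0.005   # transmission vs. zombie destruction (illustrative)
zeta, delta = 0.0001, 0.0001  # resurrection and background death rates

def szr(t, y):
    S, Z, R = y
    return [Pi - beta * S * Z - delta * S,
            beta * S * Z + zeta * R - alpha * S * Z,
            delta * S + alpha * S * Z - zeta * R]

sol = solve_ivp(szr, (0.0, 30.0), [500.0, 1.0, 0.0])  # 500 humans, 1 zombie, 30 days
S, Z, R = sol.y[:, -1]
print(f"day 30: S = {S:.1f}, Z = {Z:.1f}, R = {R:.1f}")  # civilisation collapses
```

Running the sketch reproduces the paper's qualitative conclusion: without intervention, the susceptible population crashes within days.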
INEXPLICABLY odd images on Bureau of Meteorology radar. Cyclones off the Australian coast and the most intense storm to hit Melbourne in living memory. A controversial US military facility in Alaska suspected of research into weather control …
It sounds like the plot of a sci-fi conspiracy thriller. In the past five months, the Bureau of Meteorology's weather radar has indeed recorded a number of very strange patterns - rings, loops, starbursts - at a number of places, including Melbourne, Broome and central Queensland, suggestive of some sort of massive interference.
In that time, cyclones Olga and Ului have lashed Queensland, and Melbourne was bombarded with hailstones in one of the most intense and costly storms in the city's history.
And, yes, there is a US Air Force research program, the High Frequency Active Auroral Research Program (HAARP), based at Gakona, Alaska, that has attracted attention from conspiracy theorists for its high-powered transmitters and powerful radars.
So, since the start of the year, websites and blogs that specialise in pseudo-science and conspiracy - such as the one run by British crop-circle ''investigator'' Colin Andrews - have leapt on the notion that there could be a connection between the unusual radar patterns, the odd weather and HAARP.
''My website is receiving many thousands of visitors who are asking for an explanation for these (radar) patterns,'' wrote Mr Andrews on his website. ''World class scientists have also contacted me … these scientists have expressed the opinion that the top-secret joint program between the US, UK, Canada and Australian governments, called the High Frequency Active Auroral Research Program (HAARP), is behind the interference. They believe the program is involved in weather modification.''
And there's more. ''A much bigger international story is waiting in the wings about what, or who, caused the 1000-year super drought to turn into the 100-year super storm over Melbourne on Friday, March 5.''
It's not the first time HAARP has come under scrutiny. Over the years, it's been blamed for floods, droughts, the Haiti earthquake, power outages and aircraft disasters.
But the Bureau of Meteorology, which has received numerous inquiries about the radar anomalies, including from Mr Andrews, isn't impressed.
And for sceptics, it's all too silly for words. HAARP is exactly what it professes to be, a facility researching Earth's upper atmosphere, and the radar images, while intriguing, aren't showing the results of a top-secret weather-control experiment.
According to a statement from the bureau to The Sunday Age, ''reflectors such as physical obstructions, sea waves, insects and birds and atmospheric effects'', as well as ''interference from local transmission sources'', are responsible for the unusual images.
The executive officer of Australian Skeptics, Tim Mendham, pointed out that Mr Andrews ''believes in every conspiracy going - [end of the world in] 2012, crop circles, everything''.
Mr Mendham said Mr Andrews, and others of his ilk, had no evidence to support their theories but it was no use pointing this out.''There's no way of arguing with someone who claims it's a cover-up,'' he said. ''No matter how much scientific evidence you give them, they can't give up. If someone gives them evidence disproving their theory they move sideways.''
The Sunday Age tried to contact Mr Andrews, who is based in the US, but there was no reply. That could be because, according to his website, he was in Oregon for last weekend's 11th annual UFO Festival.
What if what we perceive as a 3D universe is really encoded in a two-dimensional hologram? This theory is popular, to say the least.
The Holographic Principle
Holograms are beyond intriguing. With holograms, what seems to be 3D is only a 2D illusion. Credit cards and bank notes are prime examples of this.
The Holographic Principle says the universe needs fewer dimensions than we thought in order to have a correct mathematical description. Previously, this principle had only been tested in spaces with negative curvature. Scientists at TU Wien (Vienna) now conclude that the principle can hold in flat space-time as well.
Daniel Grumiller of TU Wien says: “In 1997, physicist Juan Maldacena suggested that there is a correspondence between gravitational theories in curved spaces and quantum field theories in fewer-dimensional spaces.”
In such a correspondence, gravitational phenomena are described in a theory with three spatial dimensions, while the behaviour of quantum particles is calculated in a theory with just two, and the results of the two calculations can be mapped onto each other. More than ten thousand papers building on Maldacena's correspondence have been published, attesting to how successful the method has been.
We do not, however, appear to live in such an anti-de Sitter space. Whereas our universe is quite flat, these negatively curved spaces have strange properties: an object thrown straight ahead can return to its point of origin. However important the theory is, it seems to have little to do with our corner of the universe.
“Our universe, on astronomical distances, has a positive curvature,” says Grumiller.
Grumiller does, however, believe there could be a correspondence between the two. To test this idea, theories have to be constructed that do not require exotic settings such as anti-de Sitter spaces; flat spaces are the testing ground needed. The validity of the correspondence in a flat universe has now been confirmed by Grumiller and colleagues from Japan and India, and published in Physical Review Letters. For three years, Grumiller, along with researchers from the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto, has been working on these theories.
“Calculations must agree,” says Grumiller. “If quantum gravity in flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities that can be calculated in both theories, and the results must agree.”
When quantum particles become entangled, they behave as a single object, however far apart they are. Quantum entanglement, one of the most important features of quantum mechanics, must therefore appear in the gravitational theory as well. The entropy of entanglement measures the amount of entanglement in a quantum system. Daniel Grumiller, together with Max Riegler, Arjun Bagchi and Rudranil Basu, showed that this entropy of entanglement takes the same value in flat-space gravity as in the corresponding lower-dimensional quantum theory.
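For small systems the entropy of entanglement can be computed in a few lines; here is a minimal sketch (plain NumPy, not the field-theory calculation in the paper) for the simplest maximally entangled state of two qubits:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a coefficient matrix M[i, j] = amp(|i>_A |j>_B).
M = np.array([[1.0, 0.0],
              [0.0, 1.0]]) / np.sqrt(2)

# Reduced density matrix of subsystem A (partial trace over B): rho_A = M M^dagger.
rho_A = M @ M.conj().T

# Von Neumann entropy S = -Tr(rho ln rho), from the eigenvalues of rho_A.
evals = np.linalg.eigvalsh(rho_A)
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(S, np.log(2))  # S equals ln 2, the maximum for a pair of qubits
```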
“Just a few years ago, it was hard to imagine such things. All this talk about quantum information and entropy of entanglement would have seemed rather far-fetched,” says Grumiller. “The fact that we are able to use tools to test the holographic principle is remarkable.”
The holographic principle, then, may prove valid even for a flat universe like ours: we may actually be living within a two-dimensional hologram. Of course, there is no conclusive evidence yet, but the signs point to yes. How’s that for unexpected?
When Math Discoveries Led to Banned Numbers
The literature world has seen more than its share of controversy. The best stories tend to provoke the strongest reactions—both positive and negative—in readers, which is why so many classic books have been banned at one point or another. But even a more objective field like math isn’t immune to conflict. In its new video, TED-Ed rounds up the numbers that caused such a stir when they were introduced that they were banned in math circles.
One of the earliest examples comes from ancient Greece. A mathematician named Hippasus was having trouble solving certain equations with fractions and whole numbers alone, so he came up with irrational numbers to make these values easier to express. The ruling school of thought at the time dictated that everything in nature could be explained elegantly with the numbers that already existed. Threatened by Hippasus’s new notion, his fellow mathematicians rejected the irrational numbers and had him exiled.
Other numbers have been banned for legal reasons. When Arab traders brought their positional number system, which included zero, to Italy in the Middle Ages, Florence banned it from record-keeping, fearing that the new numerals would be easier to forge than Roman numerals. The Arabic way of counting also led to the rise of negative numbers, which were regarded with disdain by many experts into the 19th century. For more banned numbers, including some that are prohibited today, check out the full story below.
University of Iowa physicist says current in one iron magnetic sheet can create quantized spin waves in another, separate sheet
Friction and drag are commonplace in nature. You experience these phenomena when riding in an airplane, running current through paired electrical wires, or rubbing pieces of sandpaper together.
This illustration shows how the magnetic fields of individual atoms, reimagined as bar magnets, change position like tiny compasses when heat or a current is applied to a solid material. The repositioning creates a spin wave, shown by the dotted line. These spin waves are being studied for potential use in microelectronics.
Illustration courtesy of Michael Flatté laboratory.
Friction and drag also exist at the quantum level, the realm of atoms and molecules invisible to the naked eye. But how these forces interact across materials and energy sources remain in doubt.
In a new study, University of Iowa theoretical physicist Michael Flatté proposes that a magnetic current flowing through a magnetic iron sheet will cause a current in a second, nearby magnetic iron sheet, even though the sheets aren't connected. The movement is created, Flatté and his team say, when electrons whose magnetic spin is disturbed by the current on the first sheet exert a force, through electromagnetic radiation, to create magnetic spin in the second sheet.
The findings may prove beneficial in the emerging field of spintronics, which seeks to channel the energy from spin waves generated by electrons to create smaller, more energy-efficient computers and electronic devices.
"It means there are more ways to manipulate through magnetic currents than we thought, and that's a good thing," says Flatté, senior author and team leader on the paper published June 9 in the journal Physical Review Letters.
Flatté has been studying how currents in magnetic materials might be used to build electronic circuits at the nanoscale, where dimensions are measured in billionths of a meter, or roughly 1/50,000 the width of a human hair. Scientists knew that an electrical current introduced in a wire will drag a current in another nearby wire. Flatté's team reasoned that the same effects may hold true for magnetic currents in magnetic layers.
In a magnetic substance, such as iron, each atom acts as a small, individual magnet. These atomic magnets tend to point in the same direction, like an array of tiny compasses fixated on a common magnetic point. But the slightest disturbance to the direction of just one of these atomic magnets throws the entire group into disarray: The collective magnetic strength in the group decreases. The smallest individual disturbance is called a magnon.
Flatté and his team report that a steady magnon current introduced into one iron magnetic layer will produce a magnon current in a second layer -- in the same plane of the layer but at an angle to the introduced current. They propose that the electron spins disturbed in the layer where the current was introduced engage in a sort of "cross talk" with spins in the other layer, exerting a force that drags the spins along for the ride.
"What's exciting is you get this response (in the layer with no introduced current), even though there's no physical connection between the layers," says Flatté, professor in the physics department and director of the Optical Science and Technology Center at the UI. "This is a physical reaction through electromagnetic radiation."
How electrons in one layer communicate and dictate action to electrons in a separate layer is somewhat bizarre.
Take electricity: When an electrical current flows in one wire, a mutual friction drags current in a nearby wire. At the quantum level, the physical dynamics appear to be different. Imagine that each electron in a solid has an internal bar magnet, a tiny compass of sorts. In a magnetic material, those internal bar magnets are aligned. When heat or a current is applied to the solid, the electrons' compasses get repositioned, creating a magnetic spin wave that ripples through the solid. In the theoretical case studied by Flatté, the disturbance to the solid excites magnons in one layer that then exert influence on the other layer, creating a spin wave in the other layer, even though it is physically separate.
"It turns out there is the same effect with spin waves," Flatté says.
Contributing authors include Tianyu Liu with the physics and astronomy department at the UI and Giovanni Vignale at the University of Missouri, Columbia.
The U.S. National Science Foundation funded the research through grants to the Center for Emergent Materials.
Richard Lewis | EurekAlert!
Quantum Computing or: "Quantum Chat Room"
Introduction - Quantum Computers and Quantum Information Science
Piet Brouwer, Free University of Berlin, Germany
Quantum mechanics, which was developed in the first half of the 20th century, is the fundamental microscopic theory underlying all physical phenomena. While quantum mechanics is essential for a description of the microscopic scale, Newtonian or "classical" mechanics still provides an excellent description of the macroscopic world.
One of the profound differences between quantum mechanics and classical mechanics is that in classical mechanics the state of a system can be known (measured) with arbitrary precision, whereas the quantum mechanical description harbors innate uncertainties that cannot be resolved by any measurement. Also, quantum mechanics allows the principle of "superposition", known from the theory of waves, which is foreign to classical mechanics.
Since the macroscopic world in which we live is governed by classical mechanics, classical mechanics has shaped the way we think about information and information processing. Indeed, in spite of the fact that computers are governed by quantum mechanics - after all, quantum mechanics governs all phenomena -, our thinking of the information stored in the computer is still guided by classical mechanics. Thus, the fundamental unit of information is a "bit", which at all times can take the values "0" or "1" only. It is essential for the proper processing of information that the information content of the bits is known at all times.
A "quantum computer" is a computer in which the processing of information itself makes essential use of the laws of quantum mechanics. In a quantum computer, information is stored in "quantum bits" or "qubits". In such qubits, information is not stored as "0" or "1", but as a full quantum mechanical state, including its fundamental uncertainties and the possibility of superposition. Operations on these bits are such that they preserve any uncertainties and superpositions. Hence, a quantum computer is not simply a different physical device. It is also a device that calls for a radically new way of information processing, called "quantum information science".
The (so far purely theoretical) discipline of quantum information science has shown that there are important information processing tasks for which a hypothetical quantum computer outperforms a "classical computer". An example of such a task is the factoring of the product of two large prime numbers. The so-called RSA encryption method (named after its inventors Rivest, Shamir, and Adleman) makes essential use of the fact that this task is extremely time-consuming for a standard computer. In 1994, Shor showed that factoring the product of two large prime numbers becomes a tractable problem if information can be processed in the quantum mechanical way. Hence, if a quantum computer were ever built, it would break RSA encryption.
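The quantum speed-up in Shor's algorithm is confined to one subroutine: finding the order r of a number a modulo N. The arithmetic around it is entirely classical, as this sketch shows for the toy case N = 15; here the order is found by brute force, which is exactly the step a quantum computer would perform efficiently:

```python
from math import gcd

def order(a, N):
    # Smallest r > 0 with a**r = 1 (mod N). Brute force here; in Shor's
    # algorithm this step is done efficiently by the quantum subroutine.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1, "a sharing a factor with N would already factor N"
    r = order(a, N)
    if r % 2:
        return None            # odd order: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None            # trivial square root: retry
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))   # order of 7 mod 15 is 4 -> factors (3, 5)
```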
This introduction describes the basics of quantum information science and discusses a few key ideas about why quantum computers can outperform their classical counterparts for certain tasks.
A broadly accessible introduction to quantum information science is the recent book “Quantum Computer Science, An Introduction” by N. David Mermin, (Cambridge University Press, 2007).
Many universities have institutes devoted exclusively to quantum information science. An example is the Perimeter Institute in Waterloo, Ontario, Canada. Its quantum-information website, perimeterinstitute.ca/Outreach/What_We_Research/Quantum_Information/, contains a lot of interesting information as well as links to other quantum information resources at all levels.
All living organisms, including human beings, consist of a number of specialised cell types that all originate from the same type of primal cell; the embryonic stem cell. Stem cells can develop into any type of cell through a carefully regulated process referred to as cellular differentiation.
During differentiation, specific genes are switched on while other genes are switched off. The genes that are activated during differentiation determine which type of cell the stem cell will become. The result is that cells in a particular organ, e.g. a liver, only express genes specific to that organ.
Director of BRIC, Professor Kristian Helin, led the research team consisting of Jesper Christensen, Karl Agger and Paul Cloos. Last year, the same research group published an article in Nature on how a group of Jumonji proteins regulates the growth of cancer cells and is involved in the development of specific cancer types.
BRIC’s new results show that a different subgroup of Jumonji proteins is essential for cellular differentiation. The Jumonji enzymes can turn off, or inactivate, particular genes that play an important part in embryogenesis. The conclusions are based on studies of the nematode (roundworm) C. elegans and studies of mouse embryonic stem cells. The C. elegans studies were carried out in collaboration with another of BRIC’s research groups, led by Associate Professor Lisa Salcini.
The BRIC researchers are currently developing inhibitors to the Jumonji proteins. Their aim is to use these inhibitors to treat cancer patients with increased levels of the Jumonji proteins.
Anne Dorte Bach | alfa
A marine biologist has dredged up an unknown animal from the seafloor. Describe some of the characteristics she should look at to determine the phylum to which the animal should be assigned.
This is a daunting task, as marine diversity is much greater than diversity on land or in freshwater. This predominance of marine higher taxa is believed to exist because most of the fundamental patterns of organisation and body plan, i.e. the different basic kinds of organism that are distinguished as phyla, originated in the sea and remain there; only a subset of them has spread onto land and into freshwaters.
To go about this problem it is helpful to have some background information on the typical phyla you may encounter upon dredging the seafloor.
"Characteristics of Different Phyla"
Porifera: These animals are asymmetrical, meaning they have no symmetry. They are filter feeders by means of flagellated cells; filter feeders pump in water so that they can obtain their food from what is in the water. In some cases, the exit opening, or osculum, can be seen with the naked eye. Example: sponges.
Cnidaria: Cnidarians have medusa and polyp stages. The medusa stage is free-swimming, as in a jellyfish; the polyp stage is stationary, as in an anemone. They have radial symmetry with stinging cells called nematocysts. Examples: jellyfish, hydroids, anemones and coral.
Platyhelminthes: These are unsegmented worms that are flattened dorsoventrally. They move by contracting muscles down the body (undulating). They have bilateral symmetry with two eyespots that are sometimes visible. Example: flatworms such as the Mexican skirt dancer.
Annelida: These are segmented worms with bilateral symmetry. Each of their segments has a pair of parapodia for movement, which are usually visible to the naked eye. Examples: fire worms and tube worms.
These animals have bilateral symmetry and segmented bodies. ...
This question asks how to classify an unknown marine animal. The solution provides extensive background information on the possible phylum's the marine animal can belong to. As well, the solution shows step-by-step how to classify any unknown marine animal into the correct phylum. This answer key is also useful for similar questions that ask how to classify an unknown organism. | <urn:uuid:aaf24c3c-80a6-4838-ac48-c4a1265ef9e4> | 4.125 | 511 | Q&A Forum | Science & Tech. | 44.926779 | 95,568,904 |
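To make the procedure concrete, here is a toy dichotomous key in code. It encodes only the handful of characters listed in this solution (a real key would use many more), and the character names are this sketch's own:

```python
def guess_phylum(symmetry, has_nematocysts=False, segmented=False,
                 flattened=False, has_parapodia=False):
    """Toy dichotomous key using only the characters described above."""
    if symmetry == "asymmetrical":
        return "Porifera (sponges)"            # asymmetrical filter feeders
    if symmetry == "radial" and has_nematocysts:
        return "Cnidaria (jellyfish, anemones, coral)"
    if symmetry == "bilateral":
        if flattened and not segmented:
            return "Platyhelminthes (flatworms)"
        if segmented and has_parapodia:
            return "Annelida (fire worms, tube worms)"
    return "undetermined: examine more characters"

# Example: bilateral, segmented, with paired parapodia on each segment
print(guess_phylum("bilateral", segmented=True, has_parapodia=True))
# -> Annelida (fire worms, tube worms)
```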
Dr Sean Solomon, MESSENGER’s Principal Investigator, will present a model that suggests that the origin of the Pantheon Fossae, a radiating web of troughs located in the giant Caloris Basin, is directly linked to an impact crater at the centre of the web.
The Caloris Basin is the youngest-known large impact basin on Mercury. The basin was discovered in 1974 during Mariner 10’s flyby, but the centre of the basin had not been seen until MESSENGER’s first flyby on 14th January.
MESSENGER revealed that the crater’s interior appeared to have been flooded by volcanic material in a similar way to the lunar mare basins. A ring of troughs was observed around the circumference of the basin. However, the biggest surprise was the discovery of radiating pattern of troughs, initially dubbed “the spider” by the team, which was unlike any structure seen in lunar basins or elsewhere on Mercury.
The troughs are hundreds of kilometres in length and the central crater, named Apollodorus after the architect of the Pantheon temple in Rome, is about 40 kilometres across. Several models have been proposed for their formation, including uplift of the basin due to heating from below, pressure building up from the superposition of surrounding plains or inward crustal flow. However, to date, none of these models could explain the radial pattern observed.
Dr Solomon and colleagues developed a three-dimensional model of deformations in Mercury’s crust in the Caloris basin and then modelled the effect of an asteroid impact at the centre.
“We found that stresses building up within the crust could explain the troughs found around the circumference of the basin but not the radial web at the centre. When we modelled the effect of a meteorite striking the centre of a pre-stressed basin floor, we found that the formation of the crater relieved the stress build-up and weakened the central area, allowing the troughs to spread out like cracks in a windscreen,” said Dr Solomon.
As the crater appears to be superimposed over the troughs, it appears that the Pantheon network formed simultaneously with the Apollodorus crater.
However, not all scientists agree that the crater’s presence at the centre of the web is anything more than coincidence.
Professor Jim Head, of Brown University, Rhode Island, and co-investigator of the MESSENGER mission believes that the Pantheon troughs could also have been caused by volcanic activity. An upflow of magma at the centre of the basin could have formed a reservoir at depth and a radial network of dykes.
“The first MESSENGER flyby provided a lot of evidence that volcanism has played an important role in Mercury’s history, in particular around the Caloris Basin. We found what appears to be a shield volcano located just outside the Caloris Basin and the area is surrounded by smooth plains, relatively free from impacts, which suggests a young surface. Given the amount of volcanic activity we’re discovering in that area, I wouldn’t want to rule out a volcanic cause just yet. Maybe MESSENGER’s second flyby will help us solve the mystery,” said Prof Head.
Anita Heward | alfa
This self-contained textbook provides a modern description of the Standard Model and its main extensions from the perspective of neutrino physics. In particular, it includes a thorough discussion of the varieties of the seesaw mechanism, with or without supersymmetry. It also discusses schemes where neutrino mass arises from lighter messengers, which might lie within reach of the world's largest particle accelerator, the Large Hadron Collider. Throughout the text, the book stresses the role of neutrinos, both because neutrino properties may serve as a guide to the correct model of unification, and hence to a deeper understanding of high energy physics, and because neutrinos play an important role in astroparticle physics and cosmology. Each chapter includes a summary and a set of problems, as well as further reading.
posted by Elie
A hydrocarbon gas burned completely in O2 to give CO2 and H2O. It was found that one volume of hydrocarbon gas at STP produced two volumes of CO2 and three volumes of steam, corrected to the same temperature and pressure. What is the empirical formula of this gas? Please show me how to work out the formula.
See your post above. | <urn:uuid:32f1ca23-a017-43cf-9940-e479a986b795> | 2.875 | 82 | Q&A Forum | Science & Tech. | 56.368 | 95,568,933 |
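For reference, the standard volume-ratio argument runs as follows (a worked sketch). Write the combustion of a general hydrocarbon as

CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O

By Avogadro's law, equal volumes of gases at the same temperature and pressure contain equal numbers of molecules, so the volume ratios are also mole ratios. One volume of hydrocarbon giving two volumes of CO2 means x = 2; three volumes of steam means y/2 = 3, so y = 6. The molecular formula is therefore C2H6 (ethane), and dividing by the common factor 2 gives the empirical formula CH3.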
Energy: Innovative catalyst fabrication method may yield breakthrough in fuel cell development
Kyushu University research group develops new method for creating highly efficient gold nanoparticle catalysts for fuel cells
The successful future of fuel cells relies on improving the performance of the catalysts they use. Gold nanoparticles have been cited as an ideal solution, but creating a uniform, useful catalyst has proven elusive. However, a team of researchers at Kyushu University's International Institute for Carbon-Neutral Energy Research (I2CNER) devised a method for using a new type of catalyst support.
In a potential breakthrough technology for fuel cells, a recently published article in Scientific Reports shows how wrapping a graphene support in a specially prepared polymer provides an ideal foundation for making uniform, highly active gold nanoparticle catalysts.
Fuel cells produce electricity directly from the separate oxidation of the fuel and the reduction of oxygen. The only by-product of the process is water, as fuel cells produce no greenhouse gases and are widely seen as essential for a clean-energy future.
However, the rate at which electricity can be produced in fuel cells is limited, especially by the oxygen reduction reaction (ORR), which must be catalyzed in practical applications. Although current platinum-based catalysts accelerate the reaction, their unhelpful propensity to also catalyze other reactions, and their sensitivity to poisoning by the reactants, limits their overall utility. Despite bulk gold being chemically inert, gold nanoparticles are surprisingly effective at catalyzing the oxygen reduction reaction without the drawbacks associated with their platinum counterparts.
Nevertheless, actually creating uniformly sized gold nanoparticle catalysts has proven problematic. Previous fabrication methods have produced catalysts with nanoparticle sizes that were too large or too widely distributed for practical use. Meanwhile, efforts to regulate the particle size tended to restrict the gold's activity or make less-stable catalysts.
"Creating small, well-controlled particles meant that we needed to focus on particle nucleation and particle growth," lead and corresponding author Tsuyohiko Fujigaya says. "By wrapping the support in the polybenzimidazole polymer we successfully developed with platinum, we created a much better support environment for the gold nanoparticles."
The team also tested the performance of these novel catalyst structures. Their catalysts had the lowest overpotential ever reported for this type of reaction. "The overpotential is a bit like the size of the spark you need to start a fire," coauthor Naotoshi Nakashima says. "Although we're obviously pleased with the catalysts' uniformity, the performance results show this really could be a leap forward for the ORR reaction and maybe fuel cells as well."
The article "Growth and Deposition of Au Nanoclusters on Polymer-wrapped Graphene and Their Oxygen Reduction Activity" was published in Scientific Reports, at
Although novel in its own right, this recent publication is the latest in a chain of developments that the interdisciplinary teams at I2CNER have been carrying out to develop fuel cells and other clean technologies.
Source: Kyushu University, I2CNER – 08.03.2016.
Investigated and edited by: Dr.-Ing. Christoph Konetschny, owner and founder of Materialsgate, office for materials and technology consulting
For the accuracy, validity, availability and applicability of the given information, we take no liability.
Please discuss the suitability concerning your specific application with the experts of the named company or organization.
You want additional material or technology investigations concerning this subject?Materialsgate is leading in material consulting and material investigation.
Feel free to use our established consulting services
Your weekly MaterialTRENDS for
Engineering & Design
Partner of the Week
Search in MaterialsgateNEWS
Books and products | <urn:uuid:aec8f222-baa6-4116-baf3-140ef02de758> | 3.234375 | 802 | News (Org.) | Science & Tech. | 13.139115 | 95,568,947 |
The University of Michigan isn't the only place that's developing and studying self-driving vehicles in a fake city. Waymo, a self-driving technology company under Google, has created the city of Castle in California. The company has built an infrastructure for fully testing self-driving vehicles, in hopes of sharing this innovation with people on a large scale.
Why "Castle," you might be wondering? Interestingly, Castle was created from an old army training facility from World War II, formerly known as the Castle Air Force Base. The course will be equipped with objects that will make it seem like strolling around a typical town. So for example, there will be traffic cones, standing dummies, plastic plants, and objects like bicycles all over the place. Waymo’s goal is to rigorously test the automation without needing to disrupt the flow of real traffic.
It’s an upgrade from where the process of testing self-driving vehicles used to take place, which was in parking lots. Steph Villegas, one of the team members behind Google’s autonomy program since 2011, set up props in controlled environments. In a feature with The Atlantic, Villegas explained the difficulties of using the Shoreline Amphitheater and wanted to recreate a city.
“We made conscious decisions in designing to make residential streets, expressway-style streets, cul-de-sacs, parking lots, things like that,” Villegas said. “So we’d have a representative concentration of features that we could drive around.”
The fake city recreates many of the issues Waymo has run into with self-driving cars, such as two-lane roundabouts in Texas. Without the disruption of others, researchers are able to repeatedly test the vehicles' reactions and have time to write down their observations.
Another benefit is using Carcraft, a simulation experience for autonomous vehicles to explore other cities like Austin and Phoenix. It again provides difficult situations for the vehicles, but adds virtual people, cars, and bikes on the road. According to Ars Technica, instead of performing tests physically, Waymo’s simulator has the ability to simulate tough situations “thousands of times in a single day.”
Essentially, the autonomous vehicle will be able to get many more miles in than it ever could physically with the virtual world. Waymo can essentially skip over boring stuff and get right into the heart of these obstacles to clear. According to the company, 25,000 cars could be driving in the program, "learning from complex and interesting scenarios."
“That iteration cycle is tremendously important to us and all the work we’ve done on simulation allows us to shrink it dramatically,” Dmitri Dolgov, Vice President of Engineering at Waymo, told The Atlantic. “The cycle that would take us weeks in the early days of the program now is on the order of minutes.”
Castle will be similar to Mcity, another 32-acre fake city used for testing self-driving features in Ann Arbor, Michigan. Recently, Domino's Pizza and the Ford Motor Company have been testing self-driving pizza delivery in a limited area. No, the delivery vehicles won't actually be driving themselves; the companies will be evaluating customer reaction to the idea.
Waymo is well ahead of the curve in Level 4 autonomy. These new testing facilities will provide a better, more efficient way to enter this new technology into the market.
Returns a copy of the text in a String object with all alphabetic characters converted to lowercase.
"String Literal".toLowerCase( )
The toLowerCase method has no effect on nonalphabetic characters.
The following example demonstrates the effects of the toLowerCase method:

var strVariable = "This is a STRING object";
strVariable = strVariable.toLowerCase();

The value of strVariable after the last statement is "this is a string object". Note that toLowerCase returns a new string; because strings are not modified in place, the result must be assigned back to the variable.
© 1997 by Microsoft Corporation. All rights reserved.
hi to everybody...
please tell me why Java takes input in the form of a String??
akhilesh bar wrote:why does Java take input in the form of String??
What input? If the input is text, a String would be the best way to receive it, because a String can hold text.
according to my knowledge... java takes inputs as a String and then converts them into the desired type... for example public static void main(String[] args) !!!
You're referring to command-line arguments. That's because Java uses a String to hold text.
"Input" is very imprecise. Could you specify what do mean exactly.
because anything and everything you type from a keyboard CAN be represented as a String...but the same isn't really true for other variable types.
That's what I've always thought.
Please be specific with your subject...thanks in advance.
If you are passing arguments from the console then they have to be accepted as text (in this case, String format).
Even if you type in a number, you have to parse the String to the desired format.
UC Irvine-led research reveals two young galaxies that collided 11 billion years ago are rapidly forming a massive galaxy about 10 times the size of the Milky Way.
The revelation, published Wednesday in the journal Nature, is being likened to discovering a missing link between winged dinosaurs and early birds.
Julie Wardlow, a UC Irvine postdoctoral scholar, initially spotted the galaxy, noticing “an amazing, bright blob” in the “cold cosmos,” or areas where gas and dust join together to form stars, reports Phys.org. She saw these in images recorded by the European Space Agency's Herschel telescope, with important contributions from the Jet Propulsion Laboratory in Pasadena, then followed up with views from more than a dozen ground-based observatories, particularly the W.M. Keck Observatory in Hawaii.
The new mega-galaxy, dubbed HXMM01, “is the brightest, most luminous and most gas-rich submillimeter-bright galaxy merger known,” the authors write in the catchily titled “The Rapid Assembly of an Elliptical Galaxy of 400 Billion Solar Masses at a Redshift of 2.3.”
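The "11 billion years ago" in the opening line and the "redshift of 2.3" in the paper's title are the same statement in different units. A quick cross-check with a standard cosmology library (a sketch assuming a recent version of astropy; the Planck18 parameter set is this sketch's choice, not necessarily the one the authors used):

```python
from astropy.cosmology import Planck18

z = 2.3  # redshift quoted in the paper's title
print(Planck18.lookback_time(z))  # ~10.9 Gyr: the merger's light left ~11 billion years ago
print(Planck18.age(z))            # the universe was only ~2.9 Gyr old at that epoch
```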
The discovery is changing the way astronomers view the development of galaxies. Giant elliptical galaxies were born quickly in the early universe but stopped producing stars soon after. Some had theorized that giant black holes at the hearts of galaxies blew strong winds that expelled the gas. But cosmologist Asantha Cooray, the UCI team's leader, says definitive proof has now been found, and confirmed by colleagues around the world, that the quick burnouts were caused by galaxies merging and quickly and efficiently consuming their gas to form stars.
“These galaxies entered a feeding frenzy that would quickly exhaust the food supply in the following hundreds of million years and lead to the new galaxy's slow starvation for the rest of its life,” lead author Hai Fu, a UCI postdoctoral scholar, tells Phys.org.
“Finding this type of galaxy is as important as the discovery of the archaeopteryx was in understanding dinosaurs' evolution into birds because they were both caught at a critical transitional phase.”
Matt Coker has been engaging, enraging and entertaining readers of newspapers, magazines and websites for decades. He spent the first 13 years of his career in journalism at daily newspapers before “graduating” to OC Weekly in 1995 as the paper’s first calendar editor. He went on to be managing editor, executive editor and is now senior staff writer. | <urn:uuid:e7c6df19-ad13-4b05-879a-f8acc06a6c6a> | 3.1875 | 521 | News Article | Science & Tech. | 38.162862 | 95,568,973 |
* Explains how to use nuclear process heat for industrial applications, especially process heat for hydrogen production
* Illuminates the issue of waste heat in nuclear plants, offering a vision for how it can be used in combined-cycle plants
* Undertakes the thermal analysis of intermediate heat exchangers throughout the life cycle, from the design perspective through operational and safety assurance stages
This book describes recent technological developments in next generation nuclear reactors that have created renewed interest in nuclear process heat for industrial applications. The author's discussion mirrors the industry's emerging focus on combined-cycle Next Generation Nuclear Plants' (NGNP) seemingly natural fit in producing electricity and process heat for hydrogen production. To utilize this process heat, engineers must identify a thermal device that can transfer the thermal energy from the NGNP to the hydrogen plant in the most performance-efficient and cost-effective way possible. This book is written around that vital quest, and the author describes the usefulness of the Intermediate Heat Exchanger (IHX) as a possible solution. The option to transfer heat and thermal energy via a single-phase forced-convection loop, where fluid is mechanically pumped between the heat exchangers at the nuclear and hydrogen plants, is presented, and challenges associated with this tactic are discussed. As a second option, heat pipes and thermosyphons, with their ability to transport very large quantities of heat over relatively long distances with small temperature losses, are also examined.
This book combines several critical issues in the energy field (low-energy technologies, renewable energies such as the hydrogen economy, and geothermal energy).
Moving towards a more sustainable world requires a complete revolution in the way we manage energy and resources. However, from an academic perspective, this theme is so broad that most educators and researchers tend to focus on just one aspect, and maintaining the broad viewpoint which is necessary for making strategic judgments becomes difficult. Tohoku University addressed this challenge when developing a new education and training program for environmental leaders and brought together the extensive range of expertise available in specific fields into one special course which forms the basis of this book. Now in one volume, both students and educators can be brought up to date on a wide range of critical issues currently being addressed in the field of energy and resources. Chapters on resources include availability (for instance, rare earth metals), extraction and recycling of metals and plastics, and technological solutions to specific waste-disposal problems. In addition, broader strategic issues such as limits to growth and the interaction between the economic system and environmental issues are addressed. Even though each chapter provides topical data and knowledge from disparate and specialized fields, the book is written at a level that is readily understandable by students from all scientific, engineering, and humanities fields.
Energy is crucial to the functioning of any human society and central to understanding East Asia's 'economic miracle'. The region's rapid development over the last few decades has been inherently energy-intensive and the impact on global energy security, climate change and the twenty-first-century global system generally is now very significant and will become more so over foreseeable years and decades to come. The region is already the world's largest energy consumer and greenhouse gas emitter, so establishing cleaner energy systems in East Asia is both a regional and global challenge, and renewable energy has a critically important part to play in meeting it.
This book presents a comprehensive study of renewable energy development in East Asia. It begins by examining renewable energy development in global and historic contexts, and situates East Asia's position in the recent worldwide expansion of renewables. This same approach is applied on sector-specific chapter studies on wind, solar, hydropower, geothermal, ocean (wave and tidal) and bioenergy, and to general trends in renewable energy policy. Governments play a critical role in promoting renewables and their contribution to tackling climate change and other environmental challenges. Christopher M. Dent argues this is particularly relevant to East Asia, where state capacity practice has been increasingly allied to ecological modernisation thinking to form what he calls 'new developmentalism', the principal foundation on which renewables have developed in the region as well as how East Asia's low carbon development is being generally promoted.
Renewable Energy in East Asia will be of huge interest to students and scholars of Asian studies, economics, political economy, energy studies, business, development, international relations and environmental studies. It will also appeal to researchers working on the subject matter in government, business, international organisations, think tanks and civil society organisations.
These lecture notes provide a detailed treatment of thermal energy storage and transport by conduction in natural and fabricated structures. Thermal energy in its two carriers, phonons and electrons, is explored from first principles. For solid-state transport, a common Landauer framework is used for heat flow. Issues including the quantum of thermal conductance, ballistic interface resistance, and carrier scattering are elucidated. Bulk material properties, such as thermal and electrical conductivity, are derived from particle transport theories, and the effects of spatial confinement on these properties are established.
This book focuses on the fundamental principles and latest research findings in hydrogen energy fields including: hydrogen production, hydrogen storage, fuel cells, hydrogen safety, economics, and the impact on society. Further, the book introduces the latest development trends in practical applications, especially in commercial household fuel cells and commercial fuel cell vehicles in Japan. This book not only helps readers to further their basic knowledge, but also presents the state of the art of hydrogen-energy-related research and development. This work serves as an excellent reference for beginners such as graduate students, as well as a handbook and systematic summary of entire hydrogen-energy systems for scientists and engineers.
| <urn:uuid:e56c8d36-5d75-4fbd-bd6d-8c651c11667e> | 2.90625 | 1,147 | Content Listing | Science & Tech. | 9.42378 | 95,568,980 |
A View from Emerging Technology from the arXiv
How To Build an Intelligent Blob That Shrinks When it Detects Viruses
The way DNA strands contract when they come into contact with viruses could lead to cheap and simple pathogen detectors, say physicists.
Here’s an interesting idea. The threat from viral pathogens such as bird flu, hepatitis B and HIV, represents a clear and present danger. So cheap and simple tools for detecting these viruses are much needed, particularly in the developing world where the threat is acute but money scarce.
Step forward Jaeoh Shin and pals at the University of Potsdam in Germany who say that it is possible to create just such a virus detector using little more than a few strands of DNA mixed into a lump of hydrogel. This ‘intelligent’ blob would shrink when the virus in question was around giving a clear visible signal that precautions need to be taken.
Here’s their thinking. Biologists have long known that viruses bind to sections of DNA and this causes the double helix to unwind into two single strands, or ‘melt’ as biologists call it. The single strands can then become adsorbed onto the surface of the virus, and this shortens their total length. In fact, biologists have shown that melting-induced contraction can reduce the length of the strand by up to 90 percent.
So Shin and co’s idea is to stretch out the DNA strands in parallel, embed them in hydrogel and then wait. When virus particles turn up, they bond with the DNA, causing it to melt and contract, and causing the hydrogel to shrink as well. “Viral particles in the hydrogel-DNA system…effect a macroscopic contraction of the hydrogel matrix,” they say.
To test the idea, these guys created a molecular simulation of the way a virus bonds to DNA and the consequent melting and contraction. The results certainly seem promising. “Viral particles in the hydrogel-DNA system destabilize [double strand] DNA and effect a macroscopic contraction,” they say.
One important question is how virus-specific the DNA can be made so that it responds only to HIV or avian flu or to some other specific virus. Shin and co say that the viruses bond preferentially to binding proteins and these can be linked to DNA. So with some simple biochemical fiddling, they should be able to make DNA that bonds only to specific viruses.
That certainly sounds possible but these guys need to be sure that the contraction signal is triggered uniquely by the target virus and nothing else. In other words, the false positive rate will have to be carefully studied and controlled. It’s an interesting idea that deserves more, careful study.
It also has some significant competition. There is no shortage of ideas for detecting viruses. The big advantage of this one is that it would be cheap enough to distribute widely, even in the developing world.
And therein lies the next challenge. Having developed the theory behind these detectors and simulated their behavior, Shin and co need to build one to show that it works. And not just in the lab but in all the extreme conditions of heat, humidity and dirt that doctors all over the world regularly face.
There’s no question that it ought to be possible to embed strands of DNA in hydrogel to make intelligent blobs that are cheap and simple. But until Shin and co prove that the blobs work as they expect, this will remain merely a good idea rather than the potentially life-changing product that these guys clearly imagine it could be.
They’ve got significant work ahead. We’ll be watching to see how they fare.
Ref: arxiv.org/abs/1310.5531: Sensing Viruses By Mechanical Tension of DNA in Responsive Hydrogels
| <urn:uuid:7362996d-1f38-4385-9f5b-b40d035e2aac> | 3.734375 | 835 | Truncated | Science & Tech. | 53.104091 | 95,568,987 |
Washington: Scientists have revealed that a new plant species is providing an insight into how evolution works and could help improve crop plants.
The new plant species, Tragopogon miscellus, appeared in the United States 80 years ago. It came about when two species in the daisy family, introduced from Europe, mated to produce a hybrid offspring.
The species had mated before in Europe, but the hybrids were never successful. However in America something new happened. The number of chromosomes in the hybrid spontaneously doubled, and at once it became larger than its parents and quickly spread.
Scientists studied the Tragopogon miscellus to understand how evolution works.
They found that the new plant species had relaxed control of gene expression in its earliest generations. But today, after 80 years of evolution, different patterns of gene expression are found in every plant.
"We caught evolution in the act," said Doug Soltis, co-leader of the research team. New and diverse patterns of gene expression may allow the new species to rapidly adapt in new environments.
Crossing different plant species to produce hybrids is a process used in farming to produce greater yields and stronger plants. Studying how this works in nature can give us new ideas to apply to agriculture.
The work was carried out at the University of Florida and Iowa State University and involved scientists from Queen Mary, University of London, Massey University in New Zealand, and Shanxi Normal University in China.
The study was published in the journal Current Biology. | <urn:uuid:004a4bce-13f4-465a-a613-19a44af2afae> | 3.921875 | 310 | News Article | Science & Tech. | 37.840308 | 95,568,996 |
The results of experiments that were recently conducted by the Mars Science Laboratory Rover “Curiosity” appear to be withheld for reasons that are best known only to NASA. Such results may be crucial for backing up evidence obtained by Gil Levin and Patricia Straat in the 1976 Viking Mission that would prove beyond a shadow of doubt that microbial life currently exists on Mars. Gil Levin has approached NASA with a request to release the relevant data under the Freedom of Information Act. The outcome of this move is awaited with eager anticipation. Chandra Wickramasinghe, Director of the Buckingham Centre for Astrobiology, said “It is high time such data as is available is released and is freely accessible, and Gil Levin’s ground-breaking discovery of life on Mars finally accepted and acknowledged.” Read the press release, with the full text of Professor Levin’s letter to NASA.
| <urn:uuid:5916be07-5017-43cb-8d91-78f6fa9e17e1> | 2.546875 | 190 | News (Org.) | Science & Tech. | 36.774172 | 95,569,000 |
The VERITAS experiment measured gamma rays coming from the Crab Pulsar at such large energies that they cannot be explained by current scientific models of how pulsars behave, the researchers said.
The results, published today in the journal Science, outline the first observation of photons from a pulsar system with energies greater than 100 billion electron volts -- more than 50 billion times higher than visible light from the sun.
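As a rough sanity check on that comparison (the 2 eV figure is my assumption for a typical visible photon, not a number from the paper):
$$\frac{10^{11}\ \mathrm{eV}}{2\ \mathrm{eV}} = 5\times 10^{10},$$
i.e., on the order of 50 billion times the energy of a visible-light photon.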
"This is the highest energy pulsar system ever detected," said Rene Ong, a UCLA professor of physics and astronomy and spokesperson for the VERITAS collaboration. "It is a completely new and surprising phenomenon for pulsars."
Data were acquired for 107 hours over the course of three years by VERITAS's ground-based gamma ray observatory, which is part of southern Arizona's Whipple Observatory, a facility managed by the Harvard–Smithsonian Center for Astrophysics. VERITAS (Very Energetic Radiation Imaging Telescope Array System) observes gamma rays using a network of four telescopes, each 12 meters in diameter.
Ong noted that all previous observations of pulsars indicated that the radiation cuts off at the high energies the team observed.
"It means the radiation we detect must be a new component that was completely unexpected," he said.
Gamma rays, the most energetic type of electromagnetic radiation, cannot be directed by lenses or bounced off mirrors like ordinary visible light, Ong said. Because the rays are invisible to the human eye, the only way telescopes on Earth can detect them is by observing the path they take as they are absorbed in the planet's atmosphere.
Gamma rays are ejected from the Crab Pulsar, and they smash into Earth's atmosphere, causing "the electromagnetic equivalent of a sonic boom," Ong said. This collision creates a shower of visible light more than 6 miles above the ground that is recorded by VERITAS.
"The atmosphere is an integral part of our measurement system, which makes VERITAS different from conventional telescopes," Ong said.
One of the most widely studied astronomical objects in the northern hemisphere, the Crab Nebula, which is some 6,500 light-years from Earth, was formed when a massive star exploded in a supernova event that was observed on Earth in the year 1054. While it is most typical for pulsars to be ejected from the stellar wreckage during a supernova, in the case of the Crab system, the pulsar remained at its center, producing radiation that covers the entire electromagnetic spectrum, Ong said.
He calls the Crab system the "Rosetta Stone of astronomy," because astronomers and astrophysicists have observed this object at every conceivable wavelength of light.
"The Crab Pulsar is considered among the best understood systems in all of astronomy, yet here we have found something totally new," he said. "It is astronomy in a completely new light; we are seeing phenomena that you just can't explore with optical light or X-rays, or even low-energy gamma rays."
The Crab Pulsar is a highly magnetized neutron star with a surface magnetic field a trillion times stronger than that of the Earth. The star spins at the dizzying rate of about 30 times a second and emits gamma rays through "curvature radiation," an effect that creates a lighthouse-like beacon that winks on when the beam faces the Earth and off when the star pivots away.
Light detected by the VERITAS experiment cannot be explained by curvature radiation, however, and likely comes from regions well outside the high–magnetic field region close to the neutron star, Ong said. While such energetic gamma rays have been observed elsewhere in the galaxy, the actual mechanism of how they are created in a pulsar is not fully understood.
"The pulse duration of the radiation we see is almost three times shorter than that seen at other gamma ray energies," he said. "This was very surprising and means this new radiation is probably coming from a different physical region of the star's outer magnetosphere."
The VERITAS experiment looks for radiation emanating from celestial objects such as pulsars, active galaxies, the center of the Milky Way and supermassive black holes. It has collected data for nearly 1,000 hours every year since it began operating in 2007.
"We are trying to understand processes out in the cosmos that can create particles at these extreme energies, beyond what can be produced here on Earth," Ong said. "We are also very interested in seeing if these processes indicate some sort of new physics."
Ong hopes his research may shed some light on the mystery of cosmic rays.
"We are bombarded by high-energy particles from all over the cosmos that reach unimaginable energies," he said. "These cosmic rays are an important energy source in our galaxy, yet we have no clue where they are coming from.
"This measurement indirectly gives us clues to the highest energies in the cosmos, telling us about particles and energies that we can't generate here on Earth but that nature's accelerators are able to create for us."
Ong is currently helping to plan the next-generation ground-based gamma ray observatory, called the Cherenkov Telescope Array (CTA). Covering more than one-half square mile with dozens of telescopes, the CTA will be 10 times more sensitive than VERITAS, allowing radiation from fainter and more distant objects to be accurately resolved.
The 95 co-authors of the Science paper on the Crab Pulsar include scientists from 26 institutions in five countries who are part of the VERITAS collaboration. UCLA co-authors include Vladimir Vassiliev, an associate professor of physics and astronomy; Pratik Majumdar, a postdoctoral scholar in physics and astronomy; and Timothy Arlen, a graduate student.
This research is supported by the U.S. Department of Energy, the U.S. National Science Foundation, the Smithsonian Institution, the National Sciences and Engineering Research Council of Canada, the U.K.'s Science and Technology Facilities Council, and the Science Foundation Ireland.
UCLA is California's largest university, with an enrollment of nearly 38,000 undergraduate and graduate students. The UCLA College of Letters and Science and the university's 11 professional schools feature renowned faculty and offer 337 degree programs and majors. UCLA is a national and international leader in the breadth and quality of its academic, research, health care, cultural, continuing education and athletic programs. Six alumni and five faculty have been awarded the Nobel Prize.
Stuart Wolpert | EurekAlert!
| <urn:uuid:dd636cee-b5ab-483a-8f67-a4fe132925d5> | 3.765625 | 1,983 | Content Listing | Science & Tech. | 38.358861 | 95,569,001 |
The use of acoustics to study atmospheric properties is well-established. A review of atmospheric effects on acoustic signals can be found in Thomson (1975) and the history of acoustic sounder development is given by Gaynor (1982). However, the use of Doppler acoustic sounders in air pollution applications, as pointed out by Gaynor (1982), has gained acceptance only very recently. Doppler acoustic sounders with proven reliability have become commercially available only in the past decade and, even now, only a limited number of units are routinely being used to collect wind data.
- Doppler Acoustic Sounding: Application to Dispersion Modeling
- Springer US
| <urn:uuid:edb2512f-91e0-49c6-acf2-e49d43846694> | 2.75 | 186 | Truncated | Science & Tech. | 5.777019 | 95,569,007 |
Biotic interactions between organisms of different trophic levels often occur in highly structured and complex environments. Human land use has a profound impact on the structure of the environment where species interactions take place. Here I examine the effects of environmental complexity and its modification by land use on plant-herbivore and host-parasitoid systems on different spatial scales and for different model systems.
On small spatial scales plant architecture, vegetation structure and plant odour diversity can influence for example reproductive strategies of insect herbivores and host finding success of their natural enemies. On larger spatial scales habitat structure and landscape structure can strongly affect trophic interactions as well as species abundance and diversity. Current management strategies in nature conservation have a potential to locally restore environmental complexity and maintain species diversity.
| <urn:uuid:a41d2161-df7f-4419-b67f-cbcba644c1d4> | 2.8125 | 171 | Academic Writing | Science & Tech. | -1.545196 | 95,569,013 |
Technology is the collection of techniques, methods or processes used in the production of goods or services, or in the accomplishment of objectives such as scientific investigation and other consumer demands. The subject is relevant to students pursuing careers in law, public policy or management, as well as to scientists, engineers and others interested in science, technology and society. Movie making, for instance, combines reading, writing, theatre arts and technology into one dynamic and exciting project. The relationship between technology and society is best understood through Science and Technology Studies (STS), because people need to recognize that there are people who are affected by science and technology. Ebooks are another way to combine writing and technology into an interactive and dynamic product. Infrastructure in society has also grown with the help of science and technology.
The invention of computers has assisted the process of calculation in laboratories, but science has also been responsible for pollution and has given us the nuclear bomb, which threatens our very existence, as happened in Nagasaki and Hiroshima in World War 2. Science itself is not at fault for all of that; what matters is the intention of the people who put science to destructive use.
Science provides the basis of much of modern technology: the tools, materials, techniques and sources of power that make our lives and work easier. The work of the NSTC is organized under five primary committees: Environment, Natural Resources and Sustainability; Homeland and National Security; Science, Technology, Engineering, and Math (STEM) Education; Science; and Technology. The scientific approach to research is responsible for the development of technology. Technology these days must also be viewed in terms of the changes brought into existing communication systems through the computer. Science and technology have largely shaped humanity's vision of itself.
Many people around the world, for example scholars in colleges and universities, have taken the lead in examining the relationship between science and technology; one such program offers two undergraduate majors, Biology & Society and Science & Technology Studies. Meanwhile, the scientist continues his so-called study and research, which further leads him on to the destructive capability of science. | <urn:uuid:e8483bd5-a5e8-4eab-a27a-162bd29e0a4f> | 2.984375 | 430 | Knowledge Article | Science & Tech. | 25.490844 | 95,569,029 |
3D Solar Design Is 20 Times More Powerful Than Traditional Panels
Categories: Solar Power
Innovative 3-D designs from an MIT team can more than double the solar power generated from a given area. These new 3D solar panels are designed to capture the sun's rays even as it moves lower on the horizon.
Developed by MIT researchers, this new solar design provides four to twenty times the power output of the typical flat-panel model.
After using computer algorithms to test hypothetical weather, altitude, and seasonal differences, the research team tested three different designs on the roof of the MIT lab. Read the peer-reviewed study here.
The results were astounding! With a 3D design, the solar panels were able to provide a more consistent output of energy over time, despite varying cloud patterns.
Intensive research around the world has focused on improving the performance of solar photovoltaic cells and bringing down their cost. But very little attention has been paid to the best ways of arranging those cells, which are typically placed flat on a rooftop or other surface, or sometimes attached to motorized structures that keep the cells pointed toward the sun as it crosses the sky.
Now, a team of MIT researchers has come up with a very different approach: building cubes or towers that extend the solar cells upward in three-dimensional configurations. Amazingly, the results from the structures they’ve tested show power output ranging from double to more than 20 times that of fixed flat panels with the same base area.
The biggest boosts in power were seen in the situations where improvements are most needed: in locations far from the equator, in winter months and on cloudier days. The new findings, based on both computer modeling and outdoor testing of real modules, have been published in the journal Energy and Environmental Science.
“I think this concept could become an important part of the future of photovoltaics,” says the paper’s senior author, Jeffrey Grossman, the Carl Richard Soderberg Career Development Associate Professor of Power Engineering at MIT.
The MIT team initially used a computer algorithm to explore an enormous variety of possible configurations, and developed analytic software that can test any given configuration under a whole range of latitudes, seasons and weather. Then, to confirm their model’s predictions, they built and tested three different arrangements of solar cells on the roof of an MIT laboratory building for several weeks.
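A toy model of the underlying geometry (my own illustration, not the MIT team's code; it flatters the vertical panel by assuming its face always sees the sun's azimuth head-on): each surface collects roughly in proportion to the cosine of the angle between the sunlight and the surface normal, so vertical faces dominate when the sun stays low.

// Compare relative energy collected over one clear day by a horizontal
// panel and a sun-facing vertical panel, for different maximum solar
// elevations (low in winter or at high latitudes, high in summer).
public class PanelToy {
    static double[] dayTotals(double maxElevDeg) {
        int steps = 1000;
        double flat = 0, vertical = 0;
        for (int i = 0; i < steps; i++) {
            double t = (double) i / (steps - 1);               // fraction of daylight
            // Toy sun path: elevation rises to maxElevDeg at noon, back to 0.
            double elev = Math.toRadians(maxElevDeg) * Math.sin(Math.PI * t);
            flat     += Math.sin(elev) / steps;                // horizontal panel
            vertical += Math.cos(elev) / steps;                // vertical, sun-facing face
        }
        return new double[] { flat, vertical };
    }

    public static void main(String[] args) {
        for (double maxElev : new double[] { 20, 45, 70 }) {
            double[] r = dayTotals(maxElev);
            System.out.printf("max elevation %.0f deg: flat %.2f, vertical %.2f%n",
                    maxElev, r[0], r[1]);
        }
    }
}

With a 20-degree maximum elevation, the vertical face collects several times what the flat panel does, which is the regime (winter, high latitudes, mornings and evenings) where the team reports the biggest gains.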
While the cost of a given amount of energy generated by such 3-D modules exceeds that of ordinary flat panels, the expense is partially balanced by a much higher energy output for a given footprint, as well as much more uniform power output over the course of a day, over the seasons of the year, and in the face of blockage from clouds or shadows. These improvements make power output more predictable and uniform, which could make integration with the power grid easier than with conventional systems, the authors say.
The basic physical reason for the improvement in power output — and for the more uniform output over time — is that the 3-D structures’ vertical surfaces can collect much more sunlight during mornings, evenings and winters, when the sun is closer to the horizon, says co-author Marco Bernardi, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE).
The time is ripe for such an innovation, Grossman adds, because solar cells have become less expensive than accompanying support structures, wiring and installation. As the cost of the cells themselves continues to decline more quickly than these other costs, they say, the advantages of 3-D systems will grow accordingly.
“Even 10 years ago, this idea wouldn’t have been economically justified because the modules cost so much,” Grossman says. But now, he adds, “the cost for silicon cells is a fraction of the total cost, a trend that will continue downward in the near future.” Currently, up to 65 percent of the cost of photovoltaic (PV) energy is associated with installation, permission for use of land and other components besides the cells themselves.
Although computer modeling by Grossman and his colleagues showed that the biggest advantage would come from complex shapes — such as a cube where each face is dimpled inward — these would be difficult to manufacture, says co-author Nicola Ferralis, a research scientist in DMSE. The algorithms can also be used to optimize and simplify shapes with little loss of energy. It turns out the difference in power output between such optimized shapes and a simpler cube is only about 10 to 15 percent — a difference that is dwarfed by the greatly improved performance of 3-D shapes in general, he says. The team analyzed both simpler cubic and more complex accordion-like shapes in their rooftop experimental tests.
At first, the researchers were distressed when almost two weeks went by without a clear, sunny day for their tests. But then, looking at the data, they realized they had learned important lessons from the cloudy days, which showed a huge improvement in power output over conventional flat panels.
For an accordion-like tower — the tallest structure the team tested — the idea was to simulate a tower that “you could ship flat, and then could unfold at the site,” Grossman says. Such a tower could be installed in a parking lot to provide a charging station for electric vehicles, he says.
So far, the team has modeled individual 3-D modules. A next step is to study a collection of such towers, accounting for the shadows that one tower would cast on others at different times of day. In general, 3-D shapes could have a big advantage in any location where space is limited, such as flat-rooftop installations or in urban environments, they say. Such shapes could also be used in larger-scale applications, such as solar farms, once shading effects between towers are carefully minimized.
A few other efforts — including even a middle-school science-fair project last year — have attempted 3-D arrangements of solar cells. But, Grossman says, “our study is different in nature, since it is the first to approach the problem with a systematic and predictive analysis.”
David Gracias, an associate professor of chemical and biomolecular engineering at Johns Hopkins University who was not involved in this research, says that Grossman and his team “have demonstrated theoretical and proof-of-concept evidence that 3-D photovoltaic elements could provide significant benefits in terms of capturing light at different angles. The challenge, however, is to mass produce these elements in a cost-effective manner.” | <urn:uuid:dd95256f-476c-4dbd-bd96-3793dd5fa7cc> | 3.75 | 1,355 | News Article | Science & Tech. | 32.656348 | 95,569,054 |
Schrödinger Wave Equation and Wave Function
The general one-dimensional Schrödinger wave equation is expressed as
$$j\hbar\,\frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}} + V(x)\,\psi(x,t), \qquad \hbar = \frac{h}{2\pi},$$
where ψ(x, t) is the wave function, V(x) is the potential function (assumed to be independent of time), m is the mass of the particle and j is the imaginary number √(-1). The wave function ψ(x, t) is used to describe the behavior of the system mathematically; it can be a complex quantity, and it can be rewritten as the product
$$\psi(x,t) = \psi(x)\,\phi(t),$$
where ψ(x) is a function of the position x only and φ(t) is a function of the time t only. Substituting this product into the wave equation and dividing through by ψ(x)φ(t), the general form of the Schrödinger wave equation can be rewritten as
$$-\frac{\hbar^{2}}{2m}\,\frac{1}{\psi(x)}\,\frac{d^{2}\psi(x)}{dx^{2}} + V(x) = j\hbar\,\frac{1}{\phi(t)}\,\frac{d\phi(t)}{dt}.$$
Now the left-hand side of the equation depends only upon the position x and the right-hand side depends only upon the time t. Since the two sides are equal to each other, each side must be equal to a constant quantity, say η.
Hence, for the time-dependent side, the equation is
$$j\hbar\,\frac{1}{\phi(t)}\,\frac{d\phi(t)}{dt} = \eta,$$
whose solution,
$$\phi(t) = e^{-j(\eta/\hbar)t} = e^{-j\omega t},$$
is similar to the classical exponential form of a sinusoidal wave, where η/(h/2π) = 2πη/h = ω, the angular frequency of the sine wave. Now, as per quantum mechanics, E = ℏω, so the separation constant η is the total energy E; hence the position-dependent side becomes
$$-\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi(x)}{dx^{2}} + V(x)\,\psi(x) = E\,\psi(x).$$
This is the time-independent form of the Schrödinger wave equation.
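As a quick worked example (my illustration, not part of the original derivation): for a particle in an infinite potential well of width L, with V(x) = 0 inside the well and ψ required to vanish at the walls, the time-independent equation gives
$$\psi_{n}(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \qquad E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}}, \qquad n = 1, 2, 3, \ldots,$$
so confinement alone forces the allowed energies to be discrete.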
Alternatively Establishing the Time-Independent Schrödinger Wave Equation
Now we will try to establish the time-independent form of the Schrödinger wave equation directly, and for that let us consider the classical equation for the spatial part of a wave of wavelength λ,
$$\frac{d^{2}\psi(x)}{dx^{2}} + \left(\frac{2\pi}{\lambda}\right)^{2}\psi(x) = 0.$$
Now the wavelength λ and the momentum p of the wave are related to each other by the de Broglie relation,
$$\lambda = \frac{h}{p}.$$
Putting this value of λ in the above second-order differential equation, we get
$$\frac{d^{2}\psi(x)}{dx^{2}} + \frac{p^{2}}{\hbar^{2}}\,\psi(x) = 0.$$
The total energy of the electron is the sum of its kinetic and potential energies, E = p^2/(2m) + V(x); therefore p^2 = 2m(E - V(x)), and substituting gives
$$\frac{d^{2}\psi(x)}{dx^{2}} + \frac{2m}{\hbar^{2}}\bigl(E - V(x)\bigr)\psi(x) = 0,$$
which is the same time-independent equation as before.
Significance of the Wave Function
We have already seen that the time- and position-dependent wave function can be written as ψ(x,t) = ψ(x)φ(t), and, as proved above, the energy of the electron is E = η = ℏω, so
$$\psi(x,t) = \psi(x)\,e^{-j\omega t} = \psi(x)\,e^{-j(E/\hbar)t}.$$
In 1926, Max Born postulated that if the wave function of a particle is ψ(x, t), then the probability of finding that particle between x and x + dx is
$$P\,dx = |\psi(x,t)|^{2}\,dx.$$
From basic complex mathematics we know that |ψ(x,t)|² = ψ(x,t)ψ*(x,t); therefore |ψ(x,t)|² can be written as
$$|\psi(x,t)|^{2} = \psi(x)\,e^{-j\omega t}\;\psi^{*}(x)\,e^{+j\omega t} = \psi(x)\,\psi^{*}(x) = |\psi(x)|^{2}.$$
Hence, it is proved that the probability density function of a particle is independent of time, and for finding the position of electrons in a crystal we need only concern ourselves with the time-independent wave function. It is needless to say that the probability of finding a particle anywhere in the universe is one; that is, it must exist somewhere between the positions -∞ and +∞. This convention is represented mathematically in quantum mechanics by the normalization condition on the wave function, $\int_{-\infty}^{+\infty} |\psi(x)|^{2}\,dx = 1$. | <urn:uuid:03001902-aaf5-448a-9383-a83dd4528e04> | 3.359375 | 624 | Academic Writing | Science & Tech. | 29.93583 | 95,569,056 |
Effective Focal Length.
For a two-lens system (such as that in many ERSs):
EFL=Effective Focal Length,
f1=Focal Length of Lens1,
f2=Focal Length of Lens2,
d=Distance between Lenses.
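Combining these in the standard two-lens form (algebraically equivalent to Gullstrand's Equation, stated in terms of focal lengths rather than powers):
EFL = (f1 × f2) / (f1 + f2 - d).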
In fixed-beam ERS fixtures, d is a constant, but is a variable in zoom ellipsoidals.
For more, see the entry Gullstrand's Equation.
| <urn:uuid:b45c8a2f-9d70-42f7-bd3e-a35e9484d2ad> | 2.71875 | 116 | Knowledge Article | Science & Tech. | 73.898462 | 95,569,080 |
By combining filaments of graphite into a fiber whose sheets had extraordinary hardness and rigidity, carbon fiber was manufactured for the first time in history.
Developed by the engineers of the Royal Aircraft Establishment, in Farnborough, in 1963, this material has revolutionized various industries. However, carbon fiber is expensive, something that could change with this new material inspired by mother-of-pearl.
Inspiration in mother-of-pearl
The finding has been made by researchers at the Beihang and Texas universities in Dallas, who have developed carbon sheets of high strength and superhardness that can be manufactured inexpensively at low temperatures (the current carbon fiber compounds are expensive in part because carbon fibers are produced at extremely high temperatures).
The team made the sheets by chemically joining platelets of graphitic carbon, a material similar to the graphite found in the soft lead of an ordinary pencil.
In addition, the mechanical properties of this new material surpass those of the carbon fiber composites currently used in various commercial products. It should be remembered that, in its day, carbon fiber seemed so miraculous that some science fiction fans even imagined building an orbital elevator with it.
According to Ray Baughman, holder of the Robert A. Welch Distinguished Chair in Chemistry at UT Dallas and director of the Alan G. MacDiarmid NanoTech Institute:
These sheets could eventually replace the expensive carbon fiber compounds that are used for everything from airplanes and car bodies to windmill blades and sports equipment.
The researchers found inspiration in natural nacre, which gives strength and resistance to some seashells.
Instead of mechanically stacking large-area graphene sheets, we oxidize micrometer-sized graphite plates so that they can be dispersed in water and then filter this dispersion to make inexpensive graphene oxide sheets. This process is similar to sheets of paper handmade by filtering a suspension of fibers. At this stage, the sheets are not strong or resistant, which means that they cannot absorb much energy before they break. The trick we use is to join the platelets in these sheets using sequentially infiltrated bridging agents that interconnect neighboring superimposed platelets and convert the oxidized graphene oxide into graphene. The key to this breakthrough is that our bridging agents act separately through the formation of covalent chemical bonds and Van der Waals bonds. | <urn:uuid:95fce9f5-eecf-45ac-9104-4046fce545bd> | 3.75 | 481 | News Article | Science & Tech. | 24.631006 | 95,569,084 |
posted by Carla
A bird wants to reach its nest, which is located 53 meters away (35 degrees south of west) from where it is perched. If the maximum speed the bird can fly with respect to the air is 12 m/s, and there is a wind blowing from a direction of east 35 degrees south at a speed of 8.1 meters per second, determine the heading the bird must maintain and the speed with respect to the ground at which it will fly. How long will it take the bird to reach its nest?
pVg= ? [35 degrees S of W]
pVa= 12 m/s [?]
aVg= 8.1 m/s [W 35 degrees N]
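One way to set it up (my worked sketch, taking east as +x and north as +y, and reading the wind from the line above as blowing toward W 35° N): the velocities add as pVg = pVa + aVg. The required track direction is (-cos 35°, -sin 35°) and the wind is aVg = 8.1(-cos 35°, +sin 35°). Writing pVg = v(-cos 35°, -sin 35°) and requiring |pVa| = 12 m/s gives
$$\cos^{2}35^{\circ}\,(8.1 - v)^{2} + \sin^{2}35^{\circ}\,(8.1 + v)^{2} = 12^{2},$$
which reduces to v² - 5.54v - 78.4 = 0, so v ≈ 12.0 m/s with respect to the ground. Then pVa = pVg - aVg ≈ (-3.2, -11.6) m/s, a heading of about 16° west of due south (roughly S 16° W), and the flight time is t = 53 m / 12.0 m/s ≈ 4.4 s. Check the arithmetic before relying on it.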
I'd be glad to check your work. | <urn:uuid:d80acca0-1376-49a4-849a-a71c51c55491> | 3.15625 | 158 | Q&A Forum | Science & Tech. | 103.190351 | 95,569,085 |
All authors of the article are members of the Resilience Alliance Young Scientists (RAYS) network, and a number of centre researchers have been involved, including lead authors Oonsie Biggs and Maja Schlüter, as well as Tim Daw and Lisen Schultz.
The article identifies what it takes to strengthen the resilience of ecosystem services that underpin human well-being — that is, to maintain the capacity of social-ecological systems to continue delivering a desired set of ecosystem services in the face of disturbance and change.
"Although a definitive set of principles for enhancing the resilience of social-ecological systems and the ecosystem services they produce does not yet exist, our review suggests that there is sufficient knowledge to come up with a preliminary list of principles to provide practical guidance," Oonsie Biggs says.
Principles to be refined
In the study, resilience of ecosystem services is defined as the capacity of social-ecological systems to sustain a desired set of ecosystem services in the face of disturbance and on-going change.
By reviewing literature, the young resilience scholars have identified the following seven general principles for enhancing the resilience of desired ecosystem services:
1. Maintain diversity and redundancy
2. Manage connectivity
3. Manage slow variables and feedbacks
4. Foster an understanding of social-ecological systems as complex adaptive systems
5. Encourage learning and experimentation
6. Broaden participation
7. Promote polycentric governance systems.
The authors are, however, carefully stating that these principles should not be viewed as universally beneficial.
"They all require a nuanced understanding of how, when, and where they apply, as well as how they interact with or depend on other principles", co-author Maja Schlüter explains.
Ostrom was right
The review supports the conclusions of political scientist and Nobel Laureate Elinor Ostrom that there is no panacea for environmental governance. They also make clear that they do not present a definitive set of principles, but rather hope that their attempt will stimulate further discussion and refinement.
The first principle, maintenance of diversity, is important to provide options for responding to disturbance and change. Above all, a combination of response diversity and functional redundancy is central to maintaining resilient ecosystem services. It is also crucial to consider that very high levels of diversity or redundancy tend to be more of a cost than a benefit, since increasing complexity and inefficiency reduces the capacity for adaptation to slower, ongoing change.
The connections between, and trade-offs amongst, diversity, redundancy and resilience, as well as the impacts of social and economic diversity and redundancy, are less well understood and remain a field for future research.
In managing connectivity (principle 2), the effect on resilience depends on the linkages between nodes. Connectivity may also play an important part in providing new information and building trust in social networks. Sometimes connectivity can be too high, leading, for instance, to a local disturbance spreading throughout the system or, in a social setting, to knowledge becoming overly homogenized.
In terms of managing slow variables and feedbacks (3), stabilizing feedbacks in a system can help maintain a particular social-ecological regime and associated ecosystem services in the face of external stresses such as climate change.
The fourth principle, fostering an understanding of SES as complex adaptive systems (CAS), may increase the resilience of ecosystem services by emphasizing the need for integrated approaches, continual learning, and acceptance of the pervasive uncertainty that comes with managing SES.
Promoting polycentricity is the seventh and last principle. It is described as an overarching principle that can provide a governance structure that enables the other key principles, especially learning and experimentation (5), participation (6), modularity (2) and redundancy (1). Coordinating units in governance, negotiating trade-offs between users, and building social capital and trust are essential for polycentric arrangements.
In practice the principles often co-occur and are highly interdependent.
"Applying any one principle in isolation will rarely lead to enhanced resilience of ecosystem services. For instance, polycentric governance and effective learning both depend on the social capital and trust developed through participation", Biggs says.
She stresses the importance of understanding how the principles can be applied in collaboration with key stakeholders and the need to develop better measures to evaluate success.
"A better understanding of how to operationalize and apply the principles in different contexts is needed, particularly understanding how the principles can be applied in collaboration with key stakeholders, and developing better measures to evaluate success," Biggs and her colleagues conclude.
| <urn:uuid:b70ceb48-9e85-497b-aa04-48d427abda9f> | 3.09375 | 1,162 | Knowledge Article | Science & Tech. | 14.50552 | 95,569,146 |
That surprising fact falsifies a 13-year-old study and may help explain why dinosaurs were able to dominate the planet for 160 million years, said Holly Woodward, MSU graduate student in the Department of Earth Sciences and co-author of a paper published Aug. 3 in the journal "PLoS ONE."
"If we were trying to find evidence of dinosaurs doing something much different physiologically, we would expect it to be found in dinosaurs from an extreme environment such as the South Pole," Woodward said. "But based on bone tissues, dinosaurs living within the Antarctic Circle were physiologically similar to dinosaurs living everywhere else.
"This tells us something very interesting; that basically from the very start, early dinosaurs, or even the ancestors of dinosaurs, evolved a physiology that allowed an entire group of animals to successfully exploit a multitude of environmental conditions for millions of years," Woodward said.
Jack Horner, Woodward's adviser and Regents Professor of Paleontology/Curator of Paleontology at MSU's Museum of the Rockies, said Woodward's findings are consistent with other results from the museum's histology lab.
"I think the most important finding is that polar dinosaurs don't seem to be any different than any other dinosaurs in respect to how their bones grew," Horner said. "Dinosaurs have annual growth lines and those that don't have them are simply not yet a year old."
Woodward said she conducted her research after reading a 1998 study about polar dinosaurs. Intrigued by the study, she decided to review the findings and received a National Science Foundation grant that allowed her to travel to Australia last summer, set up a histology laboratory and analyze bones in a rare collection in Australia's Melbourne Museum.
Woodward analyzed the bone tissue of 17 dinosaurs that lived 112 to 100 million years ago during the latter part of the Early Cretaceous Period. All but one of the dinosaurs in her study were plant eaters. All lived in the Antarctic Circle in what is now known as the Australian state of Victoria.
Also participating in the study were the authors of the original study: Anusuya Chinsamy at the University of Cape Town in South Africa, Tom Rich at the Melbourne Museum and Patricia Vickers-Rich at Monash University in Australia.
The three scientists who conducted the original study welcomed her analysis and didn't mind that she falsified their hypothesis, Woodward said. She added that the new study looked at more dinosaur bones than the original study because more bones from the polar dinosaurs were available. Paleontologists have been adding to the collection over the past 25 to 30 years.
The original study looked at the bone microstructure of the polar dinosaurs and concluded that the differences they saw indicated that some dinosaurs survived harsh polar conditions by hibernating, while others evolved in a way that allowed them to be active year-round, Woodward said.
The new study showed that all but the youngest dinosaurs had "Lines of Arrested Growth," or LAGs, Woodward said. Since the hibernation hypothesis was based on the presence or absence of LAGs, the new study falsified the hypothesis.
LAGs, in a bone cross section, look like tree rings, Woodward said. Like tree rings, they are formed when growth temporarily stops.
"Research on animals living today suggests that LAGs form annually, regardless of latitude or climate," Woodward said. "Like tree rings, LAGs can be counted to age an animal, so that the absence of these marks likely indicates a dinosaur was less than a year old. These marks have also been found in dinosaurs that lived at much lower latitudes having no need to hibernate."
The new study doesn't mean there was nothing unique about polar dinosaurs, but those qualities aren't apparent in bone tissue, Woodward said.
"It is very likely that dinosaurs living in different environments evolved specific adaptations – either physical or behavioral – to cope with environmental conditions," she said. "Analysis of bone microstructure can tell us a great deal about growth, but some things just aren't recorded in bone tissue."
Evelyn Boswell | EurekAlert!
| <urn:uuid:ad4a2f6d-baae-4fe6-a773-95e7e358de34> | 3.6875 | 1,478 | Content Listing | Science & Tech. | 39.797705 | 95,569,164 |
Into the belly of the beast
While most of us hid from Hurricane Gonzalo, BIOS glided right into the belly of the beast.
Its scientists were able to gain insight into the monster storm, with help from a scientific ocean glider they named Anna.
Its ability to gather information on salinity, temperature and current speed of the deeper ocean is changing the future of weather forecasting.
“We didn’t have time to launch it during Tropical Storm Fay,” said Ruth Curry, a physical oceanographer at the Bermuda Institute of Ocean Sciences, “but we know the deep water temperature just before Fay was 82.4F. We launched just after Fay and the temperature had dropped by about four degrees. We know from readings during Hurricane Gonzalo that the deeper ocean temperature was still at this cooler temperature when Gonzalo approached. That caused Gonzalo to drop to a category three force storm and then a category two just as it struck us. Tropical Storm Fay really saved our skins.”
Anna was launched five miles off St David’s Head.
It measures 6ft long and 1ft wide and can be operated remotely. It dives, surfaces, sends data back to base through a satellite transmitter and then propels itself forward before diving again. During Gonzalo, the glider travelled 50 miles southwest of Bermuda.
According to Curry, scientists could only take measurements from fixed moorings in the past, and those measurements were at a very limited ocean depth.
Anna is able to travel to the area that partly fuels and directs tropical storms and hurricanes — between 200ft and 500ft.
“The gliders give us an unprecedented view of what happens under the ocean during a storm,” Curry said. “There have been a few other opportunities. A Rutgers University group measured Hurricane Irene two years ago, and gliders were also released during Hurricane Sandy in 2012. Rutgers were able to document the fact that Sandy’s forecasted intensity and track was off because weather forecasters didn’t have the ocean heat content right. Now, there is a fair amount of focus on improving that part of hurricane forecasting.”
Curry said data collected by the BIOS glider was “phenomenal”. For example, it recorded underwater waves of 150ft — a staggering height.
“This is a signature of a huge amount of energy,” she said. “That really mixes up the salinity of the ocean.”
BIOS collected the glider from the ocean on Monday.
“It was still rough out there,” said Curry. “The waves were going in one direction and the wind in another.”
The glider did very well during the storm, but its rudder was sheared off during the height of Gonzalo. That meant it could still dive and come to the surface but couldn't be steered remotely.
Already, there has been great interest in information collected by BIOS’ glider. The Weather Channel has interviewed Curry twice in the past week, about the glider’s findings.
“There is enormous interest,” she said. “I have already been contacted by colleagues at the National Hurricane Centre, Rutgers University and Woods Hole Oceanographic Institution in Massachusetts. I am new at storm research so I will be collaborating with other oceanographers to analyse the data.”
She is also hoping to give a public talk in Bermuda in the coming weeks about her findings.
“Launching during a direct-hit hurricane like Gonzalo, which was a category four as it approached us, is the Holy Grail of ocean science,” she added.
Environmental changes may have led to the emergence of resilient hybrids, scientists in Germany believe, raising the possibility that periods of climate flux can speed up evolution.
However, climate change also leads to large-scale extinctions because the environment changes too fast for species to adapt. As the present climate is changing very rapidly, researchers do not expect to see the same evolutionary adaptation in the future as has happened in the past.
The EU-funded GENEFLOW project looked at fossils of animals living during the Pleistocene period, a period which spanned from 2.5 million years to 12 000 years ago and was marked by frequent shifts between glacial and inter-glacial periods.
Due to the changing climatic conditions, animals often migrated between north and south, allowing researchers to explore gene flow – the extent to which genes are transferred from one species or population to another as animals mix.
‘We look at ancient DNA in the bones of species such as cave bears, mammoths, extinct elephants, wolves, hyenas, lynx or gray whales,’ said Professor Michael Hofreiter from the University of Potsdam, Germany, who is the principal investigator for GENEFLOW. ‘The question is: how do species adapt when there are environmental changes and there is migration?’
Unusually, Prof. Hofreiter’s team is focusing on DNA in the cell nucleus, which is passed along both the female and male line. Normally, researchers look at so-called mitochondrial DNA, which is located in the body of the cell and is passed down the generations through mothers only. This is advantageous when looking at the flow of genes between migrating animals, because male animals tend to be more mobile.
It has enabled the team to determine the complete genomes – an organism’s full set of DNA – of extinct species, and to see how these genomes have changed over time in different locations, shedding light on how climate change impacts biodiversity.
The first major results from the five-year GENEFLOW project should be published within six months. However, Prof. Hofreiter revealed: ‘What we have found is that gene flow is much more than we would have believed … There is evidence of a lot of gene flow between populations that were separate. There’s also evidence of gene flow between species.’
Gene flow makes creatures more resilient to climate change. ‘You have populations that live more in the south or more in the north,’ Prof. Hofreiter said. The genomes of these populations will be differently adapted to the local environment.
‘So if and when the climate changes, if it gets warmer, it is beneficial if you have gene flow from the south, to maintain a population.
‘In effect, evolution is being speeded up.’
Prof. Hofreiter points to modern day examples where species have interbred with beneficial results to the hybrid. For example, many European house mice are now resistant to the poison warfarin thanks to breeding with their Algerian peers.
Warfarin acts by interfering with the blood clotting process – meaning wounds continue to bleed. However, when the species were brought together – by humans – the Algerian mouse passed on a version of the VKORC1 gene – blocking the poisonous effects.
Often, hybrids suffer from physical problems, limiting survival, but in this case, the benefits to mice outweighed the drawbacks.
Skin, hair and nails
Another example of gene flow is between Neanderthals and humans, with some humans thought to carry 2 % of their archaic relative's DNA. Neanderthal genes are believed to help in making keratin, a protein used in skin, hair and nails, as well as strengthening the immune system.
‘In the same way, polar bears must have survived several inter-glacial periods – there may have been gene flow with brown bears,’ Prof. Hofreiter said. ‘We have to remember that 130 000 years ago it was so warm that we had elephants and hippos in parts of North Yorkshire.’
It shows that, rather than having fixed characteristics, a species should be seen as something that changes with the conditions.
‘The take-home message is that we must look at species as dynamic entities,’ said Prof. Hofreiter. ‘Rather than species dropping dead when the climate changes, when it gets hot, they will evolve by hybridizing.’
Professor Eeva Furman from the Finnish Environment Institute coordinates the EU-funded OpenNESS project, which is investigating how ecosystems and natural assets could be secured under pressures such as climate change.
She says that while large-scale movements of animals during Pleistocene may have increased gene flow among populations and been beneficial for adaptation, she does not expect to see the same thing happen in the future.
‘The present human-caused climate change will not lead to similar extensive mixing and adaptation of populations, partly because movements of most species are greatly hindered in human dominated landscapes, and partly because the present climate warming is extremely rapid in comparison with Pleistocene climate fluctuations,’ she said.
‘There is much research showing that the negative effects of climate change will dominate and add to the threat of populations, species and ecosystem services.’
LiU researcher Klas Tybrandt has put forward a theoretical model that explains the coupling between ions and electrons in the widely used conducting polymer PEDOT:PSS. The model has profound implications for applications in printed electronics, energy storage in paper, and bioelectronics.
One of the most commonly used materials in organic electronics is the conducting polymer PEDOT:PSS, and tens of thousands of scientific articles have been published referring to the material and its properties. One of the major advantages of PEDOT:PSS is that it conducts both ions and electrons, but a model that explains how this works has, until now, not been available. We know that the material has several useful properties, but we don't know why.
Klas Tybrandt, principal investigator in the Soft Electronics group at the Laboratory of Organic Electronics, Campus Norrköping, has developed a theoretical model for the interaction between ions and electrons that explains how ion transport and electron transport are related. The model has been published in the prestigious journal Science Advances.
"Classical electrochemical models have mainly been used in the past for this type of system, and this has led to a certain degree of confusion, since the models do not include the properties of semiconductors. We have used a purely physical description that clarifies the concepts," says Klas Tybrandt.
The material is a mixture of a semiconducting polymer and a polymer that conducts ions. The two phases are mixed down to the nanometer-scale, and even a thin film contains a huge number of interfaces. At the contact surface between the electronic and the ionic phases, what is known as an "electrical double layer" forms, which means that a charge separation builds up here between ions and electrons.
"We have combined semiconductor physics with a theory for electrolytes and electrical double layers, and we have been able to describe the properties of the material on a theoretical basis. We have also experimental results showing that the model agrees with laboratory measurements," says Klas Tybrandt.
PEDOT:PSS is one of several polymeric materials that act in the same way. Increased understanding of the material and its unique properties is a major advance for researchers in several areas of organic electronics.
One such area is printed electronics, where it is now possible to calculate and optimise the performance of electrochromic displays and transistors.
Another area that benefits from the new model is bioelectronics. Here, materials that conduct both ions and electrons are particularly interesting, since they can couple the ion conducting systems of the body with the electronic circuits in, for example, sensors.
"We can optimise the applications in a completely new way, now that we understand how these materials work," says Klas Tybrandt.
A third area is the storage of energy in paper, a field in which LiU researchers are world-leaders.
"Understanding the complexity of these polymers allows us to develop and optimise the technology. This will be one of the areas for the newly opened Wallenberg Wood Science Center," says Klas Tybrandt. | <urn:uuid:df5041b9-e305-4726-8c57-db95398820d8> | 3.25 | 638 | News Article | Science & Tech. | 29.060851 | 95,569,195 |
An artist's conception shows Orbex's Prime rocket lifting off. (Orbex Illustration) Lockheed Martin is in line to receive $31 million in grants from the UK Space Agency to establish Britain's first spaceport on Scotland's north coast, and to develop a new made-in-Britain system for deploying small satellites in orbit. The British government announced the grants today, only hours after it selected Sutherland as the site of the country's first vertical-launch spaceport and pledged to support the rise of horizontal-launch spaceports in other British locales. In addition to Lockheed Martin's grants, another $7 million will be awarded to London-based Orbex to support the development of its Prime rocket for launch from the Sutherland spaceport. The Prime rocket is designed to be fueled by bio-propane and will deliver payloads of up to 330 pounds to low Earth orbit. Today's grants were announced in conjunction with this week's Farnborough International Airshow, which is taking place southwest of London. It's not surprising that Lockheed Martin will benefit from the British grants. The U.S.-based company is a prominent member of the consortium supporting Sutherland's bid. Lockheed Martin has been tasked not only with establishing vertical-launch operations at the Sutherland spaceport, but also with developing a rocket-powered upper stage that's capable of deploying up to six small satellites in separate orbits. The work on the upper stage, known as an orbital maneuvering vehicle, will be done at a facility in the English city of Reading. "Lockheed Martin will apply its 50 years of experience in small satellite engineering, launch services and ground operations, as well as a network of U.K.-based and international teammates, to deliver new technologies, new capabilities and new economic opportunities," Patrick Wood, Lockheed Martin's U.K. country executive for space, said in a statement. British and U.S. governmental agencies have been working on an agreement that would establish a legal and technical framework for U.S. space launch vehicles to operate from launch sites in Britain. "Attracting U.S. operators to the U.K. will enhance our capabilities and boost the whole market," the UK Space Agency said in today's statement. British companies already produce nearly half of the world's small satellites and a quarter of the world's telecommunications satellites. The British government says the commercial space sector could contribute as much as $5 billion to the country's economy over the next decade. In its earlier announcement, the UK Space Agency said it would award £2.5 million ($3.3 million) to a consortium known as Highlands and Islands Enterprise to help get the Sutherland spaceport into operation in the early 2020s. Another £2 million would be made available for the development of horizontal-launch spaceports in England's Cornwall region, at Glasgow Prestwick on Scotland's west coast, and in Snowdonia in Wales.
An artist's impression shows the spaceport at Scotland's Sutherland site. (Courtesy of Perfect Circle PV) The British government has selected a spot in Sutherland, on the A'Mhoine Peninsula in the Scottish Highlands, as the site of the country's first spaceport. In a news release timed to coincide with the opening of this week's Farnborough International Airshow, the government said it would provide initial funding of £2.5 million ($3.3 million) to Highlands and Islands Enterprise to develop the vertical-launch site in Sutherland, with an aim of seeing the first liftoff in the early 2020s. Sutherland was chosen for the United Kingdom's first vertical launch site after an assessment of several proposed spaceport sites in Scotland as well as Wales and England's Cornwall region. The UK Space Agency determined that the spot on Scotland's north coast was the best place to target highly sought-after satellite orbits with vertically launched rockets. Three other proposed horizontal-launch sites will be eligible for grants from a newly established £2 million ($2.7 million) fund to promote suborbital space flights, satellite launches and spaceplane operations, the government said. Those sites are Newquay in Cornwall, Glasgow Prestwick in Scotland, and Snowdonia in Wales. "The space sector is an important player in the U.K.'s economy and our recent Space Industry Act has unlocked the potential for hundreds of new jobs and billions of revenue for British business across the country," Britain's secretary of state for transport, Chris Grayling, said in today's news release. British officials estimate that the commercial space sector will be worth a potential $5 billion to the country's economy over the next decade. The United Kingdom already has a thriving satellite industry, fueled in part by potential spaceport customers such as San Francisco-based Spire Global. "In Spire, Scotland already sports Europe's most advanced and prolific satellite manufacturing capability, and with a spaceport right next door, enabling clockwork-like launches, we can finally get our space sector supply chain to be truly integrated," Spire CEO Peter Platzer said. The government said additional grants from its £50 million ($66 million) UK Spaceflight Program fund would be announced during the Farnborough Airshow. Sutherland isn't likely to be Europe's only spaceport, and it may not be its first: Just last week, plans were announced for another European launch site, with operations beginning as early as 2020.
An artist's conception shows a passenger looking out the window of Blue Origin's New Shepard suborbital spaceship. (Blue Origin Illustration) Amazon billionaire Jeff Bezos' space venture, Blue Origin, is playing down reports that a suborbital space trip on its New Shepard rocket ship could cost $200,000 to $300,000. "We have not set ticket pricing and have had no serious discussions inside of Blue on this topic," Brett Griffin, a member of Blue Origin's media team, told GeekWire in an email. "We will begin selling tickets sometime after our first human flights and are focused on developing New Shepard." Blue Origin has flown eight uncrewed flight tests of the New Shepard system, which consists of a reusable booster that flies itself back to a landing and a crew capsule that floats back down at the end of a parachute. Further uncrewed flight tests reaching as high as 100 kilometers, the internationally recognized boundary of space, are expected in the months ahead. Blue Origin CEO Bob Smith told GeekWire in April that the company is aiming to start flying people by the end of this year. Those people won't be commercial customers, however. "We will fly Blue Origin astronauts before we fly commercial passengers and haven't done any real work on passenger selection or the ticket sale process," Griffin said. Blue Origin does, however, offer a sign-up list for would-be passengers, and it recently advertised for an astronaut experience manager. (The ad was taken down not that long ago, which could mean the position is filled. Or not.) The company has former NASA astronauts on its staff, and in private conversations, they tend to say they'd love to have first crack at flying on New Shepard. There have also been rumblings that Blue Origin employees would get an early chance to fly. Last year, one newly hired employee went so far as to say as much to a newspaper reporter. (The company pooh-poohed that report.) The report claiming that the price tag for a flight could be set in the range of $200,000 to $300,000 was attributed to two unnamed Blue Origin employees who were said to have knowledge of the pricing plan. For what it's worth, those figures are in the same range as the price tag advertised by Virgin Galactic, which is also testing a suborbital spaceship for passenger flights. It's not outside the realm of possibility that the future price tag is a topic of conversation at Blue Origin's headquarters in Kent, Wash., particularly if there's now an astronaut experience manager on the case. But today's statement suggests that it's way too early to write a check. "We will fly passengers when we are ready," Griffin said. "We have a flight test schedule, and schedules of those types always have uncertainties and contingencies. Anyone predicting dates is guessing."
Technicians check out the Crew Dragon capsule in Florida after the completion of thermal vacuum and acoustic testing at NASA's Plum Brook Station in Ohio. (SpaceX via Instagram) After months of testing, a SpaceX Dragon capsule that's designed to carry astronauts to and from the International Space Station has arrived in Florida, marking a significant step toward this summer's scheduled test launch. Even though the vehicle is called a "Crew Dragon," this Dragon won't carry crew on its first flight. Instead, it's due to make an uncrewed practice run to the space station during what's known as Demonstration Mission 1, or DM-1. Before this week's shipment to Florida, the Dragon underwent thermal vacuum and acoustic testing at NASA's Plum Brook Station in Ohio. Today SpaceX showed off a picture of the Crew Dragon, which is a redesigned, beefed-up version of its robotic cargo-carrying Dragon, via Instagram. NASA's current schedule calls for SpaceX's Falcon 9 rocket to launch the DM-1 mission next month from Kennedy Space Center. However, that schedule is dependent not only on the pace of preparations, but also on the timetable for station arrivals and departures. After several weeks, the Crew Dragon would unhook from the station and descend back down to Earth, still uncrewed, for a Pacific splashdown and recovery. SpaceX will follow up on DM-1 with an in-flight abort test, and eventually with a crewed demonstration flight to the space station. Meanwhile, Boeing is moving ahead with work on its own space taxi, the CST-100 Starliner. The first three Starliner spacecraft are undergoing a variety of tests in preparation for this year's first uncrewed flight to the space station. A crewed flight will follow, and NASA has the option of extending that crewed test flight into a longer-duration mission. It's not yet clear whether the Dragon or the Starliner will fly astronauts to the station first. Those spacefliers will be in a position to claim a U.S. flag left aboard the station by the shuttle Atlantis' crew for the next crew to be launched from U.S. soil. After the crewed demonstration flights, NASA will have to certify the Dragon and the Starliner for regular trips to and from the space station. This week, the Government Accountability Office issued a report saying that certification may not come until the end of 2019 or perhaps even 2020 — which is significantly later than NASA had anticipated. The GAO recommended that NASA come up with a contingency plan for ensuring there'd be a U.S. presence on the space station even if the space taxis aren't certified on time. Russia's Soyuz spacecraft is currently the only means approved for sending spacefliers to the space station. NASA's access to Soyuz seats is currently due to run out in late 2019.
In this artistic rendering, a blazar is accelerating protons that produce pions, which produce neutrinos and gamma rays. One neutrino's path is represented by a blue line passing through Antarctica, while a gamma ray's path is shown in pink. (IceCube / NASA Illustration) An array of detectors buried under a half-mile-wide stretch of Antarctic ice has traced the path of a single neutrino back to a supermassive black hole in a faraway galaxy, shedding light on a century-old cosmic ray mystery in the process. The discovery, revealed today in a trio of research papers published by the journal Science and The Astrophysical Journal, marks a milestone for the IceCube Neutrino Observatory at the National Science Foundation's Amundsen-Scott South Pole Station. It also marks a milestone for an observational frontier known as multi-messenger astronomy, which takes advantage of multiple observatories looking at the sky in different ways. Thanks to IceCube's alert, more than a dozen telescopes were able to triangulate on the neutrino's source. "No one telescope could have done this by themselves," said IceCube lead scientist Francis Halzen, a physics professor at the University of Wisconsin at Madison. The source of the high-energy neutrino detected last Sept. 22 appears to be a giant elliptical galaxy with a rapidly spinning black hole at its center, 3.7 billion light-years from Earth. Such a galaxy is known as a blazar, and its signature feature is a pair of jets that spew radiation and subatomic particles along the axis of the black hole's rotation. One of the blazar's jets just happens to be pointed directly at Earth. Astronomers have known about the blazar, known as TXS 0506+056, for years. But before IceCube came on the scene, they had no way of associating it or any other source with cosmic rays. Cosmic rays are high-energy particles that reach Earth from space, and most of them carry an electrical charge. Such particles can be deflected by magnetic fields, or blocked by interactions with intervening matter. That makes it impossible to trace the particles' paths to their source. Neutrinos are different: They don't carry an electrical charge, have virtually no mass, and interact so weakly with other types of matter that they typically pass right through anything that gets in their way — including stars and planets. That means they travel in a straight line from their source. On rare occasions, a high-energy neutrino makes a direct hit on an atomic nucleus, setting off a subatomic chain reaction. It's exactly that type of reaction that the $279 million IceCube Neutrino Observatory is designed to detect. The heart of the observatory is a three-dimensional array with thousands of light sensors, spread across a cubic kilometer of crystal-clear Antarctic ice deep beneath the surface. When a neutrino hits a nucleus, it triggers a characteristic flash of blue light that points in the direction of the neutrino's origin. On Sept. 22, IceCube picked up on a strong flash and determined that it was sparked by a neutrino with an energy of about 300 trillion electron volts. That's almost 50 times as energetic as the proton beams circulating in Europe's Large Hadron Collider. Within a minute, an alert went out to other astronomers to focus their telescopes on the patch of sky associated with the neutrino source in the constellation Orion. The IceCube Neutrino Observatory is buried at depths between 1.5 and 2.5 kilometers below the South Pole.
The only visible equipment is the IceCube Lab, which hosts the computers that collect data from over 5,000 light sensors in the ice. In this artistic rendering, which incorporates a photo of the Ice Cube Lab, neutrino event IC170922 is shown in the ice below Antarctica's surface. (IceCube Collaboration / NSF) Over the days that followed, the Fermi Gamma-Ray Space Telescope and the MAGIC Telescope in the Canary Islands detected a strong gamma-ray burst coming from TXS 0506+056. Other instruments, including the Neil Gehrels Swift Observatory and the NuSTAR X-ray telescope, picked up strong signals in multiple wavelengths from the same source. And when IceCube's scientists looked back through their archives, they found evidence of another flare that apparently emanated from TXS 0506+056 in December 2014. "All the pieces fit together," Albrecht Karle, a senior IceCube scientist from UW-Madison, said today in a news release. "The neutrino flare in our archival data became independent confirmation. Together with observations from the other observatories, it is compelling evidence for this blazar to be a source of extremely energetic neutrinos, and thus high-energy cosmic rays." This star chart shows the location of the neutrino source, TXS 0506+056, as a blue set of crosshairs in the constellation Orion. The blazar is too distant and faint to be seen with the naked eye. (IceCube / NASA) For more than a century, astronomers have speculated that cosmic rays emanated from violent phenomena such as supernovae, black holes and colliding galaxies. Now they have more than speculation to go on. "It is interesting that there was a general consensus in the astrophysics community that blazars were unlikely to be sources of cosmic rays, and here we are," Halzen said. "Now, we have identified at least one source that produces high-energy cosmic rays because it produces cosmic neutrinos." Like last year's combined detection of gravitational waves and light from a neutron star merger, IceCube's findings demonstrate the power of multi-messenger astronomy. "The era of multi-messenger astrophysics is here," NSF Director France Cordova said in a statement. "Each messenger — from electromagnetic radiation, gravitational waves and now neutrinos — gives us a more complete understanding of the universe, and important new insights into the most powerful objects and events in the sky." Cordova said "such breakthroughs are only possible through a long-term commitment to fundamental research and investment in superb research facilities." "Multimessenger Observations of a Flaring Blazar Coincident With High-Energy Neutrino IceCube-170922A" and "Neutrino Emission From the Direction of the Blazar TXS 0506+056 Prior to the IceCube-170922A Alert" are freely available on Science's website. The paper in The Astrophysical Journal is titled "A Multimessenger Picture of the Flaring Blazar TXS 0506+056: Implications for High-Energy Neutrino Emission and Cosmic Ray Acceleration."
Engineers work on New Shepard's crew capsule at Blue Origin's Kent factory. (Credit: Blue Origin) A new employment study indicates that roughly 3,000 people are directly employed by Washington state's space industry, and roughly half of them are at Blue Origin, Amazon billionaire Jeff Bezos' space venture. Most of Blue Origin's 1,500 employees work at the company's headquarters and production facility in Kent, Wash. So Erika Wagner, Blue Origin's payload sales director, has a good grasp on what draws space-savvy engineers to the Seattle area. "When we ask our new employees why they're coming … I'm going to guess that about half of them tell us that Seattle is part of the reason they say yes," Wagner said today at a Seattle Metropolitan Chamber of Commerce forum on the space industry. "They have other options on the table, but they'd like to live here. They want to go hiking, or they want to go boating, or they want to have access to the symphony or the opera here." Seattle's blend of the great outdoors and a vibrant cultural scene adds to the region's legacy in engineering, software and aerospace, fueled by Boeing, Microsoft and more recently Amazon. Most of Blue Origin's employees stick around: Wagner said the turnover rate amounts to less than 4 percent of the workforce annually. But what is it that motivates the ones who leave? There's a bit of irony in Wagner's answer to that question. "A significant percentage of them say the reason they leave is Seattle," Wagner said. "It's the rising cost of living, it's the weather, it's the traffic, it's the whatever. It's very much both one of our strongest assets, and one of our biggest challenges." The Seattle area's assets for the space industry, and its challenges, were the focus of today's forum. Panelists for the Seattle Metropolitan Chamber of Commerce's forum on the space industry include EarthNow's Kyu Hwang, Space Angels' Joe Landon, Blue Origin's Erika Wagner and Seattle author Sam Howe Verhovek. (GeekWire Photo / Alan Boyle) Economic activity traced specifically to the space industry still makes up a small share of the total aerospace industry in Washington state. The study set the space industry's total economic impact at $1.7 billion; the economic impact of the wider industry, ranging from rockets to passenger jets to drones, is many times larger. Nevertheless, the space industry's local impact is growing rapidly, thanks to Blue Origin and other ventures, from century-old and decades-old companies to more recent startups. Joe Landon, who serves as the chief financial officer for Redmond, Wash.-based Planetary Resources and chairman of the Space Angels investment group, said Seattle's space ventures often have to look far afield to find the talent they need. "We don't have much home-grown talent," Landon told the luncheon crowd. Wagner said there's a particularly acute need for expertise in avionics, electrical engineering and computer science. "Most software engineers haven't thought about being part of our industry," she said. "It makes recruiting that much harder, especially when we're competing against Silicon Valley startups for our talent pool here on the West Coast." For Kyu Hwang, EarthNow's vice president for applications and customer development, technical expertise is just one part of the equation. "We also need a lot of innovation in business models," he said. Bellevue, Wash.-based EarthNow plans to use a satellite constellation to beam down real-time video of our planet.
It's still operating in semi-stealth mode, but Hwang said the venture is well into the process of enlisting traditional and not-so-traditional customers. "We really think real-time, on-demand-access, motion video … we think those three characteristics will enable Earth observation to tap into a mass market," he said. The Seattle area's rising profile in software engineering, data analysis and cloud computing is seen as a net plus for the future: As the space industry matures, software smarts are looming larger. That's a big reason why SpaceX set up its satellite development operation in the Seattle area, about 1,000 miles away from its corporate headquarters in the Los Angeles area. Because of Washington state's geography and population distribution, it's not likely to ever play host to the spaceports that are available in other centers of the space industry, such as California, Florida and Texas. What's more, the Evergreen State doesn't have a NASA center to cozy up to. But Sam Howe Verhovek, a Seattle writer who's the author of "Jet Age," said the fact that the Pacific Northwest is off the beaten track may be a plus. "I've heard a couple of intriguing theories over the years, including, in a weird way, that it's easier to fail here. It's OK, it's expected. It's sort of part of a venture capital mentality," Verhovek said at today's forum. "You pick yourself up and dust yourself off. It's what we do."
News Brief: SpaceIL, an Israeli team that competed in the now-defunct Google Lunar X Prize, says it will have its lander launched toward the moon in December. The lander will be a secondary payload on a SpaceX Falcon 9 rocket taking off from Florida. The plan calls for the lander to execute a series of in-space maneuvers, then touch down on the lunar surface next February to transmit imagery and measure the moon's magnetic field. SpaceIL says about $88 million has been invested in the project to date, mostly from private donors. Here's a sampling of tweets about today's announcement: We have a launch and landing dates! December 2018- Launch, February 13 2019- First Israeli spacecraft lands on the moon! SpaceIL's moon mission is officially underway — SpaceIL (@TeamSpaceIL) Meet our spacecraft: small, smart and with a lot of Israeli . To the moon in December 2018! — SpaceIL (@TeamSpaceIL) Exciting news! Congrats to the whole team, including all the and VP / Head of the SpaceIL Spacecraft Program Yigal Harel. cc — American Technion Society (@TechnionUSA) I want to believe … but given the many private Moon missions I have seen announced over the past 25 years and then *all* vanish, I will believe in only when it sits on a rocket. On the pad. And the candle is lit. Sorry for having to demand this … — Daniel Fischer (@cosmos4u)
Vikram Sarabhai Space Center's 18-meter antenna, located near Bangalore, India, can be used for deep-space communications. (VSSC Photo) Seattle-based RBC Signals has forged an agreement with Antrix, the commercial arm of the Indian Space Research Organization, to widen its spectrum of communication services for spacecraft operators. The partnership adds C-band, Ku-band and Ka-band communication capabilities to RBC Signals' existing resources in the VHF, UHF, S, C and X radio bands. It also extends the company's potential reach beyond Earth orbit to the moon and deep space. The pact marks another first for the three-year-old startup. "It represents our first partnership with a national program," RBC Signals co-founder and CEO Christopher Richins told GeekWire. RBC Signals uses its own antennas as well as excess capacity from its partners' ground stations to knit together a global communications network for its customers in the satellite industry, priced to fit a customized pay-as-you-go model. Thanks to the Antrix deal, RBC Signals' network currently comprises more than 60 antennas at more than 40 locations. The Indian ground stations include antennas in Hassan, Bangalore and Lucknow. "At Antrix, we are excited about gaining greater utilization of our ground station investments through the innovative business model and services being provided by RBC Signals," Shri Rakesh Sasibhushan, Antrix's chairman and managing director, said today in a news release. Richins echoed that sentiment in his comments to GeekWire. "It demonstrates a general desire for these space assets, wherever they are, to be used more efficiently," he said at last month's NewSpace conference. RBC Signals' customers range from a venture that is setting up a 200-satellite communications constellation in low Earth orbit to one that is planning to put miniaturized telecom satellites in geostationary orbit. The company raised a seed round last year led by Bee Partners, and has talked about conducting a follow-on Series A round this year. Richins said his company currently has fewer than 10 employees. He emphasized that RBC Signals offers a range of service levels for satellite customers. "A company can't afford to pay for a gilded 'failure is not an option' service when they're just testing," Richins said. "But we have the ability to provide five 9's of reliability when the customer needs it." Future offerings could include optical communication links as well as direct links between satellites. "We're not a ground station company," Richins said. "We're a space communications company."
Aerojet Rocketdyne’s AR-22 rocket engine fires during a test at NASA’s Stennis Space Center in Mississippi. (NASA / DARPA Photo) A rocket engine built from spare space shuttle parts — and the team behind the engine — passed a grueling 10-day, 10-firing test that sets the stage for Boeing’s Phantom Express military space plane. “We scored a perfect 10 last week,” Jeff Haynes, Aerojet Rocketdyne’s program manager for the AR-22 engine, told reporters today during a teleconference. The hydrogen-fueled AR-22 is largely based on the RS-25 engine that was used on the space shuttle and will be used on NASA’s heavy-lift Space Launch System. “We’ve upgraded the ‘brain’ for this derivative mission,” using an advanced controller, Haynes said. Aerojet, Boeing and the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, set up the 240-hour test between June 26 and July 6 to see whether the AR-22 could be turned around rapidly enough for a 100-second, full-throttle firing every day. The bottom line? It can. “We had 68 minutes to spare when we finished the last test,” Haynes said. Along the way, the team had to deal with two direct lightning strikes that damaged the test facility at NASA’s Stennis Space Center in Mississippi. Engineers also had to work out a procedure to get rid of the moisture that gathered in the engine during firings. “Trying to run the engine again without drying that out would lead to catastrophic events,” Haynes said. At first, the procedure took about 17 hours, but they eventually got the time down to as little as six hours. During the shuttle program, a similar process took days to accomplish, Haynes said. Thanks to the successful test, the Phantom Express program — also known as the Experimental Spaceplane or XS-1 — is on track for an initial demonstration flight in 2021, said Steve Johnston, director of launch at Boeing Phantom Works. Scott Wierzbanowski, DARPA’s program manager for the Experimental Spaceplane, said the two-stage launch system is being designed for 10 liftoffs in 10 days. After each launch, the reusable first-stage booster would glide to an airplane-like landing. Phantom Express should be capable of delivering 3,000 pounds of payload to low Earth orbit at a cost of less than $5 million a flight. Those performance levels represent a “sweet spot” for military as well as commercial applications, Wierzbanowski said. Boeing’s Johnston said the specifications for the Phantom Express plane are going through critical design review, leading up to the start of assembly in mid-2019. “A lot of our design philosophies and design guidelines are actually derived from the commercial airplane business,” he said. “The materials system that we’re using is actually the materials system that was originally developed for application on the all-composite 787.” The liquid-oxygen tank already has been fabricated at Boeing’s Advanced Developmental Composite Facility in the Seattle area. “It went really well. … We have some additional outfitting to do to that tank,” Johnston said. The design of the plane’s upper stage is still in flux, and the launch site for the first demonstration flight in 2021 has not yet been selected. That initial suborbital flight will test only the first-stage booster, Johnston said. DARPA is providing up to $146 million for the project, with Boeing and Aerojet kicking in an additional unspecified amount for development. Technicians inspect the AR-22 rocket engine after a hot-fire test. 
(Aerojet Rocketdyne Photo) Haynes said the lessons learned from the 10-day engine test could be applied not only to the Phantom Express, but also to Aerojet's work on the RS-25 engines for the Space Launch System. For example, the SLS could benefit from a sensor-based performance-monitoring system that was tested on the AR-22, known as the Advanced Anomaly Command and Control Center, or AC3. "We actually tricked the engine to thinking it was experiencing a red-line condition, which under the shuttle program would have been an immediate shutdown of the engine," Haynes said. "We allowed our software to throttle down the engine automatically, assess the situation and then do a stepwise recovery of the thrust profile in a matter of seconds." Aerojet is pioneering a new generation of engineering for Phantom Express and the Space Launch System — with the aid of a new generation of engineers, Haynes said. "We have experienced engineers that really cut their teeth on the shuttle program," he said. "And we have a large amount of new engineers now that are able to be mentored and trained through the process of this highly aggressive program that we just did through the last two weeks." An artist's conception shows Boeing's Phantom Express XS-1 space plane in flight. (Boeing Illustration)
Phantom Express by the numbers:
Length: 100 feet
Wingspan: 62 feet
Weight at liftoff, fully fueled: 240,000 pounds
AR-22 engine liftoff thrust: More than 375,000 pounds
AR-22 propellants: Liquid hydrogen, liquid oxygen
Maximum speed: Mach 10 (7,600 mph)
Sources: Boeing, Aerojet Rocketdyne
News Brief: A robotic Russian Progress cargo craft today was launched on a fast-track trajectory that got it to the International Space Station in less time than it takes to drive from Seattle to Spokane. Liftoff of the Progress' Soyuz rocket from Russia's Baikonur Cosmodrome in Kazakhstan came at 2:51 p.m. PT. The Progress and its payload, consisting of nearly three tons' worth of food, fuel and supplies, arrived at the orbital outpost at 6:31 p.m. PT. The fastest-ever cargo run took less than four hours, rather than the usual two days, due to a carefully planned, time-saving, two-orbit trajectory that Russia wants to use for crewed as well as uncrewed flights.
Research Project: Next-generation sequencing approach to study water pollution response Prerequisite: Completed General Biology, General Chemistry and Elemental Functions Description: Michigan is blessed with water resources. Unfortunately, past economic activities have harmed waterways in the Great Lakes region. The goal of the project is to understand how the base of aquatic food chains (e.g., water plants) responds to water pollution. One of the most common pollutants is phosphate. To identify genes that respond to phosphate, duckweed was exposed to high- and low-phosphate conditions. Next-generation sequencing technology was used to identify 5,566 gene products that were differentially expressed. The laboratory's focus during the summer of 2016 will be to assess whether the expression patterns observed in the laboratory are similar to expression in natural settings. Additionally, a next-generation sequencing approach, called metagenomics, will be used to identify microorganisms important to the plant. The techniques used in this investigation are the same as those used in many biomedical research laboratories.
Q = KAh/l is what formula?
Derive the Darcy's law equation.
True: groundwater movement is determined by slope from high areas to low areas.
What is hydraulic pressure?
Groundwater movement follows the topography from high to low, and from areas of high pressure to areas of low pressure.
Groundwater flows underground in response to elevation differences (downwards) and pressure differences (from areas of high pressure to areas of low pressure). Near the water table, this means that groundwater usually flows ‘downhill’, i.e. from a higher level to a lower level, just as it would on the surface.
The difference in energy between two points that are l metres apart horizontally on a sloping water table is determined by the difference in height (h) between them. This height is called the head of water. The slope of the water table is called the hydraulic gradient and is defined as h/l. The rate of groundwater movement (Q - the volume of water flowing in unit time, with units of m3 s-1) is related to the hydraulic gradient by Darcy's law:
Q = KAh/l
In the equation, K is the hydraulic conductivity and is defined as the volume of water that will flow through a unit cross-sectional area of rock per unit time, under a unit hydraulic gradient and at a specified temperature.
A is the cross-sectional area at right angles to the flow path.
The units of hydraulic conductivity are metres per second (m s-1) or metres per day.
The hydraulic conductivity depends on the properties of the rock that allow water to flow through it (its permeability) and also on the properties of the water.
Unlike hydraulic conductivity, permeability is an intrinsic property of the rock, so it is the same whatever the nature of the fluid flowing through the rock - whether water, as in this instance, or oil or gas.
The hydraulic conductivity (K), however, depends on the density and viscosity of the fluid, so it will vary accordingly. When the fluid is water, the most important factor that affects the hydraulic conductivity is temperature. For example, an increase in water temperature from 5 °C to about 30 °C will double the hydraulic conductivity and, from Darcy's law, will therefore double the speed at which the groundwater flows.
Rocks can be divided into two broad categories - permeable and impermeable - on the basis of their hydraulic conductivity. Rocks generally regarded as permeable have hydraulic conductivities of 1 m per day or more.
Hydraulic conductivity is proportional to permeability. So from Darcy's law it can be deduced that in a rock of constant hydraulic conductivity (K), and hence of constant permeability for a given fluid, the rate (Q) at which the groundwater flows will increase as the hydraulic gradient (h/l, the slope of the water table) increases.
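To make the relationship concrete, here is a minimal Python sketch of Darcy's law. The aquifer numbers are illustrative assumptions, not values taken from the text, and the function name is invented for the example.

def darcy_flow(K, A, h, l):
    """Volumetric flow rate Q = K * A * (h / l), in m3 per second.

    K -- hydraulic conductivity (m/s)
    A -- cross-sectional area at right angles to the flow path (m2)
    h -- head of water: the height difference between the two points (m)
    l -- horizontal distance between the two points (m)
    """
    return K * A * (h / l)

# Hypothetical permeable aquifer: K of 1 metre per day, expressed in m/s.
K = 1.0 / 86400        # m/s (86,400 seconds in a day)
A = 500.0              # m2
h, l = 2.0, 1000.0     # a 2 m head drop over 1 km

Q = darcy_flow(K, A, h, l)
print(Q * 86400)                        # ~1.0 m3 per day
print(darcy_flow(K, A, 2 * h, l) / Q)   # doubling the gradient doubles Q: 2.0

As the last line shows, the flow rate scales linearly with the hydraulic gradient h/l, which is the deduction made above.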
The flow of groundwater in the direction of the slope of the water table is only part of the picture, for groundwater is also in motion at greater depths, where it generally moves in a curved path rather than a straight line when seen in cross-section, towards a stream or river, a spring, or even a well. This path is the result of movement towards an area of discharge, such as the stream.
The direction of flow of groundwater at depth is not parallel to the water table; instead, water moves in a curved path, converging towards a point of discharge. In (Section a) the rock is uniformly permeable, and the water discharges into streams in the valleys - it may approach the stream from below. In (Section b) the hill is capped by a permeable rock which is underlain by an impermeable rock. The water is diverted laterally by the impermeable rock, and springs result where the boundary between the permeable and impermeable rocks intersects the ground surface. | <urn:uuid:d80c016e-899a-4715-9e15-a457f54e3312> | 3.578125 | 822 | Knowledge Article | Science & Tech. | 47.316422 | 95,569,237 |
New study reveals how magma injected into the crust from below has contributed to the uplift of the spectacular Altiplano-Puna plateau in the central Andes
A new analysis of the topography of the central Andes shows the uplifting of the Earth's second highest continental plateau was driven in part by a huge zone of melted rock in the crust, known as a magma body.
The Altiplano-Puna plateau is a high, dry region in the central Andes that includes parts of Argentina, Bolivia, and Chile, with vast plains punctuated by spectacular volcanoes. In a study published October 25 in Nature Communications, researchers used remote sensing data and topographic modeling techniques to reveal an enormous dome in the plateau.
About 1 kilometer (3,300 feet) high and hundreds of miles across, the dome sits right above the largest active magma body on Earth. The uplifting of the dome is the result of the thickening of the crust due to the injection of magma from below, according to Noah Finnegan, associate professor of Earth and planetary sciences at UC Santa Cruz and senior author of the paper.
"The dome is the Earth's response to having this huge low-density magma chamber pumped into the crust," Finnegan said.
The uplifting of the dome accounts for about one-fifth of the height of the central Andes, said first author Jonathan Perkins, who led the study as a graduate student at UC Santa Cruz and is now at the U.S. Geological Survey in Menlo Park, Calif.
"It's a large part of the evolution of the Andes that hadn't been quantified before," Perkins said.
The other forces uplifting the Andes are tectonic, resulting from the South American continental plate overriding the Nazca oceanic plate. The subduction zone where the Nazca plate dives beneath the western edge of South America is the source of the magma entering the crust and feeding volcanic activity in the region. Water released from the subducting slab of oceanic crust changes the melting temperature of the overlying wedge of mantle rock, causing it to melt and rise into the overriding plate.
Perkins and Finnegan worked with researchers at the University of Arizona who had used seismic imaging to reveal the remarkable size and extent of the Altiplano-Puna magma body in a paper published in 2014. That study detected a huge zone of melted material about 11 kilometers thick and 200 kilometers in diameter, much larger than previous estimates.
"People had known about the magma body, but it had not been quantified that well," Perkins said. "In the new study, we were able to show a tight spatial coupling between that magma body and this big, kilometer-high dome."
Based on their topographic analysis and modeling studies, the researchers calculated the amount of melted material in the magma body, yielding an estimate close to the previous calculation based on seismic imaging. "This provides a direct and independent verification of the size and extent of the magma body," Finnegan said. "It shows that you can use topography to learn about deep crustal processes that are hard to quantify, such as the rate of melt production and how much magma was pumped into the crust from below."
The Altiplano-Puna Volcanic Complex was one of the most volcanically active places on Earth starting about 10 million years ago, with several super-volcanoes producing massive eruptions and creating a large complex of collapsed calderas in the region. Although no major eruptions have occurred in several thousand years, there are still active volcanoes and geothermal activity in the region. In addition, satellite surveys of surface deformation since the 1990s have shown that uplifting of the surface is continuing to occur at a relatively rapid rate in a few places. At Uturuncu volcano located right in the center of the dome, the uplift is about 1 centimeter (less than half an inch) per year.
"We think the ongoing uplift is from the magma body," Perkins said. "The jury is still out on exactly what's causing it, but we don't think it's related to a supervolcano."
The growth of the crust beneath the Altiplano-Puna plateau, driven by the intrusion of magma from below, is a fundamental process in the building of continents. "This is giving us a glimpse into the factory where continents get made," Perkins said. "These big magmatic systems form during periods called magmatic flare-ups when lots of melt gets injected into Earth's crust. It's analogous to the process that created the Sierra Nevada 90 million years ago, but we're seeing it now in real time."
In addition to Perkins and Finnegan, the coauthors of the paper include Kevin Ward, George Zandt, and Susan Beck at the University of Arizona and Shanaka de Silva at Oregon State University. This research was funded by the National Science Foundation.
Tim Stephens | EurekAlert!
With ultraviolet eyes, Cassini gazes at cloud bands and wavy structures in Saturn's southern hemisphere. In the ultraviolet, the gaseous part of the atmosphere is bright and high clouds and aerosols tend to be dark. The Cassini spacecraft narrow angle camera took the image on May 15, 2004, from a distance of 24.7 million kilometers (15.4 million miles) from Saturn through a filter centered at 298 nanometers. The image scale is 147 kilometers (91 miles) per pixel. Contrast in the image was enhanced to aid visibility. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA's Office of Space Science, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team is based at the Space Science Institute, Boulder, Colo. For more information about the Cassini-Huygens mission, visit http://saturn.jpl.nasa.gov and the Cassini imaging team home page, http://ciclops.org.
Comparing the theoretical versions of the Beaufort scale, the T-Scale and the Fujita scale
- Meaden G. Terence, Kochev S, Kolendowicz L, Kosa-Kiss A, Marcinoniene Izolda, Sioutas Michalis, Tooming Heino, Tyrrell John
- TORRO HQ, Tornado and Storm Research Organisation, Oxford Brookes University, Oxford, OX3 0BP, UK, TORRO Bulgaria, National Institute of Meteorology, 1784 Sofia, Bulgaria, TORRO Poland, Adam Mickiewicz University, Institute of Physical Geography, Poznań, Poland, TORRO Romania–Hungary, Str. I.C. Bratianu 3, 415500 Salonta, Romania, TORRO Lithuania, Lithuanian Hydrometeorological Service, 09300 Rudnios 6, Vilnius, Lithuania, TORRO Greece, ELGA–Meteorological Applications Center, 55103 Thessaloniki, Greece, TORRO Estonia, Estonian Meteorological Institute, 10143 Rävala 8, Tallinn, Estonia, TORRO Ireland, Department of Geography, National University of Ireland, Cork, Ireland
- Atmospheric Research SCI(E) SCOPUS
- Elsevier in 2006
2005 is the bicentenary of the Beaufort Scale and its wind-speed codes: the marine version in 1805 and the land version later. In the 1920s, when anemometers had come into general use, the Beaufort Scale was quantified by a formula based on experiment. In the early 1970s two tornado wind-speed scales were proposed: (1) an International T-Scale based on the Beaufort Scale; and (2) Fujita's damage scale developed for North America. The International Beaufort Scale and the T-Scale share a common root in having an integral theoretical relationship with an established scientific basis, whereas Fujita's Scale introduces criteria that make its intensities non-integral with Beaufort. Forces on the T-Scale, where T stands for Tornado force, span the range 0 to 10, which is highly useful worldwide. The shorter range of Fujita's Scale (0 to 5) is acceptable for American use but less convenient elsewhere. To illustrate the simplicity of the decimal T-Scale, mean hurricane wind speed of Beaufort 12 is T2 on the T-Scale but F1.121 on the F-Scale; while a tornado wind speed of T9 (= B26) becomes F4.761. However, the three wind scales can be unified by either making F-Scale numbers exactly half the magnitude of T-Scale numbers [i.e. F′half = T / 2 = (B / 4) − 4] or by doubling the numbers of this revised version to give integral equivalence with the T-Scale. The result is a decimal formula F′double = T = (B / 2) − 4, named the TF-Scale, where TF stands for Tornado Force. This harmonious 10-digit scale has all the criteria needed for worldwide practical effectiveness.
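These conversions are easy to verify numerically. The sketch below assumes the standard empirical wind-speed formulas v = 0.836 B^1.5 m/s (Beaufort) and v = 6.30 (F + 2)^1.5 m/s (Fujita); both are widely quoted forms, but treat the code as illustrative rather than the authors' own calculation:

```python
# Cross-check of the conversions quoted above, assuming the standard
# empirical formulas v = 0.836*B**1.5 (Beaufort) and v = 6.30*(F+2)**1.5 (Fujita).
def t_from_beaufort(B):
    return B / 2 - 4                            # T = (B/2) - 4

def f_from_beaufort(B):
    return B * (0.836 / 6.30) ** (2 / 3) - 2    # equate the two wind-speed formulas

for B in (12, 26):
    print(f"B{B}: T{t_from_beaufort(B):.0f}, F{f_from_beaufort(B):.3f}")
# B12: T2, F1.122  (mean hurricane force)
# B26: T9, F4.764  (matches the quoted F4.761 up to rounding of the constants)
```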
Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than μ0, the permeability of vacuum. In most materials diamagnetism is a weak effect which can only be detected by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it repels a magnetic field entirely from its interior.
Diamagnetism was first discovered when Sebald Justinus Brugmans observed in 1778 that bismuth and antimony were repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as diamagnetic (the prefix dia- meaning through or across), then later changed it to diamagnetism.
Diamagnetism, to a greater or lesser degree, is a property of all materials and always makes a weak contribution to the material's response to a magnetic field. For materials that show some other form of magnetism (such as ferromagnetism or paramagnetism), the diamagnetic contribution becomes negligible. Substances that mostly display diamagnetic behaviour are termed diamagnetic materials, or diamagnets. Materials called diamagnetic are those that laypeople generally think of as non-magnetic, and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants.
Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as χv = μv − 1. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is χv = −9.05 × 10−6. The most strongly diamagnetic material is bismuth, χv = −1.66 × 10−4, although pyrolytic carbon may have a susceptibility of χv = −4.00 × 10−4 in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Note that because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value.
All conductors exhibit an effective diamagnetism when they experience a changing magnetic field. The Lorentz force on electrons causes them to circulate around forming eddy currents. The eddy currents then produce an induced magnetic field opposite the applied field, resisting the conductor's motion.
Curving water surfaces
If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by its reflection.
Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space.
A thin slice of pyrolytic graphite, which is an unusually strong diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective demonstration of diamagnetism.
The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated.
In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass.
Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity.
A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet.
The electrons in a material generally circulate in orbitals with effectively zero resistance, acting like current loops. Thus it might be imagined that diamagnetic effects would be extremely common, since any applied magnetic field would generate currents in these loops that oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism but typically respond only very weakly to the applied field.
The Bohr–van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory for Langevin diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity B, applied to an electron with charge e and mass m, gives rise to Larmor precession with frequency ω = eB / 2m. The number of revolutions per unit time is ω / 2π, so the current for an atom with Z electrons is (in SI units)

I = −Ze²B / 4πm.
The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the z axis. The average loop area can be given as π⟨ρ²⟩, where ⟨ρ²⟩ is the mean square distance of the electrons perpendicular to the z axis. The magnetic moment is therefore

μ = −Ze²B⟨ρ²⟩ / 4m.
If the distribution of charge is spherically symmetric, we can suppose that the distributions of the x, y, z coordinates are independent and identically distributed. Then ⟨x²⟩ = ⟨y²⟩ = ⟨z²⟩ = ⟨r²⟩/3, where ⟨r²⟩ is the mean square distance of the electrons from the nucleus. Therefore ⟨ρ²⟩ = ⟨x²⟩ + ⟨y²⟩ = 2⟨r²⟩/3. If n is the number of atoms per unit volume, the diamagnetic susceptibility in SI units is

χ = μ0nμ / B = −μ0nZe²⟨r²⟩ / 6m.
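As a quick sanity check on the size of χ predicted by this formula, the sketch below plugs in rough numbers; Z, n and ⟨r²⟩ are order-of-magnitude assumptions, not measured values:

```python
# Order-of-magnitude check of the Langevin formula chi = -mu0*n*Z*e^2*<r^2>/(6*m).
mu0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A
e   = 1.602e-19                  # electron charge, C
m   = 9.109e-31                  # electron mass, kg
a0  = 5.29e-11                   # Bohr radius, m

Z  = 10                          # ASSUMED: ~10 electrons per atom (illustrative)
n  = 5e28                        # ASSUMED: ~5e28 atoms per m^3 (typical solid)
r2 = a0**2                       # ASSUMED: mean square radius ~ a0^2

chi = -mu0 * n * Z * e**2 * r2 / (6 * m)
print(f"chi ~ {chi:.1e}")        # ~ -8e-6, i.e. the weak 1e-5 scale quoted above
```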
The Langevin theory is not the full picture for metals because they have non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is

χ = −μ0e²kF / 12π²m,

where kF is the Fermi wave vector, related to the Fermi energy EF by kF = √(2mEF)/ħ. This is equivalent to −μ0μB²g(EF)/3, exactly −1/3 times the Pauli paramagnetic susceptibility, where μB = eħ/2m is the Bohr magneton and g(EF) is the density of states. This formula takes into account the spin degeneracy of the carriers (spin ½ electrons).
In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the de Haas–van Alphen effect, also first described theoretically by Landau.
- Video of a museum-style magnetic elevation train model that uses diamagnetism
- Videos of frogs and other diamagnets levitated in a strong magnetic field
- Diamagnetic Levitation (YouTube)
- Large Pyrolytic Carbon Square Floating (YouTube)
- Diamagnetism of water (YouTube, in Japanese)
- Video of a piece of neodymium magnet levitating between blocks of bismuth. | <urn:uuid:c861d5e2-3179-4075-95c8-410d1179b237> | 3.84375 | 2,840 | Knowledge Article | Science & Tech. | 45.426786 | 95,569,281 |
Imagine the vast, empty tundra in Alaska and Canada giving way to trees, shrubs and plants typical of more southerly climates. Imagine similar changes in large parts of Eastern Europe, northern Asia and Scandinavia, as needle-leaf and broadleaf forests push northward into areas once unable to support them. Imagine part of Greenland's ice cover, once thought permanent, receding and leaving new tundra in its wake.
Those changes are part of a reorganization of Arctic climates anticipated to occur by the end of the 21st century, as projected by a team of University of Nebraska-Lincoln and South Korean climatologists.
In an article to be published in a forthcoming issue of the scientific journal Climate Dynamics, the research team analyzed 16 global climate models covering 1950 to 2099 and combined them with more than 100 years of observational data to evaluate what climate change might mean for the Arctic's sensitive ecosystems by the dawn of the 22nd century.
The study is one of the first to apply a specific climate classification system to a comprehensive examination of climate changes throughout the Arctic by using both observations and a collection of projected future climate changes, said Song Feng, research assistant professor in UNL's School of Natural Resources and the study's lead author.
Based on the climate projections, the new study shows that the areas of the Arctic now dominated by polar and sub-polar climate types will decline and will be replaced by more temperate climates – changes that could affect a quarter to nearly half of the Arctic, depending on future greenhouse gas emission scenarios, by the year 2099.
Changes to Arctic vegetation will naturally follow shifts in the region's climates: Tundra coverage would shrink by 33 to 44 percent by the end of the century, while temperate climate types that support coniferous forests and needle-leaf trees would push northward into the breach, the study shows.
"The expansion of forest may amplify global warming, because the newly forested areas can reduce the surface reflectivity, thereby further warming the Arctic," Feng said. "The shrinkage of tundra and expansion of forest may also impact the habitat for wildlife and local residents."
Also according to the study:
By the end of the century, the annual average surface temperature in Arctic regions is projected to increase by 5.6 to 9.5 degrees Fahrenheit, depending on the greenhouse gas emission scenarios.
The warming, however, is not evenly distributed across the Arctic. The strongest warming in the winter (by 13 degrees Fahrenheit) will occur along the Arctic coast regions, with moderate warming (by 4 to 6 degrees Fahrenheit) along the North Atlantic rim.
The projected redistributions of climate types differ regionally; in northern Europe and Alaska, the warming may cause more rapid expansion of temperate climate types than in other places.
Tundra in Alaska and northern Canada would be reduced and replaced by boreal forests and shrubs by 2059. Within another 40 years, the tundra would be restricted to the northern coast and islands of the Arctic Ocean.
The melting of snow and ice in Greenland following the warming will reduce the permanent ice cover, ceding that territory to tundra.
"The response of vegetation usually lags changes in climate. The plants don't have legs, so it takes time for plant seed dispersal, germination and establishment of seedlings," Feng said. Still, the shrub density in tundra regions has seen a rapid increase on decadal and shorter time scales, while the boreal forest expansion has seen a much slower response on century time scales.
Also, increasing drought conditions may help offset any potential benefits of warmer temperatures and reduce the overall vegetation growth in the Arctic regions, Feng said.
Non-climate factors – human activity, land use changes, permafrost thawing, pest outbreaks and wildfires, for example – may also locally affect the response of vegetation to temperature warming in the Arctic.
In addition to Feng, researchers on the project included climatologists Qi Hu and Robert Oglesby of UNL; Su-Jong Jeong and Chang-Hoi Ho of the School of Earth and Environmental Sciences at Seoul National University; and Baek-Min Kim of the Korean Polar Research Institute in Incheon.
Song Feng | EurekAlert!
The Solar Probe Plus, a small car-sized spacecraft will plunge directly into the sun's atmosphere approximately four million miles from our star's surface. It will explore a region no other spacecraft ever has encountered in an effort to unlock the sun's biggest mysteries.
Artist's concept of the Solar Probe Plus approaching the sun. (Credit: NASA)

For decades, scientists have known that the corona, or the outer atmosphere, is several hundred times hotter than the visible solar surface and that the solar wind accelerates up to supersonic speeds as it travels through the corona. In the Solar Probe Plus mission, scientists hope to find answers to two questions: why is the solar corona so much hotter than the photosphere, and how is the solar wind accelerated? The answers to these questions can be obtained only through in-situ measurements of the solar wind down in the corona.
NRL's Wide-field Imager for Solar Probe (WISPR) is one of five science investigations selected by NASA for this mission. It is the only optical investigation because the solar environment is so hot the instruments need to be tucked behind a heat shield. NRL's Dr. Russell Howard, the principal investigator, says, "This is an extremely exciting mission - no other spacecraft has ever gone this close - it is like the early voyagers of the earth, we don't really know what to expect, but we know, whatever it is, it is going to be spectacular."
The imager is a telescope, which looks off to the side of the heat shield, and will make 2-D images of the sun's corona as the spacecraft flies through. But like a medical CAT scan, the orbit of the spacecraft through the corona will enable 3-D images and a determination of the 3-D structure of the corona. The experiment actually will see the solar wind and provide 3-D images of clouds and shocks as they approach and pass the spacecraft. "We'll be flying through the structures that we've only seen from 100 million miles away. We'll be able to see all the phenomena (mass ejections, streamers, shocks, comets, and dust) up close. Other instruments will be able to measure the magnetic and electric fields and the plasma itself," explains Howard. This investigation complements instruments on the spacecraft by providing direct measurements of the plasma far away as well as near the spacecraft - the same plasma the other instruments sample.
The other four investigations chosen for the Solar Probe Plus mission include: The Solar Wind Electrons Alphas and Protons Investigation will specifically count the most abundant particles in the solar wind -- electrons, protons and helium ions -- and measure their properties. The investigation also is designed to catch some of the particles in a special cup for direct analysis. (Smithsonian Astrophysical Observatory in Cambridge, Massachusetts)
The Fields Experiment will make direct measurements of electric and magnetic fields, radio emissions, and shock waves that course through the sun's atmospheric plasma. The experiment also serves as a giant dust detector, registering voltage signatures when specks of space dust hit the spacecraft's antenna. (University of California Space Sciences Laboratory in Berkeley, California)
The Integrated Science Investigation of the Sun consists of two instruments that will take an inventory of elements in the sun's atmosphere using a mass spectrometer to weigh and sort ions in the vicinity of the spacecraft. (Southwest Research Institute in San Antonio, Texas)
The Heliospheric Origins with Solar Probe Plus is led by Dr. Marco Velli who is the mission's observatory scientist, responsible for overseeing assembly of the spacecraft. He will ensure adjacent instruments do not interfere with one another and guide the overall science investigations after the probe enters the sun's atmosphere. (NASA's Jet Propulsion Laboratory in Pasadena, California)
The Solar Probe Plus mission is part of NASA's Living with a Star Program. The program is designed to understand aspects of the sun and Earth's space environment that affect life and society. The program is managed by NASA'S Goddard Space Flight Center in Greenbelt, Maryland, with oversight from NASA's Science Mission Directorate's Heliophysics Division. The Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, is the prime contractor for the spacecraft.
Donna McKinney | EurekAlert!
Programming in Java Advanced Imaging
Publisher: Sun Microsystems, Inc. 1999
Number of pages: 488
This document introduces the Java Advanced Imaging API and how to program in it. It is intended for serious programmers who want to use Java Advanced Imaging for real projects. To best understand this document and the examples, you need a solid background in the Java programming language and some experience with imaging. In addition, you will need a working knowledge of other Java Extension APIs.
Download or read it online for free here:
by David Etheridge - BookBoon
This introductory book is the third part of the 'Java'-series written by David Etheridge. This volume gives the reader an introduction to Input/Output Packages, Streams, Iterators, Graphical User Interface (GUI) and much more.
- Sun Microsystems, Inc.
A conceptual description of the Java Sound API, with some code snippets as programming examples. It is assumed that the reader has a basic knowledge of programming in the Java language. Familiarity with audio and MIDI is helpful but not assumed.
by Jan Bodnar - ZetCode
Swing library is an official Java GUI toolkit released by Sun Microsystems. It is used to create Graphical user interfaces with Java. This is a Java Swing tutorial. The Java Swing tutorial is suited for beginners and intermediate Swing developers.
by Antonio Hernandez Bejarano - GitBook
This online book will introduce the main concepts required to write a 3D game using the LWJGL 3 library. LWJGL is a Java library that provides access to native APIs used in the development of graphics, audio and parallel computing applications. | <urn:uuid:b40e9a31-50e6-4be1-b984-5c9f02d75c5f> | 2.515625 | 343 | Content Listing | Software Dev. | 35.34142 | 95,569,298 |
Building Blocks for Biobots
Contact: Paul Preuss, [email protected]
"Biology today is at the same stage chemistry was a century ago it's growing up quickly, making the transition from a largely descriptive discipline to one where we use what we know about biological systems to build new things," says Jan Liphardt, a Divisional Fellow in Berkeley Lab's Physical Biosciences Division (PBD) and a newly named assistant professor of physics at the University of California at Berkeley.
"In response to this development," Liphardt says, "PBD has established the nation's first Synthetic Biology Department," which is headed by PBD staff scientist Jay Keasling, a professor of chemical engineering at UC Berkeley. As founding members, Liphardt and his group are particularly interested in the design and construction of what Carlos Bustamante, head of PBD's Advanced Microscopies Department and a UCB professor of biochemistry, molecular biology, and physics, has dubbed "biobots" autonomous, special-purpose robots, about the size of a virus or cell and composed of a small number of biological and artificial parts.
"One advantage of building biobots from the ground up" says Liphardt, "is that it's possible to use construction materials that are not normally found in biological systems."
Another advantage is that biobots at least in their initial forms will contain far too few components to replicate themselves, reducing any risk to the environment. "Since we still know only very little of how our biosphere works, it makes sense to proceed very cautiously," adds Derek Greenfield, a biophysics graduate student currently in the Keasling lab.
What to do with a biobot
Biobots have potential applications in medicine, national security, environmental protection, and many other fields. As one example, Liphardt envisions biobots designed to decontaminate toxic spills: "They could detect and identify specific hazardous chemicals, track down the extent of contamination, and internally manufacture whatever was needed to clean up the mess, all with one trip to the site."
That scenario may be some time off, but the knowledge gained in learning to build special-purpose synthetic devices will be immediately useful in analyzing and modeling the complicated dynamics of living cells. Much simpler and less versatile than highly evolved living systems, dedicated biobots will nevertheless mimic nature in important ways.
"Biobots will be able to assemble themselves from externally provided components; outwardly they'll consist of enclosures resembling cell walls or viral capsids," Liphardt says. "Molecular motors like flagella will give them motility. They'll have modules for using light or chemicals from their surroundings to make ATP" small molecules cells use to store and transport energy "and modules for sensing their environment and for performing specialized tasks."
Liphardt says, "Essentially, we wish to develop a collection of functional and structural LEGO blocks that we can mix and match. These building blocks will need to be robust, and they will have to be able to perform in concert with other building blocks, with no mutual interference. Achieving module separability is one of the biggest problems in this new field ideally you would like to be able to combine, say, a motor with a sensor and a power source, without having to reengineer each of those modules every time you add another module, or subtly change one of them."
Still, the task of building biobots is much simpler than trying to develop a self-replicating cell from scratch. Liphardt compares the first biobot to the Wright brothers' first powered airplane: "The Wrights didn't have to reproduce the flight of birds in all its details," Liphardt says. "It was important for them to know that it could be done, and it was up to them to figure out what the essential physical principles were, and then build a machine out of the available materials."
Likewise, the scientists in Berkeley Lab's Synthetic Biology Department will mix and match various materials, some biological, like proteins, lipids, and DNA, and some, where needed, artificial, like silicon nitride or cadmium selenide.
Enclosures, for example, can be far less complex than real cell membranes. One kind of biobot enclosure might be self-assembled from proteins, while another biobot might get by with something as simple as a lipid bilayer. In the right environment, lipid molecules with hydrophilic (water loving) heads and hydrophobic (water fearing) tails readily self-assemble, heads out and tails in, in a tough, double-layered skin.
One of the early lessons of synthetic biology research is that living organisms can produce an amazing diversity of materials, ranging from nanoparticles of silver (made by certain bacteria) to single-mode optical fibers. Researchers at Bell Labs recently discovered that the deep-sea sponge Euplectella grows spiny spicula, which are optical fibers with excellent optical properties and better crack resistance than conventional, man-made fibers.
A biobotic tool kit
Constructing biobots will require special tools; Liphardt and his group are working to assemble the tool kit. One need is for a nanoscale analog of the industrial crane. "The established tools are optical tweezers, magnetic tweezers, and atomic force microscopes," he says, "but typically they are not used simultaneously." Imagine trying to build something and having to choose between being able to move girders around without knowing precisely where they are, or knowing exactly where they are but not being able to grab them. This is the current situation in the single-molecule field, and one of the most pressing tasks is to combine various forms of imaging and manipulations tools.
Combine the "crane" of optical tweezers with high-resolution imaging tools such as Fluorescence Resonance Energy Transfer (FRET), which can be used as a molecular ruler, and the construction of complex, nanometer-sized objects becomes more practical. In FRET, a molecule of fluorescent dye excited by incident light transfers its increased energy to an adjacent dye molecule; fluorescence decreases in the first molecule and increases in the second, a measure of the distance between them.
"We have recently built a combined optical tweezers/single molecule fluorescence instrument," Liphardt says, and his group has successfully used the new instrument to characterize nanoscale strain gauges based on conventional fluorescent dyes; it is designed to measure displacements and forces inside molecular machines. Fluorescent dyes can be specially designed or harvested from nature to bind at specific sites on target molecules. This makes it possible to optically measure by changes in color or brightness the forces involved in manipulating biological molecules, or to watch the mechanochemical processes inside living cells.
Thermodynamics on the nanoscale
Beyond the practical problems of assembling molecule-sized living machines lie daunting theoretical difficulties. Chemists and biologists are accustomed to describing material properties in terms of bulk averages. "Our intuition fails when applied to very small systems," Liphardt says. "Imagine a world where your car moves forward only on average, but every once in a while jumps backwards!"
Liphardt and his colleagues are using single-molecule techniques to refine and extend thermodynamic explanations, well suited to describing matter and energy in bulk, to the behavior of nanoscale devices. Much of traditional thermodynamics is formulated on the basis of equilibrium states in which the macroscopic properties of the system no longer change, and the free energy is at a minimum. A living thing, however, is better described as existing in a nonequilibrium steady state one requiring a constant flow of energy and mass through the system. The challenge is to relate these seemingly different states, extending the theory of thermodynamics to make predictions about small systems and individual molecules.
Using optical tweezers, Bustamante, Liphardt, and their colleagues performed pioneering experiments on the mechanical unfolding of RNA molecules, which established that a mathematical relation known as the Jarzynski Equality is applicable in exploring how energy states govern the ways large biological molecules fold. More recently Liphardt, Bustamante, and others, including equality author Chris Jarzynski from Los Alamos, have measured the statistical properties of microspheres (beads) driven through water by optical tweezers.
A bead fluctuating in an optical trap balances the frictional force of the water through which it moves and the optical forces holding it in the trap; at different speeds, these fluctuations constitute different nonequilibrium steady states. The results of the study showed that the Jarzynski Equality can be used to recover thermodynamic information about nanoscale systems previously thought inaccessible, effectively extending thermodynamics into the realm of living things including biobots.
"We are developing theoretical tools to understand the perturbation of small systems; we're building the hybrid instruments we need to assemble and evaluate nanoscale structures. Next is building an actual device," says Liphardt. "Only by doing it the hard way can we show we're on the right track."
The Wright brothers' first airplane didn't carry freight or passengers, and the very first biobot may not do much more than prove it can move and respond to its surroundings. Nevertheless, says Liphardt, "Our biobot program is helping to lay the foundations of a future science of molecular architecture. We're all going to be surprised at the remarkable developments just around the corner." | <urn:uuid:1df8e724-ca18-49c3-8513-546eca6bb4f5> | 3.203125 | 1,950 | Knowledge Article | Science & Tech. | 18.987315 | 95,569,336 |
The sample mean or empirical mean and the sample covariance are statistics computed from a collection (the sample) of data on one or more random variables. The sample mean and sample covariance are estimators of the population mean and population covariance, where the term population refers to the set from which the sample was taken.
The sample mean is a vector each of whose elements is the sample mean of one of the random variables – that is, each of whose elements is the arithmetic average of the observed values of one of the variables. The sample covariance matrix is a square matrix whose i, j element is the sample covariance (an estimate of the population covariance) between the sets of observed values of two of the variables and whose i, i element is the sample variance of the observed values of one of the variables. If only one variable has had values observed, then the sample mean is a single number (the arithmetic average of the observed values of that variable) and the sample covariance matrix is also simply a single value (a 1x1 matrix containing a single number, the sample variance of the observed values of that variable).
Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics and applications to numerically represent the location and dispersion, respectively, of a distribution.
Let $x_{ij}$ be the $i$th independently drawn observation ($i = 1, \dots, N$) on the $j$th random variable ($j = 1, \dots, K$). These observations can be arranged into $N$ column vectors, each with $K$ entries, with the $K \times 1$ column vector giving the $i$th observations of all variables being denoted $\mathbf{x}_i$ ($i = 1, \dots, N$).
The sample mean vector $\bar{\mathbf{x}}$ is a column vector whose $j$th element $\bar{x}_j$ is the average value of the $N$ observations of the $j$th variable:

$$\bar{x}_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij}, \qquad j = 1, \dots, K.$$

Thus, the sample mean vector contains the average of the observations for each variable, and is written

$$\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i.$$
The sample covariance matrix is a K-by-K matrix $\mathbf{Q} = [q_{jk}]$ with entries

$$q_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_{ij} - \bar{x}_j\right)\left(x_{ik} - \bar{x}_k\right),$$

where $q_{jk}$ is an estimate of the covariance between the $j$th variable and the $k$th variable of the population underlying the data. In terms of the observation vectors, the sample covariance is

$$\mathbf{Q} = \frac{1}{N-1}\sum_{i=1}^{N} \left(\mathbf{x}_i - \bar{\mathbf{x}}\right)\left(\mathbf{x}_i - \bar{\mathbf{x}}\right)^{\mathrm{T}}.$$
Alternatively, arranging the observation vectors as the columns of a matrix, so that

$$\mathbf{F} = \begin{bmatrix}\mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_N\end{bmatrix},$$

which is a matrix of $K$ rows and $N$ columns. Here, the sample covariance matrix can be computed as

$$\mathbf{Q} = \frac{1}{N-1}\left(\mathbf{F} - \bar{\mathbf{x}}\,\mathbf{1}_N^{\mathrm{T}}\right)\left(\mathbf{F} - \bar{\mathbf{x}}\,\mathbf{1}_N^{\mathrm{T}}\right)^{\mathrm{T}},$$

where $\mathbf{1}_N$ is an $N \times 1$ vector of ones. If the observations are arranged as rows instead of columns, so that $\bar{\mathbf{x}}$ is now a $1 \times K$ row vector and $\mathbf{M} = \mathbf{F}^{\mathrm{T}}$ is an $N \times K$ matrix whose column $j$ is the vector of $N$ observations on variable $j$, then applying transposes in the appropriate places yields

$$\mathbf{Q} = \frac{1}{N-1}\left(\mathbf{M} - \mathbf{1}_N \bar{\mathbf{x}}\right)^{\mathrm{T}}\left(\mathbf{M} - \mathbf{1}_N \bar{\mathbf{x}}\right).$$
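A minimal NumPy sketch of the two matrix arrangements above, using np.cov only as a cross-check (the variable names and random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 10
F = rng.normal(size=(K, N))          # K variables (rows), N observations (columns)

xbar = F.mean(axis=1, keepdims=True) # K x 1 sample mean vector
D = F - xbar                         # subtract the mean from every column
Q = D @ D.T / (N - 1)                # K x K sample covariance matrix

# Row-arranged equivalent: M is N x K, one observation per row.
M = F.T
Q_rows = (M - M.mean(axis=0)).T @ (M - M.mean(axis=0)) / (N - 1)

assert np.allclose(Q, Q_rows)
assert np.allclose(Q, np.cov(F))     # matches NumPy's unbiased estimator
```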
Like covariance matrices for random vectors, sample covariance matrices are positive semi-definite. To prove it, note that for any matrix $\mathbf{A}$ the matrix $\mathbf{A}^{\mathrm{T}}\mathbf{A}$ is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the vectors $\mathbf{x}_i - \bar{\mathbf{x}}$ is $K$.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a row vector whose $j$th element ($j = 1, \dots, K$) is one of the random variables. The sample covariance matrix has $N - 1$ in the denominator rather than $N$ due to a variant of Bessel's correction: in short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population mean $E(\mathbf{X})$ is known, the analogous unbiased estimate

$$q_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(x_{ij} - E(X_j)\right)\left(x_{ik} - E(X_k)\right),$$

using the population mean, has $N$ in the denominator. This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters).
The maximum likelihood estimate of the covariance,

$$q_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(x_{ij} - \bar{x}_j\right)\left(x_{ik} - \bar{x}_k\right),$$

for the Gaussian distribution case has $N$ in the denominator as well. The ratio of $1/N$ to $1/(N-1)$ approaches 1 for large $N$, so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large.
For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of $N$ observations on the $j$th random variable, the sample mean's distribution itself has mean equal to the population mean $E(X_j)$ and variance equal to $\sigma_j^2 / N$, where $\sigma_j^2$ is the variance of the random variable $X_j$.
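A quick simulation, assuming normally distributed data purely for illustration, makes the $\sigma_j^2/N$ scaling concrete:

```python
import numpy as np

# Empirical check that the sample mean has variance sigma^2 / N.
rng = np.random.default_rng(2)
sigma, N, trials = 2.0, 25, 200_000
means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)
print(means.var(), sigma**2 / N)   # both ~0.16
```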
In a weighted sample, each vector $\mathbf{x}_i$ (each set of single observations on each of the $K$ random variables) is assigned a weight $w_i \geq 0$. Without loss of generality, assume that the weights are normalized:

$$\sum_{i=1}^{N} w_i = 1.$$

(If they are not, divide the weights by their sum.) Then the weighted mean vector $\bar{\mathbf{x}}$ is given by

$$\bar{\mathbf{x}} = \sum_{i=1}^{N} w_i \mathbf{x}_i,$$
and the elements $q_{jk}$ of the weighted covariance matrix $\mathbf{Q}$ are

$$q_{jk} = \frac{1}{1 - \sum_{i=1}^{N} w_i^2}\sum_{i=1}^{N} w_i\left(x_{ij} - \bar{x}_j\right)\left(x_{ik} - \bar{x}_k\right).$$
If all weights are the same, $w_i = 1/N$, the weighted mean and covariance reduce to the sample mean and covariance mentioned above.
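A short NumPy sketch of the weighted formulas, cross-checked against np.cov's aweights handling (weights are assumed normalized; the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 3, 10
X = rng.normal(size=(N, K))          # one observation per row
w = rng.random(N)
w /= w.sum()                         # normalize the weights to sum to one

xbar = w @ X                         # weighted mean, a length-K vector
D = X - xbar
Q = (D.T * w) @ D / (1.0 - np.sum(w**2))  # weighted covariance, K x K

# Cross-check against NumPy (ddof=1 by default gives the same 1/(1 - sum w^2) factor).
assert np.allclose(Q, np.cov(X.T, aweights=w))
```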
The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean. | <urn:uuid:0791b6fb-64e7-4bdb-8dd7-5e98c8c2e9bf> | 3.75 | 1,150 | Knowledge Article | Science & Tech. | 36.888077 | 95,569,374 |
A lithium nucleus decays into a helium nucleus and a proton of rest mass 1.6726 × 10⁻²⁷ kg, as shown by the following reaction:

₃Li → ₂He + ₁H
In this reaction, momentum and total energy are conserved. After the decay, the proton moves with a non-relativistic speed of 1.95 × 10⁷ m/s.
a) Determine the kinetic energy of the proton.
b) Determine the speed of the helium nucleus.
c) Determine the kinetic energy of the helium nucleus.
d) Determine the mass that was transformed into kinetic energy in this decay.
e) Determine the rest mass of the lithium nucleus.
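The problem as excerpted omits the helium rest mass and the initial state of the lithium nucleus. The sketch below assumes the lithium nucleus decays from rest and uses an assumed helium rest mass of 6.6447 × 10⁻²⁷ kg (use the value given in your course materials), so treat the numbers as illustrative:

```python
# Worked sketch for parts (a)-(e); assumes decay from rest and an
# ASSUMED helium rest mass (not given in the problem as excerpted).
m_p  = 1.6726e-27    # proton rest mass, kg (given)
v_p  = 1.95e7        # proton speed, m/s (given)
m_He = 6.6447e-27    # ASSUMED helium rest mass, kg
c    = 3.0e8         # speed of light, m/s

KE_p  = 0.5 * m_p * v_p**2           # (a) ~3.2e-13 J
v_He  = m_p * v_p / m_He             # (b) momentum conservation: ~4.9e6 m/s
KE_He = 0.5 * m_He * v_He**2         # (c) ~8.0e-14 J
dm    = (KE_p + KE_He) / c**2        # (d) mass converted to kinetic energy, ~4.4e-30 kg
m_Li  = m_p + m_He + dm              # (e) rest mass of the lithium nucleus

print(KE_p, v_He, KE_He, dm, m_Li)
```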
Does life use random number generators?
Molecular noise is widespread in living cells, but do cells ever exploit these fluctuations to achieve complex tasks?
Many sophisticated computer programs use random number generators to help solve challenging problems. These problems range from achieving secure communication across the Internet to deciding how best to invest in the stock market. Much research in recent years has found that randomness is also widespread in living cells, where it is often called "noise". For example, the activity of some genes is so unpredictable that it appears random. Yet relatively little is known about how such gene-expression noise propagates up to change how the cell behaves. Many open questions also remain about how cells might exploit these or other fluctuations to achieve complex tasks, much as people use random number generators.
Bacteria perform a number of complex tasks. Some bacteria will swim toward chemicals that suggest a potential reward, such as food. Yet they swim away from chemicals that could lead them to harm. This ability is called chemotaxis and it relies on a network of interacting enzymes and other proteins that coordinates a bacterium’s movements with the input from its senses.
Keegstra et al. set out to find sources of noise that might act as random number generators and help the bacterium E. coli to best perform chemotaxis. An improved version of a technique called in vivo Förster resonance energy transfer (or in vivo FRET for short) was used to give a detectable signal when two proteins involved in the chemotaxis network interacted inside a single bacterium. The experiments showed that this protein network amplifies gene-expression noise for some genes while lessening it for others. In addition, the interactions between proteins encoded by genes acted as an extra source of noise, even when gene-expression noise was eliminated.
Keegstra et al. found that the amount of signaling within the chemotaxis network, as measured by in vivo FRET, varied wildly over time. This revealed two sources of noise at the level of protein signaling. One was due to randomness in the activity of the enzymes involved in tuning the cell’s sensitivity to changes in its environment. The other was due to protein interactions within a large complex that acts as the cell’s sensor. Unexpectedly, this second source of noise under some conditions could be so strong that it flipped the output of the cell’s signaling network back and forth between just two states: “on” and “off”.
Together these findings uncover how signaling networks can not only amplify or lessen gene-expression noise, but can themselves become a source of random events. The new knowledge of how such random events interact with a complex trait in a living cell — namely chemotaxis — could aid future antimicrobial strategies, because many bacteria use chemotaxis to help them establish infections. More generally, the new insights about noise in protein networks could help engineers seeking to build synthetic biochemical networks or produce useful compounds in living cells.
To find out more
Read the eLife research paper on which this eLife digest is based: | <urn:uuid:b356c220-1ae8-44d5-9bfd-d46c7b9c2510> | 3.71875 | 637 | Truncated | Science & Tech. | 35.403057 | 95,569,393 |
Aquatic macro-invertebrate communities as biomonitors of water quality
Water is one of the most indispensable resources: the elixir of life for human beings and other organisms, and essential for maintaining the balance of nature. Availability of safe drinking water is a worldwide problem, especially in Pakistan, where about 44 per cent of the population lacks access to clean water, while in rural areas 90 per cent of the population lacks such access. According to the Pakistan Council of Research in Water Resources (PCRWR), about 200,000 children in Pakistan die every year of diarrheal diseases alone. Human activities (domestic, industrial, agricultural, artificial drainage and other potential impacts) have negative effects on water quality and stream habitats (Saunders et al. 2002). The resulting pollution generally changes the water chemistry and decreases the dissolved oxygen. Estimating the levels of pollutants such as heavy metals, nitrates and phosphates is always expensive and difficult for ordinary people.
To assess water impurities, many freshwater organisms have been used in biological monitoring, including bacteria, algae, vascular plants, macroinvertebrates and fish. Of these organisms, macroinvertebrates, which include molluscs, crustaceans, annelids and insects, are frequently recommended for biomonitoring programs because of their diversity, ease of collection and ease of identification to taxonomic levels suitable for bioassessment. Biological assessments of human and environmental impacts on water quality and aquatic organisms have been used since the early 1900s, by observing the presence and absence of sensitive organisms in a habitat (Paparisto et al. 2008).
For example, 49 US states use macroinvertebrates in their water quality monitoring programs. Macroinvertebrates provide essential nutrients (proteins, lipids and energy) for secondary consumers (e.g. waterfowl, shorebirds, fish, amphibians and other vertebrate predators) and help in the maintenance of water quality by facilitating organic decomposition and nutrient cycling (Batzer et al. 1999; Davis and Bidwell 2008). Because of their sensitivity to disturbance, aquatic macroinvertebrate communities are also excellent biological indicators for evaluating the health of various wetland ecosystems. Additionally, the occurrence of these aquatic insects can indicate the presence of industrial effluents that has resulted in good growth of macrophytes in a river. Among these techniques, the use of macroinvertebrates, which are ideal indicators for assessing water quality, has become standard in many countries; using aquatic insects as bioindicators of water pollution is less expensive than evaluating the physical and chemical parameters used in assessing quality (Arimoro and Ikomi 2009; Trigal et al. 2009; Lili et al. 2010).
Aquatic insects also play an important ecological role in nitrogen remobilization, by eating small organisms and being consumed by other animals and fishes. Moreover, aquatic insects are useful in assessing heavy metal pollution, which is toxic even at very low concentrations, so detecting heavy metals at trace levels is very important. It is also important, in terms of the environmental movement, to be able to show how long metals stay in the water. The biological evaluation of water quality is linked to the number of pollution-tolerant organisms compared to the number of pollution-intolerant ones. If the stream yields a higher proportion of pollution-tolerant macroinvertebrates and no sensitive ones, that could indicate poor water or habitat quality. A more favorable water quality index would be characterized by finding sensitive organisms as well as tolerant organisms.
Every species has a certain range of physical and chemical conditions in which it can survive. Some organisms can survive in a wide range of conditions and can tolerate more pollution. Other organisms are very sensitive to changes in water conditions and cannot tolerate pollution. Examples of intolerant organisms are mayflies, stoneflies and some caddisflies (members of the Ephemeroptera, Plecoptera and Trichoptera orders respectively). Some pollution-tolerant organisms include leeches, aquatic worms and some Dipterous larvae. Water quality is evaluated by comparing the number of tolerant organisms to the number of intolerant organisms. A large number of pollution-tolerant organisms and few intolerant organisms may indicate poor water and/or habitat quality. However, pollution-tolerant organisms can also be found in a wide range of conditions, including pollution-free environments (Voshell and Reese 2002).
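As a toy illustration of this idea, the sketch below computes a simple tolerance-weighted score from hypothetical counts; the taxa, tolerance values and interpretation are all invented for illustration and do not correspond to any published index:

```python
# Toy biotic-index sketch: weight each taxon's count by a tolerance score
# (0 = very sensitive, 10 = very tolerant). All numbers here are hypothetical.
tolerance = {"mayfly": 2, "stonefly": 1, "caddisfly": 3, "leech": 8, "aquatic_worm": 9}
counts    = {"mayfly": 12, "stonefly": 4, "caddisfly": 9, "leech": 2, "aquatic_worm": 3}

total = sum(counts.values())
index = sum(tolerance[t] * n for t, n in counts.items()) / total

# Lower scores mean the community is dominated by pollution-sensitive taxa.
print(f"tolerance-weighted index: {index:.2f} (n = {total})")
```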
Despite the importance of aquatic insects for water quality assessment, very little scientific research has been done on macroinvertebrates in Pakistan, a country that depends heavily on annual glacier melt and monsoon rains. Research on these organisms in relation to water quality variables would be a significant development, especially in rural areas where people struggle to obtain safe drinking water.
The authors are from the Department of Entomology, University of Agriculture Faisalabad, Pakistan. They can be reached at <email@example.com>
https://www.technologytimes.pk/aquatic-macro-invertebrate-communities-as-biomonitors-of-water-quality/
Optimal Seed Solver (OSS) is a dynamic-programming algorithm that finds the optimal seeds of a read, i.e. the set of seeds with the minimum total seed frequency.
Seed selection is an important step for pigeonhole based seed-and-extend read mappers. In seed selection, a read is broken into multiple non-overlapping substrings called seeds. Seeds are used as anchors to index into the reference genome. To tolerate e errors, a read is typically broken into e+1 seeds such that by the pigeonhole principle, at least one seed is error free.
The overall frequency of the selected seeds has a direct impact on the performance of the mapper. If seeds are frequent, the mapper has to verify many potential mappings through edit-distance calculation, which is a time-consuming process.
OSS aims to find the least-frequent set of e+1 non-overlapping seeds of a read using a dynamic-programming method. For details about the algorithm of OSS, please consult our manuscript online at: http://arxiv.org/abs/1506.08235
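The core recurrence can be sketched as follows. This is a minimal illustration of the dynamic-programming idea, not the OSS code itself: it assumes a hypothetical freq() lookup (in the real repository that role is played by the k-mer hash or reference index), and it shows the naive formulation whose cost OSS then reduces with further optimizations.

    # Minimal sketch of optimal seed selection by dynamic programming.
    # freq() is a hypothetical stand-in for the reference-index lookup.

    def freq(read, i, j):
        """Stub: frequency of the seed read[i:j] in the reference index."""
        raise NotImplementedError

    def optimal_total_frequency(read, x, min_len=1):
        """Minimum total frequency of x (= e+1) non-overlapping seeds in `read`."""
        n = len(read)
        INF = float("inf")
        # opt[j][p]: best total frequency using j seeds placed within read[:p]
        opt = [[0] * (n + 1)] + [[INF] * (n + 1) for _ in range(x)]
        for j in range(1, x + 1):
            for p in range(j * min_len, n + 1):
                best = opt[j][p - 1]            # position p-1 left unused
                for L in range(min_len, p - (j - 1) * min_len + 1):
                    prev = opt[j - 1][p - L]    # j-1 seeds fit before this one
                    if prev < INF:
                        best = min(best, prev + freq(read, p - L, p))
                opt[j][p] = best
        return opt[x][n]  # backtracking through opt recovers the seed positions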
This repository contains many code files. Besides hosting the OSS code, it also contains other seed selection implementations, code files for main classes and misc code files. Below we summarize each code file.
Seed selection mechanisms:
- optimalSolverLN.h, optimalSolverLN.cc: contains the code of OSS.
- optimalSolver.h, optimalSolver.cc: contains the code of an optimal solver that has quadruple-complexity. We used this code to verify the integrity of OSS.
- basicSolver.h, basicSolver.cc: contains the code of a basic solver, which selects seeds at fixed positions with fixed lengths.
- fastHASHSolver.h, fastHASHSolver.cc: contains the code simulating the fastHASH seed selection mechanism.
- hobbesSolver.h, hobbesSolver.cc: contains the code simulating the hobbes seed selection mechanism.
- spacedSeedSolver.h, spacedSeedSolver.cc: contains the code simulating the spaced seed selection mechanism.
- thresholdSolver.h, thresholdSolver.cc: contains the code simulating the Gem mapper's seed selection mechanism.
MISC code files:
- KmerHash.h, KmerHash.cc: a hash function that transforms an ASCII string into a 64-bit integer.
- HashTree.h, HashTree.cc: a suffix tree to index the reference genome. Notice that this implementation is not optimized for performance; it was initially built for gathering statistics on the reference genome. For the human reference genome it needs 350 GB of main memory for indexing. Please consider replacing HashTree with a BWT implementation if you plan to integrate OSS into a production mapper.
- RefDB.h, RefDB.cc: a database that caches the reference genome.
Each seed selection mechanism has a complementary main-code file, named in the fashion "test(mechanism).cc".
How to run:
There are two steps to run OSS (and similarly other seed selection mechanisms):
Index the reference genome:
$ ./testHashTree referenceName.fasta treeFile.tree
Run the solver on the indexed reference with a read file:
$ ./testOptimalSeedLN treeFile.tree readFile.fastq
Global warming 'hiatus' challenged by NOAA research
Scientists have long labored to explain what appeared to be a slowdown in global warming that began at the start of this century as, at the same time, heat-trapping emissions of carbon dioxide were soaring.
The slowdown, sometimes inaccurately described as a halt or hiatus, became a major talking point for people critical of climate science.
Now, new research suggests the whole thing may have been based on incorrect data.
When adjustments are made to compensate for recently discovered problems in the way global temperatures were measured, the slowdown largely disappears, the National Oceanic and Atmospheric Administration declared in a scientific paper published Thursday. And when the particularly warm temperatures of 2013 and 2014 are averaged in, the slowdown goes away entirely, the agency said.
"The notion that there was a slowdown in global warming, or a hiatus, was based on the best information we had available at the time," said Thomas Karl, director of the National Centers for Environmental Information, a NOAA unit in Asheville, N.C. "Science is always working to improve."
The change prompted accusations Thursday from some climate-change denialists that the agency was trying to wave a magic wand and make inconvenient data go away. Mainstream climate scientists not involved in the NOAA research rejected that charge, saying it was essential that agencies like NOAA try to deal with known problems in their data records.
At the same time, senior climate scientists at other agencies were in no hurry to embrace NOAA's specific adjustments. Several of them said it would take months of discussion in the scientific community to understand the data corrections and come to a consensus about whether to adopt them broadly.
"What you have is a reasonable effort to deal with known biases, and obviously there is some uncertainty in how you do that," said Gavin Schmidt, who heads a NASA climate research unit in New York that deals with similar issues.
Some experts also pointed out that, depending on exactly how the calculation is done, a recent slowdown in global warming still appears in the NOAA temperature record, although it may be smaller than before.
"These trends are very sensitive to the time periods you use to compute them," said Gerald Meehl, a senior scientist at the National Center for Atmospheric Research in Boulder, Colo.
Scientists like Meehl never accepted the notion, put forward by some climate contrarians, that the slowdown disproved the idea that global warming poses long-term risks. But they said they believe it is real and demands an explanation.
A leading hypothesis to explain the slowdown is that natural fluctuations in the Pacific Ocean may have temporarily pulled some heat out of the atmosphere, producing a brief flattening in the long-term increase of surface temperatures.
In their paper published online Thursday by the journal Science, scientists at NOAA said that in coming months they would roll out new versions of their temperature record that incorporate numerous improvements.
The previous record showed that temperatures from 2000 to 2014 had warmed at about two-thirds the rate of temperatures from 1950 to 1999. In the new analysis, the rate of warming in those two time periods is basically identical.
NOAA said the improvements in its data set included the addition of a huge number of land measurements from around the world, as a result of improving international cooperation. But the disappearance of the slowdown comes largely from adjustments in ocean temperatures. | <urn:uuid:18dc1397-1e81-4836-9ff5-86609ab6794f> | 2.9375 | 678 | Truncated | Science & Tech. | 30.847994 | 95,569,421 |
very interesting topic
Water vapour is a very weak greenhouse gas, but it makes up for this weakness in numbers. It is by far the most abundant of the GHGs in volume. And although human contributions are insignificant, the behaviour of water vapour is already altering due to human-induced climate change. For example, increased ocean surface warming is causing more evaporation; this in turn causes more precipitation, more humidity, and more extreme weather events. You might otherwise wonder why climate change is blamed for extremely cold weather in certain northern latitudes, but keeping in mind increasing precipitation, the accentuation of seasonality is likely to be a concomitant weather phenomenon.
What's the Earth's GMST?
Very good explanation.
A very concise and interesting section.
Is methane a greenhouse gas?
These greenhouse gases contribute to global warming. Carbon dioxide is one of the major contributors.
What are the main sources of greenhouse gas emissions?
Writing in 1862, John Tyndall described the key to our modern understanding of why the Earth's surface is so much warmer than the effective radiating temperature.
"As a dam built across a river causes a local deepening of the stream, so our atmosphere, thrown as a barrier across the terrestrial rays, produces a local heightening of the temperature at the Earth's surface."
(Tyndall, 1862, quoted in Weart, 2004)
Tyndall's careful experimental work had established what others only suspected: expressed in modern scientific terms, certain atmospheric gases absorb infrared radiation with wavelengths in the range spanned by outgoing terrestrial radiation (about 4 to 100 μm). These are the greenhouse gases.
Tyndall identified water vapour and carbon dioxide (CO2), but the list of natural greenhouse gases (naturally present in the atmosphere long before human activities began to make their mark) also includes methane (CH4), nitrous oxide (N2O) and ozone (O3). The main mechanism by which these gases absorb infrared radiation is through the vibrations of their molecules.
The natural greenhouse gases absorb infrared wavelengths throughout most of the terrestrial range; there is only one region, between 8 and 13 μm, where absorption is weak.
Known as the ‘atmospheric window’, this allows some of the longwave radiation from the surface to escape directly to space, but most of it is intercepted by the atmosphere.
Now most of the longwave radiation from the surface is effectively ‘trapped’ and recycled by the atmosphere, being repeatedly absorbed and re-emitted in all directions by the greenhouse gases. This warms the atmosphere.
Some of the re-emitted radiation ultimately goes out to space, maintaining an overall radiation balance at the top of the atmosphere. This prevents the whole Earth-atmosphere system from heating up without limit.
The crucial difference is that much of the re-emitted radiation goes back down and is absorbed by the surface. It is this additional energy input - over and above the absorbed solar radiation - that keeps the Earth's GMST over 30 °C warmer than it otherwise would be. | <urn:uuid:49fc9956-7049-4276-b5db-695c45c06065> | 3.671875 | 655 | Knowledge Article | Science & Tech. | 31.824819 | 95,569,438 |
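The size of that warming can be checked with a short energy-balance calculation. The sketch below is illustrative only; it assumes round-number values for the solar constant (1361 W/m^2), the planetary albedo (0.3) and the GMST (288 K).

    # Effective radiating temperature from planetary energy balance,
    # compared with the observed global mean surface temperature (GMST).
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0        # assumed solar constant, W/m^2
    ALBEDO = 0.3      # assumed planetary albedo

    absorbed = S * (1 - ALBEDO) / 4        # sunlight averaged over the sphere
    t_eff = (absorbed / SIGMA) ** 0.25     # temperature that balances that input
    print(f"effective radiating temperature: {t_eff:.0f} K")        # ~255 K
    print(f"greenhouse warming vs 288 K GMST: {288 - t_eff:.0f} K") # ~33 K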
When pressures and temperatures become unbalanced in the Earth's atmosphere, it can produce some dramatic effects as the system tries to rebalance itself, or re-equilibrate (i.e. reach equilibrium). Many of these changes are things that, on a small scale, we are used to witnessing every day in the form of weather, such as wind, rain, snow, fog, etc.
Sometimes, however, very large imbalances can cause massive and very disruptive change. Hurricane Katrina in 2005 is one good example of a weather system that was caused by pressure and temperature re-balancing, and also one that had disastrous consequences for hundreds of thousands of people. Read up on the hurricane here: http://www.katrina.noaa.gov/ (there are several sub- pages on that site, so be sure to look at the links labeled "Aftermath Photos," "Maps," "Response," and "Impact on Region," especially).
Many sorts of natural disasters are caused by vast imbalances in nature; from earthquakes to tsunamis, they all pose risks to people. Start with the Center for Disease Control website http://www.bt.cdc.gov/disasters/ and investigate the natural disasters that might affect your area. You may also want to use NOAA's website on weather (http://www.noaa.gov/wx.html) to help get you started.
What kinds of imbalances might affect your region?
What can people in your area do to prepare for them or prevent them?
Give some specific examples of weather incidents that occurred in your area in the past and discuss how those were handled.
What major natural disasters of the past are most similar to one that might affect your area? Explain.
Compare your own area's similarity to New Orleans: what similarities and differences exist between the areas?
Explain how you think a massive rescue and recovery effort might play out differently in your area, versus how it played out, and continues to, in post-Katrina New Orleans.
My area is Michigan.
Hi, and thank you for using BrainMass. The solution below should get you started. Remember that there are a host of issues we could look into; the key is to choose one or two that you can really delve into as subjects to explore in a paper such as this. The more you know about them, the more you can discuss, either in writing or in class. If you have any questions about this solution, just let me know via the feedback section. Good luck with your studies.
Michigan Region: Environmental Concerns related to Imbalances
The state of Michigan is located in the Great Lakes region of the US; its name originates from an Ojibwa word meaning 'large lake'. It is the 8th most populous US state, with over 9.8 million residents. It has the longest freshwater coastline in the US and the world, being bounded by five lakes, and it also has over 64,000 inland lakes and ponds, so residents have easy access to freshwater resources. The state is made up of two peninsulas. The upper peninsula is an important tourism destination and a source of natural resources, while the lower peninsula hosts the state's industry and manufacturing activities. The lakes that border Michigan are Lake Huron, Lake Michigan, Lake Superior, Lake Erie and Lake St. Clair. The capital of the state is Lansing, while the largest city is America's motor city, Detroit. While the manufacturing lower peninsula is home to industrial activity, the upper peninsula is considered one of the US's great natural reserves. Its lush forests and mountains, rich marshlands teeming with migratory birds, wildlife roaming free in the forests, and reserves of freshwater fish make it an ideal destination for naturalists and a focus of environmental protection advocacy by conservationists, as much of the state, especially the upper peninsula, remains undeveloped.
Environmental Issues & Possible Imbalances
So, what are the environmental issues that face the region? Russ Harding (2004) believes there are seven: air quality/pollution; water quality/pollution/protection; land use and regionalization (misuse), plus protection of wetlands; ecosystem protection and the threat of exotic species (e.g. carp); pollution from the shipping industry; and, lastly, waste management and landfills. Harding says the state follows all legal emission rules, but argues that these rules are not enough. The heavy ...
The solution describes the environmental concerns related to the state of Michigan. In particular, it discusses environmental issues and concerns about imbalances, and the disasters and threats that can arise from a variety of present factors. Similarities to New Orleans are also discussed, along with scenarios for prevention and for rescue and recovery efforts. References are listed for further exploration of the topic. A Word version of the document is also attached.
These five temperature variables are among several ecological settings variables that collectively characterize the biophysical setting of each 30 m cell at a given point in time (McGarigal et al. 2017). The temperature regime strongly affects species composition, as well as rates of ecological processes such as nutrient cycling. We've chosen five variables to represent different aspects of temperature. All five variables have future versions that incorporate climate change via General Circulation Models (GCMs), as described in the technical document on climate (McGarigal et al. 2017).
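For readers unfamiliar with one of these variables, growing-season degree days are typically accumulated from daily mean temperatures as in the sketch below; the 10 °C base temperature is an illustrative assumption, not necessarily the project's actual choice.

    # Growing-season degree days from daily mean temperatures (degrees C).
    def growing_degree_days(daily_means_c, base_c=10.0):
        """Sum of daily-mean degrees above the base temperature."""
        return sum(max(t - base_c, 0.0) for t in daily_means_c)

    print(growing_degree_days([8.0, 12.5, 15.0, 11.0]))  # 2.5 + 5.0 + 1.0 = 8.5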
Environmental Sciences | Sustainability
McGarigal, Kevin; Compton, Brad; Plunkett, Ethan; DeLuca, Bill; and Grand, Joanna, "Designing Sustainable Landscapes: Growing season degree days, Heat index, and Minimum winter temperature settings variables" (2017). Data and Datasets. 13. | <urn:uuid:e1e89e3f-6cec-419f-86fc-ce79d162e29b> | 2.828125 | 181 | Academic Writing | Science & Tech. | 11.000744 | 95,569,479 |
NOAA's GOES-East satellite image showed a large circulation associated with Tropical Depression 8 or TD8 after it was officially designated a depression by the National Hurricane Center. The image was created by NASA's GOES Project at the NASA Goddard Space Flight Center in Greenbelt, Md.
NOAA's GOES-East satellite captured the birth of Tropical Depression Eight in the southwestern Gulf of Mexico at 2:31 p.m. EDT on Sept. 6.
Image Credit: NASA GOES Project
The center of TD8 formed right along the eastern coast of Mexico near Tampico and began making landfall almost immediately. At 2:30 p.m. EDT, the center of the depression was directly over Tampico. It had maximum sustained winds near 35 mph/55 kph and was moving to the west-southwest at 6 mph/9 kph.
The minimum central pressure was 1009 millibars. Despite the quick landfall, there were no watches or warnings in effect, although the depression was expected to drop between 3 and 5 inches of rain on the Mexican states of Veracruz and Tamaulipas. Some areas may receive isolated maximum amounts of up to 10 inches and could experience flash flooding and mudslides.
According to the National Hurricane Center, TD8 is going to be short-lived because it is moving over land. In fact, the NHC expects the depression to become a remnant low pressure area over the weekend of Sept. 7 and 8 as it drops more rainfall on its track to the west-southwest.

Text credit: Rob Gutro
Rob Gutro | EurekAlert!
Small button containing 241AmO2 from a smoke alarm
|Natural abundance|none (synthetic)|
|Isotope mass|241.056829144 u|
|Excess energy|52.936008 MeV (52,936.008 keV)|
|Binding energy (per nucleon)|7.543272 MeV (7,543.272 keV)|

|Decay mode|Decay energy (MeV)|
|CD (cluster decay)|93.923|
Americium-241 (241Am) is an isotope of americium. Like all isotopes of americium, it is radioactive. 241Am is the most common isotope of americium, and the most prevalent isotope of americium in nuclear waste. Americium-241 has a half-life of 432.2 years. It is commonly found in ionization-type smoke detectors and is a potential fuel for long-lifetime radioisotope thermoelectric generators (RTGs). Its common parent nuclides are β− from 241Pu, EC from 241Cm and α from 245Bk. 241Am is fissile; the critical mass of a bare sphere is 57.6-75.6 kilograms, corresponding to a sphere diameter of 19-21 centimeters. Americium-241 has a specific activity of 3.43 Ci/g (curies per gram), or 126.9 gigabecquerels (GBq) per gram. It is commonly found in the form of americium-241 dioxide (241AmO2). This isotope also has one meta state, 241mAm, with an excitation energy of 2.2 MeV and a half-life of 1.23 μs. Its presence in plutonium is determined by the original concentration of plutonium-241 and the sample age. Because of the low penetration of alpha radiation, americium-241 only poses a health risk when ingested or inhaled. Older samples of plutonium containing plutonium-241 contain a buildup of 241Am, so a chemical removal of americium-241 from reworked plutonium (e.g. during reworking of plutonium pits) may be required in some cases.
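The quoted specific activity follows directly from the half-life and isotope mass; the short check below uses only standard decay arithmetic.

    # Specific activity of Am-241 from its half-life and molar mass.
    import math

    T_HALF_S = 432.2 * 365.25 * 24 * 3600   # half-life in seconds
    M = 241.056829144                       # g/mol
    N_A = 6.02214e23                        # Avogadro's number

    lam = math.log(2) / T_HALF_S            # decay constant, 1/s
    a = lam * N_A / M                       # decays per second per gram
    print(f"{a / 1e9:.1f} GBq/g")           # ~126.9 GBq/g
    print(f"{a / 3.7e10:.2f} Ci/g")         # ~3.43 Ci/g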
Americium-241 has been produced in small quantities in nuclear reactors for decades, and many kilograms of 241Am have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about US$1,500 per gram of 241Am, remains almost unchanged owing to the very complex separation procedure.
Americium-241 is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, by neutron capture on 238U followed by two successive β-decays (238U → 239U → 239Np → 239Pu).
The capture of two further neutrons by 239Pu (two successive (n,γ) reactions), followed by a β-decay, results in 241Am: 239Pu → 240Pu → 241Pu → 241Am.
The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it converts to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 14 years, and the 241Am amount reaches a maximum after 70 years.
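The "14 years" and "70 years" figures follow from the two-member Bateman equation, sketched below; the 241Pu half-life of 14.35 years is an assumed standard value, since it is not stated above.

    # Ingrowth of Am-241 from an initially pure stock of Pu-241.
    import math

    T_PU, T_AM = 14.35, 432.2                # half-lives, years
    l1, l2 = math.log(2) / T_PU, math.log(2) / T_AM

    def am241_fraction(t):
        """Fraction of the initial Pu-241 atoms present as Am-241 at t years."""
        return l1 / (l1 - l2) * (math.exp(-l2 * t) - math.exp(-l1 * t))

    t_max = math.log(l1 / l2) / (l1 - l2)    # time of maximum ingrowth
    print(f"Am-241 peaks after {t_max:.0f} years")               # ~73 years
    print(f"fraction at the peak: {am241_fraction(t_max):.2f}")  # ~0.89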
The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm.
The α-decay energies are 5.486 MeV 85% of the time (this is the value usually quoted as the standard α-decay energy), 5.443 MeV 13% of the time, and 5.388 MeV the remaining 2%. The γ-ray energy is 59.5409 keV for the most part, with small contributions at other energies such as 13.9 keV, 17.8 keV and 26.4 keV.
The second most common type of decay for americium-241 is cluster decay, with a branching ratio of less than 7.4×10−16.
The rarest type of decay that americium-241 undergoes is spontaneous fission, with a branching ratio of 4×10−12, occurring about 1.2 times per second in a gram of 241Am.
Ionization-type smoke detector
Americium-241 is the only synthetic isotope to have found its way into the household, where the most common type of smoke detector (the ionization type) uses 241AmO2 (americium-241 dioxide) as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation. With its half-life of 432.2 years, the americium in a smoke detector decays slowly, and includes about 3% neptunium after 19 years and about 5% after 32 years. The amount of americium in a typical new smoke detector is 0.29 micrograms (about one-third the weight of a grain of sand) with an activity of 1 microcurie (1.0 μCi/37 kBq). Some old industrial smoke detectors (notably from the Pyrotronics Corporation) can contain up to 80 μCi. The amount of 241Am declines slowly as it decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). The radiated alpha particles pass through an ionization chamber, an air-filled space between two electrodes, which allows a small, constant electric current to pass between the capacitor plates as the radiation ionizes the air in between. Any smoke that enters the chamber absorbs some of the alpha particles, reducing the ionization and therefore causing a drop in the current. The alarm's circuitry detects this drop and, as a result, triggers the piezoelectric buzzer to sound. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles too small to produce significant light scattering; however, it is more prone to false alarms.
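In pseudo-electronics terms, the detection logic amounts to a comparison against a clean-air baseline, as in this toy sketch; the 20% trip threshold is an arbitrary illustrative value, not a real detector specification.

    # Toy version of the current-drop detection described above.
    def smoke_alarm(current_pa, baseline_pa, drop_fraction=0.20):
        """True when the chamber current has fallen well below its baseline."""
        return current_pa < baseline_pa * (1.0 - drop_fraction)

    print(smoke_alarm(95.0, 100.0))  # False: only a 5% drop
    print(smoke_alarm(70.0, 100.0))  # True: smoke is absorbing alpha particles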
The process for making the americium used in the buttons on ionization-type smoke detectors begins with americium dioxide. The AmO2 is thoroughly mixed with gold, shaped into a briquette, and fused by pressure and heat at over 1470°F (800°C). A backing of silver and a front covering of gold (or an alloy of gold or palladium) are applied to the briquette and sealed by hot forging. The briquette is then processed through several stages of cold rolling to achieve the desired thickness and levels of radiation emission. The final thickness is about 0.008 inches (0.2 mm), with the gold cover representing about one percent of the thickness. The resulting foil strip, which is about 0.8 inches (20 mm) wide, is cut into sections 39 inches (1 meter) long. The sources are punched out of the foil strip. Each disc, about 0.2 inches (5 mm) in diameter, is mounted in a metal holder, usually made of aluminium. The holder is the housing, which is the majority of what is seen on the button. The thin rim on the holder is rolled over to completely seal the cut edge around the disc.
As 241Am has a half-life of the same order of magnitude as 238Pu (432.2 years vs. 87 years), it has been proposed as an active isotope for radioisotope thermoelectric generators, for use in spacecraft. Although americium-241 produces less heat and electricity than plutonium-238 (the power yield is 114.7 mW/g for 241Am vs. 390 mW/g for 238Pu), and although its radiation poses a bigger threat to humans owing to gamma and neutron emission, it has advantages for long-duration missions thanks to its significantly longer half-life, and the European Space Agency is working on RTGs based on americium-241 for its space probes, as a result of the global shortage of plutonium-238 and easy access to americium-241 in Europe from nuclear waste reprocessing.
Its shielding requirements in an RTG are the second lowest of all possible isotopes: only 238Pu requires less. An advantage over 238Pu is that it is produced as nuclear waste and is nearly isotopically pure. Prototype designs of 241Am RTGs expect 2-2.2 We/kg for 5-50 We RTG designs, putting 241Am RTGs at parity with 238Pu RTGs within that power range.
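The 114.7 mW/g figure quoted above is consistent with the specific activity computed earlier combined with the α-decay Q-value; Q ≈ 5.64 MeV is an assumed standard value here, not stated in the text.

    # Decay heat per gram of Am-241 from activity and Q-value.
    A_BQ_PER_G = 1.27e11      # specific activity, decays per second per gram
    Q_MEV = 5.64              # assumed total energy per decay, MeV
    MEV_TO_J = 1.602e-13

    heat = A_BQ_PER_G * Q_MEV * MEV_TO_J
    print(f"{heat * 1000:.0f} mW/g")   # ~115 mW/g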
In americium-beryllium (AmBe) neutron sources, americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction, which converts 9Be into 12C and releases a free neutron.
The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations.
Production of other elements
Americium-241 is sometimes used as a starting material for the production of other transuranic elements and transactinides – for example, neutron bombardment of 241Am yields 242Am.
From there, 82.7% of 242Am decays (by β−) to 242Cm and 17.3% (by electron capture) to 242Pu.
In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm.
Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 253Es (einsteinium) or 263Db (dubnium), respectively. Furthermore, the element berkelium (243Bk isotope) had been first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron that had been used for many previous experiments. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. Besides, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O.
Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial applications. The 59.5409 keV gamma-ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, this isotope has been employed to gauge glass thickness in the production of flat glass. Americium-241 is also suitable for calibrating gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and a negligible Compton continuum (at least three orders of magnitude lower in intensity).
Americium-241 gamma rays have been used to provide passive diagnosis of thyroid function; this medical application is now obsolete. Americium-241's gamma rays can produce reasonable-quality radiographs with a 10-minute exposure time, but 241Am radiographs have only been taken experimentally, because the long exposure time increases the effective dose to living tissue. Reducing exposure duration reduces the chance of ionization events damaging cells and DNA, and is a critical component of the "time, distance, shielding" maxim used in radiation protection.
Americium-241 shares the general hazards of all americium isotopes: it is both radioactive and chemically toxic. Although α-particles can be stopped by a sheet of paper, ingestion of α-emitters raises serious health concerns, and americium additionally exhibits heavy-metal toxicity. As little as 0.03 μCi (1,110 Bq) is the maximum permissible body burden for 241Am.
Americium-241 is an α-emitter with a weak γ-ray byproduct, and handling it safely requires proper precautions. Its specific gamma dose constant is 3.14 × 10−1 mR/hr/mCi (8.48 × 10−5 mSv/hr/MBq) at 1 meter.
If consumed, americium-241 is excreted within a few days and only 0.05% is absorbed in the blood. From there, roughly 45% of it goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity.
Americium-241 often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In the U.S., the "Radioactive Boy Scout" David Hahn was able to concentrate americium-241 from smoke detectors after managing to buy a hundred of them at remainder prices and also stealing a few. There have been a few cases of exposure to americium-241, the worst case being that of Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75, not as a result of exposure, but of a heart disease which he had before the accident.
simplec 1.0d1 - demonstration of compiler construction with yacc & lex
Copyright (C) 2007 Andrew Tomazos <firstname.lastname@example.org>

This software is available under the licensing terms set forth in the attached LICENSE file.

simplec demonstrates the use of yacc and lex to construct a basic compiler using C and C++. The grammar of the simplec language is as follows:

    CompilationUnit: (Statement ';')+   # a compilation unit consists of
                                        # semi-colon separated statements
    Statement: LocalVarDecl             # declare a variable: int a;
               Assignment               # assign a variable: a = b;
               ScanStmt                 # read input into variable: scan a;
               PrintStmt                # print out a variable: print a;
    LocalVarDecl: "int" Identifier
    PrintStmt: "print" Identifier
    ScanStmt: "scan" Identifier
    Assignment: Identifier "=" Expression
    Expression: Identifier
                Number
                "(" Expression ")"
                Expression Operator Expression
                "-" Expression          # negation
    Operator: "+" | "-" | "*" | "/"

To create the compiler, call make:

    $ cd simplec-1.0
    $ make

This will compile everything into an executable called "simplec" and then perform a test. Ideally you should see "test passed".

To use the resulting "simplec", write a program with the above grammar. (See "test_program" for an example.) Save it to a file "my_program" and then execute it as follows:

    $ simplec my_program

Standard input is used for scan statements, standard output is used for print statements. They can be redirected.

To execute the included test manually, do the following:

    $ simplec test_program < test_input > test_output
    $ diff test_output test_expected

Have fun, Andrew.

-- Andrew Tomazos <email@example.com> <http://www.tomazos.com>
Please see attached.
Hi. Can someone please show me how to do the following problem? Thanks.
Laser light (λ = 632.8 nm) strikes a pair of slits at normal incidence, forming a double-slit interference pattern on a screen located 1.20 m from the slits. Figure 28-34 shows the interference pattern observed on the screen. What is the slit separation?
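A possible approach is shown below. Figure 28-34 is not reproduced here, so the fringe spacing used is purely an assumed placeholder; read the actual spacing off the figure and substitute it.

    # Double-slit: for small angles the m-th bright fringe sits at
    # y_m = m * lam * L / d, so d = lam * L / delta_y.
    lam = 632.8e-9     # wavelength, m
    L = 1.20           # slit-to-screen distance, m
    delta_y = 15e-3    # ASSUMED fringe spacing from the figure, m

    d = lam * L / delta_y
    print(f"slit separation d = {d * 1e6:.1f} um")  # ~50.6 um for this spacing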
Study may lead to more efficient water-desalination systems, fundamental understanding of fluid flow
Consider the nearest water surface: a half-full glass on your desk, a puddle outside your window, or a lake across town. All of these surfaces represent liquid-vapor interfaces, where liquid meets air. Molecules of water vapor constantly collide with these liquid surfaces: Some make it through the surface and condense, while others simply bounce off.
The probability that a vapor molecule will bounce, or reflect, off a liquid surface is a fundamental property of water, much like its boiling point. And yet, in the last century, there has been little agreement on the likelihood that a water molecule will bounce off the liquid surface.
"When a water vapor molecule hits a surface, does it immediately go into the liquid? Or does it come off and hit again and again, then eventually go in?" says Rohit Karnik, an associate professor of mechanical engineering at MIT. "There's a lot of controversy, and there's no easy way to measure this basic property."
Knowing this bouncing probability would give scientists an essential understanding of a variety of applications that involve water flow: the movement of water through soil, the formation of clouds and fog, and the efficiency of water-filtration devices.
This last application spurred Karnik and his colleagues — Jongho Lee, an MIT graduate student in mechanical engineering, and Tahar Laoui, a professor at the King Fahd University of Petroleum and Minerals (KFUPM) in Saudi Arabia — to study water's probability of bouncing. The group is developing membranes for water desalination; this technology's success depends, in part, on the ability of water vapor to flow through the membrane and condense on the other side as purified water.
By observing water transport through membranes with pores of various sizes, the group has measured a water molecule's probability of condensing or bouncing off a liquid surface at the nanoscale. The results, published in Nature Nanotechnology, could help in designing more efficient desalination membranes, and may also expand scientists' understanding of the flow of water at the nanoscale.
"Wherever you have a liquid-vapor surface, there is going to be evaporation and condensation," Karnik says. "So this probability is pretty universal, as it defines what water molecules do at all such surfaces."
Getting in the way of flow
One of the simplest ways to remove salt from water is by boiling and evaporating the water — separating it from salts, then condensing it as purified water. But this method is energy-intensive, requiring a great deal of heat.
Karnik's group developed a desalination membrane that mimics the boiling process, but without the need for heat. The razor-thin membrane contains nanoscale pores that, seen from the side, resemble tiny tubes. Half of each tube is hydrophilic, or water-attracting, while the other half is hydrophobic, or water-repellant.
As water flows from the hydrophilic to the hydrophobic side, it turns from liquid to vapor at the liquid-vapor interface, simulating water's transition during the boiling process. Vapor molecules that travel to the liquid solution on the other end of the nanopore can either condense into it or bounce off of it. The membrane allows higher water-flow rates if more molecules condense, rather than bounce.
Designing an efficient desalination membrane requires an understanding of what might keep water from flowing through it. In the case of the researchers' membrane, they found that resistance to water flow came from two factors: the length of the nanopores in the membrane and the probability that a molecule would bounce, rather than condense.
In experiments with membranes whose nanopores varied in length, the team observed that greater pore length was the main factor impeding water flow — that is, the greater the distance a molecule has to travel, the less likely it is to traverse the membrane. As pores get shorter, bringing the two liquid solutions closer together, this effect subsides, and water molecules stand a better chance of getting through.
But at a certain length, the researchers found that resistance to water flow comes primarily from a molecule's probability of bouncing. In other words, in very short pores, the flow of water is constrained by the chance of water molecules bouncing off the liquid surface, rather than their traveling across the nanopores. When the researchers quantified this effect, they found that only 20 to 30 percent of water vapor molecules hitting the liquid surface actually condense, with the majority bouncing away.
A no-bounce design
They also found that a molecule's bouncing probability depends on temperature: 64 percent of molecules will bounce at 90 degrees Fahrenheit, while 82 percent of molecules will bounce at 140 degrees. The group charted water's probability of bouncing in relation to temperature, producing a graph that Karnik says researchers can refer to in computing nanoscale flows in many systems.
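As a rough illustration of that temperature dependence, one can interpolate between the two reported points; the linearity here is an assumption for illustration, not something the article claims.

    # Linear interpolation between the two reported (temperature, bounce) points.
    def bounce_probability(temp_f):
        t0, p0 = 90.0, 0.64
        t1, p1 = 140.0, 0.82
        return p0 + (p1 - p0) * (temp_f - t0) / (t1 - t0)

    print(f"{bounce_probability(115):.2f}")  # midpoint estimate: ~0.73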
"This probability tells us how different pore structures will perform in terms of flux," Karnik says. "How short do we have to make the pore and what flow rates will we get? This parameter directly impacts the design considerations of our filtration membrane."
Lee says that knowing the bouncing probability of water may also help control moisture levels in fuel cells.
"One of the problems with proton exchange membrane fuel cells is, after hydrogen and oxygen react, water is generated. But if you have poor control of the flow of water, you'll flood the fuel cell itself," Lee says. "That kind of fuel cell involves nanoscale membranes and structures. If you understand the correct behavior of water condensation or evaporation at the nanoscale, you can control the humidity of the fuel cell and maintain good performance all the time."
The research was funded by the Center for Clean Water and Clean Energy at MIT and KFUPM.
Abby Abazorius | EurekAlert!
Scientists say sea's surface helps clear up smog while we sleep
- Effect seen in cities close to the coast
- Ocean removes about 15 percent of these chemicals overnight
The surface of the sea takes up nitrogen oxides that build up in polluted air at night, new measurements on the coast of southern California have shown.
The ocean removes about 15 percent of these chemicals overnight along the coast in cities such as Los Angeles.
They say the sea's importance may have been overlooked in the battle against smog.
The skyline of downtown Los Angeles through a layer of smog: Researchers say the surface of the sea takes up nitrogen oxides that build up in polluted air at night.
The surface of the sea takes up nitrogen oxides that build up in polluted air at night, new measurements on the coast of southern California with this instrument have shown
Nitrogen oxides, formed by the burning of fossil fuels, generate photochemical smog.
The team say conditions were just right one night in February when onshore winds blew a polluted air mass from the Los Angeles Basin along the coast and toward the sea, allowing the researchers to track what happened to the nitrogen oxide gases as they swept across the surface of the sea.
'One often neglected path is reaction at the surface of the sea,' said Tim Bertram, an assistant professor of chemistry at the University of California, San Diego, who led the research.
'The sea has a salty, rich, organic surface with the potential for a variety of chemical reactions.'
To track the cycle of nitrogen in the atmosphere, they studied dinitrogen pentoxide, a molecule that results from the oxidation of nitrogen oxides. It can react with chloride from sea salt, for example, to form nitryl chloride.
Houston, we have a problem: the city has suffered major smog issues, and researchers now say they have a better understanding of how the sea affects smog
When sunlight hits nitryl chloride the next morning, it regenerates nitrogen oxides and frees a chlorine radical that attacks other molecules in reactions that can lead to the formation of ozone.
The team of atmospheric chemists reported its findings in the early online edition of the Proceedings of the National Academy of Sciences the week of March 3.
Atmospheric chemists would like to account for the fates of these molecules in a kind of budget that identifies their sources and sinks – the ways in which they are removed from the air.
Michelle Kim, a graduate student at UC San Diego’s Scripps Institution of Oceanography working with Bertram, deployed instruments at the end of the institution’s pier in La Jolla, Calif., to measure the flux of these molecules.
On the night of February 20, 2013, the usual offshore breeze reversed to provide the pure ocean fetch needed to measure the exchange between air and sea.
The airmass she measured also contained emissions from Los Angeles that had blown out to sea.
That gave Kim an opportunity to measure the fates of dinitrogen pentoxide and its product, nitryl chloride, over the course of a night.
By simultaneously measuring concentrations of both molecules and turbulence in the air above the sea surface, Kim saw a net movement of dinitrogen pentoxide into the ocean, but was surprised to see no net exit of nitryl chloride into the air.
'We knew from previous work that nitrogen oxides are lost to various surfaces – sea spray and other aerosols, even snowpack,' she said.
'This study shows – for the first time - that the ocean is a terminal sink for nocturnal nitrogen oxides, and not a source for nitryl chloride under these sampling conditions.'
Despite the extensive network of moisture-sensitive tree-ring chronologies in western North America, relatively few are long enough to document climatic variability before and during the Medieval Climate Anomaly (MCA) ca. AD 800-1300. We developed a 2300-yr tree-ring chronology extending to 323 BC utilizing live and remnant Douglas-fir (Pseudotsuga menziesii) from the Tavaputs Plateau in northeastern Utah. A resulting regression model accounts for 70% of the variance of precipitation for the AD 1918-2005 calibration period. Extreme wet and dry periods without modern analogues were identified in the reconstruction. The MCA is marked by several prolonged droughts, especially prominent in the mid AD 1100s and late 1200s, and a lack of wet or dry single-year extremes. The frequency of extended droughts is not markedly different, however, than before or after the MCA. A drought in the early AD 500s surpasses in magnitude any other drought during the last 1800 yr. A set of four long high-resolution records suggests this drought decreased in severity toward the south in the western United States. The spatial pattern is consistent with the western dipole of moisture anomaly driven by El Niño and is also similar to the spatial footprint of the AD 1930s "Dust Bowl" drought. © 2009 Elsevier Inc. on behalf of University of Washington.
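The calibration step described here is, in essence, a least-squares regression of instrumental precipitation on the ring-width chronology over the AD 1918-2005 period; a generic sketch with placeholder numbers (not the Tavaputs Plateau data) is given below.

    # Generic tree-ring calibration: regress precipitation on a ring-width index.
    import numpy as np

    ring_index = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.95])  # placeholder
    precip_mm = np.array([210, 300, 240, 345, 185, 265, 320, 250])    # placeholder

    slope, intercept = np.polyfit(ring_index, precip_mm, 1)
    pred = slope * ring_index + intercept
    ss_res = np.sum((precip_mm - pred) ** 2)
    ss_tot = np.sum((precip_mm - precip_mm.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    print(f"precip = {slope:.0f} * index + {intercept:.0f}  (R^2 = {r2:.2f})")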
Anatoly I. Papusha is a world-famous scientist, a Laureate of the State Prize of the USSR and a Doctor of Technical Sciences. In the 1980s, the world's most powerful rocket engine was created for the Buran space program, and to test the engine, A.I. Papusha was part of the group that created the largest combustion chamber in the world.
The new rocket engine released a jet of gases containing about a ton of carbon monoxide per second. In 180 seconds of operation, the engine could produce as much CO as all the vehicles in Moscow produce in a day. However, A.I. Papusha came up with an innovation that reduced the CO emission from 1,000 kg to just 1 kg per second.
For his achievement, he was given the State Prize of the USSR. The invention he created has various applications. The most important is in the area of processing oil residues and producing liquid fuels such as gasoline, kerosene, and diesel. The main area of the development of this project has already been completed. A.I. Papusha has already patented the technology. If it was adopted globally, it might help to save the ecology of the earth.
Challenges Papusha Seeks To Solve
The world is now awake to global warming, and it is widely understood that the planet's ecology is in a poor state. Much remains to be done to ensure a livable world for coming generations.
Every nation's refineries face the problem of storing and utilizing residue. Current residue rates are about 20%, which means that for every million barrels of oil processed, roughly 200,000 barrels of residue are created.
Most refineries have surrounding storage containing a huge mass of black waste that pollutes the earth, and this waste grows every day. Existing solutions for processing it are too expensive and have little impact, and their payback periods are drawn out.
The Papusha Blockchain Rocket & Space Fuel Oil Technology Solution
A.I. Papusha intends to introduce new technology that could change the world. The project uses transonic combustion to eliminate toxic residues, and through modifications of Papusha's inventions it will be possible to extract valuable products like kerosene, diesel, and synthetic oil.
The PRT unit that has been developed is compact and reliable, with high efficiency and a short payback period; it can convert up to 60% of the residue into liquid fuel. There is simply no competing technology in the world, and major oil refineries have already expressed strong interest. A single PRT unit costs about $950,000 USD, many times less than other, less efficient units cost, and the payback period is estimated at about 7 months.
Exploring Linkages between Abiotic Oceanographic Processes and a Top-trophic Predator in an Antarctic Ecosystem
Climatic variation affects the physical and biological components of ecosystems, and global-climate models predict enhanced sensitivity in polar regions, raising concern for Antarctic animal populations that may show direct responses to changes in sea-ice distribution and extent, or indirect responses to changes in prey distribution and abundance. Here, we show that over a 30-year period in the Ross Sea, average weaning masses of Weddell seals, Leptonychotes weddellii, varied strongly among years and were correlated to large-scale climatic and oceanographic variations. Foraging success of pregnant seals (reflected by weaning mass the following pupping season) increased during summers characterized by reduced sea-ice cover and positive phases of the southern oscillation. These results demonstrate a correlation between environmental variation and an important life history characteristic (weaning mass) of an Antarctic marine mammal. Understanding the mechanisms that link climatic variation and animal life history characteristics will contribute to understanding both population dynamics and global climatic processes. For the world’s most southerly distributed mammal species, the projected trend of increasing global climate change raises concern because increasing sea-ice trends in the Ross Sea sector of Antarctica will likely reduce populations due to reduced access to prey as expressed through declines in body condition and reproductive performance.
Keywords: Antarctica, El Niño southern oscillation, sea-ice extent, climatic variation, Weddell seal, Leptonychotes weddellii
Funding for this project was provided by a National Science Foundation grant, OPP−0225110. The data were collected under various National Science Foundation grants to R. A. Garrott and J. J. Rotella at Montana State University, D. B. Siniff at the University of Minnesota, M.A. Castellini at the University of Alaska, Fairbanks, and J. W. Testa at the University of Alaska, Fairbanks. We thank all the personnel who participated in the long-term Weddell seal demography study since 1969. We also thank Doug DeMaster and two anonymous reviewers for their insightful comments on an earlier version of this paper.
Size-exclusion chromatography is a chromatographic method that separates the molecules in a solution according to their size; in some cases it is also used to estimate molecular weight.
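The molecular-weight estimate mentioned above usually comes from a calibration curve: on a given column, the logarithm of molecular weight for known standards falls roughly on a straight line against elution volume, so an unknown's elution volume can be read back through that fitted line. The sketch below shows this interpolation with invented standards and volumes; it illustrates the idea and is not data for any real column.

```java
/** Sketch of a size-exclusion calibration: log10(MW) of standards is roughly linear
 *  in elution volume, so fit a line and invert it for an unknown sample.
 *  The standards and volumes below are invented illustrative values. */
public class SecCalibration {
    public static void main(String[] args) {
        double[] elutionMl = {8.0, 10.0, 12.0, 14.0};             // elution volumes of standards
        double[] mwDalton  = {670_000, 158_000, 44_000, 13_700};  // their known molecular weights

        // Least-squares fit of log10(MW) against elution volume
        int n = elutionMl.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) {
            mx += elutionMl[i];
            my += Math.log10(mwDalton[i]);
        }
        mx /= n;
        my /= n;
        double sxy = 0, sxx = 0;
        for (int i = 0; i < n; i++) {
            double dx = elutionMl[i] - mx;
            sxy += dx * (Math.log10(mwDalton[i]) - my);
            sxx += dx * dx;
        }
        double slope = sxy / sxx, intercept = my - slope * mx;

        // Estimate the molecular weight of an unknown that eluted at 11.2 mL
        double unknownMl = 11.2;
        double estMw = Math.pow(10, intercept + slope * unknownMl);
        System.out.printf("estimated MW at %.1f mL: about %.0f Da%n", unknownMl, estMw);
    }
}
```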
An innovation is a new idea, device, or process: the application of a better solution that meets new requirements, unarticulated needs, or existing market needs. It is accomplished through more effective products, processes, services, technologies, or ideas that are readily available to markets, governments, and society. An innovation is something original and novel that breaks into a market or society.
A new positive climate feedback has been identified. The warming climate is causing wetland plants, like the common cattail, to thrive while at the same time forest cover is being lost, and the result is increased methane production in northern freshwater lakes.
A study recently published in Nature Communications (Emilson et al., "Climate-driven shifts in sediment chemistry enhance methane production in northern lakes") looked at the amount of methane released during the decomposition of three types of plant debris commonly found in the sediment of freshwater lakes in northern North America: pine needles, leaves from deciduous trees, and cattails.
The researchers found that organic matter from decaying pine needles and deciduous leaves inhibited methane production, whereas decomposing cattails produced large amounts of methane (400 times more than the pine debris sediment and almost 2,800 times that of the deciduous leaf debris sediment!). Methane is an important greenhouse gas – 25 times more potent than carbon dioxide – and freshwater lakes are big methane producers (they contribute 16 percent of Earth's natural methane emissions). With models predicting that northern lakes could double their methane emissions in the next 50 years as cattail populations increase, this could mean a significant, unexpected pulse of methane into our already greenhouse-gas-heavy atmosphere.
I was thinking about this study while participating in my local marsh cleanup this weekend. We were trudging around the edge of the marsh picking up debris washed in by the flooding this past winter. This is a freshwater marsh that borders a saltwater marsh. It was a beautiful day. Red-winged blackbirds and marsh wrens were singing from the cover of dried-out cattail spikes. New, bright green baby cattails were poking up through last year's dead grasses. I was telling friends about the cattail study when I noticed an oily sheen on the water around the cattail stems. I was so excited! This wasn't pollution – there hadn't been a nearby oil spill – this was natural oil formed by the bacteria responsible for decomposing plant debris, just like in the study!
Any time decomposition is happening, you can be certain it is being done by either bacteria or fungi. Decomposition is the process of breaking down big molecules into their parts. In cattail marshes, there are bacteria that live up near the surface where there is plenty of oxygen (aerobic bacteria) and also bacteria that live down in the muck where there isn't any oxygen (anaerobic bacteria). Aerobic bacteria release carbon dioxide as they chew through those big molecules, and anaerobic bacteria release methane. Methane, a gas, bubbles out of these swamps into the atmosphere. However, not all the methane escapes; some gets converted into heavier hydrocarbons (methane is the simplest hydrocarbon, with the chemical formula CH4) that don't evaporate but instead float to the surface and produce an oily sheen that looks just like a fossil-fuel gasoline or oil spill, because gas and oil are also hydrocarbons (they come from very old dead things). This oily sheen can also form simply from the sheer amount of dead plant material lying around decomposing into its various parts – there is a lot of natural oil in a plant cell; it is an important component of cell membranes and is used as an energy source.
While I don't love the news about yet another positive feedback on climate change, I do love this reminder of the importance of bacteria to virtually every living system on Earth. I love being able to look at a marsh and see evidence of the presence of all those bacteria with something as simple as that oily sheen.
Susan Pike, a researcher and an environmental sciences and biology teacher at St. Thomas Aquinas High School, welcomes your ideas for future column topics. She may be reached at firstname.lastname@example.org. Read more of her Nature News columns online. | <urn:uuid:cec2c80a-fa87-4cda-94b9-0656c67c5823> | 3.8125 | 808 | Personal Blog | Science & Tech. | 37.254462 | 95,569,672 |
Why do we need interfaces in Java?
I was also thinking about how interfaces are used. I hope this will help others:
An interface is a contract (or a protocol, or a common understanding) of what a class can do. When a class implements an interface, it promises to provide implementations for all the abstract methods declared in that interface. An interface defines a set of common behaviors; the classes that implement it agree to those behaviors and provide their own implementations. This allows you to program to the interface instead of the actual implementation.
One of the main uses of an interface is to provide a communication contract between two objects. If you know a class implements an interface, then you know that class contains concrete implementations of the methods declared in that interface, and you are guaranteed to be able to invoke those methods safely. In other words, two objects can communicate based on the contract defined in the interface, instead of their specific implementations. (A minimal code sketch of this "contract" idea appears after the answers below.)
Secondly, Java does not support multiple inheritance (whereas C++ does). Multiple inheritance permits you to derive a subclass from more than one direct superclass, which poses a problem if two direct superclasses have conflicting implementations: which one should the subclass follow? However, multiple inheritance does have its place, and Java accommodates it by permitting you to implement more than one interface (though you can only extend a single superclass). Since interfaces contain only abstract methods without actual implementations, no conflict can arise among multiple interfaces. (An interface can hold constants, but this is not recommended; if a class implements two interfaces with conflicting constants, the compiler will flag a compilation error. A sketch of implementing two interfaces also follows the answers.)
Guest (Jun 29th, 2006): Java doesn't support multiple inheritance; for that we can use interfaces. An interface is like a gateway for communication between two classes. We don't always declare methods in an interface; we can also declare constants there, so an object shared between two classes can be exposed as an interface constant. Creating separate interfaces is one of the key practices of software engineering. Suppose class A has many methods that depend on several other classes' objects: one method creates a connection that depends on a class B object, and another applies business rules that depend on a class C object. In this situation, you can use two separate interfaces for a cleaner design.
rabbi (Jun 29th, 2006): Sometimes you want to hide your implementation details. You can use an interface as the public representation of a class; since interface methods are abstract, the implementation is left to the client class.
Pranita (Jul 17th, 2006): An interface acts as a communicator between classes. Since an interface contains only empty method bodies, different implementations can be provided for the same method according to its use in different classes. That is why interfaces are needed.
sujatham (Jul 29th, 2007): Without interfaces, multiple inheritance is not possible in Java.
vkg_16my (Mar 1st, 2008): An interface is a contract between two parties, where one party implements the contract and the other party uses the contract to invoke the implementation. The contract remains the same, but the implementation can vary freely. Examples: the collection classes, RMI.
ankit.mca4 (Mar 28th, 2009): A class can implement multiple interfaces. If class B wants two methods of class A, and class C wants three methods of class A, then class A can implement two interfaces, one for B and another for C.
kajal (Sep 28th, 2011): If we want the concept of multiple inheritance in Java, we use interfaces.
dust (Oct 25th, 2011): An interface basically provides a set of method signatures, so we implement that particular interface instead of instantiating it directly.
sampra (Mar 6th, 2012): Java doesn't support multiple inheritance; for that we can use interfaces.
punyamca (Feb 6th, 2015): For late binding: to design a method so that part of its functionality must be supplied by the end user (the implementing class).
Bhavesh (Feb 28th, 2015): To get the hierarchy of types that I might need later.
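As promised above, here is a minimal sketch of the "contract" idea: one interface, two independent implementations, and calling code that depends only on the contract. The names (Shape, Circle, Square) are invented for illustration.

```java
// One contract, two implementations: callers depend only on the interface.
interface Shape {
    double area();  // every implementing class promises to provide this
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class ContractDemo {
    // Written against the interface, not any concrete class: any Shape is safe to pass in.
    static void printArea(Shape s) {
        System.out.println("area = " + s.area());
    }

    public static void main(String[] args) {
        printArea(new Circle(2.0));  // safe: Circle honors the contract
        printArea(new Square(3.0));  // safe: so does Square
    }
}
```

And here is the multiple-inheritance-of-type case: a single class implementing two interfaces at once, which is legal precisely because interfaces carry no conflicting implementations. Again, the names are invented.

```java
// A class may extend only one superclass but implement any number of interfaces.
interface Swimmer { void swim(); }

interface Flyer { void fly(); }

class Duck implements Swimmer, Flyer {
    public void swim() { System.out.println("duck swims"); }
    public void fly()  { System.out.println("duck flies"); }
}

public class MultiDemo {
    public static void main(String[] args) {
        Duck d = new Duck();
        Swimmer s = d;  // the same object can be viewed through either contract
        Flyer f = d;
        s.swim();
        f.fly();
    }
}
```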
The team has pioneered work on the use of nanopores—tiny chambers that mimic the ion channels in the membranes of cells—for the detection and identification of a wide range of molecules, including DNA. Ion channels are the gateways by which the cell admits and expels materials like proteins, ions and nucleic acids. The typical ion channel is so small that only one molecule can fit inside at a time.
By tethering gold nanoparticles (large spheres in top image) to the nanopore (violet), the temperature around the nanopore can be changed quickly and precisely with laser light, allowing scientists to distinguish between similar molecules in the pore that behave differently under varied temperature conditions.
Previously, team members inserted a nanopore into an artificial cell membrane, which they placed between two electrodes. With this setup, they could drive individual molecules into the nanopore and trap them there for a few milliseconds, enough to explore some of their physical characteristics.
"A single molecule creates a marked change in current that flows through the pore, which allows us to measure the molecule's mass and electrical charge with high accuracy," says Joseph Reiner, a physicist at VCU who previously worked at NIST. "This enables discrimination between different molecules at high resolution. But for real-world medical work, doctors and clinicians will need even more advanced measurement capability."
A goal of the team's work is to differentiate among not just several types of molecules, but among the many thousands of different proteins and other biomarkers in our bloodstream. For example, changes in protein levels can indicate the onset of disease, but with so many similar molecules in the mix, it is important not to mistake one for another. So the team expanded their measurement capability by attaching gold nanoparticles to engineered nanopores, "which provides another means to discriminate between various molecular species via temperature control," Reiner says.
The team attached gold nanoparticles to the nanopore via tethers made from complementary DNA strands. Gold's ability to absorb light and quickly convert its energy to heat that conducts into the adjacent solution allows the team to alter the temperature of the nanopore with a laser at will, dynamically changing the way individual molecules interact with it.
"Historically, sudden temperature changes were used to determine the rates of chemical reactions that were previously inaccessible to measurement," says NIST biophysicist John Kasianowicz. "The ability to rapidly change temperatures in volumes commensurate with the size of single molecules will permit the separation of subtly different species. This will not only aid the detection and identification of biomarkers, it will also help develop a deeper understanding of thermodynamic and kinetic processes in single molecules."
The team is researching ways to improve semiconductor-based nanopores, which could further expand this new measurement capability.
*J.E. Reiner, J.W.F. Robertson, D.L. Burden, L.K. Burden, A. Balijepalli and J.J. Kasianowicz. Temperature sculpting in yoctoliter volumes. Journal of the American Chemical Society, DOI: 10.1021/ja309892e. Jan. 24, 2013.
Chad Boutin | EurekAlert!