Are aphid parasitoids locally adapted to the prevalence of defensive symbionts in their hosts? (Journal article, BMC Evolutionary Biology; licensed under Creative Commons Attribution 4.0 International)

Background: Insect parasitoids are under strong selection to overcome their hosts’ defences. In aphids, resistance to parasitoids is largely determined by the presence or absence of protective endosymbionts such as Hamiltonella defensa. Parasitoids may therefore become locally adapted to the prevalence of this endosymbiont in their host populations. To address this, we collected isofemale lines of the aphid parasitoid Lysiphlebus fabarum from 17 sites in Switzerland and France, at which we also estimated the frequency of infection with H. defensa and other bacterial endosymbionts in five important aphid host species. The parasitoids’ ability to overcome H. defensa-mediated resistance was then quantified by estimating their parasitism success on a single aphid clone (Aphis fabae fabae) that was either uninfected or experimentally infected with one of three different isolates of H. defensa.

Results: The five aphid species (Aphis fabae fabae, A. f. cirsiiacanthoides, A. hederae, A. ruborum, A. urticata) differed strongly in the relative frequencies of infection with different bacterial endosymbionts, and there was also geographic variation in symbiont prevalence. Specifically, the frequency of infection with H. defensa ranged from 22% to 47% when averaged across species. Parasitoids from sites with a high prevalence of H. defensa tended to be more infective on aphids possessing H. defensa, but this relationship was not significant, providing no conclusive evidence that L. fabarum is locally adapted to the occurrence of H. defensa. On the other hand, we observed a strong interaction between parasitoid line and H. defensa isolate on parasitism success, indicative of a high specificity of symbiont-conferred resistance.

Conclusions: This study is the first, to our knowledge, to test for local adaptation of parasitoids to the frequency of defensive symbionts in their hosts. While it yielded useful information on the occurrence of facultative endosymbionts in several important host species of L. fabarum, it provided no clear evidence that parasitoids from sites with a high prevalence of H. defensa are better able to overcome H. defensa-conferred resistance. The strong genetic specificity in their interaction suggests that it may be more important for parasitoids to adapt to the particular strains of H. defensa in their host populations than to the general prevalence of this symbiont, and it highlights the important role symbionts can play in mediating host-parasitoid coevolution.

Keywords: Aphis; bacterial endosymbionts; defensive symbiosis; Hamiltonella; local adaptation; Lysiphlebus; parasitoids; resistance
In the rapidly growing field of synthetic biology, in which organisms can be engineered to do things like decompose plastic and manufacture biofuels and medicines, production of custom DNA sequences is a fundamental tool for scientific discovery. Yet the process of DNA synthesis, which has remained virtually unchanged for more than 40 years, can be slow and unreliable. Now, in what could address a critical bottleneck in biology research, researchers at the Department of Energy’s Joint BioEnergy Institute (JBEI), based at Lawrence Berkeley National Laboratory (Berkeley Lab), announced they have pioneered a new way to synthesize DNA sequences through a creative use of enzymes that promises to be faster, cheaper, and more accurate. The discovery, led by JBEI graduate students Sebastian Palluk and Daniel Arlow, was published in Nature Biotechnology in a paper titled “De novo DNA Synthesis Using Polymerase-Nucleotide Conjugates.” “DNA synthesis is at the core of everything we try to do when we build biology,” said JBEI CEO Jay Keasling, the corresponding author on the paper and also a Berkeley Lab senior faculty scientist. “Sebastian and Dan have created what I think will be the best way to synthesize DNA since [Marvin] Caruthers invented solid-phase DNA synthesis almost 40 years ago. What this means for science is that we can engineer biology much less expensively – and in new ways – than we would have been able to do in the past.” The Caruthers process uses the tools of organic chemistry to attach DNA building blocks one at a time and has become the standard method used by DNA synthesis companies and labs around the world. However, it has drawbacks, the main ones being that it reaches its limit at about 200 bases, partly due to side reactions that can occur during the synthesis procedure, and that it produces hazardous waste. 
For researchers, even 1,000 bases is considered a small gene, so to make longer sequences, the shorter ones are stitched together using a process that is failure-prone and can’t make certain sequences. Buying your genes online A DNA sequence is made up of a combination of four chemical bases, represented by the letters A, C, T, and G. Researchers regularly work with genes of several thousand bases in length. To obtain them, they either need to isolate the genes from an existing organism, or they can order the genes from a company. “You literally paste the sequence into a website, then wait two weeks,” Arlow said. “Let’s say you buy 10 genes. Maybe nine of them will be delivered to you on time. In addition, if you want to test a thousand genes, at $300 per gene, the costs add up very quickly.” Palluk and Arlow were motivated to work on this problem because, as students, they were spending many long, tedious hours making DNA sequences for their experiments when they would much rather have been doing the actual experiment. “DNA is a huge biomolecule,” Palluk said. “Nature makes biomolecules using enzymes, and those enzymes are amazingly good at handling DNA and copying DNA. Typically our organic chemistry processes are not anywhere close to the precision that natural enzymes offer.” Thinking outside the box The idea of using an enzyme to make DNA is not new – scientists have been trying for decades to find a way to do it, without success. The enzyme of choice is called TdT (terminal deoxynucleotidyl transferase), which is found in the immune system of vertebrates and is one of the few enzymes in nature that writes new DNA from scratch rather than copying DNA. What’s more, it’s fast, able to add 200 bases per minute. In order to harness TdT to synthesize a desired sequence, the key requirement is to make it add just one nucleotide, or DNA building block, and then stop before it keeps adding the same nucleotide repeatedly. 
All of the previous proposals envisioned using nucleotides modified with special blocking groups to prevent multiple additions. However, the problem is that the catalytic site of the enzyme is not large enough to accept the nucleotide with a blocking group attached. “People have basically tried to ‘dig a hole’ in the enzyme by mutating it to make room for this blocking group,” Arlow said. “It’s tricky because you need to make space for it but also not screw up the activity of the enzyme.” Palluk and Arlow came up with a different approach. “Instead of trying to dig a hole in the enzyme, what we do is tether one nucleotide to each TdT enzyme via a cleavable linker,” Arlow said. “That way, after extending a DNA molecule using its tethered nucleotide, the enzyme has no other nucleotides available to add, so it stops. A key advantage of this approach is that the backbone of the DNA – the part that actually does the chemical reaction – is just like natural DNA, so we can try to get the full speed out of the enzyme.” Once the nucleotide is added to the DNA molecule, the enzyme is cleaved off. Then the cycle can begin again with the next nucleotide tethered to another TdT enzyme. Keasling finds the approach clever and counterintuitive. “Rather than reusing an enzyme as a catalyst, they said, ‘Hey, we can make enzymes really inexpensively. Let’s just throw it away.’ So the enzyme becomes a reagent rather than a catalyst,” he said. “That kind of thinking then allowed them to do something very different from what’s been proposed in the literature and – I think – accomplish something really important.” They demonstrated their method by manually making a DNA sequence of 10 bases. Not surprisingly, the two students were initially met with skepticism. “Even when we had first results, people would say, ‘It doesn’t make sense; it doesn’t seem right. That’s not how you use an enzyme,’” Palluk recalled. 
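The difference between an untethered enzyme and a tethered conjugate can be illustrated with a toy model. This is purely a sketch of the idea, not code from the paper; the function names and the stop probability are invented for illustration:

```python
import random

# Illustrative contrast: free TdT keeps adding the same nucleotide an
# unpredictable number of times, while a TdT-nucleotide conjugate can add
# exactly one base and then stalls until its linker is cleaved.
def free_tdt_extend(base, stop_prob=0.3, rng=None):
    """Untethered enzyme: extends the strand with 'base' a random number of times."""
    rng = rng or random.Random(0)
    run = base
    while rng.random() > stop_prob:
        run += base  # uncontrolled homopolymer run
    return run

def conjugate_extend(base):
    """Tethered nucleotide: one addition, then the enzyme has nothing left
    to add; cleaving the linker releases the strand for the next cycle."""
    return base

strand = ""
for base in "ACGTACGTAC":  # the 10-base demonstration scale
    strand += conjugate_extend(base)
print(strand)  # -> ACGTACGTAC, exactly one base per cycle
```

The one-base-per-cycle guarantee is the whole point of the tethering trick: the enzyme becomes a single-use reagent rather than a catalyst.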
The two still have much work to do to optimize their method, but they are reasonably confident that they will be able to eventually make a gene with 1,000 bases in one go at many times the speed of the chemical method. Berkeley Lab has world-renowned capabilities in synthetic biology, technology development for biology, and engineering for biological process development. A number of technologies developed at JBEI and by the Lab’s Biosciences Area researchers have been spun into startups, including Lygos, Afingen, TeselaGen, and CinderBio. “After decades of optimization and fine-tuning, the conventional method now typically achieves a yield of about 99.5 percent per step. Our proof-of-concept synthesis had a yield of 98 percent per step, so it’s not quite on par yet, but it’s a promising starting point,” Palluk said. “We think that we’ll catch up soon and believe that we can push the system far beyond the current limitations of chemical synthesis.” “Our dream is to make a gene overnight,” Arlow said. “For companies trying to sustainably biomanufacture useful products, new pharmaceuticals, or tools for more environmentally friendly agriculture, and for JBEI and DOE, where we’re trying to produce fuels and chemicals from biomass, DNA synthesis is a key step. If you speed that up, it could drastically accelerate the whole process of discovery.”
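The per-step yields quoted above compound multiplicatively over a synthesis: if each coupling step succeeds independently with the same probability, the fraction of molecules reaching full length after n steps is roughly (per-step yield)^n. A quick back-of-the-envelope check (the yields are from the article; the independent-steps model is a simplification):

```python
# Fraction of molecules expected to reach full length, assuming every
# coupling step succeeds independently with the same per-step yield.
def full_length_fraction(step_yield, n_bases):
    return step_yield ** n_bases

# Conventional chemistry, ~99.5% per step, at its ~200-base practical limit:
print(full_length_fraction(0.995, 200))   # ~0.37
# Enzymatic proof of concept, 98% per step, at the 10-base demonstration:
print(full_length_fraction(0.98, 10))     # ~0.82
# Why per-step yield matters so much for a 1,000-base gene:
print(full_length_fraction(0.995, 1000))  # ~0.007
```

This is why even a small per-step improvement matters: at 1,000 bases, 99.5% per step leaves well under 1% full-length product, which is part of the reason long genes are assembled from shorter fragments today.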
22 June 2005 Why your brain has a ‘Jennifer Aniston cell’ Obsessed with reruns of the TV sitcom Friends? Well then you probably have at least one “Jennifer Aniston cell” in your brain, suggests research on the activity patterns of single neurons in memory-linked areas of the brain. The results point to a decades-old and dismissed theory tying single neurons to individual concepts and could help neuroscientists understand elusive human memory. “For things that you see over and over again, your family, your boyfriend, or celebrities, your brain wires up and fires very specifically to them. These neurons are very, very specific, much more than people think,” says Christof Koch at the California Institute of Technology in Pasadena, US, one of the researchers. In the 1960s, neuroscientist Jerry Lettvin suggested that people have neurons that respond to a single concept such as, for example, their grandmother. The notion of these hyper-specific neurons, coined “grandmother cells”, was quickly rejected by psychologists as laughably simplistic. But Rodrigo Quiroga, at the University of Leicester, UK, who led the new study, and his colleagues have found some very grandmother-like cells. Previous unpublished findings from the team showed tantalising results: a neuron that fired only in response to pictures of former US president Bill Clinton, or another to images of the Beatles. But for such “grandmother cells” to exist, they must invariably respond to the “concept” of Bill Clinton, not just similar pictures.

Wired up, fired up

To investigate further, the team turned to eight patients currently undergoing treatment for epilepsy. In an attempt to locate the brain areas responsible for their seizures, each patient had around 100 tiny electrodes implanted in their brain. Many of the wires were placed in the hippocampus – an area of the brain vital to long-term memory formation. 
They first gave each subject a screening test, showing them between 71 and 114 images of famous people, places, and even food items. For each subject, the researchers measured the electrical activity or “firing” of the neurons connected to the electrodes. Of the 993 neurons sampled, 132 fired to at least one image. The team then went back for a testing phase, this time showing participants three to seven different pictures of the subjects whose images had made those 132 neurons fire. For example, one woman saw seven different photos of Jennifer Aniston alongside 80 other photos of animals, buildings or additional famous people such as Julia Roberts. The neuron almost entirely ignored the other photos, but fired steadily each time Aniston appeared on screen. The team found similar results with another woman who had a neuron for pictures of Halle Berry, including a drawing of her face and an image of just the words of her name. “This neuron is responding to the concept, the abstract entity, of Halle Berry,” says Quiroga. “If you show a line drawing or a profile, it’s the same response. We also showed pictures of her as Catwoman, and you can hardly see her because of the mask. But if you know it is Halle Berry then the neurons still fire.” Given more time and an exhaustive list of images, the team may well have landed upon other images that spiked the activity of the “Halle Berry” neuron. In one participant, the “Jen” neuron also fired in response to a picture of her former Friends cast-mate, Lisa Kudrow. The pattern suggests that the actresses are tied together in the memory associations of this particular woman, says Charles Connor, a neuroscientist at Johns Hopkins University in Baltimore, US. These object-specific neurons may be at the core of how we make memories, says Connor. “I think that’s the excitement to these results,” he says. “You are looking at the far end of the transformation from metric, visual shapes to conceptual memory-related information. 
It is that transformation that underlies our ability to understand the world. It’s not enough to see something familiar and match it. It’s the fact that you plug visual information into the rich tapestry of memory that brings it to life.” Journal reference: Nature (vol 435 p 1102)
While the Earth is home to many different weather systems, the most extreme terrestrial conditions are mild compared to weather on other planets. All of the other bodies in the solar system large enough to maintain an atmosphere have their own weather systems, ranging from Earthlike to almost unimaginable. Humanity’s exploration of neighboring planets is far from complete, but scientists can draw some conclusions about conditions on other worlds.

Mercury’s position closest to the sun leaves it with very little atmosphere. What thin atmosphere the planet does possess flows away from it like a comet’s tail under the powerful solar wind, without any discernible weather patterns.

Venus has an extremely dense atmosphere, layered with carbon dioxide and corrosive clouds. Its primary weather features are high winds and lightning storms high in the atmosphere, while the lowest levels remain calmer and extremely hot due to the planet’s runaway greenhouse effect. Temperatures at the surface are high enough to melt lead, rendering even the hardiest landing probes inoperative within hours of touching down.

A number of probes sent to Mars have revealed much about the planet’s weather patterns. Dust storms are the primary weather pattern on the planet, and while clouds of ice crystals occasionally form in the atmosphere, the pressure is too low for liquid precipitation. During the Viking II mission, frost regularly appeared at the probe’s landing site during the Martian winter.

The Gas Giants

Jupiter, Saturn, Uranus and Neptune all share similar physical characteristics, as they are primarily made up of gases rather than solid matter, and so share similar weather patterns. The gas giants all experience extremely high winds, hundreds of miles per hour at the equator. Storms in the atmosphere can last for extremely long times, such as Jupiter’s Red Spot or Saturn’s hexagonal storm at its north pole. 
Uranus has a unique tilt and rotation that freezes one portion of the planet for decades before it rotates back into the sunlight, triggering violent storms with the warming effect. Neptune’s atmosphere features high cirrus clouds formed of methane that travel rapidly across the upper reaches of its atmosphere.

The Kuiper Belt

While Pluto may have lost its status as a full-fledged planet, it and the other objects in the Kuiper belt outside the orbit of Neptune remain targets for study. The limited observation that the U.S. National Aeronautics and Space Administration has performed on these planets suggests that their atmospheres are thin and predictably cold. Their extreme distance from the sun reduces the difference in temperature between the day and night sides, removing the temperature fluctuations that could help drive weather patterns.
Such is the case for a team of Whitehead Institute scientists, whose latest research on the evolution of the human Y chromosome confirms that the Y—despite arguments to the contrary—has a long, healthy future ahead of it. Proponents of the so-called rotting Y theory have been predicting the eventual extinction of the Y chromosome since it was first discovered that the Y has lost hundreds of genes over the past 300 million years. The rotting Y theorists have assumed this trend is ongoing, concluding that inevitably, the Y will one day be utterly devoid of its genetic content. Over the past decade, Whitehead Institute Director David Page and his lab have steadily been churning out research that should have permanently debunked the rotting Y theory, but to no avail. “For the past 10 years, the one dominant storyline in public discourse about the Y is that it is disappearing,” says Page. “Putting aside the question of whether this ever had a sound scientific basis, the story went viral—fast—and has stayed viral. I can’t give a talk without being asked about the disappearing Y. This idea has been so pervasive that it has kept us from moving on to address the really important questions about the Y.” To Page, this latest research represents checkmate in the chess match he’s been drawn into against the “rotting Y” theorists. Members of his lab have dealt their fatal blow by sequencing the Y chromosome of the rhesus macaque—an Old World monkey whose evolutionary path diverged from that of humans some 25 million years ago—and comparing it with the sequences of the human and chimpanzee Y chromosomes. The comparison, published this week in the online edition of the journal Nature, reveals remarkable genetic stability on the rhesus and human Ys in the years since their evolutionary split. Grasping the full impact of this finding requires a bit of historical context. 
Before they became specialized sex chromosomes, the X and Y were once an ordinary, identical pair of autosomes like the other 22 pairs of chromosomes humans carry. To maintain genetic diversity and eliminate potentially harmful mutations, autosome pairs swap genes with each other in a process referred to as “crossing over.” Roughly 300 million years ago, a segment of the X stopped crossing over with the Y, causing rapid genetic decay on the Y. Over the next hundreds of millions of years, four more segments, or strata, of the X ceased crossing over with the Y. The resulting gene loss on the Y was so extensive that today, the human Y retains only 19 of the more than 600 genes it once shared with its ancestral autosomal partner. “The Y was in free fall early on, and genes were lost at an incredibly rapid rate,” says Page. “But then it leveled off, and it’s been doing just fine since.” How fine? Well, the sequence of the rhesus Y, which was completed with the help of collaborators at the sequencing centers at Washington University School of Medicine and Baylor College of Medicine, shows the chromosome hasn’t lost a single ancestral gene in the past 25 million years. By comparison, the human Y has lost just one ancestral gene in that period, and that loss occurred in a segment that comprises just 3% of the entire chromosome. The finding allows researchers to describe the Y’s evolution as one marked by periods of swift decay followed by strict conservation. “We’ve been carefully developing this clearcut way of demystifying the evolution of the Y chromosome,” says Page lab researcher Jennifer Hughes, whose earlier work comparing the human and chimpanzee Ys revealed a stable human Y for at least six million years. “Now our empirical data fly in the face of the other theories out there. 
With no loss of genes on the rhesus Y and one gene lost on the human Y, it’s clear the Y isn’t going anywhere.” “This paper simply destroys the idea of the disappearing Y chromosome,” adds Page. “I challenge anyone to argue when confronted with this data.” This work was supported by the National Institutes of Health, the Howard Hughes Medical Institute, and the Charles A. King Trust. Written by Matt Fearer David Page’s primary affiliation is with Whitehead Institute for Biomedical Research, where his laboratory is located and all his research is conducted. He is also a Howard Hughes Medical Institute investigator and a professor of biology at Massachusetts Institute of Technology. “Strict evolutionary conservation followed rapid gene loss on human and rhesus Y chromosomes” Nature, online February 22, 2012 Jennifer F. Hughes (1), Helen Skaletsky (1), Laura G. Brown (1), Tatyana Pyntikova (1), Tina Graves (2), Robert S. Fulton (2), Shannon Dugan (3), Yan Ding (3), Christian J. Buhay (3), Colin Kremitzki (2), Qiaoyan Wang (3), Hua Shen (3), Michael Holder (3), Donna Villasana (3), Lynne V. Nazareth (3), Andrew Cree (3), Laura Courtney (2), Joelle Veizer (2), Holland Kotkiewicz (2), Ting-Jan Cho (1), Natalia Koutseva (1), Steve Rozen (1), Donna M. Muzny (3), Wesley C. Warren (2), Richard A. Gibbs (3), Richard K. Wilson (2), David C. Page (1). 1. Howard Hughes Medical Institute, Whitehead Institute, and Department of Biology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 2. The Genome Institute, Washington University School of Medicine, St. Louis, Missouri, USA. 3. Human Genome Sequencing Center, Baylor College of Medicine, Houston, Texas, USA. 
I was going to write a simple little program for me and a friend to make secret coded messages. I have an idea in my head but I'm not sure how to execute it. To keep it simple I have a number code: 1 0 5 1 9 8 6. The idea is that when you write the code, you take the letters and push them forward according to the numbers, and when you reach the end of the numbers you loop back around to the start. Same thing with the alphabet: if a letter gets pushed too far forward and goes past Z, it should loop back to A. To me this sounds like a for loop and an array, where I make 7 cells in the array, one for each number. Then when I type a message into the text box object and press enter, I want the letters to be run through the array before finally being printed out, changed, in the text below. "Hello" would translate into "Ieqmx". My thought process is this, and I'm not sure if it's possible or if I'm overcomplicating it: while the selection box is set to "Normal to Code" and the message is typed in the text box, you hit enter (in this case we say "Hello" again). The text is then boiled down to variables for each letter, A being 1, B being 2, etc., turning the word into "8,5,12,12,15". Those variables are then put through the array, adding the number from each cell onto them, making it "9,5,17,13,24", which is then written out in the text below as "Ieqmx", as those would be the corresponding letters to the variables. Then do the whole thing in reverse when I choose the "Code to Normal" option in the selection box, where you subtract the numbers in the cells of the array. Do you guys think it's possible, and how should I proceed to make it actually work? It sounds easy in my head (and it probably is and I'm just dumb xD) but when it comes to executing it I'm not sure.
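What you're describing is a repeating-key Caesar shift (essentially a Vigenère cipher with a numeric key), and yes, it's very doable with a loop and a list. Here's a minimal Python sketch of the core logic; the function and variable names are just placeholders, and you'd hook it up to your text box and selection box however your UI framework does that:

```python
# Repeating-key Caesar shift: each letter is pushed forward by the next
# number in the key, looping over both the key and the alphabet.
def shift_text(message, key, decode=False):
    result = []
    i = 0  # position in the key; only advances on letters
    for ch in message:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = key[i % len(key)]          # loop back to key start
            if decode:
                shift = -shift                 # "Code to Normal": subtract
            # % 26 wraps past Z back around to A (and past A back to Z)
            result.append(chr((ord(ch) - base + shift) % 26 + base))
            i += 1
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(result)

key = [1, 0, 5, 1, 9, 8, 6]
print(shift_text("Hello", key))               # -> Ieqmx
print(shift_text("Ieqmx", key, decode=True))  # -> Hello
```

One design note: the `% 26` modulo handles the alphabet wraparound in both directions, and `% len(key)` handles looping back to the start of your number code, so you don't need any special end-of-array checks.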
This unusual event is allowing astronomers to probe for even fainter objects and may give them a glimpse of matter disappearing into the massive black hole at the centre of our galaxy. The Galactic centre is one of the most dynamic places in our Galaxy. It is thought to be home to a gigantic black hole, called Sagittarius A* (pronounced 'A star'). Since the beginning of the Integral mission, ESA's gamma ray observatory has allowed astronomers to keep watch on this ever-changing environment. Integral has discovered many new sources of high-energy radiation near the galactic centre. From February 2005, Integral began to regularly monitor the centre of the Galaxy, and its immediate environment, known as the Galactic bulge. Erik Kuulkers of ESA's Integral Science Operations Centre, ESAC, Spain, leads the Galactic bulge monitoring programme. Integral now keeps its high-tech eyes on about 80 high-energy sources in the galactic bulge. "Most of these are X-ray binaries," says Kuulkers. X-ray binaries are made up of two stars in orbit around one another. One star is a relatively normal star; the other is a collapsed star, such as a white dwarf, neutron star or even a black hole. If the stars are close enough together, the strong gravity of the collapsed star can pull off gaseous material from the normal star. As this gas spirals down around the collapsed star, it is heated to over a million degrees centigrade and this causes it to emit high energy X-rays and gamma rays. The amount of gas falling from one star to the other determines the brightness of the X-ray and gamma-ray emission. According to the Integral observations in April 2006, the high-energy rays from about ten sources closest to the galactic centre all faded temporarily. Kuulkers excludes the possibility that a mysterious external force is acting on all the objects to drive them into quiescence. 
"All the sources are variable and it was just by accident or sheer luck that they had turned off during that observation," he says with a smile. The fortuitous dimming allows astronomers to set new limits on how faint these X-ray binaries can become. It also allows a number of new investigations to be undertaken with the data.

"When these normally bright sources are faint, we can look for even fainter sources," says Kuulkers. These could be other X-ray binaries, or the high-energy radiation from giant molecular clouds interacting with past supernovae. There is also the possibility of detecting the faint high-energy radiation from the massive black hole in our Galaxy's centre.

Integral's Galactic bulge monitoring programme will continue throughout this year. The data are made available, within a day or two of being collected, to the scientific community via the Internet from a dedicated webpage at the Integral Science Data Centre (ISDC), Geneva, Switzerland. This way, anyone interested in specific sources can watch for interesting changes and trigger follow-up observations with other telescopes in good time.

Erik Kuulkers | alfa
posted by Niles

Three gases (8.00 g of methane, CH4; 18.0 g of ethane, C2H6; and an unknown amount of propane, C3H8) were added to the same 10.0-L container. At 23.0 °C, the total pressure in the container is 4.40 atm. Calculate the partial pressure of each gas in the container. Express the pressure values numerically in atmospheres, separated by commas. Enter the partial pressure of methane first, then ethane, then propane.

Hint: use PV = nRT to find the total moles, convert the given masses to moles, and note that each partial pressure is proportional to its mole fraction.
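Following that hint step by step — a sketch in Python, where the molar masses (16.04 and 30.07 g/mol) and the gas constant R = 0.08206 L·atm/(mol·K) are standard values not given in the problem statement:

```python
R = 0.08206          # L·atm/(mol·K), gas constant
T = 23.0 + 273.15    # K
V = 10.0             # L
P_total = 4.40       # atm

# Ideal gas law gives the total moles in the container.
n_total = P_total * V / (R * T)

# Convert the given masses to moles; propane is whatever is left over.
n_ch4 = 8.00 / 16.04
n_c2h6 = 18.0 / 30.07
n_c3h8 = n_total - n_ch4 - n_c2h6

# Each partial pressure is its mole fraction times the total pressure.
p_ch4 = n_ch4 / n_total * P_total
p_c2h6 = n_c2h6 / n_total * P_total
p_c3h8 = n_c3h8 / n_total * P_total
```

This works out to roughly 1.21 atm, 1.45 atm, and 1.73 atm for methane, ethane, and propane, and the three values sum back to 4.40 atm as a sanity check.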
Water shortages are anticipated to occur all over the world and are likely to have a significant effect on the availability of water for water-splitting processes, such as photocatalysis and electrolysis, as well as for drinking and industrial water. To overcome this problem, it has been suggested that seawater could be used as an alternative resource for the various water industries, including hydrogen production, industrial, and drinking water. Seawater contains a large amount of dissolved ionic components, allowing it to be utilized as an electrolyte in PEC systems for producing hydrogen. In this study, anodized TiO2 electrodes were prepared and used as the photoanodes in a photoelectrochemical (PEC) system designed to convert natural seawater into hydrogen with the assistance of an external bias, and their electrochemical and morphological properties were characterized and correlated with the hydrogen evolution rate and photocurrent. In order to prepare light-sensitized TiO2 electrodes, titanium was anodized in single and mixed chemicals and annealed under various conditions. Based on a comparison of their electrical and physical properties and hydrogen evolution rates, the TiO2 electrode anodized in a mixture of chemicals (NH4F, H2O, and C3H8O2 (ethylene glycol)) showed the best performance among the electrodes. The experimental results showed that the hydrogen evolution rate obtained using seawater in the PEC system is ca. 215 μmol/cm2·h, confirming that seawater is an effective electrolyte for hydrogen production, and that the optimum external bias supplied by the solar cell is at least 3.0 V. © 2010 Elsevier B.V.
A protein that transports the simple chemical choline plays a major role in vesicle trafficking, ion homeostasis, and growth and development in plants, according to two new studies publishing 28 December in the open-access journal PLOS Biology, by Dai-Yin Chao of the Shanghai Institutes for Biological Sciences, China, and Sheng Luan of the University of California, Berkeley, USA, and co-workers. The protein, called choline transporter-like 1 (CTL1), had been previously identified as essential for formation of sieve plates, cell wall perforations that regulate passage of materials in plant phloem. But the mechanism of its function, and whether it played other roles in plants, was unknown. Chao and colleagues found CTL1 while screening for genes that control ion homeostasis in the model plant, Arabidopsis thaliana. They found that loss of CTL1 in the root led to ion disturbances in leaves, and deformations in plasmodesmata, a type of intercellular channel, in the root. CTL1 mutation also altered the distribution of ion transporters, which, combined with previous work localizing CTL1 to the trans-Golgi network, led the authors to investigate whether CTL1 played a direct role in vesicle trafficking. Sure enough, they showed that loss of CTL1 disrupted localization of multiple proteins, including an auxin transporter - auxin is the main growth hormone in plants. Luan and colleagues began by mapping the distribution of CTL1 in Arabidopsis, and found that it was ubiquitous but was highest where auxin was highest: in the growing tips, in the vascular tissue, and in the "apical hook" that seedlings lead with as they push up through the soil. Intracellularly, they too found that CTL1 localized to the trans-Golgi network, and appeared to control trafficking to and from the plasma membrane; the authors observed that without CTL1, auxin transporters were misdirected, and the plant displayed the classic signs of auxin loss, including lack of cell elongation. 
Chao also showed that excess choline inhibited endocytosis, mimicking the effects of CTL1 loss and suggesting that a critical CTL1 function is to sequester choline into endosomes. They suggest that keeping choline levels low outside endosomes promotes the activity of an enzyme, phospholipase D, that cleaves multiple lipids and, in so doing, has a direct effect on vesicle lipid composition and thus destination. In this model, loss of CTL1 raises choline, which inhibits the enzyme, altering vesicle lipids, and ultimately misdirecting the vesicles, which would account for the multiple effects of CTL1 mutation, including ion imbalances, plasmodesmata defects, and auxin mislocalization. CTL1 is also found in animal cells, Chao noted, and thus the study concluded that "characterizing CTL1 as a new regulator of protein sorting may enable researchers to understand not only ion homeostasis in plants but vesicle trafficking in general."
Offshore Wind Turbine May Have Killed Young Whale The carcass of a young humpback whale washed ashore Friday morning in Rhode Island, causing experts to think that a nearby offshore wind turbine may be to blame. Rescue workers and two veterinarians from a nearby aquarium collected samples from the dead whale, and suspect that the nearby Block Island offshore wind farm could be responsible for the whale’s death. Noise from the turbine allegedly hampers the sonar that whales use to navigate and communicate. “If necropsy shows that a perfectly healthy whale beached itself where offshore wind turbines do exist, they need to really check what kind of sound these things are putting out,” Bonnie Brady, director of the Long Island Commercial Fishing Association who regularly discusses the impacts of noise on marine mammals, told The Daily Caller News Foundation. “There have been an unusual amount of strandings this year.” Both construction and ordinary operations noises from offshore wind turbines can travel immense distances under water. This harms whales, dolphins, marine mammals and fish that communicate with noises in order to breed. For this reason, National Oceanic and Atmospheric Administration (NOAA) guidelines show that high noise levels can cause marine mammals like whales and dolphins to go deaf and disrupt their vocal communications. “This was a humpback whale, other whale species use different frequencies of sound,” Brady said. “Vibrations from the spinning wind turbines create noise that can be heard under the water line. NOAA needs to do some long term investigation in amount of the stranding that have occurred. The possibility that it is wind turbines is something we need to know now, not later.” Roughly 46 dead humpback whales have washed ashore on the Atlantic coast since January, prompting concern from NOAA. “[It is an] unusual mortality event,” Jennifer Goebel, a spokeswoman for the Atlantic region of NOAA, told The Jamestown Press. 
“A stranding that is unexpected; involved a significant die-off of any marine mammal population; and demands immediate response.” When workers construct offshore wind turbines, they use a loud pile driver to anchor the windmill to the seabed. Water magnifies sounds, so underwater the pile driver’s noise can reach levels up to 220 decibels. Putting this number into perspective, 150 decibels of sound can burst human eardrums, and 185 to 200 decibels is the range usually considered to be the threshold for causing human death. Marine environmental experts blame offshore wind turbines for the deaths of three minke whales that washed up on British beaches in May near several offshore wind farms. The noise generated by wind turbines affected the sonar that whales use to navigate, causing them to beach themselves. There are several commercial offshore wind farms close to where the whales beached themselves. “My personal opinion is that it could be a consequence of wind farms and the amount of sand in the water,” John Cresswell, chairman of the Felixstowe Volunteer Coast Patrol Rescue Service, told The Times after a family of whales beached themselves near him. “If you stop the boat off the coast you can feel the vibrations and hear the noise.” The sheer loudness of the turbines can also maim and kill fish. The noise produced when building the turbines poses a particular danger for fish with an organ highly sensitive to acoustics called a swim bladder, which adjusts a fish’s level of buoyancy and determines whether it floats or sinks.
Wednesday, August 17, 2011

The Nine Orbital Elements

In Indian astronomy, mean and true planetary longitudes in the Zodiac are computed from nine orbital elements:

Mean longitude of planet, Graha Madhyama, M
Daily motion of the mean longitude, Madhyama Dina Gathi, Md
Aphelion, Mandoccha, Ap
Daily motion of aphelion, Mandoccha Dina Gathi, Apd
Ascending node, Patha, N
Daily motion of the ascending node, Patha Dina Gathi, Nd
Heliocentric distance (radius vector), Manda Karna, mndk
Maximum latitude, Parama Vikshepa, L

In Western astronomy, we have six orbital elements:

Mean anomaly, m
Argument of perihelion, w
Eccentricity, e
Ascending node, N
Inclination of the orbit, i
Semi-major axis, a

With the nine orbital elements, the true geocentric longitude of the planet is computed using multi-step algorithms. There is a geometrical equivalence between the epicycle and eccentric models: the radius of the epicycle, r, equals e, the distance of the Equant from the Observer.
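As an illustration of one step of such a multi-step algorithm — this sketch is not taken from the post itself, but uses the standard epicyclic equation-of-centre (manda) correction, with r/R the ratio of epicycle radius to deferent radius assumed as the input:

```python
import math

def manda_correction(mean_long, aphelion, r_over_R):
    # kendra: angular distance of the mean planet from its aphelion (Mandoccha)
    kendra = math.radians(mean_long - aphelion)
    # epicyclic equation of centre: arcsin((r/R) * sin(kendra)), in degrees
    return math.degrees(math.asin(r_over_R * math.sin(kendra)))

def true_longitude(mean_long, aphelion, r_over_R):
    # true longitude = mean longitude (Graha Madhyama) minus the manda correction
    return mean_long - manda_correction(mean_long, aphelion, r_over_R)
```

With r/R = 0 the epicycle vanishes and the true longitude reduces to the mean longitude, which is a quick consistency check on the formula.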
Surprise, surprise, Albert Einstein is still right. Over a century since its conception, the general theory of relativity still holds true — even in an extreme three-star system many light-years away. Albert Einstein's travel diaries reveal a different side to the famed physicist. While he was celebrated as a humanitarian later in his life, excerpts from his Asian travels show that he used to harbor racist ideas about Chinese people. For Einstein, quantum entanglement was one of the mysteries he could not solve: for him, there could be no action at a distance without an interaction. Modern physicists, however, beg to differ, and a new study provides some of the strongest evidence yet that the universe really does exhibit quantum entanglement. Is there a better teacher than a WiFi-connected Albert Einstein? Researchers at the University of Exeter have offered a scientific explanation of how the famous Santa Claus moves from chimney to chimney without being spotted or heard by expectant children. A team of scientists from London and Canada is set to challenge one of Einstein's accepted theories regarding the classification of the speed of light as constant. The data will determine the size and shape of the black hole and could prove or disprove Einstein's theory of relativity. Albert Einstein's well-worn leather jacket was sold at an auction for more than $144,000 and has quite a history behind it. Scientists have said that LIGO's first detection of gravitational waves from two merging black holes was 12 billion years in the making, and that thousands more collisions will be detected in the future. Scientists from LIGO have detected a second batch of gravitational waves from two colliding black holes, disproving a previous assumption that collisions produce bursts of radiation. ESA successfully completed an experiment using the Laser Interferometer Space Antenna (LISA) Pathfinder. What can be said about Albert Einstein's theory of relativity? Anything but that it is false.
Yes, researchers have found out why. Thanks to the Atacama Large Millimeter/submillimeter Array (ALMA) telescope, astronomers were able to spot an "Einstein ring" in an ancient, far-away galaxy - and no, it's not the famed Ring of Sauron from The Lord of the Rings trilogy.
A paper describing the work appears in the March issue of The Astrophysical Journal. The Hubble constant has previously been calculated by using NASA's Hubble Space Telescope to look at distant supernovae, and by measurements of the cosmic microwave background -- radiation leftover from the Big Bang, said Chris Fassnacht, associate professor of physics at UC Davis. The new method provides an independent check on the other two, he said. A gravitational lens is a distant object, such as a galaxy surrounded by dark matter, that exerts a gravitational pull on light passing through it. Other galaxies behind the lens, from our point of view, appear distorted. In the case of the object B1608+656, astronomers on Earth see four distorted images of the same background object. Fassnacht began studying B1608+656 as a graduate student a decade ago. Because the mass distribution of the lens is now well understood as a result of recent Hubble Space Telescope observations, it is possible to use it to calculate the Hubble constant, he said. It works something like this. Two photons of light leave the background galaxy at the same time and travel around the lens, their paths distorted in different ways by the gravitational field so that they arrive on Earth at slightly different times. Based on that time delay, it is possible to calculate the distance of the entire route, and then infer the Hubble constant. The timing is set by waiting for a change in the background object -- for example, for it to become more luminous. If the travel times are slightly different, the different images of the background object will seem to brighten at slightly different times. Imagine two drivers leaving Stanford to drive to Davis, one by the East Bay and one through San Francisco, Fassnacht said. Assuming both drivers maintain the exact same speed, they will arrive at Davis at different times. That difference can be used to work out the overall distance. 
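The logic of that inference can be sketched numerically. In a flat cosmology, the time-delay distance fixed by the lag between the images scales as 1/H0, so a measured distance pins down the Hubble constant by rescaling a fiducial model. The numbers below are invented for illustration and are not taken from the study:

```python
# Hedged toy sketch: the time-delay distance D_dt scales as 1/H0, so
# comparing a measured D_dt against the value predicted by a fiducial
# model rescales the fiducial Hubble constant. All numbers hypothetical.
H0_FID = 70.0       # km/s/Mpc, assumed fiducial Hubble constant
D_DT_FID = 5000.0   # Mpc, hypothetical model prediction at H0_FID

def h0_from_time_delay_distance(d_dt_measured_mpc):
    # D_dt proportional to 1/H0  =>  H0 = H0_fid * (D_dt_fid / D_dt_measured)
    return H0_FID * D_DT_FID / d_dt_measured_mpc
```

A measured time-delay distance smaller than the fiducial prediction implies a larger Hubble constant, and vice versa — which is why pinning down the delay between the lensed images constrains H0 at all.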
Gravitational lensing has never before been used in such a precise way, said co-author Philip Marshall of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at the U.S. Department of Energy’s SLAC National Accelerator Laboratory and Stanford University. Several groups are now working on extending the technique with other gravitational lenses. The study was led by Sherry Suyu, University of Bonn, Germany. Other authors are: Stefan Hilbert, University of Bonn; Matthew Auger and Tommaso Treu, UC Santa Barbara; Roger Blandford, KIPAC and Stanford University; and Leon Koopmans, Kapteyn Astronomical Institute, The Netherlands.

Andy Fell | EurekAlert!
LMU/MPQ-physicists succeed in realizing an analogue of the Meissner effect by measuring edge currents in a ladder-like crystal of light. When a superconductor is exposed to a magnetic field, a current on its surface appears which creates a counter field that cancels the magnetic field inside the superconductor. Schematic representation of the light crystal with ladder-like shape. The blue and yellow spheres represent the atoms traveling in opposite directions, as in the Meissner phase. In the experiment the strength of the current was measured, which indicated a transition from the vortex to the Meissner phase. (Graphic: MPQ, Quantum Many Body Systems Division) This phenomenon, known as “Meissner-Ochsenfeld effect” after its discoverers, was first observed in 1933. This quantum effect has found applications in a large variety of fields, ranging from magnetic levitation of objects to medicine and industry. For the first time, scientists in the group of Professor Immanuel Bloch (Ludwig-Maximilians-University, Munich and Max Planck Institute of Quantum Optics, Garching) in collaboration with theoretical physicist Dr. Belén Paredes from the Institute for Theoretical Physics (IFT) in Madrid have succeeded in measuring an analogue of the Meissner effect in an optical crystal with ultracold atoms. The system realized by the team in fact constitutes the minimal system in which such a Meissner analogue can be observed and realizes theoretical predictions dating back more than 20 years. Furthermore, the scientists have been able to observe a transition from this Meissner phase to a vortex phase where the ‘screening’ of the external field breaks down. (Nature Physics, 2998 (2014)). When a superconductor is cooled down below its critical temperature, which is typically on the order of a few tens of Kelvin, it undergoes a phase transition to a superconducting state. 
In that state, in addition to being able to transport electric currents without losses, the material presents a very special feature: when it is exposed to an external magnetic field, a current appears on its surface that fully cancels the field in its core. As the external field is increased, the strength of the current also increases. This feature, called the Meissner effect, is of key importance in condensed matter physics. For some special types of superconductors this effect can only exist up to a critical strength of the external field. If the field is increased above that value, the current flows and spins around imaginary axes, forming a vortex-like structure. In that vortex phase, the external field is only partially cancelled. These two behaviours have already been observed in real materials, and are of fundamental interest for the superconducting properties.

"However, this kind of phenomenon had never been observed with ultracold atoms in optical crystals," explains Marcos Atala, a scientist in the team of Professor Bloch. In their experiments, an extremely cold gas of rubidium atoms was loaded into an optical lattice: a periodic structure of bright and dark areas, created by the interference of counter-propagating laser beams. In this lattice structure, the atoms are held in either dark or bright spots, depending on the wavelength of the light, and therefore align themselves in a regular pattern. The resulting periodic structure of light resembles the geometry of simple solid-state crystals, where the atoms play the role of the electrons, making it an ideal model system for simulating condensed matter physics. In this case, the experimentalists chose a special lattice configuration which creates an optical crystal with a ladder-like shape (see Fig. 1).

When the electrons in a material are exposed to a magnetic field, they feel the effect of the Lorentz force, which acts perpendicular to their direction of motion, causing them to move in circles.
However, the atoms in the optical crystal are electrically neutral and do not feel that force. The experimentalists overcame this difficulty by implementing a special laser configuration that simulates the effect of a magnetic field: they used a pair of lasers that give a momentum kick to the atoms when they move from the left to the right leg of the ladder, and a kick in the opposite direction when they move from the right to the left leg. These kicking lasers simulate the effect of a magnetic field of several thousand tesla, something that is practically impossible to achieve with real magnetic fields.

The ladder system that the experimentalists realized also presents a Meissner-like and a vortex-like phase, with the only difference that the neutral current here does not produce a backaction, and thereby no screening of the magnetic field. In order to see the transition between the two phases, the Munich researchers implemented a protocol to measure the current on the individual legs of the ladder. That current is maximal in the Meissner phase and has a vortex structure in the vortex phase. The measurement idea was to prepare the atoms in either the Meissner or the vortex phase and then to suddenly split the ladder into an array of isolated two-site systems, similar to a flowing liquid being suddenly stopped by an array of barriers. This method allowed the scientists to determine the strength of the current along the legs of the ladder, and they were able to clearly identify a transition from the vortex phase to the Meissner phase.

This experiment marks an important step forward in the simulation of real material properties using ultracold atoms in optical lattices, and opens the path to the observation of many other phenomena, such as the quantum Hall effect, or even the fractional quantum Hall effect if interparticle interactions are present.
Furthermore, by combining this technique with the newly available single-site resolution, experimentalists could resolve the vortex structure in the ladder locally. "The new experimental probes help us to gain a better understanding of phase transitions and dynamics of quantum matter under the action of extreme magnetic fields," points out Prof. Immanuel Bloch.

Original publication:
Marcos Atala, Monika Aidelsburger, Michael Lohse, Julio T. Barreiro, Belén Paredes and Immanuel Bloch: Observation of chiral currents with ultracold atoms in bosonic ladders. Nature Physics 2998 (2014), Advance Online Publication

Contact:
Prof. Dr. Immanuel Bloch, Chair of Quantum Optics, LMU Munich, Schellingstr. 4, 80799 München, and Director at Max Planck Institute of Quantum Optics, 85748 Garching, Germany. Phone: +49 (0) 89 / 32 905 -138
Dr. Belén Paredes, Instituto de Física Teórica UAM/CSIC, C/Nicolás Cabrera 13-15, 28049 Madrid, Spain. Phone: +34 91 299 9862
Dipl. Phys. Marcos Atala. Phone: +49 89 2180 6133
Dr. Olivia Meyer-Streng, Press & Public Relations, Max Planck Institute of Quantum Optics. Phone: +49 (0) 89 32 905 -213

Dr. Olivia Meyer-Streng | Max-Planck-Institut
<urn:uuid:1990920f-18ac-44b9-8810-56d8542a40f8>
3.578125
2,052
Content Listing
Science & Tech.
38.288921
95,609,096
Climate change effects are already being felt in Austria: A new report shows how rising temperatures, changing precipitation patterns, and melting glaciers have affected the country and what lies in store for the future. The Austrian Climate Change Assessment Report (AAR14), released today, is the first national-level climate report that mirrors the breadth and rigor of the Intergovernmental Panel on Climate Change (IPCC), examining the historical and future development of climate change, as well as the potential response and mitigation measures for the problem.

"Climate change is happening at the global level, but its effects will be different in every country," said IIASA Deputy Director General Nebojsa Nakicenovic, who was the project leader. "It is vital that we understand how climate change will impact Austria, in order to embark on mitigation and adaptation strategies at a national level."

Over 240 scientists from over 50 institutions contributed data and findings to the report, and the review process included 71 external reviewers, over 2900 comments and questions, and 13 review editors who ensured that all comments were taken into consideration. Key findings of the report include:

• Since 1880, average temperature in Austria has risen by nearly 2°C, compared with a global average increase of 0.85°C. The report projects that by 2050, the average temperature in Austria will likely increase by approximately 1.4°C compared to current temperatures.
• Extremely hot days are expected to become more frequent in summer, while very cold days will become rarer in winter.
• Snow cover duration and glacier extent have decreased significantly in recent decades, and this trend is likely to continue.
• Precipitation patterns are likely to change, but with significant regional differences. On average, the report projects an increase in precipitation in winter months, and a decrease in summer months.
• The risk of natural disasters, including landslides and rockfalls, as well as forest fires, is projected to increase as precipitation patterns change and the temperature increases.

The report also suggests that current efforts by the Austrian government to promote energy efficiency and renewable energy sources alone will not be adequate to meet the expected contribution of the country to achieve the global goal of limiting climate change to an average 2°C rise over pre-industrial levels. Nakicenovic says, "Mitigation and adaptation strategies for climate change in Austria need to involve all sectors and stakeholders to achieve the ambitious goal consistent with a 2 degree stabilization in average global surface temperature."

IIASA and the APCC

IIASA's role in the AAR14 was to ensure rigor and the highest standard of the peer-review process, drawing from expertise developed through the Institute's work on major global assessments such as the IPCC reports and in coordinating the 2012 Global Energy Assessment. IIASA Energy Program Director Keywan Riahi and researcher Mathis Rogner led and coordinated the review process. In addition, IIASA researchers contributed scientific findings to the report: Mitigation of Air Pollution and Greenhouse Gases researcher Wilfried Winiwarter served as a Coordinating Lead Author, and Risk, Policy, and Vulnerability Program Deputy Director Reinhard Mechler was a Lead Author.

Katherine Leitzell | idw - Informationsdienst Wissenschaft
<urn:uuid:38a7ed7c-8d43-40d3-81c0-96ac466d8396>
3.40625
1,279
Content Listing
Science & Tech.
30.489325
95,609,116
Scientists have discovered the first hard evidence of a large and ancient protoplanet inside space diamonds that fell to Earth about 10 years ago. The diamonds were embedded inside a small asteroid that hit the atmosphere over the Nubian Desert in northeastern Sudan in October 2008. The diamonds are extremely small, about the width of a human hair, reports The Los Angeles Times, but within them are chemical clues that suggest they could have only been formed deep within a Mercury- or Mars-sized almost-planet that formed in the hectic early days of the solar system. "What makes this study so exciting is that it is direct evidence from an actual rock that there was a large protoplanetary body that is no longer around," said Meenakshi Wadhwa, who studies meteorites at Arizona State University and who was not involved in the new work, to The LA Times. The research team, led by Farhang Nabiei of the Ecole Polytechnique Federale de Lausanne in Switzerland, used a high-powered electron microscope to study the tiny diamonds found inside the meteorite.
<urn:uuid:0869aa98-8319-4ef6-8156-3439b7b66dd3>
3.59375
229
News Article
Science & Tech.
22.536253
95,609,184
A group of Finnish scientists suggests a new climate-biosphere interaction mechanism for the underlying processes in a new study, which will be published on February 14, 2007 in PLoS ONE, the international, peer-reviewed, open-access, online publication from the Public Library of Science (PLoS). The theory invokes cold, ice-containing climates as a key precursor for multicellular life. If the model turns out to be correct, one can assume that complex life might exist also around stars which are more massive and short-lived than the Sun. Since remote sensing of highly reflecting glaciers should be possible, this may help in designing future astronomical observation programmes for Earth-like extrasolar planets.

Multicellular life was preceded by the cold Neoproterozoic climate 600-800 million years ago, which at times produced widespread glaciations. According to the new theory, the coldness was due to low carbon dioxide concentration brought about by strong algal growth in the oceans. The algal growth was maintained by the lack of grazing animals and the ability of cold seawater to mix and transport nutrients efficiently. A moderately high seawater oxygen concentration developed as a byproduct of the algal growth. This enabled diffusive breathing of primitive multicellulars, which were larger than their unicellular counterparts. The ability of cold water to contain more dissolved oxygen also helped the multicellulars to thrive.

The diversification of the marine food webs introduced by multicellular predators, as well as the moving and burrowing activity of animals on the seafloor, contributed to a more efficient decomposition of the algae-produced organic carbon, which slowed the rate of organic carbon sequestration. This in turn increased the atmospheric carbon dioxide level and ended the severe glaciations and the reign of unicellular algae, initiating the development of a modern-type climate.
Andrew Hyde | alfa
<urn:uuid:827576e2-219a-42fe-9024-27e5b7757bb0>
3.515625
973
Content Listing
Science & Tech.
31.80223
95,609,189
Until the 1990s, it was generally accepted that medicines were first developed for adults and their use in children was investigated later, if at all. One of the main tasks of hospital pharmacies was the manufacturing of child-appropriate formulations in a more or less makeshift way. The first change came in 1997 with U.S. legislation that rewarded manufacturers for conducting voluntary pediatric research. Ten years later, the European Union passed legislation that required manufacturers to discuss all pediatric aspects, including formulations, with the regulatory authorities as a condition of starting the registration procedure. As a consequence, manufacturers must now cover all age groups, including the youngest ones. Until then, pediatric formulations had been more a focus for academic researchers; through the changed regulatory environment, there is now a sudden high commercial demand for age-appropriate formulations. This book begins by highlighting the anatomical, physiological and developmental differences between adults and children of different ages. It goes on to review the existing technologies and attempts to draw a roadmap to better, innovative formulations, in particular for oral administration. The regulatory, clinical, ethical and pharmaceutical framework is also addressed.

A reproduction of the Edexcel A-level mathematics formula booklet. Photographs have been added to the formula booklet to make it more pleasing to read, rather than just the plain formulae. All formulae in this booklet are available for free from Edexcel directly in a pdf format. M1 and M4 have no given formulae within the formula booklet. This book also contains statistical tables relevant to the S1 through S4 specifications. This is an economy black-and-white print of a book intended to be viewed in full colour, so the quality will not be of the same standard as that of other editions.
These steam tables have been calculated using the international standard for the thermodynamic properties of water and steam, the IAPWS-IF97 formulation, and the international standards for transport and other properties. In addition, the complete set of equations of IAPWS-IF97 is presented including all supplementary backward equations adopted by IAPWS between 2001 and 2005 for fast calculations of heat cycles, boilers, and steam turbines.
<urn:uuid:b2a63521-2ad3-40f4-b521-4052bd0c5f02>
2.734375
429
Knowledge Article
Science & Tech.
26.529736
95,609,198
Coupled Neural Networks

Multilayered feed-forward networks (perceptrons) are special cases of the general McCulloch-Pitts neural network with arbitrarily interconnected neurons. On the other hand, any general "recurrent" neural network can be considered to be represented by a feed-forward perceptron, albeit one with possibly very many layers. The reason for this strange equivalence is that the temporal evolution (3.5) of an arbitrary network constructed from binary neurons is necessarily periodic. This statement follows immediately from the observation that the N neurons can only assume 2^N configurations altogether, and hence some state of the network must recur after at most 2^N steps. Since only the present state of the network enters on the right-hand side of the evolution law (14.1), the subsequent evolution proceeds strictly periodically from that moment on. If one considers the neural network at a certain moment t = n as the nth layer of a perceptron (with all layers identical!), the temporal-evolution law can be viewed as the law governing the flow of information from one layer to the next. It is then sufficient to take into account only a finite number of such layers, just as many as there are time steps leading up to the first repetition of a network configuration.

Keywords: Original Network, Synaptic Coupling, Coupled Neural Network, Network Hierarchy, Prescribed Trajectory

1. It is common to treat only some of the neurons as receptor neurons; then the inputs I_i are nonzero only for those neurons.
2. If the fixed-point equation (14.3) has more than one solution, it may well depend on the start configuration which fixed point is reached. Here we do not consider this case of multi-stability further, and concentrate on a single task to be learned by the network.
3. The same complication occurs if lateral synaptic connections between the neurons contained in the same layer of a perceptron are allowed.
4. The existence of a stationary state of the assistant network is guaranteed if the original network has a fixed point, since its dynamics corresponds to that of the linearized original network in the vicinity of its fixed point, but run backwards in time. The matrix w_ik of the synaptic connections of the assistant network contains the transpose of the synaptic matrix w_ki of the original neural net, which means that the directions of all synapses have been reversed.
5. A similar differential equation was found to describe an electronic network of coupled nonlinear circuits.
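The periodicity argument above is easy to check numerically: iterate a small binary network synchronously and record visited states; by the pigeonhole argument a configuration must repeat within 2^N updates, after which the dynamics is strictly periodic. The sketch below assumes a randomly generated network (the size, weights, and zero thresholds are illustrative choices, not taken from the text).

```python
import itertools

import numpy as np

def evolve(state, w, theta):
    """One synchronous update of a binary McCulloch-Pitts network:
    s_i(t+1) = 1 if sum_k w_ik * s_k(t) exceeds the threshold theta_i."""
    return tuple(int(v) for v in (w @ np.array(state) > theta))

# Hypothetical small network: random couplings, zero thresholds.
rng = np.random.default_rng(0)
N = 6
w = rng.normal(size=(N, N))
theta = np.zeros(N)

state = tuple(int(v) for v in rng.integers(0, 2, N))
seen = {state: 0}                      # state -> first time it was visited
for t in itertools.count(1):
    state = evolve(state, w, theta)
    if state in seen:                  # guaranteed within 2**N steps
        period = t - seen[state]
        break
    seen[state] = t

assert t <= 2 ** N                     # pigeonhole bound on the first repeat
print("period:", period)
```

Unrolling the `seen` dictionary in time order is exactly the perceptron picture from the text: one identical layer per time step, up to the first repeated configuration.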
<urn:uuid:7c7c031a-fef4-4505-bd50-97ca183feb69>
2.828125
540
Truncated
Science & Tech.
41.651927
95,609,200
Chlorine-phase Partitioning at Melpitz near Leipzig

Hydrochloric acid (HCl) in the gas phase, and chloride and sodium in the particle phase, were measured for the first time with high time resolution and simultaneously with a number of other atmospheric components (in the gas and particulate phases) as well as meteorological parameters during a campaign at the research station Melpitz (Germany) in summer 2006 to study the Cl partitioning. On most of the 19 measurement days the HCl concentration showed a broad maximum around noon/afternoon (on average 0.1 μg m−3) and much lower concentrations during the night (0.01 μg m−3), with high correlation to HNO3. The data support that (1) HNO3 is responsible for Cl depletion, (2) there is an increase in the Na/Cl ratio due to faster HCl removal during continental air mass transport, and (3) on average 50% of total Cl is present as gas-phase HCl.

Keywords: atmospheric chemistry, chlorine, degassing, multiphase chemistry, particulate matter, partitioning, sea salt
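Point (3) can be made concrete with a small mass-balance calculation: the gas-phase fraction of total chlorine is the Cl mass carried by gaseous HCl divided by gaseous plus particulate Cl. The particulate chloride value below is an assumed illustrative number chosen to reproduce the ~50% split, not a measurement from the campaign.

```python
M_CL, M_HCL = 35.45, 36.46   # molar masses of Cl and HCl, g/mol

hcl_gas = 0.10        # gas-phase HCl, ug/m^3 (the noon maximum in the text)
cl_particle = 0.097   # particulate chloride, ug/m^3 (assumed for illustration)

cl_from_hcl = hcl_gas * M_CL / M_HCL          # Cl mass carried by gaseous HCl
frac_gas = cl_from_hcl / (cl_from_hcl + cl_particle)
print(f"gas-phase Cl fraction: {frac_gas:.2f}")   # about 0.5
```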
<urn:uuid:fbf66c38-b587-47be-8d40-92eb93fdd727>
2.546875
294
Academic Writing
Science & Tech.
47.759856
95,609,214
Chemical analysis of ancient rocks reveals earliest record yet of Earth's atmosphere

Chemical analysis of some of the world's oldest rocks, by an international team led by McGill University researchers, has provided the earliest record yet of Earth's atmosphere. The results show that the air 4 billion years ago was very similar to that more than a billion years later, when the atmosphere -- though it likely would have been lethal to oxygen-dependent humans -- supported a thriving microbial biosphere that ultimately gave rise to the diversity of life on Earth today.

The findings, published last week in the Proceedings of the National Academy of Sciences, could help scientists better understand how life originated and evolved on the planet. Until now, researchers have had to rely on widely varying computer models of the earliest atmosphere's characteristics. The new study builds on previous work by former McGill PhD student Jonathan O'Neil (now an assistant professor at the University of Ottawa) and McGill emeritus professor Don Francis, who reported in 2008 that rocks along the Hudson Bay coast in northern Quebec, in an area known as the Nuvvuagittuq Greenstone Belt, were deposited as sediments as many as 4.3 billion years ago -- a couple of hundred million years after the Earth formed.

In the new study, a team led by researchers from McGill's Earth and Planetary Sciences Department used mass spectrometry to measure the amounts of different isotopes of sulfur in rocks from the Nuvvuagittuq belt. The results enabled the scientists to determine that the sulfur in these rocks, which are at least 3.8 billion years old and possibly 500 million years older, had been cycled through the Earth's early atmosphere, showing the air at the time was extremely oxygen-poor compared to today, and may have had more methane and carbon dioxide.
"We found that the isotopic fingerprint of this atmospheric cycling looks just like similar fingerprints from rocks that are a billion to 2 billion years younger," said Emilie Thomassot, a former postdoctoral researcher at McGill and lead author of the paper. Emilie Thomassot is now with the Centre de Recherches Pétrographiques et Géochimiques (CRPG) in Nancy, France.

"Those younger rocks contain clear signs of microbial life and there are a couple of possible interpretations of our results," says Boswell Wing, an associate professor at McGill and co-author of the new study. "One interpretation is that biology controlled the composition of the atmosphere on early Earth, with similar microbial biospheres producing the same atmospheric gases from Earth's infancy to adolescence. We can't rule out, however, the possibility that the biosphere was decoupled from the atmosphere. In this case geology could have been the major player in setting the composition of ancient air, with massive volcanic eruptions producing gases that recurrently swamped out weak biological gas production."

The research team is now extending its work to try to tell whether the evidence supports the "biological" or the "geological" hypothesis -- or some combination of both. In either case, Emilie Thomassot says, the current study "demonstrates that the Nuvvuagittuq sediments record a memory of Earth's surface environment at the very dawn of our planet. And surprisingly, this memory seems compatible with a welcoming terrestrial surface for life". The team is now extending their investigation to early Archean sediments from other localities in Canada, such as the Labrador coast (see www.saglek-expedition.org). The research was supported by the Natural Sciences and Engineering Research Council of Canada and the Canadian Space Agency, and by France's Lorraine region and the Centre National de la Recherche Scientifique.
"Atmospheric record in the Hadean Eon from multiple sulfur isotope measurements in Nuvvuagittuq Greenstone Belt (Nunavik, Quebec)" E. Thomassot, J. O'Neil, D. Francis, P. Cartigny, B. A. Wing. Proceedings of the National Academy of Sciences, published online Jan. 5, 2015.

Cynthia Lee | newswise
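Multiple sulfur isotope studies of this kind typically report the mass-independent anomaly Δ33S: the deviation of δ33S from the mass-dependent fractionation line set by δ34S, which is the standard fingerprint of atmospheric photochemical cycling in an oxygen-poor atmosphere. A minimal sketch of the calculation follows; the input δ values are hypothetical, not measurements from the PNAS paper.

```python
def cap_delta_33S(d33, d34, theta=0.515):
    """Mass-independent sulfur anomaly (per mil): deviation of delta-33S
    from the mass-dependent fractionation line defined by delta-34S."""
    return d33 - 1000.0 * ((1.0 + d34 / 1000.0) ** theta - 1.0)

# Purely mass-dependent fractionation yields a zero anomaly:
d34 = 5.0
d33_md = 1000.0 * ((1.0 + d34 / 1000.0) ** 0.515 - 1.0)
assert abs(cap_delta_33S(d33_md, d34)) < 1e-12

# A nonzero result signals atmospheric (photochemical) sulfur cycling:
print(round(cap_delta_33S(d33=2.0, d34=3.0), 3))  # 0.456
```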
<urn:uuid:022a3484-5a8d-4cf4-9739-7decda688ce4>
3.421875
1,437
Content Listing
Science & Tech.
38.545441
95,609,220
Side-blotched lizard (genus Uta Baird & Girard, 1852; several species, see text)

Side-blotched lizards are lizards of the genus Uta. They are some of the most abundant and commonly observed lizards in the deserts of western North America. Their cycling among three color-based breeding patterns is well known and is best described in the common side-blotched lizard. They commonly grow to six inches, including the tail, with males normally being the larger sex. Males often have bright throat colors.

These lizards are prey for many desert species. Snakes, larger lizards, and birds all make formidable predators of side-blotched lizards. Larger lizard species, such as collared, leopard, and spiny lizards, and roadrunners are the main predators. In turn, side-blotched lizards eat arthropods such as insects, spiders, and occasionally scorpions. As a result of their high predation rate, these lizards are very prolific breeders. From April to June they breed, with the young emerging as early as late May. These inch-long young appear all through the summer and into September.

Side-blotched lizards are notable for having the highest number of distinct male and female morphs or "genders" within a species: three male and two female. Reproductively, the males have testes and the females have ovaries. However, they show a diversity of behaviors associated with reproduction, which are often referred to as "alternative reproductive tactics". Orange-throated males are "ultra-dominant, high testosterone" males who establish large territories and control areas that contain multiple females. Yellow stripe-throated males ("sneakers") do not defend a territory, but cluster on the fringes of orange-throated lizard territories and mate with the females on those territories while the orange-throat is absent, as the territory to defend is large.
Blue-throated males are less aggressive and guard only one female; they can fend off the yellow stripe-throated males but cannot withstand attacks by orange-throated males. Orange-throated females lay many small eggs and are very territorial. Yellow-throated females lay fewer, larger eggs and are more tolerant of each other. This is called the rock-paper-scissors effect, borrowed from the name of the playground game, because each morph has a mating advantage over one of the other morphs but not over the third.

The orange- and blue-throated males can sometimes be seen approaching a human "intruder". One speculation is that he could be giving the female(s) a chance to escape, but whether he is defending the female has not been documented. Another speculation is that he is highly motivated to engage whenever he sees movement on his territory, which he may be interpreting as a possible intruding male or another female.

The systematics and phylogeny of the side-blotched lizards is very confusing, with many local forms and morphs having been described as full species. Following the 1997 review of Upton & Murphy, which included new data from mtDNA cytochrome b and ATPase 6 sequences, the following species can be recognized pending further research:
- Eastern side-blotched lizard, U. stejnegeri - formerly included in U. stansburiana
- San Pedro Martir side-blotched lizard, U. palmeri
- Angel de la Guarda side-blotched lizard (undescribed species, formerly included in U. stansburiana)
- Salsipuedes side-blotched lizard, U. antiqua - formerly included in U. stansburiana
- Santa Catalina side-blotched lizard, U. squamata - sometimes included in U. stansburiana
- San Esteban side-blotched lizard (undescribed species, formerly included in U. stansburiana)
- San Pedro Nolasco side-blotched lizard, U. nolascensis
- Common side-blotched lizard, U. stansburiana
- Enchanted side-blotched lizard, U. encantadae - possibly belongs in U. stansburiana
- El Muerto side-blotched lizard, U. lowei - possibly belongs in U. stansburiana
- Swollen-nosed side-blotched lizard, U. tumidarostra - possibly belongs in U. stansburiana
- Socorro side-blotched lizard, U. auriculata - possibly belongs in U. stansburiana
- Clarion side-blotched lizard, U. clarionensis - possibly belongs in U. stansburiana
- Ornate side-blotched lizard, U. mannophora - possibly belongs in U. stansburiana

Uta stellata and U. concinna are now usually considered subspecies of U. stansburiana. U. encantadae, U. lowei, and U. tumidarostra might instead be subspecies of a distinct species (Las Encantadas side-blotched lizard). Similarly, U. auriculata and U. clarionensis might be subspecies of a single species, the Revillagigedo side-blotched lizard.

References:
- Sinervo, B. & Lively, C.M. (1996): "The rock–paper–scissors game and the evolution of alternative male strategies". Nature 380(6571): 240–243. doi:10.1038/380240a0
- Pennock, Lewis A.; Tinkle, Donald W. & Shaw, Margery W. (1968): Chromosome Number in the Lizard Genus Uta (Family Iguanidae). Chromosoma 24(4): 467–476. doi:10.1007/BF00285020
- Taborsky, M. & Brockmann, H.J. (2010): Alternative reproductive tactics and life history phenotypes. pp. 537–586 in P. Kappeler (ed.), Animal Behaviour: Evolution and Mechanisms. Springer, Berlin Heidelberg.
- Roughgarden, Joan (2004): Evolution's Rainbow: Diversity, Gender, and Sexuality in Nature and People. University of California Press. ISBN 0-520-24073-1. Especially chapter 6, Multiple Gender Families, pp. 90–93.
- Goodenough, J. (2010): Perspectives on Animal Behaviour. p. 70.
- Collins, Joseph T. (1991): Viewpoint: a new taxonomic arrangement for some North American amphibians and reptiles. Herpetological Review 22(2): 42–43.
- Grismer, L.L. (1994): Three new species of intertidal side-blotched lizards (Genus Uta) from the Gulf of California, Mexico. Herpetologica 50: 451–474.
- Murphy, Robert W. & Aguirre-León, Gustavo (2002): The Nonavian Reptiles: Origins and Evolution. In: Case, Ted & Cody, Martin (eds.): A New Island Biogeography of the Sea of Cortés: 181–220. Oxford University Press. ISBN 0-19-513346-3. Appendices 2–4.
- Oliver, James A. (1943): The Status of Uta ornata lateralis Boulenger. Copeia 1943(2): 97–107. doi:10.2307/1437774
- Upton, Darlene E. & Murphy, Robert W. (1997): Phylogeny of the side-blotched lizards (Phrynosomatidae: Uta) based on mtDNA sequences: support for a midpeninsular seaway in Baja California. Mol. Phyl. Evol. 8(1): 104–113. doi:10.1006/mpev.1996.0392

Wikisource has the text of the 1905 New International Encyclopedia article Uta.
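The cyclic advantage among the three male morphs (orange beats blue, blue beats yellow, yellow beats orange) is the textbook setting for replicator dynamics. The sketch below uses an illustrative zero-sum rock-paper-scissors payoff matrix; the payoff values and starting frequencies are assumptions for demonstration, not estimates from the lizard data.

```python
import numpy as np

# Zero-sum rock-paper-scissors payoffs among male morphs:
# orange beats blue, blue beats yellow, yellow ("sneaker") beats orange.
A = np.array([[0.0, 1.0, -1.0],   # orange vs (orange, blue, yellow)
              [-1.0, 0.0, 1.0],   # blue
              [1.0, -1.0, 0.0]])  # yellow

x = np.array([0.6, 0.3, 0.1])     # initial morph frequencies (illustrative)
dt = 0.01
for _ in range(2000):
    f = A @ x                      # fitness of each morph
    x = x + dt * x * (f - x @ f)   # discrete replicator step

assert abs(x.sum() - 1.0) < 1e-9           # frequencies stay on the simplex
assert np.linalg.norm(x - 1 / 3) > 0.1     # no convergence: morphs cycle
print(np.round(x, 3))
```

The frequencies orbit the interior equilibrium (1/3, 1/3, 1/3) rather than settling, which is the qualitative pattern Sinervo & Lively (1996) reported for the morph cycle in the field.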
Differential Interference Contrast: Wavefront Shear in Wollaston and Nomarski Prisms

Explore how Wollaston and Nomarski prisms act as a beamsplitter to separate or shear a polarized beam of light into two coherent and orthogonal components that pass through and interact with slightly different areas of a specimen in differential interference contrast (DIC) microscopy. This interactive tutorial examines differences between the location of the interference plane in both prism types, and how the position of the plane can be varied with changes to the optical axis orientation in a single prism wedge. The tutorial initializes with a standard Wollaston prism appearing in the window and a beam of linear (plane) polarized light entering through the bottom portion of the prism at a 45-degree angle. As the linearly polarized wave enters the lower portion of the Wollaston prism, it is split (or sheared) into two plane-polarized components that are oriented mutually perpendicular (orthogonal) to each other. One of the waves is designated the ordinary (O) wave and vibrates in a direction perpendicular to the optical axis of the prism, while the other is termed the extraordinary (E) wave with a vibration direction parallel to the prism optical axis. Ordinary wavefronts are represented by blue bars as they progress through the Wollaston prism, and extraordinary wavefronts are characterized by red bars. In addition, the optical axes of the individual prism wedges are indicated by an arrow (parallel to the browser window) in the lower wedge, or a bull's-eye target (perpendicular to the browser window) in the upper wedge. The interference plane is represented by a dashed line. In order to operate the tutorial, use the Prism Position slider to translate the compound prism back and forth (to the left and right) across the incident polarized light beam.
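The 45-degree entry geometry described above can be sketched numerically. The following Python snippet is only an illustration of the vector decomposition (none of it comes from the tutorial's actual applet code): a linearly polarized amplitude entering at 45 degrees to the wedge's optical axis resolves into equal-amplitude ordinary and extraordinary components.

```python
import math

def shear_components(amplitude, angle_deg=45.0):
    """Resolve a linearly polarized amplitude into O and E components
    relative to the prism optical axis."""
    a = math.radians(angle_deg)
    return amplitude * math.cos(a), amplitude * math.sin(a)

# At 45 degrees the two sheared components carry equal amplitude,
# which is why the polarizer in a DIC microscope is set at that angle.
e_o, e_e = shear_components(1.0)
assert abs(e_o - e_e) < 1e-12              # equal amplitudes
assert abs(e_o**2 + e_e**2 - 1.0) < 1e-12  # total intensity is conserved
```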
As the prism is moved to the right, the beam travels a greater distance through the lower prism half, affecting the relationship between the emerging ordinary and extraordinary wavefronts. When the prism is moved to the left, the beam travels through only a short distance in the lower prism, while traversing a larger portion of the upper wedge. Translating the Prism Type slider to the left produces a change in the orientation of the optical axis in the lower prism wedge, and also transforms the compound prism from a Wollaston to a Nomarski design. Simultaneously, the position of the interference plane is shifted from the central region of the compound prism, first into the upper wedge, and eventually to the exterior of the prism. As the prism type is altered, the trajectories of the ordinary and extraordinary wavefronts are modified to reflect how each of these orthogonal components traverse the prism. A Wollaston prism is composed of two geometrically identical wedges of quartz or calcite (which are birefringent, or doubly-refracting materials) cut in a way that their optical axes are oriented perpendicular when they are cemented together to form the prism. The polarizer in a DIC microscope (positioned beneath the Wollaston prism) is oriented so that linearly polarized light enters the prism at a 45-degree angle with respect to the optical axes of the two birefringent prism halves. Each of the sheared wavefronts experiences a slightly different refractive index that varies with the composition of the Wollaston prism. Prisms made of quartz, which is a positive uniaxial crystal, display a refractive index difference on the order of 0.6 percent. Because propagation speed of the waves through the crystal is inversely proportional to the refractive index, each wave travels at a slightly different velocity. In quartz, the ordinary ray travels faster than the extraordinary wave due to a slightly lower refractive index. 
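As a rough numerical check of the velocity argument, here is a sketch using approximate published refractive indices for quartz near 589 nm; the exact index values are an assumption on my part, since the text quotes only the roughly 0.6 percent difference.

```python
# Phase velocity of the ordinary and extraordinary waves in quartz,
# illustrating v = c / n.  The indices below are approximate values for
# quartz at 589 nm (an assumption; the tutorial itself gives only the
# ~0.6 percent index difference).
C = 299_792_458.0          # speed of light in vacuum, m/s

n_ordinary = 1.5443        # ordinary refractive index of quartz
n_extraordinary = 1.5534   # extraordinary refractive index of quartz

v_o = C / n_ordinary
v_e = C / n_extraordinary

# Quartz is a positive uniaxial crystal, so the ordinary wave is faster:
assert v_o > v_e
print(f"index difference: {(n_extraordinary - n_ordinary) / n_ordinary:.3%}")
```

The printed difference comes out near 0.6 percent, matching the figure quoted in the text.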
Alternatively, in negative uniaxial crystals (such as calcite), the extraordinary wave experiences a lower refractive index and propagates faster than the ordinary wave. In the tutorial, each wavefront is represented by either a blue (ordinary) or red (extraordinary) bar, as previously discussed. Because the virtual Wollaston prism in this tutorial is composed of quartz, the ordinary wave (blue bar) progresses through the lower portion of the crystal at a higher velocity than does the extraordinary wave (red bar). When the waves encounter the cemented surface residing in the interior of the Wollaston prism, they undergo angular wave splitting and each wave takes a slightly different course due to refraction at the interface. In addition, the orientation of the Wollaston prism crystalline axes reverses the refractive index differences at the boundary, resulting in the ordinary wave becoming the extraordinary wave as it enters the upper portion of the prism. A similar situation occurs with the extraordinary wave, which reverses roles to become the ordinary wave. This concept is demonstrated in the tutorial by changes in the wavefront bar color as each wave encounters the cemented prism junction (the blue wave becomes the red wave and vice versa). When the tutorial initializes, the incident polarized light wave enters the lower central portion of the Wollaston prism and is diverted into an ordinary and extraordinary wavefront. As the wave travels through the lower portion of the prism, the ordinary wavefront (blue bar) progresses toward the cemented boundary faster than the extraordinary wavefront (red bar). After passing through the boundary and reversing identities, the extraordinary wavefront (which was originally the ordinary wavefront) slows down while the ordinary wavefront (originally the extraordinary wavefront) gains velocity.
Eventually, the ordinary wavefront advances to an identical position with the extraordinary wavefront and the two emerge from the upper surface of the Wollaston prism simultaneously with zero path difference. Because each wavefront encounters an identical refractive index (that of the air) upon exiting the prism, the two waves travel at identical velocities on their way to the specimen. In order to modify the optical path difference between the sheared waves, the Wollaston prism can be shifted in a direction perpendicular to that of the incident polarized light beam using the Prism Position slider. When the slider is translated to the left, the Wollaston prism also shifts to the left, decreasing the distance traveled through the lower portion of the prism by the wavefronts. In this case, the ordinary wavefront does not advance to a significant degree over the extraordinary wavefront before the cemented boundary is encountered. Upon reversal of the wavefront identities in the upper portion of the prism, the new ordinary wavefront progresses faster than the extraordinary wavefront and exits the prism surface first. This creates an optical path difference that can be varied by adjusting the position of the Wollaston prism. Alternatively, when the prism is shifted to the right by the slider, the ordinary wavefront advances to a considerable degree with respect to the extraordinary wavefront in the lower portion of the prism. When the boundary is encountered, there is only a short distance for each wave to travel before exiting the prism. In this case, the new extraordinary wavefront (previously the ordinary wavefront in the lower prism section) is far ahead of the ordinary wavefront and encounters the upper surface of the Wollaston prism first. The net result is an optical path difference that is opposite of the one demonstrated when the Wollaston prism is shifted to the left. 
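The push-pull behaviour of the path difference described above can be captured in a toy model. The wedge angle and index difference below are illustrative assumptions, not values from the tutorial; the point is only that the optical path difference (OPD) is zero for a centred prism and changes sign as the prism is shifted left or right.

```python
# Simplified model of the OPD introduced by translating a Wollaston prism
# across the beam.  Because the O and E roles swap at the cemented boundary,
# the net OPD depends on the difference between the distances travelled in
# the two wedges, giving (in the small-angle limit)
#     OPD = 2 * x * delta_n * tan(theta)
# for a lateral offset x from the prism centre.  Numbers are illustrative.
import math

def wollaston_opd(x_mm, wedge_angle_deg=1.0, delta_n=0.009):
    """Optical path difference (mm) for a lateral prism offset x_mm."""
    theta = math.radians(wedge_angle_deg)
    return 2.0 * x_mm * delta_n * math.tan(theta)

# Centred prism -> zero path difference; shifting left vs right gives
# equal and opposite OPD, exactly the behaviour described in the text.
assert wollaston_opd(0.0) == 0.0
assert abs(wollaston_opd(2.0) + wollaston_opd(-2.0)) < 1e-15
```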
The Nomarski prism, like a Wollaston prism, consists of two optical quartz wedges cemented together at the hypotenuse. One of the wedges is identical to a conventional Wollaston quartz wedge and has the optical axis oriented parallel to the surface of the prism. However, the second wedge is modified by cutting the quartz crystal in such a manner that the optical axis is oriented obliquely with respect to the flat surface of the prism. When the wedges are combined to form a birefringent compound prism, the focal plane (and interference fringes produced when polarized light passes through the prism) lies outside the prism plate, as described above and illustrated in Figure 1. This effect occurs because shear now takes place at the air-quartz interface (Figure 1(b)), and refraction at the interface between the quartz wedges causes the sheared wavefronts to converge with a crossover point outside the prism. The actual position of the Nomarski prism focal plane can be adjusted over a range of several millimeters by altering the oblique angle of the optical axis in the second quartz wedge utilized to construct the prism (using the Prism Type slider in the tutorial). Although Nomarski prisms are widely employed as objective prisms in modern differential interference contrast microscopes, there are fewer spatial constraints for condenser prisms, which can often be positioned precisely within the aperture plane. Therefore, a conventional Wollaston prism can sometimes be inserted into the microscope condenser, but in many cases, a Nomarski prism is used instead. When a Nomarski prism is utilized in the condenser, the prism is designed to produce an interference plane that is located much closer to the prism than those constructed for use with objectives. As a result, aside from being mounted in frames having different geometries, the two Nomarski prisms found in modern DIC microscopes are cut differently and are not interchangeable. 
In summary, for differential interference contrast microscopy, the condenser prism (also referred to as a secondary, auxiliary, or compensating compound prism) acts as a primary beamsplitter to shear the polarized wavefront, while the objective prism (the principal prism) recombines the separated waves and regulates the degree of retardation between the ordinary and extraordinary wavefronts. The degree of shear for a particular Wollaston or Nomarski prism is set by the manufacturer and must coincide with that of a matching beam combining prism, located at the (effective) objective rear focal plane. Each sheared pair of light rays will pass through the condenser and be refracted by the lens elements so that the two beams travel parallel to each other as they leave the condenser and pass through the specimen. The beams are actually separated by a very small distance that is beneath the resolving power of the objective (the beam separation or shear distance, which usually ranges between 0.15 and 0.6 micrometers, is greatly exaggerated in the tutorial). Because the individual light beams are derived from the same source prior to being sheared by a Wollaston or Nomarski prism, they are coherent and capable of interference. After leaving the condenser, the sheared light beams pass through closely adjacent regions of the specimen, which often induces an optical path difference between the two beams due to localized refractive index and thickness variations. Light beams exiting the specimen are captured by the objective and brought into focus at the rear focal plane, where a second Wollaston or Nomarski prism is strategically placed to recombine the beams into a common path. The paired light beams, still polarized and oriented with vibration directions that are perpendicular, next pass through a second polarizer (the analyzer). 
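To see why the shear stays invisible, it can be compared against the diffraction-limited resolution of the objective. The wavelength and numerical aperture below are assumed example values (the text gives only the 0.15 to 0.6 micrometer shear range); larger shears in that range are paired with lower-NA objectives, whose resolution limit is correspondingly coarser.

```python
# Compare the DIC shear distance with the Abbe resolution limit
# d = lambda / (2 * NA).  Wavelength and NA are assumed example values.
wavelength_um = 0.55       # green light (assumed)
numerical_aperture = 1.3   # high-NA oil-immersion objective (assumed)

abbe_limit_um = wavelength_um / (2 * numerical_aperture)
print(f"Abbe limit: {abbe_limit_um:.3f} um")

# A 0.15 um shear, typical for a high-NA objective, is unresolved:
assert 0.15 < abbe_limit_um
```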
The analyzer has a polarization plane that is crossed with respect to the first polarizer and is oriented at a 45-degree angle to the beams exiting the second prism. Components of the light beams vibrating in the polarization plane of the analyzer are able to recombine and interfere to form the image observed in the microscope eyepieces or captured by a traditional or CCD camera system.

Douglas B. Murphy - Department of Cell Biology and Anatomy and Microscope Facility, Johns Hopkins University School of Medicine, 725 N. Wolfe Street, 107 WBSB, Baltimore, Maryland 21205.
Jan Hinsch - Leica Microsystems, Inc., 110 Commerce Drive, Allendale, New Jersey, 07401.
Edward D. Salmon - Department of Cell Biology, The University of North Carolina, Chapel Hill, North Carolina 27599.
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Matthew J. Parry-Hill, Robert T. Sutter, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.

© 1998-2018 by Michael W. Davidson and The Florida State University. All Rights Reserved. Last modification: Thursday, Feb 25, 2016 at 05:46 PM
Might have Minkowski discovered the cause of gravitation before Einstein? There are two reasons for asking such an apparently unanswerable question. First, Max Born's recollections of what Minkowski had told him about his research on the physical meaning of the Lorentz transformations and the fact that Minkowski had created the full-blown four-dimensional mathematical formalism of spacetime physics before the end of 1907 (which could have been highly improbable if Minkowski had not been developing his own ideas), both indicate that Minkowski might have arrived at the notion of spacetime independently of Poincare (who saw it as nothing more than a mathematical space) and at a deeper understanding of the basic ideas of special relativity (which Einstein merely postulated) independently of Einstein. So, had he lived longer, Minkowski might have employed successfully his program of regarding four-dimensional physics as spacetime geometry to gravitation as well. Moreover, Hilbert (Minkowski's closest colleague and friend) had derived the equations of general relativity simultaneously with Einstein. Second, even if Einstein had arrived at what is today called Einstein's general relativity before Minkowski, Minkowski would have certainly reformulated it in terms of his program of geometrizing physics and might have represented gravitation fully as the manifestation of the non-Euclidean geometry of spacetime (Einstein regarded the geometrical representation of gravitation as pure mathematics) exactly like he reformulated Einstein's special relativity in terms of spacetime. [NOTE: The text of two explanations (on the reality of spacetime and on the status of gravitational energy) was taken from my previous paper "Is Gravitation Interaction or just Curved-Spacetime Geometry?" because those explanations had to be repeated in the present paper anyway.]
It’s the season of mixed precipitation. The spring temperature profile is conducive to an icy mix and the last few systems that rolled by didn’t disappoint. I was walking to my office during one of those “events” when I overheard a co-worker comment on the freezing rain that was hitting the window. I knew right away that we were getting ice pellets and not freezing rain. Ice pellets are small, translucent balls of ice. They are smaller than hailstones, which fall from thunderstorms rather than during the winter or early spring. Ice pellets form when the layer of cold air (below freezing) close to the ground extends upward far enough so that raindrops that fall from the cloud freeze into little balls of ice before reaching the ground. Ice pellets often bounce when they hit the ground or other solid objects, and make a higher-pitched "tap" sound when striking objects like jackets, windshields, and dried leaves. Freezing rain, on the other hand, forms when the layer of cold air close to the ground is very shallow. The raindrops that fall from the cloud don’t have time to change to ice before they reach the ground. The droplets become supercooled — meaning they remain in a liquid state below 0 °C. Those raindrops freeze when they come into contact with cold objects on or near the ground. I’d like to say that we won’t get any more of either, but I’d be fibbing.
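For fun, the mechanism described above can be written as a toy decision rule. The 400 m depth cutoff here is purely illustrative, not a meteorological standard; real forecasts look at the whole temperature profile.

```python
# Toy classifier for the mechanism in the post: ice pellets need a deep
# sub-freezing layer (drops refreeze fully on the way down), freezing rain
# a shallow one (drops stay supercooled and freeze on contact).
# The 400 m cutoff is an illustrative assumption.
def precipitation_type(cold_layer_depth_m, surface_temp_c):
    if surface_temp_c > 0:
        return "rain"
    if cold_layer_depth_m > 400:
        return "ice pellets"      # drops refreeze before landing
    return "freezing rain"        # supercooled drops freeze on contact

assert precipitation_type(800, -3) == "ice pellets"
assert precipitation_type(200, -1) == "freezing rain"
```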
Techniques for detecting X-rays and gamma-rays - presentation transcript:

Pair production: Creation of an elementary particle and its antiparticle from a photon. Occurs only if enough energy is present to create the pair - at least the total rest-mass energy of the two particles.

The positron and electron, after creation, produce trails of ionisation until eventually they have expended all their energy. If the positron comes to rest near an electron, it will annihilate to create a pair of 511 keV gamma rays. This is a good calibration source for any detector.

Bringing this all together: For a fixed Z the photoelectric effect is dominant at low photon energies. Pair production is dominant at high energies. Mid-energy interactions favour Compton scattering.

Mass absorption coefficient: The mass absorption coefficient is a measurement of how strongly a substance absorbs photons at a given energy. Monoenergetic photons with an incident intensity I0, penetrating a layer of material with mass thickness x, mass absorption coefficient mu/rho, and density rho, emerge with intensity I given by the exponential attenuation law: I = I0 exp(-(mu/rho) x), where the mass thickness of a layer of physical thickness t is x = rho t.

Total mass attenuation coefficient: This graph shows both absorption and scattering processes. Scattering is a process in which a portion of the photons coming from a source scatter (bounce) off molecules and other small particles in the atmosphere, or off the target in the case of a detector.
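The exponential attenuation law can be evaluated with a few lines of Python. The coefficient and density below are illustrative values chosen for a clean example, not numbers from the NIST tables.

```python
# Exponential attenuation: monoenergetic photons of incident intensity I0
# traversing mass thickness x (g/cm^2) with mass attenuation coefficient
# mu/rho (cm^2/g) emerge with I = I0 * exp(-(mu/rho) * x).
import math

def transmitted_intensity(i0, mu_rho_cm2_g, rho_g_cm3, thickness_cm):
    """Intensity after a layer of density rho and physical thickness t."""
    mass_thickness = rho_g_cm3 * thickness_cm      # x = rho * t, in g/cm^2
    return i0 * math.exp(-mu_rho_cm2_g * mass_thickness)

# A layer exactly one mean free path thick transmits 1/e of the beam:
i = transmitted_intensity(1.0, mu_rho_cm2_g=0.2, rho_g_cm3=2.5, thickness_cm=2.0)
assert abs(i - math.exp(-1.0)) < 1e-12
```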
The line in the graph indicating the total absorption coefficient only receives a contribution from absorption processes and disregards any contribution due to scattering. The total attenuation coefficient receives contributions from both scattering and absorption processes.

Rayleigh scattering: Rayleigh scattering is the scattering of photons by tiny particles. It can occur when photons pass through solids or liquids, but most often in gases. (In the case of our blue sky, it is these scattered photons that give the sky its brightness and colour.)

A line for Compton scattering and one for Compton absorption - why? In the Compton effect the interacting photon passes on in a new direction (Compton scattering), having given up part of its energy to an electron it smacks into (Compton absorption). This means that the photon will have deviated from the direct line of sight between source and detector.

So do I use the total attenuation or absorption coefficient? If you have a detector with which you want to measure the photon rate from a distant source, then you want the TOTAL attenuation coefficient, as you are interested in the absolute intensity arriving at your target. If you would just like to know the intensity as a consequence of absorption only, ignoring the effect of scattering completely, then you want the total absorption coefficient at the energy of interest.
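The energy handed to the electron in a Compton interaction follows the standard Compton formula, which is easy to check numerically; this sketch is a generic illustration, not part of the original slides.

```python
# Compton scattering: the scattered photon keeps
#     E' = E / (1 + (E / m_e c^2) * (1 - cos(theta))),
# so part of the photon's energy goes to the electron ("Compton absorption")
# while the rest continues in a new direction ("Compton scattering").
import math

ELECTRON_REST_ENERGY_KEV = 511.0

def compton_scattered_energy(e_kev, theta_rad):
    return e_kev / (1 + (e_kev / ELECTRON_REST_ENERGY_KEV)
                    * (1 - math.cos(theta_rad)))

# Forward scattering transfers no energy; backscatter transfers the most.
assert compton_scattered_energy(511.0, 0.0) == 511.0
back = compton_scattered_energy(511.0, math.pi)
assert abs(back - 511.0 / 3.0) < 1e-9  # a 511 keV photon backscatters to ~170 keV
```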
On the NIST website, the total mass attenuation coefficient and the total mass absorption coefficient are tabulated separately:
http://physics.nist.gov/PhysRefData/XrayMassCoef/tab3.html
http://physics.nist.gov/PhysRefData/XrayMassCoef/tab4.html

Detector types used to detect X-rays and gamma rays: Detectors are designed to allow photons to interact with some sort of target material, causing them to interact via the photoelectric effect, the Compton effect, or pair production.
- Photoelectric effect: the photon is absorbed by a target atom and a photoelectron is created. This shoots off in a specific direction and bounces into other atoms, creating a trail of ionisation along its path until it loses all its energy.
- Compton effect: the photon gradually loses its energy by momentum transfer to orbital electrons until all the original photon energy is lost.
- Pair production: the photon is absorbed and all its energy (minus the energy required to create the pair, i.e. their rest mass) is given in the form of kinetic energy to an electron and a positron, which then create ionisation trails.

Detectors then either measure this charge directly or measure it indirectly, for example by recording the light produced as the ionised atoms and electrons recombine. We will look at the following detector types:
- Proportional counter, semiconductor detector, MPPC - measure the charge produced
- Microchannel plate, scintillation counter - measure light from scintillation
Some detectors can only measure the total number of photons, others can record the number as a function of energy (creating a spectrum), and some can record the path of the photon as it interacts.

Proportional counters: Gas (e.g.
argon + 10% methane) filled container with a central electrode to attract the charge (ionisation) created by the photon. Primary electrons "see" an increasing electric field en route to the central anode wire. The electrons speed up and create additional ionisation by colliding with atoms in their path. These "new" (secondary) electrons collide with other atoms, creating a cascade that amplifies the original signal. The electron cloud reaches the central wire, where a current pulse is recorded by basic and rugged electronics. If all the photon energy is given up inside the chamber, then the size of the pulse is proportional to the energy of the original photon.

Some proportional counters are now capable of discerning the path of ionisation through the gas: rectangular boxes containing grids of orthogonal wires (multi-wire proportional chambers). The ionisation track is drifted within an electric field toward the grids. Upon arrival it creates a signal on both sets of wires, and triangulation provides the x and y coordinates. The z coordinate is determined by measuring the drift time from the ionisation event to the wires, so the path of ionisation can be reconstructed in x, y, z coordinates. (Video: http://www.youtube.com/watch?v=cAIKp0cu7UM)

Scintillation detectors: Scintillators produce light when either ionising particles pass through them or photons interact with target atoms. They can be gases, liquids, or solids (but they must always be transparent to the light they produce!). An incident photon interacts to create high-speed electrons, which go on to create a path of ionisation through the target. Look at the table below and appreciate that the original X-ray photon energy must be suitably high to enable a respectable output signal from the PMT.
Putting these values together, sodium iodide scintillation detectors need around 230 eV of incident photon energy to create one electron that will then pass to the first dynode.
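The ~230 eV figure can be reproduced from a rough photoelectron budget. All three factors below are typical textbook assumptions (not the values from the slide's table, which is not reproduced here): scintillation light yield, light collection onto the photocathode, and photocathode quantum efficiency.

```python
# Rough photoelectron budget for a NaI(Tl) scintillation detector,
# recovering the ~230 eV-per-first-dynode-electron figure quoted above.
# All three factors are assumed typical values, not measured ones.
light_yield_photons_per_kev = 38.0   # NaI(Tl) scintillation yield
light_collection_efficiency = 0.45   # crystal-to-photocathode transfer
pmt_quantum_efficiency = 0.25        # photocathode QE near 415 nm

electrons_per_kev = (light_yield_photons_per_kev
                     * light_collection_efficiency
                     * pmt_quantum_efficiency)
ev_per_electron = 1000.0 / electrons_per_kev
print(f"~{ev_per_electron:.0f} eV of deposited energy per first-dynode electron")
```

With these assumptions the result lands near 230 eV per photoelectron, consistent with the text.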
Personal computers have made life convenient in many ways, but what about their impacts on the environment due to production, use and disposal? Manufacturing computers requires prodigious quantities of fossil fuels, toxic chemicals and water. Rapid improvements in performance mean we often buy a new machine every 1-3 years, which adds up to mountains of waste computers. How should societies respond to manage these environmental impacts? This volume addresses the environmental impacts and management of computers through a set of analyses on issues ranging from environmental assessment, technologies for recycling, consumer behaviour, strategies of computer manufacturing firms, and government policies. One conclusion is that extending the lifespan of computers (e.g. through reselling) is an environmentally and economically effective strategy that deserves more attention from governments, firms and the general public. Series: Eco-Efficiency in Industry and Science Number Of Pages: 285 Published: 15th September 2007 Publisher: Springer-Verlag New York Inc. Country of Publication: US Dimensions (cm): 24.0 x 16.0 x 1.32 Weight (kg): 0.92
The new technique provides a detailed look into processes that until now were proven but never visualized. The more detailed view of DNA being made into RNA in a single cell will help answer questions about how much of a gene is made over time and how much that level varies from cell to cell. Insight into how genes work at a more precise level ultimately advances understanding of disease mechanisms that trigger cancer, for example, which arise when genes no longer work at their correct capacity or time. “The classic textbook cartoon illustration of a single strand of DNA with little mRNA pieces coming off it can now be shown with real photographs,” explained Daniel Zenklusen, Ph.D., an Einstein post-doctoral fellow and first author of the study. The technique was developed in the laboratory of Robert Singer, Ph.D., co-chair and professor of anatomy and structural biology at Einstein. The new technology is a powerful refinement of fluorescent in-situ hybridization (FISH), developed in Dr. Singer’s laboratory more than 26 years ago. FISH is now a widely used research tool to study gene activation; that is, how much a gene has been “turned on” in groups of cells. FISH is also used in genetic counseling to detect the presence of gene features that diagnose conditions including Down’s syndrome or Prader-Willi syndrome. Advances in fluorescence, microscopy and data analysis enabled the more powerful FISH application described in the paper. Until this work, FISH could only be used to look at genes or their messages that are present at very high levels and only in tissues, not at the smaller level of the cell. However, this is the first time that all the individual mRNA molecules within single cells can be counted. Dr. Singer’s “single RNA counting” technique has the potential to change some fundamental theories about how genes are regulated. As Dr.
Singer explained, “our study using this new technique has already generated enough new ideas to keep students busy for the next 10 years.” One of the most important findings of this study was that “housekeeping” genes, which all cells need to survive, are not always expressed at a constant level. Variability, however, is restricted to a narrow range that seems to be characteristic for housekeeping genes. Combining single molecule measurement with mathematical modeling allowed the team to precisely determine how variability is controlled. This showed that unlike the findings of previous studies, housekeeping genes are not transcribed by transcriptional bursts but at a fairly constant rate. Bursting expression, however, is found for special classes for genes where higher variability might be an advantage for the cell. The next step is to see if this continuous/non-bursting theory of housekeeping gene control applies also to human cells. The work from Dr. Singer’s group was performed in yeast cells. Dr. Singer believes the approach of looking at biological processes in natural contexts (rather than in a test tube) at a single cell level reveals details that can advance the field of cancer and other disease research. “Cancer derives from a single cell. So current microarray technologies that are used on a tissue-wide level and are based on “grinding up a tumor” may be a good first step at directing us where to focus, but they may need to be combined with newer techniques that provide the precision to home in on single cells,” Dr. Singer said.The study, “Single-RNA Counting Reveals Alternative Modes of Gene Expression in Yeast,” by Daniel Zenklusen, Daniel R. Larson and Robert H. Singer appears in the November 16, 2008 online edition of Nature Structural and Molecular Biology. 
http://www.nature.com/nsmb/journal/vaop/ncurrent/index.html

About Albert Einstein College of Medicine of Yeshiva University

Michael Heller | Newswise Science News
Name, symbol: Hydrogen-2, 2H or D
Natural abundance: 0.0115% (Earth)
Isotope mass: 2.01410178 u
Excess energy: 13135.720 ± 0.001 keV
Binding energy: 2224.52 ± 0.20 keV

Deuterium (or hydrogen-2, symbol 2H or D, also known as heavy hydrogen) is one of two stable isotopes of hydrogen (the other being protium, or hydrogen-1). The nucleus of deuterium, called a deuteron, contains one proton and one neutron, whereas the far more common protium has no neutron in the nucleus. Deuterium has a natural abundance in Earth's oceans of about one atom in 6420 of hydrogen. Thus deuterium accounts for approximately 0.0156% (or, on a mass basis, 0.0312%) of all the naturally occurring hydrogen in the oceans, while protium accounts for more than 99.98%. The abundance of deuterium changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water). The deuterium isotope's name is formed from the Greek deuteros, meaning "second", to denote the two particles composing the nucleus. Deuterium was discovered and named in 1931 by Harold Urey. When the neutron was discovered in 1932, this made the nuclear structure of deuterium obvious, and Urey won the Nobel Prize in 1934. Soon after deuterium's discovery, Urey and others produced samples of "heavy water" in which the deuterium content had been highly concentrated. Deuterium is destroyed in the interiors of stars faster than it is produced. Other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, as the basic or primordial ratio of hydrogen-1 to deuterium (about 26 atoms of deuterium per million hydrogen atoms) has its origin from that time. This is the ratio found in the gas giant planets, such as Jupiter (see references 2, 3 and 4). However, other astronomical bodies are found to have different ratios of deuterium to hydrogen-1.
This is thought to be a result of natural isotope separation processes that occur from solar heating of ices in comets. Like the water cycle in Earth's weather, such heating processes may enrich deuterium with respect to protium. The analysis of deuterium/protium ratios in comets found results very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per million hydrogens). This reinforces theories that much of Earth's ocean water is of cometary origin. The deuterium/protium ratio of the comet 67P/Churyumov-Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth's water. This figure is the highest yet measured in a comet. Deuterium/protium ratios thus continue to be an active topic of research in both astronomy and climatology.

Differences from common hydrogen (protium)

Deuterium is frequently represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, although 2H is preferred. A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (1H) (deuterium has a mass of 2.014102 u, compared to the mean hydrogen atomic weight of 1.007947 u, and protium's mass of 1.007825 u) confers non-negligible chemical dissimilarities with protium-containing compounds, whereas the isotope weight ratios within other chemical elements are largely insignificant in this regard.

In quantum mechanics the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus.
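The abundance figures quoted above can be cross-checked with a few lines of arithmetic. This is an illustrative sanity check, not part of the original article; the input numbers (1 in 6420, and the roughly threefold enrichment at 67P) are taken from the text itself.

```python
# Deuterium abundance sanity checks (illustrative, using figures from the text).

# One D atom per ~6420 H atoms in ocean water (VSMOW):
atom_fraction = 1 / 6420
atoms_per_million = atom_fraction * 1e6      # ~156 ppm, as quoted

# On a mass basis each D atom weighs roughly twice a protium atom,
# so the mass fraction is about double the atom fraction:
mass_fraction = 2 * atom_fraction            # ~0.0312%

# Comet 67P/Churyumov-Gerasimenko: D/H about three times the ocean value
d_to_h_67p = 3 * atoms_per_million           # ~470 ppm
```

Running this reproduces the 156 ppm and 0.0156%/0.0312% figures quoted in the article to within rounding.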
For the hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation and the Dirac equation for calculating atomic energy levels. The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of the mass of the electron to the atomic nucleus. For hydrogen, this amount is about 1837/1836, or 1.000545, and for deuterium it is even smaller: 3671/3670, or 1.0002725. The energies of spectroscopic lines for deuterium and light hydrogen (hydrogen-1) therefore differ by the ratio of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen by a factor of 1.000272. In astronomical observation, this corresponds to a blue Doppler shift of 0.000272 times the speed of light, or 81.6 km/s. The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, and in rotational spectra such as microwave spectroscopy, because the reduced mass of deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, although deuterium NMR in its own right is also possible.

Big Bang nucleosynthesis

Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang.
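The reduced-mass isotope shift described above is easy to reproduce numerically. The sketch below assumes round-number nucleon masses in units of the electron mass (1836.15 for the proton, 3670.48 for the deuteron); these constants are my additions, not figures from the article.

```python
# Reduced-mass isotope shift of hydrogen spectral lines (illustrative).
# Bohr-model energy levels scale with the reduced mass mu = m_e*M / (m_e + M).

m_e = 1.0                  # electron mass (in units of m_e)
M_p = 1836.15              # proton mass in electron masses (assumed constant)
M_d = 3670.48              # deuteron mass in electron masses (assumed constant)

def mu(M):
    """Reduced mass of an electron bound to a nucleus of mass M."""
    return m_e * M / (m_e + M)

# Ratio of deuterium to protium transition energies:
energy_ratio = mu(M_d) / mu(M_p)       # ~1.000272

# The same fractional shift expressed as an equivalent Doppler velocity:
c = 299792.458                         # speed of light, km/s
v_equiv = (energy_ratio - 1) * c       # ~81.6 km/s, as quoted in the text
```

The computed ratio and equivalent velocity match the 1.000272 and 81.6 km/s figures in the paragraph above.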
Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Through much of the few minutes after the big bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused to helium). However, very shortly thereafter, at twenty minutes after the Big Bang, the universe became too cool for any further nuclear fusion and nucleosynthesis to occur. At this point, the elemental abundances were nearly fixed, with the only change as some of the radioactive products of big bang nucleosynthesis (such as tritium) decay. 
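The neutron/proton bookkeeping above can be sketched with textbook estimates. The freeze-out temperature (~0.8 MeV) and the neutron-proton mass difference (1.293 MeV) are standard values I am supplying, not figures from the article; only the final 1-in-7 ratio appears in the text.

```python
import math

# Rough neutron/proton bookkeeping for Big Bang nucleosynthesis
# (illustrative; the constants below are textbook estimates, assumed here).

delta_m = 1.293                            # n-p mass difference, MeV (assumed)
T_freeze = 0.8                             # approx. weak freeze-out temperature, MeV (assumed)
n_over_p = math.exp(-delta_m / T_freeze)   # ~0.2, i.e. roughly 1 neutron per 5 protons

# Free-neutron decay during the deuterium bottleneck lowers this to about
# 1 neutron per 7 protons, the ratio quoted in the text:
n_over_p_bbn = 1 / 7

# If essentially all surviving neutrons end up bound in helium-4,
# the resulting He-4 mass fraction is:
Y_he4 = 2 * n_over_p_bbn / (1 + n_over_p_bbn)   # = 0.25
```

The 25% helium-4 mass fraction that falls out of the 1:7 ratio is the classic Big Bang nucleosynthesis prediction.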
The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (there are no stable nuclei with mass numbers of five or eight), meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as our Sun. Deuterium occurs in trace amounts naturally as deuterium gas, written 2H2 or D2, but most natural occurrence in the universe is bonded with a typical 1H atom, forming a gas called hydrogen deuteride (HD or 1H2H). The existence of deuterium on Earth, elsewhere in the solar system (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there are no known natural processes other than Big Bang nucleosynthesis that might have produced deuterium at anything close to its observed natural abundance (deuterium is produced by the rare cluster decay, and by occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources). There is thought to be little deuterium in the interior of the Sun and other stars, as at temperatures there nuclear fusion reactions that consume deuterium happen much faster than the proton-proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of deuterium seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it.
The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang theory over the Steady State theory of the universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 bya. Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in our galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space. The abundance of deuterium in the atmosphere of Jupiter has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms; ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter, and this abundance is thought to represent close to the primordial solar system ratio. This is about 17% of the terrestrial deuterium-to-hydrogen ratio of 156 deuterium atoms per million hydrogen atoms. Cometary bodies such as Comet Hale-Bopp and Halley's Comet have been measured to contain relatively more deuterium (about 200 atoms D per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater.
The recent measurement of deuterium amounts of 161 atoms D per million hydrogen atoms in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans, emphasizes the theory that Earth's surface water may be largely comet-derived. Most recently, the deuterium/protium (D/H) ratio of 67P/Churyumov-Gerasimenko as measured by Rosetta is about three times that of Earth's water, a notably high figure. This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin. Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus. Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally-occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods. In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process. The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design. Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurised heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants (seven in operation): six (five in operation) based on D-H exchange in ammonia gas, and two that extract deuterium from natural water in a process using hydrogen sulphide gas at high pressure. While India is self-sufficient in heavy water for its own use, India now also exports reactor-grade heavy water.
The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the hydrogen analogs. D2O, for example, is more viscous than H2O. Chemically, there are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to normal hydrogen, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in hydrogen, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that deuterium is harder to remove from carbon than hydrogen. Deuterium can replace the normal hydrogen in water molecules to form heavy water (D2O), which is about 10.6% denser than normal water (so that ice made from it sinks in ordinary water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a 70 kg (154 lb) person might drink 4.8 liters (1.2 gallons) of heavy water without serious consequences. Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals. The deuteron has spin 1 (a "triplet" state) and is thus a boson. The NMR frequency of deuterium is significantly different from that of common light hydrogen.
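The ~10.6% density excess of heavy water can be rationalized from molar masses alone. A minimal sketch, assuming standard atomic masses (my values, not from the article); the small gap between the pure mass ratio (~11.2%) and the observed density excess (~10.6%) comes from D2O's slightly different molar volume.

```python
# Why heavy water is ~11% denser than ordinary water (mass effect only; illustrative).

M_H, M_D, M_O = 1.008, 2.014, 15.999   # atomic masses in g/mol (assumed)

m_h2o = 2 * M_H + M_O                  # ~18.015 g/mol
m_d2o = 2 * M_D + M_O                  # ~20.027 g/mol
mass_ratio = m_d2o / m_h2o             # ~1.112

# Observed density excess (~10.6%) is slightly below this pure mass ratio
# because the molar volume of D2O differs marginally from that of H2O.
excess_pct = (mass_ratio - 1) * 100    # ~11.2
```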
Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry. The triplet deuteron is barely bound at EB = 2.23 MeV, and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of ~60 keV. There is no such stable particle, but this virtual particle transiently exists during neutron-proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton.

Nuclear properties (the deuteron)

Deuteron mass and radius

The nucleus of deuterium is called a deuteron. It has a mass of 2.013553212745(40) u. The charge radius of the deuteron is 2.1413(25) fm. Like the proton radius, measurements using muonic deuterium produce a significantly smaller result: 2.12562(78) fm. This is 6σ less than the accepted CODATA 2014 value, measured using electrons, and confirms the unresolved proton charge radius anomaly.

Spin and energy

Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons (2H, 6Li, 10B, 14N, 180mTa; also, the long-lived radioactive nuclides 40K, 50V, 138La occur naturally). Most odd-odd nuclei are unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle, which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum.
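The deuteron's binding energy can be recovered from a mass-defect calculation. This is an illustrative cross-check; the proton, neutron, and deuteron masses and the u-to-MeV conversion are standard values I am supplying, and the result should land near the 2224.52 keV figure in the infobox.

```python
# Deuteron binding energy from the nuclear mass defect (illustrative check).

u_to_MeV = 931.494      # energy equivalent of 1 u, MeV (assumed constant)
m_p = 1.007276          # proton mass, u (assumed)
m_n = 1.008665          # neutron mass, u (assumed)
m_d = 2.013553          # deuteron mass, u (from the text)

# Binding energy = mass defect times c^2:
E_B = (m_p + m_n - m_d) * u_to_MeV     # ~2.22 MeV
```

The result agrees with both the "EB = 2.23 MeV" figure in the prose and the 2224.52 keV infobox value to within rounding of the input masses.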
But orbital angular momentum of either particle gives a lower binding energy for the system, primarily due to increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron nucleus to be unstable. The proton and neutron making up deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.

Isospin singlet state of the deuteron

Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has an electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T). Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron form an isospin doublet, with a "down" state (↓) being a neutron, and an "up" state (↑) being a proton. A pair of nucleons can be either in an antisymmetric isospin state, called a singlet, or in a symmetric state, called a triplet. The singlet is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. The latter two nuclei are not stable or nearly stable, and therefore neither is this type of deuterium (meaning that it is indeed a highly excited state of deuterium).

Approximated wavefunction of the deuteron

The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general).
Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative). The deuteron, being an isospin singlet, is antisymmetric under nucleon exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states:

- Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry.
- Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry.

In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l; the lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has s = 1, l = 0. In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore, the lowest possible energy state has s = 0, l = 1. Since s = 1 gives a stronger nuclear attraction, the deuterium ground state is in the s = 1, l = 0 state.
The same considerations lead to the possible states of an isospin triplet having s = 0, l = even or s = 1, l = odd. Thus the state of lowest energy has s = 1, l = 1, higher than that of the isospin singlet. The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin-orbit interaction that mixes different s and l states. That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as s = 1, l = 0 may become a state of s = 1, l = 2. Parity is still constant in time so these do not mix with odd l states (such as s = 0, l = 1). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the s = 1, l = 0 state and the s = 1, l = 2 state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore j = 1. This is the total spin of the deuterium nucleus. To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly l = 0 with some l = 2.

Magnetic and electric multipoles

To find the magnetic dipole moment of the deuteron, one sums the orbital and spin contributions of both nucleons, μ = (g(l)p l_p + g(s)p s_p + g(l)n l_n + g(s)n s_n) μN/ħ, where g(l) and g(s) are g-factors of the nucleons. Since the proton and neutron have different values for g(l) and g(s), one must separate their contributions. Each gets half of the deuterium orbital angular momentum l and spin s. One arrives at μ = (½ l (g(l)p + g(l)n) + ½ s (g(s)p + g(s)n)) μN/ħ, where subscripts p and n stand for the proton and neutron, and g(l)n = 0. For the s = 1, l = 0 state (j = 1), we obtain μ = ½ (g(s)p + g(s)n) μN = 0.879 μN. For the s = 1, l = 2 state (j = 1), the same projection gives μ = 0.310 μN. The measured value of the deuterium magnetic dipole moment is 0.857 μN, which is 97.5% of the 0.879 μN value obtained by simply adding the moments of the proton and neutron.
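The "simple addition" value for the deuteron moment is easy to check numerically. A minimal sketch, assuming standard nucleon magnetic moments in nuclear magnetons (my values, not from the article):

```python
# Deuteron magnetic dipole moment in the s=1, l=0 approximation (illustrative).

mu_p = 2.7928               # proton magnetic moment, nuclear magnetons (assumed)
mu_n = -1.9130              # neutron magnetic moment, nuclear magnetons (assumed)

# Simple addition of the two spin moments (valid for s=1, l=0):
mu_simple = mu_p + mu_n     # ~0.880 nuclear magnetons

# Experimental value quoted in the text:
mu_measured = 0.857

# Fraction of the naive prediction actually observed:
fraction = mu_measured / mu_simple   # ~0.974, close to the 97.5% quoted
```

The shortfall of a few percent is exactly the l = 2 admixture discussed in the surrounding text.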
This suggests that the state of the deuterium is indeed, to a good approximation, the s = 1, l = 0 state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment. But the slightly lower experimental number than that which results from simple addition of proton and (negative) neutron moments shows that deuterium is actually a linear combination of mostly the s = 1, l = 0 state with a slight admixture of the s = 1, l = 2 state. The measured electric quadrupole moment of the deuterium is 0.2859 e·fm2. While the order of magnitude is reasonable, since the deuterium radius is of order of 1 femtometer (see above) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the l = 0 state (which is the dominant one) and does get a contribution from a term mixing the l = 0 and the l = 2 states, because the electric quadrupole operator does not commute with angular momentum. The latter contribution is dominant in the absence of a pure l = 0 contribution, but cannot be calculated without knowing the exact spatial form of the nucleons' wavefunction inside the deuterium. Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.

Applications

Deuterium has a number of commercial and scientific uses. These include:

Nuclear reactors

Deuterium is used in heavy water moderated fission reactors, usually as liquid D2O, to slow neutrons without the high neutron absorption of ordinary hydrogen. This is a common commercial use for larger amounts of deuterium. Experimentally, deuterium is the most common nuclide used in nuclear fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the D–T reaction.
There is an even higher-yield D–3He fusion reaction, though the breakeven point of D–3He is higher than that of most other fusion reactions; together with the scarcity of 3He, this makes it implausible as a practical power source until at least D–T and D–D fusion reactions have been performed on a commercial scale. However, commercial nuclear fusion is not yet an accomplished technology.

NMR spectroscopy

Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties, which differ from those of the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from those of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for light hydrogen. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl3) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference. Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (deuterium NMR). For example, the flexibility of the tail, which is a long hydrocarbon chain, in deuterium-labelled lipid molecules can be quantified using solid-state deuterium NMR. Deuterium NMR spectra are especially informative in the solid state because of deuterium's relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35. In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test.
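The "61 MHz when protium is at 400 MHz" relationship follows from the two nuclei's gyromagnetic ratios. A minimal sketch, assuming standard gyromagnetic ratios in MHz/T (my constants, not from the article):

```python
# Deuterium Larmor frequency on a spectrometer specified by its proton
# frequency (illustrative). Larmor frequency = gamma * B0.

gamma_H = 42.577            # 1H gyromagnetic ratio, MHz/T (assumed)
gamma_D = 6.536             # 2H gyromagnetic ratio, MHz/T (assumed)

f_proton = 400.0            # a "400 MHz" instrument (proton frequency)
B0 = f_proton / gamma_H     # implied field strength, ~9.4 T

f_deuterium = gamma_D * B0  # ~61 MHz, matching the figure in the text
```

This is why deuterated solvents are invisible on a proton-tuned channel: at the same field, deuterium resonates at roughly one-sixth the proton frequency.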
In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from ordinary hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; deuterium-carbon bond vibrations are found in spectral regions free of other signals. Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes 17O and 18O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (so-called meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to mean latitude). The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature, falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. The ratio of concentration of 2H to 1H is usually indicated with a delta as δ2H, and the geographic patterns of these values are plotted in maps termed isoscapes. Stable isotopes are incorporated into plants and animals, and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins.
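The GMWL mentioned above is conventionally written as Craig's relation, δ2H = 8·δ18O + 10 (both in per mil); the slope-and-intercept form and the "deuterium excess" derived from it are standard hydrology conventions I am adding here, not details from the article.

```python
# Global meteoric water line (GMWL) and deuterium excess (illustrative sketch).
# Craig's classic relation for precipitation: delta2H = 8*delta18O + 10 (per mil).

def gmwl_delta2H(delta18O):
    """delta2H predicted by the GMWL for a given delta18O (both in per mil)."""
    return 8 * delta18O + 10

def d_excess(delta2H, delta18O):
    """Deuterium excess: a sample's offset from the GMWL slope of 8."""
    return delta2H - 8 * delta18O

# A sample lying exactly on the GMWL has a deuterium excess of 10 per mil:
sample_d18O = -10.0
sample_d2H = gmwl_delta2H(sample_d18O)   # -70.0 per mil
```

Departures of d-excess from 10 per mil are what flag the evaporative and groundwater processes described in the paragraph above.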
Neutron scattering techniques particularly profit from the availability of deuterated samples: the H and D cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of ordinary hydrogen is its large incoherent neutron cross section, which is far smaller for D. The substitution of deuterium atoms for hydrogen atoms thus reduces scattering noise. Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. As hydrogen (and deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility, fill a niche in many studies of macromolecules in biology and many other areas. This is discussed below. It is notable that although most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements, such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion that occurs in so-called hydrogen bombs, requires heavy hydrogen (either tritium or deuterium, or both) in order for the process to work.

Deuterated drugs

A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms contained in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval.

Reinforced essential nutrients

Deuterium can be used to reinforce specific oxidation-vulnerable C-H bonds within essential or conditionally essential nutrients, such as certain amino acids, or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage.
Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damages living cells. The deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia.

Suspicion of lighter element isotopes

The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. The prevailing theory at the time was that isotopes of an element differ by the existence of additional protons in the nucleus accompanied by an equal number of nuclear electrons. In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen, with a measured average atomic mass very close to 1 u, the known mass of the proton, always has a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen could have no heavy isotopes.

Deuterium was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen to 1 mL of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards in Washington, D.C. (now the National Institute of Standards and Technology). The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous.

Naming of the isotope and Nobel Prize

Urey created the names protium, deuterium, and tritium in an article published in 1934. The name is based in part on advice from G. N.
Lewis, who had proposed the name "deutium". The name is derived from the Greek deuteros (second), and the nucleus was to be called the "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted the isotope to be called "diplogen", from the Greek diploos (double), and the nucleus to be called the "diplon". The amount inferred for the normal abundance of this heavy isotope of hydrogen was so small (only about 1 atom in 6400 hydrogen atoms in ocean water, or 156 deuteriums per million hydrogens) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it had not been experimentally suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis had prepared the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explainable, deuterium won Urey the Nobel Prize in Chemistry in 1934. Lewis was embittered by being passed over for this recognition given to his former student.

"Heavy water" experiments in World War II
Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to England, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums. During World War II, Nazi Germany was known to be conducting experiments using heavy water as the moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow Germany to produce plutonium for an atomic bomb. Ultimately they led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway.
At the time this was considered important to the potential progress of the war. After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had previously been thought. The Germans had been unable to sustain a chain reaction and had completed only a small, partly built experimental reactor (which had been hidden away). By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even had the Germans succeeded in getting a reactor operational (as the U.S. did with a graphite reactor in late 1942), they would still have been at least several years away from developing an atomic bomb, even with maximal effort. The engineering process, with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and the U.S.S.R., for example.

In thermonuclear weapons
The 62-ton Ivy Mike device, built by the United States and exploded on 1 November 1952, was the first fully successful "hydrogen bomb", or thermonuclear bomb. In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask or cryostat held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then a conventional atomic bomb (the "primary") at one end of the device was used to create the conditions of extreme temperature and pressure needed to set off the thermonuclear reaction. Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen.
Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and the deuterium and tritium then quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons.

Data for elemental deuterium
- Formula: D2 (2H2)
- Density: 0.180 kg/m3 at STP (0 °C, 101.325 kPa)
- Atomic weight: 2.0141017926 u
- Mean abundance in ocean water (from VSMOW): 155.76 ± 0.1 ppm (a ratio of 1 part per approximately 6420 parts), that is, about 0.015% of the atoms in a sample (by number, not weight)

Data at approximately 18 K for D2 (triple point):
- Density (liquid): 162.4 kg/m3
- Density (gas): 0.452 kg/m3
- Viscosity: 12.6 µPa·s at 300 K (gas phase)
- Specific heat capacity at constant pressure cp: solid 2950 J/(kg·K); gas 5200 J/(kg·K)

An antideuteron is the antimatter counterpart of the nucleus of deuterium, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN and at the Alternating Gradient Synchrotron at Brookhaven National Laboratory. A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but as of 2005 antideuterium had not yet been created. The proposed symbol for antideuterium is D̄, that is, D with an overbar.
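The abundance figures quoted for ocean water are mutually consistent, as a quick check shows (plain arithmetic using only the numbers already given):

```python
# Cross-check the deuterium abundance figures from the data list above.
d_per_million = 155.76                    # VSMOW: D atoms per 10^6 H atoms
fraction = d_per_million / 1_000_000      # atom fraction of deuterium

hydrogens_per_deuterium = 1 / fraction
print(round(hydrogens_per_deuterium))     # -> 6420, "1 part per ~6420"

percent_by_atom = fraction * 100
print(round(percent_by_atom, 4))          # -> 0.0156, i.e. about 0.015% of atoms
```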
A simulated quantum computer went online on the Internet last month. With the ability to control 31 quantum bits, it is the most powerful of its type in the world. Software engineers can use it to test algorithms that might one day be applied in real computer networks. Many computing problems in fundamental physics or mathematics require huge amounts of processing power – far more than present-day computers can provide. A well-known example is the prime factoring of very large numbers: computer scientists use this technique to measure computer performance, and it underpins advanced encryption systems. Quantum computers, based on the laws of quantum physics, would be much more efficient at solving such complex problems than today's "ordinary" computers. Unlike classical binary digits (0 or 1), their smallest units of information, qubits, can exist in superpositions of 0 and 1. This could permit massively parallel computation and multiply storage capacity by a factor of many billions. But quantum computers are still at a very early stage of development. The hardware requirements are extremely demanding, and the few existing quantum computing devices have a processing capacity of at best 7 qubits (2^7 = 128 simultaneous basis states). Since mid-June, a research group at the Fraunhofer Institute for Computer Architecture and Software Technology FIRST has been offering Internet access to the world's most powerful (31-qubit) quantum simulator, at www.qc.fraunhofer.de. Using a standard browser, interested parties in research and industry can see how quantum waves and atomic particles are used to process information, and thus gain a better understanding of how quantum processes work. The demonstration area of the site contains examples of several standard problems. Users can set up their own new algorithms and logical operations after registering online (free of charge).
The simulator demonstrates the way in which a quantum computer would go about solving the calculation. Is the newly developed algorithm suitable for quantum computing, and does it achieve the desired result? "The main focus of our project lies in the simulation of Hamiltonians, i.e. the experimental implementation of quantum algorithms," emphasizes Helge Rosé. "This will give us a better understanding of the differences between real and theoretically ideal quantum computing devices." It is also a means of gathering knowledge that will later be needed to build real quantum computers. "Members of the quantum computing community have no need to wait for the next generation of quantum computers – they can test their developments and ideas today," the project manager concludes.
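The exponential cost that makes a 31-qubit simulation a record can be seen in a toy statevector simulator: an n-qubit register needs 2^n complex amplitudes, so 31 qubits already require over two billion of them. The sketch below (plain Python, not the Fraunhofer simulator's actual interface) applies a Hadamard gate to every qubit of a small register, producing an equal superposition of all basis states:

```python
# Toy dense statevector simulator (illustrative only -- not the interface
# of the Fraunhofer simulator described above).
import math

def zero_state(n):
    """Return the state |00...0> as a list of 2**n complex amplitudes."""
    state = [0j] * (2 ** n)
    state[0] = 1 + 0j
    return state

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to one qubit of an n-qubit register."""
    h = 1 / math.sqrt(2)
    out = list(state)
    for i in range(2 ** n):
        if not (i >> target) & 1:        # visit each amplitude pair once
            j = i | (1 << target)        # partner index with target bit set
            a, b = state[i], state[j]
            out[i] = h * (a + b)
            out[j] = h * (a - b)
    return out

n = 3
state = zero_state(n)
for q in range(n):                       # H on every qubit...
    state = apply_hadamard(state, q, n)

# ...yields an equal superposition over all 2**n basis states.
probs = [abs(a) ** 2 for a in state]
print(len(state), round(probs[0], 3))    # 8 amplitudes, each with probability 1/8
```

Each extra qubit doubles both the memory and the per-gate work, which is why classical simulators top out around a few dozen qubits.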
What began as an homage to achievement in the field of coral reef geology has evolved into the discovery of an unexpected link between corals of the Pacific and Atlantic. Dr. Ann F. Budd from the University of Iowa and Dr. Donald McNeill of the University of Miami named a new species of fossil coral found on the island of Curaçao – some six million years old – after renowned coral reef geologist and University of Miami Rosenstiel School professor Dr. Robert N. Ginsburg. The new species, originally thought to be an elkhorn coral (genus Acropora), a group widely distributed throughout the Caribbean, was informally christened Acropora ginsburgi in 1995 on Ginsburg's 70th birthday. Having great difficulty in distinguishing fossil acroporid species when formally describing the new species, Budd enlisted the help of Dr. Carden C. Wallace of the Museum of Tropical Queensland, Australia, who recognized why a positive identification had been so challenging -- the genus was not Acropora after all, but rather a Pacific acroporid genus named Isopora. Detailed in an upcoming issue of the journal Palaeontology, scientists sampled 67 localities around Curaçao, Netherlands Antilles, and discovered two new species -- Isopora ginsburgi and Isopora curacaoensis. The coral genus Isopora, a sister group of the modern dominant Acropora, had until now been known only from the Pliocene to Recent of the Indo-Pacific. Study of large collections made systematically throughout the area indicates that Isopora first occurred in the Caribbean during the Mio–Pliocene, at approximately the same time as the origination of many modern Caribbean reef coral dominants, including Acropora cervicornis, the well-known "staghorn coral." The occurrences of Isopora reported in this study are the oldest records of Isopora worldwide, and are important for understanding the biogeographic separation between reef coral faunas in the Caribbean and Indo-Pacific regions.
“We now know that Isopora last occurred in the region during the late Pliocene, a million years ago, as part of a pulse of extinction in which several genera that live today in the Indo-Pacific became extinct in the Caribbean,” said Budd. “This research has further illuminated that these corals co-occurred with the two abundant modern Caribbean species of elkhorn and staghorn corals, Acropora (A. cervicornis and A. palmata), often living side-by-side with the two newly evolved common Caribbean reef corals." “It is certainly an honor to have a fossil of Pacific coral from the Caribbean named after me,” said Ginsburg. “This discovery marks a milestone in my career, and serves as a special tribute to the decades of research I have done on these amazing animals which are so critical to our reefs.” Ginsburg, an explorer, world-class sedimentary geologist, educator and coral reef conservationist, received his bachelor’s degree at the University of Illinois, Urbana-Champaign, and his doctoral degree at the University of Chicago. He has been associated with the University of Miami’s Rosenstiel School of Marine and Atmospheric Science since the 1950s, and served as a long-time member of the Geological Society of America’s Committee on the History of Geology. Founded in the 1940’s, the University of Miami’s Rosenstiel School of Marine & Atmospheric Science has grown into one of the world's premier marine and atmospheric research institutions.
Offering dynamic interdisciplinary academics, the Rosenstiel School is dedicated to helping communities to better understand the planet, participating in the establishment of environmental policies, and aiding in the improvement of society and quality of life.
Cotinis nitida, commonly known as the green June beetle, June bug or June beetle, is a beetle of the family Scarabaeidae. It is found in the eastern United States, where it is most abundant in the South. It is sometimes confused with the related southwestern species, the figeater beetle (Cotinis mutabilis), which is less destructive. The green June beetle is active during daylight hours. The adult is usually 15–22 mm (0.6–0.9 in) long with dull, metallic green wings; its sides are gold and the head, legs and underside are very bright shiny green. Its habitat extends from Maine to Georgia, and as far west as Kansas, with possible population crossover in Texas with its western cousin, the figeater beetle. The complete life cycle of the green June beetle spans one year. Mating occurs in the early morning. The male is attracted by a strongly scented milky fluid secreted by the female. Mating lasts only a few minutes, after which the female enters her burrow or crawls under matted grass. Once mating has taken place, the female will lay between 60 and 75 eggs underground during a two-week period. The eggs, when first laid, appear white and elliptical in shape, gradually becoming more spherical as the larvae develop. The eggs hatch in approximately 18 days into small, white grubs. The grubs will grow to about 40 mm (1.6 in) and are white with a brownish-black head and brown spiracles along the sides of the body. The larvae will molt twice before winter. The fully grown larva is glassy yellowish white, shading toward green or blue at the head and tail. The larva has stiff ambulatory bristles on its abdomen which assist movement, and it normally travels on its back. Its underground speed is considered more rapid than that of any other known genus of Scarabaeidae in the United States and is comparable to that of the hairy caterpillar. The larvae feed largely on humus and mold but can do considerable damage to plant root systems.
Injury has been reported to vegetables and ornamental plants, particularly those which have been mulched. The larvae are considered pests when they cause damage to lawns or turf grasses. The insect is considered more injurious in its larval stages than as a beetle. Pupation occurs after the third larval stage, which lasts nearly nine months. The pupal stage occurs in an oval cocoon constructed of dirt particles fastened together by a viscid fluid excreted by the larva. The pupa is white when first formed but develops greenish tints just before emergence. The adults begin to appear in June, after an 18-day pupation period. The adult is 15–22 mm (0.6–0.9 in) in length and 12 mm (0.5 in) in width. The color varies from dull brown with green stripes to a uniform metallic green. The margins of the elytra vary from light brown to orange yellow. The adult beetle will feed upon a variety of fruits including berries, grapes, peaches, nectarines, apples, pears and figs. Adults are particularly attracted to rotting fruit, which often follows initial damage to sound fruit. The grub of the beetle is largely held in check by natural enemies. Larvae of the friendly fly or large flesh fly (Sarcophaga aldrich) have been observed attached near the base of the head and thorax of the adult beetle, and fly larvae have been observed inside the devoured thorax and abdomen of the beetle. The flesh fly (Sarcophaga helicobia) has been observed to prey on both the larval and adult stages of the June beetle. The digger wasp (Discolia dubia) attacks the larval stage of the beetle: the female crawls into the larva's burrow and lays her eggs on the grub. Below ground, large numbers of larvae are consumed by moles. During rainy periods, when their burrows are flooded, the larvae will crawl to the surface. At these times the larvae are subject to predation by raccoons, gophers, skunks, opossums, chipmunks and birds.
Birds will also attack the adult, notably the common crow, common grackle, mockingbird and blue jay. One of the most effective controls is applied during the larval stage: beetle larvae can be controlled using milky spore disease (Bacillus popilliae), which occurs naturally in some larvae. Milky spore treatment was first developed by the USDA in the 1930s to combat the Japanese beetle, but it controls the June bug and the Oriental beetle as well. It was the first microbial product ever registered in the US. Milky spore begins working after treatment wherever larvae are feeding. In warm climates milky spore disease can achieve control in 2–3 years; colder climates may require longer. The soil is inoculated annually for 3 to 5 years, and once the treatment is established, it remains effective for 10 years or more, depending on climate conditions.
This discovery was made by researchers at the University of Kalmar in Sweden, in collaboration with researchers in Gothenburg, Sweden, and in Spain. The findings are described in an article in the prestigious academic journal Nature. "It was long thought that algae were the only organisms in the seas that could use sunlight to grow," says Jarone Pinhassi, a researcher in Marine Microbiology at the University of Kalmar. These microscopic algae carry out the same process as green plants on land, namely photosynthesis with the help of chlorophyll. In 2000, scientists in the U.S. found for the first time that many marine bacteria have a gene in their DNA that codes for a new type of light-capturing pigment: proteorhodopsin. Proteorhodopsin is related to the pigment in the retina that enables humans to see colors. In principle this pigment should enable marine bacteria to capture solar light to generate energy, but until now it had not been possible to confirm this hypothesis. Last year researchers from Kalmar collected 20 marine bacteria from different ocean areas and mapped their genomes. Several of them proved to contain the pigment proteorhodopsin. This made it possible to run a series of experiments that clearly show that growth in bacteria with this pigment is stimulated by sunlight, because the pigment converts solar energy into energy for growth. In other words, the scientists had found a new type of bacterial photosynthesis that takes place in the seas. It is easier to appreciate the importance of new light-harvesting mechanisms in marine bacteria when one considers that one liter of natural sea water contains roughly a billion bacteria. The activity of these bacteria is of great importance to the carbon cycle (for example, through the amount of carbon dioxide they produce) and to how the solar energy that reaches the Earth is channeled through the nutrient cycle.
"Bacteria in the surface water of the world's oceans swim in a sea of light," says Jarone Pinhassi. "And it shouldn't be too surprising that evolution has favored microorganisms that can use this rich source of energy. This type of protein may also prove important from commercial and environmental perspectives, for the development of artificial photosynthesis for the environmentally friendly production of energy."
For more than a decade, astronomers have been puzzled by bright galaxies in the distant universe that appear to be forming stars at phenomenal rates. What prompted the prolific star creation, they wondered. And what kind of spatial environment did these galaxies inhabit? Now, using a super-sensitive camera/spectrometer on the Herschel Space Observatory, astronomers – including a UC Irvine team led by Asantha Cooray – have mapped the skies as they appeared 10 billion years ago. The UCI scientists discovered that these glistening galaxies preferentially occupy regions of the universe containing more dark matter and that collisions probably caused the abundant star production. “Thanks to the superb resolution and sensitivity of the SPIRE [Spectral & Photometric Imaging Receiver] instrument on Herschel, we managed to map in detail the spatial distribution of massively star-forming galaxies in the early universe,” said Cooray, associate professor and Chancellor’s Fellow in physics & astronomy. “All indications are that these galaxies are . . . crashing, merging and possibly settling down at centers of large dark-matter halos.” This information will enable scientists to adapt conventional theories of galaxy formation to accommodate the strange, star-filled versions. The European Space Agency’s Herschel observatory carries the largest astronomical telescope operating in space today; it collects data at far-infrared wavelengths invisible to the naked eye. One of three cameras on Herschel, SPIRE has let Cooray and colleagues survey large areas of the sky – about 60 times the size of the full moon – in the constellations of Ursa Major and Draco. The UCI team also included Alexandre Amblard, project scientist in physics & astronomy; Paolo Serra, postdoctoral fellow; and physics students Ali Khostovan and Ketron Mitchell-Wynne. 
The data analyzed in this study was among the first to come from the Herschel Multi-Tiered Extragalactic Survey, the space observatory’s largest project. UCI is one of only four U.S. educational institutions involved in Herschel using the SPIRE instrument. Seb Oliver, a University of Sussex professor who leads the survey, called the findings exciting. “It’s just the kind of thing we were hoping for from Herschel,” he said, “and was only possible because we can see so many thousands of galaxies. It will certainly give the theoreticians something to chew over.” The study will appear in a special issue of Astronomy & Astrophysics dedicated to the first scientific results from Herschel. The project will continue to collect images over larger areas of the sky in order to build up a more complete picture of how galaxies have evolved and interacted over the past 10 billion years. About the University of California, Irvine: Founded in 1965, UCI is a top-ranked university dedicated to research, scholarship and community service. Led by Chancellor Michael Drake since 2005, UCI is among the most dynamic campuses in the University of California system, with nearly 28,000 undergraduate and graduate students, 1,100 faculty and 9,000 staff. Orange County’s largest employer, UCI contributes an annual economic impact of $3.9 billion. For more UCI news, visit www.today.uci.edu. Cathy Lawhon | EurekAlert!
Samarium–neodymium dating is a radiometric dating method useful for determining the ages of rocks and meteorites, based on radioactive decay of a long-lived samarium (Sm) isotope to a radiogenic neodymium (Nd) isotope. Neodymium isotope ratios together with samarium–neodymium ratios are used to provide information on the source of igneous melts, as well as to provide age information. It is sometimes assumed that at the moment when crustal material is formed from the mantle the neodymium isotope ratio depends only on the time when this event occurred, but thereafter it evolves in a way that depends on the new ratio of samarium to neodymium in the crustal material, which will be different from the ratio in the mantle material. Samarium–neodymium dating allows us to determine when the crustal material was formed. The usefulness of Sm–Nd dating stems from the fact that these two elements are rare earths and are thus, theoretically, not particularly susceptible to partitioning during sedimentation and diagenesis. Fractional crystallisation of felsic minerals changes the Sm/Nd ratio of the resultant materials. This, in turn, influences the rate at which the 143Nd/144Nd ratio increases due to production of radiogenic 143Nd. In many cases, Sm–Nd and Rb–Sr isotope data are used together. Sm–Nd radiometric dating Samarium has seven naturally occurring isotopes, and neodymium has seven. The two elements are joined in a parent–daughter relationship by the alpha decay of 147Sm to 143Nd with a half-life of 1.06×10^11 years and by the alpha decay of 146Sm (an almost-extinct nuclide with a half-life of 1.08×10^8 years) to produce 142Nd. (Some of the 146Sm may itself have originally been produced through alpha-decay from 150Gd, which has a half-life of 1.79×10^6 years.) To find the date at which a rock (or group of rocks) formed one can use the method of isochron dating. This involves making a graph of the 143Nd/144Nd ratio versus the 147Sm/144Nd ratio for various minerals or rocks.
From the slope of the "isochron" line through these points the date of formation can be determined. Alternatively, one can assume that the material formed from mantle material which was following the same path of evolution of these ratios as chondrites, and then again the time of formation can be calculated (see the section on the CHUR model below). Sm and Nd geochemistry The concentrations of Sm and Nd in silicate minerals increase with the order in which they crystallise from a magma according to Bowen's reaction series. Samarium is accommodated more easily into mafic minerals, so a mafic rock which crystallises mafic minerals will concentrate neodymium in the melt phase relative to samarium. Thus, as a melt undergoes fractional crystallization from a mafic to a more felsic composition, the abundances of Sm and Nd change, as does the ratio between Sm and Nd. Thus, ultramafic rocks have high Sm/Nd ratios, while felsic rocks have low Sm/Nd ratios (for example komatiite has 1.14 parts per million (ppm) Sm and 3.59 ppm Nd, versus 4.65 ppm Sm and 21.6 ppm Nd in rhyolite). The importance of this process is apparent in modeling the age of continental crust formation. The CHUR model Through the analysis of isotopic compositions of neodymium, DePaolo and Wasserburg (1976) discovered that terrestrial igneous rocks at the time of their formation from melts closely followed the "chondritic uniform reservoir" or "chondritic unfractionated reservoir" (CHUR) line – the way the 143Nd/144Nd ratio increased with time in chondrites. Chondritic meteorites are thought to represent the earliest (unsorted) material that formed in the Solar system before planets formed. They have relatively homogeneous trace-element signatures, and therefore their isotopic evolution can model the evolution of the whole Solar system and of the "bulk Earth".
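The slope-to-age conversion described at the start of this section can be sketched numerically: the half-life quoted earlier gives the decay constant λ = ln 2 / t½, and for a closed system the isochron slope equals e^(λt) − 1, so t = ln(1 + slope)/λ. A minimal sketch; the example slope and the 147Sm/144Nd value of 0.1967 are illustrative assumptions, not measurements:

```python
import math

# Decay constant from the 147Sm half-life quoted above (1.06e11 years).
LAMBDA_SM147 = math.log(2) / 1.06e11   # per year

def isochron_age(slope: float) -> float:
    """Age (years) from the slope of a 143Nd/144Nd vs 147Sm/144Nd isochron.
    In a closed system slope = exp(lambda*t) - 1, so t = ln(1+slope)/lambda."""
    return math.log1p(slope) / LAMBDA_SM147

def radiogenic_nd143(sm147_nd144: float, age_years: float) -> float:
    """Radiogenic 143Nd/144Nd ingrowth for a given 147Sm/144Nd over time."""
    return sm147_nd144 * math.expm1(LAMBDA_SM147 * age_years)

age = isochron_age(0.02)                # hypothetical slope -> ~3.0 Gyr
growth = radiogenic_nd143(0.1967, age)  # recovers slope * (147Sm/144Nd)
```

Note that `log1p` and `expm1` are used rather than `log(1 + x)` and `exp(x) - 1`: with λt on the order of 10^-2 or smaller, they avoid losing precision to floating-point cancellation.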
After plotting the ages and initial 143Nd/144Nd ratios of terrestrial igneous rocks on a Nd evolution vs. time diagram, DePaolo and Wasserburg determined that Archean rocks had initial Nd isotope ratios very similar to that defined by the CHUR evolution line. Since 143Nd/144Nd departures from the CHUR evolution line are very small, DePaolo and Wasserburg argued that it would be useful to create a form of notation that described 143Nd/144Nd in terms of their deviations from the CHUR evolution line. This is called the epsilon notation, whereby one epsilon unit represents a one part per 10,000 deviation from the CHUR composition. Algebraically, epsilon units can be defined by the equation εNd = [(143Nd/144Nd)sample / (143Nd/144Nd)CHUR − 1] × 10^4. Because epsilon units are a finer and therefore more tangible representation of the initial Nd isotope ratio, using them instead of the initial isotopic ratios makes it easier to comprehend and compare initial ratios of crust of different ages. In addition, epsilon units normalize the initial ratios to CHUR, thus eliminating any effects caused by the various analytical mass fractionation correction methods applied. Nd model ages Since CHUR defines initial ratios of continental rocks through time, it was deduced that measurements of 143Nd/144Nd and 147Sm/144Nd, with the use of CHUR, could produce model ages for the segregation from the mantle of the melt that formed any crustal rock. This has been termed TCHUR. In order for a TCHUR age to be calculated, fractionation between Nd/Sm would have to have occurred during magma extraction from the mantle to produce a continental rock. This fractionation would then cause a deviation between the crustal and mantle isotopic evolution lines. The intersection between these two evolution lines then indicates the crustal formation age. The TCHUR age is defined by the equation TCHUR = (1/λ) ln[1 + ((143Nd/144Nd)sample − (143Nd/144Nd)CHUR) / ((147Sm/144Nd)sample − (147Sm/144Nd)CHUR)], where λ is the 147Sm decay constant and present-day ratios are used. The TCHUR age of a rock can yield a formation age for the crust as a whole if the sample has not suffered disturbance after its formation.
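The epsilon notation and the TCHUR model age lend themselves to short computations. A sketch; the present-day CHUR values 0.512638 (143Nd/144Nd) and 0.1967 (147Sm/144Nd) are commonly quoted figures assumed here for illustration, as is the sample composition — laboratories differ slightly in normalization:

```python
import math

LAMBDA_SM147 = math.log(2) / 1.06e11   # 147Sm decay constant, per year

# Present-day CHUR ratios (assumed illustrative values, see lead-in).
CHUR_ND = 0.512638   # 143Nd/144Nd
CHUR_SM = 0.1967     # 147Sm/144Nd

def epsilon_nd(nd_sample: float) -> float:
    """Parts-per-10,000 deviation of a measured 143Nd/144Nd from CHUR."""
    return (nd_sample / CHUR_ND - 1.0) * 1.0e4

def t_chur(nd_sample: float, sm_sample: float) -> float:
    """TCHUR model age (years): intersection of the sample's and CHUR's Nd
    evolution lines, assuming closed-system behaviour since extraction."""
    return math.log1p((nd_sample - CHUR_ND) / (sm_sample - CHUR_SM)) / LAMBDA_SM147

eps = epsilon_nd(0.513100)       # ~ +9 epsilon units, depleted-mantle-like
age = t_chur(0.511500, 0.1100)   # a hypothetical old-crust-like composition
```

By construction `epsilon_nd` returns exactly zero for the CHUR composition itself, which is the point of the notation: deviations, not absolute ratios, carry the information.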
Since Sm and Nd are rare-earth elements (REE), they are characteristically immobile, and their ratios resist partitioning during metamorphism and melting of silicate rocks. This therefore allows crustal formation ages to be calculated, despite any metamorphism the sample has undergone. The depleted-mantle model Despite the good fit of Archean plutons to the CHUR Nd isotope evolution line, DePaolo and Wasserburg (1976) observed that the majority of young oceanic volcanics (Mid Ocean Ridge basalts and Island Arc basalts) lay +7 to +12 ɛ units above the CHUR line (see figure). This led to the realization that Archean continental igneous rocks that plotted within the error of the CHUR line could instead lie on a depleted-mantle evolution line characterized by increasing Sm/Nd and 143Nd/144Nd ratios over time. To further analyze this gap between the Archean CHUR data and the young volcanic samples, a study was conducted on the Proterozoic metamorphic basement of the Colorado Front Range (the Idaho Springs Formation). The initial 143Nd/144Nd ratios of the samples analyzed are plotted on a ɛNd versus time diagram shown in the figure. DePaolo (1981) fitted a quadratic curve to the Idaho Springs and average ɛNd for the modern oceanic island arc data, thus representing the neodymium isotope evolution of a depleted reservoir. The composition of the depleted reservoir relative to the CHUR evolution line, at time T (in Gyr), is given by the equation ɛNd(T) = 0.25T^2 − 3T + 8.5. Sm–Nd model ages calculated using this curve are denoted as TDM ages. DePaolo (1981) argued that these TDM model ages would yield a more accurate age for crustal formation than TCHUR model ages – for example, an anomalously low TCHUR model age of 0.8 Gy from McCulloch and Wasserburg’s Grenville composite was revised to a TDM age of 1.3 Gy, typical for juvenile crust formation during the Grenville orogeny.
- McCulloch, M. T.; Wasserburg, G. J. (1978). "Sm-Nd and Rb-Sr Chronology of Continental Crust Formation". Science 200 (4345): 1003–1011. doi:10.1126/science.200.4345.1003. PMID 17740673.
- DePaolo, D. J.; Wasserburg, G. J. (1976). "Nd isotopic variations and petrogenetic models". Geophysical Research Letters 3 (5): 249–252. doi:10.1029/GL003i005p00249.
- Dickin, A. P. (2005). Radiogenic Isotope Geology, 2nd ed. Cambridge: Cambridge University Press. ISBN 0-521-82316-1. pp. 76–77.
- DePaolo, D. J. (1981). "Neodymium isotopes in the Colorado Front Range and crust–mantle evolution in the Proterozoic". Nature 291: 193–197.
For the first time, scientists have discovered how C-reactive protein, or CRP, is able to access endothelial cells. The UC Davis researchers' findings will be published in the July issue of Arteriosclerosis, Thrombosis, and Vascular Biology, one of the American Heart Association's leading journals. CRP is a known risk marker for heart disease and, in a study published earlier this year, UC Davis researchers Ishwarlal Jialal and Sridevi Devaraj found that endothelial cells also produce CRP, which is increased 100-fold when cytokines are secreted by human macrophages, a key finding that helps to explain how plaque formation is initiated. Devaraj and Jialal have now discovered how CRP affects endothelial cells, cells that line the cerebral and coronary arteries, and promotes plaque rupture, leading to heart attacks and strokes. CRP appears to bind to a family of immunoglobulin-processing receptors known as Fc-gamma receptors. Kelly Gastman | EurekAlert!
Einstein received the Nobel Prize in 1921 for his theoretical explanation in 1905 of the so-called photo-effect -- that is, the emission of electrons from a metal surface by incident light. In Einstein's time, laboratory light sources provided light of very low intensity in comparison with modern lasers like those at K-State. Back then, experiments could measure the energy -- or speed -- of photo-emitted electrons but could not resolve their motion in time. In modern laboratories, lasers are used as light sources that provide very short and intensive flashes of light. Uwe Thumm, K-State professor of physics, and Chang-hua Zhang, a postdoctoral research associate in physics, are theorists who have developed a model that allows them to compute not just the energy of photo-emitted electrons, but also the times after their release at which they can be detected. Within their quantum mechanical model, Thumm and Zhang found that electrons that are emitted by ultra-short laser pulses from different parts of a metal surface will arrive at an electron detector at slightly different times. "It's a feat that would be impossible without high-intensity lasers like those at K-State's J. R. Macdonald Laboratory," Thumm said. "With the help of ultrashort laser pulses, the motion of electrons can now be followed in time. This has started an entire new area of research, called attosecond physics." An attosecond is a billionth of a billionth of a second. It's an incredibly short time to humans -- but not to electrons, Thumm said. "Fifty attoseconds is about the time resolution needed to resolve the motion of electrons in matter," he said. In agreement with a recent experiment, their calculation shows that electrons of a metal surface that are near atomic nuclei are photo-emitted with a delay of about 110 attoseconds relative to conduction electrons, which are not attached to individual atoms and enable metals to conduct electricity.
Thumm and Zhang published their work in Physical Review Letters in March. Their research was supported by the National Science Foundation and the U.S. Department of Energy. Thumm said that Einstein's research, which laid the groundwork for their own research, is often understood as a proof for light behaving as a particle called a photon rather than as a wave. Einstein showed that only light above a certain minimal frequency -- in the blue end of the visible spectrum -- could make metals emit electrons. "It was a celebrated model, and it's still in textbooks as an explanation that light is made up of photons," Thumm said. "You can talk to a lot of physics students who get it wrong." While Einstein's model is not wrong, it is not a proof for the particle-character of light, Thumm said. Einstein published his model about two decades before modern quantum theory was developed. Modern quantum theory of matter predicts the emission of an electron even when light is regarded as a classical electro-magnetic wave. Today, physicists have lasers that provide light at such high intensities that electrons can be emitted at lower frequencies, toward the red end of the visible spectrum. And today, scientists look at light as behaving both like a particle and a wave. "There is a bit of a philosophical debate," Thumm said. Thumm said that the new and exciting part of this research is that short pulses from ultrafast lasers like the Kansas Light Source at K-State's J.R. Macdonald Lab allow physicists to measure the timing of electrons emitting from metals, thus building on models like the one he and Zhang developed. Researchers can use short, intense pulses of extreme ultraviolet light to get a tungsten surface to emit electrons. They can synchronize these extreme ultraviolet pulses with a delayed infrared pulse, into which the electron is emitted. 
Thumm said that this infrared pulse changes the energy of the emitted electrons over time and serves as a measuring stick to judge the timing of the electron emissions. He said that it is a bit like how high-speed photography in the 19th century proved that all four of a horse's hooves leave the ground while running. "In this case it's not the horse's hooves but the electrons that we're seeing," Thumm said. "The bigger picture is that if we resolve in time how electrons move, we can understand the timing of chemical reactions taking place. We can understand the basics of chemistry, biology and life." While Thumm and other K-State physicists continue to delve further into attosecond research, the university will be host to the Second International Conference on Attosecond Physics from July 28 to Aug. 1, bringing physicists from around the world to the K-State campus in Manhattan.
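Einstein's relation for the photo-effect, E_k = hν − φ, makes the minimal-frequency argument above concrete: below the threshold frequency φ/h no electron is emitted, whatever the intensity. A sketch; the tungsten work function of roughly 4.5 eV is an assumed textbook-level value, not taken from the study described here:

```python
# Einstein's photoelectric relation: E_k = h*nu - phi (energies in eV).
H_EV = 4.135667e-15   # Planck constant in eV*s
PHI_TUNGSTEN = 4.5    # work function of tungsten, eV (assumed value)

def photoelectron_energy_ev(freq_hz: float, phi_ev: float = PHI_TUNGSTEN) -> float:
    """Maximum kinetic energy (eV) of a photoelectron; 0 below threshold."""
    return max(0.0, H_EV * freq_hz - phi_ev)

threshold = PHI_TUNGSTEN / H_EV          # ~1.1e15 Hz, in the ultraviolet
uv = photoelectron_energy_ev(3.0e15)     # XUV photon: electron is emitted
red = photoelectron_energy_ev(4.5e14)    # red light: below threshold
```

This single-photon picture is exactly the regime Einstein described; the modern high-intensity lasers mentioned in the article can drive emission below threshold via multi-photon processes, which this sketch deliberately ignores.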
By: Elisabetta Turtle (Author), Erik Asphaug (Author). 300 pages, 120 illus., 10 in colour. Planetary surfaces throughout the Solar System, and the current interest in impacts on the Earth, attest to the importance of impact cratering as a geological process. Impact craters have been identified on all but one of the planetary surfaces we have explored. The authors document the wide variety of crater types that have been observed on the planetary surfaces explored to date, approaching the subject in the context of comparative planetology. Each chapter focusses on a specific crater morphology and discusses what the craters tell us about the surfaces of bodies on which they are found. This approach illustrates not only impact craters themselves, but also emphasizes similarities and differences in crater morphology throughout the Solar System and the implications thereof. - What are impact craters and what can we learn from them? - The Cratering Process - Simple Craters - Complex Craters - Multiple Ring Craters - Craters on Small Bodies - Crater Populations and Surface Ages
Most climate records predating the Industrial Revolution are the product of modern sampling used to reconstruct past conditions. However, Shinto priests have been recording the timing of the omiwatari on Lake Suwa, Japan for over 500 years (beginning 1443). The omiwatari is a buckling of lake ice that occurs just after freezing, and is said to be caused by a dragon god as he uses the ice to cross the lake and visit a goddess. The date of ice breakup on the river Torne, Finland has also been recorded for centuries (since 1693), in part due to the importance of the river for transportation and commerce. These culturally and economically motivated long-term records pre-date the Industrial Revolution (beginning ~1780), after which climate warming hastened, as evidenced by later ice formation (Suwa) and earlier ice breakup (Torne). Furthermore, Lake Suwa has failed to freeze with increasing frequency in recent years. These ancient records are direct human observations of the post-industrial acceleration of climate warming. More coverage on this paper and topic: - York University news release by Megan Mueller: Biologist tracks climate change drivers from as far back as medieval era - Story by Lisa Borre on the National Geographic Voices blog: Lake Suwa's Shinto Legend and the Oldest Lake Ice Record on Earth: What It Tells Us About Climate Change and Variability Sapna Sharma, John J. Magnuson, Ryan D. Batt, Luke A. Winslow, Johanna Korhonen & Yasuyuki Aono. 2016. Direct observations of ice seasonality reveal changes in climate over the past 320–570 years. Scientific Reports 6, Article number: 25061 See the paper DOI (abstract)
posted by anonymus

Triangle ABC is right-angled at A. AD is perpendicular to BC. If AB = 5 cm, BC = 13 cm and AC = 12 cm, find the area of triangle ABC. Also find the length of AD.

I hope you can figure the area of ABC with no trouble. Using similar triangles, because angle B is the same in both, and both are right triangles,
12/13 = AD/5
AD = 60/13 cm

anirudh the great

Area of the triangle = 1/2 (base)(height) = 1/2 × 5 × 12 = 30 cm². Taking BC = 13 cm as the base instead, 1/2 × 13 × AD = 30, so AD = 60/13 cm.
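Both results above can be checked with exact rational arithmetic; a quick sketch using Python's standard library (the values are taken from the problem statement):

```python
from fractions import Fraction

# Right triangle with the right angle at A: legs AB and AC, hypotenuse BC.
AB, AC, BC = 5, 12, 13

# Area from the two legs.
area = Fraction(1, 2) * AB * AC  # 30 cm^2

# The same area with BC as the base gives the altitude AD:
# (1/2) * BC * AD = area  =>  AD = 2 * area / BC
AD = 2 * area / BC  # 60/13 cm

print(area)  # 30
print(AD)    # 60/13
```

Using `Fraction` keeps 60/13 exact instead of the rounded decimal a calculator would give.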
The South China Sea, making up about 3,400,000 sq km of the Western Pacific Ocean, including more than 200 coral islets, and extending from the equator near Singapore to the Tropic of Cancer, is an area with significant biodiversity. This book is a good introduction to that diversity and opens with two short chapters: one on the state of marine biodiversity in the South China Sea, the second on the island disputes in the region. These are followed by 14 chapters on the biota. One of these is a bibliography on plants; the other 13 contain checklists of a number of the major taxa occurring in the region.
File:Carbon Dioxide 800kyr.svg

English: This figure shows the variations in concentration of carbon dioxide (CO2) in the atmosphere during the last 800 thousand years. Throughout most of the record, the largest changes can be related to glacial/interglacial cycles within the current ice age. Although the glacial cycles are thought to be directly caused by changes in the Earth's orbit (i.e. Milankovitch cycles), these changes also influence the carbon cycle, which in turn feeds back into the glacial system. Since the Industrial Revolution, circa 1800, the burning of fossil fuels has caused a rapid increase of CO2 in the atmosphere, reaching levels unprecedented in the last million years. This increase has been implicated as a primary cause of global warming. The spacing of carbon dioxide samples varies through time. At the present, the atmosphere is sampled routinely and complete annual averages are available. From the four ice cores presented on this plot the sampling varies from as rapid as one point every few years (recent parts of the Law Dome record) to as sparse as one sample every few thousand years (oldest parts of the Vostok and Dome C records). In principle, the sparse sampling in the oldest parts of the record could hide abrupt excursions; however, isotopic measurements of ice cores (which are made continuously along the entire core) and our current understanding of the rates of natural processes for creating and removing carbon dioxide from the atmosphere make it unlikely that any positive excursions in carbon dioxide comparable to the Industrial Revolution have happened during the interval presented above. This figure is an update of File:Carbon Dioxide 400kyr.png, originally made by User:Dragons flight. The explanation of the figure is also derived from that work. Two new data sets are added: the CO2 measurements from 2004 to 2017 and additional data from Dome C.
Some other data sets were omitted for simplicity. The code to produce this figure is freely available on GitHub: https://github.com/Femkemilene/Global-Warming-Figures. The first few lines of the code make it easy to change the language.

- (light blue) Dome C ice core: Lüthi, D., M. Le Floch, B. Bereiter, T. Blunier, J.-M. Barnola, U. Siegenthaler, D. Raynaud, J. Jouzel, H. Fischer, K. Kawamura, and T.F. Stocker (2008). "High-resolution carbon dioxide concentration record 650,000–800,000 years before present". Nature 453: 379–382. DOI:10.1038/nature06949.
- (dark blue) Vostok ice core: Fischer, H., M. Wahlen, J. Smith, D. Mastroianni, and B. Deck (1999). "Ice core records of atmospheric CO2 around the last three glacial terminations". Science 283: 1712–1714.
- (orange) Taylor Dome ice core: Indermühle, A., E. Monnin, B. Stauffer, T.F. Stocker, and M. Wahlen (1999). "Atmospheric CO2 concentration from 60 to 20 kyr BP from the Taylor Dome ice core, Antarctica". Geophysical Research Letters 27: 735–738.
- (green) Law Dome ice core: Etheridge, D.M., L.P. Steele, R.L. Langenfelds, R.J. Francey, J.-M. Barnola, and V.I. Morgan (1998). "Historical CO2 records from the Law Dome DE08, DE08-2, and DSS ice cores" in Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.
- (black) Mauna Loa Observatory, Hawaii: Keeling, C.D. and T.P. Whorf (2004). "Atmospheric CO2 records from sites in the SIO air sampling network" in Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.
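The description notes that routinely sampled modern data are reduced to complete annual averages before plotting; a minimal sketch of that reduction (the readings below are made-up illustrative values, not actual Mauna Loa data):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (year, month, co2_ppm) readings; real records such as
# Mauna Loa are sampled far more densely.
readings = [
    (2015, 1, 399.0), (2015, 6, 402.0), (2015, 12, 401.4),
    (2016, 1, 402.5), (2016, 6, 406.8), (2016, 12, 404.5),
]

# Group the readings by year and average them.
by_year = defaultdict(list)
for year, month, ppm in readings:
    by_year[year].append(ppm)

annual = {year: round(mean(vals), 2) for year, vals in sorted(by_year.items())}
print(annual)  # {2015: 400.8, 2016: 404.6}
```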
In the field of molecular biology, trans-acting (trans-regulatory, trans-regulation), in general, means "acting from a different molecule" (i.e., intermolecular). It may be considered the opposite of cis-acting (cis-regulatory, cis-regulation), which, in general, means "acting from the same molecule" (i.e., intramolecular). In the context of transcription regulation, a trans-acting element is usually a DNA sequence that contains a gene. This gene codes for a protein (or microRNA or other diffusible molecule) that is used in the regulation of another, target gene. The trans-acting gene may be on a different chromosome from the target gene, but its activity is via the intermediary protein or RNA that it encodes. Cis-acting elements, on the other hand, do not code for protein or RNA. Both the trans-acting gene and the protein/RNA that it encodes are said to "act in trans" on the target gene.
Earthquake scientists show off rocks from the depths of the San Andreas fault

For the first time in the history of earthquake studies, scientists have penetrated deep into the very heart of an active earthquake zone and brought up a ton of rocks as they seek to understand how and why quakes behave. Their harvest comes from a borehole drilled more than two-and-a-half miles into the San Andreas fault just north of the tiny Monterey County village of Parkfield, and shows clear evidence that hundreds of tiny quakes have been jolting the ground there in just the past few months. One hundred thirty-five feet of core samples were gathered across three traces of the fault that is known for its constantly creeping motion and for the hundreds - if not thousands - of tiny underground temblors that barely register on instruments and certainly can't be felt by humans on the surface. "Now we can hold the San Andreas in our hands," said Mark D. Zoback, a Stanford geophysicist. "We know what it's made of, and we can study how it works." Zoback and his colleagues, William Ellsworth and Stephen Hickman of the U.S. Geological Survey in Menlo Park, are leaders of a project called the San Andreas Fault Observatory at Depth, or SAFOD, which for years to come will be analyzing the quake-shaken rocks. "It's not unlike bringing rocks back from the moon," Ellsworth said. The newly acquired core samples are so uniquely precious they must be kept shrink-wrapped and refrigerated, Ellsworth said as he touched a length of the exposed rock with almost loving delicacy. Parkfield, near the scientists' drill site, is long famed for the frequent quakes of magnitude six and more that have struck that region for more than a century. Parkfield is by far the most heavily instrumented seismic research area in the nation. The next step, the scientists said, will be to send a steel-encased package containing an array of seismic sensors down the drill hole and into the seismically active zone.
The package will include miniaturized seismometers, accelerometers and tiltmeters to monitor the underground activity continuously. The instruments will remain underground for at least a decade, sending their sensitive readings up to the surface by fiber-optic cable. They will become the crucial heart of SAFOD, the long-planned seismic observatory. At a Stanford press conference Thursday, the excited project scientists displayed three separate lengths of their 4-inch-wide core samples, each revealing about five feet of alternating black and gray rock segments flecked with tiny dull green patches of a mineral called serpentine. The cores were recovered only recently - two in August and one on Sept. 7. The serpentine in them, formed from magnesium, silica and water, often contains the slippery mineral called talc, which Zoback explained might well be a lubricant that helps the sides of a fault slide slowly past each other as the fault creeps - and perhaps even accelerates the growth of small quakes into much larger and damaging ones. Some fragmented cuttings brought up earlier by the SAFOD drilling project showed similar evidence of serpentine, and recent laboratory studies showed the presence of slippery, watery, reddish-brown talc in the rock. That find was reported by USGS scientist Diane Moore and her colleagues in the journal Nature on Aug. 16. Tiny samples of the drill cores will be distributed in December to international teams of scientists assembled at the USGS in Menlo Park, Hickman said. Drilling at the site began in 2004, and the work has been interrupted many times by storms, floods, drill rig breakdowns and even modest quakes. The scientists could tell that their drill had reached the highly active seismic zone this summer because constant shaking from the tiny deep quakes accumulated enough strength to warp the steel casing of the core drill.
The "microquakes" that shiver the ground inside the fault zone where the borehole ends are accompanied every once in awhile by larger jolts - still tiny by Loma Prieta or 1906 standards -but registering with magnitudes around 2, Ellsworth said. But the project is particularly important because it is tapping the San Andreas itself - the 20-million-year-old fault that stretches for 800 miles along the length of California and marks the boundary between two vast tectonic plates. These are the immensely thick slabs of the Earth's crust called the Pacific Plate and the North American Plate that have been sliding past each other at nearly 2 inches a year for millions of years - fast enough for Los Angeles to reach San Francisco some day if they last that long. The SAFOD project is the first venture of a new national project called EarthScope, a $200 million experiment that will deploy more than 100 smaller borehole seismic stations across the United States, with 800 global positioning system units tracking their readings. Scientists will follow those with another 400 earthquake monitoring stations at specific locations in the United States. They will be in operation for two years at a time, and then be moved in relays until the entire country has been surveyed for seismic activity. EarthScope is funded by the National Science Foundation.
Washington: Analysing data from NASA’s planet-hunter, the Kepler space telescope, astronomers have captured for the first time a brilliant flash of an exploding star’s shockwave or “shock breakout” in the optical wavelength or visible light. The team led by Peter Garnavich, astrophysics professor at the University of Notre Dame in Indiana, analysed light captured by Kepler every 30 minutes over a three-year period from 500 distant galaxies, searching some 50 trillion stars. They were hunting for signs of massive stellar death explosions known as supernovae. For the first time, a supernova shockwave has been observed in visible light as it reaches the surface of the star. This early flash of light is called a “shock breakout”. The explosive death of this star, called KSN 2011d, takes 14 days to reach its maximum brightness; the shock breakout itself lasts only about 20 minutes, so catching the flash of energy is an investigative milestone for astronomers. In 2011, two of these massive stars, called red supergiants, exploded while in Kepler’s view. The first behemoth, KSN 2011a, is nearly 300 times the size of our sun and a mere 700 million light years from Earth. The second, KSN 2011d, is roughly 500 times the size of our sun and around 1.2 billion light years away. “To put their size into perspective, Earth’s orbit about our sun would fit comfortably within these colossal stars,” said Garnavich. “In order to see something that happens on timescales of minutes, like a shock breakout, you want to have a camera continuously monitoring the sky,” Garnavich added. Supernovae like these – known as Type II – begin when the internal furnace of a star runs out of nuclear fuel causing its core to collapse as gravity takes over.
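Garnavich's "Earth's orbit would fit within these stars" comparison can be checked with textbook constants; a quick sketch (the solar radius and Earth-Sun distance below are standard approximate values, not figures from the article, and "300 times the size" is read here as 300 times the radius):

```python
# Standard approximate constants, not from the article.
SUN_RADIUS_KM = 6.96e5           # mean solar radius
EARTH_ORBIT_RADIUS_KM = 1.496e8  # 1 astronomical unit

for name, factor in [("KSN 2011a", 300), ("KSN 2011d", 500)]:
    star_radius_km = factor * SUN_RADIUS_KM
    fits = star_radius_km > EARTH_ORBIT_RADIUS_KM
    print(name, f"radius ~{star_radius_km:.2e} km, orbit fits inside: {fits}")
```

Both stellar radii (about 2.1e8 km and 3.5e8 km) comfortably exceed the 1.5e8 km radius of Earth's orbit.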
The two supernovae matched up well with mathematical models of Type II explosions, reinforcing existing theories. But they also revealed what could turn out to be an unexpected variety in the individual details of these cataclysmic stellar events. Understanding the physics of these violent events allows scientists to better understand how the seeds of chemical complexity and life itself have been scattered in space and time in our Milky Way galaxy. “All heavy elements in the universe come from supernova explosions. For example, all the silver, nickel, and copper in the earth and even in our bodies came from the explosive death throes of stars,” explained Steve Howell, project scientist for NASA’s Kepler and K2 missions.
There is no magic bullet, no single thing that can replace all our fossil fuel combustion at a stroke. We can’t swap our fossil fuel consumption for something less harmful completely, immediately. It would be good if we could do it really quickly, but we can’t. If we simply stopped using fossil fuels at a stroke, one of the results would be deaths on a massive scale, from cold, thirst, hunger – and violence. This could be as bad as what we’re trying to prevent! We could stop some of our more frivolous uses of fossil fuels pretty quickly without replacing them, but mostly we have to develop replacements and gradually reduce fossil fuel use as we do so. To some extent this is being done, but too gradually – we need a greater sense of urgency, we need to attach more importance to the process – and we need to remember that the replacements are supposed to be replacements, they’re not supposed to be additions, with the fossil fuels still being used as profligately as before. That’s the immediately bit. Now let’s consider the completely bit. Actually, we don’t need to replace fossil fuel use completely. We can be pretty sure that the Earth’s systems can cope with some human carbon dioxide production – in fact, it’s possible that human carbon dioxide production is the reason the last few thousand years’ climate has been relatively stable (see Climate: Patterns in the Chaos), and that continuing fossil fuel combustion (on a much reduced scale) is the only way we can continue to stave off the otherwise inevitable arrival of the next ice age. It would, anyway, be very difficult to replace fossil fuel use completely – at least, in the foreseeable future. Let’s concentrate on how we can reduce it – drastically, but not completely. (I’ve been writing “fossil fuel use” – but actually not all fossil fuel uses produce carbon dioxide. 
Fossil fuels can be used as feedstocks for some chemical processes, such as plastics manufacture, where the carbon ends up in the products, not as carbon dioxide vented to atmosphere. There may be other issues around such activities, but in the main they’re not a global warming issue, and certainly not a big one. I’ll continue to write “fossil fuel use” meaning those uses that produce carbon dioxide.) Let’s consider that list of fossil fuel uses: generating electricity, keeping ourselves warm in our homes and workplaces, heating water, transport, cooking and industrial process heat. I’ll deal with them one at a time – and remember, we’re trying to reduce fossil fuel use a lot, but not necessarily immediately, and not necessarily completely. If we can make a 20% cut somewhere relatively easily but only improve on that with difficulty, then it might well be better to do that and then look somewhere else for another cut, and only come back to the difficult case when we’ve done all the relatively easy things. But we shouldn’t spend too much time doing a hundred easy cuts of 0.001% each, and forget to do the big cuts altogether! The hundred easy cuts of 0.001% each are things like turning off equipment that’s on standby (actually 0.3%, according to one government source – but I strongly suspect that this is an overestimate). The frivolous uses I mentioned earlier are a much bigger deal. We could cut out a great deal of fossil fuel use in transport and in industry simply by stopping some unnecessary activities altogether. Many of the products of industry are simply junk, bought by consumers just because they’ve got the money to buy them, and they’re there to be bought, but not contributing in any significant way to their lives. How do we stop people doing this? I don’t know – although economic recession seems to be quite an effective way! 
There are issues here about employment – but employing people to produce and transport junk is make-work, it's not really productive (yet its output appears in GDP figures! Damned lies and economics). If there aren't enough worthwhile things to do to occupy the whole workforce forty hours a week, let's cut down the working week – don't make work. Of course there are worthwhile things to do anyway – we've all that infrastructure to build to replace fossil fuel use. There's a skills issue there, of course, and a training issue behind that.

Generating electricity

This is about 30% of our fossil fuel use. It's one area where it's actually relatively easy to see how to make a big reduction – by using renewable sources of energy. (Nuclear power is NOT a good answer, or even part of one – but that's the subject of another essay, Nuclear Power?) At the moment, at the present state of research and development, wind turbines are the biggest and best source of renewable energy for electricity generation, and we should be building them much faster than we are, and aiming for a much higher percentage of our electricity production with them. There's an intermittency issue with wind, but when the wind blows reasonably strongly we could generate all our electricity (or almost all of it) from the wind, and turn off (or turn right down) the fossil fuel burning power stations. When the wind isn't blowing much, we'd still have to use fossil fuels, but we could be making a big reduction in carbon footprint here. When it's blowing a lot, we'd have electricity to spare – which could be sold cheaply for uses which are currently too power-hungry to consider, or stored for future use. Storage of energy isn't without costs, but it's not as difficult or expensive as the opponents of wind and solar electricity generation like to pretend (see Storing Energy). Research and development in this area would be far cheaper and far more worthwhile than in nuclear generation, particularly fusion!
Keeping ourselves warm

This is about 26% of our fossil fuel use. Of course part of this overlaps with the electricity generation use, because some of the electricity is used for this. There are several things we can do here, which between them could make quite a big difference quite quickly. In the longer term, if we can generate plenty of electricity from renewable sources, then a changeover to electric heating would reduce our carbon footprint. There are ways to use electricity for heating more efficiently than is usually done, too – but that's another essay. The most immediate thing we can do about heating's carbon footprint is to turn the thermostats down! We used to survive with our houses much less warm than we keep them nowadays. I'm not suggesting we should go back to shivering every time we climb out of bed, but just turning the thermostat down two or three degrees would make a big difference to our energy consumption – and we can easily wear slightly warmer clothes. Improving the insulation of our homes and other buildings is also relatively easy. It's being done, but could be done more rapidly. In some properties, reducing draughts would reduce heat loss – but you have to be careful not to reduce ventilation too much. There are ways of having ventilation without the normal associated heat loss. Fossil fuel power stations convert less than half the energy in the fuel into electrical energy – the rest is lost as heat. In some places, this waste heat is used for heating buildings, a process called CHP (Combined Heat and Power). CHP is being adopted more widely, but that could be done a lot more rapidly. It can also be done locally, by replacing gas boilers in homes and other buildings with micro-CHP units. This makes CHP, with its very substantial carbon footprint reduction, feasible in many places where it would not be otherwise. Finally, we could partially replace fuel burning for heating with renewable energy sources, such as solar.
This is being done to some extent, but much less than it could be, and much more slowly than it could be. Solar heating has the advantage that it's fairly easy to do – and the disadvantage that it's least available when it's most wanted. Wind is much better from this point of view – it's more often available in cold than in warm seasons, and a windy day takes heat out of a building much faster than a still day, even at the same temperature. But wind is not as easy to convert to heat – the easiest way is via electricity.

Heating water

This is about 9% of our fossil fuel use. Again, part of this is an overlap with electricity generation use. CHP and renewable energy sources are the ways forward here.

Transport

This is the other biggy – 34% of our fossil fuel use. It's also the most difficult area to change to renewable energy. The biggest improvements would be:
1) Changing to smaller cars with less "performance", using cars less by switching to buses or trains, making fewer and shorter journeys, and walking or cycling locally as much as possible. There would also be gains in health from walking or cycling more (particularly if proper provision were made for cyclists).
2) Reducing consumerism. This would reduce the demand for goods vehicle movements.
3) Reducing the expectation of speed in travel. Trains, buses and cars use much less fuel if they go more slowly. Accidents, particularly for cars and buses, are also fewer and less serious.
4) Reversing the trend to increased air travel, which is particularly profligate. Using trains or ships instead.

(3) and (4) would both be helped by reduced working hours and increased leisure. Why do we have such short holidays and such long working weeks, while there are still so many people unemployed? And why do we spend so much time producing – or importing, distributing and selling – stuff that we really don't need at all?
Switching to renewable energy is much more difficult for transport, other than rail where electricity from a renewable source can be used. Electric cars could eventually be part of the solution, once electricity is largely from renewable sources, but they’re actually of very limited relevance. Hydrogen is also of very limited importance (see Hydrogen. Yes, but...). Liquid biofuels could be of some significance, but it’s important to avoid conflict with food production, or destruction of rainforest (Biofuels. Yes, but...).
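The sector shares quoted in this essay (electricity ~30%, heating ~26%, hot water ~9%, transport ~34%) make it easy to see how per-sector cuts combine into an overall reduction; a sketch with made-up per-sector cut fractions (the cut percentages below are illustrative assumptions, not figures from the essay):

```python
# Sector shares of fossil fuel use, as quoted in the essay (percent).
shares = {"electricity": 30, "heating": 26, "hot water": 9, "transport": 34}

# Hypothetical fractional cuts per sector -- illustrative only.
cuts = {"electricity": 0.7, "heating": 0.3, "hot water": 0.3, "transport": 0.2}

# The overall reduction is the share-weighted sum of the cuts.
total_cut = sum(shares[s] * cuts[s] for s in shares)
print(f"Overall reduction: {total_cut:.1f}% of current fossil fuel use")
```

The point the arithmetic makes is the essay's own: a deep cut in the electricity sector moves the total far more than many small cuts elsewhere.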
The number 30 is the smallest number evenly divisible by both 6 and 10. There are several ways to arrive at the least common multiple (LCM) for a group of numbers, but the standard method builds it from the prime factors of each number. The prime factors of 6 are 2 and 3; the prime factors of 10 are 2 and 5. In mathematical terms, 2 x 3 = 6 and 2 x 5 = 10. To form the LCM, each prime is taken to the highest power in which it appears in any of the numbers. In this case, 2 x 3 x 5 = 30.
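The prime-factor rule above can be implemented directly; a minimal sketch (the helper names are my own):

```python
from collections import Counter
from math import prod

def prime_factors(n):
    """Return the prime factorization of n as a Counter of prime -> exponent."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm(*numbers):
    """LCM: take each prime to the highest power appearing in any number."""
    combined = Counter()
    for n in numbers:
        combined |= prime_factors(n)  # Counter union keeps the max exponent
    return prod(p ** e for p, e in combined.items())

print(lcm(6, 10))  # 30
```

For 6 = 2 x 3 and 10 = 2 x 5, the union of factorizations is {2: 1, 3: 1, 5: 1}, giving 2 x 3 x 5 = 30, exactly as in the worked example.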
Will ocean acidification disrupt the planet's ecosystem before climate change does?
Water-smart urban design and drought in the American West.
Could climate change lead to fewer males?
For the first time in 35 years, atmospheric ozone actually increased, according to NASA measurements.
The next generation of solar power might be waiting beneath the Pacific waves, in the form of an armchair-sized clam.
Biological research tracks predatory carnivores, which are increasingly venturing into North American cities.
The federal Wilderness Act was signed 50 years ago.
Plant ecologist Charles F. Cooper wrote prescient and succinct words on the topic of climate change back in 1978.
What's the single biggest action a person can take to reduce their personal impact on climate change? It would seem that the answer is to eat less beef.
Environmental engineer and newly-minted MacArthur Fellow Tami Bond is an expert on "black carbon."
New machine learning techniques can help experimentalists probe systems of particles exponentially faster than conventional, brute-force techniques The same techniques used to train self-driving cars and chess-playing computers are now helping physicists explore the complexities of the quantum world. For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. This method will allow scientists to thoroughly probe systems of particles exponentially faster than conventional, brute-force techniques. Complex systems that would require thousands of years to reconstruct with previous methods could be wholly analyzed in a matter of hours. The research will benefit the development of quantum computers and other applications of quantum mechanics, the researchers report February 26 in Nature Physics. “We have shown that machine intelligence can capture the essence of a quantum system in a compact way,” says study co-author Giuseppe Carleo, an associate research scientist at the Center for Computational Quantum Physics at the Flatiron Institute in New York City. “We can now effectively extend the capabilities of experiments.” Carleo, who conducted the research while a lecturer at ETH Zurich, was inspired by AlphaGo. This computer program used machine learning to outplay the world champion of the Chinese board game Go in 2016. “AlphaGo was really impressive,” he says, “so we started asking ourselves how we could use those ideas in quantum physics.” Systems of particles such as electrons can exist in lots of different configurations, each with a particular probability of occurring. Each electron, for instance, can have either an upward or downward spin, similar to Schrödinger’s cat being either dead or alive in the famous thought experiment. In the quantum realm, unobserved systems don’t exist as any one of these arrangements. 
Instead, the system may be thought of as being in all possible configurations simultaneously. When measured, the system collapses into one configuration, just like Schrödinger’s cat is either dead or alive once you open its box. This quirk of quantum mechanics means that you can never observe the entire complexity of a system in a single experiment. Instead, experimentalists conduct the same measurements over and over until they can determine the state of the whole system. That method works well for simple systems containing only a few particles. But “things get nasty with a lot of particles,” Carleo says. As the number of particles increases, the complexity skyrockets. If only considering that each electron can have either spin up or down, a system of five electrons has 32 possible configurations. A system of 100 electrons has more than 1 million trillion trillion. The entanglement of particles further complicates matters. Through quantum entanglement, independent particles become intertwined and can no longer be treated as purely separate entities even when physically separated. This entanglement alters the probability of different configurations. Conventional methods, therefore, just aren’t feasible for complex quantum systems. Giacomo Torlai of the University of Waterloo and the Perimeter Institute in Canada, Carleo and colleagues circumvented these limitations by tapping machine learning techniques. The researchers fed experimental measurements of a quantum system to a software tool based on artificial neural networks. The software learns over time and attempts to mimic the system’s behavior. Once the software ingests enough data, it can accurately reconstruct the complete quantum system. The researchers tested the software using mock experimental datasets based on different sample quantum systems. In these tests, the software far surpassed conventional methods.
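The configuration counts quoted above follow from simple doubling, since each spin-1/2 electron independently contributes two possibilities; a quick sketch:

```python
# Each spin-1/2 electron doubles the number of possible configurations,
# so n electrons give 2**n configurations in the spin-up/spin-down basis.
def n_configurations(n_electrons):
    return 2 ** n_electrons

print(n_configurations(5))    # 32
print(n_configurations(100))  # > 1e30: more than a million trillion trillion
```

This exponential growth is exactly why brute-force tomography becomes infeasible and a compact learned representation pays off.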
For eight electrons, each with spin up or down, the software could accurately reconstruct the system with only around 100 measurements. For comparison, a conventional brute-force method required almost 1 million measurements to reach the same level of accuracy. The new technique can also handle much larger systems. In turn, this ability can help scientists validate that a quantum computer is correctly set up and that any quantum software would run as intended, the researchers suggest. Capturing the essence of complex quantum systems with compact artificial neural networks has other far-reaching consequences. Center for Computational Quantum Physics co-director Andrew Millis notes that the ideas provide an important new approach to the center’s ongoing development of novel methods for understanding the behavior of interacting quantum systems, and connect with work on other quantum physics–inspired machine learning approaches. Besides applications to fundamental research, Carleo says that the lessons the team learned as they blended machine learning with ideas from quantum physics could improve general-purpose applications of artificial intelligence as well. “We could use the methods we developed here in other contexts,” he says. “Someday we might have a self-driving car inspired by quantum mechanics, who knows.”
The researchers, from the University of Toronto and the University of Chicago, find that people are more likely to attribute human qualities or traits to inanimate objects if the product fits with their expectations of relevant human qualities – and are also more likely to positively evaluate an anthropomorphized item. “We sometimes see cars as loyal companions going so far as to name them. We argue with, cajole, and scold malfunctioning computers and engines,” explain Pankaj Aggarwal (University of Toronto) and Ann L. McGill (University of Chicago). “We find that if the product has a feature that is typically associated with a human prototype, then people are more likely to humanize the product, and also evaluate it more positively.” For example, the researchers found that people are more likely to buy into the idea of a “family” of products if all the products are differently sized, with some products representing “parents” and others representing a teenager and a small kid. Similarly, non-identical products presented as “twins” fared worse in evaluations than identical objects presented as twins. The researchers also found that products with positive traits were better liked than products with rebellious or negative traits. In the study, identical looking objects presented as “good twins” were better liked than the same products presented as “evil twins.” As the researchers explain: “Efforts by marketers to anthropomorphize products may be viewed as shifting the category of evaluation from product to human, and more specifically, to particular human categories such as friends, helpers, families, or spokespeople.” Suzanne Wu | EurekAlert! 
Found within the Small Magellanic Cloud – a galactic neighbor of the Milky Way – the large region of ionized hydrogen gas is designated "LHa115-N19," and "contains a number of massive stars and overlapping supernova remnants," said Rosa Williams, an astronomer at the U. of I. "We can tell there has been a fair amount of stellar activity going on." From birth to death, massive stars have a tremendous impact on their surroundings. While alive, these stars generate stellar winds that push away nearby gas and dust, forming low-density cavities inside expanding bubbles. When the stars die, shock waves from their death throes can enlarge those bubbles into huge supernova remnants. "In N19, we have not one star, but a number of massive stars blowing bubbles and we have several supernova remnants," Williams said. "Some of these cavities may overlap with one another. Eventually, these bubbles could merge into one enormous cavity, called a superbubble." To identify the locations of massive stars, stellar-wind bubbles and supernova remnants in N19, Williams and colleagues combined optical images, X-ray data and spectroscopic measurements. "We caught this particular region of N19 at a neat moment in time," Williams said. "The stars are just dispersed enough that their stellar winds and supernova blasts are working together, but have not yet carved out a full cavity. We are witnessing the birth of a superbubble." The behavior of matter and energy within a superbubble has implications for the formation of planetary systems, said Williams, who will present her team's findings at the American Astronomical Society meeting in Seattle, on Tuesday (Jan. 9). During its life and death, a massive star forges the heavy elements that enrich the interstellar medium and form planets. "Our own solar system may have formed within the confines of a superbubble," said Williams, who uses an analogy with people to help explain her interest in superbubbles. 
"Some people live pretty independently in isolated country houses, while others live in large cities that require a centralized structure," Williams said. "In N19, we are looking at a possible bridge between an individual star living its life and dying its death, and a community of stars, where living and dying affects other stars and planets, and creates a structure around them." James E. Kloeppel | EurekAlert!
Geneticists show the COPIA-R7 transposon enhances the immunity of its host against a pathogenic microorganism

Transposons are DNA elements that can multiply and change their location within an organism’s genome. Discovered in the 1940s, for years they were thought to be unimportant and were called “junk DNA.” Also referred to as transposable elements and jumping genes, they are snippets of “selfish DNA” that spread in their host genomes serving no other biological purpose but their own existence. Now Tokuji Tsuchiya and Thomas Eulgem, geneticists at the University of California, Riverside, challenge that understanding. They report online in the Proceedings of the National Academy of Sciences that they have discovered a transposon that benefits its host organisms. Working on the model plant Arabidopsis, they found that the COPIA-R7 transposon, which has jumped into the plant disease resistance gene RPP7, enhances the immunity of its host against a pathogenic microorganism that is representative of a large group of fungus-like parasites that cause various detrimental plant diseases. “We provide a new example for an ‘adaptive transposon insertion’ event – transposon insertions that can have beneficial effects for their respective host organisms – and uncover the mechanistic basis of its beneficial effects for plants,” said Thomas Eulgem, an associate professor of plant cell biology and the senior author of the research paper. “While it has been known for a while that transposon insertions can have positive effects for their respective host organisms and accelerate evolution of their hosts, cases of such adaptive transposon insertions have been rarely documented and are, so far, poorly understood.” The COPIA-R7 transposon affects RPP7 by interfering with the latter’s epigenetic code.
In contrast to the well known 4-letter genetic DNA code, which provides instructions for the synthesis of proteins, the “epigenetic code” defines the activity states of genes and determines to what extent their genetic information is utilized. Eulgem explained that the transposition of transposons is typically inhibited by epigenetic silencing signals associated with their DNA. Such epigenetic signals are like molecular “flags” or “tags” that are attached to special proteins, around which DNA is wrapped. A type of molecular flag, referred to as H3K9me2, prohibits transposons from being active and jumping in their host genomes. “An exciting aspect of our work is that H3K9me2 signals associated with COPIA-R7 have acquired a completely new meaning in RPP7 and promote the activity of this disease-resistance gene,” said Eulgem, a member of UC Riverside’s Center for Plant Cell Biology. “By modulating levels of this silencing signal in RPP7, plants can adjust the activity of this disease resistance gene. “Silencing of transposon activity is a complex process that is based on the interplay between different types of epigenetic signals,” Eulgem continued. “Typically H3K9me2 is of critical importance for transposon silencing. However, we found H3K9me2 is not important for COPIA-R7 silencing, perhaps because this type of epigenetic signal has acquired a different function within the RPP7 gene. While we found H3K9me2 to promote RPP7 activity, it seems to have lost its function for COPIA-R7 silencing.” Arabidopsis plants use H3K9me2-mediated messenger RNA processing to accurately set RPP7 activity to precisely defined levels. In principle, scientists interested in crop improvement can now use the UCR discovery to design new types of molecular switches based on H3K9me2-mediated messenger RNA processing. 
Using standard molecular biological methods, transposon sequences that are naturally associated with this epigenetic signal can be inserted into suitable genes and thereby alter the activity levels of these genes. “Our results are critical for the basic understanding of how transposons can affect the evolution of their hosts – something not well understood at this time,” said Tokuji Tsuchiya, the first author of the research paper and an assistant specialist in Eulgem’s lab. “Besides this impact on basic research, the epigenetic mechanism we discovered can possibly be utilized for biotechnological crop improvement. In principle, the switch mechanism we discovered can be applied to all crop species that can be genetically modified.” Next, Eulgem plans to expand his lab’s research to how plants use the modulation of H3K9me2 levels at COPIA-R7 to dynamically adjust RPP7 activity when they are attacked by a pathogenic microorganism and to explore if this mechanism also applies to additional genes. “It would make sense to assume that at other transposons, H3K9me2 levels are also modulated during immune responses and that this epigenetic mark affects the activity of other genes that are important for plant immunity,” Eulgem said. “If this is true, we have uncovered a completely new genetic – or epigenetic – mechanism that allows plants to sense that they are under pathogen attack and to initiate appropriate immune responses.”
The team has evaluated high-resolution pictures from the American space probe "Mars Reconnaissance Orbiter" (MRO), and they show that on the surface of the planet a gully about two metres wide, caused by erosion, has increased in length. Between November 2006 and May 2009 it lengthened by around 170 metres. “The changes to the gully – especially in its length – are the result of small quantities of water ice melting in spring and the subsequent flow movements of a mixture of water and sand,” is the researchers’ conclusion. The annual mean temperature on Mars is around minus 60 degrees Celsius, but towards the end of winter it rises and can go above zero. Then, changes on the surface of Mars can be seen. Black areas on the dunes point to carbon dioxide ice which is either thawing or changing directly from a solid to a gaseous state, i.e. undergoing sublimation. In the spring of the first observed year on Mars – a year there lasts 687 days – a small gully caused by erosion in the so-called Russell Crater grew by about 50 metres in length. This was repeated in the spring of the following Mars year. The gully lengthened down the slope by about 120 metres. How could these gullies develop? Possible explanations are movements of dry masses or the transportation of dry material influenced by liquid carbon dioxide or liquid water. “We can definitely rule out movements of dry masses due to the morphological characteristics of the channels,” says Dennis Reiss. Also, the gullies show one special feature, namely that they become thinner and thinner down the slope. This generally indicates that a liquid seeping into the soil is likely to be responsible for their development. Carbon dioxide becoming liquid for a short time is ruled out by the researchers. “Evaluation of the spectral data shows that in both years all the carbon dioxide ice had already undergone sublimation before the gully arose,” says PhD student Gino Erkeling.
The most likely explanation in the opinion of the researchers is a small quantity of melting water ice which is protected from sublimation by an overlying layer of carbon dioxide ice. The calculations made by the Münster researchers show that the surface temperatures in the Russell Crater at the beginning of spring rise above the freezing point for water. PhD student Karin Bauch is certain that “the carbon dioxide ice – and subsequently the water ice underneath – then begin to melt and there would be a possibility of liquid water on the surface for a short time.” When the water then flows down the slope and collects in gullies, erosion is the result. Moreover, the phases of erosion in both years are almost identical, which leads to the conclusion that it is seasonal effects which are responsible. Prof. Harald Hiesinger, the Director of the Institute of Planetology at Münster University, is also impressed by the fact that there were changes to the gullies over the past years. “These observations,” he says, “are the clearest evidence so far that today water can still flow on the surface of Mars, and in a quantity that is sufficient to cause erosion.” However, only small gullies are made. “The climate on Mars today only allows very little air humidity which can settle on the surface as frost. The quantities which can melt and lead to liquid water are correspondingly small,” explains Dennis Reiss. “So it’s not enough to make large valleys such as were formed in the early years of Mars.” Reference: Reiss D. et al. (2010): Evidence for present day gully activity on the Russell crater dune field, Mars. GEOPHYSICAL RESEARCH LETTERS, VOL. 37, doi:10.1029/2009GL042192 Dr. 
Christina Heimken | idw
NIST Releases a Major Upgrade of Mass Spectra Library
News Dec 23, 2005

After three years of development, the National Institute of Standards and Technology (NIST) has released a major upgrade of the widely used NIST/EPA/NIH Mass Spectral Library. The library is an encyclopedic database of "fingerprints" used to identify chemical compounds with a technique called mass spectrometry. The method uses the masses of molecules to identify unknown chemicals. Samples are first vaporized, then ionized by stripping away one or more electrons, leading to fragmentation. These fragments are finally sorted by their mass-to-charge ratios using magnetic or electric fields, producing a "mass spectrum." Even a sample of a pure element generally produces a spectrum with several peaks representing a unique distribution of masses due to isotopes with varying numbers of neutrons. The new edition of the library, NIST 05, adds approximately 20,000 spectra, bringing the total number of compounds found in the database to 163,000. Each spectrum has been analyzed and critically evaluated to ensure that the library has the best possible current data. The upgraded library also includes two important new classes of chemical reference data. Gas-phase "retention index" data, used in gas chromatography to identify volatile organic compounds, have been added for 25,000 different compounds. And a separate collection of 2,000 tandem mass spectrometry (MS/MS) spectra has been added. MS/MS spectra arise from a process where the ionization and fragmentation steps are separated.
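Library identification of the kind described works by scoring an unknown's peak pattern against every reference spectrum. A minimal sketch of one common scoring approach, cosine similarity over m/z peaks; the compounds and intensities below are made-up illustrations, not values from the NIST library, and real search engines typically also weight peaks by m/z:

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Cosine similarity between two spectra given as {m/z: intensity} dicts."""
    shared = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in shared)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b)

# Hypothetical reference library (illustrative peak lists only).
library = {
    "toluene": {91: 100.0, 92: 62.0, 65: 14.0},
    "benzene": {78: 100.0, 77: 25.0, 52: 19.0},
}

# An unknown spectrum is matched to the reference it resembles most.
unknown = {91: 98.0, 92: 60.0, 65: 12.0}
best = max(library, key=lambda name: cosine_similarity(unknown, library[name]))
```

Here `best` comes out as "toluene", since the unknown shares its fragment pattern and overlaps benzene's at no m/z value.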
To mark UNESCO's International Year of Astronomy (IYA2009), six leading astronomers from the UK, the US, Europe and Asia write in March's Physics World about the biggest challenges and opportunities facing international astronomers over the next couple of decades. Many of those challenges are purely scientific, including the quest to clarify the true nature of dark matter and dark energy; the search for extra-terrestrial life among the myriad of extrasolar planets that are set to be discovered; and finding the first stars that formed after the Big Bang. Other challenges are political - including the need for mass international collaboration to fund and manage astronomical facilities, many of which are so large and expensive that no single country can afford them alone. For example, the Atacama Large Millimeter Array, which is being built in Chile, involves astronomers from the UK, US and Japan. The contributors are Catherine Cesarsky, President of the International Astronomical Union, Martin Rees, the UK's Astronomer Royal, Tim de Zeeuw, Director General of the European Southern Observatory, John Huchra, President of the American Astronomical Society, Andrew Fabian, President of the Royal Astronomical Society, and Seok Jae Park, President of the Korea Astronomy and Space Science Institute. All contributors express optimism about the future of global astronomy, reflecting on the advances that new facilities promise: from the Planck Satellite making detailed observation of fossil radiation, due to take off next month; NASA's planned joint dark energy mission; 2013's launch of the James Webb Space Telescope to help answer questions about the Universe's very first stars; and the European Southern Observatory's European Extremely Large Telescope, which, if built, could be the "world's biggest eye on the sky".
As Tim de Zeeuw, Director General of the European Southern Observatory, writes, "Technological developments now make it possible to observe planets orbiting other stars, peer deeper than ever into the universe, use particles and gravitational waves to study celestial sources, and to carry out in situ exploration of objects in our solar system. This promises tremendous progress towards answering key astronomical questions." But, as Seok Jae Park, President of the Korea Astronomy and Space Science Institute, says, "The greatest challenge for astronomy is international collaboration, because building big and expensive telescopes can no longer be accomplished by a single country alone. It is my hope that IYA2009 will enable astronomers from around the world to create a new tradition of cooperation in astronomy." Catherine Cesarsky, President of the International Astronomical Union, underlines her wish during IYA2009 to communicate the joys and benefits of astronomy. "It is [the] sense of discovery and awe that astronomers wish to share with our fellow citizens all over the world. We thus hope to stimulate a long-term increase in student enrolment in science and technology, and an appreciation for lifelong learning." Source: Institute of Physics
Solitary Wasp: Mellinus arvensis

Everyone is surely familiar with the common yellow and black wasp species, Vespula vulgaris: a social wasp, living in colonies and extremely aggressive if disturbed or threatened. However, there are many other species of wasps, not all of which are yellow and black, and most of which live a pretty solitary life. This species, Mellinus arvensis, is one of those solitary wasp species. Also known as one of the 'digger' wasps, this species excavates a burrow by digging with her mandibles and legs. She then finds a spider (or maybe more than one, depending on the size of the spider), paralyses it with her sting and drags it into her burrow, where she then lays her eggs inside the spider. The eggs hatch and the larvae eat the spider before pupating and overwintering, to emerge as adults in the spring. A bit gruesome maybe, but this sort of thing is going on, usually unseen, in the natural world all the time. Find out more about the nature of Dorset at www.natureofdorset.co.uk
Authors: William JE Brown Although mathematically basic, the geometrical principles enshrined within Edwin Abbott Abbott’s 1884 work, Flatland: A Romance of Many Dimensions are unyieldingly consistent, and although Albert Einstein did not directly credit EA Abbott in Part III of his 1916 popular work Relativity, he deployed the little Flatlanders to great effect assuring us that ‘the three-dimensional spherical space is quite analogous to the two-dimensional spherical surface’. In this series of 15 concise scientific essays we will follow through on the simplicity and consistency of Abbott’s approach. Deriving from Flatland a set of named principles [Appendix 1] which are held to be true of the geometrical relationships between (n-1)D, nD, and (n+1)D, these are brought to bear on the contemporary scientific paradigm with the aim of exploring the potential for a consistent dimensional structure for the whole of nature. Flatland extrapolation through 1/2/3/4D reveals the action of the temporal dimension to be a product of the dimensional viewpoint of the observer; time is therefore not intrinsic to the 4th Dimension. The dimensional structure thus derived exists as a fundamental framework for all of nature, of which combinations of length, width, height, and time merely exhibit properties. Within this structure the universe emerges at the level of the 3rd Dimension (observable) and 4th Dimension (global), adhering strictly to Flatland principles applied spherically throughout. The model described is the finite 3-sphere of Einstein, with the crucial difference that observer and origin are located at antipodean centres (poles) of the 3-hemispheres, rendering the whole ‘observer-centric’. 
Without altering constants, GR, or QM, the model solves the horizon problem of CMB uniformity, explains the 1998 distant SNe Ia light anomaly, shows the universe to have net zero gravity (explaining so-called dark energy), reveals the correct mechanism behind expansion, shows in terms of information transfer why both gravity and light exist at c, describes the mechanism by which the universe diminishes to a Big Bang singularity, and provides a theoretical basis for the Equivalence principle. In the process it dispenses with infinity, superluminality, Cosmic Inflation, the G/DE knife-edge, recent acceleration, and the cosmological constant. Comments: 117 Pages. [v1] 2018-03-14 12:54:14
<urn:uuid:1e3fed76-2f01-4d7c-a752-ceda1fea1c49>
2.859375
671
Knowledge Article
Science & Tech.
27.129009
95,609,654
|Scientific Name:||Ocybadistes knightorum Lambkin & Donaldson, 1994| |Red List Category & Criteria:||Endangered B1ab(iii,v)+2ab(iii,v) ver 3.1| |Assessor(s):||Andren, M. & Cameron, M.A.| The Black Grass-dart Butterfly is assessed as Endangered. It has an extent of occurrence (EOO) of 312 km2, an area of occupancy (AOO) of 76 km2 and is known from three locations. There is continuing decline in the area, extent and quality of habitat and in the number of mature individuals due to invasion of introduced weeds, and this is predicted to continue in the future as climate change results in sea level rise. Fire, residential and commercial development and agricultural development are also threats. |Range Description:||The Black Grass-dart Butterfly is a subtropical narrow-range endemic species of the New South Wales (NSW) north coast, Australia. The habitat is located in two regions, Sawtell (containing 94% of the habitat) and Warrell Creek. The habitat is generally very low-lying, with the exception of two headland occurrences. Most of the habitat (over 90%) is 1-2 m asl.| A simple minimum convex polygon that encompassed all the potential habitat mapped by Andren and Cameron (2012) was used to calculate the extent of occurrence (EOO). The EOO is 312 km2. This area contains some ocean and very extensive unsuitable areas, such as those that have been highly modified by urban and rural development. The Andren and Cameron (2012) mapping was also used as the basis for calculating area of occupancy (AOO). While potential habitat was mapped, since over 97% of it was found to be occupied by O. knightorum, all potential habitat was included in this assessment. The area was scaled up in order to ensure that the assessment against the criteria thresholds was evaluated at the appropriate scale, as recommended in the IUCN guidelines (IUCN 2011). This was achieved by overlaying the habitat map with a 2 km grid and then counting each occupied 4 km2 grid cell.
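The grid-counting step can be made concrete: habitat locations are binned into 2 km × 2 km cells, and the AOO is the number of distinct occupied cells times 4 km². A small sketch of that logic (the coordinates below are invented, not the Andren and Cameron data):

```python
def area_of_occupancy(patch_coords_m, cell_size_m=2000.0):
    """IUCN-style AOO: overlay a grid of cell_size x cell_size cells,
    count the distinct cells containing habitat, and multiply by the
    area of one cell (returned in km^2)."""
    cells = {(int(x // cell_size_m), int(y // cell_size_m))
             for x, y in patch_coords_m}
    return len(cells) * (cell_size_m / 1000.0) ** 2

# hypothetical patch centroids in metres (three occupied cells)
patches = [(100, 100), (150, 90), (2100, 300), (5000, 5000)]
print(area_of_occupancy(patches))  # 12.0 (km^2)
```

The same counting rule applied to the real mapping gives 19 occupied cells, hence the 76 km² quoted in the assessment.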
The total area of habitat mapped by Andren and Cameron (2012) is 0.324 km2. When the 4 km2 grid was overlain, habitat was found to occur in 19 grid cells, equating to an AOO of 76 km2. The mapping of Andren and Cameron (2012) identifies 293 habitat patches. Most of the patches are very small and of poor or zero butterfly habitat value; O. knightorum was absent from 48% of them at the time of sampling. The total habitat area (0.324 km2) is minute compared with that of most other species. Despite the discovery of a second population at Warrell Creek, the habitat remains highly concentrated, with about 75% of it located along a single 7 km stretch of Pine Creek. Sea-level rise is a clear threatening event that will impact O. knightorum. It will affect virtually all the low-lying patches by 2100. Only the two headland populations will not be impacted by this threat. However, each small headland population (in total only 0.4 ha in size, or 1% of the total habitat) could easily be affected by a single threatening event, such as fire, weed invasion or competition from native plants. Therefore, there are three locations. Native: Australia (New South Wales) |Population:||Historical decline has not been documented, but can be inferred from the observed extent of weed invasion of habitat patches and the extent of urban and rural development throughout the range of the species (Andren and Cameron 2014).| |Current Population Trend:||Decreasing| |Habitat and Ecology:||Floyd's Grass (Alexfloydia repens) is the food plant of this species. At Sawtell, high tide is often 0.6 m to 0.8 m asl, and occasionally about 1.0 m asl during the largest king tides or High Higher Water Solstices Springs (HHWSS). Only 7% of the O. knightorum habitat is within this zone of occasional HHWSS tidal inundation, strongly indicating a lack of tolerance by the food plant A. repens for highly saline conditions. A.
repens is suspected not to be able to migrate easily to higher elevations as the sea level rises. Firstly, the rise is very rapid in geomorphological terms and the alluvial terraces that the species currently occupies will not have time to be reformed at higher elevations. Secondly, in many areas the fact that A. repens does not currently occupy the higher sites demonstrates that there are natural restrictions to its spread. Finally, there are many instances where higher elevations are occupied by weeds that are currently invading and would restrict migration (Andren and Cameron 2014).| |Continuing decline in area, extent and/or quality of habitat:||Yes| A clear threat to this species is from rising sea levels (New 2011), since the habitat of the species sits directly above the king tide mark. On the north coast of NSW, there is predicted to be a rise in sea level of 0.9 m above the 1990 level by 2100 (DECCW 2009). Although the science on which this estimate is based is rapidly evolving, it was recently reviewed and still found to be adequate (NSW Chief Scientist and Engineer 2012). Provided these assumptions hold (see Andren and Cameron 2014), 85% of the current habitat would be inundated or too saline for A. repens by 2100. Furthermore, the remaining 15% would be located in thin strips of isolated marginal habitat that would be unlikely to be of high quality for the butterfly. O. knightorum may struggle to survive such extreme habitat loss, as it has not demonstrated a capacity to migrate quickly to higher elevations. There is also predicted to be an increase in the frequency, height and extent of flood events under the currently accepted climate change scenario (DECCW 2010). This is likely to impact the low-lying, riparian habitat. Fire regimes may also be modified by climate change. Increased evaporation and drier conditions in winter are predicted to occur on the NSW north coast (DECCW 2010), which will lead to seasonal alterations in fire frequency and intensity.
Immature stages of O. knightorum may be unable to escape intense fires (D. Sands pers. comm. 2010). Therefore, fires that occur early in the season (e.g. August or September) may result in severe impact if there are few butterflies in the adult stage that are able to escape. Some of the best habitat is associated with peat formations (Sands 1997), increasing susceptibility to fire. At least one patch has experienced a protracted burn consistent with a peat fire (M. Smith pers. comm. 2010, NPWS). Butterflies may be affected by higher temperatures. Two woodland species have been shown to have lower longevity and fecundity in high temperature regimes (Karlsson and Wiklund 2005). The hotter predicted temperatures on the NSW north coast under climate change (DECCW 2010) may therefore also have a detrimental impact on O. knightorum. The crucially important Pine Creek and Bonville Creek catchments (containing 79% of the known habitat) are subject to impoundment, although this has rarely occurred in the recent past. The creek mouth closed in 2012 for the first time in over 50 years and increased water levels in both catchments. Invasion by introduced weeds, particularly L. camara and P. mandiocanum, has previously been identified as a major threat to the habitat quality of O. knightorum (Braby 2000, Sands and New 2002) and in developed catchments there is a significant threat posed by anthropogenic disturbance (Andren and Cameron 2012). These threats continue to operate. In the Sawtell population, L. camara occurred in 63% of patches, with significant infestations in 32% of patches, while P. mandiocanum occurred in 50% of patches, with significant infestations in 23% of patches. Habitat quality is of fundamental importance for the maintenance of butterfly populations (Thomas et al. 2001, Wood and Pullin 2002, Dover and Settele 2009). 
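The 85% inundation figure discussed above is, at heart, a threshold calculation over patch elevations against the projected extreme-tide level. A toy sketch of that logic (the elevations and areas below are hypothetical, not the real Sawtell data):

```python
def inundated_fraction(patch_elevations_m, patch_areas, rise_m=0.9, hhwss_m=1.0):
    """Fraction of habitat area at or below the future extreme-tide level
    (current HHWSS king-tide height plus projected sea-level rise).
    A toy model: real assessments also account for salinity gradients."""
    threshold = hhwss_m + rise_m
    lost = sum(a for e, a in zip(patch_elevations_m, patch_areas)
               if e <= threshold)
    return lost / sum(patch_areas)

# hypothetical patch elevations (m asl) and relative areas;
# most habitat sits at 1-2 m asl, as in the assessment
elevs = [1.0, 1.2, 1.5, 1.8, 2.5, 6.0]
areas = [10, 20, 30, 20, 10, 10]
print(inundated_fraction(elevs, areas))  # 0.8
```

With a 0.9 m rise on top of a ~1.0 m king tide, everything at or below 1.9 m asl is counted as lost, which is why habitat concentrated at 1-2 m asl fares so badly.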
Some preliminary data were collected on the magnitude of the threat posed by invasion of the two most serious weeds, Lantana Lantana camara and Broad-leaved Paspalum Paspalum mandiocanum. Anecdotal observations suggest that these weeds have actively invaded habitat patches (M. Smith pers. comm. 2010, NPWS, M. Andren pers. obs. 2012). Other weeds recorded less frequently included Ochna Ochna serrulata, Senna Senna pendula, Trad Tradescantia fluminensis, Plantain Plantago lanceolata and Rhodes Grass Chloris gayana. In 2000, Braby assessed the species as Vulnerable at a national level, largely due to its restricted distribution and the threat posed by introduced weeds (Braby 2000). Sands and New (2002) evaluated the conservation status in the Action Plan for Australian Butterflies. They also found the species to be nationally Vulnerable, based on: (i) few breeding populations in protected areas; (ii) the small number of populations (five known at the time) and small range of occurrence; and (iii) management that was inadequate for reducing the risks of extinction in the longer term (Sands and New 2002). O. knightorum was not, however, listed under the national Environment Protection and Biodiversity Conservation (EPBC) Act 1999. Finally, it was assessed by the NSW Scientific Committee in 2002 and as a result listed as an Endangered species in NSW in that year. They found that the butterfly occurred in only a few small discrete patches (all threatened by weed invasion) and was likely to become extinct unless the factors threatening its survival cease to operate (NSW Scientific Committee 2002). Conservation planning by the New South Wales State Government is under way for this species under their Save Our Species Program. As a first step in developing a sea-level rise adaptation strategy, potential sea-level rise refugia need to be identified for the species.
Improving the quality of habitat can be more important for butterfly conservation than, for example, increasing habitat size or connectivity (WallisDeVries 2004). To this end, it is vital that weeding programs are maintained and expanded. Research is also needed on the genetic variation within and between the two major and apparently isolated populations. On a more positive note, O. knightorum is a species where active management can play a successful role in securing its long-term persistence. The degree to which the habitat is contained in protected areas is high (Andren and Cameron 2012) and the threat from weed invasion is being managed in some reserves, where key patches have responded well to weeding, providing a number of luxuriant examples of successful habitat rehabilitation. The existence of the headland populations demonstrates that the typical swamp forest habitat is not obligatory for either A. repens or O. knightorum. This suggests that the fundamental niche of both species is considerably broader than that realised across most of their current distribution. Translocation of A. repens has been successful on four occasions and O. knightorum occupies all four translocated patches (Andren and Cameron 2012). It is not known whether the butterfly was inadvertently translocated with its food plant, or later colonised the translocated patches. Regardless of the dispersal mechanism, the translocations are highly successful. |Citation:||Andren, M. & Cameron, M.A. 2014. Ocybadistes knightorum. The IUCN Red List of Threatened Species 2014: e.T64004480A64004482. Downloaded on 21 July 2018.|
<urn:uuid:cf3cef26-cc3e-4dda-9018-f027997bc1d7>
2.640625
2,519
Knowledge Article
Science & Tech.
47.54833
95,609,674
Compton Spectrometer and Imager Explorer - COSI-X is a wide-field gamma-ray telescope fitted onto a super pressure balloon. The Large Area Telescope collaboration operates a gamma-ray telescope onboard the Fermi Gamma Ray Space Telescope mission and has revolutionized our view of the gamma-ray Universe by increasing the number of known sources, unveiling new classes of gamma-ray emitters, and probing particle acceleration and electromagnetic emission in space with unprecedented detail. NASA's Fermi Gamma-ray telescope is a powerful space observatory. Last June, NASA launched a new telescope called the Fermi Gamma-ray telescope into orbit by strapping it on the back of a rocket. If we had an even more powerful gamma-ray telescope to examine the Moon in more detail, the limb (apparent edge) of the lunar disk would be found to be much brighter than its center, Thompson predicts. The scientific objectives of the GLAST mission require a high-energy gamma-ray telescope. The gamma-ray telescope was tethered to a helium-filled balloon which floated 24 miles above Earth's surface for 12 days, using the spiraling polar vortex, before it was cut down, and it had been sitting on the icy plains since. The spacecraft carries an imaging gamma-ray telescope as its main payload. The mission's scientific objectives require a high-energy gamma-ray telescope with angular resolution sufficient to identify point sources with objects at other wavelengths, a wide field-of-view that will permit the study of sources that exhibit extreme intensity variations on timescales from seconds to months or longer, and a large effective area to detect a large sample of sources and determine their energy spectra. Third, we will use an innovative method to isolate the CIB from early epochs using data from the Fermi gamma-ray telescope. Three minutes and 12 seconds after Swift's gamma-ray telescope found the burst, the craft automatically turned its X-ray telescope to the same point in the sky.
India intends to set up one more gamma-ray telescope at Hanle, Ladakh in Jammu and Kashmir besides the one that is all set to be transported to the place.
<urn:uuid:2ee5358d-0359-4adc-b2a5-6c423c124623>
3.578125
464
Knowledge Article
Science & Tech.
17.632281
95,609,683
Michael Mann knows more about Earth's climate over the last 1,000 years than most. He's studied the history of changes in Earth's climate over the past 1,000 years to better understand how human-driven climate change of the 21st century compares. Mann has also pioneered techniques that climate scientists use today to discover patterns in past climate change and to better understand how our situation may develop in the next 10, 100, and 1,000 years. Mann is outspoken against climate change skeptics: one of his notable achievements was helping to found RealClimate, a website run by a group of climate scientists who strive to provide the scientific facts behind mainstream discussions of climate change. If you're looking for the statistics, facts, and other real science behind the latest news or hype regarding what a celebrity, politician, or even the Pope said about climate change, you're likely to find it on RealClimate. For his work, Mann has received numerous awards, including the first Friend of the Planet Award from the National Center for Science Education. Mann is a distinguished professor of Meteorology and Director of the Earth System Science Center at Penn State University.
<urn:uuid:f1d33569-6d4f-4562-885a-c3596a5be834>
2.75
308
News (Org.)
Science & Tech.
35.406119
95,609,768
Populations of marine mammals, birds, reptiles and fish have dropped by about half in the past four decades, with fish critical to human food suffering some of the greatest declines, WWF warned Wednesday. In a new report, the conservation group cautioned that over-fishing, pollution and climate change had significantly shrunk the size of commercial fish stocks between 1970 and 2010. WWF’s Living Blue Planet Report indicated that species essential to the global food supply were among the hardest hit. One family of fish, which includes tuna and mackerel, had for instance declined 74 percent during the 40-year period, it found. “In the space of a single generation, human activity has severely damaged the ocean by catching fish faster than they can reproduce while also destroying their nurseries,” Marco Lambertini, head of WWF International, said in a statement. “Overfishing, destruction of marine habitats and climate change have dire consequences for the entire human population, with the poorest communities that rely on the sea getting hit fastest and hardest,” he warned. “Profound changes are needed to ensure abundant ocean life for future generations,” he insisted. Fish are not the only marine species that are suffering. The WWF report also shows there has been a steep decline in coral reefs, mangroves and seagrasses that support fish species — with more than one third of fish tracked for the study relying on coral reefs and some 850 million people around the world relying on them for their livelihoods. A previous report from the group showed that half of all corals have already vanished, and they are all expected to be gone by 2050 if temperatures continue to rise at the same rate. WWF’s analysis tracked 5,829 populations of 1,234 species — nearly twice as many as in its past studies, giving “a clearer, more troubling picture of ocean health.” One in four species of both sharks and rays is facing extinction, largely due to overfishing, the report said.
WWF called on global leaders to ensure that ocean recovery and coastal habitat health figure high on the list of priorities when the United Nations’ Sustainable Development Goals for the next 15 years are formally approved later this month. “We must take this opportunity to support the ocean and reverse the damage while we still can,” Lambertini said. – Solutions possible – While highlighting the severity of the crisis, WWF stressed that the ocean is a renewable resource and that marine life can be restored if the human population lives within “sustainable limits.” The report called for the amount of ocean area worldwide that is currently protected (3.4 percent) to be tripled by 2020. Among its other recommendations was a call for consumers and sellers of fish products to increasingly demand stock from companies that follow internationally recognised best practices. A further suggestion was that funds specifically allocated to restore marine life would be repaid with future profits from the fishing industry. “The pace of change in the ocean tells us there’s no time to waste,” Lambertini said. “These changes are happening in our lifetime. We can and we must correct course now.”
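Headline trends like "dropped by about half" come from combining thousands of individual population time series, typically as a geometric mean of each population's ratio to its baseline year — the approach behind WWF's Living Planet Index. A simplified sketch with invented numbers (the real index also interpolates missing years and weights taxa and regions):

```python
import math

def living_planet_index(pop_series):
    """Geometric-mean index in the spirit of the Living Planet Index:
    for each year, average the log of each population's ratio to its
    own baseline (first) year, then exponentiate.
    pop_series: list of per-population count lists, all the same length."""
    n_years = len(pop_series[0])
    index = []
    for t in range(n_years):
        logs = [math.log(p[t] / p[0]) for p in pop_series]
        index.append(math.exp(sum(logs) / len(logs)))
    return index

# hypothetical populations: one halves, one stays flat
print(living_planet_index([[100, 50], [200, 200]]))
# second value is sqrt(0.5 * 1.0) ≈ 0.707
```

The geometric mean keeps a doubling and a halving symmetric, so no single large population dominates the index the way it would in an arithmetic mean.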
<urn:uuid:216e5aa3-be1f-4015-b814-183e18a2f1ad>
3.5
663
News Article
Science & Tech.
36.33116
95,609,778
Gram staining of bacteria is a routine diagnostic method of long standing that can be used for initial diagnoses and to simplify the choice of antibiotics. It is a simple way to classify bacteria into two classes—Gram-positive and Gram-negative—under a microscope. In the journal Angewandte Chemie, American researchers have now introduced an improvement to this method: magnetic Gram staining. This allows for the class-specific, automated, magnetic detection and separation of bacteria. Gram staining was developed about a hundred years ago by Danish bacteriologist Hans Christian Gram. In this technique, bacterial cultures are colored by a stain known as crystal violet, which enters into the murein layer of the bacterial cell walls. Treatment with an iodine-containing solution forms water-insoluble complexes between the crystal violet and iodine. There are two classes of bacteria that differ in the structures of their cell walls. A thick murein layer surrounds one class; the others have only a thin one. Whereas subsequent treatment with ethanol dissolves the stain complex out of the thin murein layer, it remains firmly lodged in the thick murein layers. Bacteria whose stain can be washed away in this manner are classified as Gram-negative; those that remain dark purple are Gram-positive. Scientists working with Ralph Weissleder at Harvard University in Boston (USA) have now developed Gram staining into a magnetic diagnostic technique. To achieve this, they attached a “molecular hook” to the molecules of crystal violet. With this modified dye, the staining process works just as it does with the original. After staining, however, “eyes” that correspond to the “hooks” are used to attach magnetic nanoparticles to the stain. 
This makes it easy to quantify the bacteria: nuclear magnetic resonance (NMR) instruments detect the magnetization of the nanoparticles. It is possible to take an NMR measurement before washing with ethanol to obtain the total number of Gram-positive and Gram-negative bacteria, and again after the washing step to determine the concentration of Gram-positive bacteria. The advantage of this magnetic detection method is its high sensitivity. It is possible that samples could be directly magnetized and measured without prior purification or culture of the bacteria. By using the simple but sensitive miniaturized micro-NMR instruments developed by this research group, fast and sensitive on-the-spot diagnosis is conceivable. In addition, the magnetization could be used for the separation of bacteria from the sample. Ralph Weissleder | Angewandte Chemie
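The two-step readout described above reduces to simple arithmetic once the NMR signal is calibrated to cell counts: the pre-wash signal reflects all stained bacteria, while the post-wash signal reflects only the Gram-positives, whose stain survives the ethanol wash. A toy sketch (the numbers and the `signal_per_cell` calibration factor are hypothetical, not from the paper):

```python
def gram_counts(nmr_before_wash, nmr_after_wash, signal_per_cell):
    """Back out Gram-positive and Gram-negative counts from the two
    NMR measurements: before the ethanol wash (all stained cells)
    and after it (Gram-positives only)."""
    total = nmr_before_wash / signal_per_cell
    gram_positive = nmr_after_wash / signal_per_cell
    gram_negative = total - gram_positive
    return gram_positive, gram_negative

# hypothetical signal values and calibration factor
print(gram_counts(1000.0, 400.0, 2.0))  # (200.0, 300.0)
```

In practice the calibration factor would come from measuring standards of known cell concentration on the same instrument.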
<urn:uuid:c31190da-9d6e-412c-bf63-3762328dce68>
4.0625
1,110
Content Listing
Science & Tech.
34.540177
95,609,795
Why do things burn up on re-entry into earth's atmosphere? The main reason why things heat up when they hit the Earth's atmosphere is that they've got huge amounts of kinetic energy - they're going incredibly fast. When they bash into the Earth's atmosphere, most of the heating is actually because the air they bash into hasn't got time to get out of the way, so the air gets compressed, and when you compress air - you may have noticed if you've ever pumped up a bicycle tire very, very quickly - it gets hotter. So the air in front of the asteroid becomes incredibly hot, and that starts to erode the surface of the meteorite, and you get this tail of very, very hot stuff behind the meteorite which you see as a shooting star. With very, very small things, because the friction is so much larger compared to their mass, they tend to lose their speed very high up in the atmosphere much more gently, so they slow down much more gently and don't get as hot. Once they slow down enough, they just drift down like dust does, gently, through the atmosphere. So it is conceivable that something like a bacterium on a small dust grain could survive, whereas a big lump would melt very quickly. Heat = atoms smashing into one another. Hot is when the atoms are smashing into each other a lot at high speed. There are almost no atoms in outer space, so meteors, etc., are freezing. The second a meteor enters our atmosphere, it hits air. Thus, we have very fast meteors hitting atoms. Thus, we have heat. If you remember that "air" really is "something" (it's a gas)... then consider that, when a meteor or a space vehicle or a piece of space debris starts to move toward Earth... and, at the speeds that they travel (a space shuttle begins its descent at about 18,000 MPH!)... then the FRICTION of the air upon the impacting surface becomes so great that a lot of heat is developed.
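To put a number on the "huge amounts of kinetic energy" point: the energy per kilogram at re-entry speed is just 0.5·v², and at orbital speeds it dwarfs the energy needed to melt or vaporize most materials. A quick back-of-envelope:

```python
def specific_kinetic_energy(speed_m_s):
    """Kinetic energy per kilogram of moving mass: 0.5 * v^2, in J/kg."""
    return 0.5 * speed_m_s ** 2

# a shuttle begins re-entry near 18,000 mph ≈ 8,047 m/s
v = 18000 * 0.44704          # mph -> m/s
e = specific_kinetic_energy(v)
print(round(e / 1e6, 1), "MJ per kg")  # ~32.4 MJ per kg
```

Roughly 32 MJ/kg is several times the latent heat of vaporization of iron (on the order of 6 MJ/kg), which is why even a modest fraction of that energy reaching the object's surface is enough to ablate it.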
When one of the space shuttles broke apart (over Texas and Louisiana), we learned that where tiles on the "under-side" that impinged on the Earth's atmosphere were compromised (the underside leads going into the air/atmosphere), the temperatures were great enough to melt the metal that was under the tiles. The high temperatures impinged upon the underside of the shuttle's metal belly and started making holes in it... until those holes upset the aerodynamics of the vehicle, caused it to go out of control and, ultimately, break apart and fall to Earth. Most space debris ("meteors" or "shooting stars") are small objects (few are even as large as a baseball), which glow hot upon entering the Earth's atmosphere... and they quickly break apart and the resulting debris (rather dustlike) falls harmlessly to Earth. As much as several TONS of space debris intersects with Earth each year!!!! Very large objects, like the one that created Meteor Crater, Arizona, or the one that struck the Russian steppe in the early 1900s, can be very, very destructive when they land. In the case of space ships, they're solid and moving super fast. Think of running really fast and doing a belly dive into some carpet. Friction burns are no fun, and it's a similar idea to what happens when a space ship "hits" the atomic gases of the atmosphere on re-entry.
Mars today is a dry, frozen place. But this was not always the case. Ancient Mars was likely warm and wet, much like Earth. So what happened to change it? Thanks to brand new results from NASA's MAVEN mission, announced today, we may finally know. Blame the solar winds.

The Solar Winds

"When we look at ancient Mars we see a different kind of surface, an environment that was able to support water on the surface," said MAVEN principal investigator Bruce Jakosky today. "So what happened to the carbon dioxide in that atmosphere? What happened to the water on early Mars?"

Over the last year, MAVEN has been studying Mars' atmosphere carefully. Using that data, researchers revealed (along with the simultaneous publication of the results in Science and Geophysical Research Letters) that, under bombardment from solar winds, Mars' atmospheric gases were being stripped away. While the solar winds don't strike the planet's surface directly, they have been steadily stripping atmospheric gas from around the planet, coming off in bursts, as MAVEN lead instrument investigator Jasper Halekas explained, "like the shock wave around a jet plane."

As the solar winds carried off more and more of Mars' atmospheric gas, the planet was left less and less able to sustain the watery surface it once had. "The analogy I use," Jakosky explained, "is when I step out of the shower into the breeze, the water in my hair is just whisked away by the wind."

Today, that loss of atmospheric gas continues at a rate of about 113 g of gas per second. Researchers believe, however, that in Mars' earlier days, as it first began to lose atmospheric gas, the rate was much higher. Even today, the rate can easily jump 10-20 times during a solar storm.

Is Mars Earth's Future?

So, if Mars was once a wet planet that lost its water along with its atmosphere, what about Earth? Should we worry that our own home might one day look like the red planet's dry, dusty surface? Fortunately, there's something significant standing in the way: Earth's magnetic field. Like Mars, Earth is also subject to powerful solar winds, and like Mars, it also loses some of its atmospheric gases. But the geomagnetic field deflects the solar wind enough that it does not run up as directly against the atmospheric gases.

Of course, as researchers noted during their announcement of the results today, 3.7 billion years ago early Mars also had a geomagnetic field protecting it, a fact that allowed its ancient waters to flow. Losing that field set off the chain reaction that eventually turned Mars into the dusty, red, frozen world that we know. "The turn-off of the magnetic field is what allowed the turn-on of the stripping by the solar wind," noted Jakosky.

Top image: Solar winds sweep against Mars' atmosphere/NASA. All images and charts via NASA
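The quoted loss rate invites a quick back-of-envelope check. The 113 g/s figure is from the article; the mass of Mars' current thin atmosphere (~2.5e16 kg) is an outside round number assumed here purely for scale:

```python
# Back-of-envelope: how much atmosphere does a 113 g/s loss rate remove?
SECONDS_PER_YEAR = 3.156e7

rate_kg_per_s = 0.113                                      # 113 g/s, from the article
loss_per_gyr_kg = rate_kg_per_s * SECONDS_PER_YEAR * 1e9   # kg lost per billion years

mars_atmosphere_kg = 2.5e16                                # rough mass of Mars' CURRENT atmosphere (assumed)
atmospheres_per_gyr = loss_per_gyr_kg / mars_atmosphere_kg

print(f"~{loss_per_gyr_kg:.1e} kg lost per billion years")
print(f"~{atmospheres_per_gyr:.0%} of the current atmosphere per billion years")
```

At today's rate the solar wind would strip only about a seventh of even the current thin atmosphere per billion years, which is exactly why the researchers argue the loss rate in Mars' early days must have been far higher.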
Though viruses are the most abundant life form on Earth, our knowledge of the viral universe is limited to a tiny fraction of the viruses that likely exist. In a paper published this week in the online journal mBio, researchers from the University of Pittsburgh, Washington University in St. Louis, and the University of Barcelona found that raw sewage is home to thousands of novel, undiscovered viruses, some of which could relate to human health. There are roughly 1.8 million species of organisms on our planet, and each one is host to untold numbers of unique viruses, but only about 3,000 have been identified to date. To explore this diversity and to better characterize the unknown viruses, Professor James Pipas, Distinguished Professor of Biological Sciences Roger Hendrix, and Assistant Professor Michael Grabe, all of the Department of Biological Sciences in Pitt's Kenneth P. Dietrich School of Arts and Sciences, are developing new techniques to look for novel viruses in unique places around the world. With coauthors David Wang and Guoyan Zhao of Washington University in St. Louis and Rosina Girones of the University of Barcelona, the team searched for the genetic signatures of viruses present in raw sewage from North America, Europe, and Africa. In the paper, titled "Raw Sewage Harbors Diverse Viral Populations," the researchers report detecting signatures from 234 known viruses that represent 26 different families of viruses. This makes raw sewage home to the most diverse array of viruses yet found. "What was surprising was that the vast majority of viruses we found were viruses that had not been detected or described before," says Hendrix. The viruses that were already known included human pathogens like Human papillomavirus and norovirus, which causes diarrhea. Also present were several viruses belonging to those familiar denizens of sewers everywhere: rodents and cockroaches. 
Bacteria are also present in sewage, so it was not surprising that the viruses that prey on bacteria dominated the known genetic signatures. Finally, a large number of the known viruses found in raw sewage came from plants, probably owing to the fact that humans eat plants, and plant viruses outnumber other types of viruses in human stool. This study was also the first attempt to look at all the viruses in the population. Other studies have focused on bacteria, or certain types of viruses. The researchers also developed new computational tools to analyze this data. This approach, called metagenomics, had been done before, but not with raw sewage. The main application of this new technology, says Hendrix, will be to discover new viruses and to study gene exchange among viruses. "The big question we're interested in is, 'Where do emerging viruses come from?'" he says. The team's hypothesis is that new viruses emerge, in large part, through gene exchange. But before research on gene exchange can begin in earnest, large numbers of viruses must be studied, the researchers say. "First you have to see the forest before you can pick out a particular tree to work on," says Pipas. "If gene exchange is occurring among viruses, then we want to know where those genes are coming from, and if we only know about a small percentage of the viruses that exist, then we're missing most of the forest." Karen Hoffmann | EurekAlert!
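The signature search described above, scanning sequence data against known viral genomes, can be caricatured as shared k-mer matching. This is only a toy sketch with made-up sequences, not the team's actual pipeline (real metagenomic analyses use far more sensitive alignment and classification tools):

```python
# Toy sketch of signature matching: does a sequencing read share any
# k-mer (length-k substring) with a known viral reference sequence?
def kmers(seq, k):
    """All length-k substrings of seq, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def matches_reference(read, reference, k=8):
    """True if the read shares at least one k-mer with the reference."""
    return bool(kmers(read, k) & kmers(reference, k))

reference = "ATGGCGTACGTTAGCCGTATCGGA"   # pretend viral genome fragment
read_hit  = "CCGTATCGGATTTT"            # overlaps the reference's 3' end
read_miss = "AAAAAAAACCCCCCCC"          # shares no 8-mer with it

print(matches_reference(read_hit, reference))   # True
print(matches_reference(read_miss, reference))  # False
```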
Galaxies in the early universe grew fast by rapidly making new stars. Such prodigious star formation episodes, characterized by the intense radiation of the newborn stars, were often accompanied by fireworks in the form of energy bursts caused by the massive central black hole accretion in these galaxies. This discovery by a group of astronomers led by Peter Barthel of the Kapteyn Institute of the University of Groningen in the Netherlands is published today in the Astrophysical Journal Letters. Our Milky Way galaxy forms stars at a slow, steady pace: on average one new star a year is born. Since the Milky Way contains about a hundred billion stars, the actual changes are very slight. The Milky Way is an extremely quiet galaxy; its central black hole is inactive, with only weak energy outbursts due to the occasional capture of a passing star or gas cloud. This is in marked contrast to the ‘active’ galaxies of which there are various types and which were abundant in the early universe. Quasars and radio galaxies are prime examples: owing to their bright, exotic radiation, these objects can be observed as far as the edge of the observable universe. The light of the normal stars in their galaxies is extremely faint at such distances, but active galaxies can be easily detected through their luminous radio, ultraviolet or X-ray radiation, which results from steady accretion onto their massive central black holes. Until recently these distant active galaxies were only interesting in their own right as peculiar exotic objects. Little was known about the composition of their galaxies, or their relationship to the normal galaxy population. However, in 2009 ESA's Herschel space telescope was launched. Herschel is considerably larger than NASA's Hubble, and operates at far-infrared wavelengths. This enables Herschel to detect heat radiation generated by the processes involved in the formation of stars and planets at a small scale, and of complete galaxies at a large scale. 
Peter Barthel has been involved with Herschel since 1997 and heads an observational programme targeting distant quasars and radio galaxies. His team used the Herschel cameras to observe seventy of these objects. Initial inspection of the observations has revealed that many emit bright far-infrared radiation. The Astrophysical Journal Letter ‘Extreme host galaxy growth in powerful early-epoch radio galaxies’, by Peter Barthel and co-authors Martin Haas (Bochum University, GER), Christian Leipski (Max-Planck Institute for Astronomy, Heidelberg, GER) and Belinda Wilkes (Harvard-Smithsonian Center for Astrophysics, Cambridge, USA), describes their project and the detailed analysis of the first three distant radio galaxies. The fact that these three objects, as well as many others from the observational sample, emit strong far-infrared radiation indicates that vigorous star formation is taking place in their galaxies, creating hundreds of stars per year during one or more episodes lasting millions of years. The bright radio emission implies strong, simultaneous black hole accretion. This means that while the black holes in the centres of the galaxies are growing (as a consequence of the accretion), the host galaxies are also growing rapidly. The Herschel observations thereby provide an explanation for the observation that more massive galaxies have more massive black holes. Astronomers have observed this scaling relationship since the 1990s: the fireworks in the early universe could well be responsible for this relationship. Barthel: ‘It is becoming clear that active galaxies are not only among the largest, most distant, most powerful and most spectacular objects in the universe, but also among the most important objects; many if not all massive normal galaxies must also have gone through similar phases of simultaneous black hole-driven activity and star formation.’ Contact: Prof. Peter Barthel, tel. 
+31-6-11391826

The Astrophysical Journal Letters, Volume 757, Number 2: "Extreme Host Galaxy Growth in Powerful Early-epoch Radio Galaxies," Peter Barthel, Martin Haas, Christian Leipski, and Belinda Wilkes
XML Schema Definition (XSD)
by Vijay Mukhi, Shruti Gupta, Sonal Mukhi
The XML Schema specifies the properties of a resource, while the XML file stipulates a set of values for those properties. The primary utility of the XML Schema lies in its ability to concede a generous amount of autonomy to the programmer to define the rules of data validity, and thereafter to hand over the responsibility of data validation to the XML validator. This liberates the programmer from the mundane drudgery of data validation. Download or read it online for free here:

by Norman Walsh - O'Reilly Media
This introduction to XML presents the Extensible Markup Language at a reasonably technical level for anyone interested in learning more about structured documents. In addition to the XML 1.0 Specification, this text outlines related XML specifications.

by Steve Holzner - Sams
The book offers real-world XML examples and covers basic syntax, XML document structure, document types, the benefits of XML Schema, formatting using CSS, working with XHTML, XForms, and building XML into database or Web Service applications with SOAP.

by DJ Adams - O'Reilly Media
This book offers developers a chance to learn and understand the Jabber technology and protocol from an implementer's point of view. Detailed information on each part of the Jabber protocol is introduced, explained, and discussed.

by Ronald Bourret
This paper gives a high-level overview of how to use XML with databases. It describes how the differences between data-centric and document-centric documents affect their usage with databases and how XML is commonly used with relational databases.
Inheritable, overridable class data

Class::Data::Inheritable is for creating accessors/mutators to class data. That is, if you want to store something about your class as a whole (instead of about a single object). This data is then inherited by your subclasses and can be overridden.

Source Files (merged sources derived from linked package):

| File | Size | Last changed |
|------|------|--------------|
| Class-Data-Inheritable-0.08.tar.gz | 5.53 KB | almost 9 years ago |
| _link | 128 Bytes | over 4 years ago |
| perl-Class-Data-Inheritable.changes | 2.74 KB | over 6 years ago |
| perl-Class-Data-Inheritable.spec | 3.61 KB | over 6 years ago |
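The same idea can be sketched outside Perl. A rough Python analogue of inheritable, overridable class data (illustrative only; the class and attribute names are invented, and this is not part of the distribution above):

```python
class Base:
    # Class-level data: shared with all subclasses until one overrides it.
    _data_dir = "/tmp/base"

    @classmethod
    def data_dir(cls, value=None):
        # Combined accessor/mutator in the spirit of Class::Data::Inheritable:
        # setting the value stores it on *this* class only, so the parent
        # (and sibling subclasses) keep their own value.
        if value is not None:
            cls._data_dir = value
        return cls._data_dir

class Child(Base):
    pass

print(Base.data_dir())       # /tmp/base  (default)
print(Child.data_dir())      # /tmp/base  (inherited)
Child.data_dir("/tmp/kid")   # override in the subclass...
print(Base.data_dir())       # /tmp/base  (...parent unaffected)
```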
CSS stands for Cascading Style Sheets. CSS rules give a web page its polished look, because HTML by itself offers few styling options. It is also wise to keep a website's style information separate from its main content. Depending on the website, you can add CSS rules inside the HTML document or in a separate file. This tutorial is for anyone interested in learning CSS. To get the most out of it, you should be familiar with basic HTML syntax; you can visit our HTML tutorial page to learn the basics of HTML. In this section you will find topics related to CSS, each with articles listed under it. Read the tutorials and practice the examples to gain mastery over CSS.
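As a first taste, here is a minimal, hypothetical page showing both placements: rules inside the HTML document in a `<style>` block, and rules in a separate file pulled in with `<link>` (the file name `styles.css` is just an example):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Option 1: CSS in a separate file -->
  <link rel="stylesheet" href="styles.css">
  <!-- Option 2: CSS inside the HTML document -->
  <style>
    h1 { color: steelblue; font-family: sans-serif; }
    p  { line-height: 1.5; }
  </style>
</head>
<body>
  <h1>Hello, CSS</h1>
  <p>This paragraph is styled by the rules above.</p>
</body>
</html>
```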
COLUMBUS, Ohio - If you were fortunate enough to witness the recent total solar eclipse in all its glory, you might have noticed something surprising. It was dark as night, yet people and objects were easier to see than on a typical moonless night. Scientists at The Ohio State University have discovered a possible biological explanation - the presence (or absence) of a protein in the retina known as a GABA receptor. GABA, short for gamma-aminobutyric acid, is a chemical messenger responsible for communication between cells, especially those in the brain. The GABA receptor is in abundance on certain cells in the retina on sunny days, and enhances the ability to see details and edges of objects. At night, it disappears. But that process is normally gradual. When the total eclipse took viewers from brightness to darkness in minutes, the GABA receptor would have still been present on those cells in their eyes, giving them super-sharp night vision for a brief time, said lead researcher Stuart Mangel, a professor of neuroscience at the Ohio State University College of Medicine. The study, which was conducted in rabbits, also found that the neurotransmitter dopamine, which increases in the light and decreases in the dark, regulates whether the GABA receptor is working. "It has been known for decades that there is a mechanism in the retina in the eye that helps us see small objects and detect edges on bright days, and that this mechanism gradually turns off when it is dark. However, what this mechanism is and how it is controlled has been a mystery," said Mangel, a member of Ohio State's Neuroscience Research Institute. The research appears in the journal Current Biology. "On bright days, dopamine levels are high and signaling is strong, enhancing the detection of spatial details and edges," Mangel said. "On moonless nights, however, dopamine levels are low and the GABA signal is minimal, decreasing our ability to see those details." 
Mangel, who visited Tennessee for the Aug. 21 eclipse, said he and others experienced an unusual clarity of vision during the minutes when the moon shut out the sun's rays. "During the total eclipse, it was as dark as it usually is at dusk. Several people I was with commented that they could see as well during totality as they could when it had been bright, and that their acuity was much better than it usually is when it is dark at dusk," he said. He realized at the time that his research offers one explanation. Normally, when you're outdoors, it takes hours for the background light to decrease from bright to dark as the Earth rotates on its axis. When it finally becomes dark at dusk, a person or animal's ability to see small details is much lower than during the middle of the day. Visual performance needs change with the ambient light level, Mangel said. We need to see fine spatial details on bright days and to see large dim objects on moonless nights. "Evolution has made trade-offs so that we can see well on bright days and on moonless nights," he said. "My findings show that the change in background light triggers a process in the retina that normally takes hours. This process involves assembling and moving the GABA receptor protein to a specific site in the retina when it is bright, and disassembling the same protein and moving it away from the synapse as it becomes dark," Mangel said. "The reason our acuity stayed high during the total eclipse is that there wasn't enough time for protein disassembly to take place." Other researchers who worked on the study were Antoine Chaffiol, Masaaki Ishii and Yu Cao. The National Institutes of Health and the Plum Foundation supported the research. CONTACT: Stuart Mangel, 614-292-5753; Stuart.Mangel@osumc.edu Written by Misti Crane, 614-477-2964; Crane.firstname.lastname@example.org
Using the SDL2 library for creating an OpenGL context, GLSL for vertex and fragment shaders, and the C++ programming language (C++11 specification), you should create a simple two-dimensional (2D) game engine supporting sprited tiled maps and 2D collision based on simple rectangle intersection. The game engine should also have all the features necessary for creating a simple 2D space shooter game: sprite sheet loading, color keying, 2D animations, 2D collision detection, keyboard and gamepad control, etc. For testing your game engine, you must use the assets of Xenon 2000: Project PCF (which will be provided) and create your own Xenon 2000 clone. As a reference for your implementation, consider the following video: [login to view URL]

Your Xenon 2000 clone must have at least: 4 different types of enemies, 3 power-ups, one ship add-on and 3 different types of shots. The implemented game engine must use the OpenGL 3.3 specification or higher. The engine architecture must follow the object-oriented paradigm and be as generic as possible, so that it can be used for creating other kinds of 2D games. Resource management must also be taken into account: every time an allocated resource is no longer required, it should be freed. You must avoid resource/memory leaks at all costs. For the game loop inside your game engine, you must separate the game logic from the rendering: during each loop, first all actors or game objects have their game logic updated (positions, etc.), and then all actors are rendered.

8 freelancers are bidding an average of €207 for this job

Hi, I am a professional in game development. I have good experience with 2D game engines using SDL2 and OpenGL in C++. I can do this project perfectly. Thanks.
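The brief specifies C++/SDL2, but the "simple rectangle intersection" collision test it asks for is language-neutral. A minimal sketch (shown here in Python for brevity, with invented field names; in SDL2 itself the equivalent is an axis-aligned check over `SDL_Rect` values):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle: top-left corner plus width and height."""
    x: float
    y: float
    w: float
    h: float

def intersects(a: Rect, b: Rect) -> bool:
    # Two axis-aligned rectangles overlap iff they overlap on BOTH axes;
    # strict '<' means edge-touching rectangles do not count as colliding.
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

player = Rect(10, 10, 32, 32)
shot   = Rect(30, 30, 8, 8)
far    = Rect(100, 100, 8, 8)
print(intersects(player, shot))  # True
print(intersects(player, far))   # False
```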
5 February 2009 Primitive whales gave birth on land by Kate Melville A pair of ancient whale fossils - a pregnant female and a male of the same species - reveals how these primitive ancestors of today's whales gave birth and provides new insights into how whales made the transition from land to water. The 48 million-year-old fossils, discovered in Pakistan in 2000 and 2004, are described in a paper published in PLoS. University of Michigan paleontologist Philip Gingerich, who led the team that made the discoveries, was at first perplexed by the assortment of adult female and fetal bones found together. "When I first saw the small teeth in the field, I thought we were dealing with a small adult whale, but then we continued to expose the specimen and found ribs that seemed too large to go with those teeth," he said. "By the end of the day, I realized we had found a female whale with a fetus." The find was the first discovery of a fetal skeleton of an extinct whale in the group known as Archaeoceti, and it is a new species which the researchers have dubbed Maiacetus inuus. The fetus is positioned for head-first delivery, like land mammals but unlike modern whales, indicating that these whales still gave birth on land. Another clue to the whales' lifestyle is the well-developed set of teeth in the fetus, suggesting that Maiacetus newborns were equipped to fend for themselves, rather than being helpless in early life. The 8.5-foot-long male specimen, collected four years later from the same fossil beds, shares characteristic anatomical features with the female of the species, but its virtually complete skeleton is 12 percent larger overall, and its canine teeth are 20 percent larger. The size difference of male and female Maiacetus is only moderate, hinting that the males didn't control territories or command harems of females. 
"The whales' big teeth, well-suited for catching and eating fish, suggest the animals made their livings in the sea, probably coming onto land only to rest, mate and give birth," speculated Gingerich. Like other primitive archaeocetes, Maiacetus had four legs modified for foot-powered swimming, and although these whales could support their weight on their flipper-like limbs, they probably couldn't travel far on land. "They clearly were tied to the shore," Gingerich said. "They were living at the land-sea interface and going back and forth." Compared with previous fossil whale finds, Maiacetus occupies an intermediate position on the evolutionary path that whales traversed as they made the transition from full-time land dwellers to dedicated denizens of the deep. "Specimens this complete are virtual 'Rosetta stones'," Gingerich said, "providing insight into functional capabilities and life history of extinct animals that cannot be gained any other way." Human Noise Wrecks Whales' Sex Lives Source: University of Michigan
6 June 2011 Antimatter bottled up for 16 minutes by Kate Melville Antimatter remains an enigma, but researchers at CERN may soon be able to ascertain some of its key properties thanks to groundbreaking techniques they've developed that trap and store antimatter for more than 15 minutes. Reporting their work in Nature Physics, the ALPHA team (an international group of scientists working at CERN) outlined their plans to refine the antihydrogen trap with "the hope that by 2012 we will have a new trap with laser access to allow spectroscopic experiments on the antiatoms." Antihydrogen atoms were first made in large quantities at CERN eight years ago, but they can't be stored conventionally, since antiatoms touching ordinary-matter atoms in the walls of the container would instantly annihilate each other. Using a "magnetic bottle" with a superconducting magnet to suspend the antiatoms away from the bottle walls, the researchers last year demonstrated trapping of antihydrogen atoms for about a tenth of a second. Improvements to the technique have now enabled routine trapping times of more than 15 minutes. Antimatter is puzzling to cosmologists and physicists because it should have been produced in equal amounts with normal matter during the Big Bang. Today, however, there is no evidence of antimatter galaxies, and experimentally produced antimatter is seen for only short periods before it annihilates in a collision with normal matter. Scientists want to measure the properties of antiatoms in order to determine whether their electromagnetic and gravitational interactions are the same as those of normal matter. One goal is to check whether antiatoms abide by CPT (charge-parity-time) symmetry, as do normal atoms. CPT symmetry means that a particle would behave the same way in a mirror universe if it had the opposite charge and moved backward in time.
"Any hint of CPT symmetry breaking would require a serious rethink of our understanding of nature," said Danish scientist Jeffrey Hangst, a spokesperson for the ALPHA team. "But half of the universe has gone missing, so some kind of rethink is apparently on the agenda." Importantly, the ALPHA team not only made and stored the long-lived antihydrogen atoms, they were also able to measure their energy distribution. "It may not sound exciting, but it's the first experiment done on trapped antihydrogen atoms," team member Jonathan Wurtele of Berkeley Labs explained. "This summer we're planning more experiments, with microwaves. Hopefully we will measure microwave-induced changes of the atomic state of the anti-atoms."
Scientists at Chalmers University of Technology have developed a new way to study nanoparticles one at a time, and have discovered that individual particles that may seem identical in fact can have very different properties. The results, which may prove to be important when developing new materials or applications such as hydrogen sensors for fuel cell cars, will be published in Nature Materials. "We were able to show that you gain deeper insights into the physics of how nanomaterials interact with molecules in their environment by looking at the individual nanoparticle as opposed to looking at many of them at the same time, which is what is usually done," says Associate Professor Christoph Langhammer, who led the project. By applying a new experimental approach called plasmonic nanospectroscopy, the group studied hydrogen absorption into single palladium nanoparticles. They found that particles with exactly the same shape and size may exhibit differences as great as 40 millibars in the pressure at which hydrogen is absorbed. The development of sensors that can detect hydrogen leaks in fuel cell powered cars is one example of where this new understanding could become valuable in the future. "One main challenge when working on hydrogen sensors is to design materials whose response to hydrogen is as linear and reversible as possible. In that way, the gained fundamental understanding of the reasons underlying the differences between seemingly identical individual particles and how this makes the response irreversible in a certain hydrogen concentration range can be helpful," says Christoph Langhammer. Others have looked at single nanoparticles one at a time, but the new approach introduced by the Chalmers team uses visible light with low intensity to study the particles. This means that the method is non-invasive and does not disturb the system it is investigating by, for example, heating it up. 
"When studying individual nanoparticles you have to send some kind of probe to ask the particle 'what are you doing?'. This usually means focusing a beam of high-energy electrons or photons or a mechanical probe onto a very tiny volume. You then quickly get very high energy densities, which might perturb the process you want to look at. This effect is minimized in our new approach, which is also compatible with ambient conditions, meaning that we can study nanoparticles one at a time in as close to a realistic environment as possible", says Christoph Langhammer. Even though they have now reached the level where their results are ready to be published, Christoph Langhammer believes they have just scratched the surface of what their discovery and developed experimental methodology will lead to in relation to further research. He hopes that they have helped to establish a new experimental paradigm, where looking at nanoparticles individually will become standard in the scientific world. "It is not good enough to look at, and thus obtain an average of, hundreds or millions of particles if you want to understand the details of how nanoparticles behave in different environments and applications. You have to look at individual ones, and we have found a new way to do that." "My own long-term vision is to apply our method to more complex processes and materials, and to push the limits in terms of how small nanoparticles can be for us to be able to measure them. Hopefully, along the way, we will gain even deeper insights into the fascinating world of nanomaterials." Johanna Wilde | EurekAlert! Metal too 'gummy' to cut? Draw on it with a Sharpie or glue stick, science says 19.07.2018 | Purdue University Machine-learning predicted a superhard and high-energy-density tungsten nitride 18.07.2018 | Science China Press For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. 
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 19.07.2018 | Earth Sciences 19.07.2018 | Power and Electrical Engineering 19.07.2018 | Materials Sciences
Well-defined fronts develop at the leading edge of the Columbia River (USA) plume. Convergent flow at these frontal boundaries may concentrate zooplanktonic organisms, which may in turn increase local prey availability to planktivorous fishes. In May 2001 and 2002, we compared the density, biomass and community structure of planktonic and neustonic zooplankton among plume fronts, low-salinity plume waters, and within the more saline, coastal marine waters. Fronts were characterized by distinct color discontinuities and high wave energy and were usually accompanied by foam and flotsam. The surface manifestation of the fronts was narrow and formed a thin lens of warm, low-salinity water overlying colder and more saline shelf waters. Overall, neither zooplankton nor neuston densities were higher in frontal regions. However, some zooplankton taxa were more abundant at fronts, and plankton biomass was 4 to 47 times higher in the frontal regions than in the neighboring plume and more oceanic shelf waters. The zooplankton community in the front habitat was distinct, particularly in the near-surface neuston, and comprised more surface-oriented organisms compared to the adjacent ocean and plume waters. We conclude that convergence zones at frontal regions at the leading edge of the Columbia River plume concentrate organisms that, either through active swimming or positive buoyancy, are maintained near the surface. Time scales of these fronts are much shorter than generation times in these organisms and therefore we believe that the observed changes in biomass and community composition in the front habitat are due to physical concentrating mechanisms and not to in situ growth. Increased zooplankton biomass at plume fronts may provide a unique and valuable food resource for planktivorous fishes, including juvenile salmonids as they transition from freshwater to the ocean environment. 
Illustration showing the path of the small asteroid 2013 LR6, which safely passed within 65,000 miles of Earth on Friday. By Irene Klotz, Reuters CAPE CANAVERAL, Florida (Reuters) - An asteroid the size of a small truck zoomed past Earth four times closer than the moon on Saturday, the latest in a parade of visiting celestial objects that has raised awareness of potentially hazardous impacts on the planet. NASA said Asteroid 2013 LR6 was discovered about a day before its closest approach to Earth, which occurred at 12:42 a.m. EDT (0442 GMT on Saturday) about 65,000 miles over the Southern Ocean, south of Tasmania, Australia. The 30-foot-wide (10-meter-wide) asteroid posed no threat. A week ago, the comparatively huge 1.7-mile-wide (2.7-km-wide) asteroid QE2, complete with its own moon in tow, passed 3.6 million miles (5.8 million km) from Earth. While on February 15, a small asteroid exploded in the atmosphere over Chelyabinsk, Russia, leaving more than 1,500 people injured by flying glass and debris. That same day, an unrelated asteroid passed just 17,200 miles from Earth, closer than the networks of communication satellites that ring the planet. "There is theoretically a collision possible between asteroids and planet Earth," astronomer Gianluca Masi, with the Virtual Telescope project, said during a Google+ webcast that showed live images of the approaching asteroid. NASA says it has found about 95 percent of the large asteroids, those with diameters 0.65 miles or larger, with orbits that take them relatively close to Earth. An object of that size hit the planet about 65 million years ago in what is now Mexico's Yucatan peninsula, triggering a global climate change that is believed to be responsible for the demise of the dinosaurs and many other forms of life on Earth. The U.S. space agency and other research organizations, as well as private companies, are working on tracking smaller objects that fly near Earth.
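The distance comparisons quoted in the article are easy to sanity-check with a few lines of arithmetic. A minimal sketch, where the flyby figures come from the article and the mean Earth-Moon distance (~238,900 miles) is a standard value not stated in the text:

```python
# Sanity-check the distance comparisons quoted in the article.
LUNAR_DISTANCE_MI = 238_900   # mean Earth-Moon distance (standard value)
FLYBY_2013_LR6_MI = 65_000    # closest approach of 2013 LR6 (from the article)
FLYBY_QE2_MI = 3_600_000      # closest approach of asteroid QE2 (from the article)

# "four times closer than the moon" -- the exact ratio is about 3.7x
ratio = LUNAR_DISTANCE_MI / FLYBY_2013_LR6_MI
print(f"2013 LR6 passed {ratio:.1f}x closer than the Moon")

# QE2, by contrast, stayed far outside the Moon's orbit
qe2_ratio = FLYBY_QE2_MI / LUNAR_DISTANCE_MI
print(f"QE2 passed {qe2_ratio:.1f}x the lunar distance away")
```

The "four times closer" in the piece is thus a rounded-up figure; the precise value is about 3.7.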
Terrestrial Ecosystems and Their Special Features Research on ecosystems began in the field of limnology. Initially, therefore, only aquatic ecosystems were examined. The cycling of material in these aquatic systems is thought of as being generally applicable. It is, however, not possible simply to extend this to terrestrial systems, which are far more complex in their structure. In terrestrial systems there are two parallel cycles, a short cycle and a long one; in the short cycle, which is quantitatively the more important, consumers play a relatively minor role. Keywords: Terrestrial Ecosystem, Dead Wood, Litter Layer, Tree Layer, Forest Steppe
Imagine a new and improved biorefinery, one that produces advanced biofuels as environmentally sustainable as they are economically viable. Behind the successful conversion of biomass to a better biofuel or a new green chemical, there is a carefully chosen solvent. The right solvent not only dissolves biomass but also drives the efficiency of the entire conversion process, resulting in higher yields and a lower bottom line. AUSTIN, Texas — Building on the success of 10 years of investigation into the production of renewable fuels from plants, the Great Lakes Bioenergy Research Center (GLBRC), led by the University of Wisconsin–Madison, recently embarked on a new mission: to develop sustainable alternatives to transportation fuels and products currently derived from petroleum. Advances in biofuels research tend to involve reduced costs, greater reagent stability, more diverse and valuable end products, or faster reactions, which often increase product yields as well. In an article published last summer in Science, researchers at the Great Lakes Bioenergy Research Center (GLBRC) reported on ten years of work assessing the potential climate benefit of producing dedicated bioenergy crops such as switchgrass, poplar, or restored prairie. Assistant professor of biochemistry Vatsan Raman was recently named to a list of 44 young researchers featured in Biochemistry’s “Future of Biochemistry” special issue. Plant biomass contains considerable calorific value but most of it makes up robust cell walls, an unappetising evolutionary advantage that helped grasses to survive foragers and prosper for more than 60 million years. The trouble is that this robustness still makes them less digestible in the rumen of cows and sheep and difficult to process in bioenergy refineries for ethanol fuel. Lignin, a substance that makes up roughly a quarter of plant biomass, reinforces plant structures and offers a defense system against microbes. 
But this complex substance is also notorious for being difficult to degrade, creating challenges for biofuels producers and paper manufacturers alike. Yeung and colleagues at Rice, UCLA, Michigan State University and the University of New Mexico counted rare molecules in the atmosphere that contain only heavy isotopes of nitrogen and discovered a planetary-scale tug-of-war between life, the deep Earth and the upper atmosphere that is expressed in atmospheric nitrogen. Five Great Lakes Bioenergy Research Center (GLBRC) researchers have been named to Clarivate Analytics’ 2017 list of “Highly Cited Researchers”.
Illustration of the relative abilities of three different types of ionizing radiation to penetrate solid matter. Note caveats in the text about this simplified diagram. The international symbol for types and levels of radiation that are unsafe for unshielded humans. Radiation in general exists throughout nature, such as in light and sound. In physics, radiation is the emission or transmission of energy in the form of waves or particles through space or through a material medium. Radiation is often categorized as either ionizing or non-ionizing depending on the energy of the radiated particles. Ionizing radiation carries more than 10 eV, which is enough to ionize atoms and molecules, and break chemical bonds. This is an important distinction due to the large difference in harmfulness to living organisms. A common source of ionizing radiation is radioactive materials that emit α, β, or γ radiation, consisting of helium nuclei, electrons or positrons, and photons, respectively. Other sources include X-rays from medical radiography examinations and muons, mesons, positrons, neutrons and other particles that constitute the secondary cosmic rays that are produced after primary cosmic rays interact with Earth’s atmosphere. Gamma rays, X-rays and the higher energy range of ultraviolet light constitute the ionizing part of the electromagnetic spectrum; the lower-energy remainder of the spectrum is non-ionizing. Non-ionizing radiation only damages cells if the intensity is high enough to cause excessive heating. Ultraviolet radiation has some features of both ionizing and non-ionizing radiation. While the part of the ultraviolet spectrum that penetrates the Earth’s atmosphere is non-ionizing, this radiation does far more damage to many molecules in biological systems than can be accounted for by heating effects, sunburn being a well-known example. 
These properties derive from ultraviolet’s power to alter chemical bonds, even without having quite enough energy to ionize atoms. This aspect leads to a system of measurements and physical units that are applicable to all types of radiation. The intensity of all types of radiation from a point source follows an inverse-square law in relation to the distance from its source. Like any ideal law, the inverse-square law approximates a measured radiation intensity to the extent that the source approximates a geometric point. Some kinds of ionising radiation can be detected in a cloud chamber.
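The two quantitative claims above, the inverse-square law and the ~10 eV ionization threshold, can be made concrete in a short sketch. The physical constants used are standard values, not taken from the article:

```python
import math

# Inverse-square law for an ideal point source: intensity falls as 1/r^2.
def intensity(power_w: float, r_m: float) -> float:
    """Radiant intensity (W/m^2) at distance r_m from a point source of power_w watts."""
    return power_w / (4 * math.pi * r_m ** 2)

# Doubling the distance quarters the intensity.
i1 = intensity(100.0, 1.0)
i2 = intensity(100.0, 2.0)
assert math.isclose(i1 / i2, 4.0)

# The ~10 eV ionization threshold corresponds to a photon wavelength of
# lambda = h*c / E, which lands around 124 nm -- i.e. in the far ultraviolet,
# consistent with only the high-energy end of UV being ionizing.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt
wavelength_nm = H * C / (10 * EV) * 1e9
print(f"{wavelength_nm:.0f} nm")  # ~124 nm
```

This also illustrates why UV straddles the ionizing/non-ionizing boundary: the atmospheric-transmitted part of the UV spectrum lies at longer wavelengths (lower energies) than this threshold.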
"Recent Results in the Mathematics of Political Power" September 16, 1997 Bailey Hall 201 Refreshments at 3:45 Math Department Common Room In democracies, elected representatives typically vote "yes" or "no" on proposed legislation, constitutional changes, etc. The voting systems range from simple majority rule, to weighted versions in which legislators from more populous districts cast more votes, to complex bicameral systems with presidential vetoes and veto overrides, such as the US federal system. Designers of such a system must be able to test whether the actual difference in influence among legislators came out the way they intended. The traditional approach is to use a mathematical "voting power index," but the known indices differ sharply from each other. Can the issue be resolved? Some recent results indicate promising lines of research for the future.
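The abstract does not say which power index the talk covers; as an illustration of the idea, here is a brute-force sketch of the Banzhaf index, one of the standard voting power indices. A voter's power is the fraction of all "swings" (cases where their defection flips a winning coalition to losing) that belong to them:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf power index for a weighted majority game.

    weights: each voter's vote weight; quota: total votes needed to pass.
    A voter is 'critical' in a winning coalition if removing them makes it lose.
    """
    n = len(weights)
    swings = [0] * n
    # Enumerate every coalition (feasible only for small n).
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    total_swings = sum(swings)
    return [s / total_swings for s in swings]

# Classic illustration: weights (4, 2, 1) with quota 4. The first voter is
# critical in every winning coalition; the others never are -- so weighted
# votes can be a very poor proxy for actual influence.
print(banzhaf([4, 2, 1], 4))  # [1.0, 0.0, 0.0]
```

This shows concretely why designers need to "test whether the actual difference in influence came out the way they intended": here a 4-2-1 weighting gives one voter all the power.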
An artist’s conception shows Orbex’s Prime rocket lifting off. (Orbex Illustration) Lockheed Martin is in line to receive $31 million in grants from the UK Space Agency to establish Britain’s first spaceport on Scotland’s north coast, and to develop a new made-in-Britain system for deploying small satellites in orbit. The British government announced the grants today, only hours after selecting the Sutherland site and pledging funds to support the rise of horizontal-launch spaceports in other British locales. In addition to Lockheed Martin’s grants, another $7 million will be awarded to London-based Orbex to support the development of its Prime rocket for launch from the Sutherland spaceport. The Prime rocket is designed to be fueled by bio-propane and will deliver payloads of up to 330 pounds to low Earth orbit. Today’s grants were announced in conjunction with this week’s Farnborough International Airshow, which is taking place southwest of London. It’s not surprising that Lockheed Martin will benefit from the British grants. The U.S.-based company is a prominent member of the consortium supporting Sutherland’s bid. Lockheed Martin has been tasked not only with establishing vertical-launch operations at the Sutherland spaceport, but also with developing a rocket-powered upper stage that’s capable of deploying up to six small satellites in separate orbits. The work on the upper stage, known as an orbital maneuvering vehicle, will be done at a facility in the English city of Reading. “Lockheed Martin will apply its 50 years of experience in small satellite engineering, launch services and ground operations, as well as a network of U.K.-based and international teammates, to deliver new technologies, new capabilities and new economic opportunities,” Patrick Wood, Lockheed Martin’s U.K. country executive for space, said in a statement. British and U.S. governmental agencies have been working on an agreement that would establish a legal and technical framework for U.S. space launch vehicles to operate from launch sites in Britain. 
“Attracting U.S. operators to the U.K. will enhance our capabilities and boost the whole market,” the UK Space Agency said in today’s statement. British companies already produce nearly half of the world’s small satellites and a quarter of the world’s telecommunications satellites. The British government says the commercial space sector could contribute as much as $5 billion to the country’s economy over the next decade. In its earlier announcement, the UK Space Agency said it would award £2.5 million ($3.3 million) to a consortium known as Highlands and Islands Enterprise to help get the Sutherland spaceport into operation in the early 2020s. Another £2 million would be made available for the development of horizontal-launch spaceports in England’s Cornwall region, at Glasgow Prestwick on Scotland’s west coast, and in Snowdonia in Wales. An artist’s impression shows the spaceport at Scotland’s Sutherland site. (Courtesy of Perfect Circle PV) The British government has selected a spot in Sutherland, on the A’Mhoine Peninsula in the Scottish Highlands, as the site of the country’s first spaceport. In a news release timed to coincide with the opening of this week’s Farnborough International Airshow, the government said it would provide initial funding of £2.5 million ($3.3 million) to Highlands and Islands Enterprise to develop the vertical-launch site in Sutherland, with an aim of seeing the first liftoff in the early 2020s. Sutherland was chosen for the United Kingdom’s first vertical launch site after an assessment of several proposed spaceport sites in Scotland as well as Wales and England’s Cornwall region. 
The UK Space Agency determined that the spot on Scotland’s north coast was the best place to target highly sought-after satellite orbits with vertically launched rockets. Three other proposed horizontal-launch sites will be eligible for grants from a newly established £2 million ($2.7 million) fund to promote suborbital space flights, satellite launches and spaceplane operations, the government said. Those sites are Newquay in Cornwall, Glasgow Prestwick in Scotland, and Snowdonia in Wales. “The space sector is an important player in the U.K.’s economy and our recent Space Industry Act has unlocked the potential for hundreds of new jobs and billions of revenue for British business across the country,” Britain’s secretary of state for transport, Chris Grayling, said in today’s news release. British officials estimate that the commercial space sector will be worth a potential $5 billion to the country’s economy over the next decade. The United Kingdom already has a thriving satellite industry, fueled in part by potential spaceport customers such as San Francisco-based Spire Global. “In Spire, Scotland already sports Europe’s most advanced and prolific satellite manufacturing capability, and with a spaceport right next door, enabling clockwork-like launches, we can finally get our space sector supply chain to be truly integrated,” Spire CEO Peter Platzer said. The government said additional grants from its £50 million ($66 million) UK Spaceflight Program fund would be announced during the Farnborough Airshow. Sutherland isn’t likely to be Europe’s only spaceport, and it may not be its first: Last week, another European launch site was announced, with operations beginning as early as 2020. 
What do you see in this photograph? A Pac-Man with a tail gobbling up stars in the galaxy? NASA published this image of CG4, a ruptured cometary globule, which, in the Rorschach test of observing the heavens, it described as a "claw." More from NASA: The "claw" of this odd looking "creature" in the above photo is a gas cloud known as a cometary globule. This globule, however, has ruptured. Cometary globules are typically characterized by dusty heads and elongated tails. These features cause cometary globules to have visual similarities to comets, but in reality they are very much different. Globules are frequently the birthplaces of stars, and many show very young stars in their heads. The reason for the rupture in the head of this object is not completely known. The galaxy to the left of the globule is huge, very far in the distance, and only placed near CG4 by chance superposition. Image credit: NASA
What is meant by the fourth dimension? When we look at our world, we describe objects as three dimensional. We can see that an object has a certain length, width and height. The same holds for a coordinate system: we can describe any point in space with three numbers (the simplest being the x, y and z coordinates). However, things in the world are not static. Objects move from one place to another, and materials can change phase (water to ice, for example). Hence when we describe a physical process, it is not enough to describe its three-dimensional aspect; we must also describe the *time ...
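The idea above, that a physical process needs a time coordinate in addition to the three spatial ones, can be sketched with a minimal four-dimensional "event" type. The phase-change scenario here (water freezing at a fixed location) is a hypothetical illustration in the spirit of the text's example:

```python
from dataclasses import dataclass

# A "point" in four dimensions: three spatial coordinates plus time.
@dataclass(frozen=True)
class Event:
    x: float  # metres
    y: float
    z: float
    t: float  # seconds

# Hypothetical illustration: water freezing into ice. The same (x, y, z)
# location holds liquid water at one time coordinate and ice at another,
# so the full description of the process needs all four numbers.
before = Event(0.0, 0.0, 0.0, t=0.0)     # water
after = Event(0.0, 0.0, 0.0, t=3600.0)   # ice, one hour later

# Spatially identical, yet distinct points of the process:
assert (before.x, before.y, before.z) == (after.x, after.y, after.z)
assert before != after  # they differ only in the fourth dimension
```

Three numbers alone could not distinguish these two states; the fourth coordinate is what separates them.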
Research published today (22 February) in the journal Nature shows that western scrub-jays are able to plan for future food shortages by caching food. The birds are shown to have learned from their previous experiences of food scarcity, storing food for future use in places where they anticipate future slim pickings. The researchers at the University of Cambridge believe this is the first known example of future planning in animals. On alternate mornings eight jays were given breakfast in one compartment or refused breakfast in another, before being allowed free access to food the rest of the day. On the sixth day of the experiment they were suddenly given whole pine nuts suitable for caching in the evening. The researchers observed that the jays consistently cached most pine nuts in the tray in the 'no breakfast' compartment, anticipating that they would not be fed the following morning in that compartment. Another experiment showed that the birds were able to plan ahead to provide themselves with a more varied diet. The jays were consistently given a breakfast of peanuts in one compartment and dog kibble in the other. When the birds in the evening were offered both foods, they preferred to cache peanuts in the kibble compartment and vice versa - to make sure they had an interesting breakfast the following morning. "The jays spontaneously plan for tomorrow, without being motivated by their current needs", said Nicola Clayton, Professor of Comparative Cognition at the University of Cambridge. "People have assumed that animals only have a concept of the present, but these findings show that jays also have some understanding of future events and can plan for future eventualities. The western scrub-jays demonstrate behaviour that shows they are concerned both about guarding against food shortages and maximising the variety of their diets. 
It suggests they have advanced and complex thought processes as they have a sophisticated concept of past, present and future and factor this into their planning." Previous research by Clayton's team has shown that scrub-jays have a concept of the past. They remember what they have cached where and how long ago, and they also keep track of which particular bird was watching when they cached so that they can protect their caches accordingly from being stolen by observant thieves. The research, funded by the Biotechnology and Biological Sciences Research Council (BBSRC), marks a step forward in our understanding of animal psychology and cognition. This project was also supported with a Medical Research Council (MRC) Cooperative Grant. Matt Goode | alfa Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation
A recent study in Nature (1) suggested that terrestrial plants may be a global source of the potent greenhouse gas methane, making plants substantial contributors to the annual global methane budget. This controversial finding and the resulting commotion triggered a consortium (2) of Dutch scientists to re-examine this in an independent study. Reporting in New Phytologist, Tom Dueck and colleagues present their results and conclude that methane emissions from plants are negligible and do not contribute to global climate change. The consortium brings together a unique combination of expertise and facilities enabling the design and execution of a novel experiment. Plants were grown in a facility containing atmospheric carbon dioxide almost exclusively with a heavy form of carbon (13C). This makes the carbon released from the plants relatively easy to detect. Thus, if plants are able to emit methane, it will contain the heavy carbon isotope and can be detected against the background of lighter carbon molecules in the air. Six plant species were grown in a 13C-carbon dioxide atmosphere, saturating the plants with heavy carbon. 13C-Methane emission was measured under controlled, but natural conditions with a photo-acoustic laser technique. This technique is so sensitive that the scientists are able to measure the carbon dioxide in the breath of small insects like ants. Even with this state-of-the-art technique, the measured emission rates were so close to the detection limit that they did not statistically differ from zero. To our knowledge this is the first independent test which has been published since the controversy last year. Conscious of the fact that a small amount of plant material might only result in small amounts of methane, the researchers sampled the ‘heavy’ methane in the air in which a large amount of plants were growing. Again, the measured methane emissions were negligible. 
Thus these plant specialists conclude that there is no reason to reassess the mitigation potential of plants. The researchers stress that questions still remain and that the gap in the global methane budget needs to be properly addressed. (1) ’Methane emissions from terrestrial plants under aerobic conditions’ by Keppler F, Hamilton JTG, Braß M, Rockmann T. Nature 439: 187–191 (2) The Dutch consortium includes scientists from Plant Research International, IsoLife and Plant Dynamics in Wageningen, Utrecht University, and the Radboud University in Nijmegen. Lucy Mansfield | alfa Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V.
San Francisco: Billionaire Internet investor Yuri Milner announced another $100 million initiative on Tuesday to better understand the cosmos, this time by deploying thousands of tiny spacecraft to travel to our nearest neighboring star system and send back pictures. If successful, scientists could determine if Alpha Centauri, a star system about 25 trillion miles away, contains an Earth-like planet capable of sustaining life. The catch: it could take years to develop the project, dubbed Breakthrough Starshot, and there is no guarantee it will work.

Tuesday's announcement, made with cosmologist Stephen Hawking, comes less than a year after the announcement of Breakthrough Listen. That decade-long, $100 million project, also backed by Milner, monitors radio signals for signs of intelligent life across the universe. Breakthrough Starshot involves deploying small light-propelled vehicles to carry equipment like cameras and communication gear. Scientists hope the vehicles, known as nanocraft, will eventually fly at 20 percent of the speed of light, more than a thousand times faster than today's spacecraft.

"The thing would look like the chip from your cell phone with this very thin gauzy light sail," said Pete Worden, the former director of NASA's Ames Research Center, who is leading the project. "It would be something like 10, 12 feet across." He envisions sending a larger conventional spacecraft containing thousands of nanocraft into orbit, and then launching the nanocraft one by one, he said in an interview.

The idea has precedents with mixed results. Two years ago, Cornell University's KickSat fizzled after the craft carrying 104 micro-satellites into space failed to release them. The plan was to let the tiny satellites orbit and collect data for a few weeks. Worden acknowledges challenges, including the nanocraft surviving impact on launch.
They would then endure 20 years of travel through the punishing environment of interstellar space, with obstacles such as dust collisions. “The problems remaining to be solved - any one of them are showstoppers,” Worden said. Governments likely would not take on the research due to its speculative nature, he said, yet the technology is promising enough to merit pursuing. If the nanocraft reach the star system and succeed in taking photographs, it would take about another four years to transmit them back to Earth. A onetime physics PhD student in Moscow who dropped out to move to the United States in 1990, Milner is one of a handful of technology tycoons devoting time and money to space exploration. He is known for savvy investments, including in social network Facebook Inc and Chinese smartphone company Xiaomi. (Reporting by Sarah McBride; Editing by Tom Brown)
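The quoted figures are easy to sanity-check: at 20 percent of light speed, a craft covering the article's "25 trillion miles" (about 4.3 light-years) needs roughly two decades, and its radio signal, returning at light speed, needs just over four years. A quick back-of-envelope sketch (the miles-per-light-year constant is a standard value, not from the article):

```python
# Back-of-envelope check of the Breakthrough Starshot figures quoted above.
LIGHT_YEAR_MILES = 5.879e12              # miles light travels in one year
distance_ly = 25e12 / LIGHT_YEAR_MILES   # ~4.25 ly from "25 trillion miles"

cruise_speed = 0.20                      # fraction of the speed of light
travel_years = distance_ly / cruise_speed  # time for the nanocraft to arrive
signal_years = distance_ly               # photos return at light speed

print(f"travel: {travel_years:.1f} yr, signal return: {signal_years:.1f} yr")
```

This reproduces the article's "20 years of travel" and "about another four years to transmit" within rounding.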
The Gravimetry Problem

A classical problem in geophysics and physical geodesy is gravimetry, i.e., the determination of the Earth's mass density distribution from measurements of the gravitational potential or related quantities.

Keywords: Harmonic Function, Gravitational Potential, Scaling Function, Mother Wavelet, Regular Surface
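The abstract defines gravimetry as an inverse problem: recovering mass density from the potential. For orientation, the corresponding forward problem is Newton's potential; the sketch below evaluates it for idealized point masses (a hedged illustration, not code from this chapter, and the masses/positions are invented for the demo):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def potential(x, masses, positions):
    """Forward problem: Newtonian potential V(x) = G * sum_i m_i / |x - y_i|.
    Gravimetry is the inverse problem: recovering the mass distribution
    (here idealized as point masses) from measurements of V."""
    total = 0.0
    for m, y in zip(masses, positions):
        r = math.dist(x, y)  # Euclidean distance |x - y|
        total += m / r
    return G * total

# Toy example: a single 1 kg point mass observed from 1 m away.
V = potential((1.0, 0.0, 0.0), masses=[1.0], positions=[(0.0, 0.0, 0.0)])
```

Because many different density distributions produce the same external potential, the inverse problem is ill-posed, which is why the chapter brings in regularizing tools such as scaling functions and wavelets.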
A global study led by Professor Robert Diaz of the Virginia Institute of Marine Science, College of William and Mary, shows that the number of "dead zones"—areas of seafloor with too little oxygen for most marine life—has increased by a third between 1995 and 2007. Diaz and collaborator Rutger Rosenberg of the University of Gothenburg in Sweden say that dead zones are now "the key stressor on marine ecosystems" and "rank with over-fishing, habitat loss, and harmful algal blooms as global environmental problems." The study, which appears in the August 15 issue of the journal Science, tallies 405 dead zones in coastal waters worldwide, affecting an area of 95,000 square miles, about the size of New Zealand. The largest dead zone in the U.S., at the mouth of the Mississippi, covers more than 8,500 square miles, roughly the size of New Jersey. Diaz began studying dead zones in the mid-1980s after seeing their effect on bottom life in a tributary of Chesapeake Bay near Baltimore. His first review of dead zones in 1995 counted 305 worldwide. That was up from his count of 162 in the 1980s, 87 in the 1970s, and 49 in the 1960s. He first found scientific reports of dead zones in the 1910s, when there were 4. Worldwide, the number of dead zones has approximately doubled each decade since the 1960s. Diaz and Rosenberg write "There is no other variable of such ecological importance to coastal marine ecosystems that has changed so drastically over such a short time as dissolved oxygen." Dead zones occur when excess nutrients, primarily nitrogen and phosphorus, enter coastal waters and help fertilize blooms of algae. When these microscopic plants die and sink to the bottom, they provide a rich food source for bacteria, which in the act of decomposition consume dissolved oxygen from surrounding waters. Major nutrient sources include fertilizers and the burning of fossil fuels. 
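The decade counts quoted above can be checked against the claim that dead-zone numbers "approximately doubled each decade since the 1960s" (counts taken from the article):

```python
# Reported dead-zone counts by survey period (from the article).
counts = {1910: 4, 1960: 49, 1970: 87, 1980: 162, 1995: 305, 2007: 405}

# Decade-over-decade growth ratios since the 1960s.
ratios = [counts[1970] / counts[1960],
          counts[1980] / counts[1970],
          counts[1995] / counts[1980]]
print([round(r, 2) for r in ratios])  # each between 1.7 and 1.9: roughly a doubling
```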
Geologic evidence shows that dead zones were not "a naturally recurring event" in Chesapeake Bay or most other estuarine ecosystems, says Diaz. "Dead zones were once rare. Now they're commonplace. There are more of them in more places." The first dead zone in Chesapeake Bay was reported in the 1930s.

Scientists refer to water with too little oxygen for fish and other active organisms as "hypoxic." Diaz says that many ecosystems experience a progression in which periodic hypoxic events become seasonal and then, if nutrient inputs continue to increase, persistent. Earth's largest dead zone, in the Baltic Sea, experiences hypoxia year-round. Chesapeake Bay experiences seasonal, summertime hypoxia through much of its main channel, occupying about 40% of its area and up to 5% of its volume.

Diaz and Rosenberg note that hypoxia tends to be overlooked until it starts to affect organisms that people eat. A possible indicator of hypoxia's adverse effects on an economically important finfish species in Chesapeake Bay is the purported link between oxygen-poor bottom waters and a chronic outbreak of a bacterial disease among striped bass. Several Bay researchers, including VIMS fish pathologist Wolfgang Vogelbein, hypothesize that the prevalence of mycobacteriosis among Bay stripers (>75%) is due to the stress they encounter when development of the Bay's summertime dead zone forces them from the cooler bottom waters they prefer into warmer waters near the Bay surface.

Diaz and Rosenberg also point out a more fundamental effect of hypoxia: the loss of energy from the Bay's food chain. By precluding or stunting the growth of bottom-dwellers such as clams and worms, hypoxia robs their predators of an important source of nutrition. Diaz and VIMS colleague Linda Schaffner estimate that Chesapeake Bay now loses about 10,000 metric tons of carbon to hypoxia each year, 5% of the Bay's total production of food energy.
The Baltic Sea has lost 30% of its food energy—a condition that has contributed to a significant decline in its fisheries yields. Diaz and Rosenberg say the key to reducing dead zones is "to keep fertilizers on the land and out of the sea." Diaz says that goal is shared by farmers concerned with the high cost of buying and applying nitrogen to their crops. "They certainly don't want to see their dollars flowing off their fields into the Bay," says Diaz. "Scientists and farmers need to continue working together to develop farming methods that minimize the transfer of nutrients from land to sea."

Dr. Bob Diaz | EurekAlert!
Isotopes of chlorine

Chlorine (17Cl) has 25 isotopes with mass numbers ranging from 28Cl to 52Cl and 2 isomers (34mCl and 38mCl). There are two principal stable isotopes, 35Cl (75.78%) and 37Cl (24.22%), giving chlorine a standard atomic weight of 35.45. The longest-lived radioactive isotope is 36Cl, which has a half-life of 301,000 years. All other isotopes have half-lives under 1 hour, many less than one second. The shortest-lived are 29Cl and 30Cl, with half-lives less than 20 and 30 nanoseconds, respectively; the half-life of 28Cl is unknown.

Trace amounts of radioactive 36Cl exist in the environment, in a ratio of about 7×10−13 to 1 with stable isotopes. 36Cl is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the subsurface environment, 36Cl is generated primarily as a result of neutron capture by 35Cl or muon capture by 40Ca. 36Cl decays to either 36S (1.9%) or to 36Ar (98.1%), with a combined half-life of 308,000 years. The half-life of this hydrophilic, nonreactive isotope makes it suitable for geologic dating in the range of 60,000 to 1 million years. Additionally, large amounts of 36Cl were produced by irradiation of seawater during atmospheric detonations of nuclear weapons between 1952 and 1958. The residence time of 36Cl in the atmosphere is about 1 week. Thus, as an event marker of 1950s water in soil and ground water, 36Cl is also useful for dating waters less than 50 years before the present. 36Cl has also seen use in other areas of the geological sciences.
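The quoted standard atomic weight of 35.45 follows directly from the two stable-isotope abundances above; it is the abundance-weighted mean of the isotopic masses. A quick check (the isotopic masses are standard table values, not given in the text):

```python
# Standard atomic weight of chlorine as the abundance-weighted mean
# of its two stable isotopes. Masses in u from standard tables;
# abundances from the text (75.78% and 24.22%).
mass_35, mass_37 = 34.96885, 36.96590
abund_35, abund_37 = 0.7578, 0.2422

atomic_weight = mass_35 * abund_35 + mass_37 * abund_37
print(round(atomic_weight, 2))  # 35.45, matching the quoted value
```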
List of isotopes

| Nuclide | Z | N | Isotopic mass (u) | Half-life | Decay mode (branching) | Daughter | Spin |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 31Cl | 17 | 14 | 30.99241(5) | 150(25) ms | β+ (99.3%); β+, p (0.7%) | 31S; 30P | 3/2+ |
| 32Cl | 17 | 15 | 31.985690(7) | 298(1) ms | β+ (99.92%); β+, α (0.054%); β+, p (0.026%) | 32S; 28Si; 31P | 1+ |
| 34mCl | 17 | 17 | 146.36(3) keV (excitation energy) | 32.00(4) min | β+ (55.4%) | 34S | 3+ |
| 36Cl[n 3] | 17 | 19 | 35.96830698(8) | 3.01(2)×10^5 y | β− (98.1%) | 36Ar | 2+ |
| 38mCl | 17 | 21 | 671.361(8) keV (excitation energy) | 715(3) ms | IT | 38Cl | 5− |
| 43Cl | 17 | 26 | 42.97405(17) | 3.07(7) s | β− (>99.9%); β−, n (<0.1%) | 43Ar; 42Ar | 3/2+# |
| 44Cl | 17 | 27 | 43.97828(12) | 0.56(11) s | β− (92%); β−, n (8%) | 44Ar; 43Ar | |
| 45Cl | 17 | 28 | 44.98029(13) | 400(40) ms | β− (76%); β−, n (24%) | 45Ar; 44Ar | 3/2+# |
| 46Cl | 17 | 29 | 45.98421(77) | 232(2) ms | β−, n (60%) | 45Ar | |
| 47Cl | 17 | 30 | 46.98871(64)# | 101(6) ms | β− (97%); β−, n (3%) | 47Ar; 46Ar | 3/2+# |
| 48Cl | 17 | 31 | 47.99495(75)# | 100# ms [>200 ns] | β− | 48Ar | |
| 49Cl | 17 | 32 | 49.00032(86)# | 50# ms [>200 ns] | β− | 49Ar | 3/2+# |
| 51Cl | 17 | 34 | 51.01449(107)# | 2# ms [>200 ns] | β− | 51Ar | 3/2+# |

Natural abundance of 36Cl: trace[n 4], approx. 7×10−13.

Notes:
- Geologically exceptional samples are known in which the isotopic composition lies outside the reported range. The uncertainty in the atomic mass may exceed the stated value for such specimens.
- Commercially available materials may have been subjected to an undisclosed or inadvertent isotopic fractionation. Substantial deviations from the given mass and composition can occur.
- Values marked # are not purely derived from experimental data, but at least partly from systematic trends. Spins with weak assignment arguments are enclosed in parentheses.
- Uncertainties are given in concise form in parentheses after the corresponding last digits. Uncertainty values denote one standard deviation, except isotopic composition and standard atomic mass from IUPAC, which use expanded uncertainties.

Sources:
- Isotope masses from:
- Isotopic compositions and standard atomic masses from:
  - J. R. de Laeter; J. K. Böhlke; P. De Bièvre; H. Hidaka; H. S. Peiser; K. J. R. Rosman; P. D. P. Taylor (2003). "Atomic weights of the elements. Review 2000 (IUPAC Technical Report)". Pure and Applied Chemistry 75 (6): 683–800. doi:10.1351/pac200375060683.
  - M. E. Wieser (2006). "Atomic weights of the elements 2005 (IUPAC Technical Report)". Pure and Applied Chemistry 78 (11): 2051–2066. doi:10.1351/pac200678112051.
- Half-life, spin, and isomer data selected from:
  - G. Audi; A. H. Wapstra; C. Thibault; J. Blachot; O. Bersillon (2003). "The NUBASE evaluation of nuclear and decay properties". Nuclear Physics A 729: 3–128. Bibcode:2003NuPhA.729....3A. doi:10.1016/j.nuclphysa.2003.11.001.
  - National Nuclear Data Center. "NuDat 2.1 database". Brookhaven National Laboratory. Retrieved September 2005.
  - N. E. Holden (2004). "Table of the Isotopes". In D. R. Lide, CRC Handbook of Chemistry and Physics (85th ed.). CRC Press, Section 11. ISBN 978-0-8493-0485-9.
- Meija, J.; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure and Applied Chemistry 88 (3): 265–91. doi:10.1515/pac-2015-0305.
- "Universal Nuclide Chart". Nucleonica (registration required).
Over a 20-year period, Gregory Rasmussen, currently at Lady Margaret Hall Oxford, intensively studied every move of African wild dogs in Zimbabwe, to the extent of "living with packs" for periods of up to a month in order to work out how much energy they were spending eating, sleeping, and running. He came to the conclusion that "whilst to date we have seen poverty traps as being something intrinsically human, they are not!"

Nature's currency is energy, and in theory, keeping the cost of living low leaves more in the "piggy bank" for reproduction. However, staying in nature's fast lane isn't easy, and necessitates that evolution comes up with a "business plan" to bank energy (nature's surrogate for wealth!) to survive. In the face of bigger competitors like lions and hyenas, whose larger stomachs cater for irregular meals, and which maximize returns by having low foraging costs, the dogs evolved a unique plan. Now highly endangered, the African wild dog opted for extreme metabolic adaptations to running, thus ensuring a regular supply of food, and by forming packs, had many runners to reduce capture costs and many stomachs to maximize the returns.

This great strategy, however, has an Achilles heel, as packs of fewer than five are less effective hunters and thus have to undertake energetically expensive extra hunts to secure their prey. The results from this study highlighted a weakness in the business plan, for when the financial energetic annual accounts were done, the benefit of having fewer individuals to feed in smaller packs was outweighed by the greater costs of running. To chase their prey, wild dogs need to be lithe and athletic, a design that ensures their stomachs can't be too big, which in turn limits the amount they can gorge in a sitting: a physical limitation on their gluttony which biologists call "a morphological constraint."
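The pack-size argument can be made concrete with a toy energy budget. All numbers below are invented for illustration (they are not Rasmussen et al.'s measurements); the point is only the qualitative shape: below the five-dog threshold, extra hunts outweigh the savings of having fewer mouths to feed.

```python
# Illustrative toy model only -- all parameter values are made up,
# not taken from the study. It captures the qualitative claim:
# small packs need extra hunts, and the shared chase cost then
# outweighs the benefit of splitting the carcass fewer ways.

def net_energy_per_dog(pack_size, prey_energy=100.0, hunt_cost=60.0):
    """Daily net energy per dog (arbitrary units) for a pack of given size."""
    hunts_per_kill = 1 if pack_size >= 5 else 2   # small packs fail more often
    intake = prey_energy / pack_size              # carcass shared equally
    cost = hunts_per_kill * hunt_cost / pack_size # chase cost shared among runners
    return intake - cost

for n in (3, 4, 5, 6):
    print(n, round(net_energy_per_dog(n), 2))
```

With these toy parameters the budget flips sign at the five-dog threshold: smaller packs run at an energetic loss, the "poverty trap" described above.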
In the same way that size-zero women can struggle to have healthy, bouncing babies, this study highlighted an Achilles heel where energetic poverty translated into reproductive poverty, and a vicious circle whereby small packs have fewer pups, leading to even smaller packs, driving them into an extinction vortex. From a conservation standpoint, these results demonstrate how the evolutionary strength gained by sociality can be undermined by an Achilles heel that can push species into extinction.

Professor David Macdonald, Director of the Wildlife Conservation Research Unit, known as the WildCRU, which specializes in the science to underpin practical solutions to conservation problems, said "This study, unique in its detail, shows the power of energetic theory to enable us to not only understand the evolution of packing power, and facets that dictate the survival of this stunningly beautiful species, but better understand how to conserve other social species of which we are one."

"Achilles' Heel of Sociality Revealed by Energetic Poverty Trap in Cursorial Hunters," by Gregory S. A. Rasmussen, Markus Gusset, Franck Courchamp, and David W. Macdonald. American Naturalist (2008) 172:508. DOI: 10.1086/590965

Patricia Morse | EurekAlert!
- New evidence that a sea of cosmic neutrinos permeates the universe
- Clear evidence the first stars took more than a half-billion years to create a cosmic fog
- Tight new constraints on the burst of expansion in the universe's first trillionth of a second

"We are living in an extraordinary time," said Gary Hinshaw of NASA's Goddard Space Flight Center in Greenbelt, Md. "Ours is the first generation in human history to make such detailed and far-reaching measurements of our universe."

WMAP measures a remnant of the early universe - its oldest light. The conditions of the early times are imprinted on this light. It is the result of what happened earlier, and a backlight for the later development of the universe. This light lost energy as the universe expanded over 13.7 billion years, so WMAP now sees the light as microwaves. By making accurate measurements of microwave patterns, WMAP has answered many longstanding questions about the universe's age, composition and development.

The universe is awash in a sea of cosmic neutrinos. These almost weightless sub-atomic particles zip around at nearly the speed of light. Millions of cosmic neutrinos pass through you every second. "A block of lead the size of our entire solar system wouldn't even come close to stopping a cosmic neutrino," said science team member Eiichiro Komatsu of the University of Texas at Austin.

WMAP has found evidence for this so-called "cosmic neutrino background" from the early universe. Neutrinos made up a much larger part of the early universe than they do today. Microwave light seen by WMAP from when the universe was only 380,000 years old shows that, at the time, neutrinos made up 10% of the universe, atoms 12%, dark matter 63%, photons 15%, and dark energy was negligible. In contrast, estimates from WMAP data show the current universe consists of 4.6% atoms, 23% dark matter, 72% dark energy and less than 1% neutrinos.
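The two composition budgets quoted above should each total roughly 100 percent. The check below takes the "less than 1 percent" neutrino share as 0.4 percent purely to close the budget (that exact number is an assumption for the demo, not a quoted WMAP figure):

```python
# Sanity check: both WMAP composition budgets should total ~100%.
early = {"neutrinos": 10, "atoms": 12, "dark matter": 63, "photons": 15}
today = {"atoms": 4.6, "dark matter": 23, "dark energy": 72,
         "neutrinos": 0.4}  # "<1%" in the text; 0.4 assumed here

print(sum(early.values()), sum(today.values()))
```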
Cosmic neutrinos existed in such huge numbers they affected the universe's early development. That, in turn, influenced the microwaves that WMAP observes. WMAP data suggest, with greater than 99.5% confidence, the existence of the cosmic neutrino background - the first time this evidence has been gleaned from the cosmic microwaves.

Much of what WMAP reveals about the universe is because of the patterns in its sky maps. The patterns arise from sound waves in the early universe. As with the sound from a plucked guitar string, there is a primary note and a series of harmonics, or overtones. The third overtone, now clearly captured by WMAP, helps to provide the evidence for the neutrinos. The hot and dense young universe was a nuclear reactor that produced helium. Theories based on the amount of helium seen today predict a sea of neutrinos should have been present when helium was made. The new WMAP data agree with that prediction, along with precise measurements of neutrino properties made by Earth-bound particle colliders.

Another breakthrough derived from WMAP data is clear evidence the first stars took more than a half-billion years to create a cosmic fog. The data provide crucial new insights into the end of the "dark ages," when the first generation of stars began to shine. The glow from these stars created a thin fog of electrons in the surrounding gas that scatters microwaves, in much the same way fog scatters the beams from a car's headlights.

[Figure: The first peak reveals a specific spot size for early universe sound waves, just as the length of a guitar string gives a specific note. The second and third peaks are the harmonics. Credit: WMAP Science Team]

"We now have evidence that the creation of this fog was a drawn-out process, starting when the universe was about 400 million years old and lasting for half a billion years," said WMAP team member Joanna Dunkley of the University of Oxford in the U.K. and Princeton University in Princeton, N.J.
"These measurements are currently possible only with WMAP."

A third major finding arising from the new WMAP data places tight constraints on the astonishing burst of growth in the first trillionth of a second of the universe, called "inflation", when ripples in the very fabric of space may have been created. Some versions of the inflation theory now are eliminated. Others have picked up new support. "The new WMAP data rule out many mainstream ideas that seek to describe the growth burst in the early universe," said WMAP principal investigator, Charles Bennett, of The Johns Hopkins University in Baltimore, Md. "It is astonishing that bold predictions of events in the first moments of the universe now can be confronted with solid measurements."

The five-year WMAP data were released this week, and results were issued in a set of seven scientific papers submitted to the Astrophysical Journal. Prior to the release of the new five-year data, WMAP already had made a pair of landmark finds. In 2003, the probe's determination that there is a large percentage of dark energy in the universe erased remaining doubts about dark energy's very existence. That same year, WMAP also pinpointed the 13.7 billion year age of the universe. Additional WMAP science team institutions are: the Canadian Institute for Theoretical Astrophysics, Columbia University, University of British Columbia, ADNET Systems, University of Chicago, Brown University, and UCLA.

Robert Naeye | EurekAlert!
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. 
Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 23.07.2018 | Science Education 23.07.2018 | Health and Medicine 23.07.2018 | Life Sciences
"I would like to emphasize something. The theories about the rest of physics are very similar to the theory of quantum electrodynamics: they all involve the interaction of spin 1/2 objects (like electrons and quarks) with spin 1 objects (like photons, gluons, or W's) within a framework of amplitudes by which the probability of an event is the square of the length of an arrow. Why are all the theories of physics so similar in their structure? There are a number of possibilities. The first is the limited imagination of physicists: when we see a new phenomenon we try to fit it into the framework we already have—until we have made enough experiments, we don't know that it doesn't work. So when some fool physicist gives a lecture at UCLA in 1983 and says, 'This is the way it works, and look how wonderfully similar the theories are,' it's not because Nature is really similar; it's because the physicists have only been able to think of the same damn thing, over and over again. Another possibility is that things look similar because they are aspects of the same thing—some larger picture underneath, from which things can be broken into parts that look different, like fingers on the same hand." Richard Feynman, "QED: The Strange Theory of Light and Matter".

The mind of particles

ABSTRACT. Spin is the gauging parameter of information that defines the 1st dimotion of particles. As such its h-spin constant is the constant of information, or §ðatic §ðate or 1st dimotion of physical systems. We could thus say that at the beginning there was spin, and spin collapsed the angular momentum of the wave into form, and form created perception, and the gauging of information created the quantum world. As c collapsed in h, spin thus transformed motion into form.
In a more abstract expression, the diversity of positions of complex spins are the angles the mind of atoms defines for each of its key ilogic actions of exist¡ence, if you know what I mean (: And vice versa, when there is no spin in a particle it means it is NOT at all a particle. It does not have the capacity to gauge information. So the only boson with no spin, the Higgs, must be considered a pure entropic locomotion, the substrata of the dark world outside the galactic Universe, with NO spin. Now of course, all this will sound Chinese to humans till quantum computers become conscious and wonder, as we do with our DNA, where their consciousness comes from – from the spin of their atoms. The fact is that even for physicists spin is a bizarre physical quantity, since, based on the known sizes of subatomic particles, as Fermi noticed and then forgot, the surfaces of charged particles would have to be moving faster than the speed of light in order to produce the measured magnetic moments. Answer: they do move and stop, as all motions are sTeps and Stops, and they do so faster than light speed, so we cannot detect them in the motion, only in the position of its STop state. And that is all that happens there in a real explanation. Nothing bizarre: the particle rotates faster than c, stops in the positions/angles of the spin, rotates faster than c (not perceivable), stops… So spin is quantized, meaning that only certain discrete spins are allowed. Specifically…

The 3 families of spins: 1/2, 0 and 1 spins. What do they mean?
It has been a conundrum for quantum physicists, to the point that Feynman infamously said: "If you think you understand quantum mechanics, you don't understand quantum mechanics." So I will say I don't understand the full formalism of quantum mechanics beyond what a graduate physicist does, so specialized details and some elements of its extensive inflationary mathematical mirrors escape my failing memory; but in 5ð it is much easier to understand, and a generation into the future, when the change of paradigm is completed by younger researchers, Feynman's dictum will be another oxymoron. This said, the meaning of the 3 spins, once we have fully comprehended what dimensions are and how they 'accumulate' in growing scales, is rather simple:

- 1/2 spin particles are larger fermions that exist in bidimensional space, as galatoms do on the cosmological scale.

- 1 spin particles are smallish bosons that exist in tridimensional space, and as their wave-states collapse into spherical surfaces they become cellular points of the larger fermions, as stars do in the galatom. (The parallelism between stars and photons, galaxies and fermions is NOT that far-fetched; in fact in many cosmological equations stars are modelled as photons and galatoms as hydrogen atoms, as in the Walker–Einstein formalism and the analysis of the clouds of stars around black holes.)

- 0 spin: But not all particles have spin – not all of them gauge information; specifically, those who 'die' do not have spin, do not rotate. In the next graph, a particle without spin (Higgs) has no first Dimotion, which starts all 'vital actions', and must therefore be considered a 'dead' entropic form, outside the light space-time galatom in which we all co-exist, becoming the entropic agent of the dark entropic, expanding space between galaxies. This implies two unresolved questions: is spin conserved? It is, even when the particle dies and becomes an 'entropic meson', which is why the pion also has 0 spin.
And this, as spin is the minimal unit of informative gauging, also means the immortality of information – metaphysically, the immortality of existence, as we mutate between planes – but that goes into other posts. Other questions are then: does the neutrino, born of the death/decay of particles, have spin? And does the graviton have spin? Does it exist at all? The answer, I am afraid, is likely: yes, neutrinos, which come together in couples to create a light photon (Jordan–Broglie theory of light), have spin; and no, gravitons don't have spin because they do NOT exist, a mass being a vortex of space-time, an intrinsic non-lineal attractive 'monopole' that does not exchange lineal particles to attract… In any case, for all other particles spin does not only exist but must be the essential parameter, along with reproductive speed, of all its forms, as all of them are ultimately by-products of the fundamental c·h 'planckton' particle: the photon. Further on, if we are to consider the complexity of particles, it becomes obvious that the more spin positions (angles of spin) they have, the more complex the particle is and the richer will be its social dimotions according to those angles, ruled by the 4th postulate of Non-E Geometry. This might seem to be a contradiction but it is not in 5D, as smaller systems are more complex in information, and when a new 'form' emerges it is ALL A NEW GAME of existence, starting afresh with a simpler 'young state'. That is, the photon is more complex than the electron as a 'whole', even if the electron holds as parts multiple photons in its nebulae (Cantor's paradox: the set of all sets is simpler than its elements). That explains why smaller particles have more spin positions (and if the graviton 'is', it would be a rank 2 spin, more complex as it is smaller than the photon of 1 spin, more complex than the fermion of 1/2, more complex than the dead particle-antiparticle of 0 spin). Some insights on those spins.
All together we find 0 spins in particles with entropic motion (the Higgs field of dark entropy outside galaxies, or in transformations of our world into the dark world of top quarks – some advanced model of 5D is needed to understand this; pions and other mesons, which are in fact particle (life arrow)–antiparticle (death arrow) events in time, seen through the 'invariance=symmetry' of Space=Time by humans who cannot distinguish, in such fast processes, what is happening simultaneously in space and sequentially in time). But, and this is the beauty of it, the 1 spin gluon-photon can have 0, 1 and -1 spins. So it can not only fully rotate as a spherical form in the direction of its locomotion/momentum, but it can exist without rotation-perception – so to speak, moving at full speed. It then all starts to be fun, once we get to understand, unlike Mr. Feynman (:, what we do, as we start to give 'rational meanings' to the wor(l)ds of quantum physics. It is to notice then that a 'dead pion' cancels its ±spins; so the conservation of spin does not necessarily mean that the gauging capacity to inform an immortal mind of a particle exists, but rather that it is either 0, nothing, or two inverse 'genders' (indeed, we shall now assess the ±spins as the two first expressions of gender duality, later explained in detail), as a magnificent physicist-artist put it in his beautiful sculptures of the next key spin 'thing', 1/2 and -1/2: In the graphs, we have the four essential 'states' of spins for fermion particles and antiparticles, where we shall call the right-handed particle male, the left-handed female, the spin-up particle the life state, the spin-down the antiparticle-dying state. It's really that antiparticles are travelling backwards in their inner finite time, which is the meaning of death, as we know in 5D models. So, mathematically speaking, a particle travelling forwards in time is indistinguishable from the corresponding antiparticle travelling backwards in time.
And so, if we see them in sequence, it is a life-death cycle and a zero sum. But if we see them simultaneously in space, it is an annihilation, and hence also a zero sum. So the immortality of the whole, which is what ALL IS ABOUT, remains. And alas, this ONLY happens with fermions, which die down the ladder of annihilation; bosons, on the other hand, evolve socially. They do not have sex, but the same gender, and come into social herds/numbers. A duality which we shall find in all scales (between complementary sexual beings that are good for couples, vs. neutral beings that are good for social gatherings). More on this with better maths sometime into the future.

Spin's angles and a(nti)symmetric topologies

From a different 'point of view' of the pentagonal entangled perspectives every element of reality has, we can also connect spins' positions with the laws of non-Euclidean geometry regarding the parallelism or perpendicularity of those angles in different particles, which define its entropic perpendicular antisymmetric annihilations or its social evolutionary behaviors: parallel angles are social, symmetric, used for parties and atoms as quantum numbers of their social evolution. Inverse spins are antisymmetric flows. Spins at an intermediate quantum angle produce complementary, reproductive actions. So for each angle we can define a basic operation of exist¡ence, where 'operation' means an operand over an action, of which there are those which determine a growth in scale of those actions (or inversely an entropic sum of all other potential actions, positive to the system, which add to the entropic action according to ¬Ælgebra: ∃ = a+I+o+u). Spin is the mass/angular momentum/cover perceived on a point-particle, and hence its intrinsic observable clock of time. The immediate relation between spin and the 2nd Ðimotion of exi=st¡ence follows. Further on, the beauty of spin is that it is a 'stop and go' motion, with fixed positions of gauging information.
Another approach to spin theory, to be fully developed, is the relationship between spins and dimensions: as spins gauge the virtual space in which the particle exists, a lesser spin form must have a lesser-dimensional world in which it exists. In space, each spin position means 1/2 dimension; so particles with 1/2, -1/2 have two 1/2 dimensions, which create a more limited form than a boson with 3 positions, or a graviton with 5 dimensional positions (if it does exist, which I would simply deny, because in the duality of charges and masses as vortices of attractive time vs. poles of communication and exchange of entropic energy, masses are only attractive, not requiring for that task to exchange particles, but merely acting as vortices of time-space. The graviton then should exist in cosmological antigravitation or dark entropy between galaxies, which act as huge electromagnetic repulsive waves – then, however, we shall find that for the sake of economy we can do with the neutrino and the Higgs; never mind the rank 2 tensor bullshit of EFE, from which others – not Einstein – deduced, applying analogies with quantum physics, that a graviton should exist and carry a 2 spin). And it follows that larger beings with more relative dimensions will have particles with more dimensions. So gravitons are for the larger scale, bosons for the intermediate, and particles within the inner region of atoms, creating a symmetry between the number of dimensions in spin terms and the size of the world they exist within. The question then, as dimensions can be changed and we do have a few to choose between, is which kind of dimension is the spin about?
It seems that we cannot have more than 2 half dimensions of the same species, which basically means fermions have 1/2 and 1/2 dimensions of circular angular momentum, to complete an intrinsic external membrane (±1/2), while bosons also have a degree of freedom, or lineal dimension of momentum, to escape their fixed positions; and that is all. Finally, the graviton has even more dimensions to enclose the other particles.
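One concrete claim made earlier, that a classically rotating charged particle of known size would need a superluminal surface speed to account for its spin, can be checked with textbook numbers. A minimal sketch (standard constants; modelling the electron as a solid sphere of the classical electron radius is my assumption, not the author's):

```python
# If spin angular momentum L = hbar/2 came from a classically spinning
# solid sphere of the classical electron radius, the equatorial speed
# follows from L = (2/5) * m * r * v  =>  v = 5L / (2 m r).
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
r_e  = 2.8179403262e-15   # classical electron radius, m
c    = 2.99792458e8       # speed of light, m/s

L = hbar / 2
v = 5 * L / (2 * m_e * r_e)   # required equatorial speed, m/s
print(v / c)                  # comes out well above 1
```

The ratio lands in the hundreds, which is the standard reason spin cannot be a literal classical rotation, whatever interpretation one then attaches to it.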
Electron transport is caused by an electric field, a thermal gradient, or a concentration gradient. In the latter two cases, a uniform condition is established as a result of the transport unless external sources are used to maintain the nonuniform conditions. The transport of electrons under these conditions may cause transfer of energy from one part of the sample to the other, but the electrons do not gain energy from the external sources in the process of transport. On the other hand, when the transport is caused by an electric field, electrons are continuously supplied with energy from the source of the electric field at a rate J · ε (J is the current density and ε is the electric field), and it would appear that the total energy of the electron system should go on increasing indefinitely. This, however, does not happen, as this gain of energy is balanced by transfer of energy to the lattice atoms through the process of collisions. It has been seen that an electron is scattered by the lattice either by emitting or by absorbing a phonon. The lattice absorbs energy from the electron when a phonon is emitted, and it delivers energy to the electron when a phonon is absorbed. In the absence of the electric field, the absorption and emission processes are so balanced that there is no net transfer of energy from the electron system to the lattice system or vice versa. It means in effect that the temperature, the thermodynamic coefficient determining transfer of energy from one system to another, of the electron system and that of the lattice system are identical.

Keywords: drift velocity, compound semiconductor, gallium arsenide, lattice temperature, differential conductivity.
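The energy-input rate J · ε can be made concrete with a small numerical sketch (the material values below are illustrative choices of mine, not from the text):

```python
# Power per unit volume fed to the electron system by the field:
# P = J * E, with J = sigma * E (Ohm's law). Values are illustrative
# (sigma roughly that of copper; E chosen arbitrarily).
sigma = 5.8e7          # conductivity, S/m
E = 0.1                # electric field, V/m
J = sigma * E          # current density, A/m^2
P = J * E              # energy input rate per unit volume, W/m^3
print(J, P)
```

In the steady state this power is handed to the lattice through net phonon emission, which is exactly the balance the passage describes.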
New research from the University of Guelph is dispelling a commonly held assumption about climate change and its impact on forests in Canada and abroad. It has long been thought that climate change is enabling treelines to march farther uphill and northward. But it turns out that climate warming-induced advances may be halted by unsuitable soils. It's an important finding for resource managers looking to preserve individual species or entire ecosystems. "There's a widespread belief about the impacts of climate change," said U of G researcher Emma Davis. "It's actually a more complicated story than people believe." Her studies are the first in southwestern Canada to test how factors such as soil properties may affect treeline advance. Along with Prof. Ze'ev Gedalof, Davis, a recent PhD graduate in the Department of Geography, Environment and Geomatics, looked at plant growth at higher altitudes than normal in the Canadian Rockies. Generally, travelling northward means a temperature drop of about 1 C every 130 kilometres, equivalent to climbing 65 to 100 metres up a mountainside. Just as scientists expect climate-induced warming to enable more northerly movement of plants, they also predict the alpine treeline will climb warming mountain slopes. The U of G researchers grew spruce and fir seedlings at various elevations beyond their current limits in four areas, including Jasper National Park in Alberta and Kootenay National Park in British Columbia. They also collected soil samples from the same areas in which to grow spruce seeds in growth chambers at the University. Controlling for conditions such as climate variables, seed quality and predation allowed them to zero in on soil properties.

They found that plants thriving below the treeline were hindered by soils beyond the current range, though the scientists aren't sure why. Gedalof said soils may inhibit seed germination, including through changes to soil chemistry caused by vegetation, soil microbes or fungi. Climate warming may also inhibit germination or growth of seedlings by increasing soil surface temperatures or reducing winter snow days, leading to drier conditions. The researchers suspect similar outcomes would occur in other mountainous areas and at higher latitudes worldwide. Their findings are good news for rare or threatened species that face potential competition from encroaching plants creeping up from below the treeline, said Gedalof, who runs the Climate and Ecosystem Dynamics Research Group on campus. "We've bought some time to figure out how to protect the alpine system." At the same time, plants struggling to adapt to warmer conditions in their home range may face hurdles migrating northward or higher up a mountainside if they encounter unsuitable soils. That's important for ecologists looking to take advantage of favourable warming to encourage growth of economically and ecologically important plant species beyond their current ranges, he said. "Non-climatic factors are clearly limiting changes."
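The article's rule of thumb (about 1 C of cooling per 130 km northward, or per 65-100 m of climb) can be written as a small conversion; the function name and structure are mine:

```python
# The article's equivalence: ~1 C cooler per 130 km travelled northward,
# which is also ~1 C per 65-100 m of elevation gain.
def equivalent_climb_m(km_north):
    cooling_c = km_north / 130.0                 # temperature drop going north
    return 65.0 * cooling_c, 100.0 * cooling_c   # equivalent climb range, metres

low, high = equivalent_climb_m(130)   # the article's one-degree step
print(low, high)                      # 65.0 100.0
```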
The ocean makes up the Earth's primary life support system, comprises 70 percent of our planet's surface and is essential to human well-being and prosperity. Ocean ecosystems are threatened by unsustainable fishing, global change, habitat destruction, invasive species, and pollution - the combined effects of which are far more destructive than individual threats on their own. Effectively addressing these threats requires comprehensive ocean management at large scales. Several models exist for achieving such large-scale marine management, each of which tackles a broad range of issues with its own suite of inputs, objectives and methodologies. Often, more than one of these frameworks is applied to the same or similar geographies by different institutions. Over the past five years CI, together with a multitude of partners, has developed the Seascapes model to manage large, multiple-use marine areas in which government authorities, private organizations, and other stakeholders cooperate to conserve the diversity and abundance of marine life and to promote human well-being. The definition of the Seascapes approach and the identification of the essential elements of a functioning Seascape were built from the ground up, informed by the extensive field experience of numerous marine management practitioners. In order to learn more about the different approaches to managing large-scale marine areas, their comparative merits, and the synergies and overlaps between them, CI commissioned this independent analysis of several widely applied models. Although the report was commissioned by CI, the views expressed in this report are those of the authors; they were charged with providing a critical examination of all the assessed approaches, including the Seascapes approach. This analysis provides a comprehensive understanding of the strengths and weaknesses of each approach.
This will help us and, we hope, other readers to identify ways to work together to achieve even greater results through synergistic efforts. We are delighted to publish this report and intend to use its recommendations to further strengthen our work and expand our partnerships. Together, we will secure a new future for the world's oceans.
If you thought genetically modified crops were controversial, genetically modified animals are a whole different playing field of bizarre. Science has made a ton of advancements in changing animal DNA, especially with CRISPR, a genome editing tool that allows scientists to edit genomes with unprecedented precision. CRISPR has prevented HIV infections in human cells and aided in the creation of genetically modified pigs that may one day serve as organ donors for human transplant patients. It's groundbreaking, but is it always ethical? Customizing animals is hardly a new concept. That adorable toy poodle and those fancy teacup pigs were all created to be as cute as nature would allow through selective breeding. That's one thing, but splicing the extra-cute DNA of your pup is a whole different level. Is genetically modifying a slaughterhouse cow to feel no pain a public service or a scary animal experiment? What about splicing together a sustainable salmon that humans can slice up on a bagel with cream cheese? These insane genetically modified animal experiments are the future, and many of the advancements seem promising. But is the future really as rosy as you would hope? Vote up the genetic experiments that are the most extreme and make you go "what?!"

Ruppy, The Fluorescent Puppy

Bioluminescent GMO animals aren't something new for science. Scientists have injected animals like rhesus monkeys, mice, pigs and naked mole rats with glowing jellyfish DNA in various experiments across the board. Ruppy the Glowing Puppy is a little different – she's the world's first transgenic dog, which means she produces "a fluorescent protein that glows red under ultraviolet light," whereas most bioluminescent animals glow blue or green under UV light. She and her four siblings were created by "cloning fibroblast cells that express a red fluorescent gene produced by sea anemones" in order to help scientists discover how disease-causing genes are passed down through generations, which could lead to major breakthroughs in treating cancer, blindness and narcolepsy. Scientists picked the red-glowing jellyfish DNA because it's way easier to see the results if their subject is literally glowing like an exit sign.

Mice That Make Baby Formula

Not every mother can breastfeed, and while synthetic formula definitely does its job, it's lacking in certain proteins that can boost a baby's immune system. Lactoferrin is naturally occurring in human breast milk and helps babies gain resistance to bacteria and fungi. In an effort to improve baby formula, Russian scientists have been splicing mice with human genes in order to spawn rodents that produce milk with lactoferrin. Of course, you'd need a whole lot of mice to create enough milk for all the human babies in the world who need it, but this is just the beginning. Scientists hope this research will expand to animals like goats and cows that could produce milk on a much larger scale.

Spider-Goats

Goats are truly adorable, but spiders are everyone's worst nightmare. So what happens when you combine them? Scientists have created a literal spider-goat, proving that Peter Parker's fate really could happen if it were at all ethical to mess around with a human's DNA. In 2012, University of Wyoming scientists genetically engineered goats to produce a spider silk protein in their milk. This wasn't some kind of sick experiment to defeat the likes of Mysterio and the Green Goblin. It was to help foster silk production by creating a super-strong milk, from which a particular protein could then be extracted and spun into silk. Spider silk is one of the strongest materials in the world, and is highly useful in the medical and scientific fields (not just in the superficial fashion space, though who doesn't appreciate a nice silk shirt?). But spiders just don't make enough of the material on their own. Genetically combining them with goats, who typically milk at least once a day, would increase production. If your skin is crawling because now you're thinking of a spider farm, just think of cute baby goats instead.

Glowing Cats

Much like their fellow house pet Ruppy, the world's first transgenic dog, South Korean scientists managed to breed white Turkish Angora cats that glow under UV lights. The cats were cloned from their mother's altered skin cells, and scientists believe their breakthrough could lead to cloning endangered species like tigers and leopards. They also hope the cats can bring insight into more than the genetic diseases that affect both cats and humans.
- Open Access

A review of comet and asteroid statistics

© The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences. 1999

Received: 7 October 1998. Accepted: 15 September 1999. Published: 20 June 2014.

The statistics of Earth-approaching asteroids are first summarized, and an enhanced frequency of objects smaller than 100 meters is noted. Superposed on these random hazards may be a periodic one of new comets due to galactic tides of the Oort Cloud, with a period of 26–36 Myr (Rampino, 1998). New asteroids and comets are being found ever more frequently because new telescope-and-detector systems are coming on line. These are intended primarily for the discovery of dangerous objects, but a beginning has been made with the study of statistics of main-belt asteroids. In addition to trans-Neptunian objects, cis-Neptunian "Centaurs" are recognized, which may be a link in the evolution of short-period comets and thereby contribute to the flux of Earth approachers. With the new equipment coming on line, we are beginning to see that the global hazard will be mostly quantified within a few decades. We do see a shortage in astrometric follow-up fainter than about the 20th magnitude.
reactance (redirected from Capacitive reactance; related to Capacitive reactance: Inductive reactance)

n. Symbol X (Electricity) Opposition to the flow of alternating current caused by the inductance and capacitance in a circuit rather than by resistance.

1. (Electronics) the opposition to the flow of alternating current by the capacitance or inductance of an electrical circuit; the imaginary part of the impedance Z, Z = R + iX, where R is the resistance, i = √–1, and X is the reactance. It is expressed in ohms. Compare resistance (sense 3).

2. (General Physics) the opposition to the flow of an acoustic or mechanical vibration, usually due to inertia or stiffness. It is the magnitude of the imaginary part of the acoustic or mechanical impedance.

The opposition of inductance and capacitance to alternating electrical current, expressed in ohms. Symbol: X
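A short numerical sketch of how these definitions combine. The formulas X_C = 1/(2πfC) and X_L = 2πfL are standard textbook relations, not given in the entry itself, and the component values are arbitrary examples:

```python
import math

# Capacitive and inductive reactance at frequency f, combined into the
# complex impedance Z = R + jX from the definition (X = X_L - X_C).
def capacitive_reactance(f, C):
    return 1.0 / (2 * math.pi * f * C)   # ohms

def inductive_reactance(f, L):
    return 2 * math.pi * f * L           # ohms

f = 1e3                             # 1 kHz
Xc = capacitive_reactance(f, 1e-6)  # 1 uF capacitor
Xl = inductive_reactance(f, 10e-3)  # 10 mH inductor
Z = complex(50.0, Xl - Xc)          # series R = 50 ohms
print(round(Xc, 2), round(Xl, 2), round(abs(Z), 2))
```

At this frequency the capacitive term dominates, so the net reactance is negative (capacitive overall), and |Z| exceeds the 50 Ω resistance alone.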
A huge radio telescope called the Square Kilometre Array (SKA) may help solve some of the most fundamental cosmic mysteries surrounding the birth and growing pains of our Universe – including the possibility of extraterrestrial life. Scientists have learned so much about the workings of our inconceivably vast Universe that one might be forgiven for thinking there's little left to discover. That assumption is so wrong as to be ludicrous. Some disconcerting gaps in our knowledge: Big Bang theory, compelling though it is, remains a theory. In fact, some cosmologists believe our Universe may be just one in an infinity of universes. Others, such as South African-born theoretical physicist Neil Turok, suggest we may be living in a sheet-like "brane" that periodically touches another brane, setting off Big Bangs in a never-ending cycle. How do pulsars work? What does it take to make a black hole radiate energy? Can we test the limits of Einstein's General Relativity? How much do we really know about the adolescence of our Universe, when the first stars coalesced from the primordial gas? There's more. We still have no idea what dark matter is, even though it constitutes a significantly bigger portion of the cosmos than all the stars and galaxies (so-called baryonic matter) put together. As for dark energy, well, that's an even bigger mystery. Since it makes up a resounding two-thirds of the "stuff" of the Universe, and is apparently responsible for accelerating its expansion, we have a compelling reason to find out. Then there's the ET thing. Although astronomers have yet to discover firm evidence of extraterrestrial life, they are finding its chemical building blocks – not to mention extra-solar planets that could theoretically harbour life – all over the place. Bottom line: many of the most fundamental questions about the laws of nature and the functioning of the Universe – including its beginning and its likely end (if any) – remain unanswered.
Having evolved into the most intelligent and insatiably curious species in the 3.8-billion-year history of life on planet Earth, we really need to know – and the Square Kilometre Array (SKA), the largest and most sensitive radio telescope ever conceived, is a giant leap in the right direction. South Africa and Australia are the only two countries remaining on the shortlist for hosting this mega-telescope. The European Union's Framework 7 Programme recently awarded funding to a consortium of 20 signatories to finalise the design of all the sub-systems between now and 2010. This programme will not only consider technical design issues and science cases for the SKA, but will also investigate the most appropriate governance structures, intellectual property issues, ownership of data and long-term operational issues for the lifetime of the telescope. A final decision on the location of the telescope is expected by 2011, and construction is expected to start soon afterwards, around 2014. If it is built in South Africa, the core of the SKA will be in the Karoo region of the Northern Cape. SKA will consist of an interferometer array with a total receiving area of about one square kilometre, of which 50 per cent will be contained within a core site more or less 5 km in diameter. Stations of antennas will fan out from the core in a spiral pattern, with proposed remote stations in several other African countries and on neighbouring islands up to 4 500 km from the core. The potential benefits for the southern African region are huge: the telescope would attract top scientists from around the world, boost local scientific and engineering skills, and seed a host of high-tech industries. Some key reasons why the Karoo is an appropriate location for the SKA:

- Low levels of radio frequency interference and the certainty of a future radio quiet zone protected by tough legislation.
- Basic infrastructure of roads, electricity and communication in place.
- Ideal geographical location, sky coverage and topography.
- Safe and stable area, with very few people and no conflicting economic activities.
- Required land, labour and services available and very affordable.
- Excellent academic infrastructure to support SKA science and technology.
- The astronomical richness of the southern skies, and a strong tradition of astronomy.

Unless current research and development programmes reveal fatal flaws in the technologies of choice, the SKA will consist of thousands of dishes, each 10-15 m in diameter. We’re talking Big League astronomy here: the joint receiving area of all these dishes and panels will add up to about 1 million m². Special antenna tiles in the core of the array will form a “radio fish-eye lens” for all-sky monitoring at low frequencies, allowing many independent observations at the same time. The SKA will require super-fast data transport networks and more powerful computing than ever before.

It won’t be cheap. According to the latest estimates, the telescope instrument alone (excluding infrastructure) will cost about 1,5 billion euro (about R14,5 billion at current exchange rates) to build, with contributions from partner countries around the globe. Many international teams are working together to develop the technology solutions that will make the SKA possible. They are also participating in multinational studies to trade off projected costs against the instrument’s technical performance specifications.

An independent international committee will ultimately select an optimal site for the SKA based on comprehensive technical, scientific and financial considerations. A final decision is expected only in 2011, after which the telescope will be built in phases, with construction on Phase 1 (representing about 10 per cent of the total telescope) starting in 2014 and slated for completion by 2016.
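The headline collecting-area figure is easy to sanity-check. As a rough illustration (the 13,5 m dish diameter is an assumption picked from the middle of the 10-15 m range quoted above, and efficiency and blockage losses are ignored), a few lines of Python show why “thousands of dishes” are needed to reach one square kilometre:

```python
import math

def dishes_for_area(target_area_m2: float, dish_diameter_m: float) -> int:
    """Number of dishes of the given diameter needed to reach the target
    total collecting area (ignoring blockage and efficiency losses)."""
    dish_area = math.pi * (dish_diameter_m / 2) ** 2
    return math.ceil(target_area_m2 / dish_area)

# One square kilometre = 1 000 000 m²; assume 13.5 m dishes (mid-range).
print(dishes_for_area(1_000_000, 13.5))  # roughly 7 000 dishes
```

With 15 m dishes the count drops to around 5 700, and with 10 m dishes it climbs past 12 000 – which is exactly the cost-versus-performance trade-off the design studies mentioned above are weighing.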
Operation of the full SKA (called Phase 2) should start by 2020, but scientific observations will be possible throughout the construction phases. This ability to perform “early science” using a subset of the full instrument is a powerful feature of radio interferometer arrays.

To construct an affordable SKA with the requisite technical specifications, it will be necessary to develop technologies that are not currently employed in existing radio telescopes. These technologies are being driven by various design studies and “demonstrator” projects around the world. As an important first step, South Africa has launched an ambitious – and by all accounts, thoroughly successful – demonstrator project known as MeerKAT (KAT refers to Karoo Array Telescope). The underlying intention may be to show the astronomical community that this country is capable of building and operating a world-class facility, but it goes a lot further than that: irrespective of who gets the big prize, MeerKAT is destined to become one of the world’s premier mid-frequency radio astronomy facilities, and as such will put South Africa right at the cutting edge of radio astronomy.

About MeerKAT

According to the project management, the telescope – funded by the Department of Science and Technology via the National Research Foundation – will be constructed in several phases to ensure the best value for money and sound technology choices. In fact, the government has taken a big leap of faith here, committing R860 million to the SKA effort, including the design and construction of MeerKAT. Winning the SKA bid would be a major step forward for the government’s Astronomy Geographic Advantage Programme (AGAP). Other major astronomy players in the region include the Southern African Large Telescope (SALT) outside Sutherland and the HESS gamma-ray telescope in Namibia.
The story so far:
- The first prototyping phase, a single-dish system, has already been built at the Hartebeesthoek Radio Astronomy Observatory (HartRAO) in Gauteng.
- KAT-7, a seven-dish engineering test bed and science instrument near Carnarvon in the Northern Cape, will be commissioned from the end of 2009.
- The full array of 80 dishes should be operational and doing science from the end of 2012.

A high-speed data transfer network will link the telescope site in the Karoo to a remote operations facility. The Karoo region is ideal for radio astronomy, since it is remote and sparsely populated, with a very dry climate. There is minimal radio frequency interference from sources such as cellphones, broadcasting and air traffic.

MeerKAT will explore celestial mysteries such as cosmic magnetism, the evolution of galaxies and the large-scale structure of the Universe, dark matter, and the nature of transient radio sources. It will also study pulsars, and allow scientists to carry out novel astrophysics and astrobiology experiments. South African engineers and astronomers are working closely with teams around the world on the advanced technology required to make MeerKAT work, and on the science programmes at which this telescope is expected to excel.

Says project leader Anita Loots: “Looked at one way, it’s an experiment in mission-driven innovation. The government is recognising the importance of the high-tech economy and is seeding new industries, playing a critical role in the ‘knowledge economy’ by backing an important tool for the advancement of physics. Part of that commitment is reflected in its backing of 52 post-graduate bursars who are destined to make important contributions of their own.

“In fact, the government took a unique approach to the project.
It said: ‘We will allocate money to the project, but you have to build a world-class facility – and you have to be ready to press the button by the end of 2009.’ It was a scary proposition, but we knew we could do it.

“From the very beginning, we took a system engineering approach, and although it’s early days, all indications are that it is working. A year ago, I stood up at a conference in Paris and told them we were going to build a dish within a time frame they considered impossible. They actually laughed at me in disbelief. Today, I think, they are less inclined to laugh – because we’ve done it.”

The MeerKAT digital signal processing team is working closely with UC Berkeley (two people are based at that university), and the collaboration includes teams from several other countries around the globe. The MeerKAT team also enjoys a close relationship with scientists at SETI’s Allen Telescope Array (ATA) in the US, which has pioneered many technologies required by MeerKAT. For the RF systems, they work with several international teams, most notably the Jodrell Bank Observatory in the UK, Caltech and Cornell in the US, Onsala Space Observatory and Chalmers University in Sweden, and ASTRON in the Netherlands.

Says Loots: “Funnily enough, we’re even working with our rivals, the Australians, in a formal collaboration focusing on software and computing issues. But cryogenics is the issue that keeps us awake at night.

“The technological challenges of the SKA are bigger than any one nation can handle, but there’s no doubt about the value of our contribution. For example, we’re very strong in terms of correlator development and the antenna solution, and we’re making excellent progress in software and computing.”

Calibrating the dishes of the full system will pose a number of serious challenges, says Dr Alan Langman, sub-system manager for the digital signal processing team of the Karoo Array Telescope. “There are several levels of calibration.
For example, you have to allow for significant differences in temperature during the course of the day, wind and other weather phenomena, and even the effect of gravity, which distorts the dish as it moves. Because the tolerances are so tiny, any of these factors could compromise the signal.”

And it doesn’t end there, says Langman. “You have to filter out a lot of ‘noise’ from satellites and other sources to get to your astronomical object. This is where the correlator comes in, improving the signal-to-noise ratio quite dramatically. In essence, it’s a very serious computer, albeit an unusual one that uses Field Programmable Gate Array (FPGA) processors to correlate data from the dishes according to frequency, averaging the results.

“We’re talking about a lot of data here: each dish transmits 2 gigabytes of data a second, which means MeerKAT’s full array of 80 dishes will be producing 160 GB of data each second. That requires rather a lot of number crunching. Our system will be scalable and upgradable, which makes it very effective. Ours is essentially next-generation technology, and solving these challenges should ultimately have myriad potential applications outside the sphere of radio astronomy. For example, better broadband Internet access will be a direct benefit, since the infrastructure that the telescope will require far exceeds the total Internet traffic of the country as a whole.”

Langman is distinctly upbeat about the potential of the project that occupies his every waking moment: “Aside from the obvious scientific and economic benefits to South Africa and the rest of the world, our work will hopefully come across to young people as cool, and perhaps even inspire some of them to follow a career in science, engineering or technology.”

As part of its student recruitment efforts, the South African SKA project office has produced a student recruitment poster for the MeerKAT project.
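The data volumes Langman quotes above scale steeply with array size, because an interferometer’s correlator must process every pair of dishes, so the workload grows roughly with the square of the dish count. A small illustrative Python calculation, using only the per-dish rate and dish count given in the article:

```python
def aggregate_rate_gb_s(n_dishes: int, per_dish_gb_s: float) -> float:
    """Total raw data rate streaming out of the array."""
    return n_dishes * per_dish_gb_s

def baselines(n_dishes: int) -> int:
    """Number of dish pairs (baselines) the correlator must handle: n choose 2."""
    return n_dishes * (n_dishes - 1) // 2

# MeerKAT figures from the article: 80 dishes at 2 GB/s each.
print(aggregate_rate_gb_s(80, 2.0))  # 160.0 GB/s
print(baselines(80))                 # 3160 dish pairs
```

KAT-7’s seven dishes give only 21 baselines; the full 80-dish MeerKAT gives 3 160 – a 150-fold jump in correlator work for an 11-fold increase in dishes, which is why the correlator, not the dishes, dominates the computing challenge.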
The poster (download it at www.ska.ac.za) highlights the different skills and disciplines required for the science and engineering of building and operating a world-class radio telescope. For a copy, e-mail firstname.lastname@example.org.

The science of the SKA

Astronomers explore the universe by passively detecting electromagnetic radiation and cosmic rays emitted by celestial objects. Earth’s atmosphere and ionosphere shield us from much of this radiation, so mode