ISpeechRecoGrammar IsPronounceable method (SAPI 5.4)
Microsoft Speech API 5.4
The IsPronounceable method determines if a word has a pronunciation.
Additionally, the SpeechWordPronounceable constant returned by this method indicates whether the word exists in the grammar object's lexicon. Words are likely to be pronounceable even if they are not found in the lexicon.
ISpeechRecoGrammar.IsPronounceable( Word As String ) As SpeechWordPronounceable
Word: Specifies the word to check for a pronunciation.
Return value: A SpeechWordPronounceable constant.
The following Visual Basic form code demonstrates the use of the IsPronounceable method.
To run this code, create a form with the following controls:
- A command button called Command1
- A text box called Text1
Paste this code into the Declarations section of the form.
The Form_Load procedure creates a grammar and loads the general dictation topic. The Command1_Click procedure passes the word or words in Text1 to the IsPronounceable method and displays the resulting SpeechWordPronounceable constant.
Most correctly spelled words will be known and pronounceable; most incorrectly spelled words will be unknown and pronounceable. The example begins with a word which is pronounceable, even though it is misspelled.
Option Explicit

Dim MyRecoContext As SpeechLib.SpSharedRecoContext
Dim MyGrammar As SpeechLib.ISpeechRecoGrammar

Private Sub Command1_Click()
    On Error GoTo EH

    Dim strTemp As String
    Select Case MyGrammar.IsPronounceable(Text1.Text)
        Case SWPKnownWordPronounceable
            strTemp = "KnownWordPronounceable"
        Case SWPUnknownWordPronounceable
            strTemp = "UnknownWordPronounceable"
        Case SWPUnknownWordUnpronounceable
            strTemp = "UnknownWordUnpronounceable"
    End Select
    MsgBox "The word """ & Text1.Text & """ is " & strTemp

EH:
    If Err.Number Then ShowErrMsg
End Sub

Private Sub Form_Load()
    On Error GoTo EH

    Set MyRecoContext = New SpSharedRecoContext
    Set MyGrammar = MyRecoContext.CreateGrammar
    MyGrammar.DictationLoad ""
    Text1.Text = "missspeled"

EH:
    If Err.Number Then ShowErrMsg
End Sub

Private Sub ShowErrMsg()
    ' Declare identifier:
    Dim T As String
    T = "Desc: " & Err.Description & vbNewLine
    T = T & "Err #: " & Err.Number
    MsgBox T, vbExclamation, "Run-Time Error"
    End
End Sub
Could Shallow Biospheres Exist Beneath the Icy Ceilings of Ocean Moons?
March 14, 2018 / Posted by: Miki Huynh
Vent tubeworms, such as Riftia pachyptila found near the Galapagos Islands, represent the kinds of life that can persist near deep sea hydrothermal vents, the source of chemical energy that may provide one of the building blocks for life. Credit: NOAA Okeanos Explorer Program, Galapagos Rift Expedition 2011 (via Astrobiology Magazine)
Astrobiologist Michael Russell, Co-I of the NASA Astrobiology Institute Icy World’s Team at NASA’s Jet Propulsion Laboratory, and his colleagues suggest that where an icy crust and a hidden ocean meet in a frozen world such as Europa, two sources of the building blocks of life could join together and potentially support the evolution of life. At the underside of Europa’s icy crust, they suggest that a shallow biosphere–a network of ecosystems–can form.
The feature story by Charles Q. Choi is published in Astrobiology Magazine.
The research paper, “The Possible Emergence of Life and Differentiation of a Shallow Biosphere on Irradiated Icy Worlds: The Example of Europa” is published in Astrobiology.
Source: [Astrobiology Magazine (astrobio.net)]
This lab was conducted in order to show and analyze the way DNA is extracted.
If the lab is conducted properly, we should be able to extract a visible amount of DNA from the strawberry-and-detergent mixture.
The independent variable in this experiment is the strawberry mixture while the dependent variable is the amount of DNA extracted.
Place a strawberry in a plastic baggy filled with the detergent mix and crush the strawberry, mixing the pulp thoroughly with the detergent mix. Pour the detergent mixture into the funnel and let the liquid drain into the beaker, then add the ethanol to the mixture.
The ethanol sat on top of the detergent because it is less dense. Bubbles started rising as soon as the ethanol was added, turning the liquid cloudy. The DNA grouped rapidly, taking no longer than two to three minutes total before slowing down and seeming to stop grouping. The DNA itself looked like sputum or phlegm, was easily extracted from the mixture, and was sticky to the touch. When the DNA was extracted from the test tube and the mixture was stirred, more DNA started to collect at the top.
-As the strawberry is physically mashed into the detergent, the cells are broken down and opened. The ethanol is less dense than the mixture and draws the now-accessible DNA to the surface, where it is viewable.
-In comparing the extraction of strawberry DNA to that of human DNA, given a sample with the same number of cells, more DNA would be extracted from the strawberry, for it has eight sets of chromosomes (strawberries are octoploid) while humans have just two sets.
-In real world situations DNA extraction would be used in something as complex as a murder investigation, in which DNA would have to be extracted to match a perpetrator to the evidence, or as simple as a pregnancy test.
-A single cotton thread cannot be seen from 100 feet away, but thousands of cotton threads together in a rope would be visible. The same applies to DNA in this experiment. A single double helix is hard to view even with the most powerful microscopes, but when thousands of sticky little DNA strands bind together in the ethanol solution, they become visible to the human eye.
-Based on prior knowledge, I know that the extraction of human DNA from muscle tissue is quite similar to the extraction of DNA from a strawberry and is also ethanol based. That is to be expected, for there is really no major cellular difference between strawberries and humans beyond the fact that one is a plant and the other is an animal. Both are eukaryotic.
In the experiment, DNA was successfully extracted from a strawberry, demonstrating a process similar to the one real scientists use to extract DNA from cells. The lab was in fact successful, as my group and I were able to extract a visible amount of DNA from the mixture. There was really no source of error in this lab due to its simplicity. To improve this lab, I feel we should compare the amount of strawberry DNA to that of another fruit, like a banana or kiwi. I personally learned the physical process of DNA extraction as well as what DNA looks like.
solar fuel plant
A solar fuel plant is a facility designed, using a variety of chemicals, systems, and technologies, to convert solar energy into storable and transportable fuels.
Solar technology engineers aspire to create a solar fuel plant using solar energy and a variety of technologies. (Submitted by MC Harmonious on September 6, 2015)
Why do some months have 30 days and some 31, and February 28?
Why not? This has nothing to do with physics or mathematics; it is mostly due to the Roman calendar, a matter of history.
....it evolved slightly from there.
Can be each day associated with the position of the sun ?
Not sure what you mean.
I mean: does the Sun have the same position (or something close) on the 15th of June as on the 15th of July, the 15th of August, and so on?
mreq do you mean to ask if there is a calendar based on various alignments of the heavenly bodies?
What I want to know is whether there is a connection between the Sun's position and the date. Let's say 1 June 2000, 1 June 2001, 1 June 2002, 1 June 2003, etc.
Are the Sun's coordinates the same on each of those dates?
Close, but not exact. The mean tropical year is 365.24219 days long, which means that after four years the Sun has drifted almost one day in position. This is why we have leap years: we add an extra day every four years to bring the Sun's position and the calendar date back into sync. This, however, overcompensates a bit, so our present calendar, the Gregorian one, omits the leap day in century years that are not evenly divisible by 400. (Thus 1900, which would have been a leap year by the four-year rule, was not, while 2000 was.)
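That leap-day rule is easy to express in code; here is a minimal sketch of it (any mainstream language works the same way):

```python
def is_leap(year):
    """Gregorian rule: every 4th year is a leap year, except
    century years that are not evenly divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 was a leap year (divisible by 400); 1900 and 2100 are not.
print([y for y in (1900, 1996, 2000, 2004, 2100) if is_leap(y)])  # [1996, 2000, 2004]
```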
The previous calendar, the Julian, did not have this slight correction, so when Britain and the American colonies switched to the Gregorian calendar in 1752, it was 11 days out of sync with the Sun. As a result of the switch, Sept 2 was followed by Sept 14 to re-align the date and the Sun.
As you can imagine, this was disconcerting to some. Some people thought that several days of their lives were being taken away, and some landlords wanted to charge a full month's rent for September, while their tenants argued that they should only be charged for 19 days, etc.
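The size of the gap at any given switch-over follows directly from the century-year rule. A small sketch (the formula below is the standard rule of thumb, exact only for dates after 28 February of a century year):

```python
def julian_gregorian_offset(year):
    """Days by which the Julian calendar trails the Gregorian
    in a given year: one extra day for every century year that
    is a Julian leap year but not a Gregorian one."""
    return year // 100 - year // 400 - 2

# 10 days dropped at the 1582 introduction, 11 at Britain's 1752
# switch, 13 at Russia's (1918) and Greece's (1923) switches.
for y in (1582, 1752, 1918, 1923):
    print(y, julian_gregorian_offset(y))
```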
How about 1 June 2000 and 1 June 2005? Is the position of the Sun the same?
Let's take, for example, 10 February 1564. Judging by the position of the Sun, which Gregorian calendar date should coincide with that day?
As Janus said, close but not exact. How exact do you want to get?
Interesting, I had never heard that. I wonder how astronomy software deals with that? I would suspect they ignore such historical issues and just apply the modern calendar backwards.
Looks like they just apply the modern calendar backwards.
Heh, duh, I should have realized how easily I could test that!
[...and Starry Night works the same way.]
I think it would be pretty funny if it had "this date does not exist" or something of the sort.
Agreed. But if any unscrupulous high school students read this thread, they may go start arguing with their teachers about what date certain historical events happened on. Magna Carta? June 15, 1215? Naah.
I bet I can guess who manufactures your telescope. Hahahaha
Well, not that early, as the Gregorian calendar was not introduced until 1582. However, the adoption was not universal. Countries slowly changed over; Russia used the Julian calendar until 1918 and Greece was the last to make the switch in 1923.
Maybe/mabye not. When I bought Starry Night my primary scope was by one manufacturer and now I have a new one with a label that belies the fact that the OTA and mount are repackaged products from still two more manufacturers! So I've got a lot of major labels covered!
Isn't that the point? What does it really mean to say that the Magna Carta was signed on June 15, 1215? According to the people who signed it? According to our new calendar scrolled backwards? And don't even get me started on Christmas. It is bad enough that it isn't known when exactly Jesus was born, but why was December 25th chosen? According to the Wiki it may be because that was the date of the winter solstice in the Roman calendar. But if that's the case, that means calendar changes have moved Christmas so that it is now 4 days later.
The unix cal command
$ cal 9 1752
   September 1752
 S  M Tu  W Th  F  S
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
If I take a date, let's say 15 January 1540: what was the Earth's position in its orbit (relative to the Sun) that day, and when in 2010 is the Earth in the same position?
And another question: where on the orbit is, let's say, February?
Positions are calculated with the earth as the reference.
You haven't tried the program yet, have you?
So there isn't a fixed point ?
P.S. Are this things possible with some software ? Which one ?
mreq, you have another thread open where people suggested software to you! Try it!
Based on the vagueness of the questions you are asking and your inability to properly convey what you are looking for or why, it doesn't appear you really know what you are looking for. So the best thing for you to do is to try the software, see what information it gives you and see if it is of value to you. We can't spoon-feed this to you if you don't even know what you want!
Let's try it this way too:
On Jan 15, 1540 at noon, from the earth, the sun is at:
RA: 20h, 54.17m
DEC: -17deg 29.54m
Was this information helpful to you?
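For anyone who wants to reproduce numbers like these: the Sun's apparent RA/Dec for a date can be approximated with the low-precision formulas from the Astronomical Almanac (good to roughly 0.01 degrees). A sketch, with a helper name of my own choosing; serious work should use real ephemeris software:

```python
import math

def sun_radec(jd):
    """Approximate apparent RA/Dec of the Sun using the
    low-precision Astronomical Almanac formulas.
    jd: Julian date (e.g. 2455212.0 = 2010 Jan 15, 12:00 UT)."""
    n = jd - 2451545.0                                  # days since J2000.0
    L = (280.460 + 0.9856474 * n) % 360                 # mean longitude (deg)
    g = math.radians((357.528 + 0.9856003 * n) % 360)   # mean anomaly
    lam = math.radians(L + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g))
    eps = math.radians(23.439 - 0.0000004 * n)          # obliquity of ecliptic
    ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam),
                                 math.cos(lam))) % 360
    dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
    return ra / 15.0, dec                               # RA in hours, Dec in degrees

ra, dec = sun_radec(2455212.0)  # 2010 Jan 15, noon UT
print(f"RA = {ra:.2f} h, Dec = {dec:.2f} deg")
```

For mid-January the Sun comes out near RA 19.8 h, Dec -21 degrees, in the same neighborhood as the 1540 figures quoted above (which differ because the Julian date of "15 January 1540" falls later in the tropical year).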
Climate change is already affecting people around the world — so adapting is crucial.
In some places, at least, people are finding innovative ways to adapt, according to new research. A new study shows that using nature to adapt to intense storms and drought can be effective for thriving in a changing climate.
In some Indonesian villages on Borneo Island and Java, people cut down trees along the banks of rivers to sell or use for fuel. Without the trees there as a buffer, the soil erodes into the streams, swallowing up the water or turning it murky brown. At the same time, these islands are experiencing more instances of intense rain and drought, making it more difficult to grow food.
Giacomo Fedele, climate change adaptation fellow at Conservation International’s Moore Center for Science, traveled to two villages in Borneo and two villages in Java to learn how different communities responded to flood and drought caused by climate change. In a recent interview, Human Nature spoke with him about his research.
As part of the international Census of Marine Life (CoML), a team of world-renowned scientists will embark on an expedition to explore coral reef biodiversity in the largest fully protected marine area in the world: the Northwestern Hawaiian Islands Marine National Monument. Led by the National Oceanic and Atmospheric Administration (NOAA) Pacific Islands Fisheries Science Center, with funding from NOAA's Coral Reef Conservation Program and the Alfred P. Sloan Foundation, this 23-day research cruise to the Monument's French Frigate Shoals will be the first in a series of surveys by CoML's Census of Coral Reef Ecosystems (CReefs) project. The CoML projects are designed to assess the diversity, distribution, and abundance of ocean life and explain how it changes over time. This CReefs project will provide needed baseline information and foster understanding of coral reef ecosystems globally. The cruise will take place on the NOAA ship Oscar Elton Sette and depart from Honolulu's Snug Harbor on Friday, October 6th.
According to NOAA's Dr. Russell Brainard, chief scientist for the expedition, this pioneering effort is unprecedented in the level of taxonomic expertise. While annual reef assessment and monitoring program surveys are conducted throughout the Northwestern Hawaiian Islands (NWHI), those surveys have been forced to focus on the larger and better understood fish, corals, macroalgae and macroinvertebrates (lobsters, large crabs, sea urchins). This expedition is unique in focusing primarily on the more cryptic small invertebrates (tiny crabs, mollusks, sea slugs, worms and more), algae, and microbes over a range of habitats at French Frigate Shoals. Although some of these smaller organisms may not be as charismatic as monk seals, or colorful aquarium fish (until you look under a microscope), they form the complex tapestry that supports the existence of the larger animals, and changes in their abundance or diversity are often the first indicators of environmental impacts or changes. These groups of organisms are also the least understood, and many new species records for the NWHI, as well as the discovery of new species are likely during this expedition.
Department of Land and Natural Resources, Aquatic Resources Division Administrator Dr. Dan Polhemus, summarized the need for this type of survey in saying "we cannot properly manage what we don't know we have." Don Palawski, Refuge Manager for the Pacific Remote Islands National Wildlife Refuge Complex under the U.S. Fish and Wildlife Service (USFWS) stated that the biodiversity surveys being conducted during this expedition are "one of the priorities for conserving all the monument's natural resources."
The taxonomists, biologists specializing in the classification of these organisms, are donating their time and considerable expertise. "We plan to provide for the State of Hawaii a baseline record of the diversity of a relatively pristine area in order to have some basic working knowledge of what lives in the NWHI chain. There will never be any way to measure impact on the environment without first knowing what is there" said Dr. Joel Martin, Chief of the Division of Invertebrate Studies and Curator of Crustacea, Natural History Museum of Los Angeles County.
Coral reefs are highly threatened repositories of extraordinary biodiversity and therefore have been called "the rainforests of the sea", but little is known about the ocean's diversity as compared to its terrestrial counterpart.
According to Dr. Nancy Knowlton of Scripps Institution of Oceanography at UC San Diego, CReefs lead principal investigator, "We don't even know to the nearest order of magnitude the number of species living in the coral reefs around the globe. Our best guess is somewhere between 1 and 9 million species based on comparisons with the diversity found in rainforests and a partial count of organisms living in a tropical aquarium."
Information from this effort will be posted on the CReefs website at www.creefs.org, and the cruise can also be followed at www.hawaiianatolls.org and http://sanctuaries.noaa.gov/. Furthermore, the results are projected to join coral reef biological data from the NOAA PIFSC Coral Reef Ecosystem Division and National Centers for Coastal and Ocean Science, which will be placed in the Pacific regional NBII Pacific Basin Information Node and global Ocean Biogeographic Information System databases by 2008.
Astronomers have discovered a moon orbiting 2007 OR10, one of the Kuiper Belt’s larger objects.
With this discovery, most of the known dwarf planets in the Kuiper Belt larger than 600 miles across have companions. These bodies provide insight into how moons formed in the young solar system. “The discovery of satellites around all of the known large dwarf planets — except for Sedna — means that at the time these bodies formed billions of years ago, collisions must have been more frequent, and that’s a constraint on the formation models,” said Csaba Kiss of the Konkoly Observatory in Budapest, Hungary. He is the lead author of the science paper announcing the moon’s discovery. “If there were frequent collisions, then it was quite easy to form these satellites.”
The technique, devised by scientists at the National Center for Atmospheric Research (NCAR) and the University of Maryland, combines cutting-edge simulations portraying the interaction of weather and fire behavior with newly available satellite observations of active wildfires. Updated with new observations every 12 hours, the computer model forecasts critical details such as the extent of the blaze and changes in behavior.
Wildfires can be seen in much different detail, depending which satellite instrument is used to observe them. The image at left, produced from data generated by the MODIS instrument aboard NASA’s Aqua satellite, uses 1-kilometer pixels to approximate a fire burning in Brazil from March 26 to 30, 2013. The image at right, produced with data from the new VIIRS instrument, shows the same fire in far greater detail with 375-meter pixels. Credit: Wilfrid Schroeder, University of Maryland
The breakthrough is described in a study appearing today in an online issue of Geophysical Research Letters, after first being posted online last month.
“With this technique, we believe it’s possible to continually issue good forecasts throughout a fire’s lifetime, even if it burns for weeks or months,” said Janice Coen of NCAR in Boulder, Colo., the lead author and model developer. “This model, which combines interactive weather prediction and wildfire behavior, could greatly improve forecasting—particularly for large, intense wildfire events where the current prediction tools are weakest.”
Firefighters currently use tools that can estimate the speed of the leading edge of a fire but are too simple to capture critical effects caused by the interaction of fire and weather.
The researchers successfully tested the new technique by using it retrospectively on the 2012 Little Bear Fire in New Mexico, which burned for almost three weeks and destroyed more buildings than any other wildfire in the state’s history.
Over the last decade, Coen has developed a tool, known as the Coupled Atmosphere-Wildland Fire Environment (CAWFE) computer model, that connects how weather drives fires and, in turn, how fires create their own weather. Using CAWFE, she successfully simulated the details of how large fires grew.
But without the most updated data about a fire’s current state, CAWFE could not reliably produce a longer-term prediction of an ongoing fire. This is because the accuracy of all fine-scale weather simulations decline significantly after a day or two, affecting the simulation of the blaze. An accurate forecast would also have to include updates on the effects of firefighting and of such processes as spotting, in which embers from a fire are lofted in the fire plume and dropped ahead of a fire, igniting new flames.
Until now, it was not possible to update the model. Satellite instruments offered only coarse observations of fires, providing images in which each pixel represented a square kilometer (an area roughly 0.6 miles by 0.6 miles). These images might show several places burning, but could not distinguish the boundaries between burning and non-burning areas, except for the largest wildfires.
To solve the problem, Coen’s co-author, Wilfrid Schroeder of the University of Maryland, in College Park, has produced higher-resolution fire detection data from a new satellite instrument, the Visible Infrared Imaging Radiometer Suite (VIIRS), which is jointly operated by NASA and the National Oceanic and Atmospheric Administration (NOAA). This new tool provides wall-to-wall coverage of the entire globe at intervals of 12 hours or less, with pixels about 375 meters across (1,200 feet). The higher resolution enabled the two researchers to outline the active fire perimeter in much greater detail.
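The jump from 1-km to 375-m pixels is larger than it sounds. Simple arithmetic (nominal nadir pixel sizes only; actual footprints grow toward the swath edge):

```python
modis_px_m = 1000   # nominal MODIS fire-product pixel size (m)
viirs_px_m = 375    # nominal VIIRS I-band pixel size (m)

# Roughly seven VIIRS pixels fit inside one MODIS pixel by area,
# which is what lets VIIRS trace an active fire perimeter.
ratio = (modis_px_m / viirs_px_m) ** 2
print(f"~{ratio:.1f} VIIRS pixels per MODIS pixel")
```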
Coen and Schroeder then fed the VIIRS fire observations into the CAWFE model. By restarting the model every 12 hours with the latest observations of the fire extent — a process known as cycling — they could accurately predict the course of the Little Bear fire in 12- to 24-hour increments during five days of the historic blaze. By continuing this way, it would be possible to simulate even a very long-lived fire’s entire lifetime, from ignition until extinction.
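The value of cycling can be illustrated with a toy model. This is not the actual CAWFE/VIIRS pipeline; the exponential error growth and the numbers below are assumptions chosen purely to show why periodic re-initialization keeps a long-running forecast useful:

```python
OBS_ERROR_KM = 0.375      # assumed error floor (one VIIRS pixel)
GROWTH_PER_HOUR = 1.08    # assumed hourly error-growth factor

def forecast_error(hours_since_last_update):
    """Toy model: forecast error grows exponentially with lead time
    and is reset to the observation error at each assimilation."""
    return OBS_ERROR_KM * GROWTH_PER_HOUR ** hours_since_last_update

# A free-running 5-day forecast vs. one re-cycled every 12 hours,
# whose error never grows beyond the 12-hour mark:
free_run = forecast_error(120)
cycled = forecast_error(12)
print(f"free-running 5-day error: {free_run:9.1f} km")
print(f"12-h cycled error:        {cycled:9.1f} km")
```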
“The transformative event has been the arrival of this new satellite data,” said Schroeder, a professor of geographical sciences who is also a visiting scientist with NOAA. “The enhanced capability of the VIIRS data favors detection of newly ignited fires before they erupt into major conflagrations. The satellite data has tremendous potential to supplement fire management and decision support systems, sharpening the local, regional, and continental monitoring of wildfires.”

Keeping firefighters safe
In addition, such forecasts could enable decision makers to look at several newly ignited fires and determine which pose the greatest threat.
“Lives and homes are at stake, depending on some of these decisions, and the interaction of fuels, terrain, and changing weather is so complicated that even seasoned managers can’t always anticipate rapidly changing conditions,” Coen said. “Many people have resigned themselves to believing that wildfires are unpredictable. We’re showing that’s not true.”
The research was funded by NASA, the Federal Emergency Management Agency, and the National Science Foundation (NSF), which sponsors NCAR. The University Corporation for Atmospheric Research manages NCAR. Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of NSF.
Journalists and public information officers (PIOs) of educational and scientific institutions who have registered with AGU can download a PDF copy of this early view article by clicking on this link: http://onlinelibrary.wiley.com/doi/10.1002/2013GL057868/abstract Or, you may order a copy of the final paper by emailing your request to Thomas Sumner at firstname.lastname@example.org. Please provide your name, the name of your publication, and your phone number. Neither the paper nor this press release is under embargo.
Also about fire research: This week’s Eos features an article about new research techniques for investigating the linkages between people, climate and fire. The article is accessible for free to anyone interested in it. Eos, the newspaper of the Earth and space sciences, is published by AGU.
“Use of spatially refined satellite remote sensing fire detection data to initialize and evaluate coupled weather-wildfire growth model simulations”
Authors:Janice L. Coen
Peter Weiss | American Geophysical Union
New research calculates capacity of North American forests to sequester carbon
16.07.2018 | University of California - Santa Cruz
Scientists discover Earth's youngest banded iron formation in western China
12.07.2018 | University of Alberta
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
The RASC Prince George Centre Observatory has implemented a meteor detection project using information obtained from the Sky Scan Science Awareness Project and a program called Radio-SkyPipe. Two FM automobile radios tuned to 98.7 MHz, a Yagi antenna, and a Quadrifilar Helicoidal antenna enable us to collect data and present it here in graph form.
Starts 07/18/2018 00:00:52 UT
Ends 07/18/2018 04:45:00 UT
Created using Radio-SkyPipe software from Radio-Sky Publishing
The RASC Prince George Centre Observatory has associated itself with the International meteor detection organization called Radio Meteor Observatories On Line. Using the programs Spectrum Lab and Colorgramme RMOB lab, along with an ICOM IC-PCR1000 receiver tuned to Channel 3 video carrier frequency 61.240 MHz, we are able to collect data and present it here. Meteor activity on the left is in graph form for a 24 hour period. Data in the right hand box accumulates during the month and is colour based with blue signifying zero activity. Colours approaching red signify increased activity.
The following image is the latest real time capture from the AllSky camera installed at the RASC Prince George Centre Observatory. This camera has a maximum 180 degree field of view. North is up, West is to the right.
A couple of ecologists turned to citizen science in hopes of learning more about ticks and tick-borne diseases. Amazingly, the public showed up and provided valuable data from thousands of ticks.
Kill the rats in order to save the reefs? Rats have been decimating seabird populations for a long time and new research shows that this also has devastating consequences on the world's coral reef populations.
The first giants have been unearthed. Scientists discover a new super-sized dinosaur species that walked the Earth millions of years earlier than other massive dinos.
Spiders can fly — with a little help from the planet's electric fields, of course. A new study reveals that the Earth's electric circuit triggers ballooning behavior from these eight-legged creatures.
Dogs and humans go way back. The first dogs in the Americas were found to originate from Siberia, traveling to the Americas with humans until Europeans wiped them out. Now, only traces of their genes remain in a contagious tumor.
Steer clear of otters because no matter how cute they may appear, things could turn violent in an instant. In Maine, police officers had to fatally shoot a river otter who bit an unsuspecting woman by the beach.
An elderly man emerged the victor in a potentially deadly tussle with a rabid fox in Brunswick, Maine last Monday, June 25. The 95-year-old used a wooden plank to beat the animal to death.
A newly discovered fossilized sea animal who lived millions of years ago just received an unusual honor by being named after President Barack Obama. Another was named after acclaimed naturalist David Attenborough.
Bobcats tend to be shy and avoid humans, but these animals can be dangerous as evidenced by a recent attack of a rabid animal on a woman in Georgia. Fortunately, DeDe Phillips was able to strangle the bobcat to death.
Spiders are already terrifying enough for some people — and even more so when they fly. In a new study, scientists observe their amazing ability to spin silk parachutes out of thin air to help them glide for hundreds of miles.
The skeleton of a rare and new dinosaur species, estimated to be 150 million years old, was just sold to an anonymous art collector. The skeleton fetched for a hefty price of $2 million — but scientists aren't happy about it.
Paleontologists trace back the origins of all squamates, discovering that the line goes back even further than initially believed. The mother of lizards, it turns out, lived way back in the Permian period.
Go on, down a glass of milk. Cockroach milk, that is. Research says that the milk from the creepy crawlies are packed with nutritional benefits that should put it on top of every health buff's grocery list.
Mussels in the Puget Sound were found with traces of opioids in their system, indicating a high number of people in the area taking the drug. Other pharmaceuticals were also found including antibiotics, antidepressants, and other medication.
Most birds were killed off following the fifth mass extinction event that destroyed most of life on Earth 66 million years ago. Fortunately, ground-dwellers survived and eventually became the ancestors of all modern birds today.
September 2017 saw a spate of solar activity, with the Sun emitting 27 M-class and four X-class flares and releasing several powerful coronal mass ejections, or CMEs, between Sept. 6-10. Solar flares are powerful bursts of radiation, while coronal mass ejections are massive clouds of solar material and magnetic fields that erupt from the Sun at incredible speeds.
The activity originated from one fast-growing active region -- an area of intense and complex magnetic fields -- as it travelled across the Sun's Earth-facing side in concert with the star's normal rotation. As always, NASA and its partners had many instruments observing the Sun from both Earth and space, enabling scientists to study these events from multiple perspectives.
With multiple views of solar activity, scientists can better track the evolution and propagation of solar eruptions, with the goal of improving our understanding of space weather. Harmful radiation from a flare cannot pass through Earth's atmosphere to physically affect humans on the ground; however, when intense enough, flares can disturb the atmosphere in the layer where GPS and communications signals travel. On the other hand, depending on the direction they're traveling in, CMEs can spark powerful geomagnetic storms in Earth's magnetic field.
To better understand the fundamental processes that drive these events, and ultimately improve space weather forecasts, many observatories watch the Sun around the clock in dozens of different wavelengths of light. Each can reveal unique structures and dynamics in the Sun's surface and lower atmosphere, giving researchers an integrated picture of the conditions driving space weather.
Scientists also have their eyes on the Sun's influence on Earth and even other planets. Effects from September's solar activity were observed as Martian aurora and across the globe on Earth, in the form of events known as ground-level enhancements -- showers of neutrons detected on the ground, produced when energetic particles accelerated by a solar eruption stream along Earth's magnetic field lines and flood the atmosphere.
The imagery below shows the wide swath of views available to researchers as they use these recent space weather events to learn more and more about the star we live with.
NOAA's Geostationary Operational Environmental Satellite-16, or GOES-16, watches the Sun's upper atmosphere -- called the corona -- at six different wavelengths, allowing it to observe a wide range of solar phenomena. GOES-16 caught this footage of an X9.3 flare on Sept. 6, 2017. This was the most intense flare recorded during the current 11-year solar cycle. X-class denotes the most intense flares, while the number provides more information about its strength. An X2 is twice as intense as an X1, an X3 is three times as intense, etc. GOES also detected solar energetic particles associated with this activity. Credit: NOAA/GOES
NASA's Solar Dynamics Observatory watches the corona at 10 different wavelengths on a 12-second cadence, enabling scientists to track highly dynamic events on the Sun such as these X2.2 and X9.3 solar flares. These images were captured on Sept. 6, 2017, in a wavelength of extreme ultraviolet light that shows solar material heated to over one million degrees Fahrenheit. The X9.3 flare was the most intense flare recorded during the current solar cycle. Credit: NASA/GSFC/SDO
JAXA/NASA's Hinode caught this video of an X8.2 flare on Sept. 10, 2017, the second largest flare of this solar cycle, with its X-ray Telescope. The instrument captures X-ray images of the corona to help scientists link changes in the Sun's magnetic field to explosive solar events like this flare. The flare originated from an extremely active region on the Sun's surface -- the same region from which the cycle's largest flare came. Credit: JAXA/NASA/SAO/MSU/Joy Ng
Key instruments aboard NASA's Solar and Terrestrial Relations Observatory, or STEREO, include a pair of coronagraphs -- instruments that use a metal disk called an occulting disk to study the corona. The occulting disk blocks the Sun's bright light, making it possible to discern the detailed features of the Sun's outer atmosphere and track coronal mass ejections as they erupt from the Sun.
On Sept. 9, 2017, STEREO watched a CME erupt from the Sun. The next day, STEREO observed an even bigger CME, which was associated with the X8.2 flare of the same day. The Sept. 10 CME traveled away from the Sun at calculated speeds as high as 7 million mph, and was one of the fastest CMEs ever recorded. The CME was not Earth-directed. It side-swiped Earth's magnetic field, and therefore did not cause significant geomagnetic activity. Mercury is in view as the bright white dot moving leftwards in the frame. Credit: NASA/GSFC/STEREO/Joy Ng
Like STEREO, ESA/NASA's Solar and Heliospheric Observatory, or SOHO, uses a coronagraph to track solar storms. SOHO also observed the CMEs that occurred during Sept. 9-10, 2017; multiple views provide more information for space weather models. As the CME expands beyond SOHO's field of view, a flurry of what looks like snow floods the frame. These are high-energy particles flung out ahead of the CME at near-light speeds that struck SOHO's imager. Credit: ESA/NASA/SOHO/Joy Ng
NASA's Interface Region Imaging Spectrometer, or IRIS, peers into a lower level of the Sun's atmosphere -- called the interface region -- to determine how this area drives constant changes in the Sun's outer atmosphere. The interface region feeds solar material into the corona and solar wind: In this video, captured on Sept. 10, 2017, jets of solar material appear like tadpoles swimming down toward the Sun's surface. These structures -- called supra-arcade downflows -- are sometimes observed in the corona during solar flares, and this particular set was associated with the X8.2 flare of the same day. Credit: NASA/GSFC/LMSAL/Joy Ng
NASA's Solar Radiation and Climate Experiment, or SORCE, collected this data on total solar irradiance, the total amount of the Sun's radiant energy, throughout Sept. 2017. While the Sun produced high levels of extreme ultraviolet light, SORCE actually detected a dip in total irradiance during the month's intense solar activity. A possible explanation for this observation is that over the active regions -- where solar flares originate -- the darkening effect of sunspots is greater than the brightening effect of the flare's extreme ultraviolet emissions. As a result, the total solar irradiance suddenly dropped during the flare events. Scientists gather long-term solar irradiance data in order to understand not only our dynamic star, but also its relationship to Earth's environment and climate. NASA is ready to launch the Total and Spectral Solar Irradiance Sensor-1, or TSIS-1, this December to continue making total solar irradiance measurements. Credit: NASA/GSFC/Univ. of Colorado/LASP
The intense solar activity also sparked global aurora on Mars more than 25 times brighter than any previously seen by NASA's Mars Atmosphere and Volatile Evolution, or MAVEN, mission. MAVEN studies the Martian atmosphere's interaction with the solar wind, the constant flow of charged particles from the Sun. These images from MAVEN's Imaging Ultraviolet Spectrograph show the appearance of bright aurora on Mars during the September solar storm. The purple-white colors show the intensity of ultraviolet light on Mars' night side before (left) and during (right) the event. Credit: NASA/GSFC/Univ. of Colorado/LASP
2VQA: Protein-folding location can regulate Mn- versus Cu- or Zn-binding. Crystal structure of MncA
Nature (2008) 455 p.1138-1142
Metals are needed by at least one-quarter of all proteins. Although metallochaperones insert the correct metal into some proteins, they have not been found for the vast majority, and the view is that most metalloproteins acquire their metals directly from cellular pools. However, some metals form more stable complexes with proteins than do others. For instance, as described in the Irving-Williams series, Cu(2+) and Zn(2+) typically form more stable complexes than Mn(2+). Thus it is unclear what cellular mechanisms manage metal acquisition by most nascent proteins. To investigate this question, we identified the most abundant Cu(2+)-protein, CucA (Cu(2+)-cupin A), and the most abundant Mn(2+)-protein, MncA (Mn(2+)-cupin A), in the periplasm of the cyanobacterium Synechocystis PCC 6803. Each of these newly identified proteins binds its respective metal via identical ligands within a cupin fold. Consistent with the Irving-Williams series, MncA only binds Mn(2+) after folding in solutions containing at least a 10(4) times molar excess of Mn(2+) over Cu(2+) or Zn(2+). However once MncA has bound Mn(2+), the metal does not exchange with Cu(2+). MncA and CucA have signal peptides for different export pathways into the periplasm, Tat and Sec respectively. Export by the Tat pathway allows MncA to fold in the cytoplasm, which contains only tightly bound copper or Zn(2+) (refs 10-12) but micromolar Mn(2+) (ref. 13). In contrast, CucA folds in the periplasm to acquire Cu(2+). These results reveal a mechanism whereby the compartment in which a protein folds overrides its binding preference to control its metal content. They explain why the cytoplasm must contain only tightly bound and buffered copper and Zn(2+).
Dogs see us move in SLOW MOTION: Animal's brain processes visual information faster than humans, study finds
- Scientists from Trinity College, Dublin, studied animals including dogs to find that their size and metabolic rate dictates how they experience time
- They found that time perception depends on how fast an animal’s nervous system processes information in order to react to its environment
- Smaller animals that need to avoid predators quickly tend to see events unfolding more slowly, but this is not an absolute rule
- Dogs take in visual information 25 per cent faster than humans, which makes time move more slowly for them
Animals come in all shapes and sizes and now scientists have demonstrated how their form affects their perception of moving objects.
By studying a variety of animals, researchers have discovered that a creature’s body mass and metabolic rate dictates how it perceives the speed of a moving object - or person.
They found that a dog and housefly see movements more slowly than a human, while a rat and a cat see movement more quickly.
THE PERCEPTION OF TIME
To examine how animals experience the passing of time, the scientists measured how many times per second (Hz) each animal could see a light flash.
The higher the number, the slower time seems to move for the animals.
- Housefly 250Hz
- Pigeon 100Hz
- Rhesus macaque 85Hz
- Dog 80Hz
- Human 60Hz
- Cat 55Hz
- Brown Rat 39 Hz
- Gecko 20Hz
- Sea Turtle 15Hz
The scientists, from the School of Natural Sciences, Trinity College Dublin, Ireland and the Universities of Edinburgh and St Andrews, said that speed perception depends on how fast an animal’s nervous system processes information in order to react to its environment.
To investigate, they showed 34 types of vertebrates, including fish, birds, lizards and mammals, a flashing light.
If the light flashes fast enough, both humans and animals see it as a constant beam, Scientific American reported.
By measuring an animal’s brain activity, they examined the highest frequency that it saw the light flashing.
To animals that can see the light flashing at higher speeds, it is as if movements and situations unfold more slowly, according to the study, published in the journal Animal Behaviour.
The team think that this is advantageous to animals that need to avoid obstacles or predators quickly.
For example, chipmunks and pigeons can see a light flash 100 times a second, while cats see it flash 55 times a second.
Animals that could see the light flash at high speeds were found to have faster metabolisms, confirming the scientists' hypothesis that species able to detect flashes at higher frequencies tend to be smaller.
A dog can take in visual information – and see a light flashing – 25 per cent faster than a human, and while this makes it seem that time moves more slowly for canines, it is not enough to mean that one dog year equates to seven human years, they added.
The study demonstrates that a mouse sees the world and experiences time in a very different way to an elephant, for example.
The connection between the perception of time and a creature’s body size and metabolism suggests that different nervous systems have evolved based upon a species’ environment and how they survive in the wild.
Aldehydes, Ketones, and their Derivatives
Acyclic Aldehydes

Rule C-303

303.1 - (a) The name of an acyclic polyaldehyde in which more than two aldehyde groups are attached to an unbranched chain is formed by adding "-tricarbaldehyde", "-tetracarbaldehyde", etc., to the name of the longest chain carrying the maximum number of aldehyde groups. The name and numbering of the main chain do not include the aldehyde groups, and numbering follows the general principles for unsaturation and substituents. (b) Alternatively, the name is formed by adding the prefix "formyl-" to the name of the dial incorporating the principal chain.
303.2 - An aldehyde group in an acyclic compound is named by the prefix "formyl-" when a group having priority for citation as principal group is also present. (See, however, also Rules C-415 and C-416.)
Examples to Rule C-303.1
303.3 - For an acyclic polyaldehyde, in which the aldehyde groups -CHO are attached to more than one branch of a branched chain, the name of the longest chain carrying the greatest number of aldehyde groups is used together with a suffix "-dial" (see Rule C-302.1), "-tricarbaldehyde", etc. (see Rule C-303.1), and other chains carrying aldehyde groups are named by use of "formylalkyl-" prefixes.
Example to Rule C-303.2
See Recommendations'93 R-5.6.1
Examples to Rule C-303.3
Aldehydes Rule C-304, Rule C-305
Ketones Rule C-311, Rule C-312, Rule C-313, Rule C-314, Rule C-315, Rule C-316, Rule C-317, Rule C-318
Ketenes Rule C-321
Acetals and Acylals Rule C-331, Rule C-332, Rule C-333
This HTML reproduction of Sections A, B and C of IUPAC "Blue Book" is as close as possible to the published version [see Nomenclature of Organic Chemistry, Sections A, B, C, D, E, F, and H, Pergamon Press, Oxford, 1979. Copyright 1979 IUPAC.] If you need to cite these rules please quote this reference as their source.
Published with permission of the IUPAC by Advanced Chemistry Development, Inc., www.acdlabs.com, +1(416)368-3435 tel, +1(416)368-5596 fax. For comments or suggestions please contact firstname.lastname@example.org
Latest News on Asteroid
Showing 0 - 10 of 65 results
Asteroid - Total results - 65
Jul 13, 2018
Astronomers discover rare double asteroid revolving around each other near Earth
In June, observations by GSSR showed the first signs that the asteroid could be a binary system.
Jul 08, 2018
NASA to fund project aimed at turning asteroids into giant, autonomous spacecraft
The project could one day enable space colonisation by helping make off-Earth manufacturing efficient.
Jun 28, 2018
Astronomers classify the mysterious interstellar object ‘Oumuamua as a comet
Oumuamua was first detected last October by the University of Hawaii’s Pan-STARRS1 telescope.
May 28, 2018
Asteroid that led to dinosaur extinction increased the Earth's temperature for 100,000 years: Study
The Chicxulub asteroid — which caused the extinction of dinosaurs — drove a long-lasting era of global warming when it smashed into Earth 65 million years ago.
May 22, 2018
This permanent immigrant asteroid could reveal important information about the evolution of our solar system
To find the origin of the asteroid, the team ran simulations to trace the location of 2015 BZ509 right back to the birth of our Solar System.
Apr 17, 2018
Giant football-field-sized asteroid avoids NASA detection as it flies past Earth
NASA scientists noticed the massive asteroid at an observatory in Arizona just 21 hours prior to the flyby.
Mar 21, 2018
An asteroid could collide with Earth in the year 2135 but NASA has a plan
The report says that NASA's Planetary Defense Coordination Office is responsible for detecting incoming asteroids and comets close to Earth's orbit.
Jan 24, 2018
Near-Earth asteroid flyby on 4 February will not pose any threat to Earth, says NASA
It was discovered in 2002 by the former NASA-sponsored Near Earth Asteroid Tracking project at the Maui Space Surveillance Site in Hawaii.
Nov 04, 2017
Rare metal from asteroid that wiped out the dinosaurs could be used for the effective treatment of cancer
Scientists have demonstrated that iridium - a rare metal delivered to Earth by the asteroid - can be used to kill cancer without harming healthy cells.
Oct 27, 2017
The solar system is being visited by a small asteroid or comet from elsewhere in the Galaxy
It appears that the object traveled to the solar system from the direction of the constellation of Lyra
Plastic pollution is gaining global recognition as a threat to the resilience and productivity of ocean ecosystems. However, we are only just beginning to understand the scope and impacts of microplastic particles (less than 5 mm) on coastal and ocean resources, and the San Francisco Bay Area is no exception. A preliminary study of nine water sites in San Francisco Bay, published in 2016, showed greater levels of microplastics than the Great Lakes or Chesapeake Bay. Based on these findings, the San Francisco Estuary Institute (SFEI) organized a workshop with stakeholders, scientific experts, and regulatory staff to identify major data gaps and management questions. SFEI developed a Microplastic Strategy to outline the essential scientific studies needed to inform management actions.
With a generous grant of $880,250 from the Gordon and Betty Moore Foundation, $75,000 from the Bay's Regional Monitoring Program, and support from Patagonia, City of Palo Alto, East Bay Municipal Utility District, and San Francisco Baykeeper, scientists from SFEI and The 5 Gyres Institute have embarked on a two-year study to address the highest priority elements identified in the Strategy (3 minute video).
This project includes multiple scientific components to develop improved knowledge of microplastic in the Bay Area environment and prioritize practical steps to reduce pollution:
- Baseline microplastic monitoring in San Francisco Bay surface water, sediment, and fish
- Monitoring in National Marine Sanctuary surface waters outside of the Golden Gate
- Characterization of microplastics in treated wastewater and stormwater flowing into the Bay
- Rigorous method development and standardization
- Development of modeling tools to link Bay contamination to that of adjacent Sanctuaries
- Data-driven policy options for the Bay Area developed with leading national and regional experts
- Sharing findings with regional stakeholders and the public
The scientific information, tools, and policy recommendations developed via the San Francisco Bay microplastic project are intended to catalyze similar efforts to understand and reduce plastic pollution around the globe.
Track the latest developments on Instagram and Twitter: #SFBayMicroplastics
Related Projects, News, and Events:
The RMP has conducted initial studies of microplastic pollution in San Francisco Bay. Findings from a 2015 screening-level RMP study of microplastic pollution in our Bay show widespread contamination at levels greater than other U.S. water bodies with high levels of urban development, the Great Lakes and Chesapeake Bay. Wildlife consume microplastic particles; ingestion can lead to physical harm, and can expose aquatic organisms to pollutants like PCBs that the plastics have absorbed from the surrounding environment.
The short (3-min) video summarizes the goals of the SF Bay Microplastics Project, which aims to build a better understanding of the distribution of microplastic in San Francisco Bay and adjacent National Marine Sanctuaries, the pathways by which these contaminants enter the Bay, and possible means of controlling their release. 5 Gyres and San Francisco Estuary Institute are collaboratively carrying out the project.
SFEI Science at International Marine Debris Conference (March 12-16) (News)
SFEI science will feature prominently at the Sixth International Marine Debris Conference in San Diego next week:
Hunting for Plastic in California’s Protected Ocean Waters (News)
Rebecca Sutton, Meg Sedlak, and Diana Lin of SFEI, in partnership with Carolynn Box of 5 Gyres, conducted ocean water sampling associated with an ambitious project. The project is focused on determining the characteristics and fate of microplastics in the Bay and adjacent ocean waters. KQED reporter Lindsey Hoshaw published a story covering the team's activities along the California coast. After determining that the Bay has greater than expected microplastic pollution, the science team, as reported by Hoshaw's story, is conducting further ground-breaking research.
Local News: Scientists launch major study of microplastics pollution in San Francisco Bay (News)
SFEI scientists process microplastic samples collected from San Francisco Bay.
SFEI and The 5 Gyres Institute have launched an ambitious two-year research project to monitor San Francisco Bay for pollution in the form of tiny particles of plastic pollution, reports ABC7 News. These microplastic particles are eaten by local fish, according to previous studies, which can expose them to harmful contaminants.
A two-year investigation on microplastic and nanoplastic pollution in San Francisco Bay and the surrounding ocean will launch this month, led by two research centers, the San Francisco Estuary Institute and the 5 Gyres Institute.
What’s the value of a null pointer?
No doubt you’ve been involved in the (always heated) discussions about which is the correct one (By the way, if you said NUL you need to take yourself to one side and give yourself a stern talking to).
The arguments tend to go something like this:
- 0 is the only ‘well-known’ value a pointer can be set to that can be checked.
- NULL is more explicit than just writing zero (even though it is just a macro definition wrapper)
The problem with using 0 or NULL is that they are, in fact, integers and that can lead to unexpected behaviours when function overloading occurs.
Based on what we’ve just discussed it should be pretty straightforward to see that the int overload will be called. (This rather weakens the argument that NULL is more explicit – explicitly confusing in this case!)
It gets worse: Implementations are free to define NULL as any integer type they like. In a 32-bit system it might seem reasonable to set NULL to the same size as a pointer:
Sadly, this just adds confusion to our code:
In case you’re a C programmer who’s looked at this (and is feeling pretty smug at the moment):
Unfortunately this won’t compile, because a void* cannot be implicitly converted to an int, long or int* (or any other type) in C++.
In C++11 the answer to the question of the value of a pointer is much simpler: it is always nullptr.
In our code, using nullptr instead of NULL gives the results we expect
nullptr is a keyword that represents any 'empty' pointer (and also any pointer-like objects; for example smart pointers). A nullptr is not an integer type, or implicitly convertible to one (it's actually of type nullptr_t). However, it is implicitly convertible to bool; and to maintain backward compatibility the following code will work as expected:
Can’t wait? Download the full set of articles as a PDF, here.
To learn more about Feabhas’ Modern C++ training courses, click here.
Another Example: Successive Polarization Filters for Beams of Spin s = 1/2 Particles
So far, our first example of a unitary transformation from one basis to another involved a finite-dimensional unitary submatrix. Let us consider one more example of this type, an even simpler example involving spin s = 1/2 particles and, hence, a 2 × 2-dimensional transformation. Suppose we have a beam of spin s = 1/2 particles. They can be prepared so that all are in a state of definite spin orientation, say with m_s = +1/2 or with m_s = −1/2, along some specific z-direction in 3-D space, by passing the beam through a polarization filter. The historically first such filter is that employed by Stern and Gerlach, involving a set of three magnets with nonuniform magnetic fields placed in succession along the beam line, so that a set of baffles can eliminate the particles with one of the two spin orientations. Other types of sophisticated polarization filters exist. (For a reference to modern polarization filters, see, e.g., Polarized Beams and Polarized Gas Targets, Hans Paetz gen. Schieck and Lutz Sydow, eds., World Scientific, 1996.) We will assume the filter is perfect and prepares particles in a pure state of very definite m_s along a specific z-direction. Suppose the first such filter is followed by a second filter, identical to the first, but now with its new z′ axis oriented along some new direction, given by polar and azimuth angles θ and φ relative to the original x, y, z axes, and set for some definite m′_s along the new direction. What fraction of the s = 1/2 particles will pass through the second filter?
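The standard answer, sketched here for reference (this derivation is not part of the original excerpt): the transmitted fraction is the squared modulus of the spin-1/2 rotation (Wigner d) matrix element connecting the two filter settings, and it depends only on the relative polar angle θ:

```latex
% Spin-1/2 rotation matrix (Wigner d-matrix) for relative angle theta:
d^{1/2}(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -\sin(\theta/2) \\
\sin(\theta/2) & \phantom{-}\cos(\theta/2)
\end{pmatrix},
\qquad
P(m'_s \mid m_s) = \bigl|\, d^{1/2}_{m'_s\, m_s}(\theta) \,\bigr|^{2}.
```

So a beam prepared with m_s = +1/2 passes a second filter set for m′_s = +1/2 with probability cos²(θ/2), and one set for m′_s = −1/2 with probability sin²(θ/2); the azimuth φ contributes only a phase and drops out of the probabilities.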
Keywords: Pure State · World Scientific · Unitary Transformation · Azimuth Angle · Beam Line
Fasten your seat belts, turbulence ahead - lessons from Titan
28 August 2007
Ever spilled your drink on an airline due to turbulence? Researchers on both sides of the Atlantic are finding new ways to understand the phenomenon - both on Earth and on Titan.
Turbulence plays an important role in Earth’s weather system, and can be more than an inconvenience - hundreds of injuries have occurred on commercial flights due to turbulence. It is studied both in Earth's atmosphere and in that of Saturn's moon, Titan, aided by data from ESA’s Huygens probe. The study of one is helping the other.
Giles Harrison, atmospheric physicist at the University of Reading in the UK, devised an inexpensive way to measure the effects of turbulence using weather balloons. The instrument package contains a magnetic field sensor which measures fluctuations in Earth’s magnetic field due to turbulence. As Earth's magnetic field is very stable, the measurements of magnetic changes taken with the weather balloon showed the effects of turbulence on the sensor, since the balloon itself was moving very violently.
All bodies, planets and moons, are subject to the same principles of physics. So by working together, researchers looking at Earth and those looking at our planetary neighbours can really test their models of the processes taking place and gain new insights into both.
Planetary scientist Ralph Lorenz, at the Johns Hopkins University Applied Physics Laboratory in the USA, found Harrison's results key to making sense of data from Huygens, which descended by parachute through Titan's atmosphere in January 2005.
The Surface Science Package (SSP) on board Huygens included a set of tilt sensors which measured motions of the probe during its descent. These tilt sensors act much like a drink in a glass, using a small slug of liquid to measure the tilt angle.
As the probe plummeted under the parachute through Titan’s atmosphere, there was a lot of buffeting, even though the air itself was fairly still. Knowing the signature of cloud-induced turbulence in Harrison's balloon data from Earth inspired Lorenz to look for a similar effect in the Huygens data using the tilt sensor.
“Huygens’ tilt history was just this long, squiggly, complex mess, but seeing the fingerprint of cloud turbulence in Harrison's work showed me what to look for,” said Lorenz.
Armed with that information, Lorenz found that a 20-minute period of Huygens' 2.5-hour descent, around an altitude of 20 km, was affected by this kind of in-cloud turbulence. Having experimented with instrumentation on small models, even frisbees, to understand the dynamics of aerospace vehicles like the probe, Lorenz was familiar with the sensors used by Harrison.
Lorenz’s analysis helped identify a turbulent cloud layer in Titan’s atmosphere - a significant result for the investigation of Titan’s meteorology. In the process, he also found a way to improve Harrison's magnetic sensor arrangement on the weather balloon, simply by changing its orientation.
Mark Leese, Project Manager for the SSP on Huygens at The Open University said “We knew Huygens had a bumpy ride down to Titan’s surface. Now we can separate out twenty minutes of air turbulence – probably due to a cloud layer - from other effects such as cross winds or air buffeting due to the irregular shape of the probe.”
Notes for editors:
Lorenz's analysis, ‘Descent motions of the Huygens probe as measured by the Surface Science Package (SSP): turbulent evidence for a cloud layer’, by R. Lorenz, J. Zarnecki, M. Towner, M. Leese, A. Ball, B. Hathi, A. Hagermann and N. Ghafoor, appears in the online version of the Planetary and Space Science journal. It is expected to appear in print in November this year.
The original work by Harrison and Hogan was published last year in the Journal of Atmospheric and Oceanic Technology, in a paper titled ‘In Situ Atmospheric Turbulence Measurement Using the Terrestrial Magnetic Field— A Compass for a Radiosonde’ by R. Harrison and R. Hogan.
An exchange of ideas between Lorenz and Harrison appears in the August 2007 issue of the Journal of Atmospheric and Oceanic Technology.
Harrison's work is supported by the Paul Instrument Fund of the Royal Society, Lorenz is supported by NASA's Cassini Project. The Science and Technology Facilities Council funds UK participation in the Cassini Huygens mission, in particular, the research at The Open University.
Weather balloons carry packages known as radiosondes, which take (sounding) measurements of air temperature, moisture and wind direction used for weather forecasting. The balloons are filled with helium or hydrogen gas and the measurements are sent back to the surface by radio. When the balloon bursts, usually at 15 to 20 km altitude, the instruments fall to earth by parachute.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency.
The Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA's Science Mission Directorate, Washington DC. JPL designed and assembled the Cassini orbiter.
Development of the Huygens probe was managed by ESA’s European Space Technology and Research Centre (ESTEC). ASI managed the development of the high-gain antenna and the other instruments that were part of its contribution.
For more information:
Ralph Lorenz, Johns Hopkins University Applied Physics Laboratory, USA
Email: Ralph.Lorenz@jhuapl.edu
Giles Harrison, Department of Meteorology, University of Reading, UK
Email: R.G.Harrison@reading.ac.uk
Jean-Pierre Lebreton, ESA Huygens Project scientist
Email: Jean-Pierre.Lebreton@esa.int
- Research highlight
- Open Access
Is canalization more than just a beautiful idea?
© BioMed Central Ltd 2010
Published: 16 March 2010
The heat-shock protein 90 (Hsp90) is currently thought to buffer eukaryotic cells against perturbations caused by pre-existing cryptic genetic variation. A new study suggests that the buffering function of Hsp90 could instead be due to its repression of de novo transposon-mediated mutagenesis.
In the 1940s, the developmental biologist and geneticist CH Waddington coined the concept of 'developmental stability', or the robustness of the phenotype against genetic and environmental perturbations [1, 2]. It has been claimed that this robustness, termed 'canalization', has evolved under natural selection to stabilize phenotypes and decrease their variability. This is achieved by buffering the expression of traits, holding them near their optimal states despite genetic and environmental perturbations. Canalization also allows the accumulation of 'cryptic genetic variation' caused by mutations that do not affect the phenotype. Canalized traits are phenotypically expressed only in particular environments or genetic backgrounds and become available for natural selection, a mechanism that can lead to the assimilation of novel traits.
The Hsp90 story in flies has become very complicated, however. Recent studies have shown that the buffering by Hsp90 is limited to specific morphological traits and does not affect others. This supports the idea that numerous mechanisms are involved in developmental buffering, and that Hsp90 is just one of many capacitors for genetic variation [1, 2]. In addition, Hsp90 is a very abundant protein, in some cells accounting for up to 2% of the total protein content, and a reduction in Hsp90 activity affects the expression levels of numerous genes. A new study that implicates Hsp90 in the repression of transposon-mediated mutagenesis now further complicates the story. In work recently published in Nature, Specchia et al. show that biogenesis of the small PIWI-interacting RNA (piRNA) in Drosophila depends on the activity of Hsp90. These results are of interest not only for the insights they provide into the molecular pathways of piRNA production, but also because they imply that Hsp90 prevents phenotypic variation by suppressing de novo mutation caused by the activity of transposons in the germline, one of the known roles of the piRNAs in Drosophila. This calls for current ideas on the buffering role of Hsp90 in flies to be revisited.
piRNAs are one class of the numerous small RNAs (around 20 to 30 nucleotides long) that are expressed by eukaryotic cells and that trigger sequence-specific gene silencing called RNA silencing [5, 6]. By base pairing with target mRNAs, the small RNAs guide inhibitory complexes based on members of the Argonaute class of proteins (which includes the PIWI proteins) to the mRNAs, resulting in mRNA destruction or the inhibition of translation. RNA silencing is thought to have evolved as a form of nucleic-acid-based immunity to inactivate parasitic and pathogenic invaders such as viruses and transposable elements (transposons). In Drosophila, the endogenous small interfering RNA (esiRNA) pathway of RNA silencing restrains the expression of transposons in somatic cells, whereas the piRNA pathway represses transposon activity in germline cells.
Transposons are generally considered as 'selfish DNA' elements usually hidden from sight. They can move around the genome, transposing into new sites and causing insertion mutations that are frequently deleterious. Thus, host genomes have evolved multiple mechanisms for regulating transposons, including RNA silencing. Transposition is also potentially adaptive by occasionally providing a source of genetic diversity. Thus, a transposable element is often defined as a natural, endogenous, genetic toolbox for mutagenesis. In addition, transposon defense mechanisms have recently been shown to be co-opted or borrowed to provide additional regulatory complexity for host genes [7–9].
The production of esiRNAs from their longer precursor transcripts requires the processing activity of the ribonuclease Dicer. By contrast, the production of piRNAs is independent of Dicer. Drosophila has three distinct PIWI proteins, AGO3, Aubergine, and Piwi, all of which exhibit the small RNA-guided ribonuclease ('Slicer') activity. Deep sequencing and bioinformatic analyses of Drosophila piRNAs suggest a model for piRNA biogenesis in which PIWI subfamily proteins guide the 5' end formation of piRNAs by reciprocally cleaving or slicing long sense and antisense transcripts of transposons. Thus, in this amplification loop, which is called the ping-pong cycle, transposons are both a source of piRNAs and a target of piRNA-mediated silencing. However, classification of piRNAs according to their origins indicated that piRNAs derived from a particular piRNA cluster locus are exclusively loaded onto one of the PIWI proteins, Piwi, indicating that those piRNAs are produced by a pathway independent of the ping-pong cycle. This pathway is called the primary processing pathway [5, 6]. The mechanism of their production, however, has been largely unclear.
Specchia et al. examined the effect of Hsp90 mutations on transposon mobility in individual flies and found that in homozygous Hsp90 null mutants, several transposons had jumped into new sites within the genome. They further showed that approximately 1% of Hsp90 mutants screened (30 out of 3,220 flies) exhibited morphological abnormalities. Together, these findings suggested that the phenotypic variation observed among Hsp90 mutants could be due to de novo mutations produced by activated transposable elements rather than to the buffering of pre-existing cryptic genetic variation. For example, among the abnormalities observed by Specchia et al. among their Hsp90 mutants was a fly resembling the Scutoid phenotype (in which there is a loss of bristles from the head and thorax of the adult), which is caused by a mutation in the noc gene. The authors demonstrated that the coding sequence of the noc gene in this fly was indeed interrupted by an I-element-like transposon sequence. This indicates that the Scutoid phenotype found in the screen was caused by a de novo mutation and not by the expression of a pre-existing genetic variation (Figure 2b).
As well as suggesting that a reinterpretation of the buffering role of Hsp90 might be needed, these new findings also provide evidence supporting a model in which Hsp90 is involved in the control of transposon activity in germ cells by affecting piRNA biogenesis. piRNAs in Drosophila are produced almost exclusively in germ cells from intergenic repetitive genes, transposable elements and piRNA clusters by two pathways: the primary processing pathway, and the amplification 'ping-pong' loop [5, 6]. Mature piRNAs are loaded onto the PIWI subfamily of Argonaute proteins, and the amplification loop is known to be independent of Dicer but dependent on the Slicer activity of PIWI proteins. However, the mechanisms of primary piRNA processing remain elusive. How does Hsp90 function in piRNA biogenesis and which of the two piRNA production pathways is it involved in? Hsp90 can, for example, be co-purified with the Slicer activity of Ago2, one of the mammalian Argonaute proteins.
Hsp90 could play a role in the biogenesis of small silencing RNAs either as a chaperone for the correct folding of the Argonaute proteins or by providing an assembly platform for components of the small RNA biogenetic machinery to promote the loading of small RNAs onto the Argonaute proteins. It will be important to ascertain whether Hsp90 interacts with the PIWI proteins in flies and has a role in their function, such as ensuring their correct cellular localization, and also whether mutations in Hsp90 affect either or both of the two piRNA biogenesis pathways. It will also be interesting to examine whether Hsp90 is required for the esiRNA pathway that silences transposable elements in somatic cells. Further investigation should reveal the role of Hsp90 in RNA silencing and help expand our understanding of transposon regulation by RNA-silencing pathways.
- Flatt T: The evolutionary genetics of canalization. Q Rev Biol 2005, 80:287-316. doi:10.1086/432265.
- Hornstein E, Shomron N: Canalization of development by microRNAs. Nat Genet 2006, 38:S20-S24. doi:10.1038/ng1803.
- Rutherford SL, Lindquist S: Hsp90 as a capacitor for morphological evolution. Nature 1998, 396:336-342. doi:10.1038/24550.
- Specchia V, Piacentini L, Tritto P, Fanti L, D'Alessandro R, Palumbo G, Pimpinelli S, Bozzetti MP: Hsp90 prevents phenotypic variation by suppressing the mutagenic activity of transposons. Nature 2010, 463:662-665. doi:10.1038/nature08739.
- Siomi H, Siomi MC: On the road to reading the RNA interference code. Nature 2009, 457:396-404. doi:10.1038/nature07754.
- Ghildiyal M, Zamore PD: Small silencing RNAs: an expanding universe. Nat Rev Genet 2009, 10:94-108. doi:10.1038/nrg2504.
- Girard A, Hannon GJ: Conserved themes in small-RNA-mediated transposon control. Trends Cell Biol 2008, 18:136-148. doi:10.1016/j.tcb.2008.01.004.
- Kazazian HH: Mobile elements: drivers of genome evolution. Science 2004, 303:1626-1632. doi:10.1126/science.1089670.
- Siomi H, Siomi MC: Interactions between transposable elements and Argonautes have (probably) been shaping the Drosophila genome throughout evolution. Curr Opin Genet Dev 2008, 18:181-187. doi:10.1016/j.gde.2008.01.002.
- Liu J, Carmell MA, Rivas FV, Marsden CG, Thomson JM, Song JJ, Hammond SM, Joshua-Tor L, Hannon GJ: Argonaute2 is the catalytic engine of mammalian RNAi. Science 2004, 305:1437-1441. doi:10.1126/science.1102513.
The Actinomycetes [s., actinomycete], according to the latest edition of Bergey's Manual (Volume 4), represent aerobic, Gram-positive bacteria which predominantly and essentially give rise to specific branching filaments, asexual spores, or hyphae. It has been duly observed that the elaborated morphology, arrangement of spores, explicit cell-wall chemistry, and above all the various kinds of carbohydrates critically present in the cell extracts are vital and equally important requirements for the exhaustive taxonomy of the actinomycetes. Consequently, this information is utilized meticulously to carry out the articulated division of these bacteria into different well-defined categories with great ease and fervour. It is quite pertinent to state at this juncture that the actinomycetes do possess and exert an appreciable practical impact, by virtue of the fact that they invariably play an apparent major role in the following two highly specialized and particular aspects, namely:
(a) Mineralization of organic matter in the soil, and
(b) Primary source of most naturally synthesized antibiotics.
General Characteristics
The general characteristics of the actinomycetes are as stated under:
(a) The branching network of hyphae usually developed by the actinomycetes grows critically both on the surface of the solid substratum (e.g., agar) as well as into it, to give rise to the formation of a substrate mycelium. However, septa mostly divide the hyphae into specific elongated cells (viz., 20 μm and even longer), each essentially consisting of a plethora of nucleoids.
(b) Invariably, the actinomycetes afford the development of a thallus. Noticeably, a large cross-section of the actinomycetes do possess an aerial mycelium that extends above the solid substratum, and produces asexual, thin-walled spores known as conidia [s., conidium] or conidiospores at the terminal ends of filaments. In an event when the spores are located strategically in a sporangium, they are termed sporangiospores.
(c) The spores present in the actinomycetes not only vary widely in terms of shape and size, but also develop by the help of septal formation at the tips of the filaments, invariably in response to nutrient deprivation. Besides, a larger segment of these spores are specifically devoid of any thermal resistance; however, they do withstand desiccation quite satisfactorily, and thus exhibit considerable adaptive value.
(d) Generally, most actinomycetes are not found to be motile, and the motility is particularly confined to the flagellated spores exclusively.
In the recent past, several taxonomically characteristic features and useful techniques have proved to be of immense value and worth, such as:
• Morphological features and the colour of mycelia and sporangia
• Surface properties and arrangement of conidiospores
• % (G + C) content of the DNA
• Phospholipid content and composition of cell membranes
• Thermal resistance encountered in spores
• Comparison of 16S rRNA sequences and their values
• Production of relatively larger DNA fragments by means of restriction enzyme digestion, and
• Ultimate separation and comparison of the 'larger DNA fragments' by the aid of pulsed-field electrophoresis.
Significance of Actinomycetes
There are, in actual practice, three most important practical significances of the actinomycetes, as mentioned below:
(1) Actinomycetes are predominantly the inhabitants of soil and are distributed widely.
(2) They are able to degrade a large variety and an enormous quantum of organic chemical entities, and hence are of immense significance in the mineralization of organic matter.
(3) They invariably and critically give rise to a large excess of extremely vital 'natural antibiotics' that are used extensively in the therapeutic armamentarium, e.g., actinomycetin. Importantly, a plethora of actinomycetes represent free-living microbes, whereas a few are pathogens to human beings, animals, and even certain plants.
Fig. 3.5 illustrates the cross-section of an actinomycete colony with living and dead hyphae. The substrate and aerial mycelium bearing chains of conidiospores have been depicted evidently.
The actinomycetes have been duly classified into three major divisions based upon the following:
(a) Whole-cell carbohydrate patterns of aerobic actinomycetes,
(b) Major constituents of cell-wall types of actinomycetes, and
(c) Groups of actinomycetes based on whole-cell carbohydrate pattern and cell-wall type.
The aforesaid three major divisions shall now be dealt with separately in the sections that follow.
Actinomycetes with Multilocular Sporangia
The latest version of Bergey's Manual has explicitly described the actinomycetes occurring as 'clusters of spores' in a specific situation when a hypha undergoes division both transversely and longitudinally. In reality, all the three genera critically present in this section essentially possess chemotype III cell walls, whereas the cell-extract carbohydrate patterns differ prominently.
Salient Features: The salient features of the actinomycetes with multilocular sporangia are as follows:
(1) The mole % (G + C) values vary from 57 to 75.
(2) Chemotype III C cell walls: Geodermatophilus, belonging to this category, has motile spores and is specifically an aerobic soil organism.
(3) Chemotype III B cell walls: Dermatophilus invariably gives rise to pockets of motile spores having tufts of flagella. It is a facultative anaerobe and also a parasite of mammals, actually responsible for the skin infection streptothricosis.
(4) Chemotype III D cell walls: Frankia usually produces non-motile sporangiospores evidently located in a sporogenous body. It is found to extend its normal growth in a symbiotic association particularly with the roots of eight distinct families of higher non-leguminous plants, viz., alder trees. These organisms are observed to be extremely efficient microaerophilic nitrogen-fixers, and the fixation frequently takes place very much within the root nodules of the plants. Furthermore, the roots of the infected plants usually develop nodules that would eventually cause fixation of nitrogen so efficiently that a plant, for instance an alder tree, may grow quite effectively even in the absence of combined N2 when nodulated. It has been duly observed that very much inside the nodule cells, Frankia invariably gives rise to branching hyphae having globular vesicles strategically located at their ends. Consequently, these vesicles could be the most preferred sites of the N2 fixation ultimately. However, the entire phenomenon of N2 fixation is quite similar to that of Rhizobium, wherein it is both O2 sensitive and essentially and predominantly needs two elements, namely molybdenum (Mo) and cobalt (Co).
Actinomycetes and Related Organisms
This particular section essentially comprises a relatively heterogeneous division of a large cross-section of microorganisms having altogether diverse characters, including group, genus, order, and family, as outlined below:
(a) Group: Coryneform
(b) Genus: Arthrobacter, Cellulomonas, Kurthia, Propionibacterium
(c) Order: Actinomycetales, and
(d) Family: Actinomycetaceae, Mycobacteriaceae, Frankiaceae, Actinoplanaceae, Nocardiaceae, Streptomycetaceae, Micromonosporaceae.
Salient Features: The salient features of coryneform bacteria are as follows:
(1) They are usually non-motile, Gram-positive, and non-acid-fast.
(2) They are mostly chemoorganotrophs, aerobic, and also facultatively anaerobic.
(3) They are widely distributed in nature, with % (G + C) values ranging between 52 to 68 moles per cent.
(4) The type species belonging to this class is represented by C. diphtheriae, which is particularly known to produce a highly lethal exotoxin and causes the dreaded disease diphtheria in humans.
(b) Plant-pathogenic corynebacteria: Interestingly, the bacteria belonging to this particular class are closely akin to those present in section (a) above; however, these are essentially characterized by three prominent features, namely: (i) less pleomorphic, (ii) strictly aerobic in nature, and (iii) possessing % (G + C) values ranging between 65-75 moles per cent.
Based on ample scientific evidence, this particular section is further sub-divided into four categories based upon: (i) types of polysaccharide antigens, (ii) composition of amino acids present duly in cell walls, (iii) minimal nutritional requirements, and (iv) etiology of the disease caused in plants.
(c) Non-pathogenic corynebacteria: This particular section essentially consists of non-pathogenic corynebacteria quite commonly derived and isolated from soil, water, and air, which are invariably described in the literature very scantily by virtue of their morphological similarities and, hence, the virtual absence of any possible distinct differentiation.
The four prominent genera shall now be treated individually in the sections that follow.
(a) Arthrobacter: The genus Arthrobacter essentially consists of such organisms that undergo a marked and pronounced change in form, particularly in the course of their respective growth on complex media. It has been duly observed that the relatively 'older cultures' do comprise coccoid cells very much resembling micrococci in their appearance. In certain specific instances, the cells could be either spherical to ovoid or slightly elongated. Importantly, when these are carefully transferred to a 'fresh culture medium', the ultimate growth takes place by two distinct modes, namely: (a) due to swelling, and (b) due to elongation of the coccoid cells, to produce rods that essentially have a diameter much less than that of the corresponding coccoid cells.
Arthrobacter's subsequent growth and followed-up divisions usually yield irregular rods that vary appreciably both in size and shape. Importantly, a small segment of the rods are invariably arranged at an 'angle' to each other, thereby causing deformation. However, in richer media, cells may exhibit preliminary (rudimentary) branching, whereas true mycelia cease to form. Besides, along with the passage of the 'exponential phase', the rods turn out to be much shorter and get converted to the corresponding coccoid cells. A few other prevalent characteristics are as follows:
• Rods are either completely non-motile or motile by one sub-polar or a few lateral flagella.
• Coccoid cells are Gram-positive in nature, chemoorganotrophic, aerobic soil organisms having a distinct respiratory metabolism.
• Species present within the genus are invariably categorized and differentiated solely depending on the composition of the cell wall, and on the hydrolysis of gelatin, starch, etc.
It is, however, pertinent to state here that two other genera, whose actual and precise affiliation is still 'uncertain', are quite closely related to Arthrobacter; one of these is Brevibacterium.
(b) Cellulomonas: The genus Cellulomonas essentially comprises bacteria that have the competence and ability to hydrolyse cellulose particularly.
Salient Features: The various vital and important salient features are as stated below:
(1) The cells usually observed in young cultures are irregular rods having a diameter of nearly 0.5 μm and a length ranging either between 0.7 to 2 μm or even slightly in excess.
(2) The appearance of the cells could be straight, slightly curved, angular, or beaded.
(3) Importantly, certain cells may be arranged strategically at an angle to each other, as could be observed in the case of Arthrobacter [see section (a) above]; besides, they (cells) may infrequently exhibit rudimentary branching as well.
(4) Older cultures are invariably devoid of 'true mycelia', but the 'coccoid cells' do predominate.
(5) The cells may be Gram-positive to Gram-negative variable, motile to non-motile variable, non-acid-fast, aerobic chemoorganotrophs, having an optimum growth temperature at 30°C.
(6) The % (G + C) values range between 71.7 to 72.7 moles per cent.
Interestingly, there exists only one exclusively known and recognized species, Cellulomonas flavigena, which is found commonly in the soil.
(c) Kurthia: The genus Kurthia is specifically characterized by organisms that are prominently and rigidly aerobic in nature; besides, they happen to be chemoorganotrophs. Young cultures essentially comprise cells that are mostly unbranched rods having round ends and occurring as distinct parallel chains. Older cultures normally comprise coccoid cells that are critically obtained by the fragmentation of rods.
Salient Features: The salient features of the organisms belonging to the genus Kurthia are as follows:
(1) The rods are rendered motile by the presence of peritrichous flagella.
(2) The cells predominantly grow in abundance, particularly in the presence of sodium chloride (NaCl) solution [4 to 6% (w/v)] prepared in sterilized distilled water.
(3) The optimum temperature required for the healthy growth of the cells usually varies between 25 to 30°C.
Interestingly, there prevails only one species, Kurthia zopfii, that has been duly recognized and described in the literature.
(d) Propionibacterium: The family Propionibacteriaceae invariably consists microbes that have
the following characteristic features :
(i) They are all Gram-positive, non-spore forming, anaerobic to aerotolerant, pleomorphic,
branching or filamentous or regular rods.
(ii) On being subjected to ‘fermentative procedures’ it has been duly observed that the major
end-products ultimately generated are, namely :propionic acid, acetic acid, carbon dioxide,
or amixture of butyric, formic, lactic together with other monocarboxylic acids.
(iii) Growth: Their normal growth is usually enhanced by the very presence of carbon dioxide.
(iv) Habitat: These microbes are normally inhabitants of skin, respiratory, and the intestinal
tracts of a large cross-section of animals.
A survey of the literature would reveal the description of two genera, namely: Propionibacterium and Eubacterium.
Propionibacterium: The genus Propionibacterium predominantly comprises bacterial cells
that happen to be virtually non-motile, anaerobic to aerotolerant, and essentially give rise to propionic
acid as well as acetic acid.
Salient Features: The bacterial cells have the following salient features:
(1) They are quite often arranged in pairs, singles or ‘V’ and ‘Y’ configurations.
(2) These are actually chemoorganotrophs, which grow very rapidly at temperatures
ranging between 32 and 37°C.
(3) A large and appreciable quantum of strains do grow either in 20% (w/v) bile salts or 6.5%
(w/v) sodium-chloride/glucose broth.
(4) Certain species are observed to be pathogenic in nature.
However, the genus Propionibacterium essentially includes eight species that have been duly
identified, characterized, and recognized entirely based upon the end-products derived from their fermentation.
Eubacterium: The genus Eubacterium comprises prominently such bacterial cells as could
be either motile or non-motile, obligately anaerobic, and either non-fermentative or fermentative
in nature. It has been adequately demonstrated that the fermentative species, in particular, give rise to
mixtures of organic acids, viz., butyric, acetic, formic or lactic, or even other monocarboxylic organic
acids. Besides, these bacterial cells undergo both profuse and rapid growth at 37°C, and are invariably
observed to be located strategically in the various marked and pronounced cavities in humans, animals,
soil, and plant products. Interestingly, there are certain species belonging to this genus which exhibit distinct pathogenicity.
Influences of strip mining on the hydrologic environment of parts of Beaver Creek Basin, Kentucky, 1955-59 / by Charles R. Collier [and others].
- Physical Description: 1 online resource (x, B85 pages) : illustrations, maps + 2 plates.
- Publisher: Washington : United States Department of the Interior, Geological Survey, 1964.
Title from title screen (viewed September 29, 2014).
"Prepared in collaboration with the U.S. Dept. of Interior, Bureau of Sport Fisheries and Wildlife and Bureau of Mines; U.S. Department of Agriculture, Forest Service and Soil Conservation Services; Department of the Army, Corps of Engineers; and the Commonwealth of Kentucky, University of Kentucky, Geological Survey, Department of Conservation, and the Department of Fish and Wildlife Resources."
Bibliography, etc. Note: Includes bibliographical references and index.
- Geological Survey professional paper ; 427-B
- Hydrologic influences of strip mining
Differences in the photosynthetic plasticity of ferns and Ginkgo grown in experimentally controlled low [O2]: [CO2] atmospheres may explain their contrasting ecological fate across the Triassic-Jurassic mass extinction boundary
Title: Differences in the photosynthetic plasticity of ferns and Ginkgo grown in experimentally controlled low [O2]:[CO2] atmospheres may explain their contrasting ecological fate across the Triassic-Jurassic mass extinction boundary
Authors: Yiotis, Charilaos
McElwain, Jennifer C.
Permanent link: http://hdl.handle.net/10197/8504
Date: 11-Mar-2017
Abstract: Background and Aims: Fluctuations in [CO2] have been widely studied as a potential driver of plant evolution; however, the role of a fluctuating [O2]:[CO2] ratio is often overlooked. The present study aimed to investigate the inherent physiological plasticity of early diverging, extant species following acclimation to an atmosphere similar to that across the Triassic–Jurassic mass extinction interval (TJB, approx. 200 Mya), a time of major ecological change. Methods: Mature plants from two angiosperm (Drimys winteri and Chloranthus oldhamii), two monilophyte (Osmunda claytoniana and Cyathea australis) and one gymnosperm (Ginkgo biloba) species were grown for 2 months in replicated walk-in Conviron BDW40 chambers running at TJB treatment conditions of 16% [O2]–1900 ppm [CO2] and ambient conditions of 21% [O2]–400 ppm [CO2], and their physiological plasticity was assessed using gas exchange and chlorophyll fluorescence methods. Key Results: TJB acclimation caused significant reductions in the maximum rate of carboxylation (VCmax) and the maximum electron flow supporting ribulose-1,5-bisphosphate regeneration (Jmax) in all species, yet this downregulation had little effect on their light-saturated photosynthetic rate (Asat). Ginkgo was found to photorespire heavily under ambient conditions, while growth in low [O2]:[CO2] resulted in increased heat dissipation per reaction centre (DIo/RC) and severe photodamage, as revealed by the species' decreased maximum efficiency of primary photochemistry (Fv/Fm) and decreased in situ photosynthetic electron flow (Jsitu). Conclusions: It is argued that the observed photodamage reflects the inability of Ginkgo to divert excess photosynthetic electron flow to sinks other than the downregulated C3 and the diminished C2 cycles under low [O2]:[CO2]. This finding, coupled with the remarkable physiological plasticity of the ferns, provides insights into the underlying mechanism of Ginkgoales' near extinction and ferns' proliferation as atmospheric [CO2] increased to maximum levels across the TJB.
Funding Details: European Research Council
Type of material: Journal Article
Publisher: Oxford University Press
Copyright (published version): 2017 the Author
Keywords: Triassic–Jurassic boundary; Ginkgo biloba; Gymnosperms; Monilophytes; Angiosperms; High CO2; Low O2; Photosynthetic plasticity; Photorespiration; Photodamage; Stomatal conductance; Mesophyll conductance
DOI: 10.1093/aob/mcx018
Language: en
Status of Item: Peer reviewed
Appears in Collections: Biology & Environmental Science Research Collection
Earth Institute Research Collection
Show full item record
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland. No item may be reproduced for commercial purposes. For other possible restrictions on use please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply. | <urn:uuid:4fb87f47-dc1b-4127-b3a2-345cff7ff150> | 2.53125 | 874 | Academic Writing | Science & Tech. | 7.679753 | 95,588,221 |
An activity coefficient is a factor used in thermodynamics to account for deviations from ideal behaviour in a mixture of chemical substances. In an ideal mixture, the microscopic interactions between each pair of chemical species are the same (or macroscopically equivalent, the enthalpy change of solution and volume variation in mixing is zero) and, as a result, properties of the mixtures can be expressed directly in terms of simple concentrations or partial pressures of the substances present e.g. Raoult's law. Deviations from ideality are accommodated by modifying the concentration by an activity coefficient. Analogously, expressions involving gases can be adjusted for non-ideality by scaling partial pressures by a fugacity coefficient.
The concept of activity coefficient is closely linked to that of activity in chemistry.
- 1 Thermodynamic definition
- 2 Experimental determination of activity coefficients
- 3 Theoretical calculation of activity coefficients
- 4 Link to ionic diameter
- 5 Dependence on state parameters
- 6 Concentrated solutions of electrolytes
- 7 Application to chemical equilibrium
- 8 References
- 9 External links
The chemical potential, μB, of a substance B in an ideal mixture of liquids is given by

μB = μ°B + RT ln xB

where μ°B is the chemical potential of the pure substance and xB is the mole fraction of the substance in the mixture.
This is generalised to include non-ideal behavior by writing

μB = μ°B + RT ln aB

when aB is the activity of the substance in the mixture, with

aB = xB γB

where γB is the activity coefficient, which may itself depend on xB. As γB approaches 1, the substance behaves as if it were ideal. For instance, if γB ≈ 1, then Raoult's law is accurate. For γB > 1 and γB < 1, substance B shows positive and negative deviation from Raoult's law, respectively. A positive deviation implies that substance B is more volatile.
In many cases, as xB goes to zero, the activity coefficient of substance B approaches a constant; this relationship is Henry's law for the solute. These relationships are related to each other through the Gibbs–Duhem equation. Note that in general activity coefficients are dimensionless.
In detail: Raoult's law states that the partial pressure of component B is related to its vapor pressure (saturation pressure) and its mole fraction xB in the liquid phase,

pB = γB xB p°B

with the convention γB → 1 as xB → 1. In other words: pure liquids represent the ideal case.
At infinite dilution, the activity coefficient approaches its limiting value, γB∞. Comparison with Henry's law,

pB = KH,B xB for xB → 0,

immediately gives

KH,B = p°B γB∞.

In other words: the compound shows nonideal behavior in the dilute case.
The above definition of the activity coefficient is impractical if the compound does not exist as a pure liquid. This is often the case for electrolytes or biochemical compounds. In such cases, a different definition is used that considers infinite dilution as the ideal state. The activity coefficient defined in this way is distinguished from the first kind by a special symbol. Usually the symbol is omitted, as it is clear from the context which kind is meant. But there are cases where both kinds of activity coefficients are needed and may even appear in the same equation, e.g., for solutions of salts in (water + alcohol) mixtures. This is sometimes a source of errors.
Modifying mole fractions or concentrations by activity coefficients gives the effective activities of the components, and hence allows expressions such as Raoult's law and equilibrium constants to be applied to both ideal and non-ideal mixtures.
Knowledge of activity coefficients is particularly important in the context of electrochemistry since the behaviour of electrolyte solutions is often far from ideal, due to the effects of the ionic atmosphere. Additionally, they are particularly important in the context of soil chemistry due to the low volumes of solvent and, consequently, the high concentration of electrolytes.
For solution of substances which ionize in solution the activity coefficients of the cation and anion cannot be experimentally determined independently of each other because solution properties depend on both ions. Single ion activity coefficients must be linked to the activity coefficient of the dissolved electrolyte as if undissociated. In this case a mean stoichiometric activity coefficient of the dissolved electrolyte, γ±, is used. It is called stoichiometric because it expresses both the deviation from the ideality of the solution and the incomplete ionic dissociation of the ionic compound which occurs especially with the increase of its concentration.
For a 1:1 electrolyte, such as NaCl, it is given by the following:

γ± = (γ+ γ−)^(1/2)

where γ+ and γ− are the activity coefficients of the cation and anion, respectively.

More generally, the mean activity coefficient of a compound of formula ApBq is given by

γ±^(p+q) = γ+^p γ−^q.
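The mean activity coefficient formula can be evaluated directly. The sketch below uses illustrative, made-up single-ion values, since real single-ion coefficients are not independently measurable:

```python
def mean_activity_coefficient(gamma_plus: float, gamma_minus: float,
                              p: int, q: int) -> float:
    """Mean activity coefficient for a salt ApBq:
    gamma_pm ** (p + q) = gamma_plus**p * gamma_minus**q
    """
    return (gamma_plus ** p * gamma_minus ** q) ** (1.0 / (p + q))

# 1:1 electrolyte such as NaCl (p = q = 1): reduces to a geometric mean.
print(mean_activity_coefficient(0.80, 0.72, 1, 1))
# 2:1 electrolyte such as CaCl2 (p = 1, q = 2):
print(mean_activity_coefficient(0.60, 0.75, 1, 2))
```

For p = q = 1 this is just the square root of the product of the two single-ion values.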
Single-ion activity coefficients can be calculated theoretically, for example by using the Debye–Hückel equation. The theoretical equation can be tested by combining the calculated single-ion activity coefficients to give mean values which can be compared to experimental values.
The prevailing view that single ion activity coefficients are unmeasurable independently, or perhaps even physically meaningless, has its roots in the work of Guggenheim in the late 1920s. However, chemists have never been able to give up the idea of single ion activities, and by implication single ion activity coefficients. For example, pH is defined as the negative logarithm of the hydrogen ion activity. If the prevailing view on the physical meaning and measurability of single ion activities is correct then defining pH as the negative logarithm of the hydrogen ion activity places the quantity squarely in the unmeasurable category. Recognizing this logical difficulty, International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only. Despite the prevailing negative view on the measurability of single ion coefficients, the concept of single ion activities continues to be discussed in the literature, and at least one author presents a definition of single ion activity in terms of purely thermodynamic quantities and proposes a method of measuring single ion activity coefficients based on purely thermodynamic processes.
Experimental determination of activity coefficients
Activity coefficients may be determined experimentally by making measurements on non-ideal mixtures. Use may be made of Raoult's law or Henry's law to provide a value for an ideal mixture against which the experimental value may be compared to obtain the activity coefficient. Other colligative properties, such as osmotic pressure may also be used.
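For a volatile component, this comparison amounts to dividing the measured partial pressure by the Raoult's-law prediction. A minimal sketch with made-up numbers:

```python
def activity_coefficient_raoult(p_measured: float, x: float, p_sat: float) -> float:
    """gamma = p_measured / (x * p_sat); gamma > 1 indicates positive deviation."""
    return p_measured / (x * p_sat)

# Hypothetical measurement: mole fraction x_B = 0.25, saturation pressure
# 12.0 kPa, measured partial pressure 4.5 kPa.
gamma = activity_coefficient_raoult(4.5, 0.25, 12.0)
print(gamma)  # → 1.5, positive deviation from Raoult's law
```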
Theoretical calculation of activity coefficients
Activity coefficients of electrolyte solutions may be calculated theoretically, using the Debye–Hückel equation or extensions such as the Davies equation, Pitzer equations or TCPC model. Specific ion interaction theory (SIT) may also be used.
For non-electrolyte solutions correlative methods such as UNIQUAC, NRTL, MOSCED or UNIFAC may be employed, provided fitted component-specific or model parameters are available. COSMO-RS is a theoretical method which is less dependent on model parameters as required information is obtained from quantum mechanics calculations specific to each molecule (sigma profiles) combined with a statistical thermodynamics treatment of surface segments.
For uncharged dissolved species, the activity coefficient can be estimated from the ionic strength I by a Setschenow-type relation, log10 γ = b I. This simple model predicts activities of many species (dissolved undissociated gases such as CO2, H2S, NH3, undissociated acids and bases) to high ionic strengths (up to 5 mol/kg). The value of the constant b for CO2 is 0.11 at 10 °C and 0.20 at 330 °C.
The activity of water can be obtained from the osmotic coefficient via

ln aw = −(ν b φ) / 55.51

where ν is the number of ions produced from the dissociation of one molecule of the dissolved salt, b is the molality of the salt dissolved in water, φ is the osmotic coefficient of water, and the constant 55.51 represents the molality of water. In the above equation, the activity of a solvent (here water) is represented as inversely proportional to the number of particles of salt versus that of the solvent.
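The relation between the osmotic coefficient and the solvent activity described above can be sketched numerically; the inputs below are illustrative, not measured values:

```python
import math

def water_activity(nu: int, b: float, phi: float) -> float:
    """ln(a_w) = -phi * nu * b / 55.51, with 55.51 mol/kg the molality of water."""
    return math.exp(-phi * nu * b / 55.51)

# Illustrative values for a 1:1 salt (nu = 2) at 1 mol/kg with phi = 0.93:
print(water_activity(2, 1.0, 0.93))
```

Even at 1 mol/kg the water activity stays close to 1, reflecting the large excess of solvent.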
Link to ionic diameter
Within Debye–Hückel theory, the single-ion activity coefficient is linked to the effective ionic diameter a through the extended Debye–Hückel equation,

log10 γi = − (A zi² √I) / (1 + B a √I)

where A and B are constants, zi is the valence number of the ion, and I is the ionic strength.
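A common working form that needs no ion-size parameter is the Davies equation, log10 γ = −A z² (√I/(1 + √I) − 0.3 I), with A ≈ 0.509 for water at 25 °C. A minimal sketch:

```python
import math

def davies_log10_gamma(z: int, ionic_strength: float, A: float = 0.509) -> float:
    """Davies equation: log10(gamma) = -A * z**2 * (sqrt(I)/(1+sqrt(I)) - 0.3*I)."""
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z ** 2 * (sqrt_i / (1.0 + sqrt_i) - 0.3 * ionic_strength)

def davies_gamma(z: int, ionic_strength: float) -> float:
    return 10.0 ** davies_log10_gamma(z, ionic_strength)

# Monovalent ion at I = 0.1 mol/kg (gamma roughly 0.78):
print(davies_gamma(1, 0.1))
```

Note the strong charge dependence: a divalent ion at the same ionic strength has a much smaller activity coefficient than a monovalent one.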
Dependence on state parameters
The derivative of an activity coefficient with respect to temperature is related to the excess molar enthalpy by

(∂ ln γB / ∂T)P,x = − H̄E,B / (R T²).

Similarly, the derivative of an activity coefficient with respect to pressure can be related to the excess molar volume:

(∂ ln γB / ∂P)T,x = V̄E,B / (R T).
Concentrated solutions of electrolytes
For concentrated ionic solutions the hydration of ions must be taken into consideration, as done by Stokes and Robinson in their hydration model from 1948. The activity coefficient of the electrolyte is split into electric and statistical components by E. Glueckauf, who modified the Robinson–Stokes model.
The statistical part includes the hydration index number h, the number of ions from the dissociation, the ratio r between the apparent molar volume of the electrolyte and the molar volume of water, and the molality b.
Application to chemical equilibrium
At equilibrium, the sum of the chemical potentials of the reactants is equal to the sum of the chemical potentials of the products. The Gibbs free energy change for the reactions, ΔrG, is equal to the difference between these sums and therefore, at equilibrium, is equal to zero. Thus, for an equilibrium such as
- α A + β B ⇌ σ S + τ T
the condition for equilibrium is

σμS + τμT − αμA − βμB = 0.

Substitute in the expressions for the chemical potential of each reactant, μi = μ°i + RT ln ai. Upon rearrangement this expression becomes

σμ°S + τμ°T − αμ°A − βμ°B = −RT ln (aS^σ aT^τ / (aA^α aB^β)).
The sum σμ°S + τμ°T − αμ°A − βμ°B is the standard free energy change for the reaction, ΔrG°. Therefore,

ΔrG° = −RT ln K

where K is the equilibrium constant. Note that activities and equilibrium constants are dimensionless numbers.
This derivation serves two purposes. It shows the relationship between standard free energy change and equilibrium constant. It also shows that an equilibrium constant is defined as a quotient of activities. In practical terms this is inconvenient. When each activity is replaced by the product of a concentration and an activity coefficient, the equilibrium constant is defined as

K = ( [S]^σ [T]^τ / ([A]^α [B]^β) ) × ( γS^σ γT^τ / (γA^α γB^β) )

where [S] denotes the concentration of S, etc. In practice equilibrium constants are determined in a medium such that the quotient of activity coefficients is constant and can be ignored, leading to the usual expression

K = [S]^σ [T]^τ / ( [A]^α [B]^β )

which applies under the condition that the activity quotient has a particular (constant) value.
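Numerically, replacing activities by concentrations just scales the concentration quotient by a quotient of activity coefficients. A sketch for a generic reaction A + B ⇌ S, with made-up coefficient values:

```python
def activity_quotient(gammas: dict, stoich: dict) -> float:
    """Product of gamma_i ** nu_i, with nu_i positive for products,
    negative for reactants."""
    q = 1.0
    for species, nu in stoich.items():
        q *= gammas[species] ** nu
    return q

# Hypothetical values for A + B <=> S:
gammas = {"A": 0.80, "B": 0.75, "S": 0.90}
stoich = {"A": -1, "B": -1, "S": 1}
K_conc = 120.0                      # concentration quotient [S] / ([A][B])
K_thermo = K_conc * activity_quotient(gammas, stoich)
print(K_thermo)
```

With these illustrative coefficients the activity correction is a factor of 1.5, so ignoring it would bias the reported constant by 50%.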
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "Activity coefficient".
- DeHoff, Robert (2006). Thermodynamics in materials science (2nd ed.). Boca Raton, Fla.: CRC Taylor & Francis. pp. 230–231. ISBN 9780849340659.
- Ibáñez, Jorge G.; Hernández Esparza, Margarita; Doría Serrano, Carmen; Singh, Mono Mohan (2007). Environmental Chemistry: Fundamentals. Springer. ISBN 978-0-387-26061-7.
- Atkins, Peter; dePaula, Julio (2006). "Section 5.9, The activities of ions in solution". Physical Chemisrry (8th ed.). OUP. ISBN 9780198700722.
- Guggenheim, E. A. (1928). "The Conceptions of Electrical Potential Difference between Two Phases and the Individual Activities of Ions". The Journal of Physical Chemistry. 33 (6): 842–849. doi:10.1021/j150300a003. ISSN 0092-7325.
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "pH".
- Rockwood, Alan L. (2015). "Meaning and Measurability of Single-Ion Activities, the Thermodynamic Foundations of pH, and the Gibbs Free Energy for the Transfer of Ions between Dissimilar Materials". ChemPhysChem. 16 (9): 1978–1991. doi:10.1002/cphc.201500044. ISSN 1439-4235. PMC .
- Betts, R. H.; MacKenzie, Agnes N. "Radiochemical Measurements of Activity Coefficients in Mixed Electrolytes". Canadian Journal of Chemistry. 30 (2): 146–162. doi:10.1139/v52-020.
- King, E. L. (1964). "Book Review: Ion Association, C. W. Davies, Butterworth, Washington, D.C., 1962". Science. 143 (3601): 37. Bibcode:1964Sci...143...37D. doi:10.1126/science.143.3601.37. ISSN 0036-8075.
- Grenthe, I.; Wanner, H. "Guidelines for the extrapolation to zero ionic strength" (PDF).
- Ge, Xinlei; Wang, Xidong; Zhang, Mei; Seetharaman, Seshadri (2007). "Correlation and Prediction of Activity and Osmotic Coefficients of Aqueous Electrolytes at 298.15 K by the Modified TCPC Model". Journal of Chemical & Engineering Data. 52 (2): 538–547. doi:10.1021/je060451k. ISSN 0021-9568.
- Ge, Xinlei; Zhang, Mei; Guo, Min; Wang, Xidong (2008). "Correlation and Prediction of Thermodynamic Properties of Nonaqueous Electrolytes by the Modified TCPC Model". Journal of Chemical & Engineering Data. 53 (1): 149–159. doi:10.1021/je700446q. ISSN 0021-9568.
- Ge, Xinlei; Zhang, Mei; Guo, Min; Wang, Xidong (2008). "Correlation and Prediction of Thermodynamic Properties of Some Complex Aqueous Electrolytes by the Modified Three-Characteristic-Parameter Correlation Model". Journal of Chemical & Engineering Data. 53 (4): 950–958. doi:10.1021/je7006499. ISSN 0021-9568.
- Ge, Xinlei; Wang, Xidong (2009). "A Simple Two-Parameter Correlation Model for Aqueous Electrolyte Solutions across a Wide Range of Temperatures". Journal of Chemical & Engineering Data. 54 (2): 179–186. doi:10.1021/je800483q. ISSN 0021-9568.
- "Project: Ionic Strength Corrections for Stability Constants". IUPAC. Archived from the original on 29 October 2008. Retrieved 2008-11-15.
- Klamt, Andreas (2005). COSMO-RS from quantum chemistry to fluid phase thermodynamics and drug design (1st ed.). Amsterdam: Elsevier. ISBN 978-0-444-51994-8.
- N. Butler, James (1998). Ionic equilibrium: solubility and pH calculations. New York, NY [u.a.]: Wiley. ISBN 9780471585268.
- Ellis, A. J.; Golding, R. M. (1963). "The solubility of carbon dioxide above 100 degrees C in water and in sodium chloride solutions". American Journal of Science. 261 (1): 47–60. Bibcode:1963AmJS..261...47E. doi:10.2475/ajs.261.1.47. ISSN 0002-9599.
- Kortüm, G. (1960). "The Structure of Electrolytic Solutions, herausgeg. von W. J. Hamer. John Wiley & Sons, Inc., New York; Chapman & Hall, Ltd., London 1959. 1. Aufl., XII, 441 S., geb. $ 18.50". Angewandte Chemie. 72 (24): 97. doi:10.1002/ange.19600722427. ISSN 0044-8249. | <urn:uuid:a15283df-a803-491b-a19e-b05cd4223c0e> | 3.796875 | 3,349 | Knowledge Article | Science & Tech. | 47.805223 | 95,588,231 |
All the cells in an organism carry the same instruction manual, the DNA, but different cells read and express different portions of it in order fulfill specific functions in the body. For example, nerve cells express genes that help them send messages to other nerve cells, whereas immune cells express genes that help them make antibodies.
In large part, this highly regulated process of gene expression is what makes us fully functioning, complex beings, rather than a blob of like-minded cells.
Despite its importance, researchers still do not completely understand how cells access the appropriate information in the DNA. They know this process is controlled by proteins called transcription factors, which bind to specific sites around a gene and - in the right combination - allow the gene’s sequence to be read.
However, functional transcription factor binding sites in the DNA are notoriously difficult to locate. The large number of transcription factors and cell types allow endless possible combinations, making it incredibly hard to determine where, when and how each binding event occurs. Moreover, results from genome-wide mapping efforts have only added to the confusion by suggesting that transcription factors bind very promiscuously all over the place, even to sites where they do not turn genes on or off.
Now, researchers at the Stowers Institute for Medical Research have developed a high-resolution method that can precisely and reliably map individual transcription factor binding sites in the genome, vastly outperforming standard techniques.
With the new technique, which was published March 9, 2015 in Nature Biotechnology, transcription factor binding sites that are likely functional leave behind clear footprints, indicating that transcription factors consistently land on very specific sequences. In contrast, questionable binding sites that were previously detected as bound showed a more scattered unspecific pattern that was no longer considered bound.
“Now we can see the subtleties, and a level of precision that we hadn’t anticipated,” says Stowers Associate Investigator Julia Zeitlinger, Ph.D., lead author of the study that also included Stowers colleagues Qiye He, Ph.D., and Jeff Johnston. “Not only do we see a distinct sequence motif where the transcription factor binds, but we also see additional sequences that seem to contribute to binding specificity. There is a lot more information that we can now read to understand how these factors act on the genetic code to influence expression.”
Over the last 15 years, a number of techniques have emerged to enable researchers to map where transcription factors bind to the genome. All of these techniques build on a method called chromatin immunoprecipitation or ChIP, which essentially tethers the proteins to their positions on the DNA, chops the DNA into manageable chunks, and then isolates the sections that are bound by the proteins.
Researchers have taken a variety of approaches to determine the sequence contained within these sections. ChIP-chip uses microarrays or gene chip technology to find the general neighborhood where a transcription factor’s footprint has appeared. ChIP-seq improves upon this approach by using the latest sequencing technologies, but still cannot pinpoint the exact address of the footprint.
The breakthrough came with ChIP-exo developed by Frank Pugh, Ph.D. and colleagues at Penn State University, which uses the addition of an enzyme called exonuclease to trim back the DNA fragments to the spot where the transcription factor is bound. Though this latest technique has promised to reveal the exact address of each transcription factor, its practical implementation had fallen short.
After several attempts to get the ChIP-exo technique to work in her laboratory, Zeitlinger decided to develop her own version. Having worked with ChIP-chip or ChIP-seq for over 15 years, Zeitlinger recognized that the much smaller amounts of DNA obtained by ChIP-exo made it very hard to obtain accurate sequence information. While helping a student working on an unrelated RNA technique, she saw a potential solution.
Normally, when researchers prepare a strip of DNA for analysis, they have to add an extra bit of sequence that serves as a kind of start site for the sequencing machinery. Traditionally, this prep involves two inefficient “ligation” steps, adding a bit of sequence first to the front and then to the back of each sample.
Zeitlinger and her colleagues figured out a way to accomplish the same feat in just one ligation step, adding a bit of sequence to the back of the fragment and then letting the strand form a circle. In addition, the researchers included a random bar code in the bit of DNA they used for ligation, which enabled them to catch any errors or artifacts that might arise in the sequencing procedure.
They called the new method “ChIP experiments with nucleotide resolution through exonuclease, unique barcode and single ligation” or ChIP-nexus. When they used ChIP-nexus to map the footprints of four well-known proteins – namely, human TBP and Drosophila NF-kappaB, Twist and Max— they found that it consistently outperformed existing ChIP-seq protocols in resolution and specificity.
The new tool could distinguish real footprints, those generated by a transcription factor sitting tightly on a particular sequence for a long time, from background noise, that may have arisen from a protein pausing on a sequence in its search for the right landing spot. Having a better collection of real footprints in turn provides much more detailed sequence information on the binding preferences of transcription factors.
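The random barcode is what makes this distinction possible computationally: two reads are treated as PCR or optical duplicates only if both their mapped position and their barcode match, so genuine independent reads of the same site are retained. A simplified sketch of that deduplication idea (not the published ChIP-nexus pipeline):

```python
def deduplicate(reads):
    """Keep one read per (chromosome, strand, stop_position, barcode) combination.

    reads: iterable of (chrom, strand, stop, barcode) tuples.
    """
    seen = set()
    kept = []
    for read in reads:
        if read not in seen:
            seen.add(read)
            kept.append(read)
    return kept

# Hypothetical reads mapped to the same exonuclease stop position:
reads = [
    ("chr2L", "+", 1000, "ACGTT"),
    ("chr2L", "+", 1000, "ACGTT"),  # same position, same barcode -> duplicate
    ("chr2L", "+", 1000, "GGATC"),  # same position, new barcode -> kept
]
print(len(deduplicate(reads)))  # → 2
```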
Zeitlinger thinks the technique represents an important step forward for the field and will ultimately supplant ChIP-seq for the study of gene regulation.
“We still have a very simplistic idea of how transcription factors come in, open up the DNA, and turn on genes,” says Zeitlinger. ”If we do this kind of analysis for lots of transcription factors, we will gather information needed to better understand gene expression.”
In particular, she would like to see the technique used to ask how small changes in DNA -- the kind that exist naturally in the human population -- affect transcription factor binding and therefore differences in gene expression from one individual to the next.
The work was funded by the Stowers Institute for Medical Research and a grant from the National Institutes of Health (New Innovator Award 1DP2OD004561). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Lay Summary of Findings
At any given time, only a subset of the genes in a given cell are expressed or “turned on.” Proteins called transcription factors act as the molecular switchboard operators of the cell, binding specific sites in the DNA to flip different genes on and off. Despite their importance, researchers still have difficulty identifying these transcription factor binding sites. In the current issue of the scientific journal Nature Biotechnology, Stowers Institute scientists report the development of a new method called ChIP-nexus that can precisely and reliably map these sites, vastly outperforming previous techniques. Stowers Associate Investigator Julia Zeitlinger, Ph.D., who led the study, explains that researchers can use the new method to understand how transcription factors interact with DNA to control gene expression. For example, the patented technique has already shown that transcription factors binding sites are not scattered across the genome as previously thought, but rather appear in specific, predictable sequences.
About the Stowers Institute for Medical Research
The Stowers Institute for Medical Research is a non-profit, basic biomedical research organization dedicated to improving human health by studying the fundamental processes of life. Jim Stowers, founder of American Century Investments, and his wife, Virginia, opened the Institute in 2000. Since then, the Institute has spent over 900 million dollars in pursuit of its mission.
Currently, the Institute is home to almost 550 researchers and support personnel; over 20 independent research programs; and more than a dozen technology-development and core facilities.
Kim Bland, Head, Science Communications
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
19.07.2018 | Materials Sciences
19.07.2018 | Earth Sciences
19.07.2018 | Life Sciences
The movement strategies of birds and mammals are often closely linked to their mating system, but few studies have examined the relationship between mating systems and movement in fishes. We examined the movement patterns of the guppy ( Poecilia reticulata) in the Arima river of Trinidad and predicted that sexual asymmetry in reproductive investment would result in male-biased movement. Since male guppies maximize their reproductive success by mating with as many different females as possible, there should be strong selection for males to move in search of mates. In agreement with our prediction, the percentage of fish that emigrated from release pools was higher for males than females (27.3% vs. 6.9%, respectively). Sex ratio was highly variable among pools and may influence a male's decision to emigrate or continue moving. We also detected a positive relationship between body length and the probability of emigration for males and a significant bias for upstream movement by males. Among the few females that did emigrate, a positive correlation was observed between body length and distance moved. Sex-biased movement appears to be related to mating systems in fishes, but the evidence is very limited. Given the implications for ecology, evolution, and conservation, future studies should explicitly address the influence of sex and mating systems on movement patterns.
Climate around the world and even in the United States varies a lot. Places near the equator tend to be hot and humid most of the year. The sun’s rays are most intense at the equator. Areas near the North and South poles are cold much of the year. Some areas get little rain, followed by rainy seasons or monsoons. Others get moisture year-round, while desert areas are almost always dry.
Climate is controlled by many factors. Areas near oceans and large lakes often stay warmer than dry, inland areas. If you live in a mountainous area at high altitude, your climate is probably cooler and more intense than an area at sea level. Winds blowing in from the oceans also influence weather.
Climate is important. It determines what types of houses people build and clothing they wear. Climate influences the crops people grow and the food they eat. Climate even controls things like traditions, games and play. If you live in an area where it’s cold most of the year, you probably won’t play outside as much as a child living in a warm, mild climate.
Fun Facts about Climate Around the World for Kids
- Climate and weather are not the same things. Weather is short term, while climate is the long-term pattern, commonly summarized as "Climate is what you expect, weather is what you get"
- Latitude, altitude, terrain and nearby water bodies all affect the climate of a place
- Koppen classification of climates uses average monthly measurements of temperature and precipitation
- Modern day instruments such as anemometers, barometers and thermometers are used to study weather and determine climate changes over the past few centuries
- Climate change comes from changes in regional or global climates over decades or up to millions of years
- The most commonly referred to climate change today is the rise of the earth’s average surface temperature called global warming
- Scientists try to predict climate changes using models to simulate how the atmosphere, oceans, land surfaces and ice will interact
Climate Around the World Vocabulary
- Climate: Weather averaged over a long period of time
- Anemometer: Instrument used to measure and record wind speed
- Interact: Two or more things that act upon each other
- Latitude: distance measured from the earth’s equator either north or south
- Altitude: Height or distance from sea level of a place
- Classification: distribution into groups according to some common relation or attribute
- Barometer: Instrument used to measure atmospheric pressure
- Regional: Of a particular tract of land or area
All about Climate Around the World Video for Kids
A video documentary all about climate around the world, explaining why there are different climates in different places.
Climate Around the World Q&A
Question: What do I need to do if I want to study climates and weather?
Answer: To learn more about the weather and climates you can pay attention to daily temperatures. Your family may have a thermometer outside their house or you could listen to your local weather forecasts. You could even look up how to make your own monitoring devices like a barometer. In school you will want to study your sciences and math as they will provide you with the basics to study more about these things in college.
Question: Why are the climates at the earth’s poles so very cold?
Answer: The poles of the earth are much colder than the rest of the planet because the sun's rays strike them at a low angle, so the same amount of energy is spread over a much larger area. At the equator, at the wide center of the earth, the rays strike almost head-on and deliver the most warming.
Question: Can people live in every type of climate?
Answer: People can survive in every type of climate, but not always without the help of modern technology. If people are exposed to some climates without proper protection they can die. Modern devices like air conditioning protect people from extreme heat and make things more comfortable for people in hot and humid climates. Science has also developed extremely insulating materials that help protect people from extreme cold as well.
Cite This Page
You may cut-and-paste the below MLA and APA citation examples:
MLA Style Citation
Declan, Tobin. " World Climate Facts for Kids ." Easy Science for Kids, Jul 2018. Web. 20 Jul 2018. < http://easyscienceforkids.com/all-about-climate-around-the-world/ >.
APA Style Citation
Tobin, Declan. (2018). World Climate Facts for Kids. Easy Science for Kids. Retrieved from http://easyscienceforkids.com/all-about-climate-around-the-world/
Topic: Electroreception in fish, amphibians and monotremes
From an evolutionary point of view, electroreception is particularly intriguing as a sense modality that has been repeatedly lost and reinvented again.
Some animals have evolved a most astonishing sensory capacity – they are able to detect naturally occurring electric fields in the microvolt or even nanovolt range with the help of specialised receptors. The electric signals, which convey information about the structure of the environment and the activity of other animals, are processed in specific regions of the brain. This passive electric sense can be employed for navigation, obstacle avoidance or prey detection and is particularly useful in conditions that limit the use of other senses, e.g. at night or in murky waters. Some fish, known as electric fish, have gone one step further and evolved an active electric sense. With the help of an electric organ, they can generate weak or strong electric fields and use them for electrical communication, active electrolocation or, in case of strong electric discharges, even for stunning prey.
As electroreception needs a conductive medium, it is generally limited to aquatic and partly aquatic species. So far, it has only been convincingly demonstrated in vertebrates – it can be found in many marine and freshwater fishes, some amphibians and, most remarkably, monotreme mammals. Speculations that the largely aquatic star-nosed mole (Condylura cristata) also possesses electroreceptors on the highly touch-sensitive tentacles surrounding its nostrils have not been confirmed (but these animals give us important insights into tactile sensation and specifically a series of striking convergences with the eye). Also the recently reported response to electric fields of two species of freshwater crayfish, Cherax destructor and Procambarus clarkii, is most likely not due to a specialized electric sense, as the behavioural thresholds were extremely high and electroreceptors could not be identified. It seems reasonable to predict, however, that, given its potential advantages, electroreception will eventually be demonstrated in other groups of animals as well, possibly birds (e.g. wading birds that probe the soil with their bill to find food, where again tactile sensitivity and the convergence of touch receptors are well known), reptiles or invertebrates.
Electroreceptors can be found in lampreys as well as in many groups of true fish, where they may be limited to the head or distributed across the whole surface of the body. Cartilaginous fish (elasmobranchs: sharks, rays and skates; holocephalans: chimaeras), non-teleost ray-finned fish (polypterids: bichirs and reedfish; acipenseriforms: sturgeons and paddlefish), some teleosts (siluriforms: catfish; gymnotids: American knifefish; gymnarchids: African knifefish; mormyrids: elephantfish), lungfish and coelacanths all are electroreceptive and use this sense mainly for prey capture. Recent studies of the Mississippi River paddlefish (Polyodon spathula), for example, have shown that its huge rostrum functions as an electrical antenna for detecting the electric signals of planktonic crustaceans. It might also help the fish to orientate during migration to their spawning grounds.
Among amphibians, electroreception has been demonstrated in several aquatic caecilians and urodeles (salamanders) but to date not in anurans (frogs). That anurans appear to lack an electric sense might be due to the mainly non-predatory lifestyle of their tadpoles. Examples of electroreceptive urodele species are the giant salamander (Andrias davidianus), the axolotl (Ambystoma mexicanum) and the olm (Proteus anguinus). Particularly for the latter, the value of an electric sense is obvious – it lives in caves and is almost blind, so it can detect and recognise prey items by their electric fields. Generally, the main function of the electrosensory system in amphibians seems to be the localisation of prey objects.
Three living species of monotreme have been shown to be capable of electroreception – the Australian duck-billed platypus (Ornithorhynchus anatinus) as well as two species of echidna, the Australian short-beaked echidna (Tachyglossus aculeatus) and the Western long-beaked echidna (Zaglossus bruijnii) of New Guinea. The electric sense is best studied in the platypus. It had long puzzled scientists how platypuses manage to catch large amounts of invertebrate prey in murky streams at night with their eyes, ears and nostrils closed. The mystery was solved when they discovered that the bill skin is laced with push rod mechanoreceptors and electroreceptors. While the push rods are distributed uniformly across the bill surface, the 40,000 electroreceptors are arranged in a series of stripes, which probably aids the localisation of prey. The platypus electroreceptive system is highly directional, with the axis of greatest sensitivity pointing outwards and downwards. By making short latency head movements called saccades when swimming, platypuses constantly expose the most sensitive part of their bill to the stimulus to localise prey as accurately as possible. This behaviour is comparable to that shown by barn owls that orient towards an acoustic stimulus.
The electroreceptive system of echidnas is structurally similar to that of platypus but far less complex. In contrast to the 40,000 electroreceptors found on the platypus bill, Western long-beaked echidnas possess only 2,000 receptors and short-beaked echidnas merely 400 that are concentrated in the tip of the snout. Thus, echidnas have obviously experienced a reduction in their electroreceptive abilities, most likely due to the environment they live in. While platypus is largely aquatic, echidnas are terrestrial, although their terrestrial lifestyle is probably secondarily derived from a semi-aquatic ancestor, as suggested in a recent paper on monotreme phylogenetic relationships (Phillips et al. 2009, PNAS). Western long-beaked echidnas live in wet tropical montane forests, where they feed on earthworms in damp leaf litter. So their habitat is probably still quite favourable to the reception of electrical signals, contrary to the varied but generally more arid habitat of their short-beaked relative. However, short-beaked echidnas are particularly active after rain and readily feed on termites and ants, digging tunnels into their nests, where it might be humid enough to detect electric fields. Furthermore, the tip of their snout is constantly wet, which might enhance electroreception. So although it seems likely that echidnas use electroreception at close range to identify live objects, it remains to be shown how behaviourally relevant their electric sense actually is.
Evidence of convergence
From an evolutionary point of view, electroreception is particularly intriguing as a sense modality that has been repeatedly lost and reinvented again. As an electric sense is found in phylogenetically old groups such as sturgeons, it is generally considered to be no more recent than other vertebrate sensory systems, although its real evolutionary origin is still unknown. It is, however, fairly safe to assume that electroreception evolved once in basal vertebrates, was lost in the common ancestor of holosteans (gars and bowfin) and teleosts and then re-evolved independently at least twice in teleost fish. Electroreception has been demonstrated in two distantly related teleost lineages, the osteoglossomorphs, which contain the African mormyrids and gymnarchids, and the ostariophysans, which include the South American gymnotids and the widely distributed catfish. While amphibians most likely inherited their electric sense from their fish ancestors and some groups then lost it, monotreme mammals almost certainly acquired electroreception independently. These separate evolutionary origins are reflected by differences in the morphology of the electroreceptors and processing of the electric signals in the brain.
In all fish and amphibians, the electroreceptors are a secondary cell system (such as in the eye and ear), where a specialized receptor hair cell responds to a stimulus by producing a receptor potential, which then activates a primary sensory neuron. However, there are differences between different groups, e.g. with respect to receptor type and the nature of the stimulating signals. Most fish and amphibians possess cathodally sensitive ampullary receptors (known as “Ampullae of Lorenzini” in elasmobranchs). They consist of a jelly-filled canal, which opens to the surface by a pore in the skin, and are excited by negative pulses and inhibited by positive ones. In contrast, the electroreceptive teleosts have evolved anodally sensitive ampullary electroreceptors as well as tuberous receptors. Tuberous receptors are not connected to the surface and able to respond to the discharges of the electric organs of these electric fish.
The electroreceptors of monotremes are completely different in that they are modified mucous and serous glands. This is ideal for animals that are not fully aquatic, because the association with a gland helps to maintain conductivity and prevent desiccation. In contrast to fish and amphibians, the nerve endings, which are arranged in a daisy chain around the pore of the gland, are naked and not tipped by a sensory cell. Thus, there is no peripheral synapse involved in the transduction of the electric stimuli. The receptors are excited by negative pulses.
Innervation and brain structures
What all electroreceptors have in common is that only afferent nerves are present, which carry nerve impulses to the brain. However, while in all fish and amphibians the receptors are derived from the acoustic-lateralis system and innervated by the 8th cranial nerve (lateral line nerve), the receptors of monotremes are supplied by the 5th cranial nerve (trigeminal nerve). This provides further evidence for independent evolutionary origins.
The afferents project to particular regions of the brain, which also differ between groups. In non-teleost fish and amphibians, the electroreceptor region of the brain is the dorsal nucleus of the medulla. This dorsal nucleus is not found in teleosts. Instead, their electroreceptor afferents are received in a special portion of the medial nucleus known as the electrosensory lateral line lobe (ELLL), which has evolved at least twice independently. In monotremes, the electrical signals are processed in the somatosensory neocortex of the forebrain.
In the elaborate neocortex of platypus, a detailed topographic representation of the bill surface exists, which, in combination with a representation of different field strengths at each point on the bill, allows for highly sophisticated signal processing. Another special feature of the platypus brain is the intimate association between mechano- and electroreceptive neurons (in fish and amphibians, such an association is not prominent). There are alternating rows of mechanosensory neurons and bimodal neurons, which receive electrosensory and mechanical input (so there are no neurons that only respond to electrical input). This stripe-like array is evocative of the primary visual cortex in primates, which integrates input from the two eyes. It has been speculated that the bimodal neurons allow the platypus to estimate the absolute distance of prey: The electrical signal generated by a moving prey organism will reach the bill before the mechanical waves. Thus, there will be a certain time delay between the two signals that changes with distance to the prey. If the bimodal neurons in the neocortex were sensitive to a certain time-of-arrival difference, this would provide a direct read-out of distance (quite similar to echolocating bats).
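The time-of-arrival idea can be put into rough numbers. The sketch below is illustrative only; the wave speed and delays are assumed values, not figures from this article. The electric signal arrives essentially instantaneously, while the mechanical wave travels at a finite speed, so the delay between the two maps directly to distance:

```python
# Back-of-envelope sketch of the time-of-arrival idea (assumed values,
# for illustration only): the electric signal from moving prey arrives
# essentially instantaneously, the mechanical wave at a finite speed,
# so the arrival delay encodes the prey's distance.
SOUND_SPEED_WATER = 1480.0  # m/s, fresh water at ~20 C (assumed figure)

def prey_distance(delay_s, wave_speed=SOUND_SPEED_WATER):
    """Distance implied by the electrical-vs-mechanical arrival delay."""
    return wave_speed * delay_s

for delay_ms in (0.05, 0.1, 0.2):
    d = prey_distance(delay_ms / 1000.0)
    print(f"delay {delay_ms} ms -> distance {d * 100:.0f} cm")
```

On this estimate, sub-millisecond delays already correspond to prey at centimetre-to-decimetre range, which is consistent with a close-range sense.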
Cite this web page
Map of Life - "Electroreception in fish, amphibians and monotremes"
July 21, 2018
J.2 Allowed Replacements of Characters

The following replacements are allowed for the vertical line, number sign, and quotation mark characters:

- A vertical line character (|) can be replaced by an exclamation mark (!) where used as a delimiter.
- The number sign characters (#) of a based_literal can be replaced by colons (:) provided that the replacement is done for both occurrences.

To be honest: The intent is that such a replacement works in the Value and Wide_Value attributes, and in the Get procedures of Text_IO, so that things like ``16:.123:'' are acceptable.

- The quotation marks (") used as string brackets at both ends of a string literal can be replaced by percent signs (%) provided that the enclosed sequence of characters contains no quotation mark, and provided that both string brackets are replaced. Any percent sign within the sequence of characters shall then be doubled and each such doubled percent sign is interpreted as a single percent sign character value.

These replacements do not change the meaning of the program.

Reason: The original purpose of this feature was to support hardware (for example, teletype machines) that has long been obsolete. The feature is no longer necessary for that reason. Another use of the feature has been to replace the vertical line character (|) when using certain hardware that treats that character as a (non-English) letter. The feature is no longer necessary for that reason, either, since Ada 95 has full support for international character sets. Therefore, we believe this feature is no longer necessary.

Users of equipment that still uses | to represent a letter will continue to do so. Perhaps by the next time Ada is revised, such equipment will no longer be in use.

Note that it was never legal to use this feature as a convenient method of including double quotes in a string without doubling them -- the string

%"This is quoted."%

is not legal in Ada 83, nor will it be in Ada 95. One has to write:

"""This is quoted."""
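To illustrate the percent-sign rule, here is a small Python sketch (the function name and behaviour are my own, not part of any Ada tool) that decodes a %-bracketed string literal into its character sequence, enforcing the no-quotation-mark and doubled-percent rules described above:

```python
def decode_percent_string(literal):
    """Decode an Ada string literal that uses % as string brackets.

    Illustrative only. Returns the enclosed character sequence, with
    each doubled %% collapsed to a single %, per the rules in J.2.
    """
    if len(literal) < 2 or literal[0] != "%" or literal[-1] != "%":
        raise ValueError("not a %-bracketed string literal")
    body = literal[1:-1]
    if '"' in body:
        raise ValueError("enclosed sequence may not contain a quotation mark")
    # After removing all doubled %%, any % left over was unpaired, hence illegal.
    if body.replace("%%", "").count("%"):
        raise ValueError("percent signs inside the string must be doubled")
    return body.replace("%%", "%")

print(decode_percent_string("%100%% pure%"))  # -> 100% pure
```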
How does travelling at lightspeed affect time?
What was Newton's pivotal text about?
Organic matter has been found on Mars, what does this mean?
Speedy Stumpers for Smart Scientists.
Are there dunes on Pluto?
Should we colonise space?
What health issues might humanity face in space?
What would a trip into space feel like?
Could we one day take holidays on another planet?
Is there life on the red planet? How can we find out?
How would we find a suitable new planet? Are there any planets with atmospheres like ours?
Would you leave Earth if you had to? Why would we need to leave at all?
What does the space race have to do with better contact lenses and safer eye surgeries? This week's Down to Earth...
How the space race spawned the selfie…
The first evidence of a black hole swarm at the centre of our galaxy...
How life found a way on one of the world's driest deserts...
Who will be The Naked Scientists' big brain of the week?
Astronomers explore a planetary system very similar to our own.
The most powerful rocket ever built takes to the skies.
... The biggest and most advanced telescope dish ever made.
How will this giant sunshield unfold in space?
What are the challenges ahead of launch for JWST? Are astronomers ready?
Imagine folding up a a giant telescope, like origami, squashing it into a rocket and blasting it one million miles into...
What's on the horizon for space commerce in 2018?
We celebrate 50 years since the discovery that changed astronomy forever
Denial may be the easiest route now, but it likely will not be an option in years to come.
NASA scientist James Hansen gave a public lecture at the University of Utah on Monday, outlining the current and expected impacts that climate change will have on the Earth and the ecosystems that populate it.
The effects can be seen today, Hansen said, at shrinking glacier fields and in bodies of water such as Lake Mead and Lake Powell that are at half capacity.
But Hansen isn't concerned about the hardships that global climate change may have on his life. They're almost nominal when compared with what his grandchildren will see, he said. That is why he wants mitigate the effects now and help preserve an adequate habitat for future generations.
"The Earth belongs to future generations," said Hansen, who heads NASA's Goddard Institute of Space Studies, "and we have the obligation of returning it to them in equal or better condition."
According to Hansen's research, the Earth's atmosphere is currently populated with 385 parts per million of carbon dioxide. His findings show that "safe" or "balanced" levels of carbon dioxide that the atmosphere can support are 350 parts per million.
As the amount of carbon dioxide in the atmosphere continues to rise, the resulting environmental changes risk becoming irreversible, with devastating consequences for the ecosystems that humans depend on, Hansen said.
The problems that stand in the way of offsetting climate change are many, he said. The disconnect between validated climate research and the public's understanding of the issue has been among the most prominent hurdles, Hansen said.
"We struggle to educate, because most people are worried about their jobs and the economy and not 50 years from now," Hansen said. "It should start on a person-to-person basis, and it's difficult because of the amount of misinformation."
Though it would take significant changes from today's standards, Hansen said it is possible to begin the process of reducing the amount of carbon dioxide in the atmosphere. He suggested pushing research and development of energy-efficient technology, renewable energy and an improved electric grid to facilitate a more proactive flow of energy.
Hansen acknowledged that these investments aren't feasible in the current political climate, saying coal and gas money have tainted the reasoning skills of elected leaders. To bring about the necessary investments and regulation to preserve the climate, Hansen said, more fruitful dialogues with elected leaders have to begin. After that, if long-term solutions aren't being considered, public protests and the courts may be the best road to legislation, he said.
"I thought the lecture was wonderful," said Cindy King, a Salt Lake City resident who attended the event. "I trust his work, and Utah needs to get a little busier at curbing our pollution."
The urgency behind Hansen's lecture struck a chord with the youth in the audience.
Canyon Evans, a U. senior majoring in environmental studies, said he has to act because his future and the future of his children are at risk. The public is divided on the issue of climate change, Evans said, but the consequences of disregarding science for convenience don't seem like a fair trade for his future.
"I don't care what people say when you print this," he said. "It's my future."
E-MAIL: [email protected]
|Time limit||Memory limit||Submissions||Accepted||Solvers||Acceptance rate|
|2 seconds||512 MB||55||42||36||81.818%|
Count the divisors of every value in the range [L, U] (both L and U included) and return the biggest divisor count you can find.
The first line will contain an integer C with the number of ranges to process. The next C lines will contain a pair of integers L, U.
You have to count the divisors for each number in the range and output the biggest count.
For each range a line containing the biggest divisor count found.
5
1 10
1000 1000
9999900 10000000
35 999
25 25

4
16
256
32
3
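A straightforward way to solve the problem (an illustrative sketch, not an official judge solution) is trial division up to the square root of each number in the range:

```python
import math

def divisor_count(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count = 0
    r = math.isqrt(n)
    for d in range(1, r + 1):
        if n % d == 0:
            count += 2          # d and n // d form a divisor pair
    if r * r == n:
        count -= 1              # perfect square: sqrt was counted twice
    return count

def max_divisor_count(lo, hi):
    """Biggest divisor count over the inclusive range [lo, hi]."""
    return max(divisor_count(n) for n in range(lo, hi + 1))

# The sample ranges from the problem statement:
for lo, hi in [(1, 10), (1000, 1000), (9999900, 10000000), (35, 999), (25, 25)]:
    print(max_divisor_count(lo, hi))
```

Since each divisor check costs O(sqrt(U)) and the sample ranges are narrow, this runs comfortably within a 2-second limit; wide ranges would call for a sieve instead.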
They appear when particles in the electrically conductive gas (plasma) called the solar wind hit the earth’s atmosphere at an altitude of roughly 100 kilometers above the surface of the earth. The process that opens up the earth’s protective magnetic field to the solar wind is the subject of Lars-Göran Westerberg’s research project at Luleå University of Technology in Sweden.
The fact that our society is also affected by solar activity and solar wind was first observed by seafarers, who noticed that their compass navigation was sometimes unreliable. However, no one could explain then that this was the result of solar wind impacting the magnetic field.
Our earth is surrounded by a protective package of magnetic field lines called the magnetosphere. The system can be likened to an onion, with the earth as the center and the many layers of peels representing different strata in the magnetosphere. On planets and heavenly bodies that have no magnetic field, such as the moon and Venus, the solar wind particles hit the surface directly. This seriously reduces the chances of there being life there. Most of the solar wind that hits the earth’s magnetosphere goes past without coming into contact with the earth, which in turn is of crucial importance to the evolution of life that took place and continues to take place.
Be that as it may, even though the magnetosphere constitutes an effective shield against solar wind, plasma can nevertheless penetrate the so-called magnetopause, the outer layer of the magnetosphere. By following the magnetic field lines, the charged particles make their way toward the earth and the atmosphere, resulting in displays of northern lights, for one thing.
Solar wind can get through the magnetosphere in different ways. The dominant mechanism is called magnetic coupling. This means that the magnetic field stored in the solar wind interacts with the magnetic field of the earth, with the two fields merging and thereby forming two new field configurations.
This converts magnetic energy in the solar wind into kinetic energy, which makes the plasma accelerate and flow into the earth’s magnetosphere via plasma rays. Magnetic coupling is a central process for converting magnetic energy to kinetic energy. It is involved in all space applications in which two magnetic fields cross each other, and the majority of the research into magnetic coupling targets the physics that underlies the process.
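To give a feel for this energy conversion, here is an order-of-magnitude sketch. The numbers and the calculation are a standard textbook estimate, not results from Westerberg's project: in magnetic reconnection, the outflowing plasma is accelerated to roughly the Alfvén speed of the surrounding plasma.

```python
import math

# Order-of-magnitude sketch (typical near-Earth solar wind values are
# assumed): reconnection outflow speed ~ Alfven speed v_A = B / sqrt(mu0 * rho).
MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
M_PROTON = 1.67e-27    # proton mass, kg

def alfven_speed(b_tesla, n_per_m3):
    rho = n_per_m3 * M_PROTON              # mass density (protons only)
    return b_tesla / math.sqrt(MU0 * rho)

# Typical solar wind near Earth: B ~ 5 nT, n ~ 5 protons per cm^3
v = alfven_speed(5e-9, 5e6)
print(f"Alfven speed ~ {v / 1e3:.0f} km/s")   # on the order of 50 km/s
```

So even modest magnetic fields, once reconnected, can fling plasma at tens of kilometres per second, which is why the flow structure around a coupling site changes so radically.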
“The aim of my research project is to study the more global effects of magnetic coupling on the ambient plasma. When the process takes place, the structure and behavior of the solar wind flow is radically altered around the area where the coupling takes place. This is reflected in the dynamics of the solar wind when it ultimately reaches the earth’s atmosphere,” says Lars-Göran Westerberg.
In “The Interaction of Solar Wind with the Earth’s Magnetic Field,” Lars-Göran Westerberg has applied newly developed theories together with computer simulations and measurements performed by the Cluster satellites. Cluster is a project involving the European and American space agencies, consisting of four satellites that circle the earth in formation.
“The fact that there are four of them makes the measurements much more useful than any from a single satellite. With four satellites, it’s possible to take measurements simultaneously from different places in an area where coupling is occurring. By studying how magnetic coupling impacts the local area, we gain enhanced knowledge of how the process is controlled by the behavior of the prevailing solar wind and also what the consequences are in an area several earth radii from the site where the coupling originates,” explains Lars-Göran Westerberg.
An understanding of this process is also crucial because it represents a central mechanism for converting energy in space physics and, at the same time, is a direct result of sun/earth interaction that impacts the environment of the earth.
Lars-Göran Westerberg is involved in the Swedish national research school in space engineering that is coordinated by Luleå University of Technology.
Lena Edenbrink | alfa
Internal conversion is possible whenever gamma decay is possible, except in the case where the atom is fully ionised. During internal conversion, the atomic number does not change, and thus (as is the case with gamma decay) no transmutation of one element to another takes place.
Since an electron is lost from the atom, a hole appears in an electron shell which is subsequently filled by other electrons that descend to that empty, lower energy level, and in the process emit characteristic X-ray(s), Auger electron(s), or both. The atom thus emits high-energy electrons and X-ray photons, none of which originate in that nucleus. The atom supplied the energy needed to eject the electron, which in turn caused the latter events and the other emissions.
Since primary electrons from internal conversion carry a fixed (large) part of the characteristic decay energy, they have a discrete energy spectrum, rather than the spread (continuous) spectrum characteristic of beta particles. Whereas the energy spectrum of beta particles plots as a broad hump, the energy spectrum of internally converted electrons plots as a single sharp peak (see example below).
In the quantum mechanical model of the electron, there is a finite probability of finding the electron within the nucleus. During the internal conversion process, the wavefunction of an inner shell electron (usually an s electron) is said to penetrate the volume of the atomic nucleus. When this happens, the electron may couple to an excited energy state of the nucleus and take the energy of the nuclear transition directly, without an intermediate gamma ray being first produced. The kinetic energy of the emitted electron is equal to the transition energy in the nucleus, minus the binding energy of the electron to the atom.
Most internal conversion (IC) electrons come from the K shell (the 1s state), as these two electrons have the highest probability of being within the nucleus. However, the s states in the L, M, and N shells (i.e., the 2s, 3s, and 4s states) are also able to couple to the nuclear fields and cause IC electron ejections from those shells (called L or M or N internal conversion). Ratios of K-shell to other L, M, or N shell internal conversion probabilities for various nuclides have been prepared.
An amount of energy exceeding the atomic binding energy of the s electron must be supplied to that electron in order to eject it from the atom to result in IC; that is to say, internal conversion cannot happen if the decay energy of the nucleus is less than a certain threshold. There are a few radionuclides in which the decay energy is not sufficient to convert (eject) a 1s (K shell) electron, and these nuclides, to decay by internal conversion, must decay by ejecting electrons from the L or M or N shells (i.e., by ejecting 2s, 3s, or 4s electrons) as these binding energies are lower.
Although s electrons are more likely for IC processes due to their superior nuclear penetration compared to electrons with orbital angular momentum, spectral studies show that p electrons (from shells L and higher) are occasionally ejected in the IC process. | <urn:uuid:1da0cea4-5e0d-4850-bd6e-ab16d4d5a268> | 3.875 | 647 | Knowledge Article | Science & Tech. | 37.047231 | 95,588,378 |
any of the 2,500 insect species constituting the family Formicidae of the order Hymenoptera, to which the bee and the wasp also belong. Like most members of the order, ants have a "wasp waist," that is, the front part of the abdomen forms a narrow stalk, called the waist, or pedicel, that attaches to the thorax. The wings, when present, are also typical of the order; the small hind pair of wings is attached to the rear edge of the front pair. The head has two bent antennae, used both as organs of touch and as chemosensory organs. In most species there are two compound eyes. The jaws are of the biting type and in some species are used for defense. Some ants have stings, and some can spray poison from the end of the abdomen. Most ants are black, brown, red, or yellow. Metamorphosis is complete. A soft, legless, white larva hatches from the egg; in most species it is completely helpless and must be fed and carried by adults. In some species pupation occurs within a cocoon. Ants are cosmopolitan in distribution.
All species show some degree of social organization; many species nest in a system of tunnels, or galleries, in the soil, often under a dome, or hill, of excavated earth, sand, or debris. Mound-building ants may construct hills up to 5 ft (1.5 m) high. Other species nest in cavities in dead wood, in living plant tissue, or in papery nests attached to twigs or rocks; some invade buildings or ships. Colonies range in size from a few dozen to half a million or more individuals. Typically they include three castes: winged, fertile females, or queens; wingless, infertile females, or workers; and winged males. Those ordinarily seen are workers. In some colonies ants of the worker type may become soldiers or members of other specialized castes.
Whenever a generation of queens and males matures it leaves on a mating flight; shortly afterward the males die, and each fecundated queen returns to earth to establish a new colony. The queen then bites off or scrapes off her wings, excavates a chamber, and proceeds to lay eggs for the rest of her life (up to 15 years), fertilizing most of them with stored sperm. Females develop from fertilized and males from unfertilized eggs. The females become queens or workers, depending on the type of nutrition they receive. The first-generation larvae are fed by the queen with her saliva; all develop into workers, which enlarge the nest and care for the queen and the later generations. It is thought that the production of males by the queen and the rearing of new queens by the workers may be controlled by hormonal secretions of all the members of the colony. There are many variations on the basic pattern of new colony formation. In some species the queen cannot establish a colony herself and is adopted by workers of another colony. Slave-making ants raid the nests of other ant species and carry off larvae or pupae to serve as workers; in a few slave-making species the adults cannot feed themselves.
Different species differ widely in their diets and may be carnivorous, herbivorous, or omnivorous. Members of some species eat honeydew from plants infested with aphids and certain other insects; others, called dairying ants, feed and protect the aphids and "milk" them by stroking. Harvester ants eat and store seeds; these sometimes sprout around the nest, leading to the erroneous belief that these ants cultivate their food. However, cultivation is practiced by certain ants that feed on fungi grown in the nest. Some of these, called leaf-cutter, or parasol, ants, carry large pieces of leaf to the nest, where the macerated leaf tissue is used as a growth medium for the fungus. Most leaf cutters are tropical, but the Texas leaf-cutting ant is a serious crop pest in North America. The army ants of the New World tropics and the driver ants of tropical Africa are carnivorous, nomadic species with no permanent nests. They travel like armies in long columns, overrunning and devouring animals that cannot flee their path; the African species even consume large mammals.
Ants as a group are beneficial to humans. Their tunneling mixes and aerates the soil, in some places replacing the activity of earthworms. Many species feed on small insects that are serious crop pests. House pests among the North American ants include the yellowish pharaoh ant, the little black ant, the odorous house ant, the Argentine ant of warm climates, and the black carpenter ant. Carpenter ants tunnel in wood but do not feed on it. The fire ant, which has a painful bite, is a serious pest to humans and livestock in many parts of the South.
Ants are classified in the phylum Arthropoda, class Insecta, order Hymenoptera, family Formicidae.
Basaltic Volcanism on the Terrestrial Planets
Publisher: Pergamon Press 1981
Number of pages: 1286
The theme of this book is the study of basaltic volcanism on the terrestrial planets as a stage in planetary evolution: to use the eruption of lava from the interior of a planet as evidence of the thermal and chemical processes of the planet.
by Don E. Wilhelms - University of Arizona Press
Don Wilhelms was a member of the Apollo Scientific Team. In this book he describes his role, along with his colleagues, during the Apollo explorations of the Moon. He presents a brief history of the theories associated with the origin of the moon.
by Michael H. Carr - NASA
The knowledge gained through space exploration is leading to the new science of comparative planetology. This book outlines the geologic history of the terrestrial planets in light of recent exploration and the revolution in geologic thinking.
- Rice University
This 1400+ pages book covers the very rapidly growing area of star-and-planet formation and evolution, from astrophysics to planetary science. It is most useful for researchers, graduate students, and some undergraduate students.
- National Aeronautics and Space Administration
Passing by Jupiter in 1979, the Voyager spacecraft have collected an enormous amount of data that may prove to be a keystone in understanding our solar system. This publication provides an early look at the Jovian planetary system ... | <urn:uuid:e7914ba8-b576-4b8f-ad52-5792cbb67075> | 2.9375 | 310 | Content Listing | Science & Tech. | 34.6875 | 95,588,393 |
Environmentally Driven Plasticity
The two major environmental parameters which have the greatest impact on the growth forms of marine sessile organisms are light, required for photosynthesis, and hydrodynamics. A full discussion of the physics of underwater light distributions and hydrodynamics could easily cover a few textbooks. In Box 2.1 the basic hydrodynamic laws are summarized, together with two dimensionless parameters, the Reynolds number Re (2.3) and the Péclet number Pe (2.4), which can be used to characterize the impact of the flow on the organism. In Sect. 2.1.1 “Growing and flowing” we will focus on the biomechanical impact of hydrodynamics on the growth process and try to construct a number of laws for the biomechanical impact of hydrodynamics using an engineering approach. In Sect. 4.3 we will return to the topic of hydrodynamics, from a modeling point of view and try to construct a computational method capable of capturing the influence of hydrodynamics in models of growth and form of marine sessile organisms. In Box 2.2 the basic equations, in a highly simplified form, of underwater light distributions are shown. To a certain extent, in contrast with the hydrodynamic equations, these simplified equations can more or less straightforwardly be included in computational models; this will be discussed in Chap. 4.
Keywords: Coral Reef, Growth Form, Flow Speed, Hydrodynamic Force, Brown Seaweed
NHGRI researchers' novel approach compared iPSC to subcloned cells
It's been more than 10 years since Japanese researchers Shinya Yamanaka, M.D., Ph.D., and his graduate student Kazutoshi Takahashi, Ph.D., developed the breakthrough technique to return any adult cell to its earliest stage of development (a pluripotent stem cell) and change it into different types of cells in the body. Called induced pluripotent stem cells (iPSCs), this technique opens the doors to medical advances, including generating cartilage cell tissue to repair knees, retinal cells to improve the vision of those with age-related macular degeneration and other eye diseases, and cardiac cells to restore damaged heart tissues.
Induced pluripotent stem cells (iPSCs) -- stem cells that are capable of differentiating into one of many cell types -- are a technique that opens the doors to medical advances, including generating cartilage cell tissue to repair knees, retinal cells to improve the vision of those with age-related macular degeneration and other eye diseases, and cardiac cells to restore damaged heart tissues.
Credit: Darryl Leja, NHGRI
Despite its immense promise, adoption of iPSCs in biomedical research and medicine has been slowed by concerns that these cells are prone to increased numbers of genetic mutations.
A new study by scientists at the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health, suggests that iPSCs do not develop more mutations than cells that are duplicated by subcloning. Subcloning is a technique where single cells are cultured individually and then grown into a cell line. The technique is similar to iPSC production, except that the subcloned cells are not treated with the reprogramming factors, which were thought to cause mutations. The researchers published their findings on February 6, 2017, in the Proceedings of the National Academy of Sciences.
"This technology will eventually change how doctors treat diseases. These findings suggest that the question of safety shouldn't impede research using iPSC," said Pu Paul Liu, M.D., Ph.D., co-author, senior investigator in NHGRI's Translational and Functional Genomics Branch and deputy scientific director for the Division of Intramural Research.
Dr. Liu and his collaborators examined two sets of donated cells: one set from a healthy person and the second set from a person with a blood disease called familial platelet disorder. Using skin cells from the same donor, they created genetically identical copies of the cells using both the iPSC and the subcloning techniques. They then sequenced the DNA of the skin cells as well as the iPSCs and the subcloned cells and determined that mutations occurred at the same rate in cells that were reprogrammed and in cells that were subcloned.
Most genetic variants detected in the iPSCs and subclones were rare genetic variants inherited from the parent skin cells. This finding suggests that most mutations in iPSCs are not generated during the reprogramming or iPSC production phase and provides evidence that iPSCs are stable and safe to use for both basic and clinical research, Dr. Liu said.
"Based on this data, we plan to start using iPSCs to gain a deeper understanding of how diseases start and progress," said Erika Mijin Kwon, Ph.D., co-author and NHGRI post-doctoral research fellow. "We eventually hope to develop new therapies to treat patients with leukemia using their own iPSCs. We encourage other researchers to embrace the use of iPSCs."
Jeannine Mjoseth | EurekAlert!
O2 stable hydrogenases for applications
23.07.2018 | Max-Planck-Institut für Chemische Energiekonversion
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
“Soaring to the depths of our universe, gallant spacecraft roam the cosmos, snapping images of celestial wonders,” the space agency said. “Some spacecraft have instruments capable of capturing radio emissions. When scientists convert these to sound waves, the results are eerie to hear.”
Even the names of the tracks are mysterious. There’s “Radar Echoes From Titan’s Surface,” a pulsing rhythm suddenly interrupted by some electronic blips followed by an ominous revving sound. “Plasmaspheric Hiss” almost sounds like breathing. And the whistling “Plasmawaves - Chorus” resembles a distant call.
Of course, none of these sounds were caused by space ghosts (or even Space Ghost). Each had a more down-to-earth explanation, which NASA posted on its website. For example, plasma waves “like the roaring ocean surf, create a rhythmic cacophony that ― with the EMFISIS instrument aboard NASA’s Van Allen Probes ― we can hear across space.” | <urn:uuid:5f7c3151-691f-4f50-8988-60ab8c561f5b> | 3.15625 | 225 | News Article | Science & Tech. | 42.857357 | 95,588,440 |
This guide will help you install and set up a C++ development environment on Linux (Ubuntu, or any other distribution that uses a package manager) with the Eclipse IDE.

You can develop and run C++ programs on Linux, and I'll show you how. Let's split this article into two segments for easy understanding.
- Setup Eclipse for C++ development in Ubuntu Linux
- Learn to compile and run C++ programs in Ubuntu Linux
The instructions are exact for Ubuntu and should also apply to other Linux distributions that support a package manager for getting software from a Linux app store. Other workable distributions include Linux Mint, elementary OS, Pop!_OS, and so on.
Table of contents
- Install build-essential
- Compile and run C++ program in Ubuntu Linux
- Compile C++ code in Linux:
- Run C++ code in Linux:
- Method 2: Setup Eclipse for C++ programming in Ubuntu Linux
- Install Eclipse in Ubuntu based Linux distributions
- Install Eclipse C++ Development Tooling (CDT) Plugin
- Compile and run C++ program with Eclipse CDT
To do any coding on Linux, first fire up the terminal and install the build-essential package. It is a bundle of software you'll need to compile programs (the GCC and G++ compilers).

Some Linux distros come preloaded with build-essential, but make sure to run the following command anyway. If it's not already installed, it will be installed; if an update is available, it will be upgraded; and if it's already up to date, the terminal will tell you so.
sudo apt-get install build-essential
Compile and run C++ program in Ubuntu Linux
When you install build-essential, the core part is done and you are ready to compile programs on Linux. Assuming you can already code in C++, our main goal here is how to compile and run C++ programs on Linux.

For instance, suppose you have a file called example.cpp (.cpp is the standard extension for a C++ program).
You can save this file anywhere on your computer.
Compile C++ code in Linux:
From the directory where your program is located, run the following command.

g++ -o swap example.cpp

-o = this option sets the name of the executable file that the compiler builds (here, swap).
Run C++ code in Linux:
Once you are done compiling the code, an executable file (named swap in this example) will be in the same directory, and you can run it with the following command.

./swap

This will put your code in action.
Method 2: Setup Eclipse for C++ programming in Ubuntu Linux
The above is the basic way of running a C++ program on a Linux operating system. That being said, compiling and running files one by one takes a lot of time, and that is where an IDE (Integrated Development Environment) comes in. There are many IDEs available for Linux, but let's start with Eclipse (it's open source).
Install Eclipse in Ubuntu based Linux distributions
Fire up the terminal and type the following command to install Eclipse on your Linux machine. This will download and install Eclipse from the software repositories (apart from Ubuntu, this also works on Ubuntu-based Linux distros, and you can always sideload it in your favorite Linux operating system).
sudo apt-get install eclipse
It didn’t work for me on Ubuntu 18.04 beta, but I was able to install it from the software center.
Follow these steps to install it from the software center.
Step 1. Press the start button on your keyboard or click the menu icon at the bottom left and open Ubuntu Software.

Step 2. Now, using the search icon, type Eclipse and select it from the results.
Step 3. Final step, click the install button. Easy peasy.
Install Eclipse C++ Development Tooling (CDT) Plugin
Eclipse is preconfigured for Java development, but you need to configure it for C++ development, which is why we are going to install a plugin that goes by the name C/C++ Development Tooling (CDT). To install CDT:
Step 1: Open Eclipse and, from the Help menu, click Install New Software.

Step 2: Now click on the Available Software Sites link.

Step 3: Now type CDT in the search bar, select the result, and click OK.

Step 4: From the list, select the C/C++ Development Tools and click Next.
A few clicks on the Next button.
Now make sure you are connected to fast internet, because this is going to take some time: the process will install the software from the repository.

Once the process is done, close the Eclipse platform and start it again for the changes to take effect.
Compile and run C++ program with Eclipse CDT
You’ll see the information about C++ Plugin at the next start.
You can now import or create C++ projects, to do that start a new project with C++ option.
When you create a new project or simple import, you’ll be asked to enter a new project name ana you can also choose to save it under a manual location.
Once you have everything ready, you can compile the C++ project and run it:
So that’s how you make a C++ environment on Ubuntu Linux if you find this article helpful share it with your developer friends and tweet about it. These instructions will also apply on other Linux distributions such as lubuntu, Ubuntu, Fedora, elementary OS and pretty much everything that has Ubuntu software and ability to download repository from there. If you are using a Linux distribution that does not support installing from terminal Ubuntu software Center you can always download your favorite IDE inside load it | <urn:uuid:5c33d6a7-95b8-454e-89ac-dc9c6a371f94> | 2.640625 | 1,179 | Tutorial | Software Dev. | 52.689729 | 95,588,454 |
A time-series of high-resolution spectra in the optical and ultraviolet has twice been obtained just a few minutes after the detection of a gamma-ray burst explosion in a distant galaxy. The international team of astronomers responsible for these observations derived conclusive new evidence about the nature of the surroundings of these powerful explosions, which are linked to the death of massive stars.
At 11:08 pm on 17 April 2006, an alarm rang in the Control Room of ESO's Very Large Telescope on Paranal, Chile. Fortunately, it did not announce any catastrophe on the mountain, nor with one of the world's largest telescopes. Instead, it signalled the doom of a massive star, 9.3 billion light-years away, whose final scream of agony - a powerful burst of gamma rays - had been recorded by the Swift satellite only two minutes earlier. The alarm was triggered by the activation of the VLT Rapid Response Mode, a novel system that allows for robotic observations without any human intervention, except for the alignment of the spectrograph slit.
Starting less than 10 minutes after the Swift detection, a series of spectra of increasing integration times (3, 5, 10, 20, 40 and 80 minutes) were taken with the Ultraviolet and Visual Echelle Spectrograph (UVES), mounted on Kueyen, the second Unit Telescope of the VLT.
"With the Rapid Response Mode, the VLT is directly controlled by a distant explosion," said ESO astronomer Paul Vreeswijk, who requested the observations and is lead-author of the paper reporting the results. "All I really had to do, once I was informed of the gamma-ray burst detection, was to phone the staff astronomers at the Paranal Observatory, Stefano Bagnulo and Stan Stefl, to check that everything was fine."
The first spectrum of this time series was the quickest ever taken of a gamma-ray burst afterglow, let alone with an instrument such as UVES, which is capable of splitting the afterglow light with uttermost precision. What is more, this amazing record was broken less than two months later by the same team. On 7 June 2006, the Rapid-Response Mode triggered UVES observations of the afterglow of an even more distant gamma-ray source a mere 7.5 minutes after its detection by the Swift satellite.
Gamma-ray bursts are the most intense explosions in the Universe. They are also very brief. They randomly occur in galaxies in the distant Universe and, after the energetic gamma-ray emission has ceased, they radiate an afterglow flux at longer wavelengths (i.e. lower energies). They are classified as long and short bursts according to their duration and burst energetics, but hybrid bursts have also been discovered (see ESO PR 49/06). The scientific community agrees that gamma-ray bursts are associated with the formation of black holes, but the exact nature of the bursts remains enigmatic.
Because a gamma-ray burst typically occurs at very large distances, its optical afterglow is faint. In addition, it fades very rapidly: in only a few hours the optical afterglow brightness can fade by as much as a factor of 500. This makes detailed spectral analysis possible only for a few hours after the gamma-ray detection, even with large telescopes. During the first minutes and hours after the explosion, there is also the important opportunity to observe time-dependent phenomena related to the influence of the explosion on its surroundings. The technical challenge therefore consists of obtaining high-resolution spectroscopy with 8-10 m class telescopes as quickly as possible.
"The afterglow spectra provide a wealth of information about the composition of the interstellar medium of the galaxy in which the star exploded. Some of us even hoped to characterize the gas in the vicinity of the explosion," said team member Cédric Ledoux (ESO).
The Rapid Response Mode UVES observations of 17 April 2006 allowed the astronomers to discover variable spectral features associated with a huge gas cloud in the host galaxy of the gamma-ray burst. The cloud was found to be neutral but excited by the radiation from the UV afterglow light.
From detailed modelling of these observations, the astronomers were able - for the first time - to not only pinpoint the physical mechanism responsible for the excitation of the atoms, but also determine the distance of the cloud to the GRB. This distance was found to be 5,500 light-years, which is much further out than was previously thought. Either this is a special case, or the common picture that the features seen in optical spectra originate very close to the explosion has to be revised. As a comparison, this distance of 5,500 light-years is more than one fifth of that between the Sun and the centre of our Galaxy.
"All the material in this region of space must have been ionised, that is, the atoms have been stripped of most if not all of their electrons," said co-author Alain Smette (ESO). "Were there any life in this region of the Universe, it would most probably have been eradicated."
"With the Rapid-Response Mode of the VLT, we are really looking at gamma-ray bursts as quickly as possible," said team member Andreas Jaunsen from the University of Oslo (Norway). "This is crucial if we are to unravel the mysteries of these gigantic explosions and their links with black holes!"
The two gamma-ray bursts were discovered with the NASA/ASI/PPARC Swift satellite, which is dedicated to the discovery of these powerful cosmic explosions.
Preliminary reports on these observations have been presented in GCN GRB Observation Reports 4974 and 5237. The team is composed of Paul Vreeswijk, Cédric Ledoux, Alain Smette, Andreas Kaufer and Palle Møller (ESO), Sara Ellison (University of Victoria, Canada), Andreas Jaunsen (University of Oslo, Norway), Morten Andersen (AIP, Potsdam, Germany), Andrew Fruchter (STScI, Baltimore, USA), Johan Fynbo and Jens Hjorth (Dark Cosmology Centre, Copenhagen, Denmark), Patrick Petitjean (IAP, Paris, France), Sandra Savaglio (MPE, Garching, Germany), and Ralph Wijers (Astronomical Institute, University of Amsterdam, The Netherlands). Paul Vreeswijk was at the time of this study also associated with the Universidad de Chile, Santiago.
Henri Boffin | alfa
What happens when we heat the atomic lattice of a magnet all of a sudden?
17.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
17.07.2018 | Information Technology
17.07.2018 | Materials Sciences
17.07.2018 | Power and Electrical Engineering | <urn:uuid:c3c0e515-aeab-4a6b-a844-464c8a8bd39a> | 2.65625 | 1,985 | Content Listing | Science & Tech. | 39.917191 | 95,588,456 |
Butterfly: Wingspan is 1⅛ to 1¾ inches (2.8-4.4 cm). The Zarucco Duskywing is basically brown on both upper and lower wing surfaces, with three to five small, glassy-whitish spots at the subapical region of the upper forewing. The white spots are located just to the outside edge of a distinctive light russet-red to beige patch found at the end of the forewing cell and in the postmedial position along the outer costal edge. The Zarucco Duskywing lacks the white spot adjacent to the inside edge of the russet patch that is present in Juvenal’s Duskywing and Horace’s Duskywing. Another patch, more russet in color, is located submedially along the inner margin of the upper forewing. The whitish wing fringe along the hindwing in Alabama specimens is lighter in color than that of the forewing, which is grayish. The wings below are dark brown with numerous light and darker spots. The female is usually lighter in color than the male with larger glassy spots and more strongly patterned wings.
Egg: The Zarucco Duskywing female lays eggs on host plant leaves, often using new growth or leaf tips. The yellowish-to-green eggs turn light orange a few days before hatching.
Caterpillar: The caterpillars are light green with a pale lateral stripe. The body is covered with numerous minute white dots. The head is brown and rimmed on either side with three large yellow-orange spots. The caterpillars construct shelters of rolled leaves tied together with silk. When not feeding, the caterpillars return to their retreat to rest. Caterpillars from the last brood of the fall overwinter; pupation and emergence of adults occur the following spring.
Chrysalis: The chrysalis is green to brownish with a single small dark dot on either side of the head portion that together resemble eyes. The pupa often forms in leaf litter.
Like most duskywings, the Zarucco Duskywing flies within a few feet of the ground and has a rapid, erratic flight. While perching and nectaring, this butterfly holds its wings open.
The Zarucco Duskywing is primarily an inhabitant of the southeastern U.S., being found from North Carolina south to the Florida Keys, and west along the Gulf coastal states to eastern Texas and Oklahoma. During favorable climates, some specimens may stray northward as far as Connecticut, Pennsylvania and southern Illinois. It is most common in the lower two-thirds of Alabama, and rare in the upper, more mountainous regions, especially in northeast Alabama.
A dot on the county map indicates that there is at least one documented record of the species within that county. In some cases, a species may be common throughout the county, in others it may be found in only a specific habitat.
The sightings bar graphs depict the timing of flight(s) within each of three geographic regions. Place your cursor on a bar within the graph to see the number of individuals recorded during that period.
The abundance calendar displays the total number of individuals recorded within each week of each month. Both the graphs and the calendar are based on data collection that began in 2000.
The records analyzed here are only a beginning. As more data is collected, these maps and graphs will paint a more accurate picture of distribution and abundance in Alabama. Submit your sightings to email@example.com.
Sightings in the following counties: Baldwin, Barbour, Bibb, Blount, Bullock, Clay, Cleburne, Colbert, Dallas, DeKalb, Greene, Jefferson, Lee, Macon, Mobile, Perry, Pickens, Shelby, Sumter
The Zarucco Duskywing is often found in hot, sandy habitats such as sandy pine forests, scrub-oak habitat, utility right-of-ways, sand dunes, roadsides, fields, and other open areas. This species is a common inhabitant of coastal regions.
In Alabama, hoarypea (Tephrosia spp.), milkpea (Galactia spp.), wisteria (Wisteria spp.) and Bagpod (Sesbania vesicaria) have been documented as host plants.
In other states, larvae are known to feed on the leaves of Black Locust (Robinia pseudacacia), Hairy Bush Clover (Lespedeza hirta), Carolina Indigo (Indigofera caroliniana), American Wisteria (Wisteria frutescens), and vetches (Vicia spp.).
For more information about the documented host plants and/or nectar plants, please visit the Alabama Plant Atlas using the following links:
Provide a variety of garden worthy, nectar-rich flowers to attract butterflies like the Zarucco Duskywing. These include: Butterfly Milkweed and other milkweeds; Purple Coneflower and other coneflowers; black eyed susans; phloxes; mountain mints; Common Buttonbush; Joe Pye weeds; gayfeathers/blazing stars; Mistflower; ironweeds; asters; and goldenrods.
If you have a lawn in your landscape, consider letting it be natural. The diverse assemblage of native and nonnative flowering plants and grasses typically found in naturalized lawns provides nectar and host sources for many small butterflies including Zarucco Duskywings. | <urn:uuid:d1461e59-be6a-4e0c-ad05-8697cb2ebe29> | 2.859375 | 1,189 | Knowledge Article | Science & Tech. | 43.647556 | 95,588,484 |
Astronomy is a broad discipline covering all facets of astrophysics. In this section you can learn about the origins of the universe, black holes and other astronomical phenomena.
Topics to Explore:
Longest Lunar Eclipse of the Century Is Coming
Mars Moves Closest to Earth Since 2003
The Moon Is Causing Longer Days on Earth
How Do We Find Things in the Blackness of Space?
Constellations are groupings of stars that, when viewed from Earth, form distinct shapes. Constellations have been around since the dawn of recorded history. In this section you will learn all about constellations and their histories.
Icy Comets Orbiting a Star Like Our Sun Spotted for the First Time
Observatories are structures designed and equipped for observing astronomical events. In this section you will learn all about famous observatories and the role they play in astronomy.
The Hitomi Satellite Briefly Glimpsed the Universe, Then Died — What Happened?
We're Now One Step Closer to a Gravitational Wave Space Observatory
Anti-asteroid Space 'Sentinel' Could Soon Patrol the Planetary Skies
Stars are celestial bodies made up of hot gases. Stars radiate energy that comes from thermonuclear reactions. In this section you will learn all about stars and their importance in the universe.
What Do You Get When Two Neutron Stars Collide?
Astronomers Determine When 'Cosmic Dawn' Happened
Tiny Yet Mighty: Neutron Stars May Be Ravenous X-ray Dazzlers
In the Solar System Channel, you can explore the planets and celestial objects around our own sun. Learn about topics such as Mars, Jupiter and the Moon.
Abstract: How heavy is the weight of air above your head? At first, this seems a ridiculous question. However, gases do actually have a weight and since the atmosphere consists of a mixture of gases (mainly nitrogen and oxygen), air has a weight. This weight is also described as pressure. In fact, the atmosphere is exerting about 5 tonnes of pressure on your head. A cubic metre of air typically weighs about 1.2 kilograms (kg) at sea-level. Thus, the weight of air in a car or a large tea chest is about 1 kg, which is approximately the weight of a large bag of sugar. Fortunately, we do not experience this weight or pressure because our internal pressure acts as a counter-balance. However, we do feel rapid changes in pressure. For instance, if you dive into a swimming pool, the sudden increase in pressure is expressed particularly by compression on the head, especially the eardrums. This is because water is about 1000 times denser than air. Furthermore, we also sense changes in pressure during air travel. Aircraft are normally pressurized to the equivalent pressure at about 2000 metres. Therefore, on ascent and descent we can experience changes in pressure; this is usually noticeable by a slight discomfort in your eardrums, with the drums tending to ‘pop’. Large animals have difficulty in adjusting rapidly to pressure changes. For instance, there have been reports of cows exploding when hit by tornadoes (extremely violent whirlwinds). The inner core or ‘vortex’ of tornadoes is the lowest pressure system on Earth. When these extremely low-pressure systems hit cattle, they cannot rapidly adjust their internal pressure and they can therefore explode. For the same reason, people in tornado-vulnerable areas (such as ‘tornado alley’ in the central USA) are advised to open windows in advance of possible tornadoes, as this diminishes the chances of the house exploding when hit by a tornado (i.e. very low pressure outside and relatively high pressure inside).
Evangelista Torricelli was one of the first scientists to experiment with pressure. In 1644, Torricelli inverted a tube of mercury into a vessel containing mercury (a liquid metal). In the upper part of the tube a vacuum was created when the mercury fell. This meant Torricelli was able to show that air pressure was capable of supporting a column of mercury. When the air pressure rose, the height of mercury increased. When the air pressure decreased, the height of the mercury decreased. To illustrate air pressure to yourself, simply place a drinking glass under the water level in a bowl. Raise the inverted full glass above the surface of the water, keeping the rim below the water level. Air pressure keeps the water in the glass, even though it is above water level.
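Torricelli's result can be checked with a short calculation: a column of liquid is supported when its weight per unit area balances the air pressure, so the column height is h = P / (ρg). The sketch below uses standard sea-level values (my assumption; the article quotes no numbers for this):

```python
# Height of a liquid column supported by sea-level air pressure: h = P / (rho * g).
P = 101_325.0   # standard atmospheric pressure, Pa
g = 9.80665     # standard gravitational acceleration, m/s^2

def column_height(density):
    """Column height (m) supported by pressure P for a liquid of the given density (kg/m^3)."""
    return P / (density * g)

h_mercury = column_height(13_546.0)  # mercury near room temperature
h_water = column_height(998.0)       # water near room temperature

print(f"mercury barometer: {h_mercury:.3f} m")  # about 0.76 m, i.e. ~760 mm
print(f"water barometer:   {h_water:.2f} m")    # just over 10 m
```

The roughly 13.6-fold density ratio between mercury and water is why Torricelli used mercury: a water barometer would need a tube more than ten metres tall.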
Citation: Geography Review, 22 (3): 28-31
Publisher: Philip Allan Updates
Collections: Plant and Environmental Research Group
Refer to attachments for full experimental design.
I need help with the following questions:
1. Why is it necessary to start this experiment with a large excess of ice in the metal ice container (the calorimeter)?
2. If all the ice melts while you are doing this experiment, so that at the end of the procedure you have a calorimeter full of water, how would this affect your results? Would you calculate the caloric value of your food to be higher or lower than the true value?
4. Look up the density of water at 0 degrees Celsius and explain whether it would be acceptable to use the volume of the water melted (mL) instead of the mass of the water melted in your calculations.
The amount of energy that was harnessed in the nut and ultimately transferred into the melting of ice in the calorimeter is measurable based on how much water there was after the whole nut was burned. As a result, the amount of energy you can measure is directly related to how much water you collect, out of a total of however much ice there was at the beginning.
If there were too little ice, say, a single small cube, then by the end of the burning we would have a puddle, but how do we ...
The expert determines why it is necessary to start a calorimeter experiment with a large excess of ice in the metal ice container. | <urn:uuid:3e388b3c-67bd-432c-ab3e-1994271f1e41> | 3.4375 | 308 | Q&A Forum | Science & Tech. | 58.702309 | 95,588,527 |
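The bookkeeping behind this experiment is simple: every gram of ice that melts absorbs the latent heat of fusion (about 334 J/g), so the energy released by the burning food is proportional to the mass of meltwater collected. A minimal sketch of that calculation (the 10 g and 0.5 g figures are invented for illustration, not data from the experiment):

```python
# Energy released by the food = energy absorbed in melting the ice.
L_FUSION = 334.0     # latent heat of fusion of ice, J/g
J_PER_KCAL = 4184.0  # joules per food Calorie (kcal)

def caloric_value(melted_water_g, food_mass_g):
    """Return (total kcal released, kcal per gram of food burned)."""
    energy_j = melted_water_g * L_FUSION
    kcal = energy_j / J_PER_KCAL
    return kcal, kcal / food_mass_g

kcal, kcal_per_g = caloric_value(melted_water_g=10.0, food_mass_g=0.5)
print(f"{kcal:.2f} kcal total, {kcal_per_g:.2f} kcal per gram of food")
```

This also bears on question 4: at 0 degrees Celsius the density of water is about 0.9998 g/mL, so substituting millilitres of meltwater for grams introduces an error of only about 0.02%.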
DNA nanowire improved by altering sequences
DNA molecules don't just code our genetic instructions. They can also conduct electricity and self-assemble into well-defined shapes, making them potential candidates for building low-cost nanoelectronic devices.
Each ribboning strand of DNA in our bodies is built from stacks of four molecular bases, shown here as blocks of yellow, green, blue and orange, whose sequence encodes detailed operating instructions for the cell. New research shows that tinkering with the order of these bases can also be used to tune the electrical conductivity of nanowires made from DNA.
Credit: Maggie Bartlett, NHGRI
A team of researchers from Duke University and Arizona State University has shown how specific DNA sequences can turn these spiral-shaped molecules into electron "highways," allowing electricity to more easily flow through the strand.
The results may provide a framework for engineering more stable, efficient and tunable DNA nanoscale devices, and for understanding how DNA conductivity might be used to identify gene damage. The study appears online June 20 in Nature Chemistry.
Scientists have long disagreed over exactly how electrons travel along strands of DNA, says David N. Beratan, professor of chemistry at Duke University and leader of the Duke team. Over longer distances, they believe electrons travel along DNA strands like particles, "hopping" from one molecular base or "unit" to the next. Over shorter distances, the electrons use their wave character, being shared or "smeared out" over multiple bases at once.
But recent experiments led by Nongjian Tao, professor of electrical engineering at Arizona State University and co-author on the study, provided hints that this wave-like behavior could be extended to longer distances.
This result was intriguing, says Duke graduate student and study lead author Chaoren Liu, because electrons that travel in waves are essentially entering the "fast lane," moving with more efficiency than those that hop.
"In our studies, we first wanted to confirm that this wave-like behavior actually existed over these lengths," Liu said. "And second, we wanted to understand the mechanism so that we could make this wave-like behavior stronger or extend it to even longer distances."
DNA strands are built like chains, with each link comprising one of four molecular bases whose sequence codes the genetic instructions for our cells. Using computer simulations, Beratan's team found that manipulating these same sequences could tune the degree of electron sharing between bases, leading to wave-like behavior over longer or shorter distances. In particular, they found that alternating blocks of five guanine (G) bases on opposite DNA strands created the best construct for long-range wave-like electronic motions.
The team theorizes that creating these blocks of G bases causes them to all "lock" together so the wave-like behavior of the electrons is less likely to be disrupted by random wiggling in the DNA strand.
"We can think of the bases being effectively linked together so they all move as one," Liu said. "This helps the electron be shared within the blocks."
The Tao group confirmed these theoretical predictions using break junction experiments, tethering short DNA strands built from alternating blocks of three to eight guanine bases between two gold electrodes and measuring the amount of electrical charge flowing through the molecules.
The results shed light on a long-standing controversy over the exact nature of the electron transport in DNA, Beratan says. They might also provide insight into the design of tunable DNA nanoelectronics, and into the role of DNA electron transport in biological systems.
"This theoretical framework shows us that the exact sequence of the DNA helps dictate whether electrons might travel like particles, and when they might travel like waves," Beratan said. "You could say we are engineering the wave-like personality of the electron."
Other authors include Yuqi Zhang and Peng Zhang of Duke University and Limin Xiang and Yueqi Li of Arizona State University.
This research was supported by grants from the Office of Naval Research (N00014-11-1-0729) and the National Science Foundation (DMR-1413257).
CITATION: "Engineering nanometer-scale coherence in soft matter," Chaoren Liu, Yuqi Zhang, Peng Zhang, David N. Beratan, Limin Xiang, Yueqi Li, Nongjian Tao. Nature Chemistry, June 20, 2016. DOI: 10.1038/nchem.2545
Kara J. Manke | EurekAlert!
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
This algorithm turns a text string into a phonetic text string.
Consequently, different misspellings of the same word (even of a made-up word, as many company names are) will end up with the same phonetic text (Soundex or Metaphone).
This helps databases prevent duplicate data in the following way:
Imagine a database with a form in which someone has to type a company name, and the company name they wish to type is not in the dictionary.
When the company name is first typed, the new record is stored with the company name and the Metaphone for that company name.
So now another user hears the company name on the phone and types in Likom... which has the same metaphone, "LKM".
Rather than search for the Company Name Text, the (shorter) Metaphone Text can be searched for matches and a list presented for the user to choose from.
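The duplicate check above only needs a phonetic key stored per row. Full Metaphone has many rules, but the older Soundex code sketched below (the standard algorithm, not code from this page) already illustrates the idea: "Likom" from the example and a hypothetical original spelling "Leikum" collapse to the same key.

```python
# Standard Soundex: keep the first letter, encode the remaining consonants as
# digits, collapse adjacent duplicate codes, ignore vowels/h/w/y, pad to 4 chars.
CODES = {}
for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                       ("l", "4"), ("mn", "5"), ("r", "6")]:
    for ch in letters:
        CODES[ch] = digit

def soundex(word: str) -> str:
    word = word.lower()
    digits = []
    prev = CODES.get(word[0], "")
    for ch in word[1:]:
        if ch in "hw":
            continue              # h/w do not separate duplicate codes
        code = CODES.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code               # a vowel (empty code) resets the previous code
    return (word[0].upper() + "".join(digits) + "000")[:4]

# Two spellings of the same (hypothetical) company name share one key:
print(soundex("Leikum"), soundex("Likom"))   # L250 L250
print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```

Indexing the short fixed-length key instead of the free-text name is what makes the "present a list of likely matches" search cheap.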
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Last modified: Tue, 03 Feb 2009 18:26:29 +0000
Weather Talk: What is the heaviest rain on Earth?
When a thunderstorm produces an inch of rain, it is often casually referred to as a heavy rain. But other than temporary street flooding, an inch of rain typically causes no problems in our area. Much heavier rains on the order of 5-10 inches have happened in the past with much costlier impacts.
The rainiest place in the United States is on the windward slope and at the summit of Hawaii's Mt. Waiʻaleʻale. Average annual rainfall is 452 inches. It rains on an average of 360 days a year, and daily rainfalls of 20-30 inches are common.
The wettest place on Earth is the village of Mawsynram in Meghalaya, India, which receives 467 inches of rain per year. In terms of a single storm, in 2014, the World Meteorological Organization (WMO) confirmed a world record 48-hour rainfall of 98.15 inches on June 15-16, 1995, in Cherrapunji, India. | <urn:uuid:6a2df48b-92a9-46ff-a45a-5f6a163dedc3> | 3.421875 | 219 | News Article | Science & Tech. | 68.642374 | 95,588,560 |
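For readers who think in metric, the figures above convert directly (1 inch = 25.4 mm exactly); a trivial sketch:

```python
# Convert the rainfall records quoted above from inches to millimetres.
MM_PER_INCH = 25.4

records_in = {
    "Mt. Waialeale, annual average": 452.0,
    "Mawsynram, annual average": 467.0,
    "Cherrapunji, 48-hour record (1995)": 98.15,
}
records_mm = {name: inches * MM_PER_INCH for name, inches in records_in.items()}
for name, mm in records_mm.items():
    print(f"{name}: {mm:,.0f} mm")
```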
We only have one example of a planet with life: Earth. But within the next generation, it should become possible to detect signs of life on planets orbiting distant stars. If we find alien life, new questions will arise. For example, did that life arise spontaneously? Or could it have spread from elsewhere? If life crossed the vast gulf of interstellar space long ago, how would we tell?
New research by Harvard astrophysicists shows that if life can travel between the stars (a process called panspermia), it would spread in a characteristic pattern that we could potentially identify.
In this theoretical artist's conception of the Milky Way galaxy, translucent green "bubbles" mark areas where life has spread beyond its home system to create cosmic oases, a process called panspermia. New research suggests that we could detect the pattern of panspermia, if it occurs.
Credit: NASA/JPL/R. Hurt
"In our theory clusters of life form, grow, and overlap like bubbles in a pot of boiling water," says lead author Henry Lin of the Harvard-Smithsonian Center for Astrophysics (CfA).
There are two basic ways for life to spread beyond its host star. The first would be via natural processes such as gravitational slingshotting of asteroids or comets. The second would be for intelligent life to deliberately travel outward. The paper does not deal with how panspermia occurs. It simply asks: if it does occur, could we detect it? In principle, the answer is yes.
The model assumes that seeds from one living planet spread outward in all directions. If a seed reaches a habitable planet orbiting a neighboring star, it can take root. Over time, the result of this process would be a series of life-bearing oases dotting the galactic landscape.
"Life could spread from host star to host star in a pattern similar to the outbreak of an epidemic. In a sense, the Milky Way galaxy would become infected with pockets of life," explains CfA co-author Avi Loeb.
If we detect signs of life in the atmospheres of alien worlds, the next step will be to look for a pattern. For example, in an ideal case where the Earth is on the edge of a "bubble" of life, all the nearby life-hosting worlds we find will be in one half of the sky, while the other half will be barren.
Lin and Loeb caution that a pattern will only be discernible if life spreads somewhat rapidly. Since stars in the Milky Way drift relative to each other, stars that are neighbors now won't be neighbors in a few million years. In other words, stellar drift would smear out the bubbles.
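The bubble picture lends itself to a toy illustration. The sketch below is not the Lin and Loeb model; it is an invented percolation-style simulation in which life hops between randomly placed stars that lie within a fixed distance of one another, producing the kind of growing "oasis" cluster the authors describe:

```python
import math
import random

random.seed(1)

# Toy assumptions (not from the paper): 400 stars scattered over a 100 x 100
# region, with panspermia possible between stars closer than HOP_RADIUS.
N_STARS, SIZE, HOP_RADIUS = 400, 100.0, 8.0
stars = [(random.uniform(0, SIZE), random.uniform(0, SIZE)) for _ in range(N_STARS)]

def life_bubble(seed):
    """Return the set of star indices reachable from `seed` by repeated hops."""
    alive, frontier = {seed}, [seed]
    while frontier:
        i = frontier.pop()
        xi, yi = stars[i]
        for j, (xj, yj) in enumerate(stars):
            if j not in alive and math.hypot(xi - xj, yi - yj) < HOP_RADIUS:
                alive.add(j)
                frontier.append(j)
    return alive

bubble = life_bubble(0)
print(f"life spread to {len(bubble)} of {N_STARS} stars")
```

Shrinking HOP_RADIUS below the typical inter-star spacing leaves small isolated pockets instead of one sprawling bubble, which is the kind of spatial pattern the paper argues could, in principle, be told apart from life arising independently everywhere.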
Christine Pulliam | EurekAlert!
First evidence on the source of extragalactic particles
13.07.2018 | Technische Universität München
Simpler interferometer can fine tune even the quickest pulses of light
12.07.2018 | University of Rochester
Trailing 200,000-light-year-long streamers of seething gas, a galaxy that was once like our Milky Way is being shredded as it plunges at 4.5 million miles per hour through the heart of a distant cluster of galaxies. In this unusually violent collision with ambient cluster gas, the galaxy is stripped down to its skeletal spiral arms as it is eviscerated of fresh hydrogen for making new stars.
Composite image of the galaxy C153 (X-ray Images: NASA/CXC/SAO/UMass/D. Wang et al. Optical: NASA/STScI/U. Alabama/W. Keel Radio: NRAO/ F. Owen Optical (OII): Gemini Obs./M. Ledlow)
The galaxy's untimely demise is offering new clues to solving the mystery of what happens to spiral galaxies in a violent universe. Views of the early universe show that spiral galaxies were once much more abundant in rich clusters of galaxies. But they seem to have been vanishing over cosmic time. Where have these "missing bodies" gone?
Astronomers are using a wide range of telescopes and analysis techniques to conduct a "CSI" or Crime Scene Investigator-style look at what is happening to this galaxy inside its cluster's rough neighborhood. "It's a clear case of galaxy assault and battery," says William Keel of the University of Alabama. "This is the first time we have a full suite of results from such disparate techniques showing the crime being committed, and the modus operandi."
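As a quick sanity check on the quoted figure, the plunge speed of 4.5 million miles per hour converts to roughly 2,000 km/s, the unit astronomers usually quote. A minimal sketch (only the speed comes from the article; the conversion factor is definitional):

```python
# Convert the quoted plunge speed (4.5 million mph) into km/s.
MPH_TO_M_PER_S = 1609.344 / 3600.0  # exact: metres per mile / seconds per hour

def mph_to_km_per_s(mph):
    """Convert a speed in miles per hour to kilometres per second."""
    return mph * MPH_TO_M_PER_S / 1000.0

print(f"{mph_to_km_per_s(4.5e6):.0f} km/s")  # about 2000 km/s
```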
Steve Roy | MSFC
First evidence on the source of extragalactic particles
13.07.2018 | Technische Universität München
Simpler interferometer can fine tune even the quickest pulses of light
12.07.2018 | University of Rochester
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
- About the Arctic Report Card - Arctic Program
Issued annually since 2006, the Arctic Report Card is a timely and peer-reviewed source for clear, reliable and concise environmental information on the current state of different components of the Arctic environmental system relative to historical records
- Delingpole: NOAA Caught Lying About Arctic Sea Ice
Yep: the Arctic sea ice is doing just fine. Yep: yet again, the NOAA is telling porkies. As usual, Paul Homewood has got its number. First, here's what the NOAA is claiming, as relayed in a scaremongering piece at Vox: The Arctic Ocean once froze reliably every year
- Arctic - Wikipedia
Arctic vegetation is composed of plants such as dwarf shrubs, graminoids, herbs, lichens, and mosses, which all grow relatively close to the ground, forming tundra. An example of a dwarf shrub is the Bearberry. As one moves northward, the amount of warmth available for plant growth decreases considerably
- News | Alaska Climate Research Center
Annual Summary Report 2017. 2017 Alaska Climate Summary: Statewide 2017 Year in Review. Spring and summer temperatures in the Arctic were cooler in 2017 than they have been in many years this decade, but the annual average surface temperature was still the second highest on record according to the annual issue of NOAA's Arctic Report Card
- Climate of the Arctic - Wikipedia
The climate of the Arctic is characterized by long, cold winters and short, cool summers. There is a large amount of variability in climate across the Arctic, but all regions experience extremes of solar radiation in both summer and winter. Some parts of the Arctic are covered by ice (sea ice, glacial ice, or snow) year-round, and nearly all parts of the Arctic experience long periods with
- Paleoclimatology Data | National Centers for Environmental . . .
Paleoclimatology data are derived from natural sources such as tree rings, ice cores, corals, and ocean and lake sediments. These proxy climate data extend the archive of weather and climate information hundreds to millions of years
- Arctica - Wikipedia
"Global Security, Climate Change, and the Arctic" - 24-page special journal issue (fall 2009), Swords and Ploughshares, Program in Arms Control, Disarmament, and International Security (ACDIS), University of Illinois "Global Security, Climate Change, and the Arctic" - streaming video of November 2009 symposium at the University of Illinois Implications of an Ice-Free Arctic for Global Security
- Arctic Sea Ice Going Down With the Blues | Paul Beckwith . . .
There is a very high probability that the Arctic sea ice will essentially vanish by the end of summer melt in 2020 or earlier. The ice-free duration would likely be less than one month in September for this first "blue-ocean" event
This information could be used to conserve or rebuild reefs in areas affected by climate change, by changes in extreme weather patterns, increasing sedimentation or altered land use.
In reef-building corals variations within genes involved in immunity and response to stress correlate to water temperature and clarity, finds a study published in BioMed Central’s open access journal BMC Genetics.
Credit: Petra Lundgren, Juan C Vera, Lesa Peplow, Stephanie Manel and Madeleine JH van Oppen
A research team led by the Australian Institute of Marine Science, and in collaboration with Penn State University and the Aix-Marseille University, studied DNA variations (Single Nucleotide Polymorphisms, SNPs) across populations of reef corals found at a range of temperatures and water clarity along the Great Barrier Reef.
SNPs which correlated to the water clarity and water temperature preferred by cauliflower coral were found in genes involved in providing immune response and in regulating stress-induced cell death. This means that corals with a specific version of these genes tended to grow at higher temperatures (or water clarity) and corals with another variant at lower ones. A similar story was found for staghorn coral: SNPs in genes involved in detoxification, immune response, and defense against reactive oxygen damage were found to be associated with temperature or with water clarity.
Dr Petra Lundgren, from The Australian Institute of Marine Science, explained, "Corals are particularly vulnerable to climate change. Not only is the temperature of the water they live in affected but extreme weather and higher rainfall leads to increased levels of sediment, agricultural runoff, and fresh water on the reef. This work opens up possibilities for us to enhance reef resilience and recovery from impacts of climate change and pollution. For example, if in the future we need to restore coral populations, we can make sure that we use the most robust strains of corals to do so."
Media ContactDr Hilary Glover
Please name the journal in any story you write. If you are writing for the web, please link to the article. All articles are available free of charge, according to BioMed Central's open access policy.
Article citation and URL available on request on the day of publication.
All images are to be credited to Petra Lundgren, Juan C Vera, Lesa Peplow, Stephanie Manel and Madeleine JH van Oppen.
2. BMC Genetics is an open access, peer-reviewed journal that considers articles on all aspects of inheritance and variation in individuals and among populations.
3. BioMed Central is an STM (Science, Technology and Medicine) publisher which has pioneered the open access publishing model. All peer-reviewed research articles published by BioMed Central are made immediately and freely accessible online, and are licensed to allow redistribution and reuse. BioMed Central is part of Springer Science+Business Media, a leading global publisher in the STM sector. @BioMedCentral
Hilary Glover | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
Coupled model intercomparison project
In climatology, the Coupled Model Intercomparison Project (CMIP) is a collaborative framework designed to improve our knowledge of climate change; it is the analog of the Atmospheric Model Intercomparison Project (AMIP) for global coupled ocean-atmosphere general circulation models (GCMs). It was organized in 1995 by the Working Group on Coupled Modelling (WGCM) of the World Climate Research Programme (WCRP). It is developed in phases, both to foster climate model improvements and to support national and international assessments of climate change.
The Program for Climate Model Diagnosis and Intercomparison (PCMDI) at Lawrence Livermore National Laboratory has supported the several CMIP phases by helping WGCM determine the scope of the project, by maintaining the project's data base and by participating in data analysis. CMIP has received model output from the pre-industrial climate simulations ("control runs") and 1% per year increasing-CO2 simulations of about 30 coupled GCMs. More recent phases of the project (20C3M, ...) include more realistic climate forcings for historical, paleoclimate and future scenarios.
CMIP Phases 1 and 2
According to Lawrence Livermore National Laboratory PCMDI, the response to the CMIP1 announcement was very successful: up to 18 global coupled models participated in the data collection, representing most of the international groups with global coupled GCMs. In consequence, at the September 1996 meeting of CLIVAR NEG2 in Victoria, Canada, it was decided that CMIP2 would be an inter-comparison of 1% per year compound CO2 increase integrations (80 years in length) in which CO2 doubles at around year 70.
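The "1% per year compound CO2 increase" design described above can be checked with a few lines of arithmetic: compound growth at 1% per year does indeed double concentrations at around year 70. A sketch (the numbers follow directly from the experiment definition, not from any CMIP output):

```python
import math

def co2_multiplier(years, rate=0.01):
    """CO2 concentration multiplier after `years` of compound growth at `rate`/yr."""
    return (1.0 + rate) ** years

print(f"multiplier after 70 years: {co2_multiplier(70):.3f}")  # ~2.007, i.e. doubled
print(f"exact doubling time: {math.log(2) / math.log(1.01):.1f} years")  # ~69.7
```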
CMIP Phase 3
During 2005 and 2006, a collection of climate model outputs was coordinated and stored by PCMDI. The climate model outputs included simulations of past, present and future climate scenarios. This activity enabled scientists outside the major modeling centers to perform research of relevance to climate scientists preparing the IPCC Fourth Assessment Report (IPCC-AR4). For CMIP3, a list of 20 different experiments was proposed, and the PCMDI kept the documentation of all the global climate models involved.
CMIP Phase 5
The most recently completed phase of the project (2010-2014) is CMIP5. CMIP5 included more metadata describing model simulations than previous phases. The METAFOR project created an exhaustive schema describing the scientific, technical, and numerical aspects of CMIP runs which was archived along with the output data.
A main objective of the CMIP5 experiments is to address outstanding scientific questions that arose as part of the IPCC AR4 process, improve understanding of climate, and provide estimates of future climate change that will be useful to those considering its possible consequences. The IPCC Fifth Assessment Report summarizes information from CMIP5 experiments, while the CMIP5 experimental protocol was endorsed by the 12th Session of the WCRP Working Group on Coupled Modelling (WGCM).
CMIP Phase 6
Planning meetings for Phase 6 began in 2013, and an overview of the design and organization was published in 2016. By 2018 CMIP6 had endorsed 23 Model Intercomparison Projects (MIPs) involving 33 modeling groups in 16 countries. A small number of common experiments were also planned. The deadline for submission of papers to contribute to the IPCC 6th Assessment Report Working Group I is early 2020.
The structure of CMIP6 has been extended with respect to CMIP5 by providing a common framework named CMIP Diagnostic, Evaluation and Characterization of Klima (DECK), together with a set of Endorsed MIPs that describe aspects of climate models beyond the core set of common experiments included in DECK. However, CMIP-Endorsed Model Intercomparison Projects (MIPs) are still built on the DECK and CMIP historical simulations; their main goal is to address a wider range of specific questions. This structure will be kept in future CMIP experiments.
CMIP6 also aims to be consistent regarding common standards and documentation. To achieve this, it includes methods to facilitate a wider distribution and characterization of model outputs, and common standard tools for their analysis. A number of guides have been created for data managers, modelers and users.
A set of official/common forcing datasets is available for the studies under DECK, as well as for several MIPs. That allows for more sensible comparisons across the model ensemble created under the CMIP6 umbrella.
- Historical Short-Lived Climate Forcers (SLCF) and GHG (CO2 and CH4) Emissions
- Biomass Burning Emissions
- Global Gridded Land-use Forcing Datasets: data are available from
- Historical greenhouse gases (GHG) concentrations: a full description is published via the CMIP6 Special Issue publication
- Ozone Concentrations and Nitrogen (N)-Deposition: additional information at , while the description of ozone radiative forcing based on this dataset is published .
- Aerosol Optical Properties and Relative Change in Cloud Droplet Number Concentration: Data are available as supplement to Stevens et al. (2016) at
- Solar Forcing: Datasets are available from and the description published
- Stratospheric Aerosol Data Set: data are available from
- AMIP Sea Surface Temperature and Sea Ice Datasets
Beyond these historical forcings, CMIP6 also has a common set of future scenarios comprising land use and emissions as required for the Shared Socio-economic Pathway (SSP) and Representative Concentration Pathway (RCP) scenarios.
- "CMIP3-Info". pcmdi.llnl.gov. Retrieved 2018-05-20.
- "CMIP3-Experiments". pcmdi.llnl.gov. Retrieved 2018-05-20.
- "CMIP3-Models". pcmdi.llnl.gov. Retrieved 2018-05-20.
- "CMIP3-Overview". cmip.llnl.gov. Retrieved 2018-05-20.
- "ESGF-LLNL - Home | ESGF-CoG". esgf-node.llnl.gov. Retrieved 2017-10-09.
- "There is still no room for complacency in matters climatic". The Economist. Retrieved 2017-10-09.
- Taylor, K. E.; Stouffer, R. J.; Meehl, G. A. (2012-03-01). "An Overview of CMIP5 and the Experiment Design". BAMS.
- "CMIP5-Overview". cmip.llnl.gov. Retrieved 2018-05-20.
- Eyring, Veronika; et al. "Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) Experimental Design and Organization" (PDF). Retrieved 6 July 2018.
- "CMIP6_Forcing_Datasets_Summary". Google Docs. Retrieved 2018-07-18.
- Meinshausen, M.; Vogel, E.; Nauels, A.; Lorbacher, K.; Meinshausen, N.; Etheridge, D. M.; Fraser, P. J.; Montzka, S. A.; Rayner, P. J. (2017-05-31). "Historical greenhouse gas concentrations for climate modelling (CMIP6)". Geosci. Model Dev. 10 (5): 2057–2116. doi:10.5194/gmd-10-2057-2017. ISSN 1991-9603.
- Checa-Garcia, Ramiro; Hegglin, Michaela I.; Kinnison, Douglas; Plummer, David A.; Shine, Keith P. (2018-04-06). "Historical Tropospheric and Stratospheric Ozone Radiative Forcing Using the CMIP6 Database". Geophysical Research Letters. 45 (7): 3264–3273. doi:10.1002/2017gl076770. ISSN 0094-8276.
- Stevens, B.; Fiedler, S.; Kinne, S.; Peters, K.; Rast, S.; Müsse, J.; Smith, S. J.; Mauritsen, T. (2017-02-01). "MACv2-SP: a parameterization of anthropogenic aerosol optical properties and an associated Twomey effect for use in CMIP6". Geosci. Model Dev. 10 (1): 433–452. doi:10.5194/gmd-10-433-2017. ISSN 1991-9603.
- Matthes, K.; Funke, B.; Andersson, M. E.; Barnard, L.; Beer, J.; Charbonneau, P.; Clilverd, M. A.; Dudok de Wit, T.; Haberreiter, M. (2017-06-22). "Solar forcing for CMIP6 (v3.2)". Geosci. Model Dev. 10 (6): 2247–2302. doi:10.5194/gmd-10-2247-2017. ISSN 1991-9603.
- O'Neill, B. C.; Tebaldi, C.; van Vuuren, D. P.; Eyring, V.; Friedlingstein, P.; Hurtt, G.; Knutti, R.; Kriegler, E.; Lamarque, J.-F. (2016-09-28). "The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6". Geosci. Model Dev. 9 (9): 3461–3482. doi:10.5194/gmd-9-3461-2016. ISSN 1991-9603.
- Coupled Model Intercomparison Project (CMIP)
- An Overview of Results from the Coupled Model Intercomparison Project (CMIP)
- MIPS Overview, included CMIP1 to CMIP5 phases
- CMIP5 Summary
- CMIP5 Design Documents
- CMIP6 homepage (WCRP)
- CMIP Related Publications
ECHINODERMATA : APODIDA : Synaptidae | STARFISH, SEA URCHINS, ETC.
Description: A worm-like holothurian with twelve pinnate tentacles and no tube-feet. Each tentacle has 8-11 pairs of digits, increasing in length towards the end of the tentacle. Colour is pink with a transparent skin and obvious longitudinal muscle-bands. The spicules are anchors associated with pear-shaped anchor-plates. Typically 10-30cm in length.
Habitat: Burrows in muddy sand in the sublittoral.
Distribution: Found on all coasts of the British Isles.
Similar Species: All British synaptids are superficially similar and their tentacles and spicules must be examined with a microscope or hand-lens to distinguish the species.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Leptosynapta bergensis (Ostergren, 1905). [In] Encyclopedia of Marine Life of Britain and Ireland.
http://www.habitas.org.uk/marinelife/species.asp?item=ZB5240 Accessed on 2018-07-19
Copyright © National Museums of Northern Ireland, 2002-2015
The measurement is based on a new method that looks at the scattered near-infrared light or 'cloudshine' and was made with ESO's New Technology Telescope. Associated with the forthcoming VISTA telescope, this new technique will allow astronomers to better understand the cradles of newborn stars.
The vast expanses between stars are permeated with giant complexes of cold gas and dust opaque to visible light. Yet these are the future nurseries of stars to be.
"One would like to have a detailed knowledge of the interiors of these dark clouds to better understand where and when new stars will appear," says Mika Juvela, lead author of the paper in which these results are reported.
Because the dust in these clouds blocks the visible light, the distribution of matter within interstellar clouds can be examined only indirectly. One method is based on measurements of the light from stars that are located behind the cloud.
"This method, albeit quite useful, is limited by the fact that the level of details one can obtain depends on the distribution of background stars," says co-author Paolo Padoan.
In 2006, astronomers Padoan, Juvela, and colleague Veli-Matti Pelkonen, proposed that maps of scattered light could be used as another tracer of the cloud's inner structure, a method that should yield more advantages. The idea is to estimate the amount of dust located along the line of sight by measuring the intensity of the scattered light.
Dark clouds are feebly illuminated by nearby stars. This light is scattered by the dust contained in the clouds, an effect dubbed 'cloudshine' by Harvard astronomers Alyssa Goodman and Jonathan Foster. This effect is well known to sky lovers, as they create in visible light wonderful pieces of art called 'reflection nebulae'. The Chameleon I complex nebula is one beautiful example.
When making observations in the near-infrared, art becomes science. Near-infrared radiation can indeed propagate much farther into the cloud than visible light and the maps of scattered light can be used to measure the mass of the material inside the cloud.
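In the optically thin limit the two tracers discussed here are simply related: dust dims a background star exponentially with optical depth, while the scattered "cloudshine" grows roughly in proportion to the same dust column. A toy sketch of that relation (the albedo, illumination level and the linear scattering approximation are illustrative assumptions, not values from the study):

```python
import math

def tau_from_background_star(i_observed, i_intrinsic):
    """Dust optical depth along the line of sight from the dimming
    of a background star: I_obs = I_intrinsic * exp(-tau)."""
    return math.log(i_intrinsic / i_observed)

def cloudshine(tau, illumination=1.0, albedo=0.5):
    """Toy optically-thin estimate: scattered light scales linearly with tau."""
    return albedo * illumination * tau

# A star dimmed to 20% of its intrinsic brightness implies tau ~ 1.6,
# and a correspondingly brighter patch of scattered near-infrared light.
tau = tau_from_background_star(i_observed=0.2, i_intrinsic=1.0)
print(f"tau = {tau:.2f}, relative cloudshine ~ {cloudshine(tau):.2f}")
```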
To put this method to the test and use it for the first time for a quantitative estimation of the distribution of mass within a cloud, the astronomers who made the original suggestion, together with Kalevi Mattila, made observations in the near-infrared of a filament in the Corona Australis cloud. The observations were made in August 2006 with the SOFI instrument on ESO's New Technology Telescope at La Silla, in the Chilean Atacama Desert. The filament was observed for about 21 hours.
Their observations confirm that the scattering method is providing results that are as reliable as the use of background stars while providing much more detail.
"We can now obtain very high resolution images of dark clouds and so better study their internal structure and dynamics," says Juvela. "Not only is the level of details in the resulting map no longer dependent on the distribution of background stars, but we have also shown that where the density of the cloud becomes too high to be able to see any background stars, the new method can still be applied."
"The presented method and the confirmation of its feasibility will enable a wide range of studies into the interstellar medium and star formation within the Milky Way and even other galaxies," says co-author Mattila.
"This is an important result because, with current and planned near-infrared instruments, large cloud areas can be mapped with high resolution," adds Pelkonen. "For example, the VIRCAM instrument on ESO's soon-to-come VISTA telescope has a field of view hundreds of times larger than SOFI. Using our method, it will prove amazingly powerful for the study of stellar nurseries."

Notes

- Located in the constellation of the same name ('Southern Crown'), the Corona Australis molecular cloud is shaped like a 45 light year long cigar. Located about 500 light years away, it contains the equivalent of about 7000 Suns. On the sky, the dark cloud is surrounded by many beautiful 'reflection nebulae'.
- Observations of a star-forming cloud with ESO's VLT, based on near-infrared scattering, are available as ESO Press Photo 26/03.
Henri Boffin | alfa
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
The fungus-growing termite is the most common and widely distributed member of the genus Macrotermes in southern Africa. They live in nests that are kept at a constant temperature by a remarkable piece of engineering: a spiralled mound consisting of a network of vents and tunnels set around one central chimney. As their name suggests, these termites actually cultivate fungus to digest their food, which they are unable to do themselves.
The fungus-growing termites build large mounds which are typically 2 to 3 metres high. The mounds, known as termitaria, are made with a mixture of soil, saliva and faeces which dries as hard as concrete. These structures can withstand being used as a rubbing post by elephants.
A fungus-growing termite nest sits 1 m below the ground. It is kept at a constant temperature of 31 degrees centigrade by a spiralled mound which emits hot stale air through tiny holes in its walls which in turn allow cold fresh air in. This then circulates around a network of tunnels.
As well as an elaborate air conditioning system of hot and cold air being exchanged through a central chimney and a network of vents and tunnels there is an evaporation cooling system which the fungus-growing termite workers keep topped up with water.
Fungus-growing termites rely on fungus to digest their food into an edible compost. After the rains they will sometimes carry the spores of this fungus outside. This may be to disperse it further afield as a back up to their own supplies.
Fungus-growing termites work individually using a process called swarm intelligence to build complex temperature controlled mounds with specialist chambers and fungal gardens. There is no central blueprint or coordinator for this outstanding piece of engineering.
Fungus-growing termite workers will chew up and eat dead and decaying plant material. Because they are unable to fully digest it, they return to the nest and excrete it. The job of digesting these faeces is done by fungi. The resulting compost is then eaten by the colony.
Why can’t we put metal objects in a microwave?
You can, but it's pointless, and potentially dangerous.
By Leda Zimmerman
It is “counterproductive to put something metallic inside your microwave oven if you want to heat it up,” says Caroline A. Ross, Toyota Professor of Materials Science and Engineering. Microwaves are a form of electromagnetic radiation, like radio waves. They are generated by a device called a magnetron, and they pulse back and forth rapidly inside an oven at a carefully calibrated frequency. Microwaves bounce off the oven’s interior metal walls, pass through paper, glass, and plastic, but they get absorbed by food — more specifically, by the food’s water content. This absorption makes the molecules oscillate back and forth, creating heat and cooking the food from the inside out, the outside in, or uniformly, depending on where the water lies.
A metal object placed inside the oven deflects these waves away from the food, Ross explains. It sends them jumping around erratically, possibly damaging the interior of the oven. In fact, metal is so good at reflecting this radiation that the window built into the front of microwave ovens contains a fine metallic mesh you can see through, but from which microwaves cannot escape (light comes in small enough wavelengths to slip through, but not microwaves, which measure around 12 centimeters).
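The figure quoted above can be checked with the basic wave equation, wavelength = c / f. A minimal sketch; the 2.45 GHz magnetron frequency is the typical consumer-oven value and is an assumption, not a figure stated in the article:

```python
# Wavelength of microwave-oven radiation: lambda = c / f.
# 2.45 GHz is the typical consumer-oven frequency (an assumption,
# not stated in the article).
c = 299_792_458        # speed of light, m/s
f = 2.45e9             # assumed magnetron frequency, Hz

wavelength_cm = c / f * 100
print(round(wavelength_cm, 1))   # close to the "around 12 centimeters" quoted above
```

The mesh holes in the oven door are a few millimetres across, far smaller than this wavelength, which is why the microwaves cannot escape while visible light can.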
In some situations, metal placed inside a microwave can become very hot, a fact food manufacturers cleverly take advantage of, notes Ross. Some microwavable soups and pies are packaged with a thin metallic layer under a non-metallic lid, so the food trapped against the metal browns nicely. But leave those snacks in just a few minutes too long, and they might incinerate.
This same electromagnetic activity can do a number on metal. The oscillation of the microwaves can produce a concentrated electric field at corners or an edge of a metallic object, ionizing the surrounding air “so you can hear it popping away,” says Ross. You might also see sparking, which “is a little like lightning,” she adds. This kind of microwave sound and light show isn’t limited to metal. Ross sometimes puts on a demonstration for her kids: She cuts up hot dogs, creating sharp edges, and “watches the electric sparks jumping between them.”
When Ross isn’t explaining science at home, she works on small magnets that could be used in next-generation computers, making data storage more reliable, capacious, and energy efficient — research that could show up some day in a microwave or refrigerator near you. Magnetic chips from her lab might someday replace silicon memory chips, and “any advance in computation power will make it cheaper and easier to integrate with household appliances,” she says.
Posted: November 2, 2010 | <urn:uuid:1e512dd4-a5ab-4b6f-91fb-4ed64611ff93> | 3.5 | 680 | Nonfiction Writing | Science & Tech. | 40.378407 | 95,588,667 |
Widespread microplastic pollution is raising growing concerns as to its detrimental effects upon living organisms. A realistic risk assessment must stand on representative data on the abundance, size distribution and chemical composition of microplastics. Raman microscopy is an indispensable tool for the analysis of very small microplastics (<20 μm). Still, its use is far from widespread, in part due to drawbacks such as long measurement time and proneness to spectral distortion induced by fluorescence. This review discusses each drawback followed by a showcase of interesting and easily available solutions that contribute to faster and better identification of microplastics using Raman spectroscopy. Among discussed topics are: enhanced signal quality with better detectors and spectrum processing; automated particle selection for faster Raman mapping; comprehensive reference libraries for successful spectral matching. A last section introduces non-conventional Raman techniques (non-linear Raman, hyperspectral imaging, standoff Raman) which permit more advanced applications such as real-time Raman detection and imaging of microplastics.
Catarina F. Araujo, Mariela M. Nolasco, Antonio M.P. Ribeiro, Paulo J.A. Ribeiro-Claro, Water Research, Volume 142, 1 October 2018, Pages 426-440 | <urn:uuid:67c79942-00a1-4bce-882a-c17616f3f167> | 2.625 | 260 | Truncated | Science & Tech. | 16.412497 | 95,588,682 |
When chemists want to produce a lot of a substance -- such as a newly designed drug -- they often turn to catalysts, molecules that speed chemical reactions.
Many jobs require highly specialized catalysts, and finding one in just the right shape to connect with certain molecules can be difficult. Natural catalysts, such as enzymes in the human body that help us digest food, get around this problem by shape-shifting to suit the task at hand.
Chemists have made little progress in getting synthetic molecules to mimic this shape shifting behavior -- until now.
Ohio State University chemists have created a synthetic catalyst that can fold its molecular structure into a specific shape for a specific job, similar to natural catalysts.
In laboratory tests, researchers were able to cause a synthetic catalyst -- an enzyme-like molecule that enables hydrogenation, a reaction used to transform fats in the food industry -- to fold itself into a specific shape, or into its mirror image.
The study appears in the June 25 issue of the Journal of the American Chemical Society.
Being able to quickly produce a catalyst of a particular shape would be a boon for the pharmaceutical and chemical industries, said Jonathan Parquette, professor of chemistry at Ohio State.
The nature of the fold in a molecule determines its shape and function, he explained. Natural catalysts reconfigure themselves over and over again in response to different chemical cues -- as enzymes do in the body, for example.
When scientists need a catalyst of a particular shape or function, they synthesize it through a process that involves a lot of trial and error.
"It's not uncommon to have to synthesize dozens of different catalysts before you get the shape you're looking for," Parquette said. "Probably the most important contribution this research makes is that it might give scientists a quick and easy way to get the catalyst that they want."
The catalyst in this study is just a prototype for all the other molecules that the chemists hope to make, said co-author and professor of chemistry T.V. RajanBabu.
"Eventually, we want to make catalysts for many other reactions using the fundamental principles we unearthed here," RajanBabu said.
For this study, Parquette, RajanBabu, and postdoctoral researcher Jianfeng Yu synthesized batches of a hydrogenation catalyst in the lab and coaxed the molecules to change shape.
The technique that the chemists developed amounts to nudging certain atoms on the periphery of the catalyst molecule in just the right way to initiate a change in shape. The change propagates to a key chemical bond in the middle of the molecule. That bond swings like a hinge, to initiate a twist in one particular direction that spreads throughout the rest of the molecule.
Parquette offered a concrete analogy for the effect.
"Think of the Radio City Rockettes dance line. The first Rockette kicks her leg in one direction, and the rest of them kick the same leg in the same direction -- all the way down the line. A change in shape that starts at one end of a molecule will propagate smoothly all the way to the other end."
In tests, the chemists caused the catalysts to twist one way or the other, either to form one chemical product or its mirror image. They confirmed the shape of the molecules at each step using techniques such as nuclear magnetic resonance spectroscopy.
That's what the Ohio State chemists find most exciting: the molecule does not maintain only one shape. Depending on its surroundings -- the chemical "nudges" that it receives on the outside -- it will adjust.
"For many chemical reactions to work, molecules must be able to fit a catalyst like a hand fits a glove," RajanBabu said. "Our synthetic molecules are special because they’re flexible. It doesn't matter if the hand is a small hand or a big hand, the 'glove' will change its shape to fit it, as long as there is even a slight chemical preference for one of the hands. The 'flexible glove' will find a way to make a better fit, and so it will assist in specifically making one of the mirror image forms.”
Despite decades of research, scientists aren't sure exactly how this kind of propagation works. It may have something to do with the polarity of different parts of the molecule, or the chemical environment around the edges of the molecule.
But Parquette says the new study demonstrates that propagation can be used to make synthetic catalysts change shape quickly and efficiently -- an idea that wasn't apparent before. The use of adaptable synthetic molecules may even speed the discovery of new catalysts.
This work was funded by the National Science Foundation. Contact: Jonathan Parquette, (614) 292-5886; Parquette.firstname.lastname@example.org
Pam Frost Gorder | newswise
Boosted by natural magnifying lenses in space, NASA's Hubble Space Telescope has captured unique close-up views of the universe's brightest infrared galaxies, which are as much as 10,000 times more luminous than our Milky Way.
The galaxy images, magnified through a phenomenon called gravitational lensing, reveal a tangled web of misshapen objects punctuated by exotic patterns such as rings and arcs. The odd shapes are due largely to the foreground lensing galaxies' powerful gravity distorting the images of the background galaxies. The unusual forms also may have been produced by spectacular collisions between distant, massive galaxies in a sort of cosmic demolition derby.
These six Hubble Space Telescope images reveal a jumble of misshapen-looking galaxies punctuated by exotic patterns such as arcs, streaks, and smeared rings. These unusual features are the stretched shapes of the universe's brightest infrared galaxies that are boosted by natural cosmic magnifying lenses. Some of the oddball shapes also may have been produced by spectacular collisions between distant, massive galaxies. The faraway galaxies are as much as 10,000 times more luminous than our Milky Way. The galaxies existed between 8 billion and 11.5 billion years ago.
Credit: NASA, ESA, and J. Lowenthal (Smith College)
"We have hit the jackpot of gravitational lenses," said lead researcher James Lowenthal of Smith College in Northampton, Massachusetts. "These ultra-luminous, massive, starburst galaxies are very rare. Gravitational lensing magnifies them so that you can see small details that otherwise are unimaginable. We can see features as small as about 100 light-years or less across. We want to understand what's powering these monsters, and gravitational lensing allows us to study them in greater detail."
The galaxies are ablaze with runaway star formation, pumping out more than 10,000 new stars a year. This unusually rapid star birth is occurring at the peak of the universe's star-making boom more than 8 billion years ago. The star-birth frenzy creates lots of dust, which enshrouds the galaxies, making them too faint to detect in visible light. But they glow fiercely in infrared light, shining with the brilliance of 10 trillion to 100 trillion suns.
Gravitational lenses occur when the intense gravity of a massive galaxy or cluster of galaxies magnifies the light of fainter, more distant background sources. Previous observations of the galaxies, discovered in far-infrared light by ground- and space-based observatories, had hinted of gravitational lensing. But Hubble's keen vision confirmed the researchers' suspicion.
Lowenthal is presenting his results at 3:15 p.m. (EDT), June 6, at the American Astronomical Society meeting in Austin, Texas.
According to the research team, only a few dozen of these bright infrared galaxies exist in the universe, scattered across the sky. They reside in unusually dense regions of space that somehow triggered rapid star formation in the early universe.
The galaxies may hold clues to how galaxies formed billions of years ago. "There are so many unknowns about star and galaxy formation," Lowenthal explained. "We need to understand the extreme cases, such as these galaxies, as well as the average cases, like our Milky Way, in order to have a complete story about how galaxy and star formation happen."
In studying these strange galaxies, astronomers first must detangle the foreground lensing galaxies from the background ultra-bright galaxies. Seeing this effect is like looking at objects at the bottom of a swimming pool. The water distorts your view, just as the lensing galaxies' gravity stretches the shapes of the distant galaxies. "We need to understand the nature and scale of those lensing effects to interpret properly what we're seeing in the distant, early universe," Lowenthal said. "This applies not only to these brightest infrared galaxies, but probably to most or maybe even all distant galaxies."
Lowenthal's team is halfway through its Hubble survey of 22 galaxies. An international team of astronomers first discovered the galaxies in far-infrared light using survey data from the European Space Agency's (ESA) Planck space observatory, and some clever sleuthing. The team then compared those sources to galaxies found in ESA's Herschel Space Observatory's catalog of far-infrared objects and to ground-based radio data taken by the Very Large Array in New Mexico. The researchers next used the Large Millimeter Telescope (LMT) in Mexico to measure their exact distances from Earth. The LMT's far-infrared images also revealed multiple objects, hinting that the galaxies were being gravitationally lensed.
These bright objects existed between 8 billion and 11.5 billion years ago, when the universe was making stars more vigorously than it is today. The galaxies' star-birth production is 5,000 to 10,000 times higher than that of our Milky Way. However, the ultra-bright galaxies are pumping out stars using only the same amount of gas contained in the Milky Way.
So, the nagging question is, what is powering the prodigious star birth? "We've known for two decades that some of the most luminous galaxies in the universe are very dusty and massive, and they're undergoing bursts of star formation," Lowenthal said. "But they've been very hard to study because the dust makes them practically impossible to observe in visible light. They're also very rare: they don't appear in any of Hubble's deep-field surveys. They are in random parts of the sky that nobody's looked at before in detail. That's why finding that they are gravitationally lensed is so important."
These galaxies may be the brighter, more distant cousins of the ultra-luminous infrared galaxies (ULIRGS), hefty, dust-cocooned, starburst galaxies, seen in the nearby universe. The ULIRGS' star-making output is stoked by the merger of two spiral galaxies, which is one possibility for the stellar baby boom in their more-distant relatives. However, Lowenthal said that computer simulations of the birth and growth of galaxies show that major mergers occur at a later epoch than the one in which these galaxies are seen.
Another idea for the star-making surge is that lots of gas, the material that makes stars, is flooding into the faraway galaxies. "The early universe was denser, so maybe gas is raining down on the galaxies, or they are fed by some sort of channel or conduit, which we have not figured out yet," Lowenthal said. "This is what theoreticians struggle with: How do you get all the gas into a galaxy fast enough to make it happen?"
The research team plans to use Hubble and the Gemini Observatory in Hawaii to try to distinguish between the foreground and background galaxies so they can begin to analyze the details of the brilliant monster galaxies.
Future telescopes, such as NASA's James Webb Space Telescope, an infrared observatory scheduled to launch in 2018, will measure the speed of the galaxies' stars so that astronomers can calculate the mass of these ultra-luminous objects.
"The sky is covered with all kinds of galaxies, including those that shine in far-infrared light," Lowenthal said. "What we're seeing here is the tip of the iceberg: the very brightest of all."
The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.
For more information and additional images, visit: http://hubblesite.
For NASA's Hubble webpage, visit: http://www.
Smith College, Northampton, Massachusetts
Rob Gutro | EurekAlert!
Group of named regexes that form a formal grammar
Grammars are a powerful tool used to destructure text and often to return data structures that have been created by interpreting that text.
For example, Perl 6 is parsed and executed using a Perl 6-style grammar.
An example that's more practical to the common Perl 6 user is the JSON::Tiny module, which can deserialize any valid JSON file, however the deserializing code is written in less than 100 lines of simple, extensible code.
If you didn't like grammar in school, don't let that scare you off grammars. Grammars allow you to group regexes, just as classes allow you to group methods of regular code.
Named regexes have a special syntax, similar to subroutine definitions:

my regex number { \d+ [ \. \d+ ]? }

In this case, we have to specify that the regex is lexically scoped using the my keyword, because named regexes are normally used within grammars.

Being named gives us the advantage of being able to easily reuse the regex elsewhere:

say so "32.51" ~~ &number;                              # OUTPUT: «True»
say so "15 + 4.5" ~~ / <number> \s* '+' \s* <number> /; # OUTPUT: «True»
regex isn't the only declarator for named regexes. In fact, it's the least common. Most of the time, the token and rule declarators are used. These are both ratcheting, which means that the match engine won't back up and try again if it fails to match something. This will usually do what you want, but isn't appropriate for all cases:

my regex works-but-slow { .+ q }
my token fails-but-fast { .+ q }
my $s = 'Tokens won\'t backtrack, which makes them fail quicker!';
say so $s ~~ &works-but-slow; # OUTPUT: «True»
say so $s ~~ &fails-but-fast; # OUTPUT: «False» (the entire string gets taken by the .+)
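The same failure mode can be mimicked outside Raku. Here is a plain-Python sketch (not Raku, and not any Raku internals) of why a ratcheting .+ starves a literal that follows it:

```python
import re

s = "Tokens won't backtrack, which makes them fail quicker!"

# A backtracking .+ first swallows the whole string, then gives
# characters back until the trailing 'q' can match:
assert re.search(r'.+q', s) is not None

# A ratcheting .+ keeps everything it consumed, so nothing is left
# over for the literal 'q' that follows:
def ratcheting_dot_plus_then_q(text):
    consumed = len(text)               # .+ takes all, never gives back
    return text[consumed:].startswith('q')

assert ratcheting_dot_plus_then_q(s) is False
```

The ratcheting version fails quickly precisely because it never revisits its greedy choice.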
Note that non-backtracking works on terms; that is, as in the example below, once you have matched something, you will never backtrack. But when you fail to match, if there is another candidate introduced by ||, you will retry the match:

my token tok-a { .* d };
my token tok-b { .* d | bd };
say so "bd" ~~ &tok-a; # OUTPUT: «False»
say so "bd" ~~ &tok-b; # OUTPUT: «True»
The only difference between the token and rule declarators is that the rule declarator causes :sigspace to go into effect for the regex:

my token non-space-before { 'once' 'upon' 'a' 'time' }
my rule  space-between    { 'once' 'upon' 'a' 'time' }
say so 'onceuponatime'    ~~ &non-space-before; # OUTPUT: «True»
say so 'once upon a time' ~~ &non-space-before; # OUTPUT: «False»
say so 'onceuponatime'    ~~ &space-between;    # OUTPUT: «False»
say so 'once upon a time' ~~ &space-between;    # OUTPUT: «True»
Grammar is the superclass that classes automatically get when they are declared with the grammar keyword instead of class. Grammars should only be used to parse text; if you wish to extract complex data, you can add actions within the grammar, or an action object is recommended to be used in conjunction with the grammar.
For instance, if you have a lot of alternations, it may become difficult to produce readable code or subclass your grammar. In the Actions class below, the ternary in method TOP is less than ideal, and it becomes even worse the more operations we add:

grammar Calculator {
    token TOP { [ <add> | <sub> ] }
    rule  add { <num> '+' <num> }
    rule  sub { <num> '-' <num> }
    token num { \d+ }
}

class Calculations {
    method TOP ($/) { make $<add> ?? $<add>.made !! $<sub>.made; }
    method add ($/) { make [+] $<num>; }
    method sub ($/) { make [-] $<num>; }
}

say Calculator.parse('2 + 3', actions => Calculations).made;
# OUTPUT: «5»
To make things better, we can use proto regexes that look like :sym<...> adverbs on tokens:

grammar Calculator {
    rule TOP { <calc-op> }

    proto rule calc-op          {*}
          rule calc-op:sym<add> { <num> '+' <num> }
          rule calc-op:sym<sub> { <num> '-' <num> }

    token num { \d+ }
}

class Calculations {
    method TOP              ($/) { make $<calc-op>.made; }
    method calc-op:sym<add> ($/) { make [+] $<num>; }
    method calc-op:sym<sub> ($/) { make [-] $<num>; }
}

say Calculator.parse('2 + 3', actions => Calculations).made;
# OUTPUT: «5»
In the grammar, the alternation has now been replaced with <calc-op>, which is essentially the name of a group of values we'll create. We do so by defining a rule prototype with proto rule calc-op. Each of our previous alternations has been replaced by a new rule calc-op definition, and the name of the alternation is attached with the :sym<> adverb.
In the actions class, we now got rid of the ternary operator and simply take the .made value from the $<calc-op> match object. And the actions for individual alternations now follow the same naming pattern as in the grammar: method calc-op:sym<add> and method calc-op:sym<sub>.
The real beauty of this method can be seen when you subclass that grammar and actions class. Let's say we want to add a multiplication feature to the calculator:
grammar BetterCalculator is Calculator {
    rule calc-op:sym<mult> { <num> '*' <num> }
}

class BetterCalculations is Calculations {
    method calc-op:sym<mult> ($/) { make [*] $<num> }
}

say BetterCalculator.parse('2 * 3', actions => BetterCalculations).made;
# OUTPUT: «6»
All we had to add are an additional rule and action to the calc-op group and the thing works, all thanks to proto regexes.
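This open-for-extension pattern is not Raku-specific. A plain-Python analogy (names here are illustrative, not part of any Raku API) of adding one operation by subclassing a name-dispatched calculator:

```python
class Calculations:
    """Dispatch an operation name to a same-named method."""
    def add(self, a, b): return a + b
    def sub(self, a, b): return a - b
    def calc(self, op, a, b):
        # look the operation up by name, just as the grammar looks
        # up candidates in the calc-op group
        return getattr(self, op)(a, b)

class BetterCalculations(Calculations):
    # one new method plays the role of one new grammar/action candidate
    def mult(self, a, b): return a * b

print(BetterCalculations().calc("mult", 2, 3))  # 6
```

The subclass never touches the dispatch logic; it only contributes a new candidate, which is the point of the proto-regex design.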
The TOP token is the default first token attempted to match when parsing with a grammar. Note that if you're parsing with the .parse method, token TOP is automatically anchored to the start and end of the string. If you don't want to parse the whole string, look up .subparse. Using rule TOP or regex TOP is also acceptable.
A different token can be chosen to be matched first using the :rule named argument to .parse, .subparse, or .parsefile. These are all Grammar methods.
When rule instead of token is used, any whitespace after an atom is turned into a non-capturing call to ws, written as <.ws>, where . means non-capturing. That is to say:

rule entry { <key> '=' <value> }

is the same as:

token entry { <key> <.ws> '=' <.ws> <value> <.ws> }
The default ws matches one or more whitespace characters (\s) or a word boundary (<|w>):

# First <.ws> matches word boundary at the start of the line
# and second <.ws> matches the whitespace between 'b' and 'c'
say 'ab c' ~~ /<.ws> ab <.ws> c /; # OUTPUT: «「ab c」»

# Failed match: there is neither any whitespace nor a word
# boundary between 'a' and 'b'
say 'ab' ~~ /. <.ws> b/;           # OUTPUT: «Nil»

# Successful match: there is a word boundary between ')' and 'b'
say ')b' ~~ /. <.ws> b/;           # OUTPUT: «「)b」»
You can also redefine the default ws:

grammar Foo {
    rule TOP { \d \d }
}.parse: "4 \n\n 5"; # Succeeds

grammar Bar {
    rule TOP { \d \d }
    token ws { \h* }
}.parse: "4 \n\n 5"; # Fails
The <sym> token can be used inside proto regexes to match the string value of the :sym adverb for that particular regex:

grammar Foo {
    token TOP { <letter>+ }
    proto token letter {*}
    token letter:sym<P> { <sym> }
    token letter:sym<e> { <sym> }
    token letter:sym<r> { <sym> }
    token letter:sym<l> { <sym> }
    token letter:sym<*> {   .   }
}.parse("I ♥ Perl", actions => class {
    method TOP($/) { make $<letter>.grep(*.<sym>).join }
}).made.say; # OUTPUT: «Perl»
This comes in handy when you're already differentiating the proto regexes with the strings you're going to match, as using the <sym> token prevents repetition of those strings.
<?> is the always succeed assertion. When used as a grammar token, it can be used to trigger an Action class method. In the following grammar we look for Arabic digits and define a succ token with the always succeed assertion.
In the action class, we use calls to the succ method to do set up (in this case, we prepare a new element in @!numbers). In the digit method, we convert an Arabic digit into a Devanagari digit and add it to the last element of @!numbers. Thanks to succ, the last element will always be the number for the currently parsed digit.
grammar Digifier {
    rule TOP {
        [ <.succ> <digit>+ ]+
    }
    token succ  { <?> }
    token digit { <[0..9]> }
}

class Devanagari {
    has @!numbers;
    method digit ($/) { @!numbers.tail ~= <० १ २ ३ ४ ५ ६ ७ ८ ९>[$/] }
    method succ  ($)  { @!numbers.push: '' }
    method TOP   ($/) { make @!numbers[^(*-1)] }
}

say Digifier.parse('255 435 777', actions => Devanagari.new).made;
# OUTPUT: «(२५५ ४३५ ७७७)»
It's fine to use methods instead of rules or tokens in a grammar, as long as they return a Cursor:

grammar DigitMatcher {
    method TOP (:$full-unicode) {
        $full-unicode ?? self.num-full !! self.num-basic;
    }
    token num-full  { \d+ }
    token num-basic { <[0..9]>+ }
}
The grammar above will attempt different matches depending on the arguments provided by parse methods:
say +DigitMatcher.subparse: '12७१७९०९', args => \(:full-unicode);  # OUTPUT: «12717909»
say +DigitMatcher.subparse: '12७१७९०९', args => \(:!full-unicode); # OUTPUT: «12»
Variables can be defined in tokens by prefixing the lines of code defining them with :. Arbitrary code can be embedded anywhere in a token by surrounding it with curly braces. This is useful for keeping state between tokens, which can be used to alter how the grammar will parse text. Using dynamic variables (variables with the * twigil, such as $*, @*, and %*) in tokens cascades down through all tokens defined thereafter within the one where it's defined, avoiding having to pass them from token to token as arguments.
One use for dynamic variables is guards for matches. This example uses guards to explain which regex classes parse whitespace literally (the grammar below is a sketch consistent with the matches shown afterwards):

grammar GrammarAdvice {
    rule TOP {
        :my $*USE-WS;
        'use' <type> 'for' <significance> 'whitespace by default'
    }
    token type {
        | 'rules'   { $*USE-WS = True  }
        | 'tokens'  { $*USE-WS = False }
        | 'regexes' { $*USE-WS = False }
    }
    token significance {
        | <?{ $*USE-WS == True  }> 'significant'
        | <?{ $*USE-WS == False }> 'insignificant'
    }
}
Here, text such as "use rules for significant whitespace by default" will only match if the state assigned by whether rules, tokens, or regexes are mentioned matches with the correct guard:
say GrammarAdvice.subparse("use rules for significant whitespace by default");
# OUTPUT: «use rules for significant whitespace by default»
say GrammarAdvice.subparse("use tokens for insignificant whitespace by default");
# OUTPUT: «use tokens for insignificant whitespace by default»
say GrammarAdvice.subparse("use regexes for insignificant whitespace by default");
# OUTPUT: «use regexes for insignificant whitespace by default»
say GrammarAdvice.subparse("use regexes for significant whitespace by default");
# OUTPUT: #<failed match>
A successful grammar match gives you a parse tree of Match objects, and the deeper that match tree gets, and the more branches in the grammar are, the harder it becomes to navigate the match tree to get the information you are actually interested in.
To avoid the need for diving deep into a match tree, you can supply an actions object. After each successful parse of a named rule in your grammar, it tries to call a method of the same name as the grammar rule, giving it the newly created Match object as a positional argument. If no such method exists, it is skipped.
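The dispatch rule just described (call the same-named method if it exists, otherwise skip) can be sketched in a few lines of plain Python; the names here are illustrative and not part of any Raku API:

```python
class Actions:
    def number(self, match):
        return int(match)            # plays the role of `make` in an action

def after_rule_match(actions, rule_name, match):
    # Called after a named rule matches; silently skipped when the
    # actions object has no method of that name.
    method = getattr(actions, rule_name, None)
    return method(match) if method is not None else None

assert after_rule_match(Actions(), "number", "40") == 40
assert after_rule_match(Actions(), "no-such-rule", "40") is None
```

The point of the pattern is that the grammar stays free of data-extraction code, while the actions object mirrors the grammar's rule names.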
Here is a contrived example of a grammar and actions in action:
grammar TestGrammar {
    token TOP { \d+ }
}

class TestActions {
    method TOP ($/) {
        make(2 + $/);
    }
}

my $match = TestGrammar.parse('40', actions => TestActions.new);
say $match;       # OUTPUT: «「40」»
say $match.made;  # OUTPUT: «42»
An instance of TestActions is passed as named argument actions to the parse call, and when token TOP has matched successfully, it automatically calls method TOP, passing the match object as an argument.
To make it clear that the argument is a match object, the example uses
$/ as a parameter name to the action method, though that's just a handy convention, nothing intrinsic.
$match would have worked too. (Though using
$/ does give the advantage of providing
$<capture> as a shortcut for
A slightly more involved example follows:
grammar KeyValuePairs {
    token TOP {
        [<pair> \n+]*
    }
    token ws { \h* }
    rule pair {
        <key=.identifier> '=' <value=.identifier>
    }
    token identifier { \w+ }
}

class KeyValuePairsActions {
    method identifier($/) { make ~$/ }
    method pair($/) {
        make $<key>.made => $<value>.made;
    }
    method TOP($/) {
        make $<pair>».made;
    }
}

my $actions = KeyValuePairsActions;
my $res = KeyValuePairs.parse(q:to/EOI/, :$actions).made;
    second=b
    hits=42
    perl=6
    EOI

for @$res -> $p {
    say "Key: $p.key()\tValue: $p.value()";
}
This produces the following output:
Key: second    Value: b
Key: hits      Value: 42
Key: perl      Value: 6
The rule pair, which parses a pair separated by an equals sign, aliases the two calls to token identifier to separate capture names to make them available more easily and intuitively. The corresponding action method constructs a Pair object and uses the .made property of the sub-match objects. So it (like the action method TOP) exploits the fact that action methods for submatches are called before those of the calling/outer regex; in other words, action methods are called in post-order.
The action method TOP simply collects all the objects that were .made by the multiple matches of the pair rule, and returns them in a list.
Also note that KeyValuePairsActions was passed as a type object to method parse, which was possible because none of the action methods use attributes (which would only be available in an instance).
In other cases, action methods might want to keep state in attributes. Then of course you must pass an instance to method parse.
Token ws is special: when :sigspace is enabled (and it is when we are using rule), it replaces certain whitespace sequences. This is why the spaces around the equals sign in rule pair work just fine and why the whitespace before the closing } does not gobble up the newlines looked for in token TOP.
By CALEB JONES and AUDREY McAVOY
The Associated Press
PAHOA, Hawaii — White plumes of acid and extremely fine shards of glass billowed into the sky over Hawaii as molten rock from Kilauea volcano poured into the ocean, creating yet another hazard from an eruption that began more than two weeks ago.
Authorities on Sunday warned the public to stay away from the toxic steam cloud, which is formed by a chemical reaction when lava touches seawater.
Further upslope, lava continued to gush out of large cracks in the ground that formed in residential neighborhoods in a rural part of the Big Island. The molten rock formed rivers that bisected forests and farms as it meandered toward the coast.
The rate of sulfur dioxide gas shooting from the ground fissures tripled, leading Hawaii County to repeat warnings about air quality. At the volcano’s summit, two explosive eruptions unleashed clouds of ash. Winds carried much of the ash toward the southwest.
Joseph Kekedi, an orchid grower who lives and works about 3 miles (5 kilometers) from where lava dropped into the sea, said luckily the flow didn’t head toward him. At one point, it was about a mile upslope from his property in the coastal community of Kapoho.
He said residents can’t do much but stay informed and be ready to get out of the way.
“Here’s nature reminding us again who’s boss,” Kekedi said.
Scientists said the steam clouds at the spots where lava entered the ocean were laced with hydrochloric acid and fine glass particles that can irritate the skin and eyes and cause breathing problems.
The lava haze, or “laze,” from the plume spread as far as 15 miles (24 kilometers) west of where the lava met the ocean on the Big Island’s southern coast. It was just offshore and running parallel to the coast, said U.S. Geological Survey scientist Wendy Stovall.
Scientists said the acid in the plume was about as corrosive as diluted battery acid. The glass was in the form of fine glass shards. Getting hit by it might feel like being sprinkled with glitter.
“If you’re feeling stinging on your skin, go inside,” Stovall said. Authorities warned that the plume could shift direction if the winds changed.
The Coast Guard said it was enforcing a safety zone extending 984 feet (300 meters) around the ocean entry point.
Coast Guard Lt. Cmdr. John Bannon said in a statement Sunday that “getting too close to the lava can result in serious injury or death.”
Gov. David Ige told reporters in Hilo that the state was monitoring the volcano and keeping people safe.
“Like typical eruptions and lava flows, it’s really allowing Madam Pele to run its course,” he said, referring to the Hawaiian goddess of volcanoes and fire.
Ige said he was thankful that the current flows weren’t risking homes and hoped it would stay that way.
On Saturday, the eruption claimed its first major injury. David Mace, a spokesman for the Federal Emergency Management Agency who was helping Hawaii County respond to the disaster, said a man was struck in the leg by a flying piece of lava. He didn’t have further details, including what condition the man was in.
Kilauea has burned some 40 structures, including two dozen homes, since it began erupting in people’s backyards in the Leilani Estates neighborhood on May 3. Some 2,000 people have evacuated their homes, including 300 who were staying in shelters.
In recent days, the lava began to move more quickly and emerge from the ground in greater volume. Scientists said that's because the lava that first erupted was magma left over from a 1955 eruption that had been stored in the ground for the past six decades. The molten rock that began emerging over the past few days was from magma that has recently moved down the volcano's eastern flank from one or two craters that sit further upslope — the Puu Oo crater and the summit crater.
The new lava is hotter, moves faster and has spread over a wider area.
Scientists say they don’t know how long the eruption will last. The volcano has opened more than 20 vents, including four that have merged into one large crack. This vent has been gushing lava high into the sky and sending a river of molten rock toward the ocean at about 300 yards (274 meters) per hour.
Hawaii tourism officials have stressed that most of the Big Island remains unaffected by the eruption and is open for business.
McAvoy reported from Honolulu. Associated Press journalists Jae C. Hong and Marco Garcia in Pahoa contributed to this report. | <urn:uuid:e5b255b2-a68e-4ce9-9fbc-f3d213422f89> | 2.59375 | 1,020 | Truncated | Science & Tech. | 57.685723 | 95,588,724 |
Latest research suggests enormous black hole drove 2 binary stars to merge into 1
For years, astronomers have been puzzled by a bizarre object in the center of the Milky Way that was believed to be a hydrogen gas cloud headed toward our galaxy's enormous black hole.
Having studied it during its closest approach to the black hole this summer, UCLA astronomers believe that they have solved the riddle of the object widely known as G2.
A team led by Andrea Ghez, professor of physics and astronomy in the UCLA College, determined that G2 is most likely a pair of binary stars that had been orbiting the black hole in tandem and merged together into an extremely large star, cloaked in gas and dust — its movements choreographed by the black hole's powerful gravitational field. The research is published today in the journal Astrophysical Journal Letters.
Astronomers had figured that if G2 had been a hydrogen cloud, it could have been torn apart by the black hole, and that the resulting celestial fireworks would have dramatically changed the state of the black hole.
"G2 survived and continued happily on its orbit; a simple gas cloud would not have done that," said Ghez, who holds the Lauren B. Leichtman and Arthur E. Levine Chair in Astrophysics. "G2 was basically unaffected by the black hole. There were no fireworks."
Black holes, which form out of the collapse of matter, have such high density that nothing can escape their gravitational pull — not even light. They cannot be seen directly, but their influence on nearby stars is visible and provides a signature, said Ghez, a 2008 MacArthur Fellow.
Ghez, who studies thousands of stars in the neighborhood of the supermassive black hole, said G2 appears to be just one of an emerging class of stars near the black hole that are created because the black hole's powerful gravity drives binary stars to merge into one. She also noted that, in our galaxy, massive stars primarily come in pairs. She says the star suffered an abrasion to its outer layer but otherwise will be fine.
Ghez and her colleagues — who include lead author Gunther Witzel, a UCLA postdoctoral scholar, and Mark Morris and Eric Becklin, both UCLA professors of physics and astronomy — conducted the research at Hawaii's W.M. Keck Observatory, which houses the world's two largest optical and infrared telescopes.
When two stars near the black hole merge into one, the star expands for more than 1 million years before it settles back down, said Ghez, who directs the UCLA Galactic Center Group. "This may be happening more than we thought. The stars at the center of the galaxy are massive and mostly binaries. It's possible that many of the stars we've been watching and not understanding may be the end product of mergers that are calm now."
Ghez and her colleagues also determined that G2 appears to be in that inflated stage now. The body has fascinated many astronomers in recent years, particularly during the year leading up to its approach to the black hole. "It was one of the most watched events in astronomy in my career," Ghez said.
Ghez said G2 now is undergoing what she calls a "spaghetti-fication" — a common phenomenon near black holes in which large objects become elongated. At the same time, the gas at G2's surface is being heated by stars around it, creating an enormous cloud of gas and dust that has shrouded most of the massive star.
Witzel said the researchers wouldn't have been able to arrive at their conclusions without the Keck's advanced technology. "It is a result that in its precision was possible only with these incredible tools, the Keck Observatory's 10-meter telescopes," Witzel said.
The telescopes use adaptive optics, a powerful technology pioneered in part by Ghez that corrects the distorting effects of the Earth's atmosphere in real time to more clearly reveal the space around the supermassive black hole. The technique has helped Ghez and her colleagues elucidate many previously unexplained facets of the environments surrounding supermassive black holes.
"We are seeing phenomena about black holes that you can't watch anywhere else in the universe," Ghez added. "We are starting to understand the physics of black holes in a way that has never been possible before."
The research was funded by the National Science Foundation, the Lauren Leichtman and Arthur Levine Chair in Astrophysics, the Preston Family Graduate Student Fellowship and the Janet Marott Student Travel Awards. The W. M. Keck Observatory is operated as a scientific partnership among the University of California, Caltech and NASA.
Stuart Wolpert | EurekAlert!
At the moment CEGUI only supports image formats that contain raster graphics. For a graphical user-interface library such as CEGUI, it is of particular interest to provide a solution for scalable content, given the different screen resolutions of the devices it may run on. Raster graphics would have to be downscaled and upscaled in relation to the screen size in order to adapt, which is not optimal; only vector graphics provide a real solution for this issue, as they can be rendered at any size. SVG is an open format for storing vector graphics in XML. A parser could be integrated into the CEGUI library to interpret files of this format and to import SVG vector graphics into CEGUI. In addition, the renderers of CEGUI (OpenGL, DirectX, Ogre, Irrlicht) would have to be extended so that they are able to render vector graphics as geometry in real time.
The suggested project would consist of two main parts. One is the SVG parsing and subsequent management of SVG-based images inside the CEGUI core library. The other part is the rendering, which has to be adapted for the different renderers that CEGUI provides and which has to collaborate with the new SVG-related classes inside CEGUI.
Custom SVG parser
Although SVG parsing could theoretically be done with an existing external library, this would give CEGUI an additional dependency, the library would have to have an appropriate open-source license, and the parser could not be modified for CEGUI's specific requirements. For these reasons, a custom parser is a better solution. This parser would be integrated into CEGUI and could be modeled on the code of existing open-source SVG parsers, such as those in the cairo library and the Skia graphics library. The parser will translate the SVG file into a data structure inside a new CEGUI class, which could carry the name SVGImage and which will be designed as a subclass of CEGUI::Image. Before rendering the SVGImage, the shapes defined in it need to be tessellated into vertex data that can then be rendered in real time by OpenGL or Direct3D.
Custom vertex-based vector graphics rendering
There are two options for rendering vector graphics in an application. One way is to create a texture at a specific resolution from the vector graphics beforehand, and later render the rasterized vector graphics as a textured rectangle during the actual rendering process. This has the benefit of being easy to do; no care has to be taken of rendering performance, because the texture is only created once and usually not recreated. However, it also brings a big disadvantage, namely that zooming and animation are not possible without recreating the texture each frame. This practically defeats the main benefit for which vector graphics are usually chosen: scalability.

A better solution is to dynamically tessellate the vector graphics into vertex geometry whenever a new scaling is needed. The geometry can then be rendered with appropriate vertex and fragment shaders by the different renderers of CEGUI. For this to be possible, an optional stage before rendering an SVGImage has to be implemented, which tessellates the general shapes defined in SVG, such as lines, triangles, rectangles, ellipses and circles, into triangle meshes. These meshes can then be rendered with a vertex shader and a fragment shader. The meshes should only be generated when a specific scale is needed, and then kept for rendering until something changes that requires a different scale. SVG supports two types of gradients: linear and radial. These can be emulated with a fragment shader and also require the vertex shader to be adapted slightly. Although it would be possible to combine full support for all gradients into one big shader with a number of if-cases, it is probably advisable to separate them and render the two types of gradient and the solid geometry each in a respective render batch with the appropriate shaders set.
Also, considering that Direct3D and OpenGL require different shader program languages, the shaders will have to be written both in HLSL and GLSL.
Also, for batching to be possible, the renderers have to be changed to use geometry with depth values and render with the depth buffer on. This requires additional changes to all renderers.
For the geometry rendering, it would also be important to at least optionally support hardware anti-aliasing, as offered by OpenGL. A public function on the renderer could be provided for this purpose, and similar functions could be added to the other renderers.
SVG supports animation. Because of this, future animation support should be considered during development, although its implementation will not be part of this project. For animation support it is relevant how and when tessellated geometry is invalidated and recached. The SVGImage class should optimally be designed so that it will invalidate and recache the tessellated geometry on the fly with lazy updates. It is important to define the specific situations that require a retessellation. These are:
- The SVGImage class' vector graphics data has changed. This happens at the initial loading of the SVG file. During animation, this would happen every frame, while the animation is running.
- The size at which an SVGImage class is being displayed has changed. This occurs whenever the widget using the SVGImage is resized (which can also happen when a parent or the render window is resized).
The tessellation can slow down the program when complex or big SVG files are being processed, especially when it has to be done every frame (animation). To avoid stalling the program, a render frame could be skipped so the processing of data for tessellation can be done in parallel, in a separate thread. However, threading support is purely optional.
I think the best way to work on the project is by working in iterations.
The first iteration(milestone) should provide the following:
- Be able to read in a basic SVG file containing a definition for a line/rectangle
- Parse it into a data structure stored inside the SVGImage class
- Tessellate it into geometry
- Use the geometry inside CEGUIOpenGL3Renderer
All of these steps are new and require a novel implementation in CEGUI.
This way I will have a proof of concept and will also be able to see possible issues early on. Later on I will have two more iterations, which each improve all of these steps. In the second iteration, all types of SVG shapes will be handled and the Ogre renderer will be supported (HLSL shaders). The third and last iteration will add gradients, by using fragment shaders, and support the different Direct3D renderers and the Irrlicht renderer.
[Optionally] CEGUI Imagesets supporting SVG
For usability it should be possible to define SVG images in a CEGUI Imageset file (.imageset). This way, they can be used analogously to raster graphics images and loaded automatically. The existing ChainedXMLLoader can be used to load the SVG XML files from within the imageset XML file. However, each image definition would refer to one SVG file, instead of having a texture atlas and referring to sections of the image. An alternative would be to use groups ("g" elements) defined inside the SVG file, where each lowest-level group could be interpreted as containing one vector-graphic image. The group's ID could be used as the name of the SVGImage in CEGUI.
27.5. - 17.6. Conduct research about existing SVG parsers. Plan the approximate design of the parser and the SVGImage class.
17.6. - 7.7. During this time I will have some tests and submissions for university, so I will only be able to work in smaller amounts. I will begin writing the parser and write a basic SVGImage class. In the end, a line or rectangle should be able to be saved in the internal data structure of an SVGImage class.
7.7. - 14.7. Add a tessellation stage in connection with the SVGImage, so that a line or rectangle can be parsed and then tessellated into geometry that is suitable to be used in the renderers later.
14.7. - 21.7. MILESTONE 1: Adapt the OpenGL3 renderer (which should serve as the basis for proof-of-concept implementations) so that it can render a simple line/rectangle with solid fill and without contour. The renderer will have to be changed to accept the triangles from the tessellation stage and render them using custom shader(s). At this point we should be able to load a rectangle/line from an SVG file and render it using CEGUI.
21.07.-25.8. Improve the SVGImage class and tessellator to support both the line contours of rectangles and their filling (solid only).
25.8. - 30.8. Add support for SVG shapes such as circles, ellipses and polygon meshes, so that all (important) shapes are covered.
30.8. - 4.8. Integrate the rendering adaptations of OpenGL3 into the OpenGL renderer as well. This requires an exception in case SVG usage is attempted and shaders are not supported by the user's OpenGL version. If no shaders are available, SVG won't be supported at all.
4.8. - 14.8. MILESTONE 2: Change the Ogre renderer to be able to use the tessellated geometry. Since Ogre can render with both Direct3D and OpenGL, the shader programs will have to be provided for both libraries and be generated from within the OgreRenderer. This requires porting the GLSL code to HLSL. Now all important shapes defined in SVG should be renderable both in Ogre (D3D or OGL) and in OpenGL3 and OpenGL.
14.8. - 25.8. Add GLSL shaders, so that the SVG gradients (linear, radial) can be supported. Port the shaders to HLSL and test in Ogre.
25.8. - 7.9. MILESTONE 3: Adapt the various D3D renderers for the changes needed to support triangle rendering using the shaders. Now gradients are supported and SVG should be able to be rendered in all renderers of CEGUI.
7.9.- X Buffer time, in case issues appear. Add comments and refactor code.
Edited By: John T Tanacredi, Mark L Botton and David Smith
662 pages, 208 figs (33 in colour), tabs
The four living species of horseshoe crabs face a set of growing threats to their survival, including the erosion and/or man-made alteration of essential spawning habitat, coastal pollution, and overfishing.
Horseshoe crabs are 'living fossils', with a more than 200 million year evolutionary history. Their blood provides a reagent, known as Limulus amebocyte lysate or LAL, that clots in the presence of minute quantities of bacterial endotoxin; the LAL test is the state-of-the-art methodology used to ensure that pharmaceuticals and surgical implants are free of contamination. Horseshoe crabs are an integral part of the food web in coastal marine ecosystems, and their eggs provide essential food for shorebirds in the Delaware Bay estuary each spring.The commercial fishery for horseshoe crabs, which utilizes animals for bait, contributes to the economies of coastal communities.
This book consists of papers presented at the 2007 International Symposium on the Science and Conservation of Horseshoe Crabs.
- Current Status and Assessment
- Biology, Ecology, and Multi-species Interactions
- Culture and Captive Breeding
- Habitat Requirements, Threats, and Conservation
- Human Uses: Traditional and Biomedical
- Conservation Management
- Public Awareness and Community-based Conservation
Dr. John T. Tanacredi is Chair and Professor of Earth and Marine Sciences at Dowling College. He is a Research Associate in the Invertebrate Zoology Department at the American Museum of Natural History. Dr. Mark L. Botton is a Professor of Biology at Fordham University. Dr. David R. Smith is a Research Biological Statistician for the U.S. Geological Survey.
A heat engine using 130 mg of helium as the working substance follows the cycle show in the figures (attached).
Please do not place your response in a .pdf or .cdx format, but Word documents are okay. Thanks!
Please see attached for the actual problem.
For ideal gas, PV = nRT
Molecular weight of He = 4 g/mol
Weight of He = 130 mg = 0.13g
n = Number of moles of ...
The expert determines the temperature of the gas at different points. | <urn:uuid:c1f3c441-9c3e-4f23-9eb3-a40b3fdaa43f> | 2.78125 | 136 | Q&A Forum | Science & Tech. | 88.888077 | 95,588,791 |
SCIENTISTS at Bangor University have made a groundbreaking climate change discovery.
Dr Nathalie Fenner and Professor Chris Freeman say droughts cause peatlands to release more carbon dioxide into the atmosphere than previously thought and have published their findings in a research paper.
Peatlands usually lock in carbon dioxide from plants due to their wetness.
But droughts cause them to release the gases.
Dr Fenner said: “What we previously perceived as a ‘spike’ in the rate of carbon loss during drying out, now appears far more prolonged.” | <urn:uuid:adbafc71-1e5e-48e8-931d-bba1a14976a8> | 3.765625 | 123 | News Article | Science & Tech. | 41.81886 | 95,588,809 |
At the end of August, an unusual expedition under Russian leadership will leave for the Arctic Ocean. One of the participants is Jürgen Graeser of the Alfred Wegener Institute for Polar and Marine Research, one of the research centres of the Helmholtz Association. For the first time in the history of Russian research using drifting stations, a German researcher will take part in the North Pole drifting station NP-35.
With his data recordings of the atmosphere, Graeser will supplement measurements carried out by the Russian project partners, who will be focusing their investigations on sea ice, primarily performing measurements close to the ice. Through this collaboration, the project partners intend to advance the currently patchy data situation in the Arctic and hope to gain a better understanding of these key regions for global climate change.
Experience with regular Russian drifting stations in the pack ice dates back to 1952 when the research station NP-2 was launched. Whereas previous drifting stations were dedicated exclusively to Russian research, the international station planned within the framework of the International Polar Year will, for the first time, include a German participant of the Alfred Wegener Institute, Jürgen Graeser. The planned project will be carried out in conjunction with the Arctic and Antarctic Research Institute (AARI) in St Petersburg. On August 29, 2007, a total of 36 expedition participants will board the Russian research vessel ‘Akademik Fedorov’ in the Siberian harbour of Tiksi.
In the vicinity of Wrangel Island, i.e. between 80 and 85 degrees northern latitude and between 170 degrees eastern and 170 degrees western longitude, a stable ice floe will be chosen as the base for the drifting station ‘North Pole 35’ (NP-35). The selection will be based on long-term satellite observations of the ice and will be verified by helicopter from the research vessel. During the course of winter, the ice floe will drift in the Arctic Ocean and across the North Pole. During the drift, a variety of measurements carried out at the station will provide information about current climate change. The ‘Akademik Fedorov’ is scheduled to evacuate the station after approximately one year. With regard to over-wintering personnel, it is planned to use ‘Polar 5’, the research aircraft of the Alfred Wegener Institute, to fly out Jürgen Graeser and five Russian colleagues after approximately eight months, in April 2008. For this purpose, a landing strip will be constructed on the ice.
The research programme
The Russian colleagues will be investigating the upper ocean layer and sea ice, as well as snow cover. Atmospheric measurements of meteorological parameters such as temperature, wind, humidity and air pressure, will be added through recordings of trace gases such as carbon dioxide and ozone. Jürgen Graeser will examine two topics. On the one hand, he will use a captive balloon system to measure meteorological parameters in the so-called planetary boundary layer, which is the lowest layer of the atmosphere extending to approximately 1500 metres. In addition, he will use ozone sensors to measure the ozone layer in the stratosphere up to approximately 30 kilometres altitude.
Jürgen Graeser has been a technician at the research unit Potsdam of the Alfred Wegener Institute and has many years of experience with Arctic and Antarctic expeditions. His special areas of interest are aerology and meteorology. His expertise includes balloon-based, radiation and meteorological measurements.
The Arctic represents a key region for global climate change. Measurements of sea ice and atmospheric parameters in the Arctic Ocean are still incomplete. Through the current project, researchers intend to identify key processes in the atmosphere and alterations of the sea ice cover in order to examine the coupling of sea ice and atmosphere. The project is one of many during the International Polar Year. More than 50,000 scientists and technical staff from over 60 countries are joining force to explore the polar regions. Their goal is to study the role of the Arctic and Antarctic in shaping the climate and the earth’s ecosystems.
Project ‘Planetary Boundary Layer’
The planetary boundary layer (PBL) identifies the lowest atmospheric layer, extending from the surface to approximately 1500 metres altitude. In the Arctic, this layer is characterised by frequent temperature inversions, i.e. by very stable atmospheric stratification which suppresses vertical movements of the air. A realistic representation of the planetary boundary layer is crucial for the construction of climate models, as it is this layer that determines the lower marginal conditions for all calculations. In particular, the investigation of processes influenced directly by the boundary layer requires exact knowledge of the state of the PBL.
AWI scientists in Potsdam use the regional climate model HIRHAM to construct mesoscale fields of pressure, temperature and wind in which cyclones (low pressure regions) and their trajectories are identified. Specifically, they are examining the relationship between cyclone development and various surface conditions (e.g. sea ice cover). Elucidating the connection between the Arctic planetary boundary layer and mesoscale cyclones and their trajectories is the goal of these investigations.
Project ‘Ozone Layer’
Discovery of the Antarctic ozone hole in 1985 triggered intensive exploration efforts of the polar ozone layer. This layer is located between 15 and 25 kilometres altitude in the stratosphere. Many chemical processes of ozone depletion in the Antarctic have since been explained, and the connection of ozone destruction with anthropogenic emissions of chlorofluorocarbons (CFCs) and halons has been proven beyond doubt.
During specific winters, severe ozone losses over the Arctic, and hence much closer to home, have already contributed to a reduction in ozone layer thickness over Europe – leading to an increase of harmful ultraviolet radiation on the earth's surface. However, to date the ozone depletion in the Arctic is not as pronounced as over the Antarctic. Compared to the Antarctic, ozone layer thickness in the Arctic is much more variable, with only about half of the observed inter-annual variability explained by known chemical mechanisms. Hence, dynamic processes which remain only partly understood are just as important in determining the thickness of the ozone layer over the Arctic as the chemical decomposition of ozone.
At the Arctic station of the Alfred Wegener Institute in Ny Ålesund on Spitsbergen (79°N), for instance, a strong annual ozone variation of 30 percent was detected at an altitude of 25 to 30 kilometres. Apparently, it is synchronised with variability of the sun, but cannot be explained by known chemical or other dynamic processes. Investigating the cause of this variability will be the focus of ozone measurements at NP-35. Data records from the drifting station will, for the first time, produce high resolution vertical profiles of ozone distribution in the central Arctic, north of 82 degrees latitude – currently a blank spot on the global ozone distribution map. These unique data will be combined with existing ozone profiles from the Arctic and Sub-Arctic. Calculations of air movement in conjunction with chemical models will contribute to an understanding of seasonal and annual variability of stratospheric ozone in the Arctic.
Angelika Dummermuth | EurekAlert!
Transparent Solar Cell Film Has Clear Advantages
A new solar film created at UCLA is a game-changing new kind of solar cell. An organic polymer, it is nearly fully transparent and is more durable and malleable than silicon, which forms the substrate of traditional solar power cells.
"(A solar film) harvests light and turns it into electricity. In our case, we harvest only the infrared part," says Professor Yang Yang at UCLA's California Nanosystems Institute, who has headed up the research on the new photovoltaic polymer. Absorbing only the infrared light, he explains, means the material doesn't have to be dark or black or blue, like most silicon photovoltaic panels. It can be clear. "We have developed a material that absorbs infrared and is all transparent to the visible light."
"And then we also invented a new electrode, a metal, that is also transparent. So we created a new solar cell," Yang adds.
Well, the metal is actually not transparent, Yang points out; it's just so small that you can't see it. The new polymer incorporates silver nanowires about 0.1 microns thick, about one-thousandth the width of a human hair, and titanium dioxide nanoparticles as an electrode. When in liquid form, it is as clear as a glass of water, and when applied to a hard, flat surface as a film it is meant to be invisible to the eye.
Professor Yang Yang of UCLA’s Nanosystems Institute shares his innovative plans for the film saying, “Whenever people think about solar, they think about the big silicon panels that they put on their roof, or the big solar farms that SoCal Edison builds out in the desert. But for the future of energy use, we must think about how to harvest energy whenever and wherever it is possible. If we can change the concept that energy has to come from one source, which is the power company, that the supply should not be subject to the limitations of the power grid, a lot of new things can happen.”
(Via PIE Global)
The film may eventually be sprayed onto surfaces, which would bring low-cost solar energy to everyone's homes, cars and electronic devices, according to Dr. Yang.
Science fiction fans fondly remember their disbelief when they first read about the idea of a solar power cell you could just spray on in Larry Niven's 1995 novel The Woman in Del Rey Crater.
(Story submitted 11/14/2012)
Dynamics of Peel-Harvey Estuary, Southwest Australia
Peel-Harvey Estuary is an ultra-shallow multi-basin estuary in which salinity varies from almost fresh in winter to hypersaline in autumn. The physical dynamics of the estuary are controlled by riverflow, low-frequency variations of ocean water level, wind forcing, evaporation, and diurnal tidal currents in the channels connecting the basins. A study of these processes is presented with examples of observational data and numerical models. A discussion is also given of the importance of physical processes to the occurrence of annual blooms of Nodularia spumigena and the cycling of phosphorus (which is input to the estuary by riverflow) between plankton and sediments. © 1995.
Keywords: Wind Stress, Bottom Friction, Ocean Tide, Tidal Prism, Astronomical Tide
“Spooky action at a distance” aboard the ISS
Albert Einstein famously described quantum entanglement as "spooky action at a distance"; however, up until now experiments that examine this peculiar aspect of physics have been limited to relatively small distances on Earth.
In a new study published today, 9 April, in the Institute of Physics and German Physical Society’s New Journal of Physics, researchers have proposed using the International Space Station (ISS) to test the limits of this “spooky action” and potentially help to develop the first global quantum communication network.
Their plans include a so-called Bell experiment which tests the theoretical contradiction between the predictions of quantum mechanics and classical physics, and a quantum key distribution experiment which will use the ISS as a relay point to send a secret encryption key across much larger distances than have already been achieved using optical fibres on Earth.
Their calculations show that “major experimental goals” could already be achieved with only a few overhead passes of the ISS, with each of the experiments lasting less than 70 seconds on each pass.
“During a few months a year, the ISS passes five to six times in a row in the correct orientation for us to do our experiments. We envision setting up the experiment for a whole week and therefore having more than enough links to the ISS available,” said co-author of the study Professor Rupert Ursin from the Austrian Academy of Sciences.
Furthermore, the only equipment needed aboard the ISS would be a photon detection module which could be sent to the ISS and attached to an already existing motorised commercial photographer’s lens (Nikon 400 mm), which sits, always facing the ground, in a 70 cm window in the Cupola Module.
For the Bell experiment, a pair of entangled photons would be generated on the ground; one would be sent from the ground station to the modified camera aboard the ISS, while the other would be measured locally on the ground for later comparison.
Entangled photons have an intimate connection with each other, even when separated over large distances, which defies the laws of classical physics. A measurement on one of the entangled photons in a pair will determine the outcome of the same measurement on the second photon, no matter how far apart they are.
“According to quantum physics, entanglement is independent of distance. Our proposed Bell-type experiment will show that particles are entangled, over large distances — around 500 km — for the very first time in an experiment,” continued Professor Ursin.
“Our experiments will also enable us to test potential effects gravity may have on quantum entanglement.”
The researchers also propose a quantum key distribution experiment, where a secret cryptographic key is generated using a stream of photons and shared between two parties safe in the knowledge that if an eavesdropper intercepts it, this would be noticed.
Up until now, the furthest a secret key has been sent is just a few hundred kilometres, which would realistically enable communication between just one or two cities.
Research teams from around the world are looking to build quantum satellites that will act as a relay between the two parties, significantly increasing the distance that a secret key could be passed; however, the new research shows that this may be possible by implementing an optical uplink towards the ISS and making a very minor alteration to the camera already on-board. | <urn:uuid:dcbc5179-1768-40c5-b69a-c4fd0f7a7115> | 3.5625 | 692 | News Article | Science & Tech. | 21.5621 | 95,588,837 |
Licensed under a Creative Commons Attribution ShareAlike 3.0 License
TEI By Example offers a series of freely available online tutorials walking individuals through the different stages in marking up a document in TEI (Text Encoding Initiative). Besides a general introduction to text encoding, step-by-step tutorial modules provide example-based introductions to eight different aspects of electronic text markup for the humanities. Each tutorial module is accompanied with a dedicated examples section, illustrating actual TEI encoding practise with real-life examples. The theory of the tutorial modules can be tested in interactive tests and exercises.
XML is a metalanguage by which one can create separate markup languages for separate purposes. It is platform-, software-, and system-independent and no one 'owns' XML, although specific XML markup languages can be owned by its creators. Generally speaking, XML empowers the content provider and facilitates data integration, exchange, maintenance, and extraction. XML is currently the de facto standard on the World Wide Web partly because HTML (Hypertext Markup Language) was rephrased as an XML encoding language. XML is edited and managed by the W3C which also published the specification as a recommendation in 1998.
The big selling point of XML is that it is text based. This means that each XML encoding language is entirely built up in ASCII (American Standard Code for Information Interchange), or plain text, and can be created and edited using a simple text-editor like Notepad or its equivalents on other platforms. However, when you start working with XML, you will soon find that it is better to edit XML documents using a professional XML editor. While plain text-editors don't know that you're writing TEI, XML editors will help you to write error-free XML documents, validate your XML against a DTD or a schema, force you to stick to a valid XML structure, and enable you to perform transformations.
Since ASCII only provides for characters commonly found in the English language, different character encoding systems have been designed such as Isolat-1 (ISO-8859-1) for Western languages and Unicode (UTF-8 and UTF-16). By using these character encoding systems, non-ASCII characters such as French é, à, ç, Norwegian æ ø å, or Hebrew ק can be used in XML documents. These systems rely on ASCII notation for the expression of these non-ASCII characters. The French à, for instance, is represented by the string 'agrave' in Isolat-1 and by the number '00E0' in Unicode.
Any XML encoding language consists of five components.
For example, a simple two-paragraph document could be encoded as follows in XML:
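The example itself did not survive extraction; a minimal sketch, assuming generic element names (document, p) rather than any particular encoding language, might read:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- the root element wraps the whole document -->
<document>
  <p>This is the first paragraph.</p>
  <p>This is the second paragraph.</p>
</document>
```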
An XML document is introduced by the XML declaration.
The question mark (?) in the XML declaration signals that this is a processing instruction. The following bits state that what follows is XML which complies with version 1.0 of the recommendation and that the character encoding used is UTF-8 (Unicode).
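Written out, the declaration being described is:

```xml
<?xml version="1.0" encoding="UTF-8"?>
```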
The two-paragraph document above is an example of an XML document, representing both information and meta-information. Information (plain text) is contained in XML elements, delimited by start tags (e.g. <p>) and end tags (e.g. </p>). Meta-information can also be recorded in comments, delimited by start markers (<!--) and end markers (-->).
Entity references are predefined strings of data that a parser must resolve before parsing the XML document.
An entity reference starts with an ampersand (&) and closes with a semicolon (;). The entity name is the string between these two symbols. For instance, the entity reference for the less than sign (<) is &lt; and the entity reference for the ampersand (&) is &amp;.
Not all computers support the Unicode encoding scheme XML works with. Portability of individual characters from the Unicode system, however, is supported by character references that refer to their numeric or hexadecimal notation. For example, the character ø is represented within an XML document as the Unicode character with hexadecimal value 00F8 and decimal value 0248. For exporting an XML document containing this character, it may be represented by the character (or entity) references &#x00F8; or &#0248; respectively, with the 'x' indicating that what follows is a hexadecimal value. References of this type do not need to be predefined, since the underlying character encoding for XML is always the same.
For legibility purposes, however, it is also possible to refer to this character by use of a mnemonic name, such as &oslash;, provided that each such name is mapped to the required Unicode value by means of an ENTITY declaration.
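Such a declaration (reconstructed here to match the description that follows in the text) looks like:

```xml
<!ENTITY oslash "&#x00F8;">
```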
The ENTITY declaration uses a non-XML syntax inherited from SGML and starts with an opening delimiter (<) followed by an exclamation mark (!) signalling that this is a declaration. The keyword ENTITY states that an entity is being declared here. What follows next is the entity name - here the mnemonic name oslash - for which a declaration is given, and then the declaration itself inside quotation marks. In this example, it is the hexadecimal value of the character.
The same character can also be declared in the following ways.
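The source's own variants were lost in extraction; as an illustrative sketch, the same character can be declared via its decimal value or as a literal Unicode character:

```xml
<!ENTITY oslash "&#0248;"> <!-- decimal character reference -->
<!ENTITY oslash "ø">       <!-- literal Unicode character -->
```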
Character entities must also be used in XML to escape the less than sign (<) and the ampersand (&), which are illegal in element content because the XML parser could mistake them for markup. For example, the strings "Gimme pepper & salt!" and "A < B" must be encoded as "Gimme pepper &amp; salt!" and "A &lt; B".
Entities are not only capable of referring to character declarations but can also refer to strings of text of unlimited extent. This way repetitive keying of repeated information can be avoided (aka string substitution), or standard expressions or formulae can be kept up to date. The first is useful, for instance, for the expansion of &TBE; to "TEI by Example" before the text is validated. The corresponding ENTITY declaration is as follows:
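The declaration implied here, expanding &TBE; to the string named in the text, is:

```xml
<!ENTITY TBE "TEI by Example">
```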
The second is used in contracts, books of laws etc. in which updating would otherwise mean the complete rekeying of the same (extensive) string of text. For example, the expression "This contract is concluded between &party1; and &party2; for the duration of 10 years starting from &date;" in legal texts can be updated simply by changing the value of the ENTITY declarations:
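Using the values given in the expanded contract sentence in this section, the declarations would read:

```xml
<!ENTITY party1 "Rev Knyff">
<!ENTITY party2 "Lt Rosen">
<!ENTITY date "2007-01-01">
```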
The substitution of the entities by their values in the given example results in the following expression "This contract is concluded between Rev Knyff and Lt Rosen for the duration of 10 years starting from 2007-01-01"
ENTITY declarations are placed inside a DOCTYPE declaration which follows the XML declaration at the beginning of the XML document.
The DOCTYPE declaration starts with the opening delimiter <!, which is followed by the keyword DOCTYPE. The next part is the name of the root element of the document; in the case of a TEI document, this will be TEI. The entity declarations which must be interpreted by the XML processor are put inside square brackets. An XML parser encountering this DOCTYPE declaration will expand the entities with the values given in the ENTITY declarations before the document itself is validated.
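Assembled from the pieces described above, a minimal sketch looks like this (the oslash entity and the document content are illustrative choices, not from the source):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE TEI [
  <!ENTITY oslash "&#x00F8;">
]>
<TEI>The Danish letter &oslash; appears here.</TEI>
```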
All text in an XML document will normally be parsed by a parser. When an XML element is parsed, the text between the XML tags is also parsed. The parser does that because XML elements can nest as in the following example:
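An illustrative nested fragment (the element names are assumptions, not taken from the source):

```xml
<p>A sentence with one <hi>highlighted</hi> word.</p>
```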
When it encounters such a string, the XML parser will break it up into an outer element containing both character data and a nested child element.
An XML document often contains data which need not be parsed by an XML parser. For instance, characters like < and & are illegal in XML elements because the parser will interpret them as the beginning of new elements or the beginning of an entity reference, which will result in an error message. Therefore, these characters can be escaped by the use of the entity references &lt; and &amp;. When, however, a passage containing many such characters is included in an XML document, it should not be parsed by the XML parser. We can avoid this by treating it as unparsed character data or CDATA in the document:
A CDATA section starts with <![CDATA[ and ends with ]]>. Everything inside a CDATA section is ignored by the parser.
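A sketch of a CDATA section wrapping characters that would otherwise need escaping (the surrounding code element is an illustrative assumption):

```xml
<code><![CDATA[ if (a < b && b < c) { return a; } ]]></code>
```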
Depending on the nature of your XML documents and what you want to use them for, you will need different tools, ranging from freely available open source tools to highly priced industrial software. In principle, the simplest plain text editor suffices to author or edit XML. In order to validate or transform XML, additional tools will be needed which often come included in dedicated XML editors: validating parser, XSLT processor, tree-structure viewer etc. For publishing purposes, XML documents may be transformed to other XML formats, HTML or PDF - to name just a few of the possibilities - using XSLT and XSLFO scripts which are processed by an off the shelf or custom made XSL processor. These published documents can be viewed in generic web browsers or PDF viewers where it considers transformations to HTML or PDF. XML documents can further be indexed, excerpted, questioned and analysed with tools specifically designed for the job. | <urn:uuid:e7ab6d3e-dd09-4e64-bdc3-0b3f472180a7> | 3.359375 | 1,851 | Documentation | Software Dev. | 40.331919 | 95,588,838 |
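As a minimal illustration of the parsing step (not one of the dedicated XML tools mentioned above), Python's standard library can parse a small document and walk its tree; the document and element names here are illustrative:

```python
import xml.etree.ElementTree as ET

# A small well-formed document; the ampersand is escaped with an entity reference.
doc = """<?xml version="1.0" encoding="UTF-8"?>
<document>
  <p>First paragraph with an escaped ampersand: &amp;.</p>
  <p>Second paragraph.</p>
</document>"""

root = ET.fromstring(doc)                 # parse the string into an element tree
paragraphs = [p.text for p in root.findall("p")]
print(len(paragraphs))                    # 2
print(paragraphs[0])                      # &amp; is resolved to & by the parser
```

Note that the parser resolves the predefined entity reference before handing the text content to the application, exactly as described for &lt; and &amp; above.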
The river quality model QUASAR (Quality Simulation Along Rivers) has been applied to the River Don system in North East Scotland. This mass balance model for nitrogen has been used to estimate the impacts of land use change and climate change on river water quality. Changing agricultural patterns have been simulated and indicate increased nitrogen levels in the river as catchment land use changes. Climate change will alter flow regimes, temperature and nitrogen mineralization patterns. Simulation runs for a range of scenarios illustrate the impacts of climate change with the most significant effect being mineralization of nitrogen in the soil feeding through into the river system. These higher nitrogen levels are reduced slightly by the increased temperatures and decreased summer flows, both of which enhance denitrification processes. © 1995.
I read somewhere that gravity is not a force. Is this true? What does it mean?
According to the theory of General Relativity, a body in free fall is equivalent to an identical body out in free space traveling at a constant velocity.
What a big mass (like a planet) does is warp or curve space in such a way that objects flying about freely in this curved space appear to us in our Euclidian space to be following a curve, so it just appears that a force is acting on the body and causing its path to bend toward the center of that big mass.
Where did you read that?
Actually, in a certain sense, it isn't.
For all practical purposes, we can say that Gravity 'produces' a force - because we can all feel it. Modern Physics takes matters further than that, of course, but the same 'force' effects are observable even when they are 'explained' slightly differently.
There is a class of forces in physics known as fictitious forces or inertial forces. Whether they act as real forces or not depends on your choice of coordinate system. If you choose the surface of the Earth as your reference point, for example, you have to account for the force of gravity for all of your equations to make sense. But if you happen to be inside an elevator during a free fall, you will not experience gravity. For all intents and purposes, gravity just goes away.
This is very similar to centrifugal force. It is also a fictitious force. If you are observing a rotating space station from an inertial frame, you can describe everything that happens in terms of forces of interaction between various objects. But if you are standing inside the station, measuring all positions relative to the station, and you don't take rotation into account, it appears that there is another force, very similar to gravity, that keeps pushing you away from center of the station.
In general relativity, they don't count gravity as a force, but its effects are taken into account in the equations of the spacetime curvature. In the limit of low speed and weak curvature, we can say that gravity acts approximately like a force (which is the physics that was around before Einstein's relativity).
I think physicists don't get too worked up about what exactly constitutes a force. Depending on the context, they will call gravity a fundamental force or a fictitious force. But it's not something to argue over. They'll just use the term that is more convenient at the time. Being too concerned with the terminology is just stupid, since physicists will agree on what gravity _does_.
So if one textbook says gravity is a force and another one says it's a figment of the coordinate system, it doesn't mean that one textbook is wrong and one is right. We are all just working with human-defined models and it's the observable phenomena that really count.
In GR, gravity is not a force, but in everything else, gravity is a force.
No, you can't. Newtonian POV: We can feel every real force but gravity. There is no way to directly sense the gravitational force. However, note that we don't "feel" fictitious forces.
General relativity POV: We feel every real force, period. Gravitation falls into the realm of fictitious forces in general relativity. That gravitation can't be sensed -- no surprise. We can't sense any of the fictitious forces.
Put an accelerometer at rest on top of a table. That accelerometer will register an acceleration of 1g pointing upward, yet it's obviously not accelerating. It's standing still. That 1g upward acceleration that it is registering is the normal force exerted by the table on the accelerometer. Now push the accelerometer off the table. In free fall, the accelerometer registers an acceleration of near zero. Yet it obviously *is* accelerating. The gravitational force acting on the accelerometer hasn't changed. What has changed is that the normal force is now absent. The accelerometer doesn't measure gravity because there is no way to measure gravity.
The same goes for what you feel. You aren't feeling gravity. What you are feeling is the ground pushing up on your feet, your skeleton transmitting this upward force to the less rigid parts of your body, and eventually, your inner ear. Your inner ear is equipped with a natural accelerometer. It senses that upward acceleration.
Now imagine taking one of those zero g amusement park rides, or being a passenger on NASA's Vomit Comet. During those intervals of zero g, your stomach and your inner ear rebel at the loss of this upward acceleration that is normally sensed when you are standing on the ground. You aren't feeling those forces during those intervals of zero g.
Exactly. Worrying about what different theories call things is just a semantics game. What really matters is how well physics predicts experimental outcomes. In those realms where Newtonian mechanics is (approximately) valid, Newtonian mechanics and relativity will agree on experimental outcomes, sensor readings, etc.
This is the sort of thing laymen hear on Carl Sagan programs. Actually, I suspect that most professional physicists would get it wrong.
If you have a box falling in the Earth's gravitational field and you shine a light horizontally, the light will be deflected downwards with an acceleration of around 19.6 meters per second squared. the doubling is because in Relativity there are gravitational forces that go as the square of the velocity. So the box is falling at around 9.8 meters per secomnd squared and the light is falling at double thatr--it therefore is possible to distinguish between the box in free fall vs a box at rest without gravity.
The is clearly incorrect. If you are standing here on Earth you are experiencing gravity, and as you yourself said, gravity invoves spacetime not being Euclidian. So we are not in "our Euclidian space".
I agree with that - the "force of gravity" acts on a spring scale when you stand on it. The "different explanation" is to say that that force is not due to gravitation but due to acceleration. And as others said, whatever you call it or how you interpret it, everyone agrees that the spring is compressed by a real force.
The situation is a lot more hazy when an object falls under the influence of a gravitational field; it then depends on one's exact definition of "force" (as well as on one's definition of "field"!).
Yes, the spring is being compressed by a force. GR just says that force is that of the earth pushing upwards on the scale. There is a reason that a stationary accelerometer reads as though the acceleration it experiences is up, not down. This is not sleight-of-hand with jargon to confuse people. In a very material, concrete way, objects cannot and do not know about how gravity affects their motions.
"I read somewhere that gravity is not a force. Is this true? What does it mean?"
Simple answer if gravity was not a force we wouldn't be here! Just because we haven't found graviton particle it dosn't mean gravity dosen't exit. 8)
The 'purist' view is strongly against allowing gravity to be a force. Fair enough, because a gravitational field 'causes' a force to act on a mass - rather than actually 'being' a force. But why is the same purist view not held so strongly in the case of EM fields? I appreciate that if you were in a charged ship with no windows then you could detect an acceleration due to the presence of an EM field (due to a measurable acceleration). This is different from the gravitational case, of course (an accelerometer wouldn't react because the Equivalence Princple is at work), but in both cases, the acceleration is actually measurable as soon as you can see (or reference) the outside world.
Why restrict the measuring conditions in order to 'classify' the gravitational effect as being different from the EM effect?
Weight is a readily perceived and measured force - given the appropriate equipment - so why are we not 'allowed' to treat gravity as a force? It is such a tangible thing that it seems to me that the purist tail is wagging the dog of experience.
I'm very much a 'purist' and I disagree with what you call the 'purist' view for reasons similar to the ones you mention.
It would be useful if opinionated people present their definitions of "force" and "field", and then explain how they reason that those definitions logically lead to their expressed opinions.
OK, my "opinion" is that a Force will produce an acceleration or change of shape. (That's Newtonian - based). How one measures these changes is, to my mind, irrelevant.
For a definition of Field in this context, I'd say that the presence of a Field will produce a Force on an object with a particular property - e.g. mass / charge / current. So F=mG and F=qE for instance. So the Field is force per unit of some property.
So we could continue the discussion with those definitions - or with others, perhaps(?).
Imagine two travellers standing at different points along the equator. Now both of them start walking north, towards the north pole. What happens ? The further north they walk, the closer they get to one another ! They approach each other not because of any force acting between them, but because of the geometry of earth's surface. Gravity works the same - as time passes, two bits of matter will gravitate towards each other, not because of any force, but because of the geometry of space-time.
If these two travellers stop walking, do they experience anything? Is there any Force acting on them to make them continue getting closer to the Pole? I do understand this is just an analogy but I feel it is too far away from the point about the 'reality' or otherwise of a gravitational force. You would need to say what the equivalent of a force is for these two travellers. It certainly couldn't be the same as the 'force' that pulls two masses together because the analogy is not 1:1.
I can see that GR tries to explain the origin of a force like the one that the proximity of two masses causes but, if the effect is the same as that which occurs between two charges, then why is it not allowed to be called a force? What distinguishes the one 'force' from the other force apart from the difficulty in detecting it?
You are right, it isn't at 1:1 analogy. However ( and I failed to mentioned that ), once one understands that the northward direction corresponds to time, and the distance between the travellers corresponds to spatial separation, then the situation is clear : there is a tendency for two bits of matter to approach one another over time, purely based on the geometry of space-time. This is, I believe, quantified via the Raychaudhuri equation.
The question as to whether you need a force for them to continue going north is rather meaningless, since one cannot stop moving through time.
Thanks for the precision!
If one uses either of those definitions then gravity is a force in GR that appears whenever a gravitational field is assumed. Einstein called in his 1916 GR paper the gravitational field a "field of force [..] which possesses the remarkable property of imparting the same acceleration to all bodies".
In a reference frame that is attached to the surface of the earth, your acceleration when you stand on a scale is zero so that the force that you feel is fully ascribed to gravitational field (and not acceleration)*.
Conversely, in a "free falling" GR frame the gravitational field has vanished and that same force is ascribed to acceleration instead.
*From the more standard ECI frame POV, it is gravity due to the mass of the Earth, which is partly reduced due to the acceleration of the rotating surface of the Earth.
I found this quote from these forums illuminating:
Yet Einstein's GR does not depend on traditional 'fields'....such fields are a different model than his final geometric interpretation of spacetime curvature.
I cited him in the context of his explanation of fields in GR. Do you know by chance how with that geometric interpretation of GR the words "force" and "field" were redefined, so that those words have different meanings than the same words of 1916?
It's precisely because we identify the four-force as something that all freely-falling obsrevers agree on. An object that is freely falling experiences zero four-force, and hence, because the four-force is more useful than ordinary force, we tend to use that versus ordinary force.
We dismiss the idea of gravitational force because different freely-falling observers may disagree whether there was a gravitational force at all--not just its components, but whether it has any magnitude. The same cannot be said of the electromagnetic four-force, which all freely-falling observers agree on.
Quote from whom, I'm not sure but it doesn't matter.
That bit has me flummoxed. You can explain the tides in terms of motion in a circle, with all the 'forces' we're familiar with. How can tidal forces not be taken care of in GR? They're only there because of what, presumably, GR predicts.
Separate names with a comma. | <urn:uuid:c3a6389c-2bde-42d1-82e1-adb84af347ff> | 3.4375 | 2,767 | Comment Section | Science & Tech. | 51.942347 | 95,588,845 |
A View from Emerging Technology from the arXiv
How to See Diffraction Patterns with Star Crusts
If neutron stars have thin crystalline crusts, then we ought to see diffraction patterns in x-rays reflected from their surfaces.
Nobody knows what the surface of neutron stars are made of but there’s no shortage of suggestions. We know that iron must play a role because we can see its characteristics absorption lines in the spectrum from these exotic objects.
But we can’t tell whether the iron forms a gaseous atmosphere, perhaps a meter or so thick above an ultra-hard crust, or whether the iron itself is solid.
Now Felipe Llanes-Estrada and Gaspar Moreno Navarro at the Universidad Complutense in Madrid Spain, say they know how to tell the difference. Here’s how:
If the iron is solid, then it ought to form an extraordinary crystal, perhaps a few dozen atomic layers thick, almost perfectly smooth and enveloping the entire star.
One way to examine crystalline solids is to use x-ray crystallography. The new idea from Llanes-Estrada and Navarro is to look for pairs of neutron stars in which one is an x-ray pulsar and the other is dead with an iron crust. Then x-rays from the pulsar should be diffracted from the surface of the dead star and detectable on Earth. The signature would consist of a main pulse from the pulsar followed by the reflection at wavelengths related by Bragg’s law of diffraction to the main pulse.
That’s a neat and potentially clean way to study the surface of neutron stars: if astronomers can find the right kind of pairs.
If they do, it’ll be a breath of fresh air for astrophysicists who have many ideas about the structure and behaviour of neutron stars but few ways to prune the dead wood from the theoretical undergrowth they’ve created.
Something for the team behind the Chandra X-ray telescope to get to work on, I’d say.
Ref: arxiv.org/abs/0905.4837: Bragg diffraction and the Iron crust of Neutron Stars
Couldn't make it to EmTech Next to meet experts in AI, Robotics and the Economy?Go behind the scenes and check out our video | <urn:uuid:8921ea18-9bff-4e0d-986e-37f54f08df0c> | 3.5 | 493 | Truncated | Science & Tech. | 55.233651 | 95,588,847 |
|Scientific Name:||Loxodonta africana (Blumenbach, 1797)|
Elephas africana Blumenbach, 1797
Loxodonta cyclotis Matschie, 1900
|Taxonomic Notes:||Preliminary genetic evidence suggests that there may be at least two species of African elephants, namely the Savanna Elephant (Loxodonta africana) and the Forest Elephant (Loxodonta cyclotis). A third species, the West African Elephant, has also been postulated. The African Elephant Specialist Group believes that more extensive research is required to support the proposed re-classification. Premature allocation into more than one species may leave hybrids in an uncertain conservation status (IUCN SSC African Elephant Specialist Group 2003). For this reason, this assessment was conducted for the single species as currently described, encompassing all populations.|
|Red List Category & Criteria:||Vulnerable A2a ver 3.1|
|Reviewer(s):||Balfour, D., Craig, C., Dublin, H.T. & Thouless, C.|
Background Considerations and Choice of Criteria
The species is the largest terrestrial animal and has been the subject of considerable research, but continent-wide distribution and density estimates are difficult to obtain for any one time period. To a large extent this is due to the enormous range covered by the species (and thus the cost of estimating its numbers) as well as to the wide variety of habitats it occupies (often woodland and forest where visibility is poor from the ground as well as from the air; see Habitats list). These difficulties, coupled with the differential influence that various historical factors have played in different parts of the continent, result in a continental picture of the status of the African Elephant that varies considerably – qualitatively and quantitatively – across its range.
Although our knowledge of the status of African Elephants across their range has been progressively improving since the mid-1990s, when considerable resources began to be channelled into compiling and producing regular updates of the continental status of elephants with a standardized measure of certainty (Said et al. 1995; Barnes et al. 1999; Blanc et al. 2003; Blanc et al. 2007), large gaps still remain.
In investigating the Red List Criteria (Version 3.1) against these realities, it became clear to the group of assessors involved in the 2004 assessment, that the variability in population trends and levels of uncertainty would preclude a full quantitative Red List assessment, such as would be conducted under criterion E. It was therefore agreed that a compromise approximation would have to be made, and that the African Elephant Specialist Group would be best placed to undertake this task. In order to facilitate the process, extensive use was made of the Guidelines for Application of IUCN Red List Criteria at Regional Levels (IUCN 2003).
The criterion used for the categorization was criterion A. Criteria B, C and D are not applicable as the species currently occupies more that 20,000 km² and there are more than 10,000 mature individuals. No quantitative analysis was conducted and therefore criterion E does not apply. Substantial resources would be required to undertake a consensus-driven modelling approach, which would inevitably be based on a great deal of uncertainty with regard to some of the key parameters, including estimates of both human and elephant population size, as well as the scale and extent of threats to the species and its habitats. While ivory export records and other indirect data could be used to derive these models, they would still encounter the many uncertainties inherent in the reconstruction of events covering the better part of a century.
Subcriterion A2a was used because some of the major causes for decline, such as habitat loss due to human population expansion, have not ceased and may not be reversible throughout the species' range. While the recent data used in the assessment are based on direct observation, the population size reduction over three generations is only inferred (see below).
A generation time of 25 years, calculated as the average age of reproductive females, was established using data from many culling exercises in Kruger National Park, South Africa (I. Whyte pers. comm.).
There are no credible estimates for a continental population prior to the late 1970s. Thus for the continental (global) population, an extrapolation back to the beginning of three generations is plagued with high levels of uncertainty. Clearly, forward extrapolation to the mid-21st Century would also be troubled by uncertainty, not only for the reasons cited above, but also because of the variety of causes for decline and the nature of the current and likely future threats - mainly habitat loss and illegal hunting for both meat and ivory - which are in themselves variable in intensity across the continent.
One of the key components of the methodology adopted at the AfESG’s 2003 Etosha meeting was the assumption that continental elephant populations increased during the first half of the 20th century (as a result of the decline of the ivory trade from the outbreak of WWI, improved protection measures, and an increase in preferred secondary forest habitat in Central Africa), reaching a peak in the late 1960s and declining from then until the late 20th century.
In addition, African Elephant population trends in the course of the 20th century are believed to have differed considerably across the different African sub-regions (see Figure 1 in the Supplementary Material). In Eastern Africa, for instance, there is a general consensus that there was a peak (regional population maximum) around the late 1960s and early 1970s, followed by a decline in the 1980s and subsequent recovery in recent years (Blanc et al. 2005, 2007). In Southern Africa, which now harbours the largest known populations on the continent, elephant numbers are believed to have been at their lowest around the turn of the 20th century, and to have been increasing steadily ever since. The magnitude of the decline in Eastern Africa has in all likelihood been offset by the magnitude of the increase in Southern Africa. In West Africa, major declines probably occurred well before the turn of the 20th century and the population has remained at low levels ever since. There is insufficient information on sub-regional trends in Central Africa prior to 1977, but elephant populations are believed to have declined since that time. This is important as Central Africa accounts for a large proportion of the estimated continental range, but our knowledge of its current population size is the poorest.
Taking these problems into account, the consensus among contributors to the 2004 assessment was that it would be an appropriate and acceptable compromise, more likely to err on the conservative side relative to the final listing, to assume the continental population of three generations back (1927) to be equal to that of the first continental estimate in 1977. As the data used for the 2004 assessment were from 2002 (see section on 'Further Details on Data Used' in the Supplementary Material), it was thus assumed that the population in 1927 was approximately equal to the population estimate for 1977 derived by the contributors to the 2004 assessment.
For the present assessment, which uses 2006 data for the current generation, a comparison had to be made between 2006 and 1931. No consensus population estimate for 1931 is available for this assessment. Had the population remained constant or declined between 1927 and 1931, a comparison with the 2006 data used in this assessment would have resulted in a downlisting of the species to Near Threatened (NT). As mentioned above, however, according to the methodology and assumptions adopted at the 2003 AfESG meeting in Etosha, elephant populations were assumed to be increasing through the first part of the 20th Century. The extent to which the continental population would have increased is unknown. However, calculations reveal that, given the assumptions above, an annual rate of increase of greater than 1.53% would result in the species remaining in the Vulnerable category, and a rate of 1.53% or less would result in the species being re-categorized as Near Threatened. Under the conditions likely prevailing at the time the African Elephant Red List Authority believes that the likely annual rate of increase could easily have exceeded 1.53%. The conservative decision, again relative to the final global listing, is thus to accept a growth rate of greater than 1.53% per annum and to retain the African Elephant in the Vulnerable category in this assessment.
Changes to Status
The African Elephant was listed as Vulnerable (VU A2a) in the 2004 IUCN Red List, under the same IUCN Categories and Criteria used in this assessment (Version 3.1).
Prior to the 2004 assessment, the species was listed as Endangered (EN A1b) under the IUCN Categories and Criteria Version 2.3 (1994), in an assessment conducted in 1996 by the IUCN SSC African Elephant Specialist Group.
The status of African Elephants varies considerably across the species' range. These differences broadly follow regional boundaries, and are partly a result of the different historical trends. To better reflect this variation in status, it was decided to include in this assessment regional-level listings for the four African regions in which elephants occur (see Figure 2 in the Supplementary Material). The methodology and criteria used in these regional assessments is identical to that used for the global assessment, but employing only the relevant subsets of data. An exception to this rule is West Africa, where a more precautionary listing was obtained through the application of a different Red List Criterion. The results of the regional assessments are presented in Table 1 of the Supplementary Material.
|Previously published Red List assessments:|
|Range Description:||African Elephants currently occur in 37 countries in sub-Saharan Africa (see accompanying map in Supplementary Material, sourced from Blanc et al. 2007). They are known to have become nationally extinct in Burundi in the 1970s, in The Gambia in 1913, in Mauritania in the 1980s, and in Swaziland in 1920, where they were reintroduced in the 1980s and 1990s.|
Although large tracts of continuous elephant range remain in parts of Central, Eastern and Southern Africa, elephant distribution is becoming increasingly fragmented across the continent.
The quality of knowledge available on elephant distribution varies considerably across the species' range. While distribution patterns are well understood in most of Eastern, Southern and West Africa, there is little reliable information on elephant distribution for much of Central Africa.
Native:Angola; Benin; Botswana; Burkina Faso; Cameroon; Central African Republic; Chad; Congo; Congo, The Democratic Republic of the; Côte d'Ivoire; Equatorial Guinea; Eritrea; Ethiopia; Gabon; Ghana; Guinea; Guinea-Bissau; Kenya; Liberia; Malawi; Mali; Mozambique; Namibia; Niger; Nigeria; Rwanda; Senegal; Sierra Leone; Somalia; South Africa; South Sudan; Tanzania, United Republic of; Togo; Uganda; Zambia; Zimbabwe
Regionally extinct:Burundi; Gambia; Mauritania
|Range Map:||Click here to open the map viewer and explore range.|
|Population:||Although elephant populations may at present be declining in parts of their range, major populations in Eastern and Southern Africa, accounting for over two thirds of all known elephants on the continent, have been surveyed, and are currently increasing at an average annual rate of 4.0% per annum (Blanc et al. 2005, 2007). As a result, more than 15,000 elephants are estimated to have been recruited into the population in 2006 and, if current rates of increase continue, the number of elephants born in these populations between 2005 and 2010 will be larger than the currently estimated total number of elephants in Central and West Africa combined. In other words, the magnitude of ongoing increases in Southern and Eastern Africa are likely to outweigh the magnitude of any likely declines in the other two regions.|
|Current Population Trend:||Increasing|
|Habitat and Ecology:||The African Elephant is very catholic in its range, and tends to move between a variety of habitats. It is found in dense forest, open and closed savanna, grassland and, at considerably lower densities, in the arid deserts of Namibia and Mali. They are also found over wide altitudinal and latitudinal ranges – from mountain slopes to oceanic beaches, and from the northern tropics to the southern temperate zone (approximately between 16.5° North and 34° South). See also the list of habitats.|
|Movement patterns:||Full Migrant|
|Major Threat(s):||Poaching for ivory and meat has traditionally been the major cause of the species' decline. Although illegal hunting remains a significant factor in some areas, particularly in Central Africa, currently the most important perceived threat is the loss and fragmentation of habitat caused by ongoing human population expansion and rapid land conversion. A specific manifestation of this trend is the reported increase in human-elephant conflict, which further aggravates the threat to elephant populations.|
The African Elephant has been listed in CITES Appendix I since 1989, but the populations of the following Range States have since been transferred back to Appendix II with specific annotations: Botswana (1997), Namibia (1997), South Africa (2000) and Zimbabwe (1997). These annotations have been recently replaced by a single annotation for all four countries, with certain specific sub-annotations for the populations of Namibia and Zimbabwe.
The African Elephant is subject to various degrees of legal protection in all Range States. Although up to 70% of the species range is believed to lie in unprotected land, most large populations occur within protected areas.
Conservation measures usually include habitat management and protection through law enforcement. Successful management at the site level can result in the build-up of high elephant densities. This is often perceived as a threat to their local habitats, as well as to other species and to elephant populations themselves. Management interventions to reduce elephant numbers and local densities have been limited and most recently been undertaken through contraception or translocation. Large-scale culling has not been performed as a population management option since Zimbabwe discontinued the practice in 1988 and South Africa did likewise in 1994.
The sport hunting of elephants is permitted under the legislation of a number of Range States, and the following countries currently (2007) have CITES export quotas for elephant trophies: Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia and Zimbabwe.
Some community-based conservation programmes in which revenue from the sport hunting of elephants reverts directly to local communities have proved effective in increasing tolerance to elephants, and thus indirectly in reducing levels of human-elephant conflict.
An increasing number of transboundary elephant populations are co-managed through the collaboration of relevant neighbouring Range States. Large-scale conservation interventions are also planned through the development of conservation and management strategies at the national and regional level.
|Citation:||Blanc, J. 2008. Loxodonta africana. The IUCN Red List of Threatened Species 2008: e.T12392A3339343.Downloaded on 21 July 2018.|
|Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please provide us with feedback so that we can correct or extend the information provided| | <urn:uuid:721f29e6-6533-4e05-9f7e-271d46c3a76d> | 3.6875 | 3,160 | Knowledge Article | Science & Tech. | 26.359259 | 95,588,869 |
Pursuing renewable energy will make a big dent in reducing carbon emissions, but one of the best solutions comes naturally. According to a study led by The Nature Conservancy, forest restoration projects and improving how we farm could lead up to a 30 percent overall decrease in carbon emissions. Should we aim toward these goals, there’s a much better chance of containing global warming.
Natural solutions to eliminate carbon emissions have a similar impact to removing cars off the road. According to the study released in October, the combination of reforestation and limiting human impact in cutting trees down would be the equivalent of removing 1.27 billion cars annually. Adding the next-biggest solution of maintaining these forests better, that would eliminate 7 billion metric tons of CO2 emissions.
That is huge potential, so if we are serious about climate change, then we are going to have to get serious about investing in nature, as well as in clean energy and clean transport,” Mark Tercek, CEO of The Nature Conservancy, said in a press release. “We are going to have to increase food and timber production to meet the demand of a growing population, but we know we must do so in a way that addresses climate change."
Indeed, we simply can’t avoid chopping trees down. It’s still necessary to build homes, produce furniture, and trees are responsible for the creation of thousands of items. Kitchen utensils, park benches, crayons, and even oil spill controlling agents are made from wood, which are all outside of the paper industry we commonly relate to with trees.
A way we can control deforestation is looking toward alternative products. Similar materials that could replace wood altogether include hemp, bamboo, soy, and composite lumber. Usually, these alternative options will last longer and requires less maintenance, but they are more expensive.
Another way we can limit global warming is better agricultural practices with chemical fertilizers. Not only would this improve the growth of our crops, but it would remove nitrous oxide emissions. According to the study, these are 300 times more potent than CO2 emissions, and enforcing these practices would be equivalent to removing 522 million cars off the road.
Naturally improving global warming is also much feasible for third-world countries. Dr. Ibrahim Mayaki, who is the former Prime Minister of Niger and current CEO of the New Partnership for Africa’s Development, points out that developed countries are focused on installing renewable energy while developing countries are adapting how they farm.
“This new study underlines the importance of nature, and especially trees and soils, as support for carbon sequestration through the cycle of plants based on photosynthesis. Promoting carbon sequestration in soils, with adapted agricultural and forestry practices, could lead to win-win solutions on mitigation, adaptation and increase of food security.”
Researchers from marine life advocates Oceana have discovered a surprising new world under the sea near Sicily.
Sweden's aggressive target of generating over 40 terawatt-hours of renewable energy by 2030 could be reached nearly a decade early. A massive amount of wind power projects could hit a snag in market value with subsidies, but SWEA could push to close those up by the end of the year.
Starbucks is ramping up its sustainability efforts with a plan to eradicate the use of plastic straws in its assembly line. | <urn:uuid:165580d8-dc7d-4c99-81c5-4a8d2a8457ef> | 3.421875 | 687 | News Article | Science & Tech. | 33.819351 | 95,588,920 |
February 27, 2018. This post shows how a simple UDP server and client socket application can be implemented in Java using the java.net package (the example targets Java SE). When UDP is used, a server doesn't verify that all packets reached their destination.

A UDP client-server example in Python (Mar 25, 2010) makes use of socket objects created with SOCK_DGRAM and exchanges data with the sendto() and recvfrom() functions.

Contents: 1. Overview of Sockets; 2. Byte-Ordering Functions; 3. Data Structures; 4. Common Socket Calls: socket(), bind(), listen(), accept(), connect(), send(), sendto(), recv(), recvfrom(), close(); 5. Example Code: UDP Client, UDP Server, TCP Client, TCP Server.

Client/Server Communications Library for Visual Basic: create server and client programs that communicate across TCP/IP or UDP through Visual Basic or VB.NET.

A customer in the labor time-keeping business had a series of timeclock products that were installed on factory floors to allow the workers to clock in and out, and to change the jobs they were working on. The first versions of these clocks were connected via serial lines, and in 1992 I was contracted to build the serial-port server for them.

Then the user chooses a local port number for the server to listen on. All connections received on that port are forwarded via the client over UDP port 53 to the remote host and port that the user also chooses. The client connects to example.com.

Nov 13, 2013: I have a UDP server that can work with multiple clients using the Winsock control (the clients are on the same machine but use different port numbers).
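The Python approach mentioned above (sockets created with SOCK_DGRAM, exchanging data via sendto() and recvfrom()) can be condensed into a minimal loopback echo round trip. This is an illustrative sketch, not code from any of the posts quoted here; the function name, timeout, and payload are invented:

```python
import socket

def run_echo_once(payload=b"hello udp"):
    """Minimal UDP round trip on loopback: a 'server' socket bound with
    bind(), a 'client' socket sending with sendto(), both reading with
    recvfrom().  Returns the echoed payload."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    server.settimeout(5)
    server_addr = server.getsockname()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(5)
    client.sendto(payload, server_addr)  # no handshake: the datagram just goes out

    data, peer = server.recvfrom(1024)   # recvfrom() reports who sent the datagram
    server.sendto(data, peer)            # echo straight back to that sender

    reply, _ = client.recvfrom(1024)
    client.close()
    server.close()
    return reply
```

Because UDP is connectionless, the client's sendto() succeeds immediately and the datagram sits in the server socket's kernel buffer until recvfrom() picks it up, which is why this single-threaded round trip works.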
The following example shows a C socket UDP server (UDPS) program. The source code can be found in the UDPS member of the SEZAINST data set. From Nsasoft: UDP Client Server is a network utility for testing network programs, network services, firewalls, and intrusion detection systems. UDP Client Server can also be used for debugging network programs and configuring other network tools. The tool can work as a UDP client and UDP server, send and receive UDP. Creating Simple UDP Server And Client to transfer data: one a server, the other a client. 15 thoughts on “Creating Simple UDP Server And Client to Transfer Data Using C#”. Sep 20, 2015. So I wrote a nodejs app for the UDP server to accept data from my client (webapp) and notify the other clients using websockets. To write a server which accepts UDP connections you can use the dgram module in nodejs. var dgram = require('dgram'); var udpPort = process.env.UDPPORT || 3000; var server. Exchange 2010 RPC Client Access Service. By default, the RPC Client Access service on an Exchange 2010 Client Access server uses the TCP End Point Mapper port (TCP. Feb 8, 2015. Testing TCP / UDP clients and servers with a Linux platform. Sometimes you need a quick method or mechanism to test whether a TCP or UDP client or server works; this can be done with the Linux netcat application. Some examples of this application usage are: netcat -ul -. Jul 29, 2012 · The real problem is that the server isn’t binding to an address. As a result, the client can’t connect to it. To bind, you should construct a struct sockaddr_in that. Jun 16, 2011. Overview To use UDP to make a chat application. 
Description: This is an example of building a chatter application by using the UDP knowledge that can. perlipc. NAME; DESCRIPTION; Signals. Handling the SIGHUP Signal in Daemons; Deferred Signals (Safe Signals); Named Pipes; Using. The company’s latest solution runs over the Adax SCTP/T as an optional module. Mar 7, 2013. I have noticed that quite a few people were trying to create UDP communication and I thought that proposing my class could help them. This is very basic as it does not define anything such as the size of a packet or any protocol to ensure arrival of the packets. However, it can be useful if you want to send a. This module shows the steps on how to build and develop the C++ and C# UDP client and server .NET projects and programs. We recently noted that uTorrent, now owned by BitTorrent, released a new version of their client that lays uTP, the micro transport protocol, on top of UDP. That decision results in better congestion control, but it also prevents the kind of TCP. UDP Client and Server The UDP client and server are created with the help of the DatagramSocket and DatagramPacket classes. If the UDP. Does it mean our server is trying to do lookups, hence the port being accessed, or is it the ‘client’ machines doing the lookup? It’s NETBIOS name lookups, not DNS. DNS is udp/tcp 53. It’s typically used in an exploit. I get craploads of. Example 17.1 is a small example of a UDP program. It contacts the UDP time port of the machine whose name is given on the command line, or of the local machine by default. This doesn't work on all machines, but those with a server will send you back a 4-byte integer packed in network byte order that represents the time. 
Aug 24, 2009. In the previous section we looked at creating a basic TCP client using net/telnet. However, to demonstrate a basic client/server system, UDP is an ideal place to start. Unlike with TCP, UDP has no concept of connections, so it works on a simple system where messages are passed from one place to another with. Sep 28, 2016. Developers from a web background often wonder why games go to such effort to build a client/server connection on top of UDP, when for many applications, TCP is good enough. The reason is that games send time-critical data. Why don't games use TCP for time-critical data? The answer is that TCP. Jul 15, 2008. What is UDP Server/Client Mode? UDP is a network socket communication protocol that is faster and more efficient than TCP. In UDP Mode, you can unicast or multi-unicast data from a serial device to one or multiple host computers. The serial device can also receive data from one or multiple host. tokens are good for 30 seconds and in order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the. May 3, 2017. Write UDP client and server programs that will provide the current date and time to a client using the UDP protocol; Implement a multicast server and client. In its basic form, the client needs to know the server address before it sends a request. In this exercise, you will use multicast sockets so that the client. 
The hackers could then inject shell meta-characters into the DeviceUpgrade process to permit the attacker to execute commands instructing the bot to flood targets with manually crafted malicious TCP or UDP packets. These packets are. This document describes how to configure Internet Key Exchange (IKE) shared secret using a RADIUS server. The IKE shared secret feature that uses an authentication. UDP clients and servers make use of datagrams, which are individual messages containing source and destination information. There is no state maintained by these messages, unless the client or server does so. The messages are not guaranteed to arrive, or may arrive out of order. The most common situation for a client. Socket Programming. How do we build Internet applications? Figure 4 shows the interaction between a UDP client and server. First of all, Jul 5, 2017. Now we are implementing a simple echo UDP server/client, so our client will take the input from the console and send it to the server. The server will receive it and send it back to the client. When the client receives the data from the server it outputs it to the console. At first, we modify the server to send a data. Oct 6, 2012. Hello, I'm making simple client and server programs. The client just sends the message "HELLO" to the server and the server just sends back the time since epoch. However, the problem I'm having is how to set up my client so that it stops sending the same msg after it gets a response, while on the other. SAS Institute Inc. Cary, NC. Feb. 1998. Writing Client/Server Programs in C. 
Using Sockets (A Tutorial). Part I. Feb. 1998. [TCP/UDP/IP diagram: the 64K TCP and UDP port ranges with well-known ports 0–1023; a REXEC client reaching the REXEC server on port 512 of Dev1.sas.com (18.104.22.168) via IP routing over the internet.] I am trying to write a basic C client/server program using sockets in Unix. I am logging on my school’s Unix server from my home computer. I am logging on twice, once. Jan 29, 2016. gcc -g -Wall -o udp-client udp-client.c # to support debugging. #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <netinet/in.h> #include <string.h> #include <assert.h> #include <arpa/inet.h> #include <netdb.h> #include <limits.h> Description: The sample program depicts the communication between a client and server using UDP-based sockets. The server starts first, creates and binds a socket, and waits for. Nov 24, 2016. I finally made a working example of UDP Socket with Haxe, build it with neko if you want to try. 
Please don't care about my bad english UDP Server: package; import haxe.io.Bytes; import neko.vm.Thread; import openfl.display.Sprite; import openfl.Lib; import sys.net.Address; import sys.net.Host; Feb 25, 2011 · Hello Kahuna! A server or a client are both quite easy to implement involving only a couple classes. They probably suggested UDP to you as it’s connectionless and. | <urn:uuid:c753f43b-9ae8-4cb3-90ac-e493f8762469> | 3.171875 | 2,845 | Content Listing | Software Dev. | 74.388522 | 95,588,923 |
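Several of the snippets above describe the same pattern: datagram sockets created with SOCK_DGRAM that exchange data through sendto() and recvfrom(), with no connection and no delivery guarantee. A minimal, self-contained Python sketch of that echo round-trip (the loopback address, port choice, and function name are illustrative, not taken from any of the quoted posts):

```python
import socket

def udp_echo_roundtrip(message: bytes) -> bytes:
    """Send one datagram from a client socket to a server socket and echo it back."""
    # "Server": a datagram (SOCK_DGRAM) socket bound to an ephemeral localhost port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
    server_addr = server.getsockname()

    # "Client": another datagram socket; with UDP no connection is established,
    # and nothing verifies that the packet reached its destination.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(2.0)
    client.sendto(message, server_addr)

    # Server side: receive one datagram and send it straight back to the sender.
    server.settimeout(2.0)
    data, client_addr = server.recvfrom(4096)
    server.sendto(data, client_addr)

    # Client side: read the echoed datagram.
    reply, _ = client.recvfrom(4096)
    client.close()
    server.close()
    return reply

if __name__ == "__main__":
    print(udp_echo_roundtrip(b"HELLO"))      # prints b'HELLO'
```

Because the OS buffers UDP datagrams, the client can send before the server calls recvfrom(), which is why this single-threaded demo works without threads.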
Air pollution in delhi case study pdf
As Earth’s population continues to grow, people are putting ever-increasing pressure on the planet’s water resources. In a sense, our oceans, rivers, and other inland waters are being “squeezed” by human activities—not so they take up less room, but so their quality is reduced. Before the 19th-century Industrial Revolution, people lived more in harmony with their immediate environment.
As industrialization has spread around the globe, so the problem of pollution has spread with it. When Earth’s population was much smaller, no one believed pollution would ever present a serious problem. It was once popularly believed that the oceans were far too big to pollute. According to the environmental campaign organization WWF: “Pollution from toxic chemicals threatens life on this planet.
Every ocean and every continent, from the tropics to the once-pristine polar regions, is contaminated. Photo: Detergent pollution entering a river. Water pollution can be defined in many ways. Usually, it means one or more substances have built up in water to such an extent that they cause problems for animals or people.
Oceans, lakes, rivers, and other inland waters can naturally clean up a certain amount of pollution by dispersing it harmlessly. If you poured a cup of black ink into a river, the ink would quickly disappear into the river’s much larger volume of clean water. Photo: Pollution means adding substances to the environment that don’t belong there—like the air pollution from this smokestack. Pollution is not always as obvious as this, however. A small quantity of a toxic chemical may have little impact if it is spilled into the ocean from a ship.
But the same amount of the same chemical can have a much bigger impact pumped into a lake or river, where there is less clean water to disperse it. Water pollution almost always means that some damage has been done to an ocean, river, lake, or other water source. Fortunately, Earth is forgiving and damage from water pollution is often reversible. What are the main types of water pollution? When we think of Earth’s water resources, we think of huge oceans, lakes, and rivers. The most obvious type of water pollution affects surface waters. For example, a spill from an oil tanker creates an oil slick that can affect a vast area of the ocean. | <urn:uuid:d01cb6f2-4c23-49c4-a57a-2790d97b01af> | 3.375 | 501 | Knowledge Article | Science & Tech. | 44.411725 | 95,588,938 |
NEW YORK (AP) _ Last year was the warmest in a century, nosing out 1998, a federal analysis concludes.
Researchers calculated that 2005 produced the highest annual average surface temperature worldwide since instrument recordings began in the late 1800s, said James Hansen, director of NASA's Goddard Institute for Space Studies.
The result confirms a prediction the institute made in December.
In a telephone interview, Hansen said the analysis estimated temperatures in the Arctic from nearby weather stations because no direct data were available. Because of that, ``we couldn't say with 100 percent certainty that it's the warmest year, but I'm reasonably confident that it was,'' Hansen said.
More important, he said, is that 2005 reached the warmth of 1998 without help of the ``El Nino of the century'' that pushed temperatures up in 1998.
Over the past 30 years, Earth has warmed a bit more than 1 degree in total, making it about the warmest it's been in 10,000 years, Hansen said. He blamed a buildup of heat-trapping greenhouse gases.
Jay Lawrimore of the federal government's National Climatic Data Center said his own center's current data suggest 2005 came in a close second to 1998, in part because of how the Arctic was factored in. But he said a forthcoming analysis ``will likely show that 2005 is slightly warmer than 1998.'' | <urn:uuid:cedaf292-b217-4015-bc1e-8f143d70478c> | 3.015625 | 279 | News Article | Science & Tech. | 48.13 | 95,588,963 |
Dr. Miriam Douglass
Dr. Martin McClinton
In this interactive object, learners determine the limiting reagent and the excess reagent in chemical reactions. Learners test their knowledge by solving three problems.
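The stoichiometric rule behind the exercise above can be sketched in a few lines of Python: the limiting reagent is the one with the smallest ratio of moles available to stoichiometric coefficient, and every other reagent is in excess. The reaction and quantities below are illustrative values, not taken from the learning object:

```python
def limiting_reagent(moles_available: dict, coefficients: dict) -> str:
    """Return the reagent exhausted first in a*A + b*B + ... -> products.

    The limiting reagent supports the smallest number of "reaction extents",
    i.e. moles available / stoichiometric coefficient.
    """
    return min(moles_available, key=lambda r: moles_available[r] / coefficients[r])

# Example: 2 H2 + O2 -> 2 H2O, starting from 3.0 mol H2 and 2.0 mol O2.
# H2 supports 3.0/2 = 1.5 extents; O2 supports 2.0/1 = 2.0,
# so H2 runs out first and O2 is the excess reagent.
print(limiting_reagent({"H2": 3.0, "O2": 2.0}, {"H2": 2.0, "O2": 1.0}))  # prints H2
```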
Types of Elements in the Periodic Table and Their Properties
By Debbie McClinton, Dr. Miriam Douglass, Dr. Martin McClinton
Students review the positions of metals, metalloids, and nonmetals in the Periodic Table and the general characteristics of each. A quiz completes the object.
In this animated and interactive object, learners examine the inverse proportionality of wavelength and frequency and their relationship to the speed of light.
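The relationship behind the object above is c = λν: since the speed of light c is constant, wavelength λ and frequency ν are inversely proportional. A short numeric check (the 500 nm example value is an assumption, not from the object):

```python
# c = wavelength * frequency, so the two are inversely proportional.
SPEED_OF_LIGHT = 299_792_458.0   # m/s, exact by definition

def frequency_from_wavelength(wavelength_m: float) -> float:
    return SPEED_OF_LIGHT / wavelength_m

def wavelength_from_frequency(frequency_hz: float) -> float:
    return SPEED_OF_LIGHT / frequency_hz

# Doubling the wavelength halves the frequency:
f_green = frequency_from_wavelength(500e-9)    # ~6.0e14 Hz for 500 nm light
f_double = frequency_from_wavelength(1000e-9)
assert abs(f_green / f_double - 2.0) < 1e-12
```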
Heat of Fusion and Heat of Vaporization
Learners examine graphs and read that the heat of fusion is the heat energy absorbed by one mole of solid as it is converted to liquid, while the heat of vaporization is the heat energy absorbed by one mole of liquid as it is converted to gas.
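The quantities described above follow q = n·ΔH, the heat absorbed when n moles of a substance melts or vaporizes at constant temperature. A small sketch using the standard textbook values for water (water is an assumed example substance, not specified by the object):

```python
# q = n * delta_H: heat absorbed when n moles change phase at constant temperature.
DELTA_H_FUSION = 6.01        # kJ/mol for water, solid -> liquid at 0 degrees C
DELTA_H_VAPORIZATION = 40.7  # kJ/mol for water, liquid -> gas at 100 degrees C

def heat_for_phase_change(moles: float, delta_h_kj_per_mol: float) -> float:
    return moles * delta_h_kj_per_mol

# Vaporizing water absorbs far more heat than melting the same amount:
print(round(heat_for_phase_change(2.0, DELTA_H_FUSION), 2))         # prints 12.02 (kJ)
print(round(heat_for_phase_change(2.0, DELTA_H_VAPORIZATION), 1))   # prints 81.4 (kJ)
```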
Science Lab Equipment- Part 1
By Bruce Bell
Students read an introduction to the lab equipment used to contain and dispense chemicals. A quiz completes the activity.
Why rinse filter paper which filtered copper(II) supernatant with distilled water?
Creative Commons Attribution-NonCommercial 4.0 International License.
Learn more about the license » | <urn:uuid:e387844b-783a-4a3c-8c73-776de3be40ab> | 3.953125 | 320 | Content Listing | Science & Tech. | 43.365772 | 95,588,970 |
Electron microscopes have electron optical lens systems that are analogous to the glass lenses of an optical light microscope.
Electron microscopes are used to investigate the ultrastructure of a wide range of biological and inorganic specimens including microorganisms, cells, large molecules, biopsy samples, metals, and crystals. Industrially, electron microscopes are often used for quality control and failure analysis. Modern electron microscopes produce electron micrographs using specialized digital cameras and frame grabbers to capture the image.
The physicist Ernst Ruska and the electrical engineer Max Knoll constructed the prototype electron microscope in 1931, capable of four-hundred-power magnification; the apparatus was the first demonstration of the principles of electron microscopy. Two years later, in 1933, Ruska built an electron microscope that exceeded the resolution attainable with an optical (light) microscope. Moreover, Reinhold Rudenberg, the scientific director of Siemens-Schuckertwerke, obtained the patent for the electron microscope in May 1931.
In 1932, Ernst Lubcke of Siemens & Halske built and obtained images from a prototype electron microscope, applying concepts described in the Rudenberg patent applications. Five years later (1937), the firm financed the work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska (Ernst’s brother) to develop applications for the microscope, especially with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. The first commercial electron microscope was produced in 1938 by Siemens. The first North American electron microscope was constructed in 1938, at the University of Toronto, by Eli Franklin Burton and students Cecil Hall, James Hillier, and Albert Prebus; and Siemens produced a transmission electron microscope (TEM) in 1939. Although contemporary transmission electron microscopes are capable of two million-power magnification, as scientific instruments, they remain based upon Ruska’s prototype.
The original form of electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. The electron beam is produced by an electron gun, commonly fitted with a tungsten filament cathode as the electron source. The electron beam is accelerated by an anode typically at +100 keV (40 to 400 keV) with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the specimen that is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by the objective lens system of the microscope. The spatial variation in this information (the "image") may be viewed by projecting the magnified electron image onto a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. Alternatively, the image can be photographically recorded by exposing a photographic film or plate directly to the electron beam, or a high-resolution phosphor may be coupled by means of a lens optical system or a fibre optic light-guide to the sensor of a digital camera. The image detected by the digital camera may be displayed on a monitor or computer. | <urn:uuid:7c9a7713-4cdb-4709-b333-d1b041473d8e> | 3.609375 | 659 | Knowledge Article | Science & Tech. | 12.896286 | 95,588,975 |
On the Structure of Hurricanes as Revealed by Research Aircraft Data
Research aircraft have been probing the inner core of tropical cyclones on a regular basis for more than two decades. Most of what we know today about the detailed internal structure and dynamics of these storms has resulted from studies based upon data collected from these airborne instrumented platforms. Noteworthy among earlier studies are those by Riehl and Malkus (1961) and LaSeur and Hawkins (1963). Riehl and Malkus used data collected in Hurricane Daisy (1958) to try to deduce the thermal and dynamical characteristics of a developing and mature storm. LaSeur and Hawkins used multiple level data collected in Hurricane Cleo (1958) to study the three-dimensional structure of a mature hurricane. Later, Hawkins and Rubsam (1968) conducted a detailed diagnostic study of Hurricane Hilda (1964), computing many of the same quantities as those computed by Riehl and Malkus in their 1961 study. Although, in general, findings were similar, significant differences between the two studies appeared, such as the depth of the inflow layer and the role of local generation as compared to advection for increases in the kinetic energy in the inner core of the hurricane. Similarly, several other case studies (Colon, 1961, 1964; Sheets, 1968, 1973; Hawkins and Imbembo, 1976) have shown gross features similar to those of the early Hurricane Cleo study, but have also shown wide variations from storm to storm and even from day to day within a given storm.
Keywords: Tropical Cyclone, Inner Core, Vertical Wind, Maximum Wind, Maximum Wind Speed
- Colon, J., and Staff, NHRP, 1961: On the structure of hurricane Daisy (1958). National Hurricane Research Project Report No. 48, 102 pp. (available from NOAA, NHRL, Coral Gables, FL)
- Colon, J., 1964: On the structure of hurricane Helene (1958). National Hurricane Research Project Report No. 72, 56 pp. (available from NOAA, NHRL, Coral Gables, FL)
- Holliday, C., 1969: On the maximum sustained winds occurring in Atlantic hurricanes. ESSA Technical Memorandum WBTM-SR-45, 6 pp. (available from U.S. Dept. of Commerce, Weather Bureau-Southern Region, Ft. Worth, TX)
- Sheets, R.C., 1968: The structure of hurricane Dora (1964). National Hurricane Research Laboratory Report No. 83, 64 pp. (available from NOAA, NHRL, Coral Gables, FL)
- Willoughby, H.E., 1979: Some aspects of the dynamics in hurricane Anita of 1977. NOAA Technical Memorandum ERL NHEML-5, 30 pp. (available from NOAA, NHRL, Coral Gables, FL) | <urn:uuid:36c76991-ca10-4813-9fcc-ac3eadfe7082> | 2.984375 | 600 | Truncated | Science & Tech. | 51.357547 | 95,588,979 |
El Niño, the abnormal warming of sea surface temperatures in the Pacific Ocean, is a well-studied tropical climate phenomenon that occurs every few years. It has major impacts on society and Earth's climate - inducing intense droughts and floods in multiple regions of the globe.
Further, scientists have observed that El Niño greatly influences the yearly variations of tropical cyclones (a general term which includes hurricanes, typhoons and cyclones) in the Pacific and Atlantic Oceans. However, there is a mismatch in both timing and location between this climate disturbance and the Northern Hemisphere hurricane season:
El Niño peaks in winter and its surface ocean warming occurs mostly along the equator, i.e. a season and region without tropical cyclone (TC) activity. This prompted scientists to investigate El Niño's influence on hurricanes via its remote ability to alter atmospheric conditions such as stability and vertical wind shear rather than the local oceanic environment.
Fei-Fei Jin and Julien Boucharel at the University of Hawai'i School of Ocean and Earth Science and Technology (SOEST) and I-I Lin at the National Taiwan University published a paper today in Nature that uncovers what's behind this "remote control."
Jin and colleagues uncovered an oceanic pathway that brings El Niño's heat into the Northeastern Pacific basin two or three seasons after its winter peak - right in time to directly fuel intense hurricanes in that region.
El Niño develops as the equatorial Pacific Ocean builds up a huge amount of heat underneath the surface and it turns into La Niña when this heat is discharged out of the equatorial region.
"This recharge/discharge of heat makes El Niño/La Niña evolve somewhat like a swing," said lead author of the study Jin.
Prior to Jin and colleagues' recent work, researchers had largely ignored the huge accumulation of heat occurring underneath the ocean surface during every El Niño event as a potential culprit for fueling hurricane activity.
"We did not connect the discharged heat of El Niño to the fueling of hurricanes until recently, when we noticed another line of active research in the tropical cyclone community that clearly demonstrated that a strong hurricane is able to get its energy not only from the warm surface water, but also by causing warm, deep water - up to 100 meters deep - to upwell to the surface," Jin continued.
Co-author Lin had been studying how heat beneath the ocean surface adds energy to intensify typhoons (tropical cyclones that occur in the western Pacific).
"The super Typhoon Hainan last year, for instance, reached strength way beyond normal category 5," said Lin. "This led to a proposed consideration to extend the scale to category 6, to be able to grasp more properly its intensity. The heat stored underneath the ocean surface can provide additional energy to fuel such extraordinarily intense tropical cyclones."
"The North-Eastern Pacific is a region normally without abundant subsurface heat," said Boucharel, a post-doctoral researcher at UH SOEST. "El Niño's heat discharged into this region provides conditions to generate abnormal amount of intense hurricanes that may threaten Mexico, the southwest of the US and the Hawaiian islands."
Furthermore, caution the authors, most climate models predict a slow down of the tropical atmospheric circulation as the mean global climate warms up. This will result in extra heat stored underneath the North-eastern Pacific and thus greatly increase the probability for this region to experience more frequent intense hurricanes.
Viewed more optimistically, the authors point out that their findings may provide a skillful method to anticipate the activeness of the coming hurricane season by monitoring the El Niño conditions two to three seasons ahead of potentially powerful hurricane that may result.
The School of Ocean and Earth Science and Technology at the University of Hawaii at Manoa was established by the Board of Regents of the University of Hawai'i in 1988 in recognition of the need to realign and further strengthen the excellent education and research resources available within the University. SOEST brings together four academic departments, three research institutes, several federal cooperative programs, and support facilities of the highest quality in the nation to meet challenges in the ocean, earth and planetary sciences and technologies.
Marcie Grabowski | EurekAlert! | <urn:uuid:2b01544b-3793-427e-99a0-9cc4fdb672a2> | 4 | 1,433 | Content Listing | Science & Tech. | 32.383856 | 95,588,994 |
A new study revealed that humans are responsible for starting 84 percent of all wildfires in the continental United States from 1992 to 2012.
Human-induced climate change has doubled the area affected by forest fires over the last 30 years.
Northern California, Western Oregon and the Great Plains are likely to suffer the highest exposure to wildfire smoke.
| <urn:uuid:a421d70f-2252-421b-b029-502d321e5550> | 2.890625 | 77 | Truncated | Science & Tech. | 43.963077 | 95,588,996 |
Scientists have discovered a rare mineral in Russia that could hold the key to boosting internet speeds by 1000%.
The mineral, perovskite, was first discovered in Russia in the 1830s. Scientists say that it has a number of extraordinary properties, many of which they are now learning about.
Forbes.com reports: Perovskite (CaTiO3) is a calcium titanium oxide mineral, but the magic lies in this minerals ability to house many different cations in its physical structure, giving engineers the ability to modify the mineral as they see fit. While scientists have known about the mineral for quite some time, originally discovered in the Ural Mountains in Russia in 1839, researchers continue to find useful characteristics of this mineral.
Perovskite is found in Earth’s mantle and has been mined in Arkansas, the Urals, Switzerland, Sweden, and Germany. Each variety has a slightly different chemical makeup, allowing for different physical characteristics. One such useful characteristic discovered in 2009 is perovskite’s ability to absorb sunlight and generate electricity, a natural form of a photovoltaic cell (solar cell). The mineral is currently under development for use in solar cells, displays, and catalytic converters.
Next Generation Terahertz Data Transfer
Now, scientists have discovered the mineral’s ability to use the terahertz spectrum in transferring data. The specific type of perovskite used is both inorganic and organic and can be thinly layered on a silicon wafer. The system’s unique ability is that it uses light instead of electricity to transfer data, allowing transfer speeds 1,000 times faster than current technology.
The terahertz band lies in between infrared light and radio frequency (100 to 10,000 gigahertz). This compares to the 2.4 gigahertz range most cellphones use today. The layered perovskite mineral can transfer data through light waves in the terahertz band using a simple halogen lamp. Using a halogen lamp, the research team found that they can modify the terahertz waves as they pass through the perovskite. This allowed the research team to encode data in the waves and transfer data 1,000 times faster than traditional electronic data transfers.
This research builds on the previous discovery of modulating waves in perovskite, which required expensive, high-powered lasers and was therefore too costly to commercialize. The new approach uses simple, inexpensive halogen bulbs. In addition, the team found that they can choose the color of the light to modulate data simultaneously on different frequencies. Hence, not only can they transfer data 1,000 times faster using terahertz waves, they can run multiple data transfers at once using different-colored lamps.
This technological breakthrough opens the door to using terahertz data transfer in future generation computing and communication. At a thousand times faster, this inexpensive and simple way to transfer data presents a multitude of opportunities to transform our digital lives. Unfortunately, we’ll have to wait at least 10 years until it becomes commercially ready according to the authors. When that time comes, this could present a step change in computing and communication.
| <urn:uuid:be34387d-069c-4a4a-9ece-5a4a41fc4665> | 3.21875 | 745 | News Article | Science & Tech. | 30.558528 | 95,588,997 |
Most of the carbon resulting from wildfires and fossil fuel combustion is rapidly released into the atmosphere as carbon dioxide. Researchers at the University of Zurich have now shown that the leftover residue, so-called black carbon, can age for millennia on land and in rivers en route to the ocean, and thus constitutes a major long-term reservoir of organic carbon. The study adds a major missing piece to the puzzle of understanding the global carbon cycle.
Due to its widespread occurrence and tendency to linger in the environment, black carbon may be one of the keys in predicting and mitigating global climate change. In wildfires, typically one third of the burned organic carbon is retained as black carbon residues rather than emitted as greenhouse gases.
Initially, black carbon remains stored in the soil and in lakes, and is then eroded from river banks and transported to the ocean. However, black carbon is not taken into account in global carbon budget warming simulations, because its role in the global carbon cycle is not well understood as a result of a lack of knowledge about fluxes, stocks, and residence times in the environment.
First worldwide assessment of black carbon river transport
“Our study is the first to address the flux of black carbon in sediments by rivers on a global scale. We found that a surprisingly large amount of black carbon is exported by rivers,” says lead author Alysha Coppola, a postdoctoral researcher in the Department of Geography at the University of Zurich (UZH).
The study includes some of the largest rivers worldwide, such as the Amazon, Congo, Brahmaputra, and major Arctic rivers. It is the first global river assessment of the radiocarbon age values and amount of black carbon transported as particles. The researchers found that the more total river sediment is transported by rivers to the coast, the more black carbon travels with it and is ultimately buried in ocean sediments, forming an important long-term sink for atmospheric carbon dioxide.
Black carbon can age in intermediate reservoirs
To gain an overview of the processes occurring in the world’s rivers, the UZH researchers teamed up with colleagues from ETH Zurich, and the US-based Global Rivers Observatory at the Woods Hole Oceanographic Institution and the Woods Hole Research Center.
They discovered that the black carbon pathway from land to ocean is mainly shaped by erosion in river drainage basins. Surprisingly, they found that some black carbon can be stored for thousands of years before being exported to the ocean via rivers. This insight is new, since it was previously always assumed that after a fire, the remaining black carbon was quickly eroded by wind and water.
However, the authors found that black carbon does not always originate from recent wildfires, but could be up to 17,000 years old, particularly in the Arctic. “This explains the mystery as to why black carbon is continuously present in river waters, regardless of wildfire history. We found that black carbon can age in intermediate reservoirs that act as holding pools before being exported to the ocean,” says Alysha Coppola.
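The ~17,000-year figure comes from radiocarbon dating. A minimal sketch of the conventional age calculation, using the standard Libby mean life (illustrative only; the study's actual calibration is more involved):

```python
import math

# Conventional radiocarbon mean life: Libby half-life (5568 yr) / ln 2.
LIBBY_MEAN_LIFE_YR = 8033.0

def radiocarbon_age_yr(fraction_modern: float) -> float:
    """Conventional radiocarbon age from the measured 14C fraction."""
    return -LIBBY_MEAN_LIFE_YR * math.log(fraction_modern)

def fraction_modern(age_yr: float) -> float:
    """Fraction of the atmospheric 14C level remaining after age_yr years."""
    return math.exp(-age_yr / LIBBY_MEAN_LIFE_YR)

# Black carbon at ~17,000 yr retains only ~12% of the modern 14C level:
# a strongly depleted but still measurable signal.
f_17k = fraction_modern(17_000.0)
age_roundtrip = radiocarbon_age_yr(f_17k)
```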
Alysha I. Coppola et al. Global scale evidence for the refractory nature of riverine black carbon. Nature Geoscience. July 9, 2018. DOI: 10.1038/s41561-018-0159-8
| <urn:uuid:d2c19616-1917-410e-a203-3520584590a2> | 4.15625 | 1,450 | Content Listing | Science & Tech. | 41.981781 | 95,589,005 |
Large lakes currently exhibit ecosystem responses to environmental changes such as climate and land use changes, nutrient loading, toxic contaminants, hydrological modifications and invasive species. These sources have impacted lake ecosystems over a number of years in various combinations and often in a spatially heterogeneous pattern. At the same time, many different kinds of mathematical models have been developed to help to understand ecosystem processes and improve cost-effective management. Here, the advantages and limitations of models and sources of uncertainty will be discussed. From these considerations and in view of the multiple environmental pressures, the following emerging issues still have to be met in order to improve the understanding of ecosystem function and management of large lakes: (1) the inclusion of thresholds and points-of-no-return; (2) construction of general models to simulate biogeochemical processes for a large number of lakes rather than for individual systems; (3) improvement of the understanding of spatio-temporal variability to quantify biogeochemical fluxes accurately; and (4) inclusion of biogeochemical linkages between terrestrial and aquatic ecosystems in model approaches to assess the effects of external environmental pressures such as land-use changes. The inclusion of the above-mentioned issues would substantially improve models as tools for the scientific understanding and cost-effective management of large lakes that are subject to multiple environmental pressures in a changing future.
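The "thresholds and points-of-no-return" mentioned above can be illustrated with a minimal lake-phosphorus model of the kind common in this literature. This is a hypothetical toy model with assumed parameters, not one of the models the abstract reviews: a sigmoidal internal-recycling term creates two stable states, and which one the lake settles into depends on which side of the threshold it starts.

```python
def simulate_lake_p(p0: float, load: float = 0.3, loss: float = 0.6,
                    q: int = 8, dt: float = 0.01, steps: int = 20_000) -> float:
    """Euler-integrate dP/dt = load - loss*P + P**q / (1 + P**q).

    P is a dimensionless phosphorus level. The sigmoidal recycling term
    produces bistability: a low (clear) and a high (turbid) stable state
    separated by an unstable threshold, i.e. a point of no return.
    """
    p = p0
    for _ in range(steps):
        p += dt * (load - loss * p + p**q / (1.0 + p**q))
    return p

clear_state = simulate_lake_p(0.4)   # starts below the threshold
turbid_state = simulate_lake_p(1.2)  # starts above the threshold
flipped = turbid_state > 2 * clear_state
```

Two initial conditions differing only modestly end up in very different equilibria, which is exactly why models used for management need thresholds built in.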
| <urn:uuid:09c75918-2fca-4e16-8d7b-496079794b2b> | 3.03125 | 290 | Academic Writing | Science & Tech. | -12.835 | 95,589,016 |
Morphology and Ecological Physiology of Corals
Many important properties of coral reef ecosystems are closely connected with the morphological and functional characteristics of the corals themselves, the dominant inhabitants of their bottom biotopes. Among these properties are the adaptive variability of corals, their endosymbiosis with algae, and their capacity for autotrophic feeding coupled with heterotrophy. The most important points of the ecological morphology and physiology of corals are discussed below. The anatomy of polyps and the morphology of corallites and coral colonies have been described in the corresponding reviews, and we will deal with them only as necessary. Among these reviews should be mentioned: Hyman (1940), Bayer (1973), Coates and Oliver (1973), Mariscal (1974), Preobrazhenski (1986), and Sokolov (1987).
Keywords: Reef Slope; Scleractinian Coral; Coral Colonies; Light Adaptation; Ecological Physiology
| <urn:uuid:54b06365-31f5-4b45-8b17-20b8287f8e4c> | 2.828125 | 211 | Truncated | Science & Tech. | 5.9825 | 95,589,034 |
Scala Style Guide
Publisher: Scala Community 2011
Number of pages: 45
This document is intended to outline some basic Scala stylistic guidelines which should be followed with more or less fervency. Wherever possible, this guide attempts to detail why a particular style is encouraged and how it relates to other alternatives.
by Martin Odersky - EPFL
Scala is a concise, elegant, type-safe programming language that integrates object-oriented and functional features. This book is an excellent step-by-step introduction to many of the Scala features with the help of simple code examples.
by Jason Swartz - O'Reilly Media
An introduction and a guide to getting started with functional programming development. Written for programmers who are already familiar with object-oriented (OO) development, the book introduces the reader to the core Scala syntax and its OO models.
by Dean Wampler, Alex Payne - O'Reilly Media
The book introduces an exciting new language that offers all the benefits of a modern object model, functional programming, and an advanced type system. Packed with code examples, this comprehensive book teaches you how to be productive with Scala.
by Martin Odersky, Lex Spoon, Bill Venners - Artima Inc
Scala is an object-oriented programming language for the Java Virtual Machine. Scala is also a functional language, and combines the best approaches to OO and functional programming. This book is the authoritative tutorial on Scala programming. | <urn:uuid:5afaccd9-95d6-455c-99f6-16c08a779af6> | 2.640625 | 314 | Content Listing | Software Dev. | 25.928393 | 95,589,046 |
Gulf's sperm whales won't get separate protections, feds say
The Gulf of Mexico's sperm whales do not warrant a separate listing under the Endangered Species Act, the federal government concluded this week after nine months of study.
The National Marine Fisheries Service said the population fails to meet the requirements to be considered distinct from other sperm whales, which in general already receive protections as an endangered species in U.S. waters.
The finding comes in response to a petition by WildEarth Guardians. The New Mexico-based environmental group argued that the Gulf's isolated population of 1,300 sperm whales is genetically different than those found in other oceans and faces unique threats because of oil and gas development and a low-oxygen dead zone caused by runoff from the Mississippi River.
"We're disappointed in the decision," said Taylor Jones of WildEarth Guardians. "It implies that if the population of sperm whales in the Gulf disappears completely, that's not a problem."
Other whale species
The government lists six species of whales as endangered in the Gulf, including the blue and humpback, but only the sperm whale congregates year-round. They are smaller in size than their brethren and are found in smaller groups.
The subspecies also makes distinctive sounds because the cold canyons where the whales feed are entirely dark. They also use clicking and buzzing sounds to find their prey - a natural form of sonar known as echolocation.
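Echolocation ranging reduces to a simple time-of-flight calculation: the round-trip delay of a click, times the speed of sound, divided by two. A sketch assuming a typical sound speed in seawater of about 1,500 m/s (the exact value varies with depth, temperature, and salinity):

```python
SOUND_SPEED_SEAWATER = 1500.0  # m/s, typical assumed value

def target_range_m(echo_delay_s: float) -> float:
    """Range to a target from the round-trip delay of an echolocation click."""
    return SOUND_SPEED_SEAWATER * echo_delay_s / 2.0

# A click whose echo returns after 0.4 s implies prey about 300 m away.
range_example = target_range_m(0.4)
```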
Some evidence lacking
But the federal agency concluded: "The weight of the evidence does not indicate the (Gulf of Mexico) population of the sperm whale is 'markedly separated' from other populations."
Other research shows the Gulf's sperm whales can sustain no more than three human-caused deaths each year without threatening its recovery from endangered status. But scientists have said there is no evidence of a population decline in the area because of industrial activity. | <urn:uuid:a3407901-37d9-402b-ac42-abce0ac9787d> | 3.3125 | 384 | News Article | Science & Tech. | 37.261664 | 95,589,072 |
TO biologists, the Galapagos Islands are very special places, not least because their distinctive wildlife prompted Darwin to hit upon the theory of evolution. But the islands, off the coast of Ecuador, have long posed a troublesome evolutionary mystery: geologically, they are far too young for evolution, operating at its usual pace, to have produced the many unique life forms that now inhabit the islands.
A team of oceanographers and geologists has now happily resolved the mystery. They have found evidence that the first plants and animals to colonize the Galapagos chain probably landed on ancient islands that are no longer visible because they long ago sank beneath the waves. Their inhabitants were presumably forced to move on to the younger islands that exist today.
The discovery confirms a controversial hypothesis by two molecular biologists, Dr. Vincent M. Sarich and Dr. Jeffrey S. Wiles of the University of California at Berkeley. In 1983 they predicted that such "drowned" islands would be found. They reasoned that only the existence of long-vanished islands could account for the extensive evolutionary changes undergone by Galapagos species in the period since their ancestors arrived on the islands, which were originally lifeless. Plants and animals are believed to have arrived on the Galapagos aboard seaweed mats or driftwood rafts from the South American continent.
In the current issue of the British journal Nature, a team of researchers say they have found clear evidence that the oldest Galapagos Islands simply crumbled beneath the waves, a conclusion that strongly supports the Sarich-Wiles hypothesis. The authors of the report are Dr. David M. Christie of Oregon State University and his colleagues at the Universities of Oregon and California, Cornell University and the National Oceanic and Atmospheric Administration.
In a 26-day expedition, the scientists made detailed soundings and dredged samples from undersea mountains along the Carnegie Ridge, a submerged crest extending along the Pacific Ocean floor east of the Galapagos toward the coast of South America.
"All along this ridge, and especially around the seamounts protruding from it, we dredged up round basalt pebbles and cobbles," Dr. Christie said in an interview. "We know of no geological process other than beach erosion and near-surface wave action that could produce these rounded forms. The debris that settles to the ocean floor from underwater volcanoes is very different from these. We therefore conclude that these seamounts at one time reached above the surface as volcanic islands, and were later eroded away."
The submerged seamount of the Galapagos chain lying closest to the present-day coastline of South America is about 370 miles west of Ecuador, something less than half the distance from Ecuador to the existing Galapagos Islands. The age of this seamount, whose summit is now some 6,500 feet below the surface of the waves, is about nine million years, the scientists determined. That age is much greater than the estimated two million to three million years of today's Galapagos Islands.

Evolution's Chemical Tracks
The existence of an island chain for nine million years would be long enough to account for the state of evolution of the Galapagos animals seen today, biologists say. Moreover, according to Dr. Christie, there are probably still more ancient members of the Galapagos archipelago that remain to be identified, and they may be 90 million years old.
Dr. Sarich and other molecular biologists have demonstrated a chemical basis for the ticking of an evolutionary clock at a more or less constant rate. Essentially, each tick occurs when one amino acid in the backbone chain of a particular protein molecule is switched for another. The protein Dr. Sarich uses for his clock is albumin, and he reckons that in a typical species, between 2.5 and 3 of these substitutions occur in the course of a million years.
He and his colleague, Dr. Allan C. Wilson, now deceased, estimated that because the proteins of chimpanzees and human beings differ by 12 units, the two species must have diverged from their common ancestor about five million years ago.
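The arithmetic behind that estimate is straightforward: divide the protein distance by the clock rate. A sketch using the article's own figures; the rate of roughly 2.4 to 3 units per million years is an approximate range, and real molecular-clock analyses are considerably more careful:

```python
def divergence_time_my(albumin_distance: float, subs_per_my: float) -> float:
    """Rough molecular-clock estimate: protein distance / clock rate."""
    return albumin_distance / subs_per_my

# The article's figures: ~12 albumin units between humans and chimpanzees,
# with the clock ticking at roughly 2.4-3 units per million years.
t_fast_clock = divergence_time_my(12.0, 3.0)   # ~4 million years
t_slow_clock = divergence_time_my(12.0, 2.4)   # ~5 million years
```

The ~5-million-year human-chimp split quoted in the article corresponds to the slower end of that rate range.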
Dr. Sarich also extensively studied the protein chemistry of several Galapagos species, particularly that of the marine and land iguanas. These two species, Dr. Sarich said in an interview, clearly descended from a common ancestor, a sea-faring pioneer who floated from the South American coast aboard some kind of natural raft all the way to the Galapagos.
Reptiles are able to withstand long periods without water, he said, and this may have made it easier for iguanas to survive immense ocean journeys from South America to the Galapagos, and even to Fiji and Madagascar. But many other species, plants, birds and mammals, as well as the ancestors of the famous Galapagos giant tortoises, must have occasionally survived the ordeal of raft voyages lasting weeks or months from the mainland, carried westward by the Humboldt Current. Having arrived at the islands, they began evolutionary trees of their own that were different from those of their mainland cousins.
The marine and land iguanas of the Galapagos are more closely related to each other than either is to mainland relatives. However, Dr. Sarich said, they have evolved in very different ways. The units of difference in the amino acids of their respective albumins suggest that these two species must have diverged from their common ancestor many millions of years ago, and this finding led Dr. Sarich and a coauthor to publish a paper in 1983 provocatively entitled: "Are the Galapagos iguanas older than the Galapagos?"
"So you can see why," he said, "that we were pretty sure these sunken islands would eventually turn up. I'm not at all surprised by Dr. Christie's discovery."
This view was by no means universally accepted. In 1985, Dr. Carole S. Hickman, a paleontologist at the University of California at Berkeley, and Dr. Jere H. Lipps, a geologist at the University of California at Davis, took the contrary view. In a paper in the journal Science, they argued that fossils in Galapagos sediments were no more than two million years old and that "all evolution of the islands' unique terrestrial biota occurred within the last three to four million years."
In their new paper, Dr. Christie and his colleagues noted: "Some have argued that the geological youth of the present islands, less than or equal to three million years, requires that all adaptive radiation has occurred within that period. Our geological observations and radiometric data indicate that there may have been islands present over the Galapagos hot spot for at least nine million years and probably much longer."
The geological evidence, Dr. Christie said, shows there is a "hot spot" in the earth's mantle to which the Galapagos owe their creation. Molten lava from the hot spot burst out onto the floor of the Pacific Ocean, building up underwater volcanoes that eventually broke above the ocean's surface to form the Galapagos chain. The hot spot itself remains more or less stationary, but the crustal plate of the ocean floor has drifted eastward across it. The eruptions thus appear from the surface to move westward, with the westernmost islands being the newest; a half dozen volcanoes are still active on Isabela Island at the western end of the chain.
But at the eastern end, the oldest islands are no longer bolstered by fresh eruptions once they have moved passed the hot spot. Erosion and the movement of the tectonic plates have combined to pull down and "drown" the oldest islands, presumably, the very islands on which iguanas first arrived from the mainland. Animals living on older islands would presumably have moved in a succession of migrations to newer islands in the chain.
A similar situation prevails in the Hawaiian Islands, where the crustal plate moves steadily across a volcanic hot spot, periodically creating new islands. But while most of the native animals and birds that evolved in the Hawaiian Islands were exterminated by early human settlers or by the animals the settlers introduced, the original Galapagos fauna have been somewhat better protected, and many unique species have survived.
Scientific understanding of the evolution of species began in the Galapagos in 1834, when the young Charles Darwin stepped ashore from the H.M.S. Beagle to collect samples. Astonished by the great variety of previously unknown birds, reptiles and mammals inhabiting the islands and their waters, he killed and preserved as many as he could find. Returning to England, Darwin showed his specimens to the ornithologist John Gould, who recognized them as a scientific treasure.
Among the birds, Mr. Gould discerned 13 different varieties of finches. All were obviously related to each other, but with markedly different beaks, some adapted to crushing nuts, others to cracking seeds or drilling insects from tree bark. As Darwin thought about the differences between the 13 finches, he realized that the variations in their beaks had developed from a single ancestral line to take advantage of different ecological niches. From this conclusion, the theory of evolution was eventually born.
Dr. Sarich said that his investigation of the proteins of Galapagos animals had not included the historic Darwin finches, but that in general, evolutionary paces for most animals were similar, except for one group of rapidly evolving rodents.
"I'm sure investigators will eventually get around to the finches and other species," Dr. Sarich said. "The Galapagos Islands are a never-ending source of fascinating scientific material."Continue reading the main story | <urn:uuid:ab0bb6e2-bc1f-4fd0-b22c-f9baa6cecf13> | 4.5 | 1,980 | Truncated | Science & Tech. | 39.211909 | 95,589,084 |
The primary focus of this paper is to describe the development of a highly modified aircraft that carries a twenty ton telescope to the stratosphere and then loiters at this desired altitude to act as the observatory platform and dome. When the aircraft has reached its nominal cruise condition of Mach 0.84 in the stratosphere, a large cavity door opens (the dome opens), exposing a large portion of the interior of the fuselage that contains the telescope optics directly to the Universe. The topics covered in this paper include: the relevant criteria and the evaluation process that resulted in the selection of a Boeing 747-SP, the evolution of the design concept, the description of the structural modification including the analysis methods and tools, the aerodynamic issues associated with an open port cavity and how they were addressed, and the aeroloads/disturbances imparted to the telescope and how they were measured in the wind tunnel and extrapolated to full size. This paper is complementary to a previous paper presented at the 2000 Airborne Telescope Systems conference which describes the challenges associated with the development of the SOFIA Telescope.
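For context, the Mach 0.84 cruise condition can be converted to a true airspeed, since the speed of sound in air depends only on temperature. The sketch below assumes the standard-atmosphere stratospheric temperature of 216.65 K; the paper does not state the actual flight conditions:

```python
import math

GAMMA = 1.4          # ratio of specific heats for air
R_AIR = 287.053      # specific gas constant for air, J/(kg*K)
T_STRATO = 216.65    # K, ISA lower-stratosphere temperature (assumed)

def speed_of_sound(temp_k: float) -> float:
    """Speed of sound in air, a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def true_airspeed(mach: float, temp_k: float) -> float:
    return mach * speed_of_sound(temp_k)

v_cruise = true_airspeed(0.84, T_STRATO)  # ~248 m/s
v_cruise_kmh = v_cruise * 3.6             # ~890 km/h
```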
For completeness, this paper also provides a brief overview of the SOFIA project including the joint project arrangement between NASA and DLR, a top level overview of the requirements, and finally the current project status. | <urn:uuid:7d72571e-1453-4056-9114-4198e4c30b01> | 2.671875 | 266 | Academic Writing | Science & Tech. | 11.072024 | 95,589,099 |
Supersolids and superfluids rank among the most exotic of quantum mechanical phenomena. Superfluids can flow without any viscosity, and experience no friction as they flow along the walls of a container, because their atoms ‘condense’ into a highly coherent state of matter. Supersolids are also characterized by coherent effects, but between vacancies in a crystal lattice rather than between the solid’s atoms themselves.
The reduction in the rotational inertia of a bar of solid helium-4 as it was cooled to very low temperatures provided the first experimental evidence for supersolids. Physicists interpreted the reduction to mean that some amount of supersolid helium had formed and decoupled from the remainder of the bar, affecting its rotational inertia and frequency. Others argued that the reduction in inertia resulted from a change in the helium’s viscosity and elasticity with temperature, rather than from the onset of supersolidity.
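The "reduction in rotational inertia" is typically quantified as a nonclassical rotational inertia (NCRI) fraction inferred from the shift in a torsional oscillator's resonant period. A toy illustration with made-up numbers; the stiffness and inertias below are assumptions for demonstration, not values from the experiment:

```python
import math

def period_s(inertia_kg_m2: float, stiffness_n_m: float) -> float:
    """Resonant period of a torsional oscillator, P = 2*pi*sqrt(I/k)."""
    return 2.0 * math.pi * math.sqrt(inertia_kg_m2 / stiffness_n_m)

K = 1.0e3         # torsion-rod stiffness (assumed)
I_CELL = 1.0e-4   # empty-cell moment of inertia (assumed)
I_HE = 1.0e-6     # helium's contribution to the inertia (assumed)

p_empty = period_s(I_CELL, K)
p_full = period_s(I_CELL + I_HE, K)
# If 1% of the solid helium decouples (the supersolid fraction),
# the measured period drops slightly:
p_super = period_s(I_CELL + 0.99 * I_HE, K)

# NCRI fraction: the period drop relative to helium's full contribution.
ncri_fraction = (p_full - p_super) / (p_full - p_empty)
```

The period shifts involved are tiny, which is why disentangling a genuine inertia change from a temperature-dependent shear modulus required the simultaneous measurements described below.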
Kimitoshi Kono from the RIKEN Advanced Science Institute in Wako, Eunseong Kim from KAIST in Korea, and their colleagues from these institutes have now disproved the alternative interpretation by simultaneously measuring the shear modulus (a measure of viscosity and elasticity) and the rotational inertia of a solid helium-4 cell as its temperature dropped from 1 kelvin to 15 thousandths of a kelvin. The cell was made to rotate clockwise and then counterclockwise periodically, as well as to rotate clockwise or counterclockwise continuously (Fig. 1). The continuous rotation affected the inertial mass of the helium but not its shear modulus, allowing these quantities to be monitored independently.
Under continuous rotation, the degree of change in the rotational inertia showed a clear dependence on rotation velocity, while the shear modulus did not. In addition, the energy dissipated by the rotation increased at high speeds. Both of these observations contrast with what would be expected if viscoelastic effects, rather than supersolidity, were at play. The researchers also found that periodic rotation and continuous rotation affected the rotation differently, raising new questions about the experimental system.
The data support the interpretation that changes in the rotational inertia of helium-4 at low temperature result from supersolidity. This is important because of the novel and surprising nature of the phenomenon itself, says Kono. "Superfluidity in a solid is a very radical concept which, if proven, is certainly a good candidate for the Nobel Prize," he adds. "Therefore the first priority is to determine whether it can be proven in a fashion that will convince the low-temperature physics community."
The corresponding author for this highlight is based at the Low Temperature Physics Laboratory, RIKEN Advanced Science Institute
Choi, H., Takahashi, D., Kono, K. & Kim, E. Evidence of supersolidity in rotating solid helium. Science 330, 1512–1515 (2010).
| <urn:uuid:4a9f2ffe-2104-4014-a80c-29f23846e473> | 3.421875 | 1,187 | Content Listing | Science & Tech. | 30.739318 | 95,589,115 |
The prestigious journal Science is out with its top 10 breakthroughs of 2009. They include developments in anthropology, astronomy, and biology.
The breakthrough of the year was 15 years in coming. That's how long it took for an international team of scientists to excavate and analyze the fossilized skeleton of a 4.4 million year old human ancestor, Ardipithecus ramidus, which was discovered in Ethiopia. Science magazine deputy news editor Robert Coontz said "Ardi," as the creature was nicknamed, was especially surprising to scientists because of how she walked.
"The main thing was that it walked upright, just as we do. But what's unexpected about that is that our closest evolutionary relatives, chimpanzees and gorillas, don't do that. And so there was an assumption that our common ancestor with them would have been something that also walked that way. And it turns out that, no, Ardipithecus was designed for walking in trees or climbing trees."
Several of Science magazine's notable breakthroughs of the past year focused on astronomy and space. The journal cited the astronauts' service call to the Hubble Space Telescope, which gave the orbiting observatory a new lease on life. And editor Robert Coontz says the top 10 breakthroughs also included the discovery of water on the moon by the LCROSS mission.
"The poles of the moon have dark craters that never see. So if any ice were to wind up there, it really wouldn't go anywhere. So this year NASA sent up a spacecraft and sent the rocket stage right into the moon to 'bomb' the moon, basically, and see what came up. And they looked at it with a spectrometer and they found that the molecules that were coming up included water."
In the life sciences, the journal's editors noted advances in gene therapy — something that has long seemed on the verge of a breakthrough.
"This year, however, there were some very promising clinical results that indicate that it may be starting to work the way that people always hoped that it would. There was a form of inherited blindness, and some researchers in Britain injected patients with these viruses attached to genes. And it turns out that the patients actually did regain some of their sensitivity to light."
Coontz says some of the children in the study actually regained enough eyesight to be able to play sports normally.
Science magazine reports on these and the rest of its breakthroughs of the year online at ScienceMag.org. You'll have to register, but there's no cost.
At the website you'll also get a hint about areas to watch for breakthroughs in the coming year, including America's human spaceflight program.
"NASA is going to have to decide what it's going to do about the human space program. It will determine the whole direction that the future space program of the United States is going to take, and so that's something that we'll be looking at very closely."
Science magazine editor Robert Coontz says other areas to watch in 2010 include stem cell research and possible new cancer treatments. | <urn:uuid:899e56be-5341-46d7-be52-73b3bb7de83b> | 3.390625 | 628 | News Article | Science & Tech. | 51.843576 | 95,589,118 |
Astronomers led by Shiwei Wu of the Max Planck Institute for Astronomy have identified the most massive star in our home galaxy's largest stellar nursery, the star-forming region W49.
The star, named W49nr1, has a mass between 100 and 180 times the mass of the Sun. Only a few dozen of these very massive stars have been identified so far. As seen from Earth, W49 is obscured by dense clouds of dust, and the astronomers had to rely on near-infrared images from ESO's New Technology Telescope and the Large Binocular Telescope to obtain suitable data. It is hoped that the discovery will shed light on the formation of massive stars, and on the role they play in the biggest star clusters.
The discovery of a new, very massive star is exciting to astronomers for more than one reason: Very massive stars, more than 100 times the mass of our own Sun, are something of an astronomical mystery. They are very short-lived (a few million years compared to the 10 billion years of stars like our Sun), which is one reason they are so rare. Among the billions of stars catalogued and examined by astronomers, these very massive specimens amount to no more than a few dozen, most of them discovered over the past few years.
Though rare, these massive stars have a decisive influence on their surroundings. They are extremely bright, giving off large amounts of highly energetic UV radiation as well as streams of particles (stellar wind). Typically, such a star will create a bubble around itself, ionizing any nearby gas and pushing more distant gas ever farther away. Some of this pushed-away gas might actually cause distant gas clouds to collapse, triggering the birth of new stars.
Until a few years ago, there was even doubt whether such stars could form at all. Theorists have only quite recently managed to simulate the genesis of these massive bodies, and there are now several competing explanations for very massive star formation. In some models, such a star is the result of the merger between two stars forming in an extended star cluster. Up to now, there had only been three clusters (NGC 3603 and the Arches Cluster in our galaxy, R136 in the Large Magellanic Cloud) where such massive stars had actually been found.
Now, a team of astronomers led by Shiwei Wu from the Max Planck Institute for Astronomy (MPIA) has discovered such a massive star, and not in just any location, but in the largest star-forming region known in our Milky Way galaxy, which is called W49. The discovery was a challenging task: W49 is located at a distance of 36,000 light-years (11.1 kpc), almost half-way across our home galaxy, cloaked by the dust of two spiral arms that lie between us and the cluster.
Shiwei Wu explains: "Because W49 is hidden behind huge regions of interstellar dust, only one trillionth of the visible light it sends in our direction actually reaches Earth. That's why we observed the cluster's infrared light, which can pass through dust almost unhindered."
Using a spectrum obtained in the infrared with the European Southern Observatory's Very Large Telescope, the astronomers could determine the star's type ("O2-3.5If* star") and use this information, together with the star's measured brightness, to estimate its temperature and total light emission. Comparison with models of stellar evolution gives an estimate of the star's mass between 100 and 180 solar masses.
Because of the cluster’s size, W49 is one of the most important sites within our galaxy for studying the formation and evolution of very massive stars – and with W49nr1, the astronomers have now identified the cluster’s key object. With this and future observations, they have hopes of settling one of astronomy’s weightiest open questions: the birth of our galaxy’s most massive stars.
Shiwei Wu (first author)
Max Planck Institute for Astronomy
Phone: (+49|0) 6221 –528 203
Klaus Jäger (public information officer)
Max Planck Institute for Astronomy
Phone: (+49|0) 6221 – 528 379
The results described here have been published as S.-W. Wu et al., “The Discovery of a Very Massive Star in W49” by the journal Astronomy & Astrophysics.
The co-authors are Shi-Wei Wu (Max Planck Institute for Astronomy [MPIA]), Arjan Bik (MPIA and Stockholm University), Thomas Henning (MPIA), Anna Pasquali (ZAH, Heidelberg University), Wolfgang Brandner (MPIA) and Andrea Stolte (Argelander Institute for Astronomy, Bonn).
The study is based on a medium-resolution K-band spectrum taken with the ISAAC instrument mounted at ESO's Very Large Telescope in Chile. Infrared images were obtained with SOFI at the New Technology Telescope at ESO's La Silla Observatory (J- and H-Band), and with LUCI mounted at the Large Binocular Telescope in Arizona (K-Band).
Dr. Klaus Jäger | Max-Planck-Institut
The eerie 'Phantom of the Opera' neutron star that rotates faster than a helicopter's blades
- It formed 10,000 years ago after exploding as a supernova
- Scientists believe it is the first to wobble as it spews out charged particles
A neutron star with an eerie similarity to the Phantom of the Opera’s mask has been recorded by a NASA space telescope.
The Vela pulsar is about 12 miles in diameter and is spinning so fast that it rotates more than 11 times every second – faster than a helicopter's blades.
It is about 1,000 light years from Earth and is the remains of a massive star that blew up an estimated 10,000 years ago as a supernova, then collapsed in on itself.
Neutron star Vela surrounded by the hot gas clouds formed into the shape of the Phantom of the Opera's mask.
As it spins round, it spews out charged particles at about 70 per cent of the speed of light, and Vela also appears to have a slight wobble.
But it also has clouds of hot gas surrounding it which have mysteriously formed themselves into the appearance of a mask.
Vela and its mask have been observed and recorded by NASA’s flagship X-ray space telescope, Chandra.
It is the wobble, rather than the appearance of the mask (which is put down to coincidence and the angle at which Chandra sees it), that has fascinated astronomers.
Further observations are needed to confirm the wobble indicated by the shape and motion of the jet of charged particles.
However, if it is confirmed by astrophysicists, it will be the first wobble, or precession, attributed to a pulsar's jet.
The Phantom Of The Opera's mask
A possible cause of the wobble is that the neutron star has become slightly distorted and is no longer a perfect sphere.
Such a distortion could be caused by the combined action of the fast rotation and sudden increases in the pulsar's rotational speed.
Rapid increases in spinning speed, or ‘glitches’, might be prompted by the interaction of the pulsar’s superfluid core with its crust.
A paper describing the results will be published in The Astrophysical Journal on January 10.
Vela's suspected wobble is calculated to last about 120 days, somewhat shorter than the 26,000-year wobble that Earth has as it spins.
An earlier image of Vela showing the 'mask' and the jet of charged particles streaming into space
Metric conversion involves the use of conversion factors and solving the problem in a stepwise manner.
How we use conversion factors to solve unit-conversion problems:
** given unit x (find unit / given unit) = find unit
** given unit x (related unit / given unit) x (find unit / related unit) = find unit
Arrange each conversion factor so that the starting (given) unit cancels; you can do this by placing the starting unit on the bottom of the conversion factor.
You may find one-step unit conversions, or two-step and longer chain unit conversions.
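As a sketch of the cancellation rule above, we can carry the unit along with the value and refuse any factor that does not have the current unit on the bottom. The helper name and the unit-as-string representation are my own illustration, not from the text:

```python
def apply_factor(value, unit, factor_value, top_unit, bottom_unit):
    """Apply one conversion factor (top_unit / bottom_unit).

    The current unit must sit on the bottom of the factor, so that it
    cancels and the top unit becomes the new unit.
    """
    if unit != bottom_unit:
        raise ValueError(f"need a factor with '{unit}' on the bottom")
    return value * factor_value, top_unit

# 1 m = 1.094 yd, so the factor (1 m / 1.094 yd) has the value 1/1.094
value, unit = apply_factor(1.76, "yd", 1 / 1.094, "m", "yd")
print(value, unit)  # about 1.609 m
```

Trying to apply a factor with the wrong bottom unit raises an error, which is exactly the mistake the "given unit on the bottom" rule is designed to prevent.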
Systematic approach:
Sort information from the problem: identify the given quantity and unit, the quantity and unit you want, and any relationships implied in the problem.
Design a strategy to solve the problem: devise a conceptual plan.
Apply the steps in the conceptual plan: check that the units cancel properly, multiply the terms across the top, and divide by each bottom term.
This should become clearer with the following example:
Convert 1.76 yd to centimeters:
Sort info: Given = 1.76 yd (yards) & Find: length in cm
Strategy: conceptual plan yd ==> m ==> cm; equivalences: 1 m = 1.094 yd, 1 cm = 10^-2 m
Conversion factors: 1 m / 1.094 yd or 1.094 yd / 1 m, and 1 cm / 10^-2 m or 10^-2 m / 1 cm
Solution: given unit x (related unit / given unit) x (find unit / related unit)
We use the conversion factors that have the given and related units on the bottom, so that those units cancel.
1.76 yd x (1 m / 1.094 yd) x (1 cm / 10^-2 m) = 160.8775 cm = 161 cm
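The worked example can be sketched in code. The `convert` helper is my own name for the factor-chain idea; the equivalences are the ones given in the text (1 m = 1.094 yd, 1 cm = 10^-2 m):

```python
def convert(value, factors):
    """Multiply a starting value by a chain of conversion factors.

    Each factor is written (top, bottom) so that the unit being
    cancelled sits on the bottom, as in the strategy above.
    """
    for top, bottom in factors:
        value = value * top / bottom
    return value

# yd -> m -> cm: (1 m / 1.094 yd) then (1 cm / 10^-2 m)
length_cm = convert(1.76, [(1, 1.094), (1, 1e-2)])
print(round(length_cm))  # 161, matching the worked example
```

Adding more steps to the chain (say, cm to mm) only means appending another (top, bottom) pair.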
If you are given the ionization constant of a weak acid and its original concentration, the pH of the solution can be calculated. The approach used is the inverse of that followed in the tutorial on pH to pKa conversion: here Ka is known and the pH must be calculated.
First, let's see an example to understand this:
Nicotinic acid, HC6H4O2N (Ka = 1.4 x 10^-5), is another name for niacin, an important member of the B vitamin group. Determine the pH of a solution prepared by dissolving 0.10 mol of nicotinic acid in water to form one liter of solution.
Strategy: To start with, set up an equilibrium table. To accomplish this, note that:
the original concentrations of HNic, H+ and Nic- are 0.10 M, 0.00 M and 0.00 M, respectively, ignoring the H+ ions from the ionization of water;
the changes in concentration are related by the coefficients of the balanced equation, all of which are 1.
Letting delta[H+] = x, it follows that delta[Nic-] = x and delta[HNic] = -x. This information should enable you to express the equilibrium concentrations of all species in terms of x. The rest is algebra: substitute into the Ka expression and solve for x = [H+].
Solution: setting up the table

        HNic (aq)  <------->  H+ (aq)  +  Nic- (aq)
I       0.10                  0.00        0.00
C       -x                    +x          +x
E       0.10 - x              x           x
Substituting into the expression for Ka:
Ka = x^2 / (0.10 - x) = 1.4 x 10^-5
This is a quadratic equation. It could be rearranged into the form ax^2 + bx + c = 0 and solved for x using the quadratic formula. Such a procedure is time-consuming and, in this case, unnecessary. Nicotinic acid is a weak acid, only slightly ionized in water. The equilibrium concentration of HNic, 0.10 - x, is probably only very slightly less than its original concentration, 0.10 M. So let's make the approximation 0.10 - x = 0.10. This simplifies the equation written above:
x^2 / 0.10 = 1.4 x 10^-5
x^2 = 1.4 x 10^-6
Taking the square root,
x = 1.2 x 10^-3 M = [H+]
pH = -log[H+]
= -log(1.2 x 10^-3) = 2.92
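The calculation can be sketched in code. The weak-acid approximation is the one used above; the full quadratic root is included only as a check, and the variable names are mine:

```python
import math

Ka = 1.4e-5   # ionization constant of nicotinic acid (from the text)
c0 = 0.10     # initial concentration of HNic, mol/L

# Approximation used above: 0.10 - x is taken as 0.10, so x^2 / c0 = Ka
x_approx = math.sqrt(Ka * c0)
pH_approx = -math.log10(x_approx)

# Exact check: x^2 + Ka*x - Ka*c0 = 0, keep the positive root
x_exact = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * c0)) / 2
pH_exact = -math.log10(x_exact)

print(f"[H+] ~ {x_approx:.2e} M, pH ~ {pH_approx:.2f}")
print(f"exact pH = {pH_exact:.2f}")
```

Both routes give pH 2.93 to two decimals (the text quotes 2.92 because it rounds [H+] to 1.2 x 10^-3 M before taking the logarithm), which confirms that the approximation is safe for this weak an acid.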