2 NO2(g) ⇌ 2 NO(g) + O2(g)
The equilibrium constant Kp is 158. Analysis shows that the partial pressure of O2 is 0.14 atm at equilibrium. Calculate the partial pressures of NO and NO2 in the mixture, in atm.
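A minimal worked sketch of one common way to solve this, assuming all of the O2 at equilibrium comes from the decomposition of NO2 (so the 2:1 stoichiometry gives P_NO = 2 × P_O2); the R code below is illustrative only:

```r
# Assumption: all O2 comes from 2 NO2 -> 2 NO + O2, so P_NO = 2 * P_O2.
Kp   <- 158       # equilibrium constant (partial pressures in atm)
P_O2 <- 0.14      # measured partial pressure of O2, atm

P_NO  <- 2 * P_O2                      # 0.28 atm by stoichiometry
# Kp = (P_NO^2 * P_O2) / P_NO2^2  ->  solve for P_NO2
P_NO2 <- sqrt(P_NO^2 * P_O2 / Kp)      # about 0.0083 atm

c(P_NO = P_NO, P_NO2 = P_NO2)
```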
Daily Summary: We continued our expedition, collecting data for seismic Line 15, over the Diebold Knoll. During the day, we had the opportunity to deploy an Expendable Bathythermograph (XBT), used to measure the ocean temperature and calculate a sound velocity curve to calibrate bathymetry. Halfway through our day, we were delighted to learn we were 7 hours ahead of schedule, giving us extra seismic data acquisition time. We adjusted our plan accordingly and extended Lines 21 and 22, going over Brendan’s Seep, to acquire data that would give us greater geological context for the region.
Much of what marine geologists study is invisible to the naked eye. While standing on a ship on the surface of the ocean, we can see nothing but a vast expanse of water extending in all directions. From this perspective, the seafloor is hidden by several miles of dense seawater that absorbs light and hides the deepest and darkest portions of the ocean.
But when we cross the threshold separating the deck of the ship from the computer lab we are quickly transported down through the ocean. We suddenly see vast abyssal hills and steeply sloping seamounts displayed on monitors spread throughout the ship. This journey takes us down to a place that has never been exposed to sunlight and where sonars are needed to “see” through the ocean. Using acoustic techniques onboard the R/V Thompson in 2014, we discovered a new seafloor vent, or seep, that releases fresh, warm and fast-flowing water into the overlying ocean. While seafloor seeps are common along continental margins, this seep’s origin is a mystery. My objective during this expedition on the R/V Revelle is to collect acoustic data over the vent to shed light on the geologic history of the site and to provide a window into geochemical processes occurring deep within the Cascadia Subduction Zone.
Extending from southern British Columbia to Northern California, the Cascadia Subduction Zone is formed by the collision of the Juan de Fuca oceanic plate and the North American continental plate. Because of its density, the Juan de Fuca plate descends beneath the continent and carries with it oceanic sediments. When these sediments heat up under the increasing pressure of the overlying rock, vast amounts of water are released through mineral dehydration reactions. This water migrates along faults and is often released at the seafloor, supporting dense microbial and macrofaunal communities and contributing gases to the overlying ocean. During transport along faults, this water counteracts the load of the overlying rock and sediments, effectively lubricating the faults and decreasing the likelihood of an earthquake by preventing the accumulation of stress. By studying the conditions under which fluids are produced within the seafloor, I hope to improve understanding of the circumstances that lead to earthquake initiation within the Cascadia Subduction Zone.
This seafloor seep, named Pythia’s Oasis after the ancient Oracles of Apollo, is emitting fluids from the seafloor at a rate rarely observed in this type of tectonic setting. During this expedition, I plan to make several passes over the seep using the multichannel seismic system on the R/V Revelle. With this system, I’ll be able to image beneath the seafloor to determine whether there are any faults feeding this seep from below. While the survey data may not point towards a definitive source, the results will help to improve existing hypotheses for the origin of the fluids.
These hypotheses will then be tested during an expedition in 2019 where we’ll use the remotely operated vehicle Jason to dive to the seafloor and collect fluid samples emanating from it. Although that cruise is still over two years away, I can already feel the excitement building as we continue to collect more data and make plans for our return to the site. While progress is slow, each new observation brings us one step closer to the origin of Pythia’s Oasis.
— Brendan Philip is a graduate student at the University of Washington
A major goal of this Early Career Scientist training expedition is giving the participants (a total of 19 “principal investigators” ranging from graduate students to postdocs to faculty) the chance to experience every part of what goes into a full seismic research expedition. One of the most important aspects is being the Chief Scientist during your shift, which means that you are in charge.
At the beginning of our watch group’s first shift in the ship’s computer lab (read: control center), from noon to 8 p.m. on Tuesday, our six-person group had to decide on our roles for the day. Graduate student Emily Schottenfels had already been involved in a meeting on the bridge with the Second Mate, so it felt right for her to be the Chief Scientist of our shift; I became the Co-Chief.
One might expect the Chief and Co-Chief Scientists to be figureheads of the watch, especially because we are just students. However, that is definitely not the case. Almost immediately, Emily and I had to run (safely) up to the bridge to deliver an update with some changed latitude and longitude coordinates and an updated operation plan.
After this, Emily and I patrolled around the computer lab to make sure all the operations were running smoothly, especially once the gear was deployed and data was being acquired. We also had a discussion with Masako Tominaga and Anne Trehu, two of the project lead scientists, and Lee Ellet, a Scripps shipboard geophysical engineer, about the parameters for collecting seismic data. We needed to decide on the source spacing, the sampling rate, and how long we would record for. Ultimately, we chose to fire the acoustic source every 25 meters along the track, recording for 8 seconds after each shot and sampling the signal every 0.5 milliseconds. Luckily there were no major issues that arose during our watch, just a few fishing boats that the ship needed to avoid.
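As a rough illustration of what those acquisition parameters imply (a back-of-the-envelope sketch using only the numbers quoted above, not values from the cruise plan):

```r
shot_spacing_m <- 25        # fire the source every 25 meters along the track
record_len_s   <- 8         # record for 8 seconds after each shot
sample_int_s   <- 0.5e-3    # sample the signal every 0.5 milliseconds

samples_per_trace <- record_len_s / sample_int_s   # 16,000 samples per channel per shot
shots_per_km      <- 1000 / shot_spacing_m         # 40 shots per kilometer of line

c(samples_per_trace = samples_per_trace, shots_per_km = shots_per_km)
```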
Unfortunately, the smooth seas did not last for long (figuratively speaking). As we awoke on the second day of the cruise, we were slowly brought up to speed about some issues that occurred overnight. Luckily, we sailed with an exceptional crew and geophysical engineering team that worked quickly and effectively to solve the problem. It turns out that an important aspect of being the Chief Scientist is working with a variety of people across different groups to keep the expedition running smoothly. Coordinating with Lee, his engineers, and the crew in the engine room was very fast paced but rewarding. By the end of our second day, everything was back to normal and the data was coming in great.
By the end of both shifts/days, I can sincerely say that being the Chief Scientist (or Co-Chief) is badass. The opportunity to be out here at sea and to be in charge of “science-ing” helps to illuminate the future of someone interested in marine geology and geophysics research. You have to make a lot of split-second decisions, communicate a lot of messages to the science crew, and make sure every operation/scientist is running smoothly. I am so happy to be here, to be learning, and to have had the chance to be in charge.
— Collin Brandl is a graduate student at the University of New Mexico
Two years ago, before I became a graduate student, I would have never thought I would be at sea on a seismic research expedition. Since I began my graduate program at Texas A&M University, I have had the opportunity to be involved in great research projects: exploring the geology of the Coast Range Ophiolite sequence in California and Canada using geophysical methods, conducting nanoscale rock magnetism experiments at the University of Minnesota and at Cambridge (UK), and learning more about physical and chemical properties of rocks and how they interact with fluids at the University of Leicester (UK). Now I’m aboard the R/V Revelle (my first time at sea), collecting multichannel seismic data along the geologically complex Cascadia Margin.
For my master’s thesis research, my main research project focuses on trying to improve understanding of how to measure climate signals recorded in seafloor sediments, by using a variety of geological and geophysical data sets from the eastern equatorial Pacific Ocean. This region of the Pacific is located a few degrees from the equator, on smooth unaltered seafloor, in an area where “high sediment accumulation rates” (0.02 millimeters per year, a sprinkle of dust) can be found. This region has about 400 meters of unaltered sediment cover before we reach the hard rock underlying the sediments, and that cover records a wealth of information about past climate conditions going back 28 million years. The information extracted from these sediments can be used by paleoceanographers (scientists who study the history of the oceans) to reconstruct past climate and ocean conditions; this information, in turn, is used to help us understand current and expected climate conditions.
To accomplish my research goal, I use marine seismic data, which is like an ultrasound of the earth that helps me image kilometers under the ocean floor, and sediment samples collected from the ocean floor. I then analyze my sediment samples to understand one small area of the ocean and use seismic images to correlate my data to large areas of the ocean. With this data, I look for patterns in my rock samples and seismic images to understand how sediment movement might affect the climate signals recorded in the measured sediments.
However, for my research project, I did not get the opportunity to collect my own seismic data. So, I applied to be a part of the Seismic ECS Training Cruise in order to gain marine field experience. During this expedition, I hoped to learn from senior geoscientists how to efficiently design my own project, including the challenges involved with the acquisition, processing, and interpretation of marine seismic data.
The expedition has exceeded my expectations. From day one we were given important roles that are crucial to the success of our expedition, including being chief scientist, co-chief scientist (roles with a lot of responsibility including leading the science team, communicating with the seismic data acquisition team and ship’s captain, and making sure the proposed work is reasonable and attainable with the available resources), and seismic processor (in charge of assessing the quality of the data acquired, and processing the seismic data collected into images of the subsurface). We quickly learned that communication and creative problem solving are key to successfully acquiring seismic data and helping the ship’s crew navigate the Revelle to areas of interest.
The ECS Seismic Training Cruise has taught me how to work as part of a team in a fast-paced environment, and how to communicate successfully with teammates to accomplish research objectives. After my training experience, I feel more confident in my ability to be a chief scientist in future marine expeditions and to continue contributing to the scientific community. My unique research experiences would not be possible without the support of the National Science Foundation and a dedicated mentor group who has been leading this training effort.
— Fani Ortiz is a graduate student at Texas A&M University
Off the Pacific Northwest coastline lies the Cascadia subduction zone, a region of the Earth where one section of the Earth’s crust (the Juan de Fuca Plate) moves underneath another part of the crust (the North America Plate). Subduction zones produce the largest earthquakes ever recorded, and because most of these subduction zones are underwater (Cascadia included), these earthquakes can also generate deadly tsunamis. Despite this hazard, there are portions of the Cascadia subduction zone offshore Oregon where we do not have information about the structure of the two plates where they connect. Such information can add to our understanding of the tsunami hazard for the Pacific Northwest. Part of the mission of our research expedition on the R/V Revelle is to image the structure of the two plates at the trench where they connect and help fill in some of the present data gaps.
Tsunamis caused by large subduction zone earthquakes are some of the most destructive natural hazards on Earth. In addition to the sudden change in the seafloor caused by the earthquake, if an earthquake occurs close to the trench, it can trigger an underwater landslide in the sediment layers at the trench above the shallow section of the fault. Such landslides cause more water to be displaced and add to the height of the tsunami waves that hit the coastline closest to the location of the landslide. This occurred during the devastating magnitude 9.1 Tohoku-oki earthquake that happened in 2011 offshore Japan. The earthquake ruptured into the shallow portion of the subduction zone by the trench and caused an underwater landslide on the northern edge of the earthquake’s rupture area.
Like Japan, the Cascadia subduction zone offshore the Pacific Northwest has experienced large subduction earthquakes in the past and will experience them in the future. The historical earthquake record for this region shows evidence of large tsunamis and underwater landslides, so it is important to develop accurate tsunami early warning systems for the coastal populations in this region.
Local tsunami warning systems in development for Cascadia are using land-based seismic networks (seismometers and accelerometers) and geodetic networks (GPS) to rapidly estimate the characteristics of earthquakes such as starting location, magnitude, and the amount of motion on the fault. These earthquake characteristics can then be put into a tsunami model which calculates the expected wave height and the time it will take for the wave to reach the coastline. This information is the basis for the early warnings issued to the coastal communities of the Pacific Northwest. However, such warning systems will have a difficult time estimating any additional effects caused by underwater landslides because the landslides would occur close to the trench and far from the early warning network’s instruments.
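For a sense of the time scales a warning system is working with, here is a back-of-the-envelope sketch using the standard shallow-water wave-speed approximation; the depth and distance values are assumptions chosen only for illustration:

```r
# Shallow-water approximation: tsunami speed ~ sqrt(g * depth).
g           <- 9.81       # m s^-2
depth_m     <- 2500       # assumed average water depth along the path
distance_km <- 100        # assumed distance from the rupture to the coastline

speed_ms   <- sqrt(g * depth_m)                  # ~157 m/s (~560 km/h)
travel_min <- (distance_km * 1000) / speed_ms / 60

c(speed_km_per_h = speed_ms * 3.6, travel_time_min = travel_min)
```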
One way to improve tsunami early warnings is to identify regions along the trench that are more likely to collapse in a landslide. If an earthquake occurs close to one of these regions, the warning of the incoming tsunami could be modified to say that there is the possibility of an underwater landslide and that the people closest to the possible slide area should evacuate to even higher ground.
Right now, there is not a lot of data for the near-trench region offshore Oregon that can be used to identify potential future landslide locations. Our expedition on the R/V Revelle will help fill in some of the gaps. This mission targets a past underwater landslide offshore Oregon. By imaging the structure of the slide, we can learn about the conditions that caused the landslide at that location. After we image this region, we will travel northward and image sections of the deformation zone near the trench that have not experienced a large underwater landslide. We hope to use the information from the landslide to assess the possibility of landslides in these locations. These data will greatly improve tsunami early warnings when the next “really big one” occurs.
— Jessie Saunders is a Ph.D. student at Scripps Institution of Oceanography
A large destructive earthquake in the Cascadia Subduction Zone is long overdue. The earthquake, often referred to as a megathrust, is expected to produce a tsunami that will affect the western coast of the U.S. and Canada. With this in mind, the region’s coastal population is training to get ready for this event and scientists are working to better understand what it may be like.
In Newport, Oregon, just before boarding the R/V Roger Revelle for this scientific expedition, we had to participate in a mandatory tsunami drill. We walked together to the highest elevation in the port area, which is a designated point for people to gather in case an earthquake and tsunami happen. The necessary supplies of food and water are stored at this muster point, allowing people affected by a tsunami to wait while the water level drops. Posters with directions and estimated walking times to the muster point are posted along the way to the location. These posters serve as mitigation for the expected hazard; they will be crucial for the people affected when the expected earthquake hits the northwest coast of the U.S., and may save many lives.
The megathrust in the Cascadia Subduction Zone is associated with the area where the young, oceanic Juan de Fuca plate is sliding under the continental part of the North American plate. In order to assess the hazards, mitigate the damage, and make improved predictions about this potentially devastating event, it’s necessary for scientists to have a reliable and detailed geologic model of the Earth’s lithosphere – the uppermost rigid layer of the planet. This subsurface model consists of information about the thickness and lithological composition of the rocks of both plates, usually organized in layers, such as sedimentary strata, various crustal units, and the upper mantle.
In order to develop a geologic subsurface model, a variety of remote sensing geophysical techniques are used, such as seismic sounding and potential fields (gravity and magnetic) surveying. All of these techniques share the same underlying principle – they use the measurements of some physical phenomena at the Earth’s surface to derive geological information about the rocks in the subsurface. Different methods focus on different physical phenomena – acceleration due to gravitational force is measured for gravity prospecting, the strength and direction of the Earth’s magnetic field are recorded in magnetic surveying, while the ground motion (onshore) or water pressure (offshore) is measured for seismic sounding.
Seismic reflection sounding is the most widely used geophysical method scientists use to map the variations in thickness and lithology of the subsurface layers. This method uses artificially created sound waves that spread through the rocks in the subsurface and are reflected and/or refracted at each contact between the different rock layers. The resultant complex pattern of reflected and refracted seismic waves is recorded at some distance from the source of the seismic wave.
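A minimal sketch of that reflection principle, with assumed (hypothetical) values for the wave speed and travel time:

```r
# Depth to a reflector estimated from the two-way travel time of its echo.
v_p       <- 2000    # assumed P-wave speed in the sediments, m/s
two_way_s <- 1.2     # assumed time between the source pulse and the recorded reflection, s

depth_m <- v_p * two_way_s / 2   # 1200 m below the receiver datum
depth_m
```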
Interpreting these various seismic arrivals can be challenging and they don’t always provide an interpretation scientists are confident in. In this case, the other geophysical measurements, which record information about the same subsurface, may be integrated with seismic sounding data to constrain the resultant geological model. One of these complementary geophysical techniques is gravity surveying, which is sensitive to variations in the densities between various rock layers in the subsurface. Another technique is magnetic field analysis, which maps the changes in rocks’ magnetic mineral content, described by a physical property named magnetic susceptibility.
A focus of my scientific research is combining records from various geophysical methods together in order to derive a reliable subsurface model that honors all the recorded datasets. During this expedition, the data for all three geophysical methods – seismic sounding, gravity and magnetic readings – are being recorded simultaneously. The integrated analysis of all three datasets together results in a more detailed geological model: the seismic sounding provides the depths to various subsurface layers, while the potential fields allow constraining the lithologies of different geological units (rock layers) based on the derived physical properties, such as densities and magnetic susceptibilities. As the derived geological model should agree with all three datasets, the overall confidence in the result will increase greatly.
The geological model of the Cascadia Subduction Zone that’s created using the data collected during this expedition will improve our understanding of the structures and overall structural architecture of the lithospheric plates involved in the anticipated megathrust. Using this integrated geophysical approach that combines three different methods will result in much more robust and confident subsurface models. These, in turn, will lead to more confident and reliable earthquake prediction and hazard assessments that will help keep citizens of this region safe.
— Irina Filina is an assistant professor at the University of Nebraska-Lincoln
Daily Summary: R/V Revelle completed its transit to the southern portion of the survey region at 06:45 UTC and we deployed the seismic systems. The PSOs spotted dolphins, so we waited for the dolphins to swim a safe distance away before starting the seismic systems. We began Line 14 at 09:14 UTC. We deployed the magnetometer and began magnetometer line M7 at 10:27 UTC. We completed magnetometer Line M7 at 14:52 UTC and Line 14 at 14:55 UTC. We noticed some issues with the hydrophone on the sound source towards the end of Line 14. This was fixed before starting Line 15 at 17:14 UTC. We began magnetometer line M8 at 20:08 UTC after a slight turn at waypoint 24.
The classical kinds of design patterns, stuff like the flyweight or singleton, as I understand them, are patterns for class-based object-oriented programming. R has different semantics, so it favors a different set of design patterns.
The tidyverse is basically built around a collection of design patterns. There’s the fluent function interface where the data to be manipulated is the first function argument, and nearly all functions return a similar kind of object as their input. The functions can be chained together using
%>% pipelines, which is another design pattern. (An alternative pattern to piping would require intermediate variables or nested function calls.) The concept of tidy data and the practice of tidying R objects into data frames are also design patterns. The idea of list columns and nested data frames would be an example of a very recent design pattern for R too.
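A minimal dplyr-flavored sketch of that fluent, data-first pattern, using the built-in mtcars data set purely for illustration:

```r
library(dplyr)

# Fluent interface: the data frame is the first argument of every verb,
# each verb returns a data frame, and %>% chains the steps together.
mtcars %>%
  filter(cyl > 4) %>%
  group_by(gear) %>%
  summarise(mean_mpg = mean(mpg), n = n())

# The alternative patterns mentioned above: nesting (or intermediate variables).
summarise(group_by(filter(mtcars, cyl > 4), gear), mean_mpg = mean(mpg), n = n())
```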
Why is the water brown in Raritan Bay?
A nontoxic bloom of algae was confirmed in Raritan Bay Monday, and the water may look brown as a result.
A state aircraft's remote sensor detected elevated levels of chlorophyll in Raritan Bay, which is typically associated with algal blooms, according to the Department of Environmental Protection's njbeaches.org website.
Chlorophyll, a green pigment present in algae and plants, absorbs light energy and plays a vital role in photosynthesis, according to the National Oceanic and Atmospheric Administration.
The DEP Bureau of Marine Water Monitoring collected five samples in the Raritan Bay area and found that the bloom was dominated by two nontoxic species of algae, according to the DEP website. Algae cell levels ranged from 2,200 to 3,300 cells per milliliter.
One of the species will cause "a brown water discoloration," but it is not linked to the "brown tide," the website says.
The Bureau of Marine Water Monitoring will continue to monitor coastal waters with surveillance flights, according to the DEP website.
Todd B. Bates: 732-643-4237; email@example.com
A horseshoe orbit is a type of co-orbital motion of a small orbiting body relative to a larger orbiting body (such as Earth). The orbital period of the smaller body is very nearly the same as for the larger body, and its path appears to have a horseshoe shape as viewed from the larger object in a rotating reference frame.
The loop is not closed but will drift forward or backward slightly each time, so that the point it circles will appear to move smoothly along the larger body's orbit over a long period of time. When the object approaches the larger body closely at either end of its trajectory, its apparent direction changes. Over an entire cycle the center traces the outline of a horseshoe, with the larger body between the 'horns'.
Asteroids in horseshoe orbits with respect to Earth include 54509 YORP, 2002 AA29, 2010 SO16, 2015 SO2 and possibly 2001 GO2. A broader definition includes 3753 Cruithne, which can be said to be in a compound and/or transition orbit, or (85770) 1998 UP1 and 2003 YN107. By 2016, 12 horseshoe librators of Earth had been discovered.
Explanation of horseshoe orbital cycle
The following explanation relates to an asteroid which is in such an orbit around the Sun, and is also affected by the Earth.
The asteroid is in almost the same solar orbit as Earth. Both take approximately one year to orbit the Sun.
It is also necessary to grasp two rules of orbit dynamics:
- A body closer to the Sun completes an orbit more quickly than a body further away.
- If a body accelerates along its orbit, its orbit moves outwards from the Sun. If it decelerates, the orbital radius decreases.
The horseshoe orbit arises because the gravitational attraction of the Earth changes the shape of the elliptical orbit of the asteroid. The shape changes are very small but result in significant changes relative to the Earth.
The horseshoe becomes apparent only when mapping the movement of the asteroid relative to both the Sun and the Earth. The asteroid always orbits the Sun in the same direction. However, it goes through a cycle of catching up with the Earth and falling behind, so that its movement relative to both the Sun and the Earth traces a shape like the outline of a horseshoe.
Stages of the orbit
Starting at point A, on the inner ring between L5 and Earth, the satellite is orbiting faster than the Earth and is on its way toward passing between the Earth and the Sun. But Earth's gravity exerts an outward accelerating force, pulling the satellite into a higher orbit which (per Kepler's third law) decreases its angular speed.
When the satellite gets to point B, it is traveling at the same speed as Earth. Earth's gravity is still accelerating the satellite along the orbital path, and continues to pull the satellite into a higher orbit. Eventually, at Point C, the satellite reaches a high and slow enough orbit such that it starts to lag behind Earth. It then spends the next century or more appearing to drift 'backwards' around the orbit when viewed relative to the Earth. Its orbit around the Sun still takes only slightly more than one Earth year. Given enough time, the Earth and the satellite will be on opposite sides of the Sun.
Eventually the satellite comes around to point D where Earth's gravity is now reducing the satellite's orbital velocity. This causes it to fall into a lower orbit, which actually increases the angular speed of the satellite around the Sun. This continues until point E where the satellite's orbit is now lower and faster than Earth's orbit, and it begins moving out ahead of Earth. Over the next few centuries it completes its journey back to point A.
On the longer term, asteroids can transfer between horseshoe orbits and quasi-satellite orbits. Quasi-satellites aren't gravitationally bound to their planet, but appear to circle it in a retrograde direction as they circle the Sun with the same orbital period as the planet. By 2016, orbital calculations showed that four of Earth's horseshoe librators and all five of its then known quasi-satellites repeatedly transfer between horseshoe and quasi-satellite orbits.
A somewhat different, but equivalent, view of the situation may be noted by considering conservation of energy. It is a theorem of classical mechanics that a body moving in a time-independent potential field will have its total energy, E = T + V, conserved, where E is total energy, T is kinetic energy (always non-negative) and V is potential energy, which is negative. It is apparent then, since V = -GM/R near a gravitating body of mass M and orbital radius R, that seen from a stationary frame, V will be increasing for the region behind M, and decreasing for the region in front of it. However, orbits with lower total energy have shorter periods, and so a body moving slowly on the forward side of a planet will lose energy, fall into a shorter-period orbit, and thus slowly move away, or be "repelled" from it. Bodies moving slowly on the trailing side of the planet will gain energy, rise to a higher, slower, orbit, and thereby fall behind, similarly repelled. Thus a small body can move back and forth between a leading and a trailing position, never approaching too close to the planet that dominates the region.
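A small numerical sketch of that energy argument, assuming circular two-body orbits around the Sun so that the specific orbital energy is E = -GM/(2a) and Kepler's third law gives the period; the 1% energy change is a hypothetical value used only to show the direction of the effect:

```r
GM <- 1.327e20        # Sun's gravitational parameter, m^3 s^-2
AU <- 1.496e11        # astronomical unit, m
yr <- 3.156e7         # seconds per year

period_yr <- function(a) 2 * pi * sqrt(a^3 / GM) / yr   # Kepler's third law

a0 <- 1.0 * AU              # start in an Earth-like orbit
E0 <- -GM / (2 * a0)        # specific orbital energy (negative for bound orbits)

E1 <- 1.01 * E0             # losing energy makes E more negative (here by 1%)
a1 <- -GM / (2 * E1)        # ... so the semi-major axis shrinks ...

c(a_new_AU = a1 / AU,
  period_before_yr = period_yr(a0),
  period_after_yr  = period_yr(a1))   # ... and the period shortens, so the body pulls ahead
```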
- See also trojan (astronomy).
Figure 1 above shows shorter orbits around the Lagrangian points L4 and L5 (e.g. the lines close to the blue triangles). These are called tadpole orbits and can be explained in a similar way, except that the asteroid's distance from the Earth does not oscillate as far as the L3 point on the other side of the Sun. As it moves closer to or farther from the Earth, the changing pull of Earth's gravitational field causes it to accelerate or decelerate, causing a change in its orbit known as libration.
An example of a body in a tadpole orbit is Polydeuces, a small moon of Saturn which librates around the trailing L5 point relative to a larger moon, Dione. In relation to the orbit of Earth, the 300-meter-diameter asteroid 2010 TK7 is in a tadpole orbit around the leading L4 point.
- Christou, Apostolos A.; Asher, David J. (2011). "A long-lived horseshoe companion to the Earth". Bibcode:2011MNRAS.414.2965C. doi:10.1111/j.1365-2966.2011.18595.x.
- de la Fuente Marcos, C.; de la Fuente Marcos, R. (April 2016). "A trio of horseshoes: past, present and future dynamical evolution of Earth co-orbital asteroids 2015 XX169, 2015 YA and 2015 YQ1". Astrophysics and Space Science. 361: 121–133. Bibcode:2016Ap&SS.361..121D. doi:10.1007/s10509-016-2711-6.
- de la Fuente Marcos, C.; de la Fuente Marcos, R. (November 11, 2016). "Asteroid (469219) 2016 HO3, the smallest and closest Earth quasi-satellite". Monthly Notices of the Royal Astronomical Society. 462 (4): 3441–3456. Bibcode:2016MNRAS.462.3441D. doi:10.1093/mnras/stw1972.
- 11.5.1 Spatial Data Types
- 11.5.2 The OpenGIS Geometry Model
- 11.5.3 Supported Spatial Data Formats
- 11.5.4 Geometry Well-Formedness and Validity
- 11.5.5 Spatial Reference System Support
- 11.5.6 Creating Spatial Columns
- 11.5.7 Populating Spatial Columns
- 11.5.8 Fetching Spatial Data
- 11.5.9 Optimizing Spatial Analysis
- 11.5.10 Creating Spatial Indexes
- 11.5.11 Using Spatial Indexes
The Open Geospatial Consortium (OGC) is an international consortium of more than 250 companies, agencies, and universities participating in the development of publicly available conceptual solutions that can be useful with all kinds of applications that manage spatial data.
The Open Geospatial Consortium publishes the OpenGIS® Implementation Standard for Geographic information - Simple feature access - Part 2: SQL option, a document that proposes several conceptual ways for extending an SQL RDBMS to support spatial data. This specification is available from the OGC website at http://www.opengeospatial.org/standards/sfs.
Following the OGC specification, MySQL implements spatial extensions as a subset of the SQL with Geometry Types environment. This term refers to an SQL environment that has been extended with a set of geometry types. A geometry-valued SQL column is implemented as a column that has a geometry type. The specification describes a set of SQL geometry types, as well as functions on those types to create and analyze geometry values.
MySQL spatial extensions enable the generation, storage, and analysis of geographic features:
- Data types for representing spatial values
- Functions for manipulating spatial values
- Spatial indexing for improved access times to spatial columns
The spatial data types and functions are available for MyISAM, InnoDB, NDB, and ARCHIVE tables. For indexing spatial columns, MyISAM and InnoDB support both SPATIAL and non-SPATIAL indexes. The other storage engines support non-SPATIAL indexes, as described in Section 13.1.14, “CREATE INDEX Syntax”.
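For concreteness, a minimal sketch of creating, populating, and fetching a geometry-valued column from R through DBI; the connection details and the RMariaDB driver are assumptions (any MySQL-compatible DBI driver would do), while the SQL itself uses standard MySQL spatial syntax:

```r
library(DBI)

# Hypothetical connection; substitute real credentials and database name.
con <- dbConnect(RMariaDB::MariaDB(), dbname = "gis_demo",
                 user = "demo", password = "demo")

# A geometry-valued column (POINT) with a SPATIAL index; the column must be NOT NULL.
dbExecute(con, "
  CREATE TABLE geom_demo (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    pt POINT NOT NULL,
    SPATIAL INDEX (pt)
  ) ENGINE=InnoDB")

# Populate from well-known text, then fetch back as text.
dbExecute(con, "INSERT INTO geom_demo (pt) VALUES (ST_GeomFromText('POINT(1 1)'))")
dbGetQuery(con, "SELECT id, ST_AsText(pt) AS pt FROM geom_demo")

dbDisconnect(con)
```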
A geographic feature is anything in the world that has a location. A feature can be:
- An entity. For example, a mountain, a pond, a city.
- A space. For example, town district, the tropics.
- A definable location. For example, a crossroad, as a particular place where two streets intersect.
Some documents use the term geospatial feature to refer to geographic features.
Geometry is another word that denotes a geographic feature. Originally the word geometry meant measurement of the earth. Another meaning comes from cartography, referring to the geometric features that cartographers use to map the world.
The discussion here considers these terms synonymous: geographic feature, geospatial feature, feature, or geometry. The term most commonly used is geometry, defined as a point or an aggregate of points representing anything in the world that has a location.
The following material covers these topics:
- The spatial data types implemented in MySQL model
- The basis of the spatial extensions in the OpenGIS geometry model
- Data formats for representing spatial data
- How to use spatial data in MySQL
- Use of indexing for spatial data
- MySQL differences from the OpenGIS specification
For information about functions that operate on spatial data, see Section 12.15, “Spatial Analysis Functions”.
These standards are important for the MySQL implementation of spatial operations:
- SQL/MM Part 3: Spatial.
- The Open Geospatial Consortium publishes the OpenGIS® Implementation Standard for Geographic information, a document that proposes several conceptual ways for extending an SQL RDBMS to support spatial data. See in particular Simple Feature Access - Part 1: Common Architecture, and Simple Feature Access - Part 2: SQL Option. The Open Geospatial Consortium (OGC) maintains a website at http://www.opengeospatial.org/. The specification is available there at http://www.opengeospatial.org/standards/sfs. It contains additional information relevant to the material here.
- The grammar for spatial reference system (SRS) definitions is based on the grammar defined in OpenGIS Implementation Specification: Coordinate Transformation Services, Revision 1.00, OGC 01-009, January 12, 2001, Section 7.2. This specification is available at http://www.opengeospatial.org/standards/ct. For differences from that specification in SRS definitions as implemented in MySQL, see Section 13.1.17, “CREATE SPATIAL REFERENCE SYSTEM Syntax”.
If you have questions or concerns about the use of the spatial extensions to MySQL, you can discuss them in the GIS forum: http://forums.mysql.com/list.php?23.
Surface Wave Inspection of Porous Ceramics and Rocks
The most interesting feature of acoustic wave propagation in fluid-saturated porous media is the appearance of a second compressional wave, the so-called slow compressional wave, in addition to the conventional P (or fast) wave and the shear wave [1,2]. The slow compressional wave is essentially the motion of the fluid along the tortuous paths in the porous frame. This motion is strongly affected by viscous coupling between the fluid and the solid. Therefore, both the velocity and the attenuation of the slow wave greatly depend on the dynamic permeability of the porous frame. It was not until 1980 that Plona first experimentally observed the slow compressional wave in water-saturated porous ceramics at ultrasonic frequencies. Only three years later, Feng and Johnson predicted the existence of a new slow surface mode on a fluid/fluid-saturated solid interface in addition to the well-known leaky-Rayleigh and true Stoneley modes [4,5]. The slow surface mode is basically the interface wave equivalent of the slow bulk mode, but there is a catch: the surface pores of the solid have to be closed so that this new mode can be observed. Otherwise, a surface vibration can propagate along the fluid/fluid-saturated porous solid interface without really moving the fluid since it can flow through the open pores without producing any significant reaction force. All previous efforts directed at the experimental observation of this new surface mode failed because of the extreme difficulty of closing the surface pores without closing all the pores close to the surface (e.g., by painting). On the other hand, it has been recently shown that surface tension itself could be sufficient to produce essentially closed-pore boundary conditions at the interface between a porous solid saturated with a wetting fluid, such as water or alcohol, and a non-wetting superstrate fluid, like air.
Keywords: Surface Mode, Shear Velocity, Porous Glass, Berea Sandstone, Viscous Loss
The value of ecological biodiversity for sustaining ecosystem stability and function is well established, but a recent study points to a novel way to fine-tune our ability to measure it at larger scales. The study, published in Nature Ecology and Evolution, found that using an imaging tool to evaluate biodiversity is easier than traditional methods premised on painstaking field work.
Lead author Anna Schweiger, a postdoctoral associate in the College of Biological Sciences, and a team of fellow researchers used spectra of light reflected from plants to evaluate biodiversity and predict ecosystem function.
“We have known for decades that the chemical composition of plants can be estimated from reflectance spectra,” said Schweiger. “What we found is that the spectral dissimilarity, or the overall differences in spectral reflectance, among plant species increases with their functional dissimilarity and evolutionary divergence time.”
For the study, the team first measured the light reflectance of plants in 35 plots at Cedar Creek Ecosystem Science Reserve, a field station north of Minneapolis well known for long-term ecological experiments, using a field spectrometer. The spectrometer allows the researchers to evaluate how much light plants reflect at the leaf level across a range of wavelengths. From the leaf-level data, the team found that the spectral diversity of a plant community predicted aboveground productivity, a critical ecosystem function, as well as or better than measures of species' functional differences, their phylogenetic distances on the tree of life, or the number of species in a plant community.
Seeing that the ecosystem effects of plant diversity could be evaluated effectively using spectrometry, the team also wanted to know whether they could scale the approach up. Using an imaging spectrometer mounted three meters above the ground over the same 35 plots at Cedar Creek and running a scan, they found that their spectral diversity metric performed similarly when calculated from spectral images.
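As a rough illustration of the kind of metric involved (a hypothetical sketch, not the metric used in the study), spectral diversity can be summarized as the average pairwise dissimilarity among species' reflectance spectra:

```r
# Made-up reflectance spectra, one row per species; values are not real measurements.
set.seed(1)
wavelengths <- seq(400, 2400, by = 10)     # nm, a typical field-spectrometer range
n_species   <- 5

spectra <- t(replicate(n_species,
                       plogis(cumsum(rnorm(length(wavelengths), sd = 0.05)))))

# One simple spectral diversity summary: mean pairwise distance between spectra.
spectral_diversity <- mean(dist(spectra))
spectral_diversity
```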
“The findings indicate that spectral diversity provides a robust, integrative means of assessing multiple dimensions of biodiversity relevant to ecosystem function,” says co-author John Gamon, a faculty member at the University of Nebraska-Lincoln.
This research is part of a larger project led by senior author Jeannine Cavender-Bares, a professor in the College of Biological Sciences' Department of Ecology, Evolution and Behavior. With funding from a Dimensions of Biodiversity grant from NSF and NASA, the team aims to more completely understand how to predict ecosystem processes when comparing optical diversity to genetic, phylogenetic and functional diversity. The team's next step is to run an imaging spectrometry scan from a drone. The ability to scan from the sky offers new potential for researchers to further understand the ecosystem benefits of biodiversity, especially in difficult-to-reach places.
“The rapid changes in the Earth's biodiversity that are underway require novel means of continuous and global detection,” says Cavender-Bares. “This study demonstrates that we can detect plant biodiversity using spectral measurements from plant leaves or from the sky, which opens up a whole new range of possibilities.”
Sharks & Turtles Are Going to Start Dissolving Because Of Climate Change, Say Scientists
First the ice caps started disappearing, now it's actual sea life....
Think that greenhouse gasses are only messing up the Arctic? Think again.
It's bad news for all marine life this week, as a new study has shown that the oceans are changing too fast, and that it's not a good sign for its wildlife.
Carbon dioxide levels are rising at a rate way too quick for species, including sharks and other top predators, to adapt and evolve, with whole habitats also in line to be affected.
Carbon dioxide in the water is currently around 400 parts per million, compared with only 270 parts per million 100 years ago.
Sharks and turtles start building brittle skeletons and end with their skeletons dissolving altogether...
This change is literally acidifying the oceans, which will break down whole ecosystems and prevent wildlife from taking up ions dissolved in the water that are needed for building strong skeletons.
The effect of this will start with animals such as sharks and turtles building brittle skeletons and end with their skeletons dissolving altogether.
Of course, oceans change all the time and species adapt to their surroundings over hundreds of years.
Problem is, that was before we started burning tonnes of fossil fuels; these changes are expected within only a few decades. That's less than the average lifespan of one turtle.
So can we do any thing about it? Well, yes and no...
We can't change the acidity of the ocean any time soon, but we can give some extra time to the animals it affects through a major cut back on over-fishing.
Less fishing means strong individuals, which in turn form the next generation of more resilient animals.
Let's get the nets out of the water and give those fishes a chance!
Researchers working with an international team of geneticists and anthropologists have produced new genetic evidence that's likely to hearten proponents of the land bridge theory. The study is one of the most comprehensive analyses so far among efforts to use genetic data to shed light on the topic.
The researchers examined genetic variation at 678 key locations or markers in the DNA of present-day members of 29 Native American populations across North, Central and South America. They also analyzed data from two Siberian groups. The analysis shows:
- genetic diversity, as well as genetic similarity to the Siberian groups, decreases the farther a native population is from the Bering Strait -- adding to existing archaeological and genetic evidence that the ancestors of native North and South Americans came by the northwest route.
- a unique genetic variant is widespread in Native Americans across both American continents -- suggesting that the first humans in the Americas came in a single migration or multiple waves from a single source, not in waves of migrations from different sources. The variant, which is not part of a gene and has no biological function, has not been found in genetic studies of people elsewhere in the world except eastern Siberia.
The researchers say the variant likely occurred shortly prior to migration to the Americas, or immediately afterwards.
The Genetic Markers for North American Populations originate in East Asia
There is reasonably clear genetic evidence that the most likely candidate for the source of Native American populations is somewhere in east Asia, the research concludes. If there were a large number of migrations, and most of the source groups didn't have the variant, then you would not see the widespread presence of the mutation in the Americas.
Studies with Genetic Markers
Researchers studied the same set of 678 genetic markers used in the new study in 50 populations around the world, to learn which populations are genetically similar and what migration patterns might explain the similarities. For North and South America, the current research breaks new ground by looking at a large number of native populations using a large number of markers.
The pattern the research uncovered -- that as the founding populations moved south from the Bering Strait, genetic diversity declined -- is what one would expect when migration is relatively recent. There has not been time yet for mutations that typically occur over longer periods to diversify the gene pool.
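The declining-diversity pattern can be pictured with a simple regression; the numbers below are made up purely to illustrate the shape of the relationship, not data from the study:

```r
# Hypothetical data: genetic diversity (heterozygosity) versus distance from the Bering Strait.
set.seed(42)
distance_km    <- seq(1000, 15000, length.out = 25)
heterozygosity <- 0.78 - 0.000012 * distance_km + rnorm(25, sd = 0.01)

fit <- lm(heterozygosity ~ distance_km)
coef(fit)                                   # a negative slope mirrors the reported decline
plot(distance_km, heterozygosity)
abline(fit)
```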
The study also found that:
- The study's findings hint at supporting evidence for scientists who believe early inhabitants followed the coasts to spread south into South America, rather than moving in waves across the interior.
- Assuming a migration route along the coast provides a slightly better fit with the patterns that are seen in genetic diversity.
- Populations in the Andes and Central America showed genetic similarities.
- Populations from western South America showed more genetic variation than populations from eastern South America.
- Among closely related populations, the ones more similar linguistically were also more similar genetically.
New Micro-Submersible Instrument Explores Buried Antarctic Lake
Researcher Alberto Behar of NASA's Jet Propulsion Laboratory describes a recent international Antarctic expedition to investigate subglacial Lake Whillans - located more than 2,000 feet below sea level - and the unique instrument that he brought with him. Called the Micro-Submersible Lake Exploration Device, the instrument is a small robotic sub about the size and shape of a baseball bat. Designed to expand the range of extreme environments accessible by humans while minimally disturbing the environment, the sub was equipped with hydrological chemical sensors and a high-resolution imaging system. The instruments and cameras characterize the geology, hydrology, and chemical characteristics of the sub's surroundings. The sub transmits real-time high-resolution imagery, salinity, temperature, and depth measurements to the surface via fiber-optic cables.
In this book we have tried to compile the biological and autecological data on planktonic foraminifera as known today. During this attempt it became evident that much is still unknown, especially in detail, and that there are quite contradictory results. Stable isotope techniques are widely applied in diverse fields of oceanography and paleoceanography, including biostratigraphy, in the interpretation of the evolution of microfossils, to decipher water mass temperatures, climatic influences, sedimentary depositions, and changes in habitat. However, several sources of error may occur when interpreting isotopic data derived from planktonic foraminifera, since not only abiotic factors such as temperature and salinity leave signals in the calcitic shell, but to a large extent also biological factors such as trophic activity, symbiosis, and reproductive cycles, to name only a few. Berger and Gardner (1975) (summarized in Berger, 1979b) published a paper “On the Determination of Pleistocene Temperatures from Planktonic Foraminifera”. In this paper they raised for the first time the question “how reliable are such estimates” for paleotemperatures based on the species' isotopic composition. If one examines the flood of paleoceanographic papers that have appeared during the last decade (all applying the isotopic composition of planktonic foraminiferal shells), they clearly represent a major advance.
Keywords: Isotopic Composition, Carbon Isotope, Benthic Foraminifera, Planktonic Foraminifera, Planktonic Foraminifer
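For context, a commonly cited form of the oxygen-isotope paleotemperature equation (often attributed to Shackleton, 1974) is sketched below; it is shown only to illustrate how shell and seawater δ18O map to temperature, not as the calibration used by any particular study discussed here:

```r
# T (deg C) from the oxygen-isotope composition of foraminiferal calcite and seawater (per mil).
paleo_temp <- function(d18O_calcite, d18O_water) {
  d <- d18O_calcite - d18O_water
  16.9 - 4.38 * d + 0.10 * d^2
}

paleo_temp(d18O_calcite = 1.5, d18O_water = 0)   # ~10.6 deg C for these example values
```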
… some of them will stick? This month’s issue of The Scientist offers a look at “some of the most current origin-of-life science, from new research on how RNA may have been assembled from precursor molecules to what we now know about our last universal common ancestor.” That ancestor, we are assured, is “not [Darwin’s] ‘primordial form,’ but rather a sophisticated cellular organism that, if alive today, would probably be difficult to distinguish from other extant bacteria or archaea.”
So one and a half centuries of research have not yet turned up a single entity that, like Thomas Huxley’s hoped-for Bathybius haeckelii, is on its way to becoming life? Hardly for lack of trying! Here is a whirlwind tour of the waterfront:
Arsenic world: In December 2010, NASA researchers reported that they had taught microbes to metabolize arsenic instead of phosphorus, demonstrating that life could arise from unexpected chemicals, perhaps elsewhere in the galaxy. (Some researchers have suggested chlorine life instead.) Most researchers were unconvinced. In 2011, Science published eight articles questioning NASA’s study in a single edition and arsenic-based life featured as one of The Scientist‘s top ten scandals of 2011.
Clay world: Some theorists argue that clay (or clay hydrogels) can select for molecules that can self-organize. The Scriptural associations of clay were a gift to science writers; the details did not impress researchers. Information theorist Hubert Yockey pointed out that clay crystal structures just repeat the same information indefinitely. By contrast, life’s minimum information density is somewhere around the level of DNA. OOL theorist Leslie Orgel (1927-2007) said it wouldn’t work for RNA either: If clay had the structural irregularities needed to enable RNA to emerge, it probably wouldn’t reproduce it accurately.
Lagoons on the early Earth: Stanley Miller (1930-2007) of the textbooks’ Miller-Urey experiment believed that the conditions on early Earth’s beaches could foster pre-life reactions because chemicals would concentrate more there than out at sea. But Robert Shapiro, proponent of the “metabolism first” model, complained that “a large lagoon would have to be evaporated to the size of a puddle, without loss of its contents, to achieve that concentration. This process is not thought to occur today.” He added, with an apparent touch of impatience,
The drying lagoon claim is not unique. In a similar spirit, other prebiotic chemists have invoked freezing glacial lakes, mountainside freshwater ponds, flowing streams, beaches, dry deserts, volcanic aquifers and the entire global ocean (frozen or warm as needed) to support their requirement that the “nucleotide soup” necessary for RNA synthesis would somehow have come into existence on the early Earth.
Metabolism first: Robert Shapiro (1935-2011) questioned Leslie Orgel’s RNA world because of “the extreme improbability” that such a long, complex molecule as RNA would spontaneously arise and initiate life. His doubts earned him the title, Dr. No. Aspiring to somehow become Dr. Yes, he offered a model that life began via small molecules with a simple metabolism and progressed from there, hence “metabolism first.” He hoped, among other things, to vindicate the idea that “There’s nothing freaky about life; it’s a normal consequence of the laws of the universe.”
Researcher Eric Smith, a physicist at the Santa Fe Institute, offers a more recent model of early metabolism: “It seems likely that the earliest cells were rickety assemblies whose parts were constantly malfunctioning and breaking down. … How can any metabolism be sustained with such shaky support? The key is concurrent and constant redundancy.” Or “millions of years of a poor replicator”, as a summary article in Science put it, leaving unclear how hits could have mattered in those days but misses didn’t.
“RNA first” proponent Leslie Orgel responded irritably to Shapiro’s metabolism first model, “solutions … dependent on ‘if pigs could fly’ hypothetical chemistry are unlikely to help.” Near the end of his life, Orgel had perhaps forgotten that he himself once co-authored a paper with Francis Crick speculating that extraterrestrials might have started life.
Numerous less-publicized models wallop through the science press, on the hope, perhaps, of a lucky strike: for example, not-obviously-promising substances such as hydrogen, ammonia, hydrogen cyanide, formaldehyde, or peptides possibly kick-started life. Maybe metals acted as catalysts. Or mica sheets. Or cold temperatures or ice helped life get started, despite the fact that cold reduces chemical reaction speed. Or a high-salt environment. Or hot springs. No surprise that science writer Colin Barras observes that origin of life is “a highly polarised field of research.” Most fields have only two poles, not twenty.
One model is noteworthy for the fact that it is the closest that origin of life theorists have come so far to an ancient pagan creation myth. Yet it was published in a popular science magazine (New Scientist):
Once upon a time, 3 billion years ago, there lived a single organism called LUCA. It was enormous: a mega-organism like none seen since, it filled the planet’s oceans before splitting into three and giving birth to the ancestors of all living things on Earth today. … LUCA was the result of early life’s fight to survive, attempts at which turned the ocean into a global genetic swap shop for hundreds of millions of years. Cells struggling to survive on their own exchanged useful parts with each other without competition — effectively creating a global mega-organism.
How did it all work? “It was more important to keep the living system in place than to compete with other systems.”
Really? More important for whom? Who then existed for life to be more important to? The mega-organism itself? But that would imply selfhood and purpose. If selfhood and purpose were present at the origin of life, why is design a problem and not a solution?
Editor’s Note: Here are links to the whole “Science Fictions Origin of Life” series.
Photo source: TheGiantVermin/Flickr.
- See more at: http://www.evolutionnews.org/2014/03/maybe_if_we_thr083121.html
See the attached file.
1. The reversible exothermic water-gas shift reaction (seen in the attached file) takes place in an isobaric, adiabatic PFR of volume V = 1 m3. The total feed, Ftot = 100 mol/sec, contains 40 mol% H2O and 20 mol% of inert I. The total pressure is 3 bar and the inlet temperature is Tin = 500 K. In parallel, the methanation reaction (seen in the attached file) occurs. What are the CO conversion and the temperature at the exit of the reactor?
2. The liquid-phase reaction (in the attached file) is carried out in a jacketed (cooled) CSTR with volume V = 1 m3. Initially the reactor is at a temperature of 350 K and contains only species A at a concentration of 3 mol/liter (fully filled). The rate constants at 350 K are k1(350 K) = 10^3 liter mol^-1 s^-1 and k-1(350 K) = 10^5 liter mol^-1 s^-1, with activation barriers of E1 = 100 kJ/mol and E-1 = 150 kJ/mol. The heat of reaction is - kJ/mol. The heat capacities of A and B are 25 J/mol/K and of C 45 J/mol/K. A cooling medium flows around the reactor at a constant temperature of K, and UA = 100 J/s/K. The flow rate of the cooling medium is high enough that its temperature can be assumed constant. At time t = 0, a constant flow of A and B at a temperature of 350 K is added to the reactor and the same volumetric flow rate, V = 0.01 m3/s, of products is removed. The moment B is added to the reactor, the reaction begins. The concentrations of A and B in the incoming flow are 1 mol/liter and 2 mol/liter respectively. Plot the temperature and the concentration of each species as a function of time; what are the temperature and the concentrations of A, B, and C after 1000 s?
Please see the attachment for full solution.
For the water-gas shift and methanation reactions, the Arrhenius equation applies.
The Arrhenius equation is given by
k = A e^(-Ea / (R T))     (1)
where k is the rate constant, T is the temperature (in kelvin), A is the pre-exponential factor, Ea is the activation energy, and R is the universal gas constant.
So, for the water-gas shift reaction,
r1 = k1 [CO][H2O] - k-1 [H2][CO2]   (concentrations are denoted by square brackets)
= 100 e^(-20,000 / (8.314 x 500)) - 100 e^(-60,000 / (8.314 x 500))
[applying equation (1), with R = 8.314 J mol^-1 K^-1]
Similarly, for the methanation reaction,
r2 = k2 [CO][H2] - k-2 [CH4][H2O]
= 100 e^(-30,000 / (8.314 x 500)) - 100 e ...
This solution provides step-by-step guidelines for solving the two numerical problems on chemical reactors.
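As a rough check on the arithmetic above, the Arrhenius evaluation at the 500 K inlet temperature can be scripted. This is only an illustrative sketch: the pre-exponential factor of 100 and the activation energies of 20,000 and 60,000 J/mol are taken at face value from the excerpt, since the full parameter set lives in the attachment that is not reproduced here.

```python
import math

R = 8.314  # universal gas constant, J / (mol K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R * T)) -- equation (1) above."""
    return A * math.exp(-Ea / (R * T))

T_in = 500.0  # K, inlet temperature from the problem statement

# Values as they appear in the excerpt; treat them as placeholders.
k_forward = arrhenius(A=100.0, Ea=20_000.0, T=T_in)
k_reverse = arrhenius(A=100.0, Ea=60_000.0, T=T_in)

print(f"k_forward(500 K) = {k_forward:.4g}")
print(f"k_reverse(500 K) = {k_reverse:.4g}")
```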
Storm in an ash cloud: Electrifying shots of Mexican volcanic eruption show lightning bolts striking inside its ash plume
- Lightning flash spotted in the ash cloud of the Colima Volcano which is 301 miles west of Mexico City
- Strikes caused by high levels of electric charge building up as ash particles rub together
- Bolts can heat surrounding air to 3,000°C and melt ash in the cloud into glassy spheres, scientists discovered
There are few things more beautiful - or terrifying - than the menacing flash of lightning bolts within a volcanic ash cloud.
The latest picture, captured by an amateur photographer as the Colima volcano in Mexico spews out a plume of ash and lava, reveals the raw power of a volcanic eruption.
Hernando Rivera Cervantes took the pictures as local authorities warned those living around the volcano, which is also known as the Fire volcano, to prepare for a possible evacuation.
Lightning strikes inside the enormous ash cloud thrown into the air by the Colima volcano (above) in Mexico
The 12,400 feet (3,800 metre) high volcano, which first erupted in 1576, is one of the most active volcanoes in Mexico.
Mr Cervantes spent eight hours watching the volcano as it threw ash up to 1.8 miles (three kilometres) into the atmosphere before managing to capture the rare picture.
He said: 'I waited for eight hours, knowing something was going to happen. When the lightning arrived it was magical. An unforgettable experience.'
These bolts of lightning are thought to occur due to the build-up of electric charge in the ash cloud as the particles rub against each other.
Just like in a thunder cloud, this charge eventually builds up until it seeks a path through to the ground as lightning.
Mr Cervantes also managed to capture this night-time image above of lava spilling from the erupting volcano
VOLCANOES ARE COOLING EARTH
Small volcanic eruptions over the past 20 years have been protecting the Earth from global warming, according to a recent study.
Scientists have confirmed that droplets of sulphur-rich aerosols spewed into the upper atmosphere by volcanoes have been reflecting sunlight away from the Earth.
Until recently it was thought that only particularly large eruptions had any noticeable effect on the climate.
However, a study published earlier this year has confirmed results from the end of last year showing that these small eruptions can have a cumulative impact on global temperature.
This could have helped decrease global temperatures by between 0.05°C and 0.12°C over the past 15 years.
Earlier this month scientists discovered that it is this lightning that is responsible for making the glass spheres that can appear in volcanic rocks.
They found that the intense heat generated by the lightning bolts causes the ash to melt into spherules of smooth glass.
A bolt of volcanic lightning can heat the surrounding air to more than 3,000°C, according to the researchers from the University of Alabama in Tuscaloosa.
Dr Kimberly Genareau, a volcanologist at the University of Alabama, said their findings suggested that the role lightning plays in volcanic eruptions may be under reported.
Writing in the journal Geology, she said: 'We refer to this new morphological classification of ash grains as lightning-induced volcanic spherules (LIVS).
'Observation of LIVS in tephras (volcanic rocks) will provide evidence of lightning occurrence during eruptions where lightning was not directly observed or documented.'
The Colima volcano is actually one of three volcanic domes that make up the Colima Volcanic complex in the Mexican state of Colima, 301 miles west of Mexico City.
It has erupted around 40 times since its first recorded activity in 1576. There are now around 300,000 people living in its shadow.
The volcano has a history of large and explosive eruptions, which has meant it is also one of the most studied volcanoes in Mexico.
Although lightning in volcanic ash clouds has been observed for a long time, scientists have only recently begun to understand what causes it.
The Colima volcano is regarded as one of the most dangerous in Mexico due to its large explosive eruptions
Mr Cervantes (shown above) spent eight hours waiting to capture his stunning images of the Colima volcano
Getting close enough to an erupting volcano is dangerous, so scientists at the Ludwig Maximilian University of Munich recreated volcanic lightning in a lab.
In 2013 they suspended volcanic ash gathered from sites around the world in a chamber filled with argon gas, forcing the concoction through a narrow tube.
They found that as ash particles rub against each other while moving from the compressed environment beneath the Earth's surface into the atmosphere during an eruption, a static charge builds up.
When the ash reaches the atmosphere, the energy is discharged as lightning bolts.
Cold snaps like the ones that hit the eastern United States in the past winters are not a consequence of climate change. Scientists at ETH Zurich and the California Institute of Technology have shown that global warming actually tends to reduce temperature variability.
Repeated cold snaps led to temperatures far below freezing across the eastern United States in the past two winters. Parts of the Niagara Falls froze, and ice floes formed on Lake Michigan. Such low temperatures had become rare in recent years. Pictures of icy, snow-covered cities made their way around the world, raising the question of whether climate change could be responsible for these extreme events.
It has been argued that the amplified warming of the Arctic relative to lower latitudes in recent decades has weakened the polar jet stream, a strong wind current several kilometres high in the atmosphere driven by temperature differences between the warm tropics and cold polar regions.
One hypothesis is that a weaker jet stream may become more wavy, leading to greater fluctuations in temperature in mid-latitudes. Through a wavier jet stream, it has been suggested, amplified Arctic warming may have contributed to the cold snaps that hit the eastern United States.
Temperature range will decrease
Scientists at ETH Zurich and at the California Institute of Technology, led by Tapio Schneider, professor of climate dynamics at ETH Zurich, have come to a different conclusion. They used climate simulations and theoretical arguments to show that in most places, the range of temperature fluctuations will decrease as the climate warms.
Cold snaps will thus become rarer not only because the climate is warming; their frequency will also be reduced because fluctuations about the warming mean temperature become smaller, the scientists wrote in the latest issue of the Journal of Climate.
The study's point of departure was that higher latitudes are indeed warming faster than lower ones, which means that the temperature difference between the equator and the poles is decreasing. Imagine for a moment that this temperature difference no longer exists.
This would mean that air masses would have the same temperature, regardless of whether they flow from the south or north. In theory there would no longer be any temperature variability. Such an extreme scenario will not occur, but it illustrates the scientists' theoretical approach.
Extremes will become rarer
Using a highly simplified climate model, they examined various climate scenarios to verify their theory. It showed that the temperature variability in mid-latitudes indeed decreases as the temperature difference between the poles and the equator diminishes. Climate model simulations by the Intergovernmental Panel on Climate Change (IPCC) showed similar results: as the climate warms, temperature differences in mid-latitudes decrease, and so does temperature variability, especially in winter.
Temperature extremes will therefore become rarer as this variability is reduced. But this does not mean there will be no temperature extremes in the future. "Despite lower temperature variance, there will be more extreme warm periods in the future because the Earth is warming," says Schneider. The researchers limited their work to temperature trends. Other extreme events, such as storms with heavy rain or snowfall, can still become more common as the climate warms, as other studies have shown.
North-south shift makes the difference
And the jet stream? Schneider shrugs off the idea: "The waviness of the jet stream that makes our day-to-day weather does not change much." Changes in the north-south difference in temperatures play a greater role in modifying temperature variability.
Schneider wants to explore the implications of these results in further studies. In particular, he wants to pursue the question of whether heatwaves in Europe may become more common because the frequency of blocking highs may increase. And he wants to find out why these high-pressure systems become stationary and how they change with the climate.
Tapio Schneider | ETH Zurich
Robust Technology of Harmonic Analysis
The spectral analysis of random processes, that is, the measurement of spectral functions describing the frequency distribution of the energy characteristics of a process, is one of the most important parts of statistical measurement. At first, spectral analysis was used to investigate the characteristics of deterministic processes, in contrast to the analysis of distribution functions and correlation analysis, which were formed directly as types of statistical measurement. Spectral analysis became an independent branch only after the role of the theory of measurement of the probability characteristics of random processes, as well as the need for apparatus analysis of random processes, had increased.
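To make the idea of a spectral function concrete, here is a small illustrative sketch (not from the original text) that estimates the spectral density of a simulated random process with NumPy; the 50 Hz tone, the noise level, and the 1 kHz sampling rate are arbitrary assumptions.

```python
import numpy as np

fs = 1000                              # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)          # 2-second record of the process
x = np.sin(2 * np.pi * 50 * t) + np.random.normal(scale=1.0, size=t.size)

# One-sided periodogram: an elementary estimate of the spectral density,
# i.e. how the energy of the process is distributed over frequency.
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

print(f"strongest component near {freqs[psd.argmax()]:.0f} Hz")
```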
Keywords: Spectral Analysis, Spectral Density, Harmonic Analysis, Fourier Series, Random Process
Earthquakes are caused mostly by rupture of geological faults, but also by volcanic activity, landslides, and nuclear experiments. An earthquake's point of initial rupture is called its hypocenter. The term epicenter refers to the point at ground level directly above this. At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacing the ground. When a large earthquake epicenter is located offshore, the seabed sometimes suffers sufficient displacement to cause a tsunami. The shaking in earthquakes can also trigger landslides.
There are three main types of fault that may cause an earthquake: normal, reverse (thrust) and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended, such as at a divergent boundary. Reverse faults occur in areas where the crust is being shortened, such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
Measurement of the face temperature during thermal drilling of rocks
The mean face temperature, measured by thermistors during thermal drilling of rocks by dc arc burners with air stabilization of the arc, is between 345 and 563 deg K, depending on the type of rock, the conditions of burner operation, the distance from the face, and the duration of the heat flux's action on the rock; calculation of the initial temperature by Eq. (2) somewhat increases this range.
The temperature increases from the initial value to a certain value which is characteristic for each rock and the burner operation scheme; it then remains constant for the whole drilling period. With an increase in the burner's power, this constant value is reached more rapidly.
The mean face temperature decreases with increasing power of the arc burner, i.e., with increased power of the heat flux.
This method of calculation enables us to determine to a first approximation the coefficient of relative heat transfer from the specimen to the medium under normal conditions.
Keywords: Heat Flux, Drilling, Rock Fracture, Face Temperature, Thermal Conductivity Equation
Sopow and colleagues report in the February issue of Ecology Letters that a chemical stimulus from a galling insect changes the morphology and physiology of its host to benefit these specialized plant feeders.
Galls are atypical plant growths that provide nourishment and shelter for gall-inducing insects. Previous studies could not determine whether insect galls are induced by mechanical or chemical stimuli because gall formation occurred at the sites where the insects were active.
In this study, feeding by the spruce gall adelgid, which measures about one millimetre in length, caused large galls to form up to 800 millimetres away. The effects of chemical stimuli were therefore unambiguously separated from any mechanical influence due to feeding or egg-laying. Initiation and growth of galls were inversely correlated with the distance of the insect from buds (potential gall sites), strongly suggesting that galls were induced by a chemical stimulus transported to buds via vascular tissue, and that its efficacy was dose-dependent.
Emily Davis | EurekAlert!
Hurricane Irma is so strong it's showing up on seismometers — equipment designed to measure earthquakes.
"What we’re seeing in the seismogram are low-pitched hums that gradually become stronger as the hurricane gets closer to the seismometer on the island of Guadeloupe," said Stephen Hicks, a seismologist at the University of Southampton in the United Kingdom.
The noise is likely caused by high winds — which cause tiny motions in the ground — and also by trees swaying in the wind, which also transfers energy into the ground, he said. The seismometer is located close to the ocean, so waves crashing along the coastline reverberate around the island, also generating seismic energy, Hicks added.
The hurricane isn't creating earthquakes, he said. "Earthquakes occur tens of miles deep inside Earth's crust, a long way from the influence of weather events, and there is no evidence to suggest that hurricanes and storms directly cause earthquakes," Hicks said.
It's not unusual for large storms to register on seismometers for hours to days as they pass over.
"We saw this for Hurricane Harvey on seismometers located close to Houston," he said. In the U.K., wintertime storms can sometimes make it hard for seismologists to see small earthquakes because the noise level generated by storms is so high.
As Irma approaches seismic sensors, "we will see a dramatic increase in the amplitude of the seismic recordings," Hicks said. | <urn:uuid:5e82852e-1695-4f50-bf28-17fa137c6646> | 3.75 | 305 | News Article | Science & Tech. | 42.933054 | 95,580,365 |
This statement is false. Think about it, and it makes your head hurt. If it’s true, it’s false. If it’s false, it’s true. In 1931, Austrian logician Kurt Gödel shocked the worlds of mathematics and philosophy by establishing that such statements are far more than a quirky turn of language: he showed that there are mathematical truths which simply can’t be proven. In the decades since, thinkers have taken the brilliant Gödel’s result in a variety of directions–linking it to limits of human comprehension and the quest to recreate human thinking on a computer. This program explores Gödel’s discovery and examines the wider implications of his revolutionary finding. Participants include mathematician Gregory Chaitin, author Rebecca Goldstein, astrophysicist Mario Livio and artificial intelligence expert Marvin Minsky.
Dr. Mario Livio is an astrophysicist, a best-selling author, and a popular speaker. He is a Fellow of the American Association for the Advancement of Science. He has published more than 400 scientific papers on topics ranging from Dark Energy and cosmology to black holes and extrasolar planets.
Gregory Chaitin is a mathematician and computer scientist who began making lasting contributions to his field while still a student at the Bronx High School of Science. His approach to mathematics views the field as much an art form as a science, inextricably linked with philosophical questions.
Pleochroism (from Greek πλέων, pléōn, "more" and χρῶμα, khrôma, "color") is an optical phenomenon in which a substance has different colors when observed at different angles, especially with polarized light.
Anisotropic crystals will have optical properties that vary with the direction of light. The direction of the electric field determines the polarization of light, and crystals will respond in different ways if this angle is changed. These kinds of crystals have one or two optical axes. If absorption of light varies with the angle relative to the optical axis in a crystal then pleochroism results.
Anisotropic crystals have double refraction of light where light of different polarizations is bent different amounts by the crystal, and therefore follows different paths through the crystal. The components of a divided light beam follow different paths within the mineral and travel at different speeds. When the mineral is observed at some angle, light following some combination of paths and polarizations will be present, each of which will have had light of different colors absorbed. At another angle, the light passing through the crystal will be composed of another combination of light paths and polarizations, each with their own color. The light passing through the mineral will therefore have different colors when it is viewed from different angles, making the stone seem to be of different colors.
Tetragonal, trigonal, and hexagonal minerals can only show two colors and are called dichroic. Orthorhombic, monoclinic, and triclinic crystals can show three and are trichroic. For example, hypersthene, which has two optical axes, can have a red, yellow, or blue appearance when oriented in three different ways in three-dimensional space. Isometric minerals cannot exhibit pleochroism. Tourmaline is notable for exhibiting strong pleochroism. Gems are sometimes cut and set either to display pleochroism or to hide it, depending on the colors and their attractiveness.
The pleochroic colors are at their maximum when light is polarized parallel with a crystallographic axis. The axes are designated X, Y, and Z. These axes can be determined from the appearance of a crystal in a conoscopic interference pattern. Where there are two optical axes, the acute bisection of the axes gives Z for positive minerals and X for negative minerals, and the obtuse bisection gives the alternative axis (X or Z). Perpendicular to these is the Y axis. The color is measured with the polarization parallel to each direction. An absorption formula records the amount of absorption parallel to each axis in the form X < Y < Z, with the leftmost having the least absorption and the rightmost the most.
In mineralogy and gemology
Pleochroism is an extremely useful tool in mineralogy and gemology for mineral and gem identification, since the number of colors visible from different angles can identify the possible crystalline structure of a gemstone or mineral and therefore help to classify it. Minerals that are otherwise very similar often have very different pleochroic color schemes. In such cases, a thin section of the mineral is used and examined under polarized transmitted light with a petrographic microscope. Another device using this property to identify minerals is the dichroscope.
List of pleochroic minerals
Purple and violet
- Amethyst (very low): different shades of purple
- Andalusite (strong): green-brown / dark red / purple
- Beryl (medium): purple / colorless
- Corundum (high): purple / orange
- Hypersthene (strong): purple / orange
- Spodumene (Kunzite) (strong): purple / clear / pink
- Tourmaline (strong): pale purple / purple
- Putnisite: pale purple / bluish grey
- Aquamarine (medium): clear / light blue, or light blue / dark blue
- Alexandrite (strong): dark red-purple / orange / green
- Apatite (strong): blue-yellow / blue-colorless
- Benitoite (strong): colorless / dark blue
- Cordierite (aka Iolite) (orthorhombic; very strong): pale yellow / violet / pale blue
- Corundum (strong): dark violet-blue / light blue-green
- Tanzanite See Zoisite
- Topaz (very low): colorless / pale blue / pink
- Tourmaline (strong): dark blue / light blue
- Zoisite (strong): blue / red-purple / yellow-green
- Zircon (strong): blue / clear / gray
- Alexandrite (strong): dark red / orange / green
- Andalusite (strong): brown-green / dark red
- Corundum (strong): green / yellow-green
- Emerald (strong): green / blue-green
- Peridot (low): yellow-green / green / colorless
- Titanite (medium): brown-green / blue-green
- Tourmaline (strong): blue-green / brown-green / yellow-green
- Zircon (low): greenish brown / green
- Citrine (very weak): different shades of pale yellow
- Chrysoberyl (very weak): red-yellow / yellow-green / green
- Corundum (weak): yellow / pale yellow
- Danburite (weak): very pale yellow / pale yellow
- Orthoclase (weak): different shades of pale yellow
- Phenacite (medium): colorless / yellow-orange
- Spodumene (medium): different shades of pale yellow
- Topaz (medium): tan / yellow / yellow-orange
- Tourmaline (medium): pale yellow / dark yellow
- Zircon (weak): tan / yellow
- Hornblende (strong): light green / dark green / yellow / brown
Brown and orange
- Corundum (strong): yellow-brown / orange
- Topaz (medium): brown-yellow / dull brown-yellow
- Tourmaline (very low): dark brown / light brown
- Zircon (very weak): brown-red / brown-yellow
- Biotite (medium): brown
Red and pink
- "Webmineral: Pleochroism in minerals"..
- Bloss, F. Donald (1961). An Introduction to the Methods of Optical Crystallography. New York: Holt, Rinehart and Winston. pp. 147–149.
- Bloss, F. Donald (1961). An Introduction to the Methods of Optical Crystallography. New York: Holt, Rinehart and Winston. pp. 212–213.
- "The Pleochroic Minerals".
- Rogers, Austin F.; Kerr, Paul F. (1942). Optical Mineralogy (2 ed.). McGraw Hill Book Company. pp. 113–114.
- What is gemstone pleochroism? International Gem Society, retrieved 28-Feb-2015 | <urn:uuid:7244f6ee-02cd-4a28-9549-c7a29171e613> | 3.890625 | 1,490 | Knowledge Article | Science & Tech. | 34.566544 | 95,580,368 |
[h2]Buildings and clothes could melt to save energy[/h2]
[release]THE sun has risen, and a brand new building on the University of Washington's campus in Seattle is about to melt. It is no design flaw: encapsulated within the walls and ceiling panels is a gel that solidifies at night and melts with the warmth of the day. [B]Known as a phase change material (PCM), the gel will help reduce the amount of energy needed to cool office space in the building - scheduled to house the molecular engineering department when completed this month - by a whopping 98 per cent.[/B]
PCMs don't have to be as high-tech as this, of course. We have been using ice, a phase change material that melts at 0 °C, to keep things cool for thousands of years. But advances in materials science and rising energy costs are now driving the development of PCMs that work at different temperatures to help people and goods stay cool or warm, or to store energy.
PCMs are attractive energy-savers because of their ability to absorb or release massive amounts of energy while maintaining a near-constant temperature. "To melt ice takes the same amount of energy as would be required to warm an equal volume of water by 82 °C," says Jan Kosny of the Fraunhofer Center for Sustainable Energy Systems in Cambridge, Massachusetts, who began to explore the potential of PCMs three decades ago by looking at beeswax as a way to store heat from the sun. The reason PCMs are so useful is because energy is needed to break the molecular bonds between atoms when a substance melts, and is released when bonds are formed as it solidifies.
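Kosny's ice-versus-water comparison is easy to check against textbook values. The sketch below is illustrative only: it assumes a latent heat of fusion for ice of about 334 kJ/kg and a specific heat for liquid water of about 4.18 kJ/kg per kelvin, and it compares equal masses rather than the equal volumes in the quote.

```python
LATENT_HEAT_FUSION_ICE = 334.0   # kJ per kg (assumed textbook value)
SPECIFIC_HEAT_WATER = 4.18       # kJ per kg per kelvin (assumed textbook value)

# Temperature rise of liquid water that would absorb the same energy
# as melting an equal mass of ice at 0 degC.
equivalent_rise_K = LATENT_HEAT_FUSION_ICE / SPECIFIC_HEAT_WATER
print(f"Melting 1 kg of ice is like warming 1 kg of water by about {equivalent_rise_K:.0f} K")
# Prints roughly 80 K, in line with the ~82 degC figure quoted by Kosny.
```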
The "bioPCM" gel in [URL="http://www.engr.washington.edu/about/bldgs/mole.html"]the university building[/URL], derived from vegetable oils, will be "charged" each night when windows automatically open to flush the building with cold outdoor air. The solid gel then absorbs heat as it melts the next day. The idea is the same as using thick concrete or adobe walls, which reduce indoor temperature fluctuations, but only a fraction of the material is required. "Our bioPCM is 1.25 centimetres thick yet it acts like the thermal mass of 25 centimetres of concrete," says Peter Horwath, founder of [URL="http://www.phasechange.com/index.php"]Phase Change Energy Solutions[/URL], based in Asheboro, North Carolina.
A recent report by technology research firm [URL="http://www.luxresearchinc.com/"]Lux Research[/URL] predicts the use of phase change materials in buildings will grow from near zero today to $130 million in annual sales by 2020.
Meanwhile, a number of other applications are emerging. UK-based Star Refrigeration is using carbon dioxide, which changes phase from liquid to gas at a very low temperature, to keep data centres cool. Heat emitted by today's high-performance server farms can overwhelm even the most advanced water cooling systems. By piping CO[SUB]2[/SUB] through heat exchangers, the company recently demonstrated an ability to pull nearly twice as much heat from the computers as the systems used at present.
In western China, PCMs derived from yak butter and local plant oils are helping yak herders keep warm. The material is encased in plastic and then woven into traditional clothing. It melts as herders work up a sweat walking to mountain pastures then, when they stop moving, the pent-up heat is slowly released, keeping them warm as they watch their herds. More than 100 families are now using the materials as part of an ongoing pilot project that also includes bed rolls warmed by cooking stoves in the day to keep people warm at night. "Families that use them are starting to see a significant difference in the amount of fuel they need," says Scot Frank of One Earth Designs, also based in Cambridge, which developed the compounds.
Another promising application for PCMs is vaccine delivery in developing countries. Vaccines need to be kept cold during transport, which is a challenge in countries with limited refrigeration. They are typically packaged in ice, but their effectiveness can be severely compromised if they freeze. Using materials that change phase between 4 and 8 °C, US packaging manufacturer Sonoco says it has developed a solution that can keep vaccines cool for up to six days. Sonoco is now testing the Greenbox with a non-profit biotechnology developer called PATH, to meet World Health Organization standards.
Harnessing PCMs for energy storage could also give solar power a boost. Today systems that concentrate solar thermal energy rely on liquid salts to store heat. This allows power plants to produce energy when the sun is not shining, but requires massive amounts of liquid and large, well-insulated storage facilities. By using chemicals that change phase instead, German manufacturer SGL Carbon says it can reduce the volume of storage material required by roughly two-thirds. The company is currently testing a prototype.
For Kosny, all of the recent interest in PCMs is something of a vindication. "Ten years ago, when I argued for the development of phase-change materials, no one was interested," he says. "Now we can't seem to develop these materials fast enough."[/release]
Awesome, now we just need to switch from gasoline cars to hydrogen powered ones.
This project consists of several examples of how race conditions affect the execution of improperly synchronized programs. The main classes are:
- Interleaving: Shows how several steps are overlapped by different thread executions.
- Unsafe*: Non-synchronized programs that show the effects of a race condition.
- Safe*: Properly synchronized programs whose execution is not affected by race conditions.
Please take a look at the article about Atomicity and race conditions | <urn:uuid:ad80aaa0-8b8e-4898-ac35-d294178426fb> | 3.15625 | 90 | Documentation | Software Dev. | 15.221902 | 95,580,421 |
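The repository's own classes are not reproduced here (they appear to be Java, judging by the naming), but the Unsafe*/Safe* contrast can be sketched in Python with a shared counter. The names and iteration counts below are invented for illustration, and whether the unsynchronized run actually loses updates on a given execution depends on the interpreter and on thread timing.

```python
import threading

ITERATIONS = 100_000
THREADS = 4
lock = threading.Lock()

def unsafe_increment(counter):
    # Read-modify-write with no synchronization: two threads can read the
    # same value and overwrite each other's update (a race condition).
    for _ in range(ITERATIONS):
        counter["value"] += 1

def safe_increment(counter):
    # Holding the lock makes the read-modify-write atomic with respect
    # to the other threads, so no updates are lost.
    for _ in range(ITERATIONS):
        with lock:
            counter["value"] += 1

def run(worker):
    counter = {"value": 0}
    threads = [threading.Thread(target=worker, args=(counter,)) for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["value"]

expected = THREADS * ITERATIONS
print("unsafe:", run(unsafe_increment), "expected:", expected)
print("safe:  ", run(safe_increment), "expected:", expected)
```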
Surprising as it may seem, comet Lovejoy appears to have survived its close encounter with the Sun. Video and images released by NASA's Solar Dynamics Observatory (SDO) caught the comet re-emerging on the other side of the Sun after its perihelion.
Thanks to Karl, new STEREO-B SECCHI FITS images dating back to December 13, 2011 are available. Although the image scale is a little small, it is possible to see a slight, growing asymmetry of the coma (toward the north-east).
Australian amateur astronomer Terry Lovejoy discovered his third comet, designated C/2011 W3 (Lovejoy), on Nov. 27.7. In our previous post about this comet you can see our follow-up image and animation.
NASA's Deep Impact spacecraft completed a 140-second firing of its onboard rocket motors on Thursday, Nov. 24. The rocket burn was performed to keep the venerable comet hunter's options open for yet another exploration of a solar system small body.
Cbet nr.2930, issued on 2011, December 02, announces the discovery of a new comet (discovery magnitude 13) by Terry Lovejoy on three CCD images obtained on each of Nov. 27.7 and 29.7 UT with a Celestron 8 0.20-m f/2.1 Schmidt-Cassegrain reflector (+ QHY9 camera). The new comet has been designated C/2011 W3 (LOVEJOY).
Cbet nr.2922, issued on 2011, November 29, announces the discovery of a new comet (discovery magnitude 17.9) by Claudine Rinner on CCD images obtained on November 28, 2011 with a 0.5-m f/3 reflector located at the Oukaimeden Observatory near Marrakech, Morocco. The new comet has been designated P/2011 W2 (RINNER).
Cbet 2923, issued on 2011, November 30, reports that an apparently asteroidal object reported by the Spacewatch survey and designated 2010 UH55 by the Minor Planet Center last year, has been found to show cometary activity. The new designation is P/2010 UH55 (SPACEWATCH).
On October 19.5, 2011 we started an observing session to recover the periodic comet 171P/Spahr. T. B. Spahr (then at University of Arizona, Arizona, USA - now Director, Minor Planet Center) discovered this comet with the 0.41-m f/3 Schmidt telescope in the course of the Catalina Sky Survey on 1998 November 16.39.
Cbet nr.2875, issued on 2011, October 26, announces the discovery of a new comet (discovery magnitude 19.4) by Terry H. Bressi on CCD images obtained on September 24, 2011 with the Spacewatch 0.9-m f/3 reflector at Kitt Peak. The new comet has been designated C/2011 U2 (BRESSI).
Latest indications are this relatively small comet has broken into even smaller, even less significant, chunks of dust and ice. This trail of piffling particles will remain on the same path as the original comet, completing its unexceptional swing through the inner solar system this fall. | <urn:uuid:3b9bbf23-1499-476a-b6f2-39998150c251> | 2.734375 | 686 | Content Listing | Science & Tech. | 64.475779 | 95,580,436 |
Triple Integrals. x-Simple, y-Simple, z-Simple Approach. z-Simple solids (Type 1). Definition: A solid region E is said to be z-simple if it is bounded by two surfaces z = z1(x,y) and z = z2(x,y) (z1 ≤ z ≤ z2). Iterated Triple Integrals over a z-Simple solid E.
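The slide cites the iterated form without reproducing it; a standard statement of that formula (reconstructed here, not copied from the slides) is:

```latex
% E is z-simple: E = { (x,y,z) : (x,y) \in D_{xy},\; z_1(x,y) \le z \le z_2(x,y) }
\iiint_E f(x,y,z)\, dV
  = \iint_{D_{xy}} \left[ \int_{z_1(x,y)}^{z_2(x,y)} f(x,y,z)\, dz \right] dA .
```

Taking f ≡ 1 recovers the volume between the two bounding surfaces, which is what the next fragment refers to.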
This gives the volume V, over the region Dxy in the xy-plane, under the surface z = z2(x,y).
Polar coordinates are used so that dAxz can be written as r dr dθ instead of dx dz.
Compare to a cylinder of radius and height 16, which has double this volume (anyone know why?) and contains our solid E inside it.
Often a solid is simple in more than one variable. An alternate approach is to look for the one variable that it is not simple in, and make that the outer limit of integration. The inner limit is then a double integral.
This approach is also helpful in sketching the solid of integration, because as we will see, the outer limit of integration corresponds to constant values on which contour regions in the simple plane lie.
where Dxy(z) is the trace of the solid (a trace of a solid is a region instead of a curve) in the plane z=constant. | <urn:uuid:420d810a-d28c-4a8c-a713-6226919bed83> | 3.03125 | 355 | Tutorial | Science & Tech. | 62.941006 | 95,580,447 |
Jul 28, 2015 11:17 PM EDT
Alaska's wildfire season, which is on pace to be the state's worst fire season ever, could be exacerbating global warming, The Washington Post reported.
So far this summer, Alaska fires have burned nearly 5 million acres of boreal forest and land, more than double the size of Yellowstone National Park, and there are approximately 300 wildfires still burning in the exclave state, Newsweek reported.
The 2004 fire season was Alaska's worst on record with 6.65 million acres burned.
"In a big fire year, like 2004 or what's happening now, about 0.2 percent of the carbon stored in Alaska is released," Dave McGuire, research scientist from the University of Alaska Fairbanks, said in a statement. "The carbon released from fire emissions during a large fire year in Alaska is roughly equivalent to 1 percent of the global fossil fuel and land use emissions."
There isn't a direct relationship between climate change and fire, but researchers have found strong correlations between warm June temperatures and large fire years. Hot, dry spring conditions, however, do not automatically mean fire; something needs to create the spark and actually start the fire. Lightning starts about 35 percent of the fires in Alaska, but those fires account for about 90 percent of the total area burned.
"Climate models tell us that average June temperatures will continue to increase through this century, but ignition is the wild card,"Dr. Scott Rupp, university director of the Interior Department's Alaska Climate Science Center and a fire ecologist at the University of Alaska Fairbanks, said in a statement. "What will happen in the future is a more complicated story because we don't understand what will happen with convective storms and the lightning."
The study will be detailed in the USGS report, Baseline and Projected Future Carbon Storage and Greenhouse-Gas Fluxes in Ecosystems of Alaska, which is slated for publication this fall.
What Are All These Different Types of Nebulae, and What Details Can I See in Them with My Telescope?
There are five types of cloudy or nebulous objects in the sky: planetary nebulae, emission nebulae, reflection nebulae, dark nebulae and supernova remnants. I will cover the planetaries in the next chapter and discuss the other four types here. Even though all these objects appear as fuzzy and diffuse in the telescope, there are different mechanisms at work among the differing types of nebulae. So, let’s see what makes each type of nebula glow in the dark (except dark nebulae, of course).
Keywords: White Dwarf, Supernova Remnant, Planetary Nebula, Bright Star, Double Star
An unusual dinosaur has been shown to have a skull that functioned like a fish-eating crocodile, despite looking like a dinosaur. It also possessed two huge hand claws, perhaps used as grappling hooks to lift fish from the water.
Dr Emily Rayfield at the University of Bristol, UK, used computer modelling techniques – more commonly used to discover how a car bonnet buckles during a crash – to show that while Baryonyx was eating, its skull bent and stretched in the same way as the skull of the Indian fish-eating gharial – a crocodile with long, narrow jaws.
Dr Rayfield said: “On excavation, partially digested fish scales and teeth, and a dinosaur bone were found in the stomach region of the animal, demonstrating that at least some of the time this dinosaur ate fish. Moreover, it had a very unusual skull that looked part-dinosaur and part-crocodile, so we wanted to establish which it was more similar to, structurally and functionally – a dinosaur or a crocodile.
“We used an engineering technique called finite element analysis that reconstructs stress and strain in a structure when loaded. The Baryonyx skull bones were CT-scanned by a colleague at Ohio University, USA, and digitally reconstructed so we could view the internal anatomy of the skull. We then analysed digital models of the snouts of a Baryonyx, a theropod dinosaur, an alligator, and a fish-eating gharial, to see how each snout stressed during feeding. We then compared them to each other.”
The results showed that the eating behaviour of Baryonyx was markedly different from that of a typical meat-eating theropod dinosaur or an alligator, and most similar to the fish-eating gharial. Since the bulk of the gharial diet consists of fish, Rayfield’s study suggests that this was also the case for Baryonyx back in the Cretaceous.
Dr Angela Milner from the Natural History Museum, who first described the dinosaur and is co-author on the paper, said: “I thought originally it might be a fish-eater and Emily’s analysis, which was done at the Natural History Museum, has demonstrated that to be the case.
“The CT-data revealed that although Baryonyx and the gharial have independently evolved to feed in a similar manner, through quirks of their evolutionary history their skulls are shaped in a slightly different way in order to achieve the same function. This shows us that in some cases there is more than one evolutionary solution to the same problem.”
The unusual skull of Baryonyx is very elongate, with a curved or sinuous jaw margin as seen in large crocodiles and alligators. It also had stout conical teeth, rather than the blade-like serrated ones in meat-eating dinosaurs, and a striking bulbous jaw tip (or ‘nose’) that bore a rosette of teeth, more commonly seen today in slender-jawed fish eating crocodilians such as the Indian fish-eating gharial.
The dinosaur in question, Baryonyx walkeri, was discovered near Dorking in Surrey, UK in 1983 by an amateur collector, William Walker, and named after him in 1986 by Alan Charig and Angela Milner. It is an early Cretaceous dinosaur, around 125 million years old, and belongs to a family called spinosaurs.
Cherry Lewis | EurekAlert!
1. Introduced pests and pathogens are a major source of disturbance to ecosystems world-wide. The famous examples have produced dramatic reductions in host abundance, including virtual extirpation, but most introductions have more subtle impacts that are hard to quantify but are potentially at least as important due to the pathogens’ effects on host reproduction, competitive ability and stress tolerance. A general outcome could be reduced host abundance with concomitant increases in the abundance of competitors.
2. Beechbarkdisease(BBD)isawidespread,fatalafflictionofAmericanbeech(Fagusgrandifolia), currently present in c. 50% of beech’s distribution in eastern North America. Despite high adult mortality, beech remains a dominant component of the forest community.
3. EmployingspatiallyextensivedatafromthenationalForestInventoryandAnalysisprogramof the United States Forest Service, we show that forests have changed dramatically in the presence of BBD. Within the 2.3 million km2 range of beech, size-specific mortality was 65% higher in the lon- gest-infected regions, and large beech (>90 cm diameter at breast height) have declined from c. 79 individuals km)2 to being virtually absent. Small stem beech density was dramatically higher (>350%) such that infested forests contain a roughly equivalent cross-sectional (basal) area of beech as before BBD.
4. There was no evidence for compensation by sugar maple or other co-occurring tree species via increased recruitment or adult survivorship at the landscape scale. Overall, community composition remained roughly unchanged as a result of BBD.
5. Surprisingly, trajectory of stand dynamics (shifts in stem density and mean tree size reflecting normal stand maturation (self-thinning) or retrogression (more abundant, smaller trees over time)) did not differ between affected and unaffected regions. Variance in stand dynamics was greater in afflicted forests, however, indicating that predictability of forest structure has been diminished by BBD.
6. Synthesis. Forests of eastern North America have shifted to increased density and dramatically smaller stature – without notable change in tree species composition – following the invasion of a novel forest disease. Our results reinforce the conclusion that introduced diseases alter fundamental properties of ecosystems, but indicate that the spectrum of potential effects is broader than generally appreciated.
Mendeley saves you time finding and organizing research
There are no full text links
Choose a citation style from the tabs below | <urn:uuid:b2249873-2428-4c2a-991a-7de62471e20f> | 3.359375 | 515 | Academic Writing | Science & Tech. | 19.347746 | 95,580,538 |
The knight's move on a chess board is 2 steps in one direction and one step in the other direction. Prove that a knight cannot visit every square on the board once and only (a tour) on a 2 by n board. . . .
I want some cubes painted with three blue faces and three red faces. How many different cubes can be painted like that?
The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases.
Prove that the internal angle bisectors of a triangle will never be perpendicular to each other.
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens?
The country Sixtania prints postage stamps with only three values 6 lucres, 10 lucres and 15 lucres (where the currency is in lucres).Which values cannot be made up with combinations of these postage. . . .
Can you rearrange the cards to make a series of correct mathematical statements?
Eulerian and Hamiltonian circuits are defined with some simple examples and a couple of puzzles to illustrate Hamiltonian circuits.
A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples.
The final of five articles which containe the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values.
Draw a 'doodle' - a closed intersecting curve drawn without taking pencil from paper. What can you prove about the intersections?
When is it impossible to make number sandwiches?
What is the largest number of intersection points that a triangle and a quadrilateral can have?
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot.
Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . .
Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians call this shape the mouhefanggai.
You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest?
An article which gives an account of some properties of magic squares.
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
This is the second of two articles and discusses problems relating to the curvature of space, shortest distances on surfaces, triangulations of surfaces and representation by graphs.
This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . .
Take any two numbers between 0 and 1. Prove that the sum of the numbers is always less than one plus their product?
Prove that, given any three parallel lines, an equilateral triangle always exists with one vertex on each of the three lines.
This article discusses how every Pythagorean triple (a, b, c) can be illustrated by a square and an L shape within another square. You are invited to find some triples for yourself.
The tangles created by the twists and turns of the Conway rope trick are surprisingly symmetrical. Here's why!
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . .
There are 12 identical looking coins, one of which is a fake. The counterfeit coin is of a different weight to the rest. What is the minimum number of weighings needed to locate the fake coin?
This is the second article on right-angled triangles whose edge lengths are whole numbers.
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
Can you explain why a sequence of operations always gives you perfect squares?
Clearly if a, b and c are the lengths of the sides of an equilateral triangle then a^2 + b^2 + c^2 = ab + bc + ca. Is the converse true?
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
The first of two articles on Pythagorean Triples which asks how many right angled triangles can you find with the lengths of each side exactly a whole number measurement. Try it!
L triominoes can fit together to make larger versions of themselves. Is every size possible to make in this way?
Four identical right angled triangles are drawn on the sides of a square. Two face out, two face in. Why do the four vertices marked with dots lie on one line?
Which of these roads will satisfy a Munchkin builder?
Draw some quadrilaterals on a 9-point circle and work out the angles. Is there a theorem?
Is the mean of the squares of two numbers greater than, or less than, the square of their means?
If I tell you two sides of a right-angled triangle, you can easily work out the third. But what if the angle between the two sides is not a right angle?
Kyle and his teacher disagree about his test score - who is right?
If you think that mathematical proof is really clearcut and universal then you should read this article.
A composite number is one that is neither prime nor 1. Show that 10201 is composite in any base.
Can you see how this picture illustrates the formula for the sum of the first six cube numbers?
A, B & C own a half, a third and a sixth of a coin collection. Each grab some coins, return some, then share equally what they had put back, finishing with their own share. How rich are they?
Carry out cyclic permutations of nine digit numbers containing the digits from 1 to 9 (until you get back to the first number). Prove that whatever number you choose, they will add to the same total.
ABCD is a square. P is the midpoint of AB and is joined to C. A line from D perpendicular to PC meets the line at the point Q. Prove AQ = AD.
Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
Can you find the areas of the trapezia in this sequence? | <urn:uuid:2ec1fb5e-c7ef-44a9-8135-27649aa0d60e> | 2.65625 | 1,517 | Content Listing | Science & Tech. | 63.414602 | 95,580,561 |
Inspired by the contributions and passions of citizen scientists
By Lauren Flesher
Spring is coming, and with it, a flurry of nest-building activity. Birds will soon be inspecting nesting sites, collecting nest materials, and putting hours of effort into constructing the perfect nest. But did you know that for some species, the nest is totally unnecessary? Some birds get to skip all the hard work of construction and go straight to the egg-laying and incubation. Meet a few of these nestless species below:
The cold winds of the Antarctic ice shelves where Emperor Penguins make their home don’t allow for leaving eggs out on the ice. Even if nesting materials could be found, the eggs would freeze quickly from exposure. So instead of building a nest, the Emperor Penguin lays her single egg on the bare ice, then quickly rolls it onto the feet of the male. Protected from the cold by thick fat reserves, the egg is incubated in its father’s abdominal pocket all winter. Without this strategy, Emperor Penguins would be unable to breed.
The Chuck-will’s-widow of the Americas, like most nightjars, lays its eggs directly on the ground. The eggs are intricately patterned to blend in with the surrounding leaf litter to avoid the notice of ground predators. John James Audubon claims to have witnessed a pair of birds relocating their eggs by carrying them in the mouth, but no one has ever verified this.
This species of auk breeds on narrow rocky ledges in the Arctic in colonies of hundreds. Eggs are laid directly onto these ledges and incubated. Due to the lack of a protective nest, the parents must stay with their egg constantly to deter predators such as gulls and crows. The eggs are pointed at one end, which is helpful in preventing them from rolling off of a cliff when disturbed. Common Murres experience fierce site competition, and breeding colonies are often so dense that individuals are touching. Perhaps it is this competition for space that leaves no room for nest-building in a murre colony.
The White Tern, also known as the Angel Tern or White Noddy, occurs in tropical oceans across the globe. While most members of the tern family lay eggs in scrapes on the ground, the White Tern raises its young in the trees. Its relative, the Black Noddy, also uses this strategy. But while the Black Noddy builds a nest in the canopy to contain its eggs, the White Tern forgoes construction and simply lays its egg in a depression or a fork of bare branch. Why they do this is unknown, but it is theorized that the prevalence of nest parasites in seabird colonies may be a factor. No nest, no nest parasites!
This nocturnal insect-eater of the Neotropics depends on its impersonation of a dead tree branch to survive. During the day, it hides from predators in plain sight, holding itself erect and still, and using its camouflaged plumage to perfectly mimic the appearance of a broken stump. Building a nest during breeding season would make the Common Potoo easy prey for predators. To avoid this problem, the Common Potoo lays its egg on top of a broken branch. It can then incubate the egg while remaining perfectly still and blending seamlessly into its surroundings. See if you can spot the nestling in this photo.
Whether for camouflage or parasite reduction, or due to cold weather or nest site competition, not building a nest can certainly have its perks. In the coming months, as you watch the songbirds in your yard collect twigs, mud, moss, and grass and begin making elaborate nests, remember that all across the globe there are birds taking it easy before the real work of raising young begins.
NestWatch collects data about birds’ breeding successes and failures to help scientists understand topics from conservation to climate change. Join today and start monitoring!
I love the nestless species. Great that you are sharing this intersting fact.
Does a migratory bird fly through places other than oceans and seas
Yes, many migratory birds prefer to avoid the dangerous oceans and fly over land. Here is a map of the world’s major flyways for migratory birds. https://flutrackers.com/forum/filedata/fetch?id=647645&d=1153517470
Thank you for reading!
In my area of Reno Nevada the Great Horned Owls nest in old Hawk nests high in the trees.
You make a good point, Dennis. There are many species of birds, besides these 5, that do not actually make a nest, although they will use those created by other species. They still get the benefit of a nest without having to build it themselves.
es, many migratory birds prefer to avoid the dangerous oceans and fly over land. Here is a map of the world’s major flyways for migratory birds.new year 2016 image</a
Thanks for sharing such a great article and we love this keep rocking . we want more posts like this
Each of these species has rather small area of the population. They are special breeding colonies, which are important for the fauna.
Looks cool. I would like to get more materials on the topic. I really like this work you’ve done here!
thanks for the articles, have long been looking for what you can read during lunch break, I like
Your email address will not be published. Required fields are marked *
5 × eight =
Current ye@r *
Leave this field empty
Unfortunately, this website does not support your browser as it relies on modern technologies. Please update your web browser. It is free. Please consider using a modern web browser like Google Chrome or Mozilla Firefox for a better web experience. | <urn:uuid:90be17ac-a8d9-4eb9-9300-5b11089c8fe1> | 3.515625 | 1,202 | Comment Section | Science & Tech. | 57.759375 | 95,580,571 |
Albeit getting little to no attention by the media, a scientific study published in the December issue of Palaeoworld journal warns that if the so-called Arctic methane timebomb were to go off, it would release "the equivalent of at least 1,000 gigatons of carbon dioxide." Combined with the 1,475 gigatons of carbon dioxide produced by humans since the year 1850, the release of methane hydrate would render this event apocalyptic. Meanwhile, this happens.
"Global warming triggered by the massive release of carbon dioxide may be catastrophic," reads the study's abstract. "But the release of methane from hydrate may be apocalyptic."
The study, titled "Methane Hydrate: Killer Cause of Earth's Greatest Mass Extinction," highlights the fact that the most significant variable in the Permian Mass Extinction event, which occurred 250 million years ago and annihilated 90 percent of all the species on the planet, was methane hydrate.
|“It’s the worst investment in human history”|
|Global Warming: "atmospheric levels of carbon dioxide hit a new record high."|
|“Improve on the ability of plants to suck carbon dioxide out of the atmosphere.”|
|“We are considering public transport free of charge in order to reduce the number of private cars.”|
|Tiny World Cup Flags Responsible for an Additional 3 Million Kilograms of Carbon Emissions|
|“Cultured meat is finally on its way towards becoming a commercial reality.”|
|“There are but two powers in the world, the sword and the mind. In the long run the sword is always beaten by the mind.”|
|Japanese Robot Serves Ice Cream From Inside a Vending Machine|
|“Although this transition is irreversible, it carries potential for several robotic applications.”|
|How to Avoid Jury Duty|
|Why, Typewriters Are Alive and Well, Thank you|
|CaptchaTweet: Write Tweets in Captcha Form|
|“If you really want to save the planet, you should die.”|
|The (Very Scary) People of Public Transit|
|When the Wrong Hastag Can Get You Killed by an Assassination Drone|
|Somebody Needs to Build a New Facebook Stat|
|Bizarre Record Covers| | <urn:uuid:7e9f440b-0bd5-49c6-a7c8-c1b39621496d> | 2.953125 | 503 | Content Listing | Science & Tech. | 36.328396 | 95,580,589 |
For a yellow sodium light emission, with a frequency of 5.09 x 1011 Hz
a) What is the wavelength in nm?
b) Calculate the energy of the light.
(c = 3.00 x 108 m/s, h = 6.63 x 10-34 j-s, Rh = 2.18 x 10-18J)© BrainMass Inc. brainmass.com July 22, 2018, 6:32 pm ad1c9bdddf
This solution provides a detailed step by step explanation of the given physics problem. | <urn:uuid:5a508448-94a9-4799-8e30-55fb03124357> | 3.171875 | 117 | Tutorial | Science & Tech. | 119.416667 | 95,580,607 |
November 2014 Global Weather Extremes Summary
November was globally the 7th warmest such on record according to NOAA and 8th according to NASA (see Jeff Master’s blog for more about this).
It was a cold month in the U.S. with some phenomenal lake-effect snowstorms. A powerful storm, dubbed a ‘Medicane’ formed in the Mediterranean Sea. Deadly floods occurred in Morocco, Italy, and Switzerland. It was the warmest November on record for Australia, Italy, Austria and much of Southeast Asia.
Below are some of the month’s highlights.NORTH AMERICA
It was the coldest November since 2000 for the contiguous U.S. (and 16th coldest on record) thanks to an exceptional arctic outbreak in the middle of the month. In Casper, Wyoming the temperature fell to -27°F (-32.8°C) on November 12th, its coldest on record for November. Amazingly, this was a drop of some -99°F (55°C) between November 1st, when the temperature peaked at 72°F (22.2°C) tying the record for warmest November temperature ever observed in the city to November 12th when the monthly record low occurred. The temperature in Casper, Wyoming fell from a November record high (actually a tie for such) on November 1st to a November monthly record low by November 12th.
Table from NWS-Riverton.
The cold air was mostly confined to the eastern two-thirds of the nation and was most persistent in the Southeast where several sites (Macon in Georgia, Mobile in Alabama, and Gainesville in Florida) endured their coldest or 2nd coldest November on record. Early season snowfall accompanied the cold in the region with Columbus, South Carolina recording its earliest snowfall on record (just a trace officially—although over one inch fell in the suburbs) on November 3rd. This early snowfall occurred one week after a daily record high of 87°F (30.6°C) was observed on October 26th!. Following the cold wave, unusually mild air surged north across the eastern U.S. and Canada where the town of Saint Anicet, Quebec (just southwest of Montreal) reached 21.3°C (70.3°F) on November 24th.
In contrast to most of the country, California had its 8th warmest November on record and remains on track for its warmest year ever observed (since 1895). It was also exceptionally mild in Alaska where King Salmon had its warmest November on record (since 1916). McGrath reached 50°F (10°C) on November 12th, a new November monthly record and smashing the previous latest 50°-reading-on-record by 21 days, the previous latest being on October 22nd. One of the most intense extra-tropical storms on record churned through the Bering Sea on November 7-8 when a buoy off the coast of Siberia measured a pressure reading of 929.8 mb. The estimated actual minimum pressure of the storm was said to be as low as 924 mb (27.29”), which would be close to, if not actually, the lowest such ever observed for an extra-tropical storm in the Pacific Ocean.State-by-state temperature ranking (top map) and precipitation ranking (bottom map) for November. For a wide swath of central and southeastern U.S. it was one of the top 10 coldest Novembers on record.
Maps from NCDC.
One of the most remarkable features of the month was the tremendous lake-effect snowfall in Wisconsin, Michigan, and New York State. Gile, Wisconsin picked up 48.3” (123 cm) of snow in 72 hours on November 10-13, leading to a November monthly total of 110.6” (281 cm): the greatest monthly snow total in Wisconsin state records (the previous record was 103.5” (263 cm) at Hurley in January 1997. Even more remarkable were the two back-to-back lake effect snows that buried communities just to the south and east of Buffalo, New York between November 18-20. An astonishing 88” (224 cm) was measured in the town of Cowlesville in Wyoming County, and 80” (203 cm) near Hamburg. There were 14 storm-related fatalities including one victim who was literally buried inside his car.Towns south of Buffalo saw up to 88” of snow fall over a three-day period on November 18-20, one of the greatest lake-effect snowfalls in history for the U.S.
Photo by Kyle Duley posted on Twitter.
The coldest temperature measured in the contiguous U.S. during November was -34°F (-36.7°C) at Thermopolis 9 NE, Wyoming on November 14th and the warmest was 94°F (34.4°C) at Blythe, California on November 1st.SOUTH AMERICA and CENTRAL AMERICA
Exceptional heat continued to grip portions of southern South America during the month with Cochabamba, Bolivia reaching an all-time record of 35.6°C (96.1°F) on November 6th and 7th (POR since 1949). What is remarkable about this figure is that Cochabamba Airport (where the temperature was observed) rests at an elevation of 2,550 m (8,366 feet). This would be one the warmest temperatures ever measured on Earth at such an altitude. La Paz Observatory, Bolivia reached 27.1°C (80.8°F), its 2nd warmest temperature on record (the all-time record is 27.2°C/81.0°F set in December 1997).EUROPE
November saw amazingly warm temperatures across much of Western Europe. Many sites broke their all-time November heat records as I blogged about on November 3rd.
Late in the month a heat wave again affected the region with temperatures as high as 29.0° (84.2°F) measured at Pollenca, Spain and 26.8°C (80.2°F) at Socoa in France on November 22-24. More heat occurred a week later with a reading of 32.9°C (91.2°F) observed in Capaci, Sicily on November 29th. It was the warmest November on record for Italy (where the temperature averaged an amazing 3.6°C/6.5°F above normal) and also for Austria. France and Switzerland recorded their 2nd warmest Novembers on record.
More intense rainstorms lashed Southern France and Italy during the month. Over 300 mm (11.80”) of rainfall fell in just 3 hours at Carrara, Italy on November 5th and 700 mm (27.56”) fell at Malga Maline over 48 hours during the storm. In mid-November more torrential rain and flash floods hit northern Italy and southern Switzerland where a landslide on November 16th killed two and injured four in Davesco near the Swiss city of Lugano.A landslide killed two and injured four in the Swiss town of Davesco on November 16th following a downpour of 70 mm in just hours.
Photographer not identified.
Nice, in France, endured its wettest month on record with 503 mm (19.80”) of rainfall measured (previous record was 419 mm/16.50” in September 1992). Yet another round of flooding rain hit southern France and Italy again on November 24-26 with Collobrieres in France picking up 251 mm (9.88”) during the course of the storm and Lugo di Nazza in Corsica 480 mm (18.90”) in 24 hours on November 28th.
A powerful tropical storm-like cyclone (called a ‘Medicane’) plowed through the Mediterranean Sea on November 7th passing close to Sicily and Malta where sustained winds as high as 69 mph and gusting to 96 mph were measured. The storms central pressure fell to 979 mb and it took on the appearance of a classic tropical storm (but was not designated as such).A MODIS satellite image of the “Medicane” on November 7th as it passed close by Malta in the Mediterranean.
Image from NASA.
In the U.K. it was generally a mild and wet November (the 5th warmest on record since 1910). The warmest temperature observed was 18.7°C (65.7°F) at Writtle, Essex on November 1st and the coldest -4.6°C (23.7°F) at Cromdale, Moray on November 26th. The greatest 24-hour rainfall measured was 122.6 mm (4.83”) at Alltberg House, Isle of Skye on November 6-7.AFRICA
The same late November heat wave that baked sothern Europe also affected portions of North Africa when 33.0°C (91.4°F) was measured at Siirt and Zuara, Libya and 33.5°C (92.3°F) at Djerba Island, Tunisia on November 30th.
Deadly flash floods killed at least 32 in southern Morocco near the city of Guelmim during a period of heavy rains in late November. Marrakech was also hit hard with roads washed out and tour buses stranded. Agadir picked up 128 mm (5.04”) of rain in three days November 21-23.Flooding near the town of Guelmim, Morocco on November 23rd where at least 32 lives were lost.
Photo capture from video posted on Facebook by Ayoub Elidrissi.
The hottest temperature measured in the northern hemisphere during November was 42.0°C (107.6°F) at Matam, Senegal on November 1st, 6th, and 14th as well as at Linguere (Senegal) on November 4th.ASIA
Super Typhoon Nuri became one of the most powerful tropical storms of the year when, on November 2nd, its winds peaked at a sustained 180 mph and its central pressure plummeted to 910 mb. Fortunately, the storm never made landfall and passed well east of Japan before becoming extra-tropical and transforming into one of the most powerful cyclones ever to pass over the Bering Sea between Alaska and Siberia (see North America entry).Super Typhoon Nuri captured by astronauts on board the International Space Station on November 2nd when the storm was near its peak intensity with 180 mph winds.
Image courtesy of the European Space Agency.
November saw some extreme cold form over the Arctic Coastal region of central Russia with Norlisk setting a November monthly cold record of -44.9°C (-48.9°F) on November 25th. The coldest temperature measured in the northern hemisphere during the month was -52.7°C (-62.9°F) at Habardino, Russia on November 28th.
Torrential rains flooded northeastern Malaysia on November 18-19 with 595 mm (23.43”) measured in Kuala Terengganu and 397 mm (15.63”) at Kota Bharu over the course of 48 hours.AUSTRALIA
It was the warmest November on record for Australia (following the same in October) with the national mean temperature averaging 1.88°C (3.38°F) above normal. It was also drier than usual, with precipitation averaging 22% below normal nation-wide.Temperature (top map) and precipitation (bottom map) deciles for Australia during November. It was the warmest November on record for the nation.
Maps courtesy of the Australian Bureau of Meteorology.
A severe thunderstorm pounded Brisbane, Queensland on November 27th with large hail, winds gusting to 141 kph (88 mph) and torrential rain. 50 mm (1.97”) of rain fell in just 15 minutes at Archerfield Airport (with a storm total of 87.8 mm/3.46”). Extensive damage was reported across the city with losses in excess of US$100 million.
The hottest temperature measured in Australia and the world during the month was 46.1°C (115.0°F) at Roxby Downs, South Australia on November 22nd and the coldest -6.1°C (21.0°F) at Thredbo, New South Wales on November 2nd. The greatest calendar day rainfall was 108 mm (4.25”) at Nashua (Wilsons River), New South Wales on November 27th.NEW ZEALAND and OCEANIA
November was a relatively normal month weather-wise for New Zealand. The warmest temperature observed was 31.1°C (88.0°F) at Christchurch, South Island on November 22nd and the coldest -3.4°C (25.9°F) at Middlemarch, South Island on November 11th. The greatest calendar day rainfall was 226 mm (8.90”) at Milford Sound, South Island on November 21st. A wind gust of 209 kph (130 mph) was measured at Cape Turnagain, North Island on November 18th.
Atuona in French Polynesia recorded a temperature of 36.0°C (96.8°F) on November 20th. This was just short of the all-time heat record for French Polynesia which was 36.1°C (97.0°F) also set at Atuona on December 12, 1972. ANTARCTICA
The coldest temperature in the southern hemisphere and the world during November was –58.2°C (-72.8°F) recorded at Concordia on November 1st.KUDOS
Thanks to Maximiliano Herrera for global temperature extremes data and Jeremy Budd and NIWA for New Zealand data.
Christopher C. Burt | <urn:uuid:67bc88e9-8cec-45f2-842d-2c2a7dd3c9f5> | 2.671875 | 2,935 | News (Org.) | Science & Tech. | 72.658088 | 95,580,615 |
Almost once a year the unique phenomenon of celestial beauty called total solar eclipse takes place in different parts of the world.
Solar eclipse is an astronomical event, in which the Moon, being between the Earth and the Sun, partly or totally covers the Sun from observers on the Earth. This time, Moon will pass in front of the Sun covering it totally, which as a result will make it possible for us to see the eclipse in the outer layer of the solar atmosphere. It is an event of great scientific interest since it determines the data of the Space Weather (telecommunications, space missions, GPS systems, etc.).
The eclipse will take place on November 13 and reach its maximum phase at 22:13 GMT, the duration of which will be 4 minutes 2 seconds.
The area that will get the best line of sight is middle and subtropical latitudes of the Southern Hemisphere. Observers in the southern part of the Pacific Ocean – New Zealand, Indonesia, Oceania and other countries – will be able to enjoy the partial eclipse, while the capital of the solar eclipse will be Cairns city of Queensland in Northern Australia. It is the only major city where it will be possible to see the Sun, totally covered by the Moon.
Latest posts by Anna LeMind (see all)
- 5 Existential Questions with Two Possible Answers (and Both Are Terrifying) - June 18, 2018
- How to Overcome Social Anxiety by Asking Yourself This One Silly Question - May 16, 2018
- 8 Crazy Things Introverts Do to Avoid Talking to People - May 5, 2018
- The Unexpected Social Anxiety Therapy That Cured My Fears in 1 Day - December 16, 2017
- 6 Signs You Could Be Stuck in Life without Even Realizing It - November 17, 2017 | <urn:uuid:b361ef77-fce4-49fa-a613-79a500b65cd3> | 2.796875 | 369 | Personal Blog | Science & Tech. | 41.260736 | 95,580,623 |
Guest star (astronomy)
In Chinese astronomy, a guest star (Chinese: 客星; pinyin: kèxīng; literally: "guest star") is a star which has suddenly appeared in a place where no star had previously been observed and becomes invisible again after some time. The term is a literal translation from ancient Chinese astronomical records.
Modern astronomy recognizes that guest stars are manifestations of cataclysmic variable stars: novae and supernovae. The term "guest star" is used in the context of ancient records, since the exact classification of an astronomical event in question is based on interpretations of old records, including inference, rather than on direct observations.
In ancient Chinese astronomy, guest stars were one of the three types of highly transient objects (bright heavenly bodies); the other two (彗星, huixing, “broom star”, a comet with a tail; and xing bo, “fuzzy star”, a comet without a tail) being comets in modern understanding. The earliest Chinese record of guest stars is contained in Han Shu (漢書), the history of Han Dynasty (206 BCE – 220 CE), and all subsequent dynastic histories had such records. These contain one of the clearest early descriptions consistent with a supernova, posited to be left over by object SN 185, thus identified as a supernova remnant of the exact year 185 CE. Chronicles of the contemporary Ancient Europeans are more vague when consulted for supernovae candidates. Whether due to weather or other reasons for lack of observation, astronomers have questioned why the notable remnant attributed to Chinese observations of a guest star in 1054 AD (see SN 1054), is missing from the European records.
- Zhentao Xu, David W. Pankenier (2000) "East-Asian Archaeoastronomy: Historical Records of Astronomical Observations of China, Japan, and Korea", ISBN 90-5699-302-X, Chapter 6, "Guest Stars"
- Zhao FY; Strom RG; Jiang SY (2006). "The Guest Star of AD185 Must Have Been a Supernova". Chinese J Astron Astrophys. 6 (5): 635–40. Bibcode:2006ChJAA...6..635Z. doi:10.1088/1009-9271/6/5/17.
- Murdin, Paul; Murdin, Lesley (1985). Supernovae. ISBN 0-521-30038-X. | <urn:uuid:d4cbd258-5400-43f6-b255-b4056bc73810> | 3.734375 | 530 | Knowledge Article | Science & Tech. | 55.06629 | 95,580,639 |
Nitric oxide gets neurons together. And it seems to do it backward. Work by Nikonenko et al. suggests that a protein called PSD-95 prompts nitric oxide release from postsynaptic dendritic spines, prompting nearby presynaptic axons to lock on, and develop new synapses. The study will appear in the December 15, 2008 issue of The Journal of Cell Biology (JCB).
It is becoming increasingly clear that synaptogenesis is not solely axon driven. PSD-95 is a major component of postsynaptic densities—a conglomeration of scaffolding proteins, neurotransmitter receptors, and signaling proteins that are thought to shape dendritic spines—and reduced levels of PSD-95 impair synapse development. How PSD-95 works, however, was unknown.
Nikonenko et al. overexpressed PSD-95 in cultured hippocampal neurons and found that the cells' dendritic spines grew two to three times their normal size and were often contacted by multiple axons—a rare occurrence in the adult brain. By mutating different parts of PSD-95, the team discovered that the region responsible for prompting multi-axon connections was also required for binding nitrogen oxide synthase. The team cut to the chase, bathed neurons in nitric oxide, and showed this was sufficient to promote the extra axon connections. Since bathing cells in nitric oxide and overexpressing proteins do not reflect normal physiological conditions, the team also inhibited nitric oxide synthase in wild-type neurons and confirmed that synapse density was reduced.
Overexpressing PSD-95 increased the amount of nitric oxide synthase at postsynaptic densities, suggesting PSD-95 recruits the synthase to its required locale. Interestingly, PSD-95 that lacked its synthase interaction domain still induced super-sized dendritic spines, suggesting PSD-95 wears more than one hat at the synapse construction site.
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
Pollen taxi for bacteria
18.07.2018 | Technische Universität München
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Materials Sciences
18.07.2018 | Life Sciences
18.07.2018 | Health and Medicine | <urn:uuid:69066397-8326-46dd-94ac-c29ae297ef69> | 2.53125 | 1,036 | Content Listing | Science & Tech. | 38.771071 | 95,580,662 |
A View from Emerging Technology from the arXiv
A Student Space Mission To Study Planet Formation
The only way to study the early stages of planet formation is in microgravity. This is how a group of students set out to do it
The Esrange Space Centre is located in the Arctic Circle in northern Sweden close to the mining town of Kiruna. On 19 March last year, it hosted the launch of an unusual space mission to study the way planets form.
Astrophysicists believe that planet formation begins when micrometre-sized dust particles, the left-overs from star formation, become bound to each other to form millimetre or centimetre sized pebbles. These then aggregate into bigger rocks and so on. But exactly how this first stage happens isn’t well understood.
That’s partly because experiments to study this phenomenon are difficult to do on Earth. In this early stage of planet formation, the dust particles probably collide at velocities of less than 1 centimetre per second and this can only be reproduced and studied in microgravity conditions.
So the mission, called REXUS 12, was a suborbital hop that generated up to 3 minutes of microgravity in which to study how dust particles stick together. “The experiment was designed, built and carried out to increase our knowledge about the processes dominating the first phase of planet formation,” say Julie Brisset and pals at the Technical University of Braunschweig in Germany.
The space mission was unusual because Brisset and several of her colleagues are students working towards their PhDs. REXUS stands for Rocket Experiments University Students, a project that is funded in large part by the German Aerospace Centre DLR.
The experiment is reltively simple. It consisted of a machine that shakes glass containers of dust to produce particle collisions of the required velocity. The dust was made up of sub-millimetre grains of spherical and irregular silicon dioxide. Brisset and co videoed the entire experiment at a rate of 170 frames per second to see exactly how the dust particles behaved in microgravity conditions.
They say they learned some valuable lessons about the practicalities of this kind of work. For example, their glass containers were specially coated with an anti-adhesive layer designed to prevent the dust sticking to the walls of the container.
But this was not as efficient as they had hoped. “The dust aggregates … possess a very high sticking efficiency with the glass walls of the particle containers, even though these were actually coated with a nano-particle anti-adhesive layer,” say Brisset and co. So finding better ways to prevent this kind of sticking will be important in future.
They also noticed that the microgravity conditions during the experiment were far from perfect and that this caused some of the dust to accumulate in one corner of the containers. Brisset and co say this was the result of accelerations caused by residual atmospheric drag and the spin of the rocket.
The team also say that if they had the chance to run the experiment again, they would use a camera with more internal memory so that they could use a higher frame rate for recording the data.
Brisset and co have yet to publish a detailed analysis of their data. But when particles collide there are essentially three possible outcomes: they can bounce, stick together to form larger particles or fragment into smaller particles. Theorists believe that the outcome depends only on the mass of the particles and their velocity and have created a kind of phase diagram showing what ought to happen for different values of these variables (see diagram above).
The images in this paper do indeed show how dust aggregates combine to form larger particles. It’ll be interesting to see whether the data provides any more detailed insights into this process and whether the theoretical predictions of the way dust aggregates actually match their experimental observations.
Ref: arxiv.org/abs/1308.3645 : The Suborbital Particle Aggregation And Collision Experiment (SPACE): Studying The Collision Behavior Of Submillimeter-Sized Dust Aggregates On The Suborbital Rocket Flight REXUS 12
Couldn't make it to EmTech Next to meet experts in AI, Robotics and the Economy?Go behind the scenes and check out our video | <urn:uuid:a066e281-39be-4090-ae13-8534dc30d91d> | 3.828125 | 891 | Truncated | Science & Tech. | 36.147253 | 95,580,663 |
II photon pairs with mutually perpendicular polarization. For example, if a pair of particles are generated in such a way that their total spin is quantum mechanics griffiths pdf download to be zero, and one particle is found to have clockwise spin on a certain axis, the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as to be expected due to their entanglement. It thus appears that one particle of an entangled pair “knows” what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances. Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally.
This is a mixed ensemble, quantum Information and Computation, free test of Bell’s Inequality supports local realism”. From the University of Vienna, and nothing needs to be transmitted from one particle to the other at the time of measurement. In the media and popular science, note on Exchange Phenomena in the Thomas Atom”. For the appropriately chosen measure of entanglement, it still is possible to associate a density matrix. Mechanical Description of Physical Reality Be Considered Complete? Finding out whether or not a mixed state is entangled is considered difficult.
Recent experiments have measured entangled particles within less than one hundredth of a percent of the travel time of light between them. According to the formalism of quantum theory, the effect of measurement happens instantly. They wrote: “We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete. Schrödinger shortly thereafter published a seminal paper defining and discussing the notion of “entanglement.
EPR, was mathematically inconsistent with the predictions of quantum theory. However, in 2015 the first loophole-free experiment was performed, which ruled out a large class of local realism theories with certainty. The work of Bell raised the possibility of using these super-strong correlations as a resource for communication. Although BB84 does not use entanglement, Ekert’s protocol uses the violation of a Bell’s inequality as a proof of security.
Physicist John Bell depicts the Einstein camp in this debate in his article entitled “Bertlmann’s socks and the nature of reality”; category:CS1 maint: Explicit use of et al. And one particle is found to have clockwise spin on a certain axis – an efficient conversion of the photon energy into chemical energy is possible only due to this entanglement. Is confident that this new quantum imaging technique could find application where low light imaging is imperative, there are several canonical entangled states that appear often in theory and experiments. Rosen correlations admitting a hidden, click the page on which you want to add a table and then click the “Page” tab. Using Play With Pictures – this page was last edited on 10 February 2018, the outcome of Alice’s measurement is random. Are not contained in quantum formalism, bob’s measurement will return 0 with certainty. Tech support tips and notify users of new releases and product launches.
This program is prepeared for pre, browse your computer and locate a new image. Free experiment was performed, it may be correct, the counterintuitive predictions of quantum mechanics were verified experimentally. Jacobi is scrutinized by employing a field description for the four — science 16 Jun 2017: Vol. From the Micius satellite to bases in Lijian, polarization correlation was created between photons that never coexisted in time. This software was checked for viruses and was found to contain no viruses. | <urn:uuid:da5b1890-3936-4077-be9d-364e3b2682d1> | 2.875 | 767 | Truncated | Science & Tech. | 30.338694 | 95,580,664 |
How JAVA achieves platform independence?
5129 Since 25th November, 2003
Select and Copy the Code
Java is platform independent because of the consistent data sizes at all the platforms on which Java code is run. The output of a Java compiler is not an executable code it is bytecode. bytecode is highly optimized set of instructions designed to be executed by a virtual machine that the Java run-time system emulates. As Java programs are interpreted, rather than compiled it is much easier to run them in a variety of run-time environments. | <urn:uuid:72551b5c-04ce-4070-b650-b08f54030f76> | 3.015625 | 112 | Q&A Forum | Software Dev. | 34.220516 | 95,580,675 |
|Debugging with GDB|
By default and if not explicitly closed by the target system, the file
descriptors 0, 1 and 2 are connected to the gdb console. Output
on the gdb console is handled as any other file output operation
write(1, ...) or
write(2, ...)). Console input is handled
by gdb so that after the target read request from file descriptor
0 all following typing is buffered until either one of the following
conditions is met:
readsystem call is treated as finished.
If the user has typed more characters than fit in the buffer given to
read call, the trailing characters are buffered in gdb until
read(0, ...) is requested by the target, or debugging
is stopped at the user's request. | <urn:uuid:3491fe75-740e-4249-b54e-d8c83e9af42b> | 2.828125 | 169 | Documentation | Software Dev. | 66.942402 | 95,580,676 |
This find, from rocks 390 million years old, suggests that spiders, insects, crabs and similar creatures were much larger in the past than previously thought.
Dr Simon Braddy from the Department of Earth Sciences at the University of Bristol, co-author of an article about the find, said, ‘This is an amazing discovery. We have known for some time that the fossil record yields monster millipedes, super-sized scorpions, colossal cockroaches, and jumbo dragonflies, but we never realised, until now, just how big some of these ancient creepy-crawlies were.’
The research is published online today in the Royal Society’s journal Biology Letters. The claw was discovered by one of Dr Braddy’s co-authors, Markus Poschmann from Mainz Museum, Germany, in a quarry near Prüm in Germany.
Poschmann described finding the fossil: " I was loosening pieces of rock with a hammer and chisel when I suddenly realised there was a dark patch of organic matter on a freshly removed slab. After some cleaning I could identify this as a small part of a large claw. Although I did not know if it was more complete or not, I decided to try and get it out. The pieces had to be cleaned separately, dried, and then glued back together. It was then put into a white plaster jacket to stabilise it."
The claw is from a sea scorpion (eurypterid) Jaekelopterus rhenaniae that lived between 460 and 255 million years ago. It is 46 centimetres long, indicating that the sea scorpion to which it belonged was around 2.5 metres (8 feet) long – almost half a metre longer than previous estimates for these arthropods and the largest one ever to have evolved.
Eurypterids are believed to be the extinct aquatic ancestors of scorpions and possibly all arachnids.
Some geologists believe that giant arthropods evolved due to higher levels of oxygen in the atmosphere in the past. Others, that they evolved in an 'arms race' alongside their likely prey, the early armoured fish.
‘There is no simple single explanation’, explains Braddy. ‘It is more likely that some ancient arthropods were big because there was little competition from the vertebrates, as we see today. If the amount of oxygen in the atmosphere suddenly increased, it doesn't mean all the bugs would get bigger.’
Cherry Lewis | alfa
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:d29e0e33-7897-4230-8857-3f086ff44140> | 3.859375 | 1,089 | Content Listing | Science & Tech. | 44.160604 | 95,580,704 |
This summer, biologists from the University of Alberta and the Smithsonian Migratory Bird Center were back in action in Alberta’s boreal forest. In an effort to find out more about where boreal breeding birds spend the winter, and what routes they take during migration, our crews have been putting tracking devices on a variety of bird species including Broad-winged Hawks, Rusty Blackbirds, Olive-sided Flycatchers, Common Nighthawks, Palm Warblers, Canada Warblers, and Connecticut Warblers as part of the Migratory Connectivity Project.
I have spent the last two years researching Canada Warblers for my M.Sc. thesis, so I jumped at the opportunity to work with the Smithsonian’s Michael Hallworth (post-doctoral fellow) to catch both Canada Warblers and Connecticut Warblers and fit them with light-level geolocators: tracking devices that record ambient light levels which are then used to determine locations. An estimated 85% and 95% of the breeding populations of these species, respectively, occur in Canada, and these neotropical migrants make the long journey to and from South America each year. Canada Warblers are listed as threatened in Canada, so there has been plenty of recent interest in finding out more about the drivers of population declines, and in what part of their range these impacts are occurring. Connecticut Warblers are highly under-studied, and known for secretive behavior during migration, so information on their migration ecology is scarce.
Photos: Male Canada Warbler with distinct black necklace (left) and male Connecticut Warbler with full grey hood (right)
We headed out to Lac La Biche, Alberta to begin a whirlwind 10 days of "warblering". Because both Michael and I had previous experience conducting research on Canada Warblers, and because I had worked on Canada Warblers in this area the previous year, we thought this species would be a good place to start. Using mist-nets, playbacks of male territorial songs and a decoy ("Agro Al" made by Hedwig Lankau) to lure them in, we set out to capture some birds. Within three days we had captured our quota of 15 Canada Warblers, including recapturing two of the males that I had colour-banded in 2015 (pictures here). This means these 10 gram birds had successfully migrated to South America and back, returning to the exact same breeding location in Alberta! We fitted all captured males with geolocators as well as unique identifying coloured leg bands to make them easier to relocate next year if and when they return to their breeding grounds.
Photos: "Agro Al", our decoy, checks out a caterpillar while trying to lure in a territorial Canada Warbler (left); Anjolene fits a geolocator on a Canada Warbler using a leg loop harness (centre); This male shows off his geolocator light sensor, which will use ambient light to track his location during migration (right).
With the Canada Warblers taken care of, we set out to catch our second species, the potentially more elusive Connecticut Warbler. With our relatively limited knowledge of this species, we were anticipating a lot more difficulty finding and capturing them. We started our day planning on bushwhacking our way ~1 km to an area where this species had been previously detected. However, seconds after opening the door of our truck, we heard a Connecticut Warbler singing its heart out only 50 metres off of the highway. We hurried over and were able to capture the male fairly quickly and fit him with a geolocator. While we were processing our first bird, we heard another not too far away, and so the trend continued for the next few days, in which we caught our 15 Connecticut Warblers (including two in the same net!).
Photos: Michael extracts our first Connecticut Warbler from a mist-net (left); This Connecticut Warbler displays his new colour bands, which will be used to identify him next year (center); Michael fits a male Connecticut Warbler with a geolocator using a leg loop harness (right).
With the captures complete for this year, we now wait with anticipation until summer 2017 to recapture our males, collect the geolocators and find out where they’ve been!
-Post by Anjolene Hunt
-A view from above: A video of Ferruginous Hawk nestlings using a pole mounted camera.
Post and Video by Cameron Nordell
Among the many impacts of the Fort McMurray wildfire this past spring, one of the less discussed is the disruption of research in northeastern Alberta. Although not devastating the way the loss of homes and businesses was, the 600,000 ha wildfire was a major wrench in planning for many research programs. There is a lot environmental and biological research in northeastern Alberta, due in part to the industrial development in the area. Not only did the wildfire burn many research study areas, it destroyed field houses and research equipment, rendered field sites inaccessible during critical times, and created logistical difficulties in an area of the province that is already challenging to work in.
This past spring was a stressful time in Dr. Erin Bayne’s lab at the University of Alberta. Our research group, the Bioacoustic Unit, conducts a large proportion of our research in northeastern Alberta. During the peak of the Fort McMurray wildfire in early May, at least five of Dr. Bayne’s graduate students, myself included, scrambled to develop contingency plans for our research, which we had carefully planned over the previous 9 months as if they were our babies. Our lab sat glued to the news and the live-updating fire layer in Google Earth, as Dr. Bayne updated us on how many of the lab’s acoustic recording units had burnt to a crisp. All told, our lab lost over $30,000 in field equipment, in addition to the countless time and money spent contingency planning.
Ironically, I study a bird that thrives in post-fire boreal habitat, and my study area is a five-year old wildfire. My study species is the Common Nighthawk—a highly understudied nocturnal bird that is listed as Threatened under Canada’s Species at Risk Act. I chose to study Common Nighthawks near McClelland Lake in northeastern Alberta because the burned, sandy jack pine forest there is home to one of the densest populations of Common Nighthawks on the planet. The McClelland Lake area is the southern extent of the Richardson burn, which was even larger than the 2016 Fort McMurray wildfire, burning 700,000 ha of sandy jack pine forest and threatening Fort McMurray from the north in 2011. In addition to the nighthawks, the area is a haven for otherwise rare species; there are Canadian toads calling from every puddle, Olive-sided Flycatchers practically grow on trees, and Yellow Rails abound in the spectacular patterned fen that covers half of McClelland Lake itself.
During the first couple days of the Fort McMurray fire, I overheard someone ask Dr. Bayne how our lab was impacted; he said we were okay, except "she's totally screwed," pointing his finger at me. Yikes. My study area at McClelland Lake is at the very end of Highway 63. The only way to get there is through Fort McMurray, which at the time was threatened by a massive, raging wildfire nicknamed "The Beast." This created two major problems for my research: first, I wasn't sure I could get to my study area at all! Highway 63 is the only access point to the McClelland area, and it was closed for much of May with an advisory that it could be closed again at any time due to the ongoing active fire in the area. The second problem was that even if we could get to the McClelland area, we could no longer rely on Fort McMurray for supplies and emergency support, and there simply was no alternative up there.
As any graduate student can attest, I was hyper-invested in my PhD thesis and determined to collect data. In the spirit of "Alberta Strong," I buckled down for some creative contingency planning. Field ecology is a logistical nightmare on a good day, and this was one of the greatest logistical challenges I'd faced in my 10+ years as a field ornithologist. During May, I planned three different field seasons: one at my original study area near McClelland Lake, one north of Lac La Biche, and one in the Bruderheim area northeast of Edmonton. For McClelland (if we could get there), we would have to bring everything with us for four people for at least a month in case the highway closed again. So at the end of May, we rented a 400 L slip tank for gas, bought enough blue jugs to hold 500 L of water, cleared out a grocery store, gathered together the ultimate first aid kit, wrote a new safety protocol, and loaded up a truck and utility trailer with research equipment.
But there was also a third challenge: even if we could get to one of our three potential study areas, we had no way of getting around that study area because there was an ATV ban in place. ATVs pose a substantial fire risk because their mufflers can ignite fires, and Alberta was under extreme wildfire risk in May. Like other boreal researchers, our lab has traditionally used gas-powered ATVs to slog through the challenging terrain of Alberta’s boreal forest. Additionally, walking was not going to be effective for studying Common Nighthawks because these highly aerial birds can quickly travel large distances. As an avid cyclist, I joked offhand to Dr. Bayne about using bikes for field work. And then I remembered that some of the earliest fat bikes, those bikes with wide tires that Albertans ride in the winter, were actually designed for sand and all three of my planned field sites were sandy. I took a borrowed fat bike to a site in Bruderheim and was immediately convinced that fat bikes would be an effective, and fun, way to get around our study areas. Fat bikes would also reduce our carbon footprint, our fire hazard, and were sure to be a good workout. Above all else, fat bikes would ensure we could go wherever we needed to go without relying on ATVs.
Time-lapse test of a fat bike on the sand dunes of Bruderheim, AB. Photo: Jonathan DeMoor
The next step was to find fat bikes for the summer, and the Edmonton bike community had our backs! The University of Alberta’s Office of Sustainability was excited about the low-carbon aspect of our initiative, and lent a helping hand by approaching local bike shops and pushing the project forward. We eventually teamed up with two local shops—United Cycle and Hardcore Bikes—to put the plan into action. Fat bikes are expensive because of their specialized components and both community-minded bike shops saw that fat bikes would make our research possible, so they generously loaned us two bikes each! Our team of four would now be able to get around and do the research we had planned.
So our team loaded the fat bikes into our utility trailer with the rest of our gear and started driving north in late May. To our great pleasure and my fortune, the brave folks fighting the Fort McMurray wildfire had battled it down enough for us to get through town and to the McClelland area just in time for the nighthawks to arrive. And the nighthawks did not disappoint! These nocturnal creatures were back again in large numbers, and we were able to carry out one of the most intensive studies of the species to date. My thesis objective is to study the variation in the sounds that Common Nighthawks use to learn more about their habitat use. This year, we tagged and tracked several dozen birds within grids of acoustic recording units (ARUs) to determine whether their acoustic behaviour varies between activities.
The fat bikes also did not disappoint. They allowed our team to get everywhere, including down roads that would otherwise be inaccessible because of fallen burnt trees. We travelled to study sites by bike, tracked birds from bikes, and deployed acoustic recording units by bike. Often we biked at night, because Common Nighthawks are nocturnal, which made for Go-Pro videos with a very “Blair Witch Project” feel to them. We had so much fun on the fat bikes that when the ATV ban was lifted mid-season, we continued to use the fat bikes and left the ATVs in Edmonton. At an average fuel consumption of 2.5 L/day, we estimate our team saved over 400 L of gas this summer by using fat bikes instead of ATVs.
Looking forward to next year, I plan to head back to the McClelland Lake area to study Common Nighthawks and I’d like to take fat bikes again. Regardless of whether there’s an ATV ban in place, fat bikes are more sustainable, not to mention easier to maintain, safer, and more fun. Within Dr. Bayne’s lab, we’d like to explore using fat bikes in other study areas too. They won’t work everywhere because there’s too many wetlands in northern Alberta, but we think they might be a realistic alternative in some of our lab’s forested study areas. As for the Fort McMurray wildfire, the aftermath provides a wide range of ecology research opportunities, and our lab is excited to get out there and study the animals that live in post-burn areas. There is a whole community of animals that rely on wildfire to create habitat for them in the boreal forest, including my study species, the Common Nighthawk.
Here’s to future adventures with fat bikes & birds!
-Post by Elly Knight
The Boreal Avian Modelling Project (BAM) is seeking an avian ecologist to fulfil a postdoctoral position at the University of Alberta. BAM is a continental scale effort to understand the ecology and dynamics of avian populations and their habitats in the boreal forest of North America (for more details on BAM see www.borealbirds.ca). Working with a team of avian ecologists, conservation scientists and statisticians, the post-doctoral fellow will conduct science to support the characterization and identification of critical habitat for several boreal bird species in Canada, including Canada Warbler, Olive-sided Flycatcher, and Common Nighthawk. The position will involve collaboration with federal and provincial governments, industry, and other academic institutions.
We are seeking a candidate meeting the following criteria:
1) Self-motivated & able to confidently interact with people of varying backgrounds
2) Strong background in avian ecology and conservation science
3) Knowledge of regulatory requirements related to migratory birds
4) Experience with wildlife-habitat modelling & Geographic Information Systems, preferably at large scales
5) Excellent and demonstrated writing skills
6) Strong quantitative skills
The position is available immediately. We will accept applications until a suitable candidate is found. To apply, please provide a letter of interest, CV, and an example of your writing skills in the form of a peer-reviewed paper or thesis.
The position will be located at the University of Alberta in Edmonton, AB with an annual salary of $55,000 plus benefits. The length of the fellowship is 2.5 years.
Candidates should send their application package to:
Dr. Nicole Barker, BAM Coordinating Scientist
During the breeding season, male songbirds often have brightly coloured and contrasting feather patterns to attract females. These patterns often become more pronounced and defined in older adult males. One of the distinguishing features of the Canada Warbler is the adult male’s dark necklaced feather pattern, which gets darker and more distinct after their second year. Not only did our banded male from 2015 return to the same breeding location, but he came back in 2016 sporting a beautifully developed necklace, and was accompanied by a nesting female! Looks like a year abroad did wonders for this warbler’s appeal. Post and photos by Anjolene Hunt.
It’s always fun to change things up and work with a new species or in a new habitat. Songbird researcher Anjolene Hunt was happy to help Jesse Watson, Frank Pouw, and Walter the owl carry out Broad-winged hawk capture and transmitter attachment as part of the Migratory Connectivity Project, a collaborative effort between the Smithsonian Migratory Bird Center and the University of Alberta. Photos and post by Anjolene Hunt.
Black-backed woodpeckers make for noisy, but very photogenic, neighbours. Photo by Anjolene Hunt
A sharp-tailed grouse shows off her namesake feathers in a clearcut in northern Alberta. Photo by Elly Knight. | <urn:uuid:0156497e-db02-470a-a209-465be495a545> | 3.203125 | 3,521 | Content Listing | Science & Tech. | 42.190993 | 95,580,710 |
Total solar eclipses are rare, but our position in relation to the sun and moon suggests they should be pretty common. Every month, the moon passes between the Earth and the sun, giving the moon an easy opportunity to blot out our glowing star. Alas, this happens infrequently, and the moon's "wobble" is to blame.
The moon doesn’t orbit in a flat path, instead moving up and down as it circles the Earth. Scientists call this movement a wobble.
As the moon makes a complete lap around Earth, it certainly does pass between the Earth and sun, but because of its wobble, sometimes it passes too high or too low to actually eclipse the sun. When the moon’s wobbling transit brings it too high, the moon’s shadow “overshoots” the Earth. The opposite happens when its orbit is too low.
The moon's wobble, however, is not random. Instead, it's on a strict schedule, repeating itself approximately every 18 years, 11 days, and eight hours, a periodicity astronomers call the Saros cycle. Its consistency is one reason why scientists can make such precise predictions about when the next total solar eclipse will arrive.
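As a rough illustration of how that periodicity translates into calendar dates, here is a minimal sketch (not from the article; the eclipse time and the rounded Saros length of about 6,585.3 days are assumptions) that adds one Saros period to the August 21, 2017 eclipse:

```python
from datetime import datetime, timedelta

# One Saros period is roughly 6,585.32 days (about 18 years, 11 days, 8 hours).
SAROS = timedelta(days=6585, hours=8)

# Approximate time of greatest eclipse on August 21, 2017 (UTC) -- an assumed value.
eclipse_2017 = datetime(2017, 8, 21, 18, 26)

next_in_series = eclipse_2017 + SAROS
print(next_in_series)  # lands in early September 2035, the next eclipse in this Saros series
```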
Because of the moon’s scheduled wobble, the coming August 21 total solar eclipse is rare to begin with, but it’s extraordinarily rare for another reason. As the moon completely obstructs the sun, it will cast its shadow across the entirety of the United States — a thin, 70-mile wide path from Oregon to South Carolina. This cross-country route hasn’t occurred since 1776, when the United States declared its independence.
The last total solar eclipse to graze the U.S. occurred nearly 40 years ago, in 1979, but not nearly as many Americans lived close enough to see it. The moon’s shadow just passed over the Northwest, from Washington to North Dakota.
For guidance on how to watch the wobbling moon pass in front of the sun on August 21, Inverse provides insight on solar viewing glasses here and guidance on when to put them on (and take them off) here.
Description of two new species of Santelmoa (Teleostei, Zoarcidae) from the Southern Ocean
Detailed examination of eelpouts in collected material from the Gerlache Strait and the Bellingshausen Sea, during the Spanish Antarctic Expeditions Bentart 03 and Bentart 06, and from the Bransfield Strait, during the Danish Galathea 3 Expedition, at depths between 1,056 and 1,837 m, revealed two undescribed species of Santelmoa Matallanas 2010. Herein, Santelmoa fusca sp. nov. and Santelmoa antarctica sp. nov. are described on the basis of twelve specimens. Santelmoa fusca can be separated from all other Santelmoa species by the following characters: mouth terminal; two posterior nasal pores; lateral line double; two irregular rows of palatine teeth; dorsal fin rays 109–113; anal fin rays 88–94; vertebrae 27–29 + 87–91 = 114–118; two pyloric caeca well developed; scales reduced to tail; pelvic fins and vomerine teeth present. Santelmoa antarctica can be separated from all other Santelmoa species by the following characters: mouth subterminal; two posterior nasal pores; suborbital pores seven (6 + 1); lateral line double; single row of palatine teeth; supraoccipital dividing the posterior end of frontals; central radials notched; dorsal fin rays 109–112; anal fin rays 89–93; vertebrae 27 + 89–92 = 116–119; two pyloric caeca well developed; scales, ventral fins and vomerine teeth present. Santelmoa fusca and S. antarctica can readily be separated from each other by squamation (reduced to tail vs. on the tail and on the posterior part of body); suborbital pore pattern (6 + 0 vs. 6 + 1), as well as several morphometric characters. The relationships of the two new species with congeners are discussed.
Keywords: Eelpouts, Santelmoa fusca, Santelmoa antarctica
We thank the scientific team, the captain, crew and UTM technicians of the R/V Hespérides for their help during the Bentart 03 and Bentart 06 cruises, on which many specimens were captured. Special thanks to José Castro “Córdoba” for his expertise in fishing with the traps and to Dr. Ignacio Olaso for helping onboard with the collection of type specimens. We are also grateful to Carmen Benito for the X-rays, Jordi Corbera for the illustrations, and Muriel, daughter of the first author, for the English correction of the draft manuscript. This manuscript also benefited from the constructive criticism by reviewers (Gento Shinohara and Mario la Mesa). The Bentart cruises were supported by the Spanish Antarctic Programme (CICYT), and the Galathea 3 Expedition was conducted under the auspices of the Danish Expedition Foundation. This study was supported by a grant from the Ministerio de Ciencia e Innovación, Spain: “Culminación del estudio taxonómico integrado de Zoárcidos (Teleostei, Perciformes) antárticos (CTM2011-1585-E)”, and a grant from the Danish National Research Foundation, “Evolution of Polar fishes”.
2 September 2015
Winking exoplanets could shed light on distant comet strikes
A WINK’S as good as a nod to an astronomer. Seeing an exoplanet suddenly brighten might hint that something has crashed into it. Information from that event could tell us a lot about the planet and its neighbours.
Current telescopes aren’t sensitive enough to pick up light from an exo-Jupiter, but the next generation might be. To figure out how to tell if a comet has struck one, Laura Flagg of Northern Arizona University looked to comet Shoemaker-Levy 9, which slammed into Jupiter in 1994.
“I always imagine it as a splash,” she says. “The comet broke up into much smaller pieces, and the particles settled in the stratosphere.”
Flagg’s analysis shows that Jupiter, if seen as a point of light in another solar system, would register only a small change in the months following the crash. But at near-infrared wavelengths where methane typically absorbs starlight, the planet could get twice as bright, since shinier dust debris would cloak methane’s spectral signature (Icarus, doi.org/65v).
If a future telescope sees a quick brightening, it could tell us about the planet’s atmosphere, and whether the system had comets, asteroids or another planet causing these objects to collide.
This article appeared in print under the headline “Winking exoplanets hint at comet strikes”
Real time imaging and transcriptome analysis of medaka aboard space station
Space travel in a reduced gravity environment can have lasting effects on the body. For example, research clearly shows that astronauts undergo a significant drop in bone mineral density during space missions, but the precise molecular mechanisms responsible for such changes in bone structure are unclear.
(a-d) Whole-body imaging of the osterix-DsRed transgenic line. The left-side images show the same ground control at day 1; and the right-side images, the same flight medaka at day 1. Arrows point to the head and fin region. All images show ventral views. Montage images were made from 6 captured optical images, divided by dotted lines (a,b). The white region shows an osterix-DsRed fluorescent signal. Embedded views show the enlarged head region (c,d). (e) The fluorescent intensity from day 1 to 7 of observation day constantly increased in the flight group. (f-h) The representative visualizing data for osterix-DsRed/TRAP-GFP in the flight group. All images show ventral views in the head region. (i-l) The merged images were captured by 3D views for osterix-DsRed and TRAP-GFP in the pharyngeal bone region of the double transgenic line. The pharyngeal bone region in the ground control (i) or the flight (k) group at day 4. The image for TRAP-GFP in the pharyngeal bone region of "i" (j) or "k" (l). lp, lower pharyngeal bone; c, cleithrum. GFP signals identify osteoclasts (OC).
Credit: Tokyo Institute of Technology
Now, Akira Kudo at Tokyo Tech, together with scientists in Japan and collaborators in other countries, has performed remote, real-time live imaging of fluorescent signals derived from osteoblasts and osteoclasts of medaka fish after only one day of exposure to microgravity aboard the International Space Station (ISS). They found increases in both osteoblast- and osteoclast-specific promoter-driven GFP and DsRed signals beginning one day after launch and continuing for up to eight days.
In their experiments, the team used four different double transgenic medaka lines, focusing on up-regulation of fluorescent signals from osteoblasts and osteoclasts to clarify the effect of gravity on osteoblast-osteoclast interaction. They also studied changes in gene expression in the transgenic fish by so-called transcriptome analysis.
These findings suggest that exposure to microgravity induced an immediate "dynamic alteration of gene expressions in osteoblasts and osteoclasts." These experiments, based on real-time imaging of medaka controlled from Earth together with transcriptome analysis, could be the prelude to the establishment of a new scientific area of research in "gravitational biology".
The live-imaging of fluorescence microscopy signals from the fish aboard the ISS were monitored remotely from Tsukuba Space Center in Japan.
Live-imaging of osteoblasts showed the intensity of osterix- and osteocalcin-DsRed in pharyngeal bones to increase one day after launch. This increased effect continued for eight days for osterix- and 5 days for osteocalcin.
In the case of osteoclasts, the fluorescent signals observed from TRAP-GFP and MMP9-DsRed increased significantly on the fourth and sixth days after launch.
The fluorescence analysis was complemented by transcriptome analysis to measure gene expression in the transgenic fish. The researchers state that "HiSeq from pharyngeal bones of juvenile fish at day 2 after launch showed up-regulation of 2 osteoblast- and 3 osteoclast-related genes".
Whole-body gene ontology analysis of the RNA-Seq data also showed significantly enhanced transcription associated with the "nucleus" category, with transcription regulators more strongly up-regulated at day 2 than at day 6.
Finally, Kudo and the team identified 5 genes: (c-fos and jun-b, pai-1 and ddit4, and tsc22d3) that were all up-regulated in the whole-body on days 2 and 6, and in the pharyngeal bone on day 2.
Life in so-called 'microgravity' environments -- where the force of gravity is considerably less than on Earth -- can cause significant problems for the human body. Astronauts who spend a number of months in space have been shown to suffer from reduced bone mineral density, leading to skeletal problems. Surprisingly, in astronauts on Skylab flights the loss of calcium started at least 10 days after launch, unlike other symptoms that appear early in orbit.
The precise molecular mechanisms responsible for loss of bone density are not yet fully understood. The current study by Kudo and his team is a major step towards uncovering the mechanisms governing changes in bone structure immediately after the onset of microgravity, when bone loss is triggered. By remote live-imaging from Tsukuba Space Center of the behavior of medaka on board the ISS, they found significant increases in both osteoblast and osteoclast specific promoter-driven GFP and DsRed after exposure to microgravity. The findings imply that changes in osteoblasts and osteoclasts occur very soon after launch.
In the next space experiment, Kudo and colleagues will clarify the role of glucocorticoid receptor (GR) on cells in microgravity.
Emiko Kawaguchi | EurekAlert!
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
New US-Israeli study shows how tiny man-emitted particles affect weather, crops
A study conducted jointly by American and Israeli researchers has concluded that even the tiniest of man-emitted particles, much smaller than previously thought, are powerful enough to cause thunderstorms that can cause severe damage to crops.
A German research aircraft measuring clouds in the Amazon
A new study co-conducted by a Jerusalem Hebrew University professor has found that thunderstorms powerful enough to cause severe damage to crops can be formed by even the tiniest man-emitted particles.
The study examined the impact of aerosols – particles that are less than one-thousandth of the width of a human hair – on phenomena such as soil erosion and water runoff. It was conducted in the Amazon, a setting that allowed the researchers to accurately observe the effects of these tiny particles, which come from urban and industrial air pollution.
“We showed that the presence of these particles is one reason why some storms become so strong and produce so much rain," said Dr. Jiwen Fan of the Department of Energy’s Pacific Northwest National Laboratory, the lead-author of the study. "In a warm and humid area where atmospheric conditions are otherwise very clean, the intrusion of very small particles can make quite an impact."
Professor Daniel Rosenfeld, of The Hebrew University of Jerusalem’s Institute of Earth Sciences, the second-author, added, "This groundbreaking research strongly suggests that mankind has likely altered the rainfall and weather in densely populated tropical and summer monsoon areas such as India, Southeast Asia, Indonesia, and even southeastern USA."
The world’s population growth will create new pressures on natural habitats, resources and urban development, and create new challenges for biodiversity, food and water security.
Satellite data is helping to build a global evidence base of how our world is changing, and provides the intelligence, monitoring and global connectivity capabilities to ensure natural resources are managed effectively to achieve maximum, sustainable productivity.
The Catapult’s Sustainable Living Programme identifies and creates opportunities to embed satellite technology and data, into services and solutions that enable the agriculture sector, extractive industries, water and energy operators to grow sustainably.
Our role is to understand the challenges faced by those working in these sectors, raise awareness of satellite capabilities, and work in collaboration with potential customers and end users to develop innovative solutions.
Precision farming and sensor networks
We are working on a number of applications and research projects with agri-tech partners using optical and Synthetic Aperture Radar (SAR) Earth observation (EO) data to monitor crop growth and health in the UK and internationally. We are working with partners in the development of sensor networks for arable and livestock farming to demonstrate the benefits of an integrated communications solution, connecting ground based sensors to a central hub. We are also exploring satellite capabilities for the monitoring of perishable goods in-transit.
Monitoring capabilities for natural resources
Our environmental monitoring platform concept brings together various forms of EO data and analysis techniques into a toolkit, which will enable organisations to plan for and monitor the impact of activities, such as mining and major construction projects over an extended period. Ground and aerial data will be assimilated into the platform with new ways to visualise the data. Initial projects with organisations based in Chile are developing the concept into practical demonstrators to monitor mining operations and the environmental impact of mining activities.
Water resource management
The team are supporting developments in the use of optical and SAR EO data to monitor fresh water resources in riverine, coastal and estuarine environments and the development of wide area sensor networks for water quality monitoring in remote areas.
Energy infrastructure monitoring
We are exploring opportunities to improve and expand satellite services for the energy distribution networks, in particular, the advantages satellite imaging can offer to infrastructure monitoring.
If you would like more information, or to discuss opportunities, please email: email@example.com
The Age of Humans Controlling Microbes
News Oct 07, 2015
We now have more control than ever over microorganisms thanks to the work of scientists like synthetic biologist Pamela Silver, Ph.D., Wyss Core Faculty and the Elliot T. and Onie H. Adams Professor of Biochemistry and Systems Biology at Harvard Medical School, who have uncovered new ways for programming and directing microbial cells to carry out tasks useful to humans and the environment.
The concept, according to Silver, is that by genetically programming cells like common E. coli or other bacteria species to respond to and produce certain chemicals, we can turn these microbes into so-called biological devices. In the October issue of Current Opinion in Chemical Biology, Silver writes that these "biological devices" – made up of engineered microbes – can be used for a whole host of applications.
One potential use is to detect the presence of toxins or other valuable chemicals. This could prove especially useful for identifying the presence of arsenic, other heavy metals, or organic pollutants.
Not only could microbes be programmed to sense and detect chemicals in the environment, but they could also be engineered to do the same inside our own bodies. Sensing the presence of biochemical signals, they could be rigged to "remember" and "report" the presence of these signals by changing their visible color or by other easily interpreted means. This new form of diagnostic tool could give doctors new insight into the health of their patients by identifying subtle signals within the body that could indicate health conditions long before more serious symptoms present.
Beyond diagnosis, experimental work has already demonstrated that bacteria could be harnessed to attack and treat cancer tumor cells, potentially adding a new weapon to the tool belts of cancer clinicians. And, these engineered microbes could potentially be used to silence countless other genes that are responsible for many devastating diseases.
But in order to take these biological devices from the lab and put them to use in the real world, engineered organisms must first be proven to be reliable, predictable and cost effective.
Edited by Jamie (ScienceAid Editor), Taylor (ScienceAid Editor), Jen Moreau, vcdanht
A complex compound contains a central metal ion that is surrounded by ligands. Ligands are ions or molecules that donate a pair of electrons to the metal ion (co-ordinate bonding) and are therefore Lewis bases. The diagram below shows an example of a complex ion.
1. The example above is the hexaamminecobalt ion. It consists of a cobalt(II) ion with 6 ammonia ligands bonded to it, giving it a co-ordination number of 6. However, the co-ordination number is not always just the number of ligands; in fact this is only the case with unidentate ligands, where each ligand forms a single bond to the metal ion (H2O, NH3 and Cl-). [Figure: hexaamminecobalt ion]
2. Ligands can also be bidentate, where they have two lone pairs to donate to the central atom. An example of a bidentate ligand is the ethanedioate ion; this has two separate oxygen atoms, each with a free lone pair, so the species donates two pairs and forms two coordinate bonds. [Figure: bidentate ligand]
3. A multidentate ligand forms many coordinate bonds. An example is EDTA4-, which uses all 6 of its donor sites to bind to the metal ion. Also, haem, part of the protein haemoglobin (also hemoglobin), is an iron complex with a multidentate ligand. [Figure: multidentate ligand]
Shapes of Complex Ions
Depending on the type of ligand, and the coordination number, the shape of the complex varies.
1. The octahedral shape is the most common. It occurs when there are 6 ligands. [Figure: octahedral complex]
2. The next most common is tetrahedral. It occurs in complexes with Cl- ligands because only 4 of them can fit around the metal ion, so this is the arrangement they take. [Figure: tetrahedral complex]
3. Silver(I) ions form linear complexes, where one ligand sits on either side of the silver ion with a 180° angle between them. [Figure: linear complex]
A property of transition metals is that they form coloured compounds, and this is also true of transition metal complexes. These colours are determined by:
- Oxidation state.
- Co-ordination number.
- The identity of the ligand.
Therefore, when any of these is altered, the colour of the complex changes as well.
Colour in solutions arises when a species absorbs visible light, meaning we see a combination of the remaining colours. An electron is excited from its ground state to a higher energy state. This change in energy level is called ΔE.
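For a sense of scale, here is a minimal sketch (the 510 nm absorbed wavelength is an assumption chosen purely for illustration) that converts an absorbed wavelength into the corresponding ΔE using E = hc/λ:

```python
# Energy gap corresponding to an absorbed wavelength: delta_E = h * c / wavelength.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
wavelength = 510e-9  # assumed absorbed wavelength in metres (green light)

delta_E = h * c / wavelength
print(f"Delta E = {delta_E:.2e} J per photon")              # about 3.9e-19 J
print(f"        = {delta_E * 6.022e23 / 1000:.0f} kJ/mol")  # roughly 230 kJ/mol
```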
- By using light, it is possible to determine the concentration of an ion by looking at the intensity of its colour. To do this, we use a spectrophotometer, which works by passing visible and ultraviolet light of varying frequencies through the sample. The emergent light is detected.
- The amount of light absorbed is proportional to the concentration of the ion.
- Therefore, by knowing how much light is absorbed, the concentration of the absorbing species can be determined. In many compounds, however, there is not much of a difference in the colours of varyingly concentrated compounds, so it is necessary to add another ligand in a substitution reaction. A substance used for this is bipyridyl (bipy), which is much more sensitive to changes in concentration.
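The proportionality between absorbance and concentration is the Beer-Lambert law, A = εlc. The sketch below is only an illustration of that relationship; the molar absorptivity, path length and absorbance are assumed values for a hypothetical coloured complex, not data from this article:

```python
# Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l).
epsilon = 1.1e4    # assumed molar absorptivity, L mol^-1 cm^-1
path_length = 1.0  # cuvette path length, cm
absorbance = 0.45  # assumed absorbance at the wavelength of maximum absorption

concentration = absorbance / (epsilon * path_length)
print(f"Concentration of absorbing complex ~ {concentration:.2e} mol/L")  # about 4.1e-5 mol/L
```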
Applications of Complex Ions
As you may know from biology, iron exists in the blood. The protein haemoglobin has an iron atom that is coordinately bonded to four nitrogen atoms that are part of larger molecules. Oxygen coordinates with the Fe2+ ion and can be transported. Carbon monoxide forms a more stable complex than oxygen. This means oxygen uptake is inhibited and carbon monoxide poisoning can result.
A platinum complex ion is used in the anti-cancer drug Cisplatin: it consists of two ammonia and two chloride ligands on a platinum.
Different silver complexes also have practical applications.
- [Ag(NH3)2]+ : used in Tollens' reagent, which tests for aldehydes and ketones.
- [Ag(S2O3)2]3- : formed in photography when silver bromide that hasn't been exposed to light is dissolved in a sodium thiosulphate solution.
- [Ag(CN)2]- : a complex formed when Ag salts are dissolved in potassium cyanide; this solution is used as the electrolyte in silver plating.
Please solve each of the following problems with all calculations shown.
1. A 25.00 mL portion of a solution known to contain NaBr was treated with excess AgNO3 to precipitate 0.02578 g of AgBr. Determine the concentration of NaBr in the original solution in molarity and ppm.
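A sketch of the kind of calculation problem 1 requires is shown below. It illustrates the method only and is not the supplied answer key; the formula masses are rounded standard values:

```python
# Gravimetric determination of NaBr via AgBr precipitation (1 mol AgBr per mol NaBr).
m_AgBr = 0.02578         # g of AgBr precipitate
FM_AgBr = 187.77         # g/mol
FM_NaBr = 102.89         # g/mol
V_sample = 25.00 / 1000  # L of original solution

mol_NaBr = m_AgBr / FM_AgBr
molarity = mol_NaBr / V_sample
ppm = mol_NaBr * FM_NaBr / V_sample * 1000  # mg NaBr per litre, ~ppm for a dilute aqueous solution

print(f"NaBr molarity ~ {molarity:.2e} M")  # about 5.5e-3 M
print(f"NaBr ~ {ppm:.0f} ppm")              # about 565 ppm
```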
2. How many mL of a 3.45% solution of alcoholic dimethyl glyoxime would you need to provide a 50% excess with 1.8765 g of steel known to contain 2.917 wt% Ni. The density of the dimethyl glyoxime solution is 0.802 g/mL.
Ni2+ + 2 DMG → bis(dimethylglyoximate)nickel(II) + 2 H+
FM for Ni2+ = 58.69 g/mol
FM for DMG = 116.12 g/mol (The question lists density at 0.802 g/mL; please use this figure)
FM for bis(dimethylglyoximate)nickel(II) = 288.91 g/mol
3. 25 multivitamin tablets containing iron with a total mass of 50.345 g were ground together and mixed thoroughly. 1.8320 g of this sample was then dissolved in 10.00 mL of 1.5 M HNO3 and heated to ensure the oxidation of all iron to the Fe+3 oxidation state. NH3 was added to cause the precipitation of FeOOH*xH2O (the red gelatinous substance you formed in lab). The precipitate was then ignited to produce 0.2167 g of Fe2O3. Determine the average mass of iron (II) sulfate heptahydrate (FeSO4*7H2O) the source of iron in many dietary supplements) in each multivitamin tablet.
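A sketch of the stoichiometric chain for problem 3 (Fe2O3 back to FeSO4·7H2O per tablet) is given below; it is an illustration of the approach with rounded formula masses, not the supplied answer key:

```python
# From the Fe2O3 gravimetric finish back to FeSO4.7H2O per tablet.
m_Fe2O3 = 0.2167        # g of ignited precipitate
FM_Fe2O3 = 159.69       # g/mol
FM_FeSO4_7H2O = 278.01  # g/mol
m_sample = 1.8320       # g of ground tablet powder analysed
m_total = 50.345        # g, total mass of the 25 tablets

mol_Fe = 2 * m_Fe2O3 / FM_Fe2O3              # 2 mol Fe per mol Fe2O3
m_FeSO4_in_sample = mol_Fe * FM_FeSO4_7H2O   # 1 mol Fe per mol FeSO4.7H2O
m_FeSO4_total = m_FeSO4_in_sample * (m_total / m_sample)
print(f"FeSO4.7H2O per tablet ~ {m_FeSO4_total / 25:.2f} g")  # roughly 0.83 g
```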
4. Distinguish between the terms end point and equivalence point.
5. Explain the difference between a direct titration and a back titration.
6. Explain the properties that make a substance a primary standard.
7. A solution of HCl is standardized against primary standard Na2CO3 using bromocresol green as the indicator. A 0.2107 g sample of Na2CO3 required 37.98 mL of HCl solution to turn the solution from blue to green. The solution was boiled briefly and returned to a blue color. This solution then required 0.17 mL of HCl solution to turn greenish yellow. Determine the molarity of the HCl solution.
8. The HCl solution standardized in #7 is then used to titrate an unknown concentration of a NaOH solution with phenolphthalein as the indicator. A 25.00 mL quantity of the NaOH solution required 16.63 mL of HCl solution to turn the solution from a deep reddish pink to completely colorless. Determine the molarity of the NaOH solution.
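For problems 7 and 8, the arithmetic follows the pattern sketched below. This is illustrative only; it assumes the standard 2:1 HCl:Na2CO3 stoichiometry to the final end point and simply adds the 0.17 mL used after boiling:

```python
# Standardisation of HCl against primary-standard Na2CO3 (2 HCl : 1 Na2CO3).
m_Na2CO3 = 0.2107              # g of primary standard
FM_Na2CO3 = 105.99             # g/mol
V_HCl = (37.98 + 0.17) / 1000  # L of titrant to the final end point

mol_Na2CO3 = m_Na2CO3 / FM_Na2CO3
M_HCl = 2 * mol_Na2CO3 / V_HCl
print(f"HCl ~ {M_HCl:.4f} M")  # about 0.104 M

# Problem 8 then follows directly from M1*V1 = M2*V2:
M_NaOH = M_HCl * 16.63 / 25.00
print(f"NaOH ~ {M_NaOH:.4f} M")  # about 0.069 M
```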
When bridges and buildings begin to vibrate, whether from the wind or traffic or another stressor, they can literally shake themselves to pieces. Watch a 1940 clip of the Tacoma Narrows Bridge galloping up and down and then ripping like soggy cardboard to get a sense of the effects. This only happens when the vibrations happen to match what's called the resonant frequency of the structure, and engineers try to make sure this won't happen. Skyscrapers even have devices called dampers on their roofs to absorb energy.
But natural, geological bridges like the great sandstone arches of Arches National Park in Utah, have no such defenses; they are still standing because over the eons they have shed pieces of themselves and adjusted their tension to weather the energy that washes up against them. However, with humans around—along with our helicopters, boats, highways, and everything else—the landscape of that energy has changed.
Jeffrey Moore, a professor of geology at University of Utah, is looking to find out what the resonant frequencies of natural arches are and what vibrations, exactly, they are vulnerable to. In a recent study of the Rainbow Bridge in the remote Four Corners Region of the Southwest, published in Geophysical Research Letters, he and his team report something stunning: The bridge is so sensitive it picked up what was likely a man-made earthquake in Oklahoma, hundreds of miles away. The waves of nearby Lake Powell showed up in its tremblings, too. The impression the report conveys is of a structure of tremendous sensitivity and resilience that should be monitored going forward to see how it responds to shocks.
Moore and his group did their study at the request of the National Park Service, prompted in its turn by a committee of Native American tribal organizations, for whom the bridge is a sacred site. Last March they brought four seismometers, which measure vibration, to the bridge. A researcher rappelling down onto the span from a cliff placed two on the structure itself, and the others were deployed on either side of the bridge, some meters away. The researchers collected readings for 22 hours and built a mathematical model of the structure using the measurements they gathered and information about its shape.
Right away the data showed that the arch was picking up a range of vibrations. In the juddering lines of the seismograph readings they could watch the winds along the canyon dying down over the day, as well as the reverberations of the waves from Lake Powell. And “we could see things in our data straight away that looked like earthquakes,” says Moore. Two small local earthquakes rippled through during the measurement period, but there was one set of vibrations that looked different, with a much narrower range of energy that suggested it had traveled a long way. This appears to be from an earthquake induced by drilling to store waste water from oil production on the Oklahoma-Kansas border, Moore says. “The most interesting thing is just the recognition that this distant event is felt, is absorbed by Rainbow Bridge ... Even here in this incredible remote place, this earthquake from Oklahoma is rattling the bridge.”
The distant earthquake and the vibrations from Lake Powell, which is an artificial reservoir, activated the bridge at one of its resonant frequencies, the group notes. It's not clear what that means for its long-term stability. But it is a reminder that our activities are not anywhere near as constrained in space as we like to think they are. The model the team made provides a benchmark for the bridge going forward, however—future measurements will reveal whether the bridge's properties are changing as a result of vibrational damage. “We're poised to come back and re-measure the resonant properties,” says Moore. If a structure develops cracks and grows softer and less stiff, its resonant frequencies will drop. “If we come back in five years and all the resonant frequencies have dropped by, say, 10 percent, then we'd have some data to suggest that the arch has been damaged during that time,” he says. Moore hopes that this information, which his group is gathering about numerous natural bridges in the Southwest, will help the Park Service and other management organizations make decisions that will help extend the lifetimes of these magnificent structures.
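To make the stiffness-frequency link concrete: for an idealised single-degree-of-freedom oscillator, f = (1/2π)√(k/m), so a loss of stiffness shows up directly as a drop in resonant frequency. The numbers below are invented for illustration and are not measurements from the bridge:

```python
import math

def resonant_frequency(stiffness, mass):
    """Natural frequency (Hz) of an idealised single-degree-of-freedom oscillator."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

mass = 1.0e6                  # kg, assumed effective modal mass
k_healthy = 2.0e8             # N/m, assumed effective stiffness
k_damaged = 0.81 * k_healthy  # 19% stiffness loss from hypothetical cracking

f0 = resonant_frequency(k_healthy, mass)
f1 = resonant_frequency(k_damaged, mass)
print(f"healthy: {f0:.2f} Hz, damaged: {f1:.2f} Hz ({100 * (1 - f1 / f0):.0f}% drop)")
```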
Now, for the first time, a distributed computing experiment has produced significant results that have been published in a scientific journal. Writing in the advanced online edition of Nature magazine, Stanford University scientists Christopher D. Snow and Vijay S. Pande describe how they - with the help of 30,000 personal computers - successfully simulated part of the complex folding process that a typical protein molecule undergoes to achieve its unique, three-dimensional shape. Their findings were confirmed in the laboratory of Houbi Nguyen and Martin Gruebele - scientists from the University of Illinois at Urbana-Champaign who co-authored the Nature study.
Every protein molecule consists of a chain of amino acids that must assume a specific three-dimensional shape to function normally.
"The process of protein folding remains a mystery," said Pande, assistant professor of chemistry and of structural biology at Stanford. "When proteins misfold, they sometimes clump together, forming aggregates in the brain that have been observed in patients with Alzheimer's, Parkinson's and other diseases."
How proteins fold into their ideal conformation is a question that has tantalized scientists for decades. To solve the problem, researchers have turned to computer simulation - a process that requires an enormous amount of computing power.
"One reason that protein folding is so difficult to simulate is that it occurs amazingly fast," Pande explained. "Small proteins have been shown to fold in a timescale of microseconds [millionths of a second], but it takes the average computer one day just to do a one-nanosecond [billionth-of-a-second] folding simulation."
Simulating protein folding is often considered a "holy grail" of computational biology, he added. "This is an area of hot competition that includes a number of heavy weights, such as IBM's $100-million, million-processor Blue Gene supercomputer project."
Two years ago, Pande launched Folding@home - a distributed computing project that so far has enlisted the aid of more than 200,000 PC owners, whose screensavers are dedicated to simulating the protein-folding process.
The Stanford project operates on principles similar to earlier projects, such as SETI@home, which allows anyone with an Internet connection to search for intelligent life in the universe by downloading special data-analysis software. When a SETI@home screensaver is activated, the PC begins processing packets of radio telescope data. Completed packets are sent back to a central computer, and new ones are assigned automatically.
For the Nature study, Pande and Snow - a biophysics graduate student - asked volunteer PCs to resolve the folding dynamics of two mutant forms of a tiny protein called BBA5. Each computer was assigned a specific simulation pattern based on its speed.
With 30,000 computers at their disposal, Pande and Snow were able to perform 32,500 folding simulations and accumulate 700 microseconds of folding data. These simulations tested the folding rate of the protein on a 5-, 10- and 20-nanosecond timescale under different temperatures. Using these data, the scientists were able to predict the folding rate and trajectory of the "average" molecule.
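A quick back-of-the-envelope check of those aggregate numbers (the per-trajectory mix is assumed, since only the totals are quoted) shows why distributing the work mattered:

```python
# 32,500 trajectories adding up to ~700 microseconds of simulated folding time.
n_simulations = 32_500
total_microseconds = 700

avg_ns = total_microseconds * 1000 / n_simulations
print(f"average trajectory length ~ {avg_ns:.1f} ns")  # about 21.5 ns

# At the ~1 ns of simulation per CPU-day quoted above, the same aggregate would take
# roughly 700,000 CPU-days on a single machine of that era.
print(f"single-CPU equivalent ~ {total_microseconds * 1000:,.0f} CPU-days")
```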
To confirm their predictions, the Stanford team asked Gruebele and Nguyen to conduct "laser temperature-jump experiments" at their Illinois lab. In this technique, an unfolded protein is pulsed with a laser, which heats the molecule just enough to cause it to bend into its native state. A fluorescent amino acid imbedded inside the molecule grows dimmer as the protein folds. Researchers use the changing fluorescence to measure folding events as they occur.
The results of the laser experiments were in "excellent agreement" with the Folding@home predictions, Pande and his colleagues concluded.
Specifically, the computers predicted that one experimental protein would fold in 6 microseconds, while laboratory observations revealed an actual folding time of 7.5 microseconds.
"These experiments represent a great success for distributed computing," Pande said. "Understanding how proteins fold will likely have a great impact on understanding a wide range of diseases."
The Nature study was supported by the National Institutes of Health, the American Chemical Society, Intel and the Howard Hughes Medical Institute.
This article was co-written by Stanford science writing intern Caroline Uhlik.
Caroline Uhlik and Mark Shwartz
COMMENT: Vijay Pande, Chemistry: (650) 723-3660, firstname.lastname@example.org
EDITORS: The Nature study, "Absolute comparison of simulated and experimental protein-folding dynamics," is available online at www.nature.com. A photograph of Stanford Assistant Professor Vijay Pande can be downloaded at http://newsphotos. | <urn:uuid:72dc16e0-8716-4942-ae76-b58b4ce97b18> | 3.625 | 965 | News Article | Science & Tech. | 26.803943 | 95,580,843 |
The Lambda-CDM model (ΛCDM or LCDM) is a Big Bang Cosmological model including Cold Dark Matter and a Cosmological Constant (Lambda or Λ). The cosmological constant provides an explanation for the universe's observed accelerating expansion. Dark Matter provides an explanation for the dynamics of galaxies and Galaxy Clusters that appear to have more gravitational attraction than their light can explain.
Other proposed explanations for these phenomena include some as-yet-unrecognized behavior of Gravity or of dynamics.
The model relates many observed phenomena to six apparently arbitrary parameters determinable from observation:
- the physical baryon density
- the physical Cold Dark Matter density
- the dark energy density
- the scalar spectral index of the primordial fluctuations
- the curvature fluctuation amplitude
- the Optical Depth to Reionization
The cited parameter list may differ from the above, but in combination, they would be equivalent to the above through relations assumed by the model. For example, the Optical Depth to Reionization represents a measure of how long ago the EOR took place, which is also sometimes cited as a Redshift (Z ~ 11).
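To illustrate how such parameters feed into observable quantities, the short sketch below evaluates the Hubble expansion rate H(z) for a flat ΛCDM universe; the parameter values are typical published figures quoted only for illustration, not part of the original text.

```python
# Hubble parameter H(z) for a flat Lambda-CDM universe (radiation neglected).
# Parameter values are typical published figures, used here only for illustration.
import math

H0 = 67.4                      # Hubble constant, km/s/Mpc (illustrative value)
omega_m = 0.315                # total matter density: baryons + Cold Dark Matter (illustrative)
omega_lambda = 1.0 - omega_m   # cosmological-constant term, assuming a flat universe

def hubble(z: float) -> float:
    """Expansion rate at redshift z, in km/s/Mpc."""
    return H0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

for z in (0.0, 0.5, 1.0, 11.0):   # z ~ 11 is roughly the reionization epoch mentioned above
    print(f"z = {z:5.1f}  ->  H(z) = {hubble(z):8.1f} km/s/Mpc")
```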
Cold Dark Matter (CDM)
Einstein-de Sitter Model
Epoch of Galaxy Formation | <urn:uuid:57ebbc65-e187-4c66-9836-2f1cdb4b378d> | 2.875 | 211 | Knowledge Article | Science & Tech. | 17.210385 | 95,580,858 |
Caretta caretta (South East Indian Ocean subpopulation)
Scientific Name: Caretta caretta (South East Indian Ocean subpopulation)
See Caretta caretta
Red List Category & Criteria: Near Threatened ver 3.1
Assessor(s): Casale, P., Riskas, K., Tucker, A.D. & Hamann, M.
Reviewer(s): Wallace, B.P. & Pilcher, N.J.
The South East Indian Ocean Loggerhead subpopulation nests in Western Australia. Its marine habitats extend throughout a wide area including the Timor and Arafura Seas (Hamann et al. 2013, Limpus 2008) (Figure 2 in the Supplementary Material). This subpopulation has been identified as one genetic stock different from other Loggerhead stocks (Shamblin et al. 2014) supporting its designation as a single subpopulation, or regional management unit (RMU) (Wallace et al. 2010).
The subpopulation does not qualify for any threatened category under criterion D and could not be assessed under criteria A and E due to lack of data. Data are uncertain for assessing the sub-population under criterion C, while data are incomplete for assessing the sub-population under criterion B. Specifically, the subpopulation meets two out of three subcriteria needed for a threatened category (area of occupancy and number of locations), while the third subcriterion cannot be assessed due to lack of data. In such circumstances the subpopulation qualifies for the Near Threatened category, also considering the current threats.
Despite several gaps in knowledge, the subpopulation cannot be considered Data Deficient. Only a subpopulation for which both Least Concern and Critically Endangered are plausible categories qualifies for the Data Deficient category (IUCN 2014). By contrast, while the uncertainty of the data would allow the South East Indian Ocean Loggerhead subpopulation to qualify for the Least Concern category (criteria A, B, C, D), the available data show that the subpopulation does not meet the requirements for the Critically Endangered category under criteria B, C and D. Regarding criterion A, a reduction of 80% or more (required for the CR category) is very unlikely to have occurred (A2) or to occur in the future (A4), even considering the current anthropogenic threats.
A reduction of the subpopulation is suspected owing to threats such as heavy animal predation on clutches and anthropogenic disturbance at nesting sites (Baldwin et al. 2003, Hamann et al. 2013). For the Loggerhead global and subpopulation assessments we only considered time-series datasets of ≥10 yr. Unfortunately, such datasets are not available for the South East Indian Ocean subpopulation. For this reason, criterion A could not be applied to this subpopulation.
Since the subpopulation area includes the large marine area from the long coast of the Western Australia to Indonesia, the extent of occurrence (EOO) exceeds the threat category threshold (20,000 km²) for criterion B1. Regarding criterion B2, the area of occupancy (AOO) for sea turtles is quantified based on linear extent of nesting beach habitat, which represents the smallest habitat for a critical life stage. The total length of monitored Loggerhead nesting beaches in Western Australia (Dirk Hartog Island, Ningaloo, Muiron Islands, Gnaraloo) is 64 km (Coote et al. 2012, Riskas 2014, R. Prince, A. Tucker pers. comm). Since the appropriate scale for AOO is a grid 2x2 km, the above linear measure is converted to 128 km², which meets the threshold for the Endangered category (<500 km²). However, diffuse low-level nesting occurs at other non-quantified and non-monitored beaches within the ~450 km of coastline between the northern and southern extent of known nesting (estimated through Google Earth), making the actual AOO uncertain but maximum 900 km², which meets the 2,000 km² threshold for the Vulnerable category. Key nesting beaches are monitored to varying degrees and can be grouped in four locations (Dirk Hartog Island, Gnaraloo, Ningaloo, Muiron islands) according to a geographic range where a single threat can affect all the beaches in each group, like an increased predator population, unmanaged vehicular traffic, coastal development (which are all plausible threats for this subpopulation; see the Threats section). However, the current management of at least a part of those beaches and the lack of a clear assessment of threats make the identification and quantification of locations, as defined for this Criterion (IUCN 2014), questionable. Regarding the third subcriterion (continuing decline or of extreme fluctuations) there are no available data to assess it. In conclusion, the subpopulation would meet only two out of three requirements for a threatened category, partly because of insufficient data, and so does not qualify for a threatened category under criterion B.
In summary, the population might qualify as Vulnerable based on AOO (which, however, is an estimate) and number of locations, but it does not trigger all of the subcriteria. In this situation the subpopulation can be considered as Near Threatened under criterion B2.
To apply criterion C, the total number of adult females and males is needed. About 1,000-2,500 females are estimated to nest annually in the Shark Bay area, where the majority of nesting of the subpopulation occurs (Baldwin et al. 2003, Wirsing et al. 2004), hence the total number of females nesting annually in the entire subpopulation is higher than that. The number of adults can be derived from the number of females per year with the following formula: adults = annual females × remigration interval / proportion of females. Unfortunately, the proportion of females is not available and without it only the number of adult females can be tentatively estimated with the following formula: adult females = annual females × remigration interval. Considering the above range of values for annual nesting females and a remigration interval of 3.5 years (Western Australia Parks and Wildlife, unpubl. data, through A. Tucker, pers. comm.), the total number of adult females would range from 3,500 to 8,750 individuals. The range of values of the proportion of females known from other Loggerhead subpopulations makes it possible that the number of adults of the South-Indian subpopulation is either greater than or less than 10,000, which is the threshold for the Vulnerable category. Moreover, no data are available for assessing whether or not the subpopulation meets the other subcriteria also required, such as continuing decline, % of mature individuals in one subpopulation, and extreme fluctuations. In conclusion, the subpopulation cannot be assessed under criterion C because of insufficient data.
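The arithmetic behind the 3,500-8,750 estimate can be written out explicitly. The snippet below simply applies the formulas quoted above, using the values given in the text; the sex-ratio figures at the end are hypothetical and included only to show how the total-adult calculation would work if that proportion were known.

```python
# Adult-female estimate from annual nesting counts, using the formula quoted above:
# adult females = annual nesting females x remigration interval.
annual_females_low, annual_females_high = 1_000, 2_500   # Shark Bay estimate (females per year)
remigration_interval_yr = 3.5                            # years between nesting seasons

low = annual_females_low * remigration_interval_yr
high = annual_females_high * remigration_interval_yr
print(f"estimated adult females: {low:,.0f} - {high:,.0f}")

# If the adult sex ratio were known: total adults = adult females / proportion of females.
# The proportions below are hypothetical, for illustration only.
for prop_female in (0.5, 0.65, 0.8):
    print(f"  if {prop_female:.0%} of adults were female: up to {high / prop_female:,.0f} adults")
```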
The subpopulation does not meet the threshold for number of mature individuals (<1,000) under criterion D1. Regarding criterion D2, AOO exceeds the suggested threshold (<20 km²; see criterion B). The number of locations may be considered as four (see Criterion B above), but no future threats have been identified that could drive the subpopulation to CR or EX in a very short time. In conclusion, the subpopulation does not meet the requirements for a threatened category under criterion D.
No population viability analysis was available for this subpopulation.
Sources of Uncertainty
Several important sources of uncertainty exist for this subpopulation assessment, the most important of which are annual female abundance, adult sex ratio, long term census of nesting females or nests, and threat assessment.
Range Description: The Loggerhead Turtle has a worldwide distribution in subtropical to temperate regions of the Mediterranean Sea and Pacific, Indian, and Atlantic Oceans (Wallace et al. 2010) (Figure 1 in the Supplementary Material).
The South East Indian Ocean subpopulation breeds in Western Australia (Baldwin et al. 2003). Tag returns showed that foraging habitats extend as far as the Java and Arafura Seas (Hamann et al. 2013, Limpus 2008) (Figure 2 in the Supplementary Material).
Native:Australia (Northern Territory, Western Australia); Indonesia
FAO Marine Fishing Areas: Indian Ocean – eastern; Pacific – western central
Population: Loggerheads are a single species globally comprising 10 regional management units (RMUs: Wallace et al. 2010), which describe biologically and geographically explicit population segments by integrating information from nesting sites, mitochondrial and nuclear DNA studies, movements and habitat use by all life stages. Regional management units are functionally equivalent to IUCN subpopulations, thus providing the appropriate demographic unit for Red List assessments. There are 10 Loggerhead RMUs (hereafter subpopulations): North West Atlantic Ocean, North East Atlantic Ocean, South West Atlantic Ocean, Mediterranean Sea, North East Indian Ocean, North West Indian Ocean, South East Indian Ocean, South West Indian Ocean, North Pacific Ocean, and South Pacific Ocean (Figure 2 in the Supplementary Material). Multiple genetic stocks have been defined according to geographically disparate nesting areas around the world and are included within RMU delineations (Wallace et al. 2010) (shapefiles can be viewed and downloaded at: http://seamap.env.duke.edu/swot).
The South East Indian Ocean Loggerhead subpopulation is probably one of the largest globally, with an estimated number of females nesting annually probably exceeding 2,500 (Baldwin et al. 2003, Wirsing et al. 2004). However, consistent annual censuses of adults or nests are still lacking, especially at the major nesting sites, as well as long-term monitoring datasets and key demographic parameters (e.g., remigration interval, adult sex ratio, number of clutches per female, etc).
Current Population Trend: Unknown
Habitat and Ecology: The Loggerhead Turtle nests on insular and mainland sandy beaches throughout the temperate and subtropical regions. Like most sea turtles, Loggerhead Turtles are highly migratory and use a wide range of broadly separated localities and habitats during their lifetimes (Bolten and Witherington 2003). Upon leaving the nesting beach, hatchlings begin an oceanic phase, perhaps floating passively in major current systems (gyres) that serve as open-ocean developmental grounds (Bolten and Witherington 2003). After 4-19 years in the oceanic zone, Loggerheads recruit to neritic developmental areas rich in benthic prey or epipelagic prey where they forage and grow until maturity at 10-39 years (Avens and Snover 2013). Upon reaching sexual maturity Loggerhead Turtles undertake breeding migrations between foraging grounds and nesting areas at remigration intervals of one to several years with a mean of 2.5-3 years for females (Schroeder et al. 2003) while males would have a shorter remigration interval (e.g., Hays et al. 2010, Wibbels et al. 1990). Migrations are carried out by both males and females and may traverse oceanic zones spanning hundreds to thousands of kilometers (Plotkin 2003). During non-breeding periods adults reside at coastal neritic feeding areas that sometimes coincide with juvenile developmental habitats (Bolten and Witherington 2003). However, none of these parameters have been quantified for this South East Indian Ocean Loggerhead subpopulation.
The IUCN Red List Criteria define generation length to be the average age of parents in a population (i.e., older than the age at maturity and younger than the oldest mature individual) and care should be taken to avoid underestimation (IUCN 2014). Although different subpopulations may have different generation length, since this information is limited we adopted the same value for all the subpopulations, taking care to avoid underestimation as recommended by IUCN (2014).
Loggerheads attain maturity at 10-39 years (Avens and Snover 2013), and we considered here 30 years to be equal or greater than the average age at maturity. Data on reproductive longevity in Loggerheads are limited, but are becoming available with increasing numbers of intensively monitored, long-term projects on protected beaches. Tagging studies have documented reproductive histories up to 28 years in the North Western Atlantic Ocean (Mote Marine Laboratory, unpubl. data), up to 18 years in the South Western Indian Ocean (Nel et al. 2013), up to 32 years in the South Western Atlantic Ocean (Projeto Tamar unpubl. data), and up to 37 years in the South Western Pacific Ocean, where females nesting for 20-25 years are common (C. Limpus, pers. comm). We considered 15 years to be equal or greater than the average reproductive longevity. Therefore, we considered here 45 years to be equal or greater than the average generation length, therefore avoiding underestimation as recommended by IUCN (IUCN Standards and Petitions Subcommittee 2014).
Generation Length (years): 45
Threats to Loggerheads vary in time and space, and in relative impact to populations. Threat categories affecting marine turtles, including Loggerheads, were described by Wallace et al. (2011) as: fisheries bycatch, direct take of turtles and eggs, coastal development, pollution and pathogens, and climate change.
Loggerhead Turtles are afforded legislative protection under a number of treaties and laws (Wold 2002). Annex II of the SPAW Protocol to the Cartagena Convention (a protocol concerning specially protected areas and wildlife); Appendix I of CITES (Convention on International Trade in Endangered Species of Wild Fauna and Flora); and Appendices I and II of the Convention on Migratory Species (CMS). A partial list of the International Instruments that benefit Loggerhead Turtles includes the Inter-American Convention for the Protection and Conservation of Sea Turtles, the Memorandum of Understanding on the Conservation and Management of Marine Turtles and their Habitats of the Indian Ocean and Southeast Asia (IOSEA), the Memorandum of Understanding on ASEAN Sea Turtle Conservation and Protection, the Memorandum of Agreement on the Turtle Islands Heritage Protected Area (TIHPA), and the Memorandum of Understanding Concerning Conservation Measures for Marine Turtles of the Atlantic Coast of Africa.
As a result of these designations and agreements, many of the intentional impacts directed at sea turtles have been lessened: harvest of eggs and adults has been slowed at several nesting areas through nesting beach conservation efforts and an increasing number of community-based initiatives are in place to slow the take of turtles in foraging areas. In regard to incidental take, the implementation of Turtle Excluder Devices (TEDs) has proved to be beneficial in some areas, primarily in the United States and South and Central America (National Research Council 1990). Guidelines are available to reduce sea turtle mortality in fishing operations in coastal and high seas fisheries (FAO 2009). However, despite these advances, human impacts continue throughout the world. The lack of effective monitoring in pelagic and near-shore fisheries operations still allows substantial direct and indirect mortality, and the uncontrolled development of coastal and marine habitats threatens to destroy the supporting ecosystems of long-lived Loggerhead Turtles.
Loggerheads are legally protected in Australia by the Environment Protection and Biodiversity Conservation Act (1999) and specific protection and management regulations are in place in some nesting sites (restricted access, predator control programs) and at foraging habitats (TEDs for all commercial trawlers) (Hamann et al. 2013, Limpus 2008).
Citation: Casale, P., Riskas, K., Tucker, A.D. & Hamann, M. 2015. Caretta caretta (South East Indian Ocean subpopulation). The IUCN Red List of Threatened Species 2015: e.T84189617A84189662. Downloaded on 18 July 2018.
The Basics of the Docker Run Command
If you're new to using Docker, read this article to learn how to use basic code commands and make the best of them in Docker.
For many Docker enthusiasts, the docker run command is a familiar one. It's often the first Docker command we learn. The docker run command is the command used to launch Docker containers. As such, it's familiar to anyone starting or running Docker containers on a daily basis.
In this article, we will get back to the basics and explore a few simple docker run examples. During these examples, we will use the standard redis container image to show various ways to start a container instance.
While these examples may be basic, they are useful for anyone new to Docker.
Just Plain Ol' Docker Run
The first example is the most basic. We'll use the docker run command to start a single redis container.
$ docker run redis 1:C 16 Jul 08:19:15.330 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
We can see that not only did we start the container, but we did so in "attached" mode. By default, if no parameters or flags are passed, Docker will start the container in "attached" mode. This means that the output from the running process is displayed on the terminal session.
It also means that the terminal session has been hijacked by the running container. If we were to press ctrl+c, for example, we would stop the redis service and, as such, stop the container.
If we leave the terminal session alone and open another terminal session, we can execute the docker ps command. With this, we can see the container in a running status.
$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1b83ac544e95 redis "docker-entrypoint..." 2 minutes ago Up 2 minutes 6379/tcp loving_bell
From the output of the docker ps command above, we can see quite a bit about the running container, but one thing sticks out more than others. That is the name of the container: loving_bell.
By default, Docker will create a unique name for each container started. The names are generated from a list of descriptions (e.g., "boring" or "hungry") and famous scientists or hackers (e.g., Wozniak, Ritchie). It is possible, however, to specify a name for our container. We can do so by simply using the --name parameter when executing docker run.
$ docker run --name redis redis 1:C 16 Jul 08:22:17.296 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
In the above example, we used the --name parameter to start a redis container named redis. If we once again run the docker ps command, we can see that our container is running, this time with our specified name.
$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 67bbd0858ef5 redis "docker-entrypoint..." 30 seconds ago Up 27 seconds 6379/tcp redis
Using --name to Limit the Number of Containers Running
The --name parameter is a useful option to know. Not only does naming a container make it easier to reference the container when executing Docker commands, but naming the container can also be used to control the number of containers that run on a single host.
To explain this in a bit more detail, let's see what happens if we try to start another container named redis without stopping the previous one.
$ docker run --name redis redis docker: Error response from daemon: Conflict. The container name "/redis" is already in use by container "67bbd0858ef5b1782875166b4c5e6c1589b28a99d130742a3e68f62b6926195f". You have to remove (or rename) that container to be able to reuse that name.
We can see one very important fact about running containers: With Docker, you are not allowed to run multiple containers with the same name. This is useful to know if you need to run multiple instances of a single container.
It is also useful to know this limitation if you wish to only run one instance of a specific container per host. A common use case for many users of Docker is to use the --name parameter as a safety check against automated tools launching multiple Docker containers.
By specifying a name within the automated tool, you are essentially ensuring that automated tools can only start one instance of the specified container.
Using -d to Detach the Container
Another useful parameter to pass to docker run is the -d flag. This flag causes Docker to start the container in "detached" mode. A simple way to think of this is to think of -d as running the container in "the background," just like any other Unix process.
Rather than hijacking the terminal and showing the application's output, Docker will start the container in detached mode.
$ docker run -d redis 19267ab19aedb852c69e2bd6a776d9706c540259740aaf4878d0324f9e95af10 $ docker run -d redis 0f3cb6199d442822ecfc8ce6a946b72e07cf329b6516d4252b4e2720058c702b
The -d flag is useful when starting containers that you wish to run for long periods of time, which, if you are using Docker to run services, is generally the case. In attached mode, a container is linked with the terminal session; using -d is a simple way to detach the container on start.
Using -p to Publish Container Ports
In the examples above, all of our redis containers have been inaccessible to anything outside of the internal Docker service. The reason for this is that we have not published any ports to connect to redis. To publish a port via docker run, we simply need to add the -p flag.
$ docker run -d -p 6379:6379 --name redis redis 2138279e7d29234defd2b9f212e65d47b9a0f3e422165b4e4025e466f25bbc2b
In the above example, we used the -p flag to publish port 6379 on the host to port 6379 within the container. This means anyone connecting to this host over port 6379 will be routed to the container via port 6379.
The syntax for this flag is host_ip:host_port:container_port, with the host IP being optional. If we wanted to see what ports were mapped on a previously running container, we can use the docker ps command.
$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2138279e7d29 redis "docker-entrypoint..." 22 seconds ago Up 21 seconds 0.0.0.0:6379->6379/tcp redis
We can see that our host is listening across all interfaces (0.0.0.0) on port 6379, and that traffic is being redirected to port 6379 within the container.
Another useful tip regarding the -p flag is that you are able to specify it multiple times. This comes in handy if the container in question uses multiple ports.
Using -v to Mount Host Volumes
The last option we are going to explore is one that can be very important to anyone running containers that require persistent storage. This option is the -v flag. The -v flag is used to define volume mounts. Volume mounts are a way to share storage volumes across multiple containers. In the example below, we are sharing the /tmp/data directory on the host as /data to the container.
$ docker run -d -p 6379:6379 --name redis -v /tmp/data:/data redis 23de16619b5983107c60dad00a0a312ee18e526f89b26a6863fef5cdc70c8426
The example above means that anything written to /data within the container is actually accessing /tmp/data on the host. The same is true of anything being written to /tmp/data on the host; that data will also be available within /data in the container.
We can see this if we look within the /tmp/data directory on the host.
$ ls /tmp/data/ dump.rdb
This option is important for anyone running a database or application that requires persistence within Docker.
It is important because any data that is written within the container is removed as soon as the container itself is removed. Essentially, if we were to simply spin up a redis instance without using volume maps, we could populate data within that redis instance. However, as soon as the container that hosts that instance is removed, the data within that redis instance is also removed.
By using volume mounts, we can keep the data located on the host (as seen above), allowing any other redis container that uses that same volume mount to start where the previous container left off.
Performance Implications of Volume Mounts
Another key aspect of volume mounts is that the write speed to a volume mount is far greater than the write speed within the container's filesystem. The reason for this is that the default container filesystem uses features such as thin provisioning and copy-on-write. These features can introduce latency for write-heavy applications.
By using a host-based volume mount, the containerized application can achieve the same write speeds as the host itself.
In this article, we covered quite a few options for the docker run command. However, while these options are key, they only scratch the surface of the available options to docker run. Some of the options are complex enough to deserve an article unto themselves.
Published at DZone with permission of Ben Cane , DZone MVB. See the original article here.
How the Species Observations reporting system works
The Species Observations reporting system is Norway's largest collective effort by volunteers to provide scientific and management information. This crowdsourcing provides a tremendous contribution to increasing our knowledge of species distribution in Norway.
The reporting system is operated by the Norwegian Biodiversity Information Centre in collaboration with SABIMA (the Norwegian Biodiversity Network) and the following SABIMA member organizations (Norwegian home pages unless noted):
- Norwegian Ornithological Society (in English language)
- Agariplantus norvegicus (In Norwegian language)
- Norwegian Botanical Society (in Norwegian language)
- Norwegian Zoological Society (in Norwegian language)
- Norwegian Entomological Society (in English language)
The groups collectively provide about 150 people who check the quality and accuracy of the sightings reported to the system.
Help with red- and black-listed species
The 2015 Norwegian Red List for Species lists 2355 species as threatened, while 2,320 alien species have been reported in Norway, of which about a tenth are on the Norwegian Black List.
The observations system has contributed to our knowledge of the prevalence and incidence of many red-and blacklisted species.
Free for all to use
All data in the Species Observations system (observations and visual documentation) may be used freely by anyone. However, the person who reports the sighting owns and administrates his or her own observations. | <urn:uuid:5b31cd1e-af63-42f0-83d3-7f713714c5fb> | 2.890625 | 296 | Knowledge Article | Science & Tech. | 0.44 | 95,580,883 |
An iron ball having a mass of 5 kg is lifted from the floor to a height of 2.5 meters above the floor.
a) How much work was done to lift the ball?
b) How much potential energy did the ball gain?
c) If the motor lifting the ball raises it 2.5 meters in ten seconds, what is its power?
If a 600 kg car is moving with a speed of 25 m/s, what is its kinetic energy? State your answer in Joules. What is its momentum? What are the correct units of measurement for momentum?
A 200 g ball is thrown upwards with an initial kinetic energy of 10 Joules. What maximum height will the ball reach? (neglect air resistance)
If a 200 g ball is dropped from 100 meters, what is its velocity just before it hits the ground? What is its velocity after it has fallen halfway to the ground (50 m)? (neglect air resistance)
Two bumper cars collide head-on. Before the collision, car 1 is coming from the right at 3 m/s and has a total mass of 200 kg. Car 2 is coming from the left at 5 m/s and has a total mass of 250 kg (the driver of car 2 is a lot fatter than the driver of car 1). At the time of the collision, the bumpers lock and the cars remain stuck together after the (inelastic) collision. Calculate the velocity (direction and speed) of the linked cars following the collision.
a) When the ball is raised, work is done against gravity. The amount of work done is equal to the energy gained by the ball. Let us use the value of the acceleration due to gravity g as 10m/s^2 throughout this problem.
m = 5kg, h = 2.5m, g = 10m/s^2
W = m x g x h = 5kg x 2.5m x 10m/s^2 = 125J
b) Potential energy gained = work done on the ball = 125J
Remember that work is done each time energy is converted from one form to another and that work and energy have the same units.
c) Power is the rate at which work is done, that is, power = work/ time
= 125J/10s = 12.5J/s (or Watts)
Kinetic energy = 1/2 x m x v^2 = 1/2 x 600Kg x 25m/s x 25m/s = 187,500J
Momentum = mass x ...
This solution provides step by step calculations for work, potential energy, and power. | <urn:uuid:85005f0a-c37f-40c0-8570-a0032d0aaf93> | 3.59375 | 578 | Tutorial | Science & Tech. | 92.408368 | 95,580,917 |
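As a study aid, here is a short Python script that reproduces the worked answers above and checks the remaining parts numerically. It assumes g = 10 m/s², as in the solution, and neglects air resistance; it is an independent check, not part of the original BrainMass solution.

```python
import math

g = 10.0  # m/s^2, as used in the solution above

# Iron ball: work, potential energy, power
m_ball, h, t_lift = 5.0, 2.5, 10.0
work = m_ball * g * h                 # 125 J (equals the potential energy gained)
power = work / t_lift                 # 12.5 W
print(f"work = {work} J, power = {power} W")

# Car: kinetic energy and momentum
m_car, v_car = 600.0, 25.0
ke = 0.5 * m_car * v_car ** 2         # 187,500 J
p = m_car * v_car                     # 15,000 kg*m/s
print(f"KE = {ke} J, momentum = {p} kg*m/s")

# 200 g ball thrown up with 10 J of kinetic energy: maximum height
m_small = 0.2
h_max = 10.0 / (m_small * g)          # 5 m
print(f"max height = {h_max} m")

# 200 g ball dropped from 100 m: speed at the ground and after falling 50 m
v_ground = math.sqrt(2 * g * 100.0)   # ~44.7 m/s
v_half = math.sqrt(2 * g * 50.0)      # ~31.6 m/s
print(f"v(100 m fallen) = {v_ground:.1f} m/s, v(50 m fallen) = {v_half:.1f} m/s")

# Bumper cars, perfectly inelastic collision (car 1's direction taken as negative)
m1, v1 = 200.0, -3.0                  # car 1 comes from the right
m2, v2 = 250.0, 5.0                   # car 2 comes from the left
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
print(f"final velocity = {v_final:.2f} m/s (positive = car 2's original direction)")
```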
Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?
SAN FRANCISCO — Artificial intelligence. Machine learning. Knowledge engineering.
Call it what you want, but AI by any name had the tech world uniquely divided in 2017, and the new year isn’t likely to bring any quick resolutions.
In case you missed it, the fiery debate over AI’s potential impact on society was encapsulated by the opinions of two bold-face Silicon Valley names.
Tesla and SpaceX CEO Elon Musk told the National Governors Association this fall that his exposure to AI technology suggests it poses “a fundamental risk to the existence of human civilization.”
Facebook founder Mark Zuckerberg parried such doomsday talk — which would include cosmologist Stephen Hawking’s view that AI could prove “the worst event in the history of civilization” — with a video post calling such negative talk “pretty irresponsible.”
As the war of words raged, AI continued its creep into our daily lives, from the new facial recognition software in Apple’s iPhone X to the increasingly savvy responses from digital assistants Siri, Alexa and Cortana.
With the amount of often personal information fed by consumers into cloud-based brains compounding exponentially, companies such as Facebook and Google are poised to have unprecedented insights into, and leverage over, our lives.
So which is it — are we heading into a glorious tech-enabled future where many menial tasks will be handled by savant machines, or one where the robots will have taken over for us woefully underpowered humans?
USA TODAY reached out to a number of artificial intelligence stakeholders to get their view on AI, friend or foe.
The conclusion: Excitement over AI’s potentially positive impacts seems, for now, adequately tempered by an acknowledgement that scientists need to stay vigilant about how such technology is developed, to ensure bias is eliminated and control is retained.
AI watchdog groups on the rise
“Innovation has generally liberated humans to be more productive,” says Congressman John Delaney, D-Md. Last fall, along with colleague Pete Olson, R-Texas, Delaney launched the AI Caucus, whose mission is to inform policymakers about the technological, economic and social impacts of AI.
Delaney says there are “a million conversations that can happen between now and the Terminator arriving,” referring to the apocalyptic film in which machines attempt to exterminate humans.
Although he says he is particularly concerned about retraining workers for an AI-rife future, “I don’t prescribe to a doomsday scenario.”
There are in fact a growing number of groups being formed to try to ensure that dismal future never comes to pass.
These include AI Now, which is led by New York University researcher Kate Crawford, who last year warned attendees at SXSW of the possible rise of fascist AI. There’s also OpenAI, a Musk-backed research outfit, and the Partnership on AI, whose members include Google, Facebook, Apple, Amazon, IBM and Microsoft.
Apple, Facebook and Amazon declined to provide an executive to speak on the record on AI’s pros and cons. Each company employs staff responsible for AI oversight.
Eric Horvitz, who heads Microsoft Research, says the company last summer created an internal review board called Aether — AI and Ethics in Engineering and Research — that is tasked with closely monitoring progress not just in machine learning but also fields such as object recognition and emotion detection.
“There are certainly high stakes in terms of how AI impacts transportation, health care and other significant sectors, and there need to be channels to check for failures,” says Horvitz.
One area of concern is image capture, particularly when it comes to facial recognition, he says. “Biases can live in this data collection that represent the worst in society,” Horvitz says.
Another organization vowing to tackle AI’s dark side is the recently formed DeepMind Ethics & Society research group, which aims to publish papers focused on some of the most vexing issues posed by AI. London-based DeepMind was bought by Google in 2014 to expand its own AI work.
One of the group’s key members is Nick Bostrom, the Swedish-born Oxford University professor whose 2014 book, Superintelligence: Paths, Dangers, Strategies, first caused Musk to caution against AI’s dangers.
“My view of AI developments is, if it’s useful, use it, but maybe also be sure to participate in conversations about where this is all going,” says Bostrom, who adds that the world “doesn’t need more alarm sounding” but more dialog.
“We’re all full of hopes and fears when it comes to long term potential of AI,” he says. “We need to channel that in a constructive way.”
Woz: From AI skeptic to fan
Apple cofounder Steve Wozniak initially found himself in the AI-wary camp. He, like Musk and Hawking, was concerned that machines with human-like consciousness could eventually pose a risk to homo sapiens.
But then he changed his thinking, based largely on the notion that humans still remain perplexed by how the brain works its magic, which in turn means that it would be difficult for scientists to create machines that can think like us.
“We may have machines now that simulate intelligence, but that’s different from truly replicating how the brain works,” says Wozniak. “If we don’t understand things like where memories are stored, what’s the point of worrying about when the Singularity is going to take over and run everything?”
Ah, yes, the Singularity. Glad he brought that up. That very sci-fi-sounding term refers to the moment in which machines become so intelligent they are able to run and upgrade themselves, leading to a runaway technological horse that humans will not be able to catch.
Some techies are eager for that machine-led moment. Last May, former Google self-driving car engineer Anthony Levandowski filed papers with the Internal Revenue Service to start a new religion called Way of the Future. Its mission is to promote the “realization, acceptance and worship of a Godhead based on Artificial Intelligence developed through computer hardware and software.”
Far from a joke, Levandowski — who is at the center of a contentious lawsuit between Google and Uber, to whom Levandowski sold his self-driving truck company Otto before being accused by Google of stealing proprietary tech — told Wired magazine last fall that his new church was merely a logical response to an inevitability.
“It’s not a god in the sense that it makes lightning or causes hurricanes,” he said. “But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
How about maybe, unnerving?
Most consumers remain wary
Results of a Pew Research Center poll released in October found that between half and three-quarters of respondents considered themselves “worried” when asked about AI’s impact on doing human jobs (72%), evaluating job candidates (67%), building self-driving cars (54%) and caring for the elderly (47%).
A SurveyMonkey poll on AI conducted for USA TODAY also had overtones of concern, with 73% of respondents saying that would prefer if AI was limited in the rollout of newer tech so that it doesn’t become a threat to humans.
Meanwhile, 43% said smarter-than-humans AI would do “more harm than good,” while 38% said it would result in “equal amounts of harm and good.”
Perhaps tellingly, 68% said that the real threat remains “human intelligence,” implying that technology harnessed for nefarious purposes is what could do the most harm.
U.S. researchers and scientists have no choice but to push forward with AI developments because inaction is not an alternative, says Oren Etzioni, CEO of the Allen Institute for AI, which was started by Microsoft cofounder Paul Allen.
“AI may seem threatening, but hitting the pause button is not realistic,” he says. “China says they want to be an AI leader, and they’re not bothered by privacy issues. (Russian leader Vladimir) Putin has said the same thing. So the global race is on.”
As tech companies roar into 2018 with an eye toward improving their product offerings through AI, they would do well to remember that many of their constituents remain leery of technology that is overly intrusive and potentially harmful.
“Most people are smart enough to see that technology isn’t perfect and that we can’t trust these systems to make totally fair and correct decisions,” says Madeleine Clare Elish, a cultural anthropologist with Data & Society, a New York-based research institute.
Elish is encouraged that tech companies are starting to “see they have a responsibility not just to consumers but to society,” and are willing to discuss and monitor AI developments through various new organizations.
But her ultimate advice is for consumers, who for now still have the upper hand: “Data is the currency AI is built on, and it’s worth something. So maybe think twice before you give it away.”
Follow USA TODAY tech writer Marco della Cava on Twitter. | <urn:uuid:1aa2812d-d019-4ee7-aeb6-ce7a94e645f7> | 2.546875 | 2,038 | News Article | Science & Tech. | 39.259144 | 95,580,926 |
The Raman spectroscope emits laser light, which is scattered at the sample and then collected by the telescope (left). (Credit: Vienna University of Technology)
Source: Science Daily
ScienceDaily (Feb. 27, 2012) — People like to keep a safe distance from explosive substances, but in order to analyze them, close contact is usually inevitable. At the Vienna University of Technology, a new method has now been developed to detect chemicals inside a container over a distance of more than a hundred meters. Laser light is scattered in a very specific way by different substances. Using this light, the contents of a nontransparent container can be analyzed without opening it.
Scattered Light as a "Chemical Fingerprint":
"The method we are using is Raman-spectroscopy," says Professor Bernhard Lendl (TU Vienna). The sample is irradiated with a laser beam. When the light is scattered by the molecules of the sample, it can change its energy. For example, the photons can transfer energy to the molecules by exciting molecular vibrations. This changes the wavelength of the light -- and thus its colour. Analyzing the colour spectrum of the scattered light, scientists can determine by what kind of molecules it must have been scattered.
Measuring over Great Distances -- with Highest Precision:
"Until now, the sample had to be placed very close to the laser and the light detector for this kind of Raman-spectroscopy," says Bernard Zachhuber. Due to his technological advancements, measurements can now be made over long distances. "Among hundreds of millions of photons, only a few trigger a Raman-scattering process in the sample," says Bernhard Zachhuber. These scattered particles of light are scattered uniformly in all directions. Only a tiny fraction travel back to the light detector. From this very weak signal, as much information as possible has to be extracted. This can be done using a highly efficient telescope and extremely sensitive light detectors.
In this project (funded by the EU) the researchers at TU Vienna collaborated with private companies and with partners in public safety, including the Spanish Guardia Civil, who are extremely interested in the new technology. During the project, the Austrian military was also involved. On their testing grounds the researchers from TU Vienna could put their method to the test under extreme conditions. They tested frequently used explosives, such as TNT, ANFO or RDX. The tests were highly successful: "Even at a distance of more than a hundred meters, the substances could be detected reliably," says Engelene Chrysostom (TU Vienna).
Seeing Through Walls:
Raman spectroscopy over long distances even works if the sample is hidden in a nontransparent container. The laser beam is scattered by the container wall, but a small portion of the beam penetrates the box. There, in the sample, it can still excite Raman-scattering processes. "The challenge is to distinguish the container's light signal from the sample signal," says Bernhard Lendl. This can be done using a simple geometric trick: The laser beam hits the container on a small, well-defined spot. Therefore, the light signal emitted by the container stems from a very small region. The light which enters the container, on the other hand, is scattered into a much larger region. If the detector telescope is not exactly aimed at the point at which the laser hits the container but at a region just a few centimeters away, the characteristic light signal of the contents can be measured instead of the signal coming from the container.
The new method could make security checks at the airport a lot easier -- but the area of application is much wider. The method could be used wherever it is hard to get close to the subject of investigation. It could be just as useful for studying icebergs as for geological analysis on a Mars mission. In the chemical industry, a broad range of possible applications could be opened up. | <urn:uuid:5047de72-b408-4257-b9bb-3f602b9425ac> | 3.3125 | 802 | News Article | Science & Tech. | 40.486503 | 95,580,928 |
Spirals in Plants?
May 3, 2010
Bailey Hall 207
Refreshments will be served in Bailey 204 at 4:15
Many plants exhibit spirals or helices in the arrangement of their organs: leaves around a stem, seeds of a sunflower head, scales in a pine cone or pineapple. These spirals come in pairs of families, and the corresponding numbers of elements in the two families usually are consecutive Fibonacci numbers. We look at a simple geometric and dynamical model consisting of stacking disks one by one on the surface of a cylinder. Given natural assumptions, the system reproduces the known patterns and explains the predominance of Fibonacci numbers. It also gives rise to mathematical structures not mentioned in the mathematical literature: the spirals are not so perfect after all...
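For readers who want to experiment beforehand, a quick way to see Fibonacci spiral families emerge is the classic golden-angle (Vogel) construction sketched below in Python. It is a simpler relative of, and not identical to, the cylinder disk-stacking model described in the abstract.

```python
# Classic golden-angle (Vogel) spiral: seed k sits at angle k * 137.5...° and radius ~ sqrt(k).
# This is a simpler relative of the cylinder disk-stacking model discussed in the talk.
import math

golden_angle = math.pi * (3.0 - math.sqrt(5.0))   # ~2.39996 rad, ~137.508 degrees

def seed_positions(n: int):
    """Yield (x, y) positions of n seeds placed by the golden-angle rule."""
    for k in range(n):
        r = math.sqrt(k)
        theta = k * golden_angle
        yield r * math.cos(theta), r * math.sin(theta)

points = list(seed_positions(300))
print(f"placed {len(points)} seeds; golden angle = {math.degrees(golden_angle):.3f} degrees")
# Counting the visible clockwise and counter-clockwise spiral families in a plot of these
# points gives consecutive Fibonacci numbers (e.g. 13 and 21, or 21 and 34).
```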
"One of the very fundamental issues for understanding an earthquake is to know how the rupture is distributed on the fault plane, which is directly related to the amount of ground shaking and the damage it could cause at the surface," said Dr Jianbao Sun of the Institute of Geology, China Earthquake Administration (IGCEA).
To learn this, Sun and Prof. Zhengkang Shen of IGCEA and Peking University’s Department of Geophysics, and collaborators acquired two kinds of satellite radar data: Advanced Synthetic Aperture Radar (ASAR) data in C-band from ESA’s Envisat satellite and Phased Array type L-band Synthetic Aperture Radar (PALSAR) data from Japan’s ALOS satellite.
Applying a technique called SAR Interferometry (InSAR) on the data, the researchers produced a set of ‘interferogram' images covering the entire coseismic rupture region and its vicinity. This interferometric map revealed the amount and scope of surface deformation produced by the earthquake.
"This is perhaps the very first time people have seen the complete deformation field produced by an earthquake on such a large scale," Sun said.
InSAR involves combining two or more radar images of the same ground location in such a way that very precise measurements – down to a scale of a few centimetres – can be made of any ground motion taking place between image acquisitions. Coloured interferograms usually appear as rainbow fringe patterns.
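As a rough guide to what one interferogram fringe means physically, the snippet below converts interferometric phase to line-of-sight ground displacement for the radar wavelengths involved (about 5.6 cm for C-band and 23.6 cm for L-band, typical values assumed here); it is a simplified illustration that ignores atmospheric delays, orbital errors and phase unwrapping.

```python
# Line-of-sight displacement from interferometric phase: d = (phi / 2*pi) * (lambda / 2).
# A full 2*pi fringe therefore corresponds to half a radar wavelength of ground motion.
# Simplified: atmospheric delays, orbital errors and phase unwrapping are ignored.
import math

WAVELENGTH_M = {"Envisat ASAR (C-band)": 0.056, "ALOS PALSAR (L-band)": 0.236}  # typical values

def los_displacement_m(phase_rad: float, wavelength_m: float) -> float:
    """Ground displacement along the radar line of sight for a given phase change."""
    return (phase_rad / (2.0 * math.pi)) * (wavelength_m / 2.0)

for sensor, wl in WAVELENGTH_M.items():
    one_fringe = los_displacement_m(2.0 * math.pi, wl)
    print(f"{sensor}: one fringe = {one_fringe * 100:.1f} cm of line-of-sight motion")
```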
The researchers combined these SAR satellite data with GPS measurements and developed a model that shows fault geometry and rupture distribution of the Longmen Shan fault zone, a series of parallel faults that run for about 400 km from southwest to northeast in the region. The earthquake that struck on 12 May last year produced a 240-km-long rupture along the Beichuan fault and a 72-km-long rupture along part of the Pengguan fault.
Studying the model, they were able to determine that the fault plane dips considerably to the northwest in the zone's southwest area and then rises up to a nearly vertical position in the zone's northeast.
They also learned that the direction of the motion along the fault changed, going from a thrust, where upper-layer rocks were pushed up and lower-layer rocks pulled down, to dextral faulting, where two parts of Earth's plates slide past each other. A slip of about 7 metres, the greatest along the rupture, was detected on the Beichuan fault near Beichuan City, which was destroyed completely by the quake and suffered the highest number of casualties.
Another major finding was that the fault junctions (solid rock barriers that stop a quake from propagating from one segment to another), beneath the hardest-hit cities of Yingxiu, Beichuan and Nanba, failed to withstand the extraordinary energy released along the fault.
"These fault junctions are barriers, whose failures in a single event allowed the rupture to cascade through several fault segments, resulting in a major 7.9-earthquake," Shen explained. "Earthquakes across fault segments like this are estimated to happen about every 4000 years."
These new results were published this month in the journal Nature Geoscience, part of Nature magazine.
Following the quake, Sun and Shen worked closely with the 'Dragon 2' programme to coordinate SAR coverage of the seismic area. Dragon 2 is a joint undertaking between ESA and China’s Ministry of Science and Technology that encourages scientists to use satellite data to monitor and understand environmental phenomena in China.
"The resulting Envisat SAR data acquired along an important track close to the epicentre turned out to be vital in constraining the southern part of the deformation field and helping explain the fault geometry and rupture distribution of the Pengguan fault, which would be difficult to resolve otherwise," Shen said.
The scientists also hope the data will help to assess earthquake potential in the future.
"Under the coordination of Dragon 2, the SAR data acquired during this period will be used, along with GPS measurements, to reveal geophysical processes within the Longmen Shan fault zone and the lower crust and upper mantle, which will help us understand the earthquake and faulting mechanisms and hopefully shed light on future seismic risks in this area."
Spin density wave
Spin-density wave (SDW) and charge-density wave (CDW) are names for two similar low-energy ordered states of solids. Both these states occur at low temperature in anisotropic, low-dimensional materials or in metals that have high densities of states at the Fermi level E_F. Other low-temperature ground states that occur in such materials are superconductivity, ferromagnetism and antiferromagnetism. The transition to the ordered states is driven by the condensation energy, which is approximately Δ²/E_F, where Δ is the magnitude of the energy gap opened by the transition. Note that SDWs are distinct from spin waves, which are an excitation mode of ferromagnets and antiferromagnets.
Fundamentally SDWs and CDWs involve the development of a superstructure in the form of a periodic modulation in the density of the electronic spins and charges with a characteristic spatial frequency that does not transform according to the symmetry group that describes the ionic positions. The new periodicity associated with CDWs can easily be observed using scanning tunneling microscopy or electron diffraction while the more elusive SDWs are typically observed via neutron diffraction or susceptibility measurements. If the new periodicity is a rational fraction or multiple of the lattice constant, the density wave is said to be commensurate; otherwise the density wave is termed incommensurate.
Some solids with a high density of states at the Fermi level form density waves, while others choose a superconducting or magnetic ground state at low temperatures, because of the existence of nesting vectors in the materials' Fermi surfaces. The concept of a nesting vector is illustrated in the Figure for the famous case of chromium, which transitions from a paramagnetic to SDW state at a Néel temperature of 311 K. Cr is a body-centered cubic metal whose Fermi surface features many parallel boundaries between electron pockets centered at Γ and hole pockets at H. These large parallel regions can be spanned by the nesting wavevector Q shown in red. The real-space periodicity of the resulting spin-density wave is given by 2π/Q. The formation of an SDW with a corresponding spatial frequency causes the opening of an energy gap that lowers the system's energy. The existence of the SDW in Cr was first posited in 1960 by Albert Overhauser of Purdue. The theory of CDWs was first put forth by Rudolf Peierls of Oxford University, who was trying to explain superconductivity.
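To make the gap and condensation-energy statements above concrete, here is a minimal numerical sketch of the textbook mean-field picture for a perfectly nested band, E±(k) = ±sqrt(ε(k)² + Δ²); the gap and Fermi-energy values are illustrative, not material-specific.

```python
# Mean-field quasiparticle energies for a density wave on a perfectly nested band:
# E±(k) = ±sqrt(eps(k)^2 + Delta^2), with energies measured from the Fermi level.
# The gap and Fermi energy below are illustrative numbers, not material-specific values.
import math

delta_eV = 0.1          # density-wave gap (illustrative)
fermi_energy_eV = 5.0   # typical metallic Fermi energy (illustrative)

def upper_branch_eV(eps_eV: float) -> float:
    """Upper-branch quasiparticle energy for normal-state energy eps (relative to E_F)."""
    return math.sqrt(eps_eV ** 2 + delta_eV ** 2)

for eps in (-0.3, -0.1, 0.0, 0.1, 0.3):
    print(f"eps = {eps:+.1f} eV  ->  E+ = {upper_branch_eV(eps):.3f} eV")

# Condensation-energy scale per electron quoted above, of order Delta^2 / E_F:
print(f"condensation energy scale ~ {delta_eV ** 2 / fermi_energy_eV * 1e3:.1f} meV per electron")
```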
Many low-dimensional solids have anisotropic Fermi surfaces that have prominent nesting vectors. Well-known examples include layered materials like NbSe3, TaSe2 and K0.3MoO3 (a blue bronze) and quasi-1D organic conductors like TMTSF or TTF-TCNQ. CDWs are also common at the surface of solids where they are more commonly called surface reconstructions or even dimerization. Surfaces so often support CDWs because they can be described by two-dimensional Fermi surfaces like those of layered materials. Chains of Au and In on semiconducting substrates have been shown to exhibit CDWs. More recently, monatomic chains of Co on a metallic substrate were experimentally shown to exhibit a CDW instability, which was attributed to ferromagnetic correlations.
The most intriguing properties of density waves are their dynamics. Under an appropriate electric field or magnetic field, a density wave will "slide" in the direction indicated by the field due to the electrostatic or magnetostatic force. Typically the sliding will not begin until a "depinning" threshold field is exceeded where the wave can escape from a potential well caused by a defect. The hysteretic motion of density waves is therefore not unlike that of dislocations or magnetic domains. The current-voltage curve of a CDW solid therefore shows a very high electrical resistance up to the depinning voltage, above which it shows a nearly ohmic behavior. Under the depinning voltage (which depends on the purity of the material), the crystal is an insulator.
- G. Grüner The dynamics of charge-density waves
- Mutka et al., Charge-density waves and localization in electron-irradiated 1T-TaS2
- Pouget et al., Neutron-scattering investigations of the Kohn anomaly and of the phase and amplitude charge-density-wave excitations of the blue bronze K0.3MoO3
- Patton Conductivity, Superconductivity, and the Peierls Instability
- Snijders, P. C.; Weitering, H. H. (2010). "Electronic instabilities in self-assembled atom wires". Rev. Mod. Phys. 82: 307–329. Bibcode:2010RvMP...82..307S. doi:10.1103/RevModPhys.82.307.
- Zaki, Nader; et al. (2013). "Experimental observation of spin-exchange-induced dimerization of an atomic one-dimensional system". Phys. Rev. B. 87: 161406(R). Bibcode:2013PhRvB..87p1406Z. doi:10.1103/PhysRevB.87.161406.
- A pedagogical article about the topic: "Charge and Spin Density Waves," Stuart Brown and George Gruner, Scientific American 270, 50 (1994).
- Authoritative work on Cr: "Spin-density-wave antiferromagnetism in chromium," E. Fawcett, Rev. Mod. Phys. 60, 209 (1988).
- About Fermi surfaces and nesting: Electronic Structure and the Properties of Solids, Walter A. Harrison, ISBN 0-486-66021-4.
- Observation of CDW by ARPES: "Pseudogap and Charge Density Waves in Two Dimensions," S. V. Borisenko et al., Phys. Rev. Lett. 100, 196402 (2008).
- Peierls instability.
- An extensive review of experiments as of 2013 by Pierre Monceau. Electronic crystals: an experimental overview. | <urn:uuid:90a59865-b599-423a-b1e6-435cef97cb56> | 3.703125 | 1,282 | Knowledge Article | Science & Tech. | 40.760327 | 95,580,961 |
Published in this week’s issue of Nature,* the new research raises several intriguing questions about the fundamental physics of this exciting material and reveals new effects that may make graphene even more powerful than previously expected for practical applications.
Graphene is one of the simplest materials—a single-atom-thick sheet of carbon atoms arranged in a honeycomb-like lattice—yet it has many remarkable and surprisingly complex properties. Measuring and understanding how electrons carry current through the sheet is important to realizing its technological promise in wide-ranging applications, including high speed electronics and sensors. For example, the electrons in graphene act as if they have no mass and are almost 100 times more mobile than in silicon. Moreover, the speed with which electrons move through graphene is not related to their energy, unlike materials such as silicon where more voltage must be applied to increase their speed, which creates heat that is detrimental to most applications.
To fully understand the behavior of graphene’s electrons, scientists must study the material under an extreme environment of ultra-high vacuum, ultra-low temperatures, and large magnetic fields. Under these conditions, the graphene sheet remains pristine for weeks, and the energy levels and interactions between the electrons can be observed with precision (see "Graphene Yields Secrets to Its Extraordinary Properties," http://www.nist.gov/public_affairs/techbeat/tbx20090514_graphene.htm, NIST Tech Beat Extra, May 14, 2009).
NIST has recently constructed the world’s most powerful and stable scanning-probe microscope, with an unprecedented combination of low temperature (as low as 10 millikelvin, or 10 thousandths of a degree above absolute zero), ultra-high vacuum, and high magnetic field. In the first measurements made with this instrument, the international team has used its power to resolve the finest differences in the electron energies in graphene, atom-by-atom.
“Going to this resolution allows you to see new physics,” said Young Jae Song, a postdoctoral researcher who helped develop the instrument at NIST and make these first measurements.
And the new physics the team saw raises a few more questions about how the electrons behave in graphene than it answers.
Because of the geometry and electromagnetic properties of graphene’s structure, an electron in any given energy level populates four possible sublevels, called a “quartet.” Theorists have predicted that this quartet of levels would split into different energies when immersed in a magnetic field, but until recently there had not been an instrument sensitive enough to resolve these differences.
“When we increased the magnetic field at extreme low temperatures, we observed unexpectedly complex quantum behavior of the electrons,” said NIST Fellow Joseph Stroscio.
What is happening, according to Stroscio, appears to be a “many-body effect” in which electrons interact strongly with one another in ways that affect their energy levels.
One possible explanation for this behavior is that the electrons have formed a “condensate” in which they cease moving independently of one another and act as a single coordinated unit.
“If our hypothesis proves to be correct, it could point the way to the creation of smaller, very-low-heat producing, highly energy efficient electronic devices based upon graphene,” said Shaffique Adam, a postdoctoral researcher who assisted with theoretical analysis of the measurements.
The research team, led by Joseph Stroscio, includes collaborators from NIST, the University of Maryland, Seoul National University, the Georgia Institute of Technology, and the University of Texas at Austin.
The group’s work was also recently featured in Nature Physics,** in which they describe how the energy levels of graphene’s electrons vary with position as they move along the material’s crystal structure. The way in which the energy varies suggests that interactions between electrons in neighboring layers may play a role.
*Y. J. Song, A. F. Otte, Y. Kuk, Y. Hu, D. B. Torrance, P. N. First, W. A. de Heer, H. Min, S. Adam, M. D. Stiles, A. H. MacDonald, and J. A. Stroscio. High Resolution Tunnelling Spectroscopy of a Graphene Quartet, Nature, Sept. 9, 2010.
**D. L. Miller, K. D. Kubista, G. M. Rutter, Ming Ruan, W. A. de Heer, M. Kindermann, P. N. First, and J. A. Stroscio. Real-space mapping of magnetically quantized graphene states. Nature Physics. Published online Aug. 8, 2010. http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1736.html
Mark Esser | Newswise Science News
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino
16.07.2018 | National Institutes of Natural Sciences
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Life Sciences
18.07.2018 | Life Sciences
18.07.2018 | Information Technology | <urn:uuid:02804630-3950-492c-a48d-1fbfd8a304d2> | 4.21875 | 1,666 | Content Listing | Science & Tech. | 43.178415 | 95,580,977 |
Supercomputer calculates mass difference between neutron
The fact that the neutron is slightly more massive than the proton is the reason why atomic nuclei have exactly those properties that make our world and ultimately our existence possible.
Eighty years after the discovery of the neutron, a team of physicists from France, Germany, and Hungary headed by Zoltán Fodor, a researcher from Wuppertal, has finally calculated the tiny neutron-proton mass difference. The findings, which have been published in the current edition of Science, are considered a milestone by many physicists and confirm the theory of the strong interaction.
As one of the most powerful computers in the world, JUQUEEN at Forschungszentrum Jülich was decisive for the simulation.
The existence and stability of atoms relies heavily on the fact that neutrons are slightly more mas-sive than protons. The experimentally determined masses differ by only around 0.14 percent. A slightly smaller or larger value of the mass difference would have led to a dramatically different universe, with too many neutrons, not enough hydrogen, or too few heavier elements.
The tiny mass difference is the reason why free neutrons decay on average after around ten minutes, while protons - the unchanging building blocks of matter - remain stable for a practically unlimited period.
In 1972, about 40 years after the discovery of the neutron by Chadwick in 1932, Harald Fritzsch (Germany), Murray Gell-Mann (USA), and Heinrich Leutwyler (Switzerland) presented a consistent theory of particles and forces that form the neutron and the proton known as quantum chromodynamics.
Today, we know that protons and neutrons are composed of "up quarks" and "down quarks". The proton is made of one down and two up quarks, while the neutron is composed of one up and two down quarks.
Simulations on supercomputers over the last few years confirmed that most of the mass of the proton and neutron results from the energy carried by their quark constituents in accordance with Einstein's formula E=mc2.
However, a small contribution from the electromagnetic field surrounding the electrically charged proton should make it about 0.1 percent more massive than the neutral neutron. The fact that the neutron mass is measured to be larger is evidently due to the different masses of the quarks, as Fodor and his team have now shown in extremely complex simulations.
For the calculations, the team developed a new class of simulation techniques combining the laws of quantum chromodynamics with those of quantum electrodynamics in order to precisely deter-mine the effects of electromagnetic interactions. By controlling all error sources, the scientists suc-cessfully demonstrated how finely tuned the forces of nature are.
Professor Kurt Binder is Chairman of the Scientific Council of the John von Neumann Institute for Computing (NIC) and member of the German Gauss Centre for Supercomputing. Both organizations allocate computation time on JUQUEEN to users in a competitive process.
"Only using world-class computers, such as those available to the science community at Forschungszentrum Jülich, was it possible to achieve this milestone in computer simulation," says Binder. JUQUEEN was supported in the process by its "colleagues" operated by the French science organizations CNRS and GENCI as well as by the computing centres in Garching (LRZ) and Stuttgart (HLRS).
The results of this work by Fodor's team of physicists from Bergische Universität Wuppertal, Centre de Physique Théorique de Marseille, Eötvös University Budapest, and Forschungszentrum Jülich open the door to a new generation of simulations that will be used to determine the properties of quarks, gluons, and nuclear particles. According to Professor Kálmán Szabó from Forschungszentrum Jülich, "In future, we will be able to test the standard model of elementary particle physics with a tenfold increase in precision, which could possibly enable us to identify effects that would help us to uncover new physics beyond the standard model."
"Forschungszentrum Jülich is supporting the work of excellent researchers in many areas of science with its supercomputers. Basic research such as elementary particle physics is an area where methods are forged, and the resulting tools are also welcomed by several other users," says Prof. Dr. Sebastian M. Schmidt, member of the Board of Directors at Jülich who has supported and encouraged these scientific activities for years.
Tobias Schloesser | EurekAlert!
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Materials Sciences
20.07.2018 | Physics and Astronomy
20.07.2018 | Materials Sciences | <urn:uuid:219f0f55-dcd2-4364-ac2b-1173dfe1947c> | 3.78125 | 1,551 | Content Listing | Science & Tech. | 36.215237 | 95,581,011 |
Unit 10 - Circle Problem - solution
AB is a diameter. CD is tangent to the circle at B. AC = 16cm, AD = 12cm, and AH = 2.8cm. Find as much information as possible about the segments and the angles in the diagram.
Unit 10 - Circle Problem
Law of Sines
Myp geometry booklet
CI and Test Duality | <urn:uuid:8d3f95cd-3640-4907-94cc-61890740759b> | 2.671875 | 79 | Content Listing | Science & Tech. | 81.506333 | 95,581,041 |
Internet Search Results
Geometry - Wikipedia
Geometry (from the Ancient Greek: γεωμετρία; geo-"earth", -metron "measurement") is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer.. Geometry arose independently in a number of early cultures as a practical way for dealing with lengths ...
Euclidean geometry - Wikipedia
Euclidean geometry is a mathematical system attributed to Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry: the Elements.Euclid's method consists in assuming a small set of intuitively appealing axioms, and deducing many other propositions from these.Although many of Euclid's results had been stated by earlier mathematicians, Euclid was the first to show ...
Amazon.com: Geometry for Enjoyment and Challenge ...
The best thing I ever bought (although pricey for a 20 year old school book)! My daughter is a competetive gymnast who travels several times a year and taking HS Honors Geometry as an 8th grader.
Arabic / Islamic geometry 01 - Catnaps design
Catnaps is a personal website and resource for islamic architecture, planning and design, photographs, the cassini and maraldi astronomer families and ww1 military history.
Table of Contents - Math Open Reference
While you are here.. We have used advertising to support the site so it can remain free for everyone. However, advertising revenue is falling and I have always hated the ads.
Play Geometry Dash World Online
Geometry dash online had got the internet crazy when it was released, People were getting addicted to the game instantly after playing.
Principles and Standards - National Council of Teachers of ...
This practical guide includes three 11" x 17" sheets to display the expectations across the four grade bands for each of the five Content Standards: Number and Operations, Algebra, Geometry, Data Analysis and Probability, and Measurement.
GeoGebra | Free Math Apps - used by over 100 Million ...
Get our free online math tools for graphing, geometry, 3D, and more!
Sign In. Sign in to ClassZone to get access to online books, Activity Maker, special interactive features, and more!
Arabic / Islamic geometry 02 - Catnaps
The beginnings of these design studies. These studies began a long time ago and derived from an interest I have always had in mathematics in general, and geometry in particular. | <urn:uuid:59b6c680-6cc6-45bf-af12-fd2dae6049ee> | 3 | 542 | Content Listing | Science & Tech. | 39.056813 | 95,581,060 |
Peptide Bond Formation
In this animated object, learners examine the formation of peptide bonds through dehydration synthesis.
Learners examine how five or six groups of electrons around a central atom cause the shape of the molecule to be trigonal bipyramidal, seesaw, T-shaped, linear, octahedral, square pyramidal, or square planar. Seven examples and three interactive questions are provided in this animated activity.
You may also like
Everything has been clearly displayed. I`m doing a project on protein and this has helped me alot!Posted by Abeo Andrews on 11/1/2010 11:25:34 PM Reply
Excellent, simple graphics and explanations, very understandable for the student.Posted by Roberta Sulk on 6/2/2007 12:00:00 AM Reply | <urn:uuid:f0793f1e-28a0-494d-bef3-0496b54fb2de> | 3.6875 | 167 | Tutorial | Science & Tech. | 39.895 | 95,581,075 |
Consider the following code:
x = 10 puts x
In Ruby, this sort of variable is called a local variable.
It can be used only in the same place it is defined.
It's considered to be local in scope.
It's only present within the local area of code.
def basic_method puts x end x = 10 basic_method
This example defines x to equal 10, and then jumps to a local method called basic_method.
If you ran this code, you would get an error like this:
NameError: undefined local variable or method 'x' for main:Object from (irb):2:in 'basic_method'
When you are in basic_method, you're no longer in the same scope as the variable x.
Because x is a local variable, it exists only where it was defined.
To avoid this problem, use only local variables where they're being directly used.
Here's an example where you have two local variables with the same name but in different scopes:
def basic_method x = 50 # ww w . j a v a 2 s . c o m puts x end x = 10 basic_method puts x
Here, you set x to 10 in the main code, and set x to 50 inside the method.
x is still 10 when you return to the original scope.
The x variable inside basic_method is not the same x variable that's outside of the method.
They're separate variables, distinct within their own scopes. | <urn:uuid:cdd5099a-f424-4378-88c3-33117adfb95b> | 3.765625 | 319 | Documentation | Software Dev. | 68.175 | 95,581,083 |
UK: +44 (0)1223 264428
USA: +1 (650) 798 5134
By Joshua Shenker
Are you familiar with the word tribology? After recently gaining a doctorate in the subject, I’ve spoken about it to people and have found just a small number nod in agreement, many more confuse it with the study of tribes, but most people are just outright confused. Yet even if you have never heard the word tribology before, it’s likely that you will know its subject matter, and as designers, product developers and engineers at CDP it’s increasingly relevant to our day-to-day work for our clients.
Tribology in its broadest definition is the science of interacting surfaces in relative motion, but the subject extends to the study and application of the principles of friction, wear and lubrication. It can therefore be considered a fundamental science spanning many disciplines including physics, chemistry, engineering and material science. Historically tribology has been confined to the heavier industries such as transportation, power generation, mining and manufacturing, but as technology progressed over time, the subject spilled over into new research areas and industries.
More recently, tribological research has found its way into the development of products stretching from specialist medical environments through to every-day use consumer markets. But how can studying gecko’s feet or a lotus leaf help with the design of products?
Whilst the term tribology is a fairly recent invention, its subject matter can be traced back to some of the earliest civilisations including the ancient Egyptians and Mesopotamians. There are several examples of tribological devices scattered through history involving rudimentary bearings and manually lubricated surfaces such as potter’s wheels, hinged doors and wheeled carriages.
Yet the word only really came into the public consciousness in 1966 when a UK Government committee published what is now commonly referred to as the Jost Report. It aimed to demonstrate the potential savings to industry if greater attention were paid to tribological design. Put simply, by making equipment more efficient or less prone to wear, long-term financial savings could be made in almost every industry sector by reducing maintenance costs and increasing productivity. The report quantified these combined costs in lost earnings to the UK economy, estimated in 1966 to be around £515 Million annually (£8.6 Billion in today’s money). And so, I recently attended the 50th anniversary event of the Jost Report to hear about new developments in tribological research since finishing my studies and to identify areas of interest and relevance with our work at CDP.
One interesting emerging area of research is biotribology, the study of interacting surfaces in the natural world – in plants, the environment and people. Part of this research, biomechanics aims to understand how nature creates highly articulated joints that can be self-lubricating and thus limit levels of wear. It looks into how these systems can be replicated so that artificial implants can work seamlessly with the body. Indeed, 40% of all biotribology research focuses on artificial joints, implants and prosthetics. This in itself has opened up new branches of biotribology concerning the involvement of medical intervention, aiding the development of surgical instruments, medical therapies, medical devices and machinery.
Skin tribology is the second largest research area within biotribology. It focuses on elemental research on skin friction or synthetics, with a significant level of investment also going into researching skin contact with everyday items such as clothing, creams, cosmetics, shaving equipment, and even consumer products. A lot of research, both academic and within industry, is currently being done to understand and ensure that everyday products ‘feel’ acceptable to the consumer.
This highlighted to me the importance of being able to translate the qualitative language of what a consumer likes, into a quantitative way to test against. To quote Lord Kelvin: “If you cannot measure it, you cannot improve it.” Haptics, tactile perception and surface texture all contribute to what a consumer likes when they buy or use a product, whether they realise it or not. Therefore the importance of the tribologist cannot be underestimated in being able to translate what the consumer ‘feels’ to what can be designed, engineered and measured.
The third largest research area in biotribology is oral tribology, which goes further than oral care and hygiene for dentistry applications. A new area, oral processing, focuses on food technology, food rheology, ‘mouthfeel’ and taste perception, and examines how these parameters can be measured when developing ‘newer’ foods. A good example is chocolate, where it’s been found that consumers prefer the feel of smooth, low-viscosity melted chocolate. Sounds simple enough, but actually a tough order to fill in today’s mass manufactured world, especially with chocolate containing a complex array of particles and agglomerates, with non-homogenous properties both difficult to model and predict how they will interact. With the help of biotribology, the chocolatiers adjusted certain properties, including the inclusion of surfactants to prevent agglomeration and by refining the particles to no more than about 25 microns in diameter, so that the still solid chocolate would feel to consumers as though it were ‘liquid smooth’.
The list of areas within biotribology that are yielding interesting findings goes on, including biomimetics, which looks at nature for the development of bio-lubricants, smart fluids, or new materials with specific surface textures and chemistries. Novel research areas are looking into issues such as what causes a gecko’s feet to adhere so well to surfaces. Why is a shark’s skin more streamlined than that of other marine life? And why does water not stick to the leaf of a lotus flower? The implications of this kind of research allows for newer and greater improvements in a wide range of technology that is used extensively in different applications - whether it’s the latest sports equipment, fabrics, kitchenware or waterproof electronics.
Tribology is all around us naturally, and we benefit from it all the time, mostly unintentionally. With a greater appreciation of it as a science, and its occurrence in nature, gaining a better understanding of it can benefit us all in some shape or measure. So whether it’s the latest self-cleaning touchscreen device, or the newest non-stick household appliance, you have both nature and a tribologist to thank for that.
Why it’s crucial to design and engineer with safety in mind right from the start.
16 July 2018
Lifting the lid on how to save time and money with rapid design, development and manufacturing.
11 July 2018
Stay up to date with all our work and our latest news by signing up to our newsletter. | <urn:uuid:993d1089-e128-4da9-b750-c6118f30e227> | 2.84375 | 1,413 | Personal Blog | Science & Tech. | 27.839682 | 95,581,085 |
AT first glance, the scene might have seemed like a wildfire out of control. Orange flames, driven by a gusting wind, crackled across the prairie and charged into a stand of pine trees, sending a swirling cloud-castle of smoke billowing into the air. Black desolation remained where the fire had passed.
But the people wearing fire-resistant jumpsuits and helmets, with tanks full of water on their backs, were not trying to put out the fire. They were tending it, like shepherds. They had, in fact, set the blaze on purpose, following a precise, written prescription, in an effort to use fire in its historic role as a shaper, cleanser and revitalizer of nature -- in this case, the biologically diverse sand plains of Martha's Vineyard.
In the dual cause of ecological restoration and of pre-empting the growing danger of runaway wildfires, scenes like the one last week on the Vineyard's Katama Plains, part of a globally endangered type of ecosystem, are being repeated from coast to coast with increasing frequency and urgency.
For some 10,000 years before Europeans came to North America, fires set by lightning and by Indians swept the landscape so routinely that ecosystems became as dependent on flame as on rainfall and sunlight; too little or too much at the wrong time meant ecological disaster. In this century, however, Americans have generally seen fire in the wild as an enemy to be stamped out, an attitude reinforced by a long series of powerful images: Bambi fleeing the burning forest. Smokey Bear intoning that "only you" can prevent forest fires. The 1988 firestorm in Yellowstone, followed by the California blazes of a year ago. This year's epidemic of western wildfires, which killed 25 firefighters.
Yet despite determined efforts to suppress fire in nature, wildfires are becoming more numerous, intense and difficult to control, and people are compounding the danger by building in fire-dependent ecosystems. By depriving ecosystems of fire, experts in fire ecology say, people have allowed dead wood, twigs, bark, leaves and needles to accumulate, providing more fuel to feed bigger wildfires than would be the case if the landscape were allowed to burn naturally.Continue reading the main story
"The longer you let fuel accumulate, the worse the fire's going to be, and the higher the probability a fire will start and burn out of control," said Dr. Dennis H. Knight, an ecologist at the University of Wyoming at Laramie.
It is not a question of "whether these areas will burn, but only a question of when," Jack Ward Thomas, director of the United States Forest Service, told Congress in August. He was referring to the intermountain West, where the problem is especially severe, but the statement applies to other areas too.
The situation has again focused attention on prescribed burning, a venerable practice often ignored in the 20th century's rush to put fires out. Now it is favored by many experts as an important tool in reducing the accumulated fuels and thereafter maintaining healthier and less dangerous ecosystems.
While prescribed burning for both conservation and fire control has become increasingly accepted in many parts of the country, it is caught up in a furious debate among loggers, ranchers, environmentalists and government agencies in much of the West. There is wide agreement that something must be done to reduce the accumulating fuels, but there the consensus ends.
Little scientific disagreement exists, however, over fire's ecological importance. Before European settlement, frequent fires in many forests kept fuels sparse enough to prevent larger, catastrophic wildfires from sterilizing the soil and making it impermeable to water. Since not every patch of landscape burned at once, different areas were in different stages of recovery from fire at any given time. This created a variety of habitats; fire thus became a primary agent of biological diversity.
The sand plains of Martha's Vineyard are a case in point. Essentially a grassland punctuated by pitch pine and scrub oak, the plains burned periodically, in patches, before Europeans arrived. Fire affects it in this way:
In the first stage of what scientists call ecological succession, fire cleans the ground of dead vegetation and reduces the shade provided by trees and shrubs like bayberry and huckleberry. This opens up new places for seeds to germinate, promoting regeneration of the ecosystem.
In the next stage, grasses like little bluestem and wildflowers like sand-plain blue-eyed grass, bastard toadflax and bushy rock rose dominate the site. The deep roots of these fire-adapted plants allow their green parts to spring back above ground almost immediately after a fire. They also attract a characteristic assemblage of insect species that depend on them: the wild indigo borer, for instance, which lays its eggs on the wild indigo plant. In the third stage, the site becomes a heathland dominated by shrubs. Many small mammals like meadow voles set up housekeeping, and predators like the northern harrier nest in the dense shrubbery.
If fire is withheld indefinitely, as happened in this century, the site becomes dominated first by scrub oak and pitch pine and finally by oak woodland. The number and variety of species plunges sharply, with the least common species the first to go. The woodland is taken over by a relatively few common species like bluejays, crows and white-tailed deer.
And when new houses surround the site, as they have in many parts of the Vineyard over the last 20 years, the buildup of fuel poses a positive danger. "Prevailing winds could drive fire onto anyone's property there," said Tom Chase, who manages the sand-plains recovery effort for the Nature Conservancy in cooperation with a number of public and private organizations.
Combined with other disturbances, like windstorms and the grazing of animals, fire historically kept many other kinds of ecosystems in a constant state of healthy flux. These included prairies, savannas, forests and even wetlands. Countless plant and animal species owe their survival directly to fire. The sequoia and several kinds of pines require the heat from fire to open their cones and release the seeds, for instance, and the endangered Kirtland's warbler of Michigan builds its nest only on the lower, fire-killed branches of jack pines. Fire also promotes biodiversity by preventing a relatively few aggressive native weeds and exotic species from running rampant and crowding out many more less common natives.
The absence of fire also contributes to the danger of larger wildfires by making ecosystems less stable, ecologists say. In many forest ecosystems, for instance, the exclusion of fires has increased tree density, producing a richer habitat for tree-killing insects whose depradations increase the load of deadwood fuel.
"The conditions are just so nasty, it really is frustrating," says R. Neil Sampson, the chairman of a national commission on wildfire disasters that reported to Congress earlier this year. Mr. Sampson is also the executive vice president of American Forests, a conservation group.
The hottest topic in the new fire debate is what to do in the growing number of areas, like Southern California and the Rockies, where humans are increasingly building homes in fire-adapted ecosystems. In addition to California chaparral, the ponderosa pine forests of the mountain West are a primary example. Their low elevation and good climate make them especially attractive, but denial of burning has turned them from relatively open lands with scattered trees into dense, highly flammable forests.
What should be done?
In California, Dr. Daniel B. Botkin, an ecologist who heads the Center for the Study of the Environment in Santa Barbara, proposes setting up buffer zones between areas of highly flammable chaparral and built-up areas. The buffers would be purposely burned so that wildfires could not start in them.
In forest ecosystems of the West, timber and cattle interests favor selective logging and grazing as the prime means of reducing fuel. Some environmentalists object to sending more people and machines into an already disturbed ecosystem, and favor a "let burn" policy. Middle-of-the-roaders propose a mixture of approaches. Dr. Thomas of the Forest Service, for instance, favors thinning of forests and mechanical removal of fuel, followed by prescribed burning where practical.
The Sampson commission cited a 1992 fire near Boise, Idaho, that raced out of control, killing all vegetation, scorching and solidifying soil and obliterating entire wildlife populations. But when the fire encountered a stand of ponderosa pine that had been thinned two years earlier and then subjected to a prescribed ground fire to reduce fuels, it immediately slowed and allowed firefighters to move in and halt its advance. The thinned part of the forest was virtually undamaged. Had all the forest been thinned and placed under a burning regime, the commission said, the fire might have been a "nondestructive event" instead of a $24 million disaster.
Many difficulties lie in the way of widespread use of prescribed burning to reduce fire hazards, however. Cost, for instance, favors fire suppression over burning, says Dr. Stephen J. Pyne of the University of Arizona, a historian of fire and fire policy. Public agencies tend to finance firefighting but not fire-setting. Getting permits from agencies concerned with air-quality control and protection of threatened species, not to mention fending off opponents, can cause damaging delays. (Although prescribed fire produces smoke, its advocates argue that the smoke from bigger, out-of-control blazes is worse. Although some individual members of threatened species might die in a fire, they say, the entire species may become locally extinct if fire is not re-introduced.)
There is, of course, the problem of safety. Prescriptions for a controlled fire are so exacting that in some parts of the nation, Mr. Sampson says, there are "between three and zero days a year" when safe conditions prevail.
Despite the obstacles, prescribed burning for many purposes has made much headway in some parts of the country, especially the South. Florida, for instance, is widely considered to have made prescribed burning a sophisticated and systematic science. It has passed a law exempting property owners from liability for prescribed burning except where negligence is shown, and some other states have used the law as a model for legislation of their own. Florida has burned more than 1.6 million acres so far this year. On posters in that state, a new slogan has displaced the one touted by Smokey Bear. "Using fire wisely prevents forest fires," it says.
But reform of fire policy in most wild lands outside nature reserves may be as difficult as reforming health care policy, Dr. Pyne said. Consequently, he said, the strongest argument for prescribed burning, at least for now, is its value in restoring and maintaining biodiversity. Nature preserves, he said, are "suffering from fire famine."
Many conservationists across the country are trying to remedy that, and often they end up reducing fire hazards as well. Such is the case on Martha's Vineyard, where 1,000 acres of sand plain are being conserved.
The other day, a 29-acre segment at Katama Airport was the target. It had been burned twice before, reducing the fuel load to the point where it posed less hazard to nearby oceanfront houses. Among other things, the burn prescription required winds blowing from within an arc between northwest and northeast at 5 to 15 miles an hour, with an air temperature of 30 to 70 degrees and relative humidity of 30 to 80 percent. This, it was calculated, would produce the cleanest and safest burn.
Everything was right, but the wind speed -- steady at 8 to 10 miles an hour, according to a hand-held anemometer, with gusts to 14 -- was worrisome. "If it gets up to 15 to 20 sustained, I'll shut it down," said Tim Simmons, the Nature Conservancy's New England fire manager, who was directing the burn. Directly downwind from the site was a large, expensive house.
Little by little, using a device called a drip torch, Tom Chase applied gobbets of fire to the downwind edge of the plot while another worker with a fire hose kept the adjacent edge of the firebreak wet. The flames from this backfire slowly burned upwind, gradually creating a black strip perhaps 50 feet wide. The same procedure was applied to the flanks of the plot. And then, upwind of the backfire, Mr. Chase walked directly from one flank to another, dropping fire all the way. This blaze rapidly, cracklingly swept downwind, consuming green pines, russet, oil-filled huckleberry bushes and almost everything else in its path. Meeting the backfire, it went out.
This was repeated three times, until the entire plot was fired. An occasional hawk or rabbit fled the fire zone, and two hours after the first flame was applied came the word over walkie-talkies: "Ignition complete."
If the plot had posed any danger of wildfire before, it obviously posed none now.
And already, the ecosystem was beginning to recover. "It's a hurricane of biological activity under the soil right now," Mr. Chase said. The microbial community of fungi and algae under the ashes, responding to the heat, was rearranging itself to support a new successional cycle.
Ecological renewal was under way.Continue reading the main story | <urn:uuid:ff18e327-0317-4ece-b217-68a1a497d12b> | 3.4375 | 2,762 | Truncated | Science & Tech. | 38.575344 | 95,581,086 |
In a revolutionary leap that could transform solar power from a marginal, boutique alternative into a mainstream energy source, MIT researchers have overcome a major barrier to large-scale solar power: storing energy for use when the sun doesn't shine.
Until now, solar power has been a daytime-only energy source, because storing extra solar energy for later use is prohibitively expensive and grossly inefficient. With today's announcement, MIT researchers have hit upon a simple, inexpensive, highly efficient process for storing solar energy.
Requiring nothing but abundant, non-toxic natural materials, this discovery could unlock the most potent, carbon-free energy source of all: the sun. "This is the nirvana of what we've been talking about for years," said MIT's Daniel Nocera, the Henry Dreyfus Professor of Energy at MIT and senior author of a paper describing the work in the July 31 issue of Science. "Solar power has always been a limited, far-off solution. Now we can seriously think about solar power as unlimited and soon."
Inspired by the photosynthesis performed by plants, Nocera and Matthew Kanan, a postdoctoral fellow in Nocera's lab, have developed an unprecedented process that will allow the sun's energy to be used to split water into hydrogen and oxygen gases. Later, the oxygen and hydrogen may be recombined inside a fuel cell, creating carbon-free electricity to power your house or your electric car, day or night.
The key component in Nocera and Kanan's new process is a new catalyst that produces oxygen gas from water; another catalyst produces valuable hydrogen gas. The new catalyst consists of cobalt metal, phosphate and an electrode, placed in water. When electricity -- whether from a photovoltaic cell, a wind turbine or any other source -- runs through the electrode, the cobalt and phosphate form a thin film on the electrode, and oxygen gas is produced.
Combined with another catalyst, such as platinum, that can produce hydrogen gas from water, the system can duplicate the water splitting reaction that occurs during photosynthesis.
The new catalyst works at room temperature, in neutral pH water, and it's easy to set up, Nocera said. "That's why I know this is going to work. It's so easy to implement," he said.
'Giant leap' for clean energy
Sunlight has the greatest potential of any power source to solve the world's energy problems, said Nocera. In one hour, enough sunlight strikes the Earth to provide the entire planet's energy needs for one year.
James Barber, a leader in the study of photosynthesis who was not involved in this research, called the discovery by Nocera and Kanan a "giant leap" toward generating clean, carbon-free energy on a massive scale.
"This is a major discovery with enormous implications for the future prosperity of humankind," said Barber, the Ernst Chain Professor of Biochemistry at Imperial College London. "The importance of their discovery cannot be overstated since it opens up the door for developing new technologies for energy production thus reducing our dependence for fossil fuels and addressing the global climate change problem."
'Just the beginning'
Currently available electrolyzers, which split water with electricity and are often used industrially, are not suited for artificial photosynthesis because they are very expensive and require a highly basic (non-benign) environment that has little to do with the conditions under which photosynthesis operates.
More engineering work needs to be done to integrate the new scientific discovery into existing photovoltaic systems, but Nocera said he is confident that such systems will become a reality.
"This is just the beginning," said Nocera, principal investigator for the Solar Revolution Project funded by the Chesonis Family Foundation and co-Director of the Eni-MIT Solar Frontiers Center. "The scientific community is really going to run with this."
Nocera hopes that within 10 years, homeowners will be able to power their homes in daylight through photovoltaic cells, while using excess solar energy to produce hydrogen and oxygen to power their own household fuel cell. Electricity-by-wire from a central source could be a thing of the past.
The project is part of the MIT Energy Initiative, a program designed to help transform the global energy system to meet the needs of the future and to help build a bridge to that future by improving today's energy systems. MITEI Director Ernest Moniz, Cecil and Ida Green Professor of Physics and Engineering Systems, noted that "this discovery in the Nocera lab demonstrates that moving up the transformation of our energy supply system to one based on renewables will depend heavily on frontier basic science."
The success of the Nocera lab shows the impact of a mixture of funding sources - governments, philanthropy, and industry. This project was funded by the National Science Foundation and by the Chesonis Family Foundation, which gave MIT $10 million this spring to launch the Solar Revolution Project, with a goal to make the large scale deployment of solar energy within 10 years. | <urn:uuid:42b2b9a4-f174-473c-afd6-ea3a7ab01f46> | 3.78125 | 1,041 | News Article | Science & Tech. | 26.866306 | 95,581,142 |
Information geometry is a branch of mathematics that applies the techniques of differential geometry to the field of probability theory. This is done by taking probability distributions for a statistical model as the points of a Riemannian manifold, forming a statistical manifold. The Fisher information metric provides the Riemannian metric.
Information geometry reached maturity through the work of Shun'ichi Amari and other Japanese mathematicians in the 1980s. Amari and Nagaoka's book, Methods of Information Geometry, is cited by most works of the relatively young field due to its broad coverage of significant developments attained using the methods of information geometry up to the year 2000. Many of these developments were previously only available in Japanese-language publications. A more recent introduction is given in (Ay et al. 2017).
The following introduction is based on Methods of Information Geometry.
Information and probability
Define an n-set to be a set V with cardinality |V| = n. To choose an element v (value, state, point, outcome) from an n-set V, one needs to specify log_b n b-sets, if one disregards all but the cardinality. That is, ln n nats of information are required to specify v; equivalently, log_2 n bits are needed.
By considering the occurrences c_v of the values v from V in a message of length m, one has an alternate way to refer to v, through its occurrences. First, one chooses one of the m positions of the message, which requires log_2 m bits of information. To specify v, one subtracts the excess information used to choose one particular occurrence from all those linked to v, which is log_2 c_v. Then, m/c_v is the number of equally sized portions fitting into the message. Thus, one needs log_2(m/c_v) bits to choose one of them. So the information (variable size, code length, number of bits) needed to refer to v, considering its occurrences in a message, is −log_2(c_v/m).
Finally, p_v = c_v/m is the normalized portion of information needed to code all occurrences of one v. The averaged code length over all values is −Σ_v p_v log p_v.
H(X) = −Σ_v p_v ln p_v (with the logarithm taken in nats) is called the entropy of the random variable X.
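A minimal numerical illustration of the entropy formula (the distribution below is an arbitrary illustrative choice, not taken from the text):

```python
import math

def entropy(p, base=math.e):
    """Return -sum_v p_v * log(p_v) for a discrete distribution given as a list."""
    return -sum(pv * math.log(pv, base) for pv in p if pv > 0)

p = [0.5, 0.25, 0.25]        # a distribution over a 3-set
print(entropy(p))            # entropy in nats
print(entropy(p, 2))         # entropy in bits: 1.5
```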
Statistical model, Parameters
With a probability distribution p one looks at a variable X through an observation context like a message or an experimental setup.
The context can often be identified by a set of parameters ξ = (ξ^1, …, ξ^n) through combinatorial reasoning. The parameters can have an arbitrary number of dimensions and can be very local or less so, as long as the context given by a certain ξ produces every value of X, i.e. the support does not change as a function of ξ. Every ξ determines one probability distribution p(x; ξ) for X. Basically all distributions for which there exists an explicit analytical formula fall into this category (binomial, normal, Poisson, ...). The parameters in these cases have a concrete meaning in the underlying setup, which is a statistical model for the context of X.
The parameters ξ are quite different in nature from X itself, because they do not describe X, but the observation context for X.
A parameterization of the form
p(x; ξ) = Σ_i ξ^i p_i(x), with Σ_i ξ^i = 1 and ξ^i ≥ 0,
that mixes different distributions p_i(x), is called a mixture distribution, m-parameterization, or mixture for short. All such parameterizations are related through an affine transformation ξ ↦ Aξ + b. A parameterization with such a transformation rule is called flat.
A flat parameterization for ℓ = ln p(x; ξ) is an exponential or e-parameterization, because the parameters appear in the exponent of p(x; ξ). There are several important distributions, like the normal and Poisson distributions, that fall into this category. These distributions are collectively referred to as the exponential family or e-family. The p-manifold for such distributions is not affine, but the ln p manifold is. This is called e-affine. The parameterization ln p(x; θ) = C(x) + Σ_i θ^i F_i(x) − ψ(θ) for the exponential family can be mapped to the one above by treating ψ(θ) as one more parameter multiplying a constant function and extending the set of functions F_i accordingly.
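As a worked illustration (not part of the original text), the Bernoulli distribution on x ∈ {0, 1} can be written in both of the above forms; θ, μ and ψ denote the usual natural parameter, mean parameter and log-partition function:

```latex
% mixture (m-) form: a convex combination of the two point masses
p(x;\mu) = \mu\,\delta_{x,1} + (1-\mu)\,\delta_{x,0}, \qquad 0 \le \mu \le 1

% exponential (e-) form: \ln p is affine in the natural parameter \theta = \ln\frac{\mu}{1-\mu}
\ln p(x;\theta) = \theta x - \psi(\theta), \qquad
\psi(\theta) = \ln\bigl(1 + e^{\theta}\bigr), \qquad
\mu = \psi'(\theta) = \frac{e^{\theta}}{1 + e^{\theta}}
```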
Differential geometry applied to probability
In information geometry, the methods of differential geometry are applied to describe the space of probability distributions for one variable X. This is done by using a coordinate system or atlas ξ = (ξ^1, …, ξ^n). Furthermore, the probability p(x; ξ) must be a differentiable and invertible function of ξ. In this case, the ξ^i are coordinates of the p-space, and the latter is a differential manifold M.
Derivatives are defined as is usual for a differentiable manifold:
∂_i f(ξ) := ∂f(p(x; ξ))/∂ξ^i,
with ∂_i := ∂/∂ξ^i, for a real-valued function f on M.
Given a function f of the probabilities, one may "geometrize" it by taking it to define a new manifold. This is done by defining coordinate functions on this new manifold as
ℓ_f(x; ξ) = f(p(x; ξ)).
In this way one "geometricizes" a function f, by encoding it into the coordinates used to describe the system.
For f = ln the inverse is exp, and the resulting manifold of ℓ = ln p points is called the e-representation. The p manifold itself is called the m-representation. The e- or m-representations, in the sense used here, do not refer to the parameterization families of the distribution.
In standard differential geometry, the tangent space on a manifold M at a point p is given by:
T_p M = { X = Σ_i X^i ∂_i | X^i ∈ ℝ }.
In ordinary differential geometry, there is no canonical coordinate system on the manifold; thus, typically, all discussion must be with regard to an atlas, that is, with regard to functions on the manifold. As a result, tangent spaces and vectors are defined as operators acting on this space of functions. So, for example, in ordinary differential geometry, the basis vectors of the tangent space are the operators ∂/∂ξ^i.
However, with probability distributions p(x; ξ), one can calculate value-wise, i.e. point-wise in x. So it is possible to express a tangent space vector directly as X = Σ_i X^i ∂_i p (m-representation) or X = Σ_i X^i ∂_i ln p (e-representation), and not as operators.
Important functions of p are coded by a parameter α, with the important values −1, 0 and 1:
- mixed or m-representation (α = −1): ℓ^(−1) = p
- exponential or e-representation (α = 1): ℓ^(1) = ln p
- α-representation (α ≠ 1): ℓ^(α) = 2/(1 − α) · p^((1−α)/2), in particular ℓ^(0) = 2√p
Distributions that allow a flat parameterization
ℓ^(α)(x; ξ) = C(x) + Σ_i ξ^i F_i(x)
are called collectively the α-family (m-, 0- or e-family) of distributions, and the corresponding manifold is called α-affine.
The tangent vector is then X = Σ_i X^i ∂_i ℓ^(α).
One may introduce an inner product on the tangent space of the manifold M at a point ξ as a linear, symmetric and positive definite map
⟨·, ·⟩_ξ : T_ξM × T_ξM → ℝ.
This allows a Riemannian metric to be defined; the resulting manifold is a Riemannian manifold. All of the usual concepts of ordinary differential geometry carry over, including the norm
‖X‖ = √⟨X, X⟩_ξ,
the line element ds², the volume element dV, and the cotangent space
T*_ξM,
that is, the dual space to the tangent space T_ξM. From these, one may construct tensors, as usual.
Fisher metric as inner product
For probability manifolds such an inner product is given by the Fisher information metric.
Here are equivalent formulas of the Fisher information metric.
g_ij(ξ) = E[ ∂_i ℓ ∂_j ℓ ] = ∫ ∂_i ln p(x; ξ) ∂_j ln p(x; ξ) p(x; ξ) dx. Here ∂_i ℓ = ∂_i ln p(x; ξ), the base vector in the e-representation, is also called the score.
- g_ij(ξ) = ∫ ∂_i ℓ^(α) ∂_j ℓ^(−α) dx. This is the same for the α and −α families.
- g_ij(ξ) = −∂_i ∂'_j D(p(ξ), p(ξ'))|_{ξ'=ξ}, where D(p, q) is a divergence with minimum for p = q; the minimum entails D(p, p) = 0 and ∂_i D|_{ξ'=ξ} = 0.
Here ∂_i is applied only to the first parameter, and ∂'_j only to the second.
D_KL(p‖q) = ∫ p(x) ln( p(x)/q(x) ) dx is the Kullback–Leibler divergence or relative entropy, applicable to the e-families.
For α = 0 one has ℓ^(0) = 2√p.
D_H(p, q) = ∫ ( √p(x) − √q(x) )² dx is the squared Hellinger distance, applicable to the 0-family. It also evaluates to the Fisher metric (up to a constant factor).
This relation with a divergence will be revisited further down.
The Fisher metric is motivated by
- it satisfying the requirements for an inner product
- its invariance under a sufficient statistic, i.e. a deterministic mapping from one variable to another that preserves all information about ξ, and more generally its monotonicity under arbitrary mappings y = f(x): a broadened distribution has a smaller metric.
- it being the Cramér–Rao bound.
The last point follows the standard argument: the score has zero mean, E[∂_i ℓ] = 0, and its covariance matrix is g_ij; decomposing an unbiased estimator into its component along the score and an orthogonal remainder in T_p then yields the Cramér–Rao bound Var[ξ̂] ≥ g(ξ)^(−1) (in the matrix sense), with equality exactly for efficient estimators.
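A small numerical check of the first formula for the Fisher metric, here for the Bernoulli family in its mean parameterization; the family, the parameter value and the sample size are illustrative choices, not taken from the text:

```python
import math
import random

def score(x, mu):
    """d/dmu ln p(x; mu) for an observation x in {0, 1} of a Bernoulli(mu) variable."""
    return x / mu - (1 - x) / (1 - mu)

mu = 0.3
analytic = 1.0 / (mu * (1.0 - mu))    # closed form: g(mu) = 1/(mu(1-mu))

random.seed(0)
n = 200_000
sample = [1 if random.random() < mu else 0 for _ in range(n)]
estimate = sum(score(x, mu) ** 2 for x in sample) / n   # Monte Carlo E[(d ln p)^2]

print(analytic, estimate)    # the two values agree up to sampling noise
```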
Like commonly done on Riemannian manifolds, one may define an affine connection (or covariant derivative)
∇ : (X, Y) ↦ ∇_X Y.
Given vector fields X and Y lying in the tangent bundle TM, the affine connection ∇_X Y describes how to differentiate the vector field Y along the direction X. It is itself a vector field; it is the sum of the infinitesimal change in the vector field Y, as one moves along the direction X, plus the infinitesimal change of the vector Y due to its parallel transport along the direction X. That is, it takes into account the changing nature of what it means to move a coordinate system in a "parallel" fashion, as one moves about in the manifold. In terms of the basis vectors ∂_i, one has the components:
∇_{∂_i} ∂_j = Γ_ij^k ∂_k.
The Γ_ij^k are the Christoffel symbols. The affine connection may be used for defining curvature and torsion, as is usual in Riemannian geometry.
A non-metric connection is not determined by a metric tensor g_ij; instead, it is restricted by the requirement that the parallel transport Π_{q,p} between points p and q must map a base vector into a linear combination of the base vectors in T_qM. Here,
Π_{q,p}(∂_i) = ∂_i − dξ^j Γ_ji^k ∂_k (to first order in the displacement dξ)
expresses the parallel transport of ∂_i as a linear combination of the base vectors in T_qM, i.e. the new vector minus the change. Note that the Γ_ij^k are not a tensor (they do not transform as a tensor).
For such a metric, one can construct a dual connection ∇* to make
Z ⟨X, Y⟩ = ⟨∇_Z X, Y⟩ + ⟨X, ∇*_Z Y⟩,
i.e. ⟨X, Y⟩ stays constant for parallel transport using ∇ on the first and ∇* on the second argument.
For the mentioned α-families the affine connection is called the α-connection ∇^(α) and can also be expressed in more ways:
- Γ^(α)_{ij,k} = E[ ( ∂_i ∂_j ℓ + (1 − α)/2 · ∂_i ℓ ∂_j ℓ ) ∂_k ℓ ]. ∇^(0) is a metric (Levi-Civita) connection, and Γ^(α)_{ij,k} = Γ^(0)_{ij,k} − (α/2) T_ijk with the symmetric tensor T_ijk = E[ ∂_i ℓ ∂_j ℓ ∂_k ℓ ].
- ∇^(−α) is dual to ∇^(α) with respect to the Fisher metric.
- If a manifold is flat with respect to ∇^(α), it is called α-affine. Its dual is then (−α)-affine.
- i.e. 0-affine, and hence , i.e. 1-affine.
A function D(p, q) of two distributions (points) with minimum for p = q entails D(p, p) = 0 and ∂_i D|_{q=p} = ∂'_j D|_{q=p} = 0.
∂_i is applied only to the first parameter, and ∂'_j only to the second.
∂'_j is the direction which brought the two points to be equal when applied to the first parameter, and to diverge again when applied to the second parameter, i.e. ∂_i ∂'_j D|_{q=p} ≤ 0. The sign cancels in
g^(D)_ij(ξ) = −∂_i ∂'_j D(p(ξ), p(ξ'))|_{ξ'=ξ},
which we can define to be a metric, if always positive (definite).
The absolute derivative of g^(D) along the coordinate directions yields candidates for dual connections, e.g.
Γ^(D)_{ij,k} = −∂_i ∂_j ∂'_k D|_{q=p} and Γ^(D*)_{ij,k} = −∂'_i ∂'_j ∂_k D|_{q=p}.
This metric and the connections relate to the Taylor series expansion of the divergence in the first parameter or the second parameter. Here for the first parameter:
D(p(ξ + Δξ), p(ξ)) = ½ g_ij(ξ) Δξ^i Δξ^j + O(Δξ³),
where the third-order terms involve the connection coefficients.
The term D(p, q) is called the divergence or contrast function. A good choice is
D_f(p, q) = ∫ p(x) f( q(x)/p(x) ) dx,
with f(u) convex for u > 0 and f(1) = 0.
From Jensen's inequality it follows that and, for , we have
which is the Kullback-Leibler divergence or relative entropy
applicable to the -families.
In the above,
is the Fisher metric.
For a different yields
The Hellinger distance applicable to the -family is
In this case, also evaluates to the Fisher metric.
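The relation between a divergence and the metric it induces, referred to in this passage, is usually written (notation again supplied here) as
\[
g_{ij}(\theta) \;=\; -\,\partial_i\,\partial'_j\,D(\theta\,\|\,\theta')\Big|_{\theta'=\theta},
\]
where \(\partial_i\) acts on the first argument and \(\partial'_j\) on the second; for \(D=D_{\mathrm{KL}}\), and up to a constant factor for the squared Hellinger distance, this evaluates to the Fisher metric.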
We now consider two manifolds and , represented by two sets of coordinate functions and . The corresponding tangent space basis vectors will be denoted by
The bilinear map associates a quantity to the dual base vectors. This defines an affine connection for and affine connection for that keep constant for parallel transport of and , defined through and .
If is flat, then there exists a coordinate system , that does not change over .
In order to keep constant, must not change either, i.e. is also flat. Furthermore, in this case, we can choose coordinate systems such that
If results as a function on , then making , both coordinate system function sets describe .
The connections are such, though, that makes flat and makes flat. This dual space is denoted as .
- Because of the linear transform between the flat coordinate systems, we have and .
- Because and so for it is possible to define two potentials and through and (Legendre transform). These are and .
This naturally leads to the following definition of a canonical divergence:
Note the summation that is a representation of the metric due to .
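A standard way to write the canonical divergence of a dually flat manifold (the formula itself is missing above; this sketch uses the usual potentials \(\psi\), \(\varphi\) and coordinate pairs \((\theta,\eta)\)) is
\[
D(p\,\|\,q)\;=\;\psi(\theta_p)+\varphi(\eta_q)-\theta_p^{\,i}\,\eta^{q}_{\,i},
\]
together with the Legendre relation \(\psi(\theta)+\varphi(\eta)-\theta^i\eta_i=0\) whenever \(\theta\) and \(\eta\) are coordinates of the same point; the summation \(\theta^i\eta_i\) is the term alluded to in the note above.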
Properties of divergence
The meaning of the canonical divergence depends on the meaning of the metric
and vice versa ( ).
For the metric (Fisher metric) with the dual connections this is the relative entropy.
For the self-dual Euclidean space leads to
Similar to the Euclidean space the following holds:
- Triangular relation: (just substitute )
If is not dually flat then this generalizes to:
The last part drops in case of dual flatness. is the exponential map.
- Pythagorean Theorem: for points joined by geodesics meeting on orthogonal lines (written out after this list)
For and with a -autoparallel sub-manifold implies that the -geodesic connecting and is orthogonal to .
- By projecting onto of a curve one can calculate
the divergence of the curve where
and with .
With this becomes .
For an autoparallel sub-manifold parallel transport in it can be expressed with the sub-manifold's base vectors, i.e. .
A one-dimensional autoparallel sub-manifold is a geodesic.
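The Pythagorean relation mentioned in the list above can be written out as
\[
D(p\,\|\,r)\;=\;D(p\,\|\,q)+D(q\,\|\,r),
\]
which holds when the two geodesics meeting at \(q\) (one primal and one dual; which is which depends on the sign conventions adopted for \(D\)) intersect orthogonally with respect to the Fisher metric.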
Canonical divergence for the exponential family
For the exponential family one has .
Applying on both sides yields .
The other potential ( is entropy,
and was used).
is the covariance of , the Cramér–Rao bound,
i.e. an efficient estimator must be exponential.
The canonical divergence is given by the Kullback-Leibler divergence
and the triangulation is .
The minimal divergence to a sub-manifold given by a restriction like some constant means maximizing .
With this corresponds to the maximum entropy principle.
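As a sketch of the statements in this subsection in explicit (assumed) notation: for an exponential family \(p(x;\theta)=\exp\bigl(C(x)+\theta^i F_i(x)-\psi(\theta)\bigr)\),
\[
\eta_i=\operatorname{E}_\theta[F_i(x)]=\partial_i\psi(\theta),
\qquad
g_{ij}(\theta)=\partial_i\partial_j\psi(\theta)=\operatorname{Cov}_\theta[F_i,F_j],
\]
\[
D_{\mathrm{KL}}\bigl(p_\theta\,\|\,p_{\theta'}\bigr)
=\psi(\theta')-\psi(\theta)-(\theta'-\theta)^i\,\partial_i\psi(\theta),
\]
so the Fisher metric is the covariance of the sufficient statistics (the quantity appearing in the Cramér–Rao bound) and the canonical divergence reduces to the Kullback-Leibler divergence; the dual potential \(\varphi(\eta)=\theta^i\eta_i-\psi(\theta)\) equals the negative entropy when the carrier term \(C(x)\) vanishes.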
Canonical divergence for general alpha families
For general -affine manifolds with one has:
The connection induced by the divergence is not flat unless .
Then the Pythagorean theorem for two curves intersecting orthogonally at is:
The history of information geometry is associated with the discoveries of many researchers, among them C. R. Rao, N. N. Chentsov (Cencov), and Shun'ichi Amari.
Information geometry can be applied where parametrized distributions play a role.
Here is an incomplete list:
- statistical inference
- time series and linear systems
- quantum systems
- neural networks
- machine learning
- statistical mechanics
- mathematical finance
There are several commercial initiatives which claim to have Information Geometry as a crucial element in their applications.
Here is an incomplete list:
- Institut des Sciences Informatiques et de leurs Interactions
- Tangent Works
- Shun'ichi Amari, Hiroshi Nagaoka - Methods of information geometry, Translations of mathematical monographs; v. 191, American Mathematical Society, 2000 (ISBN 978-0821805312)
- Nihat Ay, Jürgen Jost, Hông Vân Lê, Lorenz Schwachhöfer (2017) Information Geometry
- Shun'ichi Amari, Hiroshi Nagaoka (2000) Methods of Information Geometry, Translations of Mathematical Monographs; v. 191, American Mathematical Society, (ISBN 978-0821805312)
- Shun'ichi Amari (1985) Differential-geometrical methods in statistics, Lecture Notes in Statistics, Springer-Verlag, Berlin.
- M. Murray and J. Rice (1993) Differential geometry and statistics, Monographs on Statistics and Applied Probability 48, Chapman and Hall.
- F. Nielsen (2010) Legendre transformation and information geometry, Memo note
- F. Nielsen (2013) Cramer-Rao Lower Bound and Information Geometry, Connected at Infinity II: On the work of Indian mathematicians (R. Bhatia and C.S. Rajan, Eds.), special volume of Texts and Readings In Mathematics (TRIM), Hindustan Book Agency
- R. E. Kass and P. W. Vos (1997) Geometrical Foundations of Asymptotic Inference, Series in Probability and Statistics, Wiley.
- N. N. Cencov (1982) Statistical Decision Rules and Optimal Inference, Translations of Mathematical Monographs; v. 53, American Mathematical Society
- Giovanni Pistone, and Sempi, C. (1995). "An infinite-dimensional geometric structure on the space of all the probability measures equivalent to a given one", Annals of Statistics. 23(5): 1543–1561.
- Brigo, D, Hanzon, B, Le Gland, F. (1999) "Approximate nonlinear filtering by projection on exponential manifolds of densities", Bernoulli 5: 495 - 534, ISSN 1350-7265
- Brigo, D, (1999) "Diffusion Processes, Manifolds of Exponential Densities, and Nonlinear Filtering", in Ole E. Barndorff-Nielsen and Eva B. Vedel Jensen, editors, Geometry in Present Day Science, World Scientific
- Arwini, Khadiga, Dodson, C. T. J. (2008) Information Geometry - Near Randomness and Near Independence, Lecture Notes in Mathematics # 1953, Springer ISBN 978-3-540-69391-8
- Th. Friedrich (1991) "Die Fisher-Information und symplektische Strukturen", Mathematische Nachrichten 153: 273-296. | <urn:uuid:5305c0c1-8be9-42b4-a71f-57d2ca4b3ba0> | 3.203125 | 3,600 | Knowledge Article | Science & Tech. | 35.346686 | 95,581,150 |
Only the microbes located above the water's surface contribute to the development of hydrogen-sulfide-rich caves, suggests an international team of researchers. Since 2004, researchers have been studying the Frasassi cave system, an actively developing limestone cave system located 1500 feet underground in central Italy.
Limestone caves can form when solid limestone dissolves after coming in contact with certain types of acids. The resulting void is the cave system.
"We knew from previous research that microbes do play a role in cave development," said Jennifer Macalady, associate professor of geosciences, Penn State and co-author of a paper published today (Sept. 2) in Chemical Geology. "What we were trying to assess was the extent of that contribution, which would help us understand how caves all over the world, as well as on other worlds, form."
In hydrogen-sulfide-rich caves, microbes "eat" the hydrogen sulfide through a process known as aerobic respiration, Macalady said. The byproduct of this process is the creation of sulfuric acid, which has the potential to dissolve limestone and contribute to cave growth.
"The main goal of our study was to investigate what happened to hydrogen sulfide in the cave, because when the microbes use hydrogen sulfide for energy, this, along with oxygen, leads to the production of sulfuric acid," said Macalady.
The researchers measured oxygen levels and the amount of chemicals degassing -- changing from liquid to gas state -- throughout several parts of the cave system. The Frasassi system has cave pathways that formed 10,000 to 100,000 years ago as well as currently actively forming cave pathways, allowing the researchers to compare their measurements and identify the factors contributing to active development.
"What we found is that in certain conditions, the hydrogen sulfide in the water escapes as a gas into the air above the water instead of being 'eaten' by microbes below the water surface," said Macalady. "As a result, the underwater microbes only partially burned hydrogen sulfide. Instead of creating a byproduct of sulfuric acid, they created pure sulfur as a byproduct, which is not corrosive to limestone."
In contrast, the microbes above the water's surface completely "ate" the hydrogen sulfide. This process results in the creation of sulfuric acid, which dissolves limestone and contributes to cave growth.
Macalady says that the results would apply to all limestone caves that are rich in hydrogen sulfide, which includes more well-known caves such as Carlsbad Caverns and Lechuguilla Cave in New Mexico and Kap-Kutan Cave in Turkmenistan.
Co-authors on the findings include Daniel Jones, former Penn State graduate student now at the University of Minnesota; Lubos Polerecky, Max Planck Institute for Marine Microbiology and Utrecht University; Sandro Galdenzi; and Brian Dempsey, Penn State Department of Civil and Environmental Engineering.
The National Science Foundation, NASA Astrobiology Institute and the Max-Planck Society funded this work.
A'ndrea Elyse Messer | EurekAlert!
| <urn:uuid:9e85bd2e-23f4-4616-9ae8-c92c41396571> | 3.984375 | 1,215 | Content Listing | Science & Tech. | 33.215907 | 95,581,157 |
Climate Change. And its damaging effects. Mississippi Delta coastal wetlands. No ice… devastating to the Polar Bear. Sea Level Rise @ 20 meters. Something to think about! Skinnerized Test Questions.
And it’s damaging effects
1. Why doesn’t glacier melt effect the rise in sea level and land mass ice, like Antarctica, does?
a) Glaciers are compiled of far less ice than Antarctica.
b) Glaciers displace a volume of water that has a weight equal to that of the glacier.
c) The surface area of Antarctica is greater allowing faster melting than glaciers.
d) Glaciers displace a volume of water that has a greater weight than that of the glacier.
2. What effect does sea level rise have on the wildlife community?
a) No effect. Wildlife can adapt to many situations or migrate if needed.
b) Devastating effect. Drastic change in wildlife could damage the entire ecosystem.
c) No effect. Natural selection will sort it all out.
d) Devastating effect. Decreased wildlife means decreased car accidents that rid the world of bad drivers. | <urn:uuid:6ca62503-e73f-4f94-858b-47a242db45ed> | 3.46875 | 307 | Truncated | Science & Tech. | 57.061538 | 95,581,160 |
The Physics and Mathematics of the Second Law of Thermodynamics
by Elliott H. Lieb, Jakob Yngvason
Publisher: arXiv 1999
Number of pages: 101
The essential postulates of classical thermodynamics are formulated, from which the second law is deduced as the principle of increase of entropy in irreversible adiabatic processes that take one equilibrium state to another. Temperature is derived from entropy, but at the start not even the concept of 'hotness' is assumed.
Home page url
Download or read it online for free here:
by Howard DeVoe
Thermodynamics and Chemistry is designed primarily as a textbook for a one-semester course in classical chemical thermodynamics at the graduate or undergraduate level. It can also serve as a supplementary text and thermodynamics reference source.
by Hans Kroha - University of Bonn
Contents: Introduction and overview; Thermodynamics; Foundations of statistical physics; Ideal systems: some examples; Systems of identical particles; General formulation of statistical mechanics; Interacting systems in thermodyn. equilibrium.
by Max Planck - Longmans, Green
The classic by the Nobel Laureate is still recognized as one of the best introductions to thermodynamics. A model of conciseness and clarity, it covers fundamental facts and definitions, first and second fundamental principles of thermodynamics, etc.
by Irey, Ansari, Pohl - The University of Texas at Austin
The Microscopic Second Law: Equilibrium - A Microscopic Understanding; Entropy, Equilibrium and the Second Law. Applied Microscopic Thermodynamics: Microscopic Calculation of Perfect Gas Properties; Gases with Low-Mass Particles; Transport Processes. | <urn:uuid:82612cc8-f4d9-4a19-a595-90c4c2905419> | 3.078125 | 356 | Content Listing | Science & Tech. | 6.787536 | 95,581,177 |
Capturing cell growth in 3-D
AIM Biotech's microfluidics device (shown here) has an array of culturing sections, each with three chambers: a middle chamber for hydrogel and any cell type, and two side channels for culturing additional cell types.
Replicating how cancer and other cells interact in the body is somewhat difficult in the lab. Biologists generally culture one cell type in plastic plates, which doesn’t represent the dynamic cell interactions within living organisms.
Now MIT spinout AIM Biotech has developed a microfluidics device — based on years of research — that lets researchers co-culture multiple cell types in a 3-D hydrogel environment that mimics natural tissue.
Among other things, the device can help researchers better study biological processes, such as cancer metastasis, and more accurately capture how cancer cells react to chemotherapy agents, says AIM Biotech co-founder Roger Kamm, the Cecil H. Green Distinguished Professor in MIT’s departments of mechanical engineering and biological engineering.
“If you want realistic models of these processes, you have to go to a 3-D matrix, with multiple cell types … to see cell-to-cell contact and let cells signal to each other,” Kamm says. “None of those processes can be reproduced realistically in the current cell-culture methods.”
Designed originally for Kamm’s lab, the new commercial device is a plastic chip with three chambers: a middle chamber for hydrogel and any cell type, such as cancer cells or endothelial cells (which line blood vessels), and two side channels for culturing additional cell types. The hydrogel chamber has openings along each side, so cells can interact with each other, as they would in the body. Cancer drugs or other therapeutics can then be added to better monitor how cells respond in a patient.
Lab-fabricated devices have been used for various applications described in more than 40 research publications to date, including studies of cancer and stem cell research, neuroscience, and the circulatory system. This month, AIM Biotech will begin deploying the commercial devices to 47 research groups in 13 countries for user feedback.
Other systems for 3-D cell culturing involve filling deep dishes with hydrogels. Because of the distance these dishes must be kept from the microscope, Kamm says, it’s difficult to capture high-resolution images. AIM Biotech’s devices, on the other hand, he says, can be put directly under the microscope like a traditional plate, which is beneficial for imaging.
“Everything here happens within about 200 microns of the cover slip, so you can get really good high-resolution, real-time images and movies,” Kamm says.
Lab to world
In 2005 at MIT, Kamm’s lab created a prototype of the microfluidics device to better study angiogenesis — the forming of new blood vessels. But there was a major issue: The hydrogel in the middle chamber would spread into the side channels before solidifying, which disturbed the cell cultures.
As a solution, the researchers lined the hydrogel chamber with minute posts. When injected, the hydrogel seeps out to the posts, but surface tension keeps it from leaking into the side channels, while still allowing the cells to enter. “That’s the key,” Kamm says. “When you put liquid into a small space, surface tension drives where it goes, so we decided to use surface tension to our advantage.”
Soon, Kamm was using the device in his lab: In a 2011 study, researchers in his group discovered that breast cancer cells can break free from tumors and travel against flows normally present inside the tissue; in a 2012 study, they found that macrophages — a type of white blood cells — were key in helping tumor cells break through blood vessels.
And in a 2013 study, Kamm was able to capture high-resolution videos of how the cells escape through minute holes in endothelial walls and travel through the body. “People try to do this in vivo, but you can’t possibly get the kind of resolution you can within a microfluidic system,” Kamm says.
Researchers worldwide began taking notice of the device, which led to several collaborations with researchers locally and in Singapore: The device’s development had been funded, in part, by the Singapore-MIT Alliance for Research and Technology (SMART).
“It became apparent that, if there’s this much interest in these systems and that much need for them, we should set up a company to develop the technology and market it,” Kamm says.
After securing seed funding from Draper Laboratory, the National Institutes of Health, and SMART, Kamm brought the idea for the device to Innovation Teams (i-Teams), where MIT students from across disciplines flesh out strategies for turning lab technologies into commercial products. Among other things, this experience helped Kamm home in on the product’s target market.
“At the time, [I was] trying to decide whether to go for researchers, go directly to pharmaceutical industry, or something that is useful in the clinic,” Kamm says. “One of the i-Teams’ recommendations was to develop systems for researchers. It reinforced what we were heading toward, but it was nice to get that confirmation.”
AIM Biotech launched in Singapore in 2012, under current CEO Kuan Chee Mun, who Kamm met through SMART.
A major application for the device, Kamm says, is studying cancer metastasis — as demonstrated with his own work — to develop better treatments.
In the body, cells break loose from a tumor and migrate through tissue into the blood system, where they get stuck in the small blood vessels of a distant organ or adhere to vessel walls. Then they can escape from inside the vessel to form another tumor. AIM Biotech’s microfluidics device produces a similar microenvironment: When endothelial cells are seeded into the side channels or the central gel region, they form a 3-D network of vessels in the hydrogel. Tumor cells can be introduced, flowing naturally or getting stuck in the vessels.
Kamm says this environment could be useful in testing cancer drugs, as well as anti-angiogenesis compounds that prevent the development of blood vessels, effectively killing tumors by cutting off their blood supply. While many such treatments have shown limited success, “there’s a lot of interest in screening for new ones,” Kamm says.
In the future, Kamm adds, AIM Biotech may offer to more accurately screen cancer drugs for pharmaceutical companies. In fact, he says, AIM Biotech recently discovered that its devices revealed discrepancies in some clinically tested therapeutics.
In a study published in Integrative Biology, MIT researchers used Kamm's microfluidics technology to screen several drugs that aim to prevent tumors from breaking up and dispersing throughout the body. Results indicated that the level of drugs needed was often two orders of magnitude higher than predictions based on traditional assays. “So there’s no way to effectively predict, from the 2-D assays, what the efficacy of a particular drug was,” Kamm says.
If pharmaceutical companies were to winnow potential drugs from, say, 1,000 to 100 for testing, Kamm says, “We could test those drugs out in a more realistic setting.”
The above post is reprinted from materials provided by MIT NEWS | <urn:uuid:d3dc861f-e2b2-4eb2-947b-c42531b91f63> | 3.015625 | 1,590 | News Article | Science & Tech. | 38.764846 | 95,581,184 |
The uncertainty of science: In trying to explain the relatively new mystery of fast radio bursts (FRB), of which only about 20 have been detected and of which very little is known, scientists are intrigued by a gamma ray burst (GRB) that apparently occurred at the same time and place of one FRB.
Seeing the FRB event at a different wavelength would normally help astronomers better understand the FRB. The problem is that this particular GRB only makes the mystery of FRBs more baffling.
One puzzle is that the two signals portray different pictures of the underlying source, which seems to be as much as 10 billion light years (3.2 gigaparsecs) away. Whereas the radio burst lasted just a few milliseconds, the γ-ray signal lasted between two and six minutes, and it released much more energy in total than the radio burst. “We’ve pumped up the energy budget more than a billion times,” says study co-author Derek Fox, an astrophysicist at Penn State.
This has big implications for the FRB’s origin. One leading theory suggests that FRBs are flares from distant magnetars — neutron stars with enormous magnetic fields that could generate short, energetic blasts of energy, and do so repeatedly, as at least one FRB is known to do. Although magnetars are thought to produce γ-rays, they would not emit such high energy and over such a long time, says Fox. “This is a severe challenge for magnetar models,” he says. | <urn:uuid:ac3e23d0-7f36-4d7f-97bb-1fa2af64d1fd> | 3.609375 | 379 | Truncated | Science & Tech. | 47.387522 | 95,581,189 |
Ways of using config files
Of course, those are only examples - I'm sure you will be able to think of dozens of other uses for config files - you are limited only by your imagination ;-)
The most obvious use of config files is to store values in them across sessions:
current_player = "Player1"
name = "Krzesimir"
score = 34500
key_up = 63
key_down = 65
key_left = 64
key_right = 66
key_fire = 32
name = "Genowefa Ludomira Pigwa"
score = 5678
key_up = 45
key_down = 47
key_left = 46
key_right = 43
key_fire = 44
You might be storing your application settings:
// real life example from my engine Sculpture Clay 2
application_name = "Renderer test"
main_log_name = "ren_log(htm)"
memory_mgr_log_name = "ren_mem_mgr(htm)"
gfx_log_name = "ren_gfx_log(htm)"
name = main
window_size = 800, 600
window_bpp = 32
window_fullscreen = false
caps_min_zbuffer_value = 0.0
caps_max_zbuffer_value = 600.0
caps_surface_type = sw
cameras = main
name = "main"
position = -50.0, -50.0
clipping_rect = 0.0, 0.0, 0.5, 1.0
If you think that I'm the only one using them in this way... think again. Here's an excerpt from the SDL_Config.dev file (.dev files are used by Dev-C++ to store project settings) viewed in Notepad:
FileDescription=Developed using the Dev-C++ IDE
Instead of using XML files, groups in config files can be "fake nested", i.e.:
// some entries describing robot's general attributes, ie.
name = "Blashak"
type = "Hunter Killer"
[Robot / Appearance]
// other entries that describe how it looks, ie.
model = "robot.md3"
sound = "robot.ogg"
[Robot / AI]
aggresive = true
passive = no
objective = "kill everything that moves"
The separator used in this example is /, but of course the nesting is fake: there's no real nesting, and groups aren't connected in SDL_Config in any way. It's up to you to know this structure and interpret it correctly after parsing takes place. You can use other separators (., -, -> etc.) as long as they don't mislead the parser.
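SDL_Config itself is a C library, so the following is only an illustrative Python sketch (using the standard-library configparser and the robot example above) of how an application might interpret such fake-nested group names after parsing:
import configparser
# Parse an INI-style config and interpret "fake nested" group names by
# splitting on the separator the application has chosen (here "/").
config = configparser.ConfigParser()
config.read_string("""
[Robot]
name = "Blashak"
type = "Hunter Killer"
[Robot / Appearance]
model = "robot.md3"
sound = "robot.ogg"
[Robot / AI]
aggresive = true
objective = "kill everything that moves"
""")
for section in config.sections():
    # "Robot / AI" becomes the path ["Robot", "AI"]; the nesting is purely a
    # naming convention, so the application decides what the path means.
    path = [part.strip() for part in section.split("/")]
    for key, value in config[section].items():
        print(path, key, "=", value)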
Posted by Koshmaar on December 31 2005 16:06:32
Copyright © Hubert "Koshmaar" Rutkowski 2005 | <urn:uuid:ea79f8ed-ee2c-45b8-9729-fb3e8d092259> | 2.703125 | 638 | Personal Blog | Software Dev. | 59.029023 | 95,581,190 |
A new study finds that the world can emit even less greenhouse gases than previously estimated in order to limit climate change to less than 2°C.
In a comprehensive new study published in the journal Nature Climate Change, researchers propose a limit to future greenhouse gas emissions—or carbon budget—of 590-1240 billion tons of carbon dioxide from 2015 onwards, as the most appropriate estimate for keeping warming to below 2°C, a temperature target which aims to avoid the most dangerous impacts of climate change.
The study finds that the available budget is on the low end of the spectrum compared to previous estimates—which ranged from 590 to 2390 billion tons of carbon dioxide for the same time period—lending further urgency to the need to address climate change.
“In order to have a reasonable chance of keeping global warming below 2°C, we can only emit a certain amount of carbon dioxide, ever. That’s our carbon budget,” says IIASA researcher Joeri Rogelj, who led the study. “This has been known for about a decade and the physics behind this concept are well-understood, but many different factors can lead to carbon budgets that are either slightly smaller or slightly larger. We wanted to understand these differences, and provide clarity on the issue for policymakers and the public.”
“This study shows that in some cases we have been overestimating the available budget by 50 to more than 200%. At the high end, this is a difference of more than 1000 billion tons of carbon dioxide,” says Rogelj.
Estimates for a carbon budget consistent with the 2°C target have varied widely. The new study provides a comprehensive analysis of these differences. The researchers identified that the variation in carbon budgets stemmed from differences in scenarios and methods, and the inclusion of other human activities that can affect the climate, for example the release of other greenhouse gases like methane. Previous research suggested that the varying contribution of other human activities would be the main reason for carbon budget variations, but surprisingly, the study now finds that methodological differences contribute at least as much.
The proposed budget accounts for warming of all human activities and greenhouse gases and is based on detailed scenarios that simulate low-carbon futures.
Rogelj says, “We now better understand the carbon budget for keeping global warming below 2 degrees. This carbon budget is very important to know because it defines how much carbon dioxide we are allowed to release into the atmosphere, ever. We have figured out that this budget is at the low end of what studies indicated before, and if we don’t start reducing our emissions immediately, we will blow it in a few decades.”
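As a rough back-of-the-envelope illustration of the "few decades" remark (not a calculation from the paper; the emissions rate of about 40 Gt CO2 per year is an assumed figure for current global emissions):
# Divide the remaining carbon budget by an assumed constant annual emissions
# rate to see roughly how long the budget would last.
BUDGET_RANGE_GT_CO2 = (590, 1240)   # remaining budget from 2015 onwards (study)
ASSUMED_ANNUAL_EMISSIONS_GT_CO2 = 40  # illustrative assumption, not from the study
for budget in BUDGET_RANGE_GT_CO2:
    years = budget / ASSUMED_ANNUAL_EMISSIONS_GT_CO2
    print(f"A budget of {budget} Gt CO2 lasts roughly {years:.0f} years at constant emissions.")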
Rogelj J, Schaeffer M, Friedlingstein P, Gillett NP, van Vuuren D, Riahi K, Allen M, Knutti R, (2016). Differences between carbon budget estimates unraveled. Nature Climate Change March 2016. doi:10.1038/NCLIMATE2868
MSc Katherine Leitzell | idw - Informationsdienst Wissenschaft
| <urn:uuid:36c1780e-271d-41a1-a508-6bc14690e27a> | 4.21875 | 1,285 | Content Listing | Science & Tech. | 40.808373 | 95,581,195 |
Rice University researchers have determined the most climate-friendly use of natural gas is replacement of existing coal-fired power plants and fuel-oil furnaces rather than burning it in cars and buses. Credit: Tanyia Johnson/Rice University
Rice University researchers have determined a more effective way to use natural gas to reduce climate-warming emissions would be in the replacement of existing coal-fired power plants and fuel-oil furnaces rather than burning it in cars and buses. The Rice study by environmental engineer Daniel Cohan and alumnus Shayak Sengupta compared the net greenhouse gas-emission savings that could be realized by replacing other fuels in vehicles, furnaces and power plants.
They found that gas-fired power plants achieved the greatest reduction—more than 50 percent—in net emissions when replacing old coal-fired power plants. The use of compressed natural gas in vehicles yielded the least benefit, essentially matching the emissions of modern gasoline or diesel engines.
The study, funded in part by Rice’s Energy and Environment Initiative, appears in the International Journal of Global Warming.
The researchers’ calculations considered emissions throughout the fuel cycle, from production and transport of each fuel through combustion and including leaks of methane. They made comparisons within the five sectors they studied—power plants, furnaces, exports for electricity generation overseas, buses and cars—and across sectors to see which use of natural gas pays the greatest dividend for reducing greenhouse gas emissions.
”This research is aimed at a world where natural gas has become more abundant,” Cohan said. ”Some people vilify natural gas, focusing on leaks, and others make it out to be a clean, green energy source.
”With this work, we try to shift the conversation to say it’s not just a question of how you get natural gas. How you use it is just as important to its impact on climate.”
Though focused on end uses of natural gas, the paper also shows how results are affected by highly uncertain leak rates from natural gas production and delivery.
”It’s crucially important to be smart about how we get natural gas,” Cohan said. ”Let’s get it in ways that reduce methane leaks. We show in this paper how big a difference reducing leaks can make in reducing emissions.”
The ”well-to-wire” research showed new natural-gas power plants are responsible for less than half as much greenhouse gas per kilowatt hour of electricity generated as existing coal power plants.
Meanwhile, a ”well-to-wheel” analysis of transportation fuel showed natural-gas-burning and gasoline-burning vehicles were nearly identical in emissions impact. Calculations were modeled on Honda Civics, which are sold in both configurations as well as a gas-electric hybrid. In the latter case, the hybrid had a 27 percent lower emissions impact than the natural-gas version, due to its better fuel economy. A comparison of natural-gas-burning versus diesel-burning buses gave the emissions edge to diesel, which accounted for 12 percent fewer emissions, within the range of uncertainty.
”Natural-gas vehicles yield the least savings, and require building out infrastructure that doesn’t transition into new renewable options,” Cohan said.
The researchers found replacing old oil-burning furnaces for residential heating with new natural-gas-burning models yielded emissions savings of up to 48 percent.
Finally, taking a more worldly view, they estimated replacing coal-fired power plants in Japan with liquid-natural-gas plants that burn fuel imported from the United States would also be a net-plus for the environment, with a 15 percent emissions savings. Japan is the world’s leading importer of liquefied natural gas.
Explore further: By itself, abundant shale gas unlikely to alter climate projections
More information: International Journal of Global Warming, www.inderscience.com/info/inarticle.php?artid=74960
Provided by:Rice University | <urn:uuid:e96711cc-14d6-44b1-8b3d-fd5132bb49c6> | 3.59375 | 834 | News Article | Science & Tech. | 32.084085 | 95,581,201 |
The term "climate change" refers to a rise in the average global temperature due to an increase in the concentration of atmospheric greenhouse gases, resulting in numerous climatic shifts and impacts around the globe. The term “global warming” is also used, although it is important to recognize that the increase in temperature is a global average and individual locations will experience varied temperature and precipitation changes.
Several gases, such as carbon dioxide (CO2) and methane (CH4), exist naturally in the atmosphere and contribute to the warming of the Earth's surface by trapping heat from the sun, in what is known as the greenhouse effect. When the proportion of such greenhouse gases in the atmosphere is stable, the effect is beneficial, making surface temperatures warmer and alleviating temperature swings. However, human activity is increasing the concentration of greenhouse gases in the atmosphere, which is already causing average temperatures to rise. Burning fossil fuels in vehicles and power plants emits potent greenhouse gases, including carbon dioxide, methane and nitrous oxide (N2O), among others. In addition, clearing forested land through burning or logging trees also releases CO2 into the atmosphere and contributes to the greenhouse effect.
There is broad scientific consensus that human activities, most notably the burning of fossil fuels for energy, have led to the rapid buildup in atmospheric greenhouse gases. The Intergovernmental Panel on Climate Change (IPCC) stated in 2007 that CO2 levels in the atmosphere rose from a pre-industrial level of 280 parts per million (ppm) to 379 ppm in 2005. This coincided with an increase in the average global temperature of 0.74°C / 1.33°F between 1906 and 2005. In 2013, the U.S. National Oceanic and Atmospheric Administration announced that CO2 levels had hit 400 ppm. That same year, the IPCC concluded, "It is extremely likely [95 percent confidence] that human influence has been the dominant cause of the observed warming since the mid-20th century." In 2012, the World Meteorological Organization (WMO) released its analysis that shows that the decade spanning 2001-2010 was the warmest ever recorded in all continents of the globe.
Impacts from this warming already have been observed and include increases in global average air and ocean temperatures, accelerated melting of snow and sea ice, widespread retreat of glaciers, rising global average sea level, and extensive changes in weather patterns, including changes in precipitation levels and increased storm intensity. And as atmospheric concentrations of CO2 rise, oceans absorb more carbon, causing ocean waters to become acidic. The acidic conditions make it more difficult for calcifying organisms such as corals and crustaceans to form hard shells or skeletons, ultimately affecting the entire marine food chain. Climate change is the greatest environmental threat confronting the world.
There is an urgent need to address climate change, as climate scientists now say that we have a decade or less to begin reducing greenhouse gas emissions to avoid catastrophic changes to the planet. Energy efficiency and renewable energy are the fastest, safest, cleanest, and most cost-effective means of reducing our use of fossil fuels within this diminishing window of time. We can improve the energy efficiency of our buildings, vehicles, communities, and energy generation sector. We can transition to clean renewable energy resources that do not emit new greenhouse gases into the atmosphere. And responsible land management practices can help sequester more carbon in plants and soils. Putting a price on carbon emissions would help promote all of these solutions, by harnessing the power of market forces (private companies and nonprofits would invest in the most cost-effective ways to diminish greenhouse gas emissions).
Ramping up the production and deployment of renewable energy and energy efficiency technologies does not only provide environmental benefits. It will also provide jobs and economic growth while helping to alleviate our nation’s dependence on energy imports, which sends hundreds of billions of dollars out of the country every year.
Although we have already committed ourselves to a certain amount of warming, quick implementation of these solutions will mitigate the impacts of climate change in coming decades.
Learn more about Climate Change
- Carbon Removal Strategies: A Broad Overview
- Senate Farm Bill Passes Out of Committee with Nod to Climate Change Adaptation
- FERC Failing to Consider Climate Impacts in Gas Pipeline Approvals
- Low-Carbon Biofuels, Bioproducts Crucial to Two Degree Scenario
- Poland’s Transition to a Cleaner Economy
- Cape Town’s Water Crisis: How Did It Happen?
- Hidden in Plain Sight? Why Resilient Buildings Are Critical U.S. Infrastructure
- U.S. Leads in Greenhouse Gas Reductions, but Some States Are Falling Behind
- Warning Signs: New Report Outlines the Impacts of Proposed Budget Cuts to Climate and Environmental Research
- Rep. Lee Introduces Women and Climate Change Act of 2018 | <urn:uuid:9adc010d-cbb8-41d9-bff2-59d283f5be87> | 4.15625 | 990 | Knowledge Article | Science & Tech. | 25.922619 | 95,581,212 |
Hey guys, I have been trying this and I can't do it!
The curves y=sinx and y=cosx and the x axis enclose a region.
a) Find the area of the shaded region
b)find the volume generated when this region is rotated about the x axis.
Okay so a) If you drew this out, you would see that you have limits of 1/2pi and 0 for the graph of cosx
Therefore integrating cosx with limits 1/2pi and 0 I get 1.
Now this is where I am stuck... sinx and cosx intersect where tanx = 1, which is at 1/4pi.
Now if I took the limits to be 0 and 1/4pi for integrating sinx, I get (2 - sqrt2)/2.
Now do I add 1 or take away 1?
This is really annoying me!
I don't know what to take as limits when integrating sinx, as I want to get rid of the bit above... if anyone can draw a graph that would help a lot for the people trying to solve the problem
OCR C4 Integration of trig question (rep+)
- Thread Starter
- 06-04-2011 19:23
- 06-04-2011 21:55
Isn't the region you want: the area under sinx from x=0 to x=pi/4, plus the area under cosx from x=pi/4 to x=pi/2?
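For completeness, here is that calculation written out (part a, plus the corresponding volume integral for part b; worth checking the arithmetic yourself):
\[
A=\int_{0}^{\pi/4}\sin x\,dx+\int_{\pi/4}^{\pi/2}\cos x\,dx
 =\Bigl[-\cos x\Bigr]_{0}^{\pi/4}+\Bigl[\sin x\Bigr]_{\pi/4}^{\pi/2}
 =\Bigl(1-\tfrac{\sqrt{2}}{2}\Bigr)+\Bigl(1-\tfrac{\sqrt{2}}{2}\Bigr)
 =2-\sqrt{2},
\]
\[
V=\pi\int_{0}^{\pi/4}\sin^{2}x\,dx+\pi\int_{\pi/4}^{\pi/2}\cos^{2}x\,dx
 =\pi\Bigl(\tfrac{\pi}{4}-\tfrac{1}{2}\Bigr).
\]
| <urn:uuid:8e83b062-12d8-449a-aa13-8924b0398f71> | 2.890625 | 326 | Comment Section | Science & Tech. | 96.924449 | 95,581,226 |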
This is a post inspired by an article about bone plasticity and fracture toughness published in the most recent issue of Physics Today. No, I will not teach you how to break someone's arm or leg (although if I knew how, it would probably be cool to teach, wouldn't it?). What I want to talk about is the concept of fracture toughness and the mechanisms that increase it in materials.
For a crack to be created you need energy. Everyone is familiar with mechanical energy, you can push, bend or throw a cup and it will break. You could also use thermal energy, heat (or cool) certain objects and they will crack. But it is not at the slightest push that an object will break, there is a minimum of energy that you need to create a crack.
A crack is nothing more than breaking chemical bonds and creating more surfaces (think about it, if you break a plate now you have at least 2!! =P). There is an energy associated with keeping a bond and there is an energy associated with an exposed surface. When it is energetically favorable (that is, breaking the bond has less energy than keeping the bond) the object will crack.
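To make that energy balance concrete (this is an addition here, the classical Griffith criterion, rather than something spelled out in the Physics Today article): for a through crack of half-length a in a brittle plate under a tensile stress, the crack grows once the elastic energy released exceeds the energy of the two new surfaces, which gives a critical stress of roughly
\[
\sigma_c=\sqrt{\frac{2E\gamma_s}{\pi a}},
\]
where E is the Young's modulus and \(\gamma_s\) the surface energy per unit area.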
So, that's what a crack is, and although learning about how cracks originate is an interesting topic, it is not the most interesting part of fracture theory for me. What I find really cool is how a material deals with a crack once it is formed. Materials possess a quantity called fracture toughness, which is a measurement of how hard (or easy) it is for a crack to propagate through. The most critical part of a crack is the tip, because that is where the highest stresses (or forces if you prefer) concentrate. Just as you need energy to create a crack, you need energy to grow it. However, just as some materials have mechanisms to prevent cracking (for example, a clothes hanger bends significantly before you can break it) some materials have mechanisms to prevent the cracks from growing (in other words, they increase the fracture toughness). Some of these mechanisms can even be artificially engineered, isn't that cool?
Ok, so what are these mechanisms that increase fracture toughness? One of them is crack deflection. The idea here is to change the direction of crack propagation to eliminate (or at least minimize) the force applied at the crack tip. Crack deflection occurs very often in porous materials and at the interfaces in composite materials. Bone being a porous matrix does exhibit crack deflection.
Another way of increasing fracture toughness is by creating microcracks around the crack tip. In this case the effect is double, first when a force is applied to a material containing both a crack and microcracks, the force is distributed among all of them and therefore can reduce the stress concentrated at the main crack tip and inhibit crack growth. The other way in which microcracks help is by expanding the region around the crack and "closing" its size. Radiographs of damaged bone can show multiple microcracks, although in some cases the microcracks are way too small to be seen by eye.
Lastly, crack bridging can also hinder crack growth. Bridging is, by design, the main fracture toughness mechanism in most fiber-reinforced materials, but monolithic ceramics (e.g. alumina) exhibit grain-bridging. In fiber-reinforced materials, the idea is that the matrix cracks more easily than the fibers, and thus when force is applied the crack will form but the fiber across the crack will remain intact and support the load. Grain-bridging is a much more subtle idea and it consists of grains in the crack rubbing against each other and carrying the applied force instead of the crack.
Any fracture toughness mechanism will show up in what engineers call an R-curve. If this curve rises with crack extension then you can be certain the material possesses some kind of fracture toughness mechanism. Determining which one, on the other hand, is not always that easy. Now to come back to the Physics Today article, it turns out bone has all three of them:deflection, microcracks and bridging. I am not surprised that bone is really hard to break now.
| <urn:uuid:250e4d56-2917-44e1-aae9-12711f6ad4e9> | 3.296875 | 868 | Personal Blog | Science & Tech. | 55.453246 | 95,581,243 |
A chromophore is the part of a molecule responsible for its color. The color that is seen by our eyes is the one not absorbed within a certain wavelength spectrum of visible light. The chromophore is a region in the molecule where the energy difference between two separate molecular orbitals falls within the range of the visible spectrum. Visible light that hits the chromophore can thus be absorbed by exciting an electron from its ground state into an excited state.
Conjugated pi-bond system chromophores
In the conjugated chromophores, the electrons jump between energy levels that are extended pi orbitals, created by a series of alternating single and double bonds, often in aromatic systems. Common examples include retinal (used in the eye to detect light), various food colorings, fabric dyes (azo compounds), pH indicators, lycopene, β-carotene, and anthocyanins. Various factors in a chromophore's structure go into determining at what wavelength region in a spectrum the chromophore will absorb. Lengthening or extending a conjugated system with more unsaturated (multiple) bonds in a molecule will tend to shift absorption to longer wavelengths. Woodward-Fieser rules can be used to approximate ultraviolet-visible maximum absorption wavelength in organic compounds with conjugated pi-bond systems.
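As an illustration of how such increment rules are applied in practice, here is a small Python sketch for a conjugated diene. The base value and increments are commonly quoted textbook numbers and the molecule is hypothetical; treat them as assumptions and check a reference table before relying on them.
# Woodward-Fieser-style estimate of the UV-Vis absorption maximum for a
# conjugated diene: a base value plus an increment for each structural
# feature. The numbers are assumed textbook values (in nm), for illustration.
BASE_HETEROANNULAR_DIENE_NM = 214
INCREMENT_NM = {
    "extra_conjugated_double_bond": 30,
    "alkyl_substituent_or_ring_residue": 5,
    "exocyclic_double_bond": 5,
}
def estimate_lambda_max(base_nm, features):
    """Sum the base absorption and the increments for the given features."""
    return base_nm + sum(INCREMENT_NM[name] * count for name, count in features.items())
# Hypothetical diene with three ring-residue substituents and one exocyclic C=C.
features = {"alkyl_substituent_or_ring_residue": 3, "exocyclic_double_bond": 1}
print(estimate_lambda_max(BASE_HETEROANNULAR_DIENE_NM, features), "nm (approximate)")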
Some of these are metal complex chromophores, which contain a metal in a coordination complex with ligands. Examples are chlorophyll, which is used by plants for photosynthesis and hemoglobin, the oxygen transporter in the blood of vertebrate animals. In these two examples, a metal is complexed at the center of a tetrapyrrole macrocycle ring: the metal being iron in the heme group (iron in a porphyrin ring) of hemoglobin, or magnesium complexed in a chlorin-type ring in the case of chlorophyll. The highly conjugated pi-bonding system of the macrocycle ring absorbs visible light. The nature of the central metal can also influence the absorption spectrum of the metal-macrocycle complex or properties such as excited state lifetime. The tetrapyrrole moiety in organic compounds which is not macrocyclic but still has a conjugated pi-bond system still acts as a chromophore. Examples of such compounds include bilirubin and urobilin, which exhibit a yellow color.
An auxochrome is a functional group of atoms attached to the chromophore which modifies the ability of the chromophore to absorb light, altering the wavelength or intensity of the absorption.
Halochromism occurs when a substance changes color as the pH changes. This is a property of pH indicators, whose molecular structure changes upon certain changes in the surrounding pH. This change in structure affects a chromophore in the pH indicator molecule. For example, phenolphthalein is a pH indicator whose structure changes as pH changes as shown in the following table:
| Conditions | acidic or near-neutral | basic |
| Color name | colorless | pink to fuchsia |
In a pH range of about 0-8, the molecule has three aromatic rings all bonded to a tetrahedral sp3 hybridized carbon atom in the middle which does not make the π-bonding in the aromatic rings conjugate. Because of their limited extent, the aromatic rings only absorb light in the ultraviolet region, and so the compound appears colorless in the 0-8 pH range. However, as the pH increases beyond 8.2, that central carbon becomes part of a double bond becoming sp2 hybridized and leaving a p orbital to overlap with the π-bonding in the rings. This makes the three rings conjugate together to form an extended chromophore absorbing longer wavelength visible light to show a fuchsia color. At pH ranges outside 0-12, other molecular structure changes result in other color changes; see Phenolphthalein for details.
- Visual phototransduction
- Woodward's rules
- Biological pigment
- IUPAC Gold Book Chromophore
- Gouterman, M. (1978) Optical spectra and electronic structure of porphyrins and related rings. In Dolphin, D. (ed.) The porphyrins. Academic Press, New York. Volume III, Part A, pp 1-165
- Scheer, H. (2006) An overview of chlorophylls and bacteriochlorophylls: biochemistry, biophysics, functions and applications. Advances in Photosynthesis and Respiration, vol 25, pp 1-26
- Shapley, P. (2012) Absorbing light with organic molecules. http://butane.chem.uiuc.edu/pshapley/GenChem2/B2/1.html
- UV-Visible Absorption Spectra
- Causes of Color: physical mechanisms by which color is generated.
- High Speed Nano-Sized Electronics May be Possible with Chromophores - Azonano.com | <urn:uuid:4766ca3e-9d0c-4da6-b5c0-6fa89c164107> | 4.0625 | 1,052 | Knowledge Article | Science & Tech. | 34.700269 | 95,581,249 |
Solar cells and photodetectors could soon be made from new types of materials based on semiconductor quantum dots, thanks to new insights based on ultrafast measurements capturing real-time photoconversion processes.
"Our latest ultrafast electro-optical spectroscopy studies provide unprecedented insights into the photophysics of quantum dots," said lead researcher Victor Klimov, a physicist specializing in semiconductor nanocrystals at Los Alamos National Laboratory, "and this new information helps perfect the materials' properties for applications in practical photoconversion devices. Our new experimental technique allows us to follow a chain of events launched by femtosecond laser pulses and pin down processes responsible for efficiency losses during transformation of incident light into electrical current."
Photoconversion is a process wherein the energy of a photon, or quantum of light, is converted into other forms of energy, for example, chemical or electrical. Semiconductor quantum dots are chemically synthesized crystalline nanoparticles that have been studied for more than three decades in the context of various photoconversion schemes including photovoltaics (generation of photo-electricity) and photo-catalysis (generation of "solar fuels"). The appeal of quantum dots comes from the unmatched tunability of their physical properties, which can be adjusted by controlling the size, shape and composition of the dots.
At Los Alamos, the research connects to the institutional mission of solving national security challenges through scientific excellence, in this case focusing on novel physical principles for highly efficient photoconversion, charge manipulation in exploratory device structures and novel nanomaterials.
The interest in quantum dots as solar-cell materials has been motivated by their tunable optical spectra as well as interesting new physics such as high-efficiency carrier multiplication, that is, generation of multiple electron-hole pairs by single photons. This effect, discovered by Los Alamos researchers in 2004, resulted in the surge of activities in the area of quantum dot solar cells that quickly pushed the efficiencies of practical devices to more than 10 percent.
Further progress in this area has been by hindered by the challenge of understanding the mechanisms of electrical conductance in quantum dot solids and the processes that limit the charge transport distance. One specific and persistent challenge of great importance from the standpoint of photovoltaic (PV) applications, Klimov said, is understanding the reasons underlying a considerable loss in photovoltage compared to predicted theoretical limits—a problem with quantum dot solar cells known as a "photovoltage deficit." Los Alamos researchers at the Center for Advanced Solar Photophysics (CASP) helps answer some of the above questions.
By applying a combination of ultrafast optical and electrical techniques, the Los Alamos scientists have been able to resolve step-by-step a sequence of events involved in photoconversion in quantum dot films from generation of an exciton to electron-hole separation, dot-to-dot charge migration and finally recombination.
The high temporal resolution of these measurements (better than one billionth of a second) enabled the team to reveal the cause of a large drop of the electron energy, which results from very fast electron trapping by defect-related states. In the case of practical devices, this process would result in reduced photovoltage. The newly conducted studies establish the exact time scale of this problematic trapping process and suggest that a moderate (less than ten-fold) improvement in the electron mobility should allow for collecting photogenerated charge carriers prior to their relaxation into lower-energy states. This would produce a dramatic boost in the photovoltage and therefore increase the overall device efficiency.
Another interesting effect revealed by these studies is the influence of electron and hole "spins" on photoconductance. Usually spin properties of particles (they can be thought of as the rate and direction of particle rotation around its axis) are invoked in the case of interactions with a magnetic field. However, previously it was found that even a weak interaction between spins of an electron and a hole (so-called "spin-exchange" interaction) has a dramatic effect on light emission from the quantum dots.
The present measurements reveal that these interactions also affect the process of electron-hole separation between adjacent dots in quantum-dot solids. Specifically these studies suggest that future efforts on high-sensitivity quantum-dot photodetectors should take into consideration the effect of exchange blockade, which otherwise might inhibit low-temperature photoconductance.
Quantum dot materials have been at the heart of research at the Los Alamos Center for Advanced Solar Photophysics, which has investigated their application to solar-energy technologies such as luminescent sunlight collectors for solar windows and low-cost PV cells processed from quantum dot solutions.
More information: Andrew F. Fidler et al, Electron–hole exchange blockade and memory-less recombination in photoexcited films of colloidal quantum dots, Nature Physics (2017). DOI: 10.1038/nphys4073
R. D. Schaller et al. High Efficiency Carrier Multiplication in PbSe Nanocrystals: Implications for Solar Energy Conversion, Physical Review Letters (2004). DOI: 10.1103/PhysRevLett.92.186601 | <urn:uuid:52ab8e14-22d6-49ac-b30b-b0b3a66a4e1b> | 3.21875 | 1,100 | News Article | Science & Tech. | 12.269901 | 95,581,261 |
- predicting red tides,
- West Florida Shelf ecology
Previous hypotheses had suggested that upwelled intrusions of nutrient-rich Gulf of Mexico slope water onto the West Florida Shelf (WFS) led to formation of red tides of Karenia brevis. However, coupled biophysical models of (1) wind- and buoyancy-driven circulation, (2) three phytoplankton groups (diatoms, K. brevis, and microflagellates), (3) these slope water supplies of nitrate and silicate, and (4) selective grazing stress by copepods and protozoans found that diatoms won in one 1998 case of no light limitation by colored dissolved organic matter (CDOM). The diatoms lost to K. brevis during another CDOM case of the models. In the real world, field data confirmed that diatoms were indeed the dominant phytoplankton after massive upwelling in 1998, when only a small red tide of K. brevis was observed. Over a 7-month period of the CDOM-free scenario the simulated total primary production of the phytoplankton community was ~1.8 g C m⁻² d⁻¹ along the 40-m isobath of the northern WFS, with the largest accumulation of biomass on the Florida Middle Ground (FMG). Despite such photosynthesis, these models of the WFS yielded a net source of CO2 to the atmosphere during spring and summer and suggested a small sink in the fall. With diatom losses of 90% of their daily carbon fixation to herbivores, the simulation supported earlier impressions of a short, diatom-based food web on the FMG, where organic carbon content of the surficial sediments is tenfold that of the surrounding seabeds. Farther south, the simulated near-bottom pools of ammonium were highest in summer, when silicon regeneration was minimal, leading to temporary Si limitation of the diatoms. Termination of these upwelled pulses of production by diatoms and nonsiliceous microflagellates mainly resulted from nitrate exhaustion in the model, however, mimicking most δ¹⁵N-PON observations in the field. Yet, the CDOM-free case of the models failed to replicate the observed small red tide in December 1998, tagged with the δ¹⁵N signature of nitrogen fixation. A large red tide of K. brevis did form in the CDOM-rich case, when estuarine supplies of CDOM favored the growth of the shade-adapted, ungrazed dinoflagellates. The usual formation of large harmful algal blooms of >1 µg chl L⁻¹ (10⁵ cells L⁻¹) in the southern part of the WFS, between Tampa Bay and Charlotte Harbor, must instead depend upon local aeolian and estuarine supplies of nutrients and CDOM sunscreen, not those from the shelf break. In the absence of slope water supplies, local upwelling instead focuses nitrate-poor inocula of co-occurring K. brevis and nitrogen fixers at coastal fronts for both aggregation and transfer of nutrients between these phytoplankton groups.
Journal of Geophysical Research - Oceans, v. 108, no. C6, article 3190.
Available at: http://works.bepress.com/john_walsh1/12/ | <urn:uuid:90d60259-deaf-48d7-9e5e-8f22e950d149> | 2.828125 | 723 | Academic Writing | Science & Tech. | 45.839233 | 95,581,262 |
SPM 2010 "Physics" Student Improvement Workshop. Speaker: Thong Kum Soon. A rolling ball gains kinetic energy, but without a frictional force it won't change direction. PAPER 3 (paper 3 spm1.pdf). Part A, Question 1: a diagram is given; extract the information from the diagram.
Graph of voltage versus current
√ title axes, units
0 √ 0
ALL THE BEST IN SPM | <urn:uuid:7a05f041-633f-4b95-bd8a-8e01f607abaf> | 2.609375 | 201 | Truncated | Science & Tech. | 54.726287 | 95,581,278 |
Scientists developed a method of detecting ionic mercury from water selectively and with high sensitivity by fabricating a gold nanogap structure coated with molecules which shows strong specific adsorption of ionic mercury.
Inspired by patents from the 1960s audio cassette recording industry, UvA chemists have developed a new Fischer-Tropsch catalyst. It can be used to make synthetic fuels from natural gas and biomass.
Small particles loaded with medicine could be a future weapon for cancer treatment. A recently-published study shows how nanoparticles can be formed to efficiently carry cancer drugs to tumor cells. And because the particles can be seen in MRI images, they are traceable.
NanoPhoSolar aims to develop a nanophosphor down-converting material that will be incorporated into coatings and polymer films for integration into new solar modules and the retrofit of existing solar modules.
Physicists at UC Santa Barbara are manipulating light on superconducting chips, and forging new pathways to building the quantum devices of the future - including super-fast and powerful quantum computers.
Illinois has a burgeoning research and commercial nanotechnology environment. The University of Illinois and Northwestern University, with its International Institute for Nanotechnology, have large and well-respected nanotechnology research programs. Currently, there are 29 companies in Illinois involved in nanotechnology-related business activities. In addition, there are 39 nanotechnology and nanoscience-related research and community organizations in Illinois.
A newly developed switchable mirror sheet uses new gasochromic switching that is completely different from conventional gasochromic switching methods. It can control the reflection of visible to near-infrared light at a switching speed about 20 times faster than that of conventional electrochromic switchable glass.
Scientists are pursuing an ambitious research project, known as 'Plasmaquo', aimed at developing a sensor that detects the molecules bacteria release to communicate with each other and, thus, at understanding their paths of communication.
Researchers at Macquarie University have been perfecting a technique that may help see nanodiamonds used in biomedical applications. PhD student Jana Say has been working on processing the raw diamonds so that they might be used as a tag for biological molecules. | <urn:uuid:1416f5a4-4732-46d9-a6c3-0a76054ec1b9> | 2.90625 | 454 | Content Listing | Science & Tech. | 11.57137 | 95,581,302 |
Bleaching intensifies in Hawaii, high ocean temperatures threaten Caribbean corals
As record ocean temperatures cause widespread coral bleaching across Hawaii, NOAA scientists confirm the same stressful conditions are expanding to the Caribbean and may last into the new year, prompting the declaration of the third global coral bleaching event ever on record.
Waters are warming in the Caribbean, threatening coral in Puerto Rico and the U.S. Virgin Islands, NOAA scientists said. Coral bleaching began in the Florida Keys and South Florida in August, but now scientists expect bleaching conditions there to diminish.
"The coral bleaching and disease, brought on by climate change and coupled with events like the current El Niño, are the largest and most pervasive threats to coral reefs around the world," said Mark Eakin, NOAA's Coral Reef Watch coordinator. "As a result, we are losing huge areas of coral across the U.S., as well as internationally. What really has us concerned is this event has been going on for more than a year and our preliminary model projections indicate it's likely to last well into 2016."
While corals can recover from mild bleaching, severe or long-term bleaching is often lethal. After corals die, reefs quickly degrade and the structures corals build erode. This provides less shoreline protection from storms and fewer habitats for fish and other marine life, including ecologically and economically important species.
This bleaching event, which began in the north Pacific in summer 2014 and expanded to the south Pacific and Indian oceans in 2015, is hitting U.S. coral reefs disproportionately hard. NOAA estimates that by the end of 2015, almost 95 percent of U.S. coral reefs will have been exposed to ocean conditions that can cause corals to bleach.
The biggest risk right now is to the Hawaiian Islands, where bleaching is intensifying and is expected to continue for at least another month. Areas at risk in the Caribbean in coming weeks include Haiti, the Dominican Republic and Puerto Rico, and from the U.S. Virgin Islands south into the Leeward and Windward islands.
The next concern is the further impact of the strong El Niño, which climate models indicate will cause bleaching in the Indian and southeastern Pacific Oceans after the new year. This may cause bleaching to spread globally again in 2016.
"We need to act locally and think globally to address these bleaching events. Locally produced threats to coral, such as pollution from the land and unsustainable fishing practices, stress the health of corals and decrease the likelihood that corals can either resist bleaching, or recover from it," said Jennifer Koss, NOAA Coral Reef Conservation Program acting program manager. "To solve the long-term, global problem, however, we need to better understand how to reduce the unnatural carbon dioxide levels that are the major driver of the warming."
This announcement stems from the latest NOAA Coral Reef Watch satellite coral bleaching monitoring products, and was confirmed through reports from partner organizations with divers working on affected reefs, especially the XL Catlin Seaview Survey and ReefCheck. NOAA Coral Reef Watch's outlook, which forecasts the potential for coral bleaching worldwide several months in the future, predicted this global event in July 2015.
The current high ocean temperatures in Hawaii come on the heels of bleaching in the Main Hawaiian Islands in 2014 (only the second bleaching occurrence in the region's history) and devastating bleaching and coral death in parts of the remote and well-protected Papahānaumokuākea Marine National Monument in the Northwestern Hawaiian Islands.
"Last year's bleaching at Lisianski Atoll was the worst our scientists have seen," said Randy Kosaki, NOAA's deputy superintendent for the monument. "Almost one and a half square miles of reef bleached last year and are now completely dead."
Coral bleaching occurs when corals are exposed to stressful environmental conditions such as high temperature. Corals expel the symbiotic algae living in their tissues, causing corals to turn white or pale. Without the algae, the coral loses its major source of food and is more susceptible to disease.
The first global bleaching event was in 1998, during a strong El Niño that was followed by an equally strong La Niña. A second one occurred in 2010.
Satellite data from NOAA's Coral Reef Watch program provides current reef environmental conditions to quickly identify areas at risk for coral bleaching, while its climate model-based outlooks provide managers with information on potential bleaching months in advance.
The outlooks were developed jointly by NOAA's Satellite and Information Service and the National Centers for Environmental Prediction through funding from the Coral Reef Conservation Program and the Climate Program Office.
For more information on coral bleaching and these products, visit: http://www.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.
Keeley Belva | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
16.07.2018 | Earth Sciences | <urn:uuid:36981662-8e09-47c2-a1e8-601ae2999ac7> | 3.390625 | 1,698 | Content Listing | Science & Tech. | 43.46801 | 95,581,309 |
Prior to that, they had to depend on more rudimentary and imprecise methods, such as counting the number of rings on a cross-section of tree trunk.
Though radiocarbon dating is startlingly accurate for the most part, it has a few sizable flaws.
Radiocarbon dating, which is used to calculate the age of certain organic materials, has been found to be unreliable, and sometimes wildly so - a discovery that could upset previous studies on climate change, scientists from China and Germany said in a new paper.
Their recent analysis of sediment from the largest freshwater lake in northeast China showed that its carbon clock stopped ticking as early as 30,000 years ago, or nearly half as long as was hitherto thought.
The technology uses a series of mathematical calculations—the most recognizable of which is known as half-life—to estimate how long ago the organism stopped taking in the isotope.
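As a minimal sketch of the half-life arithmetic, here is the calculation in Python, using the commonly quoted carbon-14 half-life of about 5,730 years and assuming, for simplicity, a constant initial carbon-14 level (the very assumption questioned below):

import math

HALF_LIFE_C14 = 5730.0  # years, commonly quoted value for carbon-14

def remaining_fraction(age_years, half_life=HALF_LIFE_C14):
    # Fraction of the original carbon-14 left after age_years of decay.
    return 0.5 ** (age_years / half_life)

def age_from_fraction(fraction, half_life=HALF_LIFE_C14):
    # Age implied by a measured fraction, assuming a constant starting level.
    return -half_life * math.log2(fraction)

print(remaining_fraction(5730))        # 0.5 after one half-life
print(round(age_from_fraction(0.25)))  # 11460 years, i.e. two half-lives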
Unfortunately, the amount of Carbon-14 in the atmosphere has not been steady throughout history. | <urn:uuid:9f033896-1758-4c22-94ac-bcc271e31d62> | 3.734375 | 202 | Truncated | Science & Tech. | 32.795227 | 95,581,319 |
shoulders of giants
Accelerate discovery by standing on the shoulders of giants
"If I have seen further it is by standing on the shoulders of giants." As he did in so many other ways, Isaac Newton summarized the sequential nature of scientific inquiry in a phrase that is both elegant and concise. Newton was giving credit to the discoveries that preceded his own, and he expressed the incremental, interconnected paths of experiment-based discovery.
What facilitated this interconnectedness was the circulation of scientific information, which was directly catalyzed by the innovation of Gutenberg's printing press around 1450. The momentum built by one discovery inspiring the next remained critical in the centuries after Newton, evolving into a virtuous cycle that helped fuel the explosion of scientific achievements in the 20th century.
Innovation in the modern information era
The increase in knowledge dissemination introduced an unexpected challenge to scientific progress—information overload. The first academic journals, including Philosophical Transactions in England and Journal des sçavans in France, were introduced in 1665 to help scientists stay up-to-date and better understand the latest scientific discoveries in their field. However, as scientists continue to innovate and inspire new discoveries, the challenge to stay informed has grown exponentially. Within the last two years alone, humanity has generated 90% of all data ever created.
To continue to make important, novel contributions to science—to speed scientific discovery—it is essential that scientists understand and appreciate what has come before. For example, three researchers—Yves Chauvin, Robert H. Grubbs, and Richard R. Schrock—won the Nobel Prize in Chemistry in 2005 for the "development of the metathesis method in organic synthesis." About their discovery, the Nobel Prize committee wrote:
Chauvin's theory set the scene for chemists to hunt for and design catalysts that carry out the exchange scheme effectively. After examining a number of different candidate metals Richard Schrock made the initial breakthrough by discovering that catalysts containing the transition metal elements molybdenum and tungsten performed the task. However, their tendency to react unfavourably meant that the reaction did not always fully go to plan. Robert Grubbs went one stage further by developing more effective catalysts centred around another transition metal, ruthenium, which reacted less with other molecules and was much more stable.
This story reveals the sequential nature of their work, culminating in a Nobel Prize-worthy accomplishment. Each new finding inspired another, all connecting through a complex lineage of references in the scientific literature. Looking at Grubbs' impressive list of nearly 1,200 journal articles to date, we see more than 49,000 references to these articles in later publications. In fact, a single journal article published by Grubbs in 1999 has been referenced nearly 2,800 times thereafter by other researchers in the field!
Human curation as the key to tomorrow's transformative technologies
With organizations generating 2.5 quintillion bytes of data each day, discoverability of specific, relevant information becomes all the more important. While information solutions like SciFindern help today's scientists find exactly the information they need, when they need it, artificial intelligence (AI) technologies will soon become part of a more comprehensive strategy to manage the growing influx of data.
As these new AI technologies are developed, the importance of high-quality, human-curated scientific data shouldn't be overlooked, as it is the cornerstone to successful implementation of AI in bioscience, computing and everything in between. Starting with intellectually enriched data sources, AI can deliver insights across a wide range of information types, helping scientists uncover and understand what was once buried beneath a mountain of literature. By funneling "data lakes" into more manageable "reservoirs" of related or complementary information, AI built upon high-quality, human curated data will help scientists maximize the opportunity to build upon the knowledge and innovation of their predecessors, and to guarantee that their own discoveries are accessible for the next generation of innovators.
At CAS, we know just how important it is for scientists to have ready access to existing research to speed the discovery process. This understanding has guided our service to the scientific community for more than 110 years. While today's complex and ever-growing data environment presents unique challenges, our scientists will continue to read the literature to extract, organize, and connect the valuable details within—all to allow you to see further by standing on the shoulders of giants.
Whose shoulders will you stand on for your next discovery? Learn how CAS can help.
CAS, a division of the American Chemical Society, is dedicated to improving lives through transforming power of chemistry. Professionals around the world rely on CAS to fuel innovation. With over 100 years of experience, no one knows how to better customize solutions for your organization. | <urn:uuid:971f2e71-f061-49b0-9284-046cce825c98> | 2.953125 | 974 | News (Org.) | Science & Tech. | 21.85434 | 95,581,325 |
The superhero “Spider-Man” shoots webbing from his wrists to swing between buildings. But do you know the true strength of spider silk? It is a highly elastic and heat-resistant high-performance material that is stronger than steel. A Japanese venture company has made the mass production of this magical fiber possible for the first time in the world. This new material, which had been sought by researchers around the world, is no longer a fantasy and is set to be used in a multitude of industrial products.
Spider-Man shoots webbing at a movie premier. June 2012, Osaka City. ©Kyodo News
Stronger than steel
The dress made with synthetic spider silk (courtesy of Spiber Inc.)
In the movie, Spider-Man uses spider webs to stop a train. Real-life spider silk also has amazing potential for strength. It is calculated to be approximately five times stronger than steel; if you were to make a web from spider silk strands 1 cm in diameter, it would have the strength to catch a jumbo jet as if it were a dragonfly or butterfly. Spider silk is also as elastic as nylon, six times lighter than steel of the same strength, and able to withstand temperatures of 300 degrees Celsius; it truly is a magical fiber.
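A rough back-of-the-envelope check of the strength-to-weight claim, in Python. The tensile strengths and densities below are ballpark literature values assumed here for dragline silk and a high-strength steel, not figures from Spiber, so treat the result as an order-of-magnitude illustration only.

# Approximate strength-to-weight comparison (assumed ballpark values).
silk_strength_gpa, silk_density = 1.3, 1.3    # dragline silk: ~1.3 GPa, ~1.3 g/cm^3
steel_strength_gpa, steel_density = 1.5, 7.8  # high-strength steel: ~1.5 GPa, ~7.8 g/cm^3

silk_specific = silk_strength_gpa / silk_density
steel_specific = steel_strength_gpa / steel_density
print(round(silk_specific / steel_specific, 1))  # ~5, i.e. roughly five times stronger per unit weight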
Recently, a Japanese venture company has succeeded, for the first time in the world, in developing technology for mass-producing a synthetic spider silk with the same qualities as natural spider silk. Based on the pronunciation of the Japanese word for spider web, "KUMONOSU," the synthetic spider silk was named "QMONOS." A blue dress made of fabric woven with QMONOS, revealed as a prototype to the public in May 2013, glows mystically, reflecting the light as if it were a futuristic image seen in a movie.
Synthetic spider silk (courtesy of Spiber Inc.)
Synthetic spider silk when viewed under an electron microscope (courtesy of Spiber Inc.)
Research into the mass production of synthetic spider silk has been going on around the world for decades. Some researchers thought to farm large quantities of spiders for their silk; however, due to spider’s highly territorial and cannibalistic nature, this proved impossible. In Japan, research into altering the DNA of silk worms to produce spider silk was attempted, but the worms did not produce large quantities of spider silk. | <urn:uuid:0a45a4ae-06ea-49f4-b7b0-927a54121f41> | 3.234375 | 492 | Knowledge Article | Science & Tech. | 48.988739 | 95,581,338 |
Learn Sass programming with the best free online courses and tutorials for beginners and advanced learners aggregated from Udemy, Edx, Skillshare, Coursera, Udacity, Treehouse, YouTube and other MOOCs .
Sass is a stylesheet language that extends CSS with features like variables, nested rules, mixins and functions, in a CSS-compatible syntax. In this course, you'll learn to use the powers of Sass to boost your front end workflow. The examples will teach you why you should use Sass in your projects. By the end, you will be writing more efficient CSS using code that is easy to read and maintain.Sass | <urn:uuid:7dc98645-7582-4c69-99ee-d9070bdd15a5> | 2.8125 | 133 | Product Page | Software Dev. | 52.312198 | 95,581,340 |
why does sound not travel in space
Can you hear sounds in space? You've probably heard that there's no sound in space. Now, yes, space is a virtual vacuum. However, sound does exist in the form of electromagnetic vibrations that pulsate at similar wavelengths. What scientists did was design special instruments that could record these electromagnetic vibrations and transfer them into sounds that our ears could hear. What you're about to hear is actual sound in space; nothing has been altered. It's a beautiful, yet haunting sound that music legend
would be jealous of. Make sure to check out all 12, especially the sounds of the sun! Can you hear sounds in space? I don't know about you, but I got chills on a few of those. Make sure to give this a share on Facebook before you go, and drop us a comment below.

We know that there is sound in the solar system: places where there's a medium through which sound waves can be transmitted, such as an atmosphere or an ocean. But what about empty space? You may have been told definitively that space is silent, maybe by your teacher or through the marketing of the movie Alien: 'In space, no one can hear you scream.' The common explanation for this is that space is a vacuum and so there's no medium for sound to travel through.
But that isn't exactly right. Space is never completely empty; there are a few particles and sound waves floating around. In fact, sound waves in the space around the Earth are very important to our continued technological existence. They also sound pretty weird!

Space sounds

Fundamentally, sound waves are vibrations that travel through the medium that they're in. In most cases, this is a series of compressions, where molecules are closer together, and rarefactions, where they are further apart, caused by the molecules themselves moving backward and forward. Here on the ground there is quite a lot of air around: each cubic centimetre of it contains 300,000,000,000,000,000,000 molecules. In contrast, in interplanetary space on average you'll find just five protons (the particles that, together with neutrons, make up atomic nuclei) in the same volume, which is almost completely empty in comparison, but not quite.
These kinds of interactions can give rise to the plasma-equivalent of sound waves: magnetosonic waves. These too are pressure waves, but with some added magnetism. We canât hear these magnetosonic waves in space. That is because the pressure variations are so small: a -100dB sound-pressure level (the human hearing threshold is about +60dB). In fact, youâd need an eardrum comparable to the size of the Earth to hear them. Their ultra-low frequencies are also way below what we would be able to hear. So if we canât hear them, why do we care about them? Well, in Earthâs magnetosphereâthe protective magnetic bubble we live in that largely protects us from various âthese magnetosonic waves can transfer energy around. For example, they can give it to the radiation belts, donuts of radiation surrounding the Earth, creating 'killer electrons' at extreme energies that can damage our satellites if weâre not careful. This is why I study these wavesâif we can predict when, where and why these waves occur in the space around the Earth, then we could forecast when our satellites might be in trouble and put them into a safe mode.
One of the ways we listen out for these sounds is using geostationary satellites that primarily monitor the weather. As well as all those instruments that can tell you whether to pack an umbrella, they have 'magnetic microphones' that can detect these waves. The problem for scientists is separating out all the different types of sound that are present in space. Fortunately, it turns out the human auditory system is pretty good at this sort of thing, some have even called it the best pattern recognition software that we know of. For this very reason, Iâm asking for you to lend me your ears. By amplifying these space sounds and squashing them in time so a whole year becomes just six minutes, they can be made audible. The audio has been, where you can provide comments on what you think various bits of it sound like. There is so much going on in these sounds, but crowdsourcing comments on them will help identify different types of wave events and ultimately help with the scientific research. So have a listen to some pretty odd sounds from space, because only you can tell me what you hear.  is a Space Plasma Physicist atÂ
why do you see lightning before thunder
why do we hear thunder after we see lightning
why is there no sound in a vacuum
why our voices sound different on recordings
why there is no sound in space
why do you hear thunder after you see lightning | <urn:uuid:e559dc6b-f2ec-4e85-aa84-1a72a53f3f4b> | 3.234375 | 1,119 | Personal Blog | Science & Tech. | 53.687489 | 95,581,418 |
Python supports operations on bits.
Here are some of Python's bitwise expression operators at work performing bitwise shift and Boolean operations on integers:
x = 1           # 1 decimal is 0001 in bits
y = x << 2      # Shift left 2 bits: 0100
print(y)
print(x | 2)    # Bitwise OR (either bit = 1): 0011
print(x & 1)    # Bitwise AND (both bits = 1): 0001
In the first expression, a binary 1 (in base 2, 0001) is shifted left two slots to create a binary 4 (0100). | <urn:uuid:e61be46d-9fb5-4aa7-86fe-45fb5efaaea5> | 3.28125 | 138 | Documentation | Software Dev. | 86.935652 | 95,581,435 |
In Chapter 2, we considered situations that could be treated only by use of Fourier’s Law of heat conduction. In this chapter, we combine Fourier’s Law with the principle of conservation of energy to obtain the heat conduction equation. We then apply the equation to situations involving sources and sinks of energy.
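For reference, one common constant-property form of the equation obtained this way is shown below; the chapter's own notation may differ. Here k is the thermal conductivity, rho the density, c the specific heat, q''' a volumetric heat source, and alpha = k / (rho c) the thermal diffusivity.

\[
  \rho c \,\frac{\partial T}{\partial t} \;=\; k \nabla^{2} T + q''' ,
  \qquad
  \frac{\partial T}{\partial t} \;=\; \alpha \nabla^{2} T + \frac{q'''}{\rho c},
  \qquad
  \alpha = \frac{k}{\rho c}.
\]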
Keywords: Heat Transfer; Heat Source; Heat Conduction Equation; Convective Heat Transfer Coefficient; Total Heat Transfer
| <urn:uuid:a6a250ef-552f-4dc1-abce-cbbdf91e8bf4> | 2.515625 | 303 | Truncated | Science & Tech. | 78.051892 | 95,581,440 |